How to get the Amazon product price using its name - Python

I'm sorry if this is considered a duplicate, but I've tried all the Python modules that can talk to the Amazon API, and sadly all of them seem to require the product ID to get the exact price, while what I need is the price from a product name.
Lastly, I've tried an extension of Bottlenose called python-amazon-simple-product-api, except that it has the same problem: how do I get only the price from the name of a product?
here is what i've tried:
product = api.search(Keyword = "playstation", SearchIndex='All')
for i, produ in enumerate(product):
print "{0}. '{1}'".format(i, produ.title)
(this is the same result as when using produ.price_and_currency, which in the example file is used with an ID)
and then it gives me this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "build\bdist.win-amd64\egg\amazon\api.py", line 174, in __iter__
File "build\bdist.win-amd64\egg\amazon\api.py", line 189, in iterate_pages
File "build\bdist.win-amd64\egg\amazon\api.py", line 211, in _query amazon.api.SearchException: Amazon Search Error: 'AWS.MinimumParameterRequirement', 'Your request should have atleast 1 of the following parameters: 'Keywords','Title','Power','BrowseNode','Artist','Author','Actor','Director','AudienceRati g','Manufacturer','MusicLabel','Composer','Publisher','Brand','Conductor','Orchestra','Tex Stream','Cuisine','City','Neighborhood'.'
Edit: after correcting Keyword to Keywords I get a very long response time (an infinite loop, it seems; I tried it several times)! It's not like it just returns the whole XML, either; and when using only bottlenose, I only get tags that don't have a Price or anything like it...
<ItemLink>
<Description>Technical Details</Description>
<URL>http://www.amazon.com/*****</URL>
</ItemLink>
Update 2: it seems that Amazon will return ALL results, so how do I limit this to only the first bucket (it returns results in groups of 10)?

Without having experience with the Amazon API: it's a matter of performing the search properly and intelligently. Think about it carefully, and read through
http://docs.amazonwebservices.com/AWSECommerceService/2011-08-01/DG/ItemSearch.html
so that you don't miss any important search feature.
The response contains anywhere between zero and a zillion items, depending on how intelligent your search query was. In any case, the items in the response identify themselves via their ASIN, the product ID. Example: <ASIN>B00021HBN6</ASIN>
After having collected ASINs via ItemSearch, you can perform an ItemLookup on these items in order to find further details, like the price.
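For illustration, here is a rough sketch of that two-step flow using bottlenose directly. It is untested; the placeholder credentials and the 'ItemIds'/'Offers' response groups are assumptions you may need to adjust for your account and locale:
import bottlenose
from xml.dom import minidom

amazon = bottlenose.Amazon('AWS_ACCESS_KEY', 'AWS_SECRET_KEY', 'ASSOCIATE_TAG')

# Step 1: ItemSearch by keywords, asking only for the item IDs (ASINs)
search_xml = amazon.ItemSearch(Keywords='playstation', SearchIndex='All',
                               ResponseGroup='ItemIds')
asins = [node.firstChild.data
         for node in minidom.parseString(search_xml).getElementsByTagName('ASIN')]

# Step 2: ItemLookup on a collected ASIN, asking for offer/price data
lookup_xml = amazon.ItemLookup(ItemId=asins[0], ResponseGroup='Offers')
prices = [node.firstChild.data
          for node in minidom.parseString(lookup_xml).getElementsByTagName('FormattedPrice')]
print(prices)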

Sorry for the delay, solved:
pagination is done using search_n:
test = api.search_n(10, Keywords='example name', SearchIndex='All') # this will return only 10 results

Related

How to download dataset from The Humanitarian Data Exchange (hdx api python)

I don't quite understand how I can download data from a dataset. I only download one file, though there are several of them. How can I solve this problem?
I am using the hdx api library. There is a small example in the documentation. A list is returned to me and I use the download method, but only the first file from the list is downloaded, not all of them.
My code
from hdx.hdx_configuration import Configuration
from hdx.data.dataset import Dataset
Configuration.create(hdx_site='prod', user_agent='A_Quick_Example', hdx_read_only=True)
dataset = Dataset.read_from_hdx('novel-coronavirus-2019-ncov-cases')
resources = dataset.get_resources()
print(resources)
url, path = resources[0].download()
print('Resource URL %s downloaded to %s' % (url, path))
I tried different methods, but only this one turned out to work. It seems there is some kind of error in the loop, but I do not understand how to solve it.
Result
Resource URL https://data.humdata.org/hxlproxy/api/data-preview.csv?url=https%3A%2F%2Fraw.githubusercontent.com%2FCSSEGISandData%2FCOVID-19%2Fmaster%2Fcsse_covid_19_data%2Fcsse_covid_19_time_series%2Ftime_series_covid19_confirmed_global.csv&filename=time_series_covid19_confirmed_global.csv downloaded to C:\Users\tred1\AppData\Local\Temp\time_series_covid19_confirmed_global.csv.CSV
I forgot to add that I get a list of strings containing a download url value. The problem is probably in the loop.
When I use a for-loop I get this:
for res in resources:
    print(res)
    res[0].download()
Traceback (most recent call last):
File "C:/Users/tred1/PycharmProjects/pythonProject2/HDXapi.py", line 31, in <module>
main()
File "C:/Users/tred1/PycharmProjects/pythonProject2/HDXapi.py", line 21, in main
res[0].download()
File "C:\Users\tred1\AppData\Local\Programs\Python\Python38\lib\collections\__init__.py", line 1010, in __getitem__
raise KeyError(key)
KeyError: 0
You can get the download link as follows:
dataset = Dataset.read_from_hdx('acled-conflict-data-for-africa-1997-lastyear')
lista_resources = dataset.get_resources()
dictio = lista_resources[1]
url = dictio['download_url']
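If the goal is to download every file rather than just the first, a small loop over the resource objects should work. This is a sketch built from the same calls already used in the question (download() with no arguments saves to a temporary folder, as seen in the original output):
from hdx.hdx_configuration import Configuration
from hdx.data.dataset import Dataset

Configuration.create(hdx_site='prod', user_agent='A_Quick_Example', hdx_read_only=True)
dataset = Dataset.read_from_hdx('novel-coronavirus-2019-ncov-cases')

# iterate over the Resource objects themselves -- res[0] treats a single
# resource as a dict lookup for the key 0, which is what raises KeyError: 0
for res in dataset.get_resources():
    url, path = res.download()
    print('Resource URL %s downloaded to %s' % (url, path))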

running a simple pdblp code to extract BBG data

I am currently logged on to my BBG Anywhere (web login) on my Mac. So my first question is whether I would still be able to extract data using tia (as I am not actually on my terminal).
import pdblp
con = pdblp.BCon(debug=True, port=8194, timeout=5000)
con.start()
I got this error
pdblp.pdblp:WARNING:Message Received:
SessionStartupFailure = {
    reason = {
        source = "Session"
        category = "IO_ERROR"
        errorCode = 9
        description = "Connection failed"
    }
}
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Users/prasadkamath/anaconda2/envs/Pk36/lib/python3.6/site-packages/pdblp/pdblp.py", line 147, in start
raise ConnectionError('Could not start blpapi.Session')
ConnectionError: Could not start blpapi.Session
I am assuming that I need to be on the terminal to be able to extract data, but wanted to confirm that.
This is a duplicate of this issue here on SO. It is not an issue with pdblp per se, but with blpapi not finding a connection. You mention that you are logged in via the web, which only allows you to use the terminal (or Excel add-in) within the browser, but not outside of it, since this way of accessing Bloomberg lacks a data feed and an API. More details and alternatives can be found here.
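For reference, once you are on a machine with a local Bloomberg terminal running (and the blpapi SDK installed), the same connection code should succeed, and a request looks roughly like the sketch below; the ticker, field and dates are placeholders, not values from the question:
import pdblp

con = pdblp.BCon(debug=False, port=8194, timeout=5000)
con.start()  # only succeeds when a local Bloomberg session provides the data feed

# bdh is pdblp's wrapper around a historical data request
df = con.bdh('SPY US Equity', 'PX_LAST', '20200101', '20200131')
print(df.head())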

googlemaps api works in python terminal, but not in script

I'm trying to write a simple web-scraper in python that uses the googlemaps api to find local gas stations, but for some reason, I can't get it to execute by itself. When I step through it, or use the python prompt, the code works, but when I try to run the code by itself, I get an INVALID_REQUEST exception.
Here's my code:
You need an api key to run this, but you can make one here: https://developers.google.com/places/web-service/get-api-key Running the application costs money, but google gives you a $200 a month credit, so you shouldn't need to worry about the cost. That said, I'm not pasting mine here for everyone to take.
import googlemaps
from googlemaps import places
import time

gmaps = googlemaps.Client(key='AnAPIKeyHere')

def find_stations() -> list:
    print("Collecting gas station data.")
    stations = []
    print('Collecting result data')
    time.sleep(2)
    search_result = places.places_nearby(gmaps, '42.144735,-93.534612', 160935, keyword='gas station')
    iter = 1
    while True:
        stations += search_result['results']
        if 'next_page_token' not in search_result:
            break
        else:
            iter += 1
            print("Collecting page {}".format(iter), end='\r')
            token = search_result['next_page_token']
            print("The token is: {}".format(token))
            search_result = places.places_nearby(gmaps, '42.144735,-93.534612', 160935, keyword='gas station',
                                                 page_token=token)
            time.sleep(1)
    return stations

if __name__ == "__main__":
    for s in find_stations():
        print(s)
There are a lot of pauses in there; I added them because I thought maybe I was requesting the pages too fast, but they don't seem to make a difference. I also tried to move the gmaps declaration into the function call, but that did not make a difference either.
Here is the traceback error that I'm getting:
Collecting gas station data.
Collecting result data
The token is: CrQCIgEAAF6QgiE83iz0sERAFSCJ2pAta_xnIID4DWdDIBcOnp89mZ_UWEkDbSRT5eRmGdj4fQ4kqnQAPzEdvsuzMhhAZzfJMbd6yH97aBvU6V1GRL-fVbS5d4yo-fAEcA-9WABaNneCzSp_JzHMdSa1qv7dKSn1d57ltnw_I9g2V6Lw0DHmGYATanhf9g8tbRT9qDbNNbmC3WSdr5nL0ZuPKB9xmx4Q5AISSYGy4gw_sqSsW7NyMPMCuKpZ0oOhl9bfN1nYnEwD_7SHegt1o7we2OBlYIRqGawcUHvxvabkYtCz9G0flxOckzNqNh3PD1jIBmr4xM1AwBvjxmDxbJudsw9evsXrzIqIoewYInh9sz-DbyGnb_N8f9TXN4xU9ljXve-Zz96YXWWQwh_yM8LGhd5elHMSEBUWS3IRS9S59Rd9deU7ZpQaFIYdprNd8Ysj-xbA9cKPkmhdI80D
Traceback (most recent call last):
File "/home/aaron/Workspace/projects/gas_webscraper/maps_test.py", line 32, in <module>
for s in find_stations():
File "/home/aaron/Workspace/projects/gas_webscraper/maps_test.py", line 25, in find_stations
page_token=token)
File "/home/aaron/anaconda3/envs/webscraping/lib/python3.7/site-packages/googlemaps/places.py", line 144, in places_nearby
rank_by=rank_by, type=type, page_token=page_token)
File "/home/aaron/anaconda3/envs/webscraping/lib/python3.7/site-packages/googlemaps/places.py", line 235, in _places
return client._request(url, params)
File "/home/aaron/anaconda3/envs/webscraping/lib/python3.7/site-packages/googlemaps/client.py", line 253, in _request
result = self._get_body(response)
File "/home/aaron/anaconda3/envs/webscraping/lib/python3.7/site-packages/googlemaps/client.py", line 282, in _get_body
raise googlemaps.exceptions.ApiError(api_status)
googlemaps.exceptions.ApiError: INVALID_REQUEST
I just started looking at this API today, so I'm pretty new to this and have struggled to find any real documentation on the Python client, so any help would be appreciated.
From the documentation:
There is a short delay between when a next_page_token is issued, and when it will become valid. Requesting the next page before it is available will return an INVALID_REQUEST response. Retrying the request with the same next_page_token will return the next page of results.
In other words, you need to wait a bit before requesting the next page. You could load the next results on user input, or just delay the further requests.
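One way to handle this (a sketch, not the only approach): sleep before reusing next_page_token and retry if the token is not valid yet. The 2-second wait and the three retries below are arbitrary choices, not values from the documentation:
import time
import googlemaps
from googlemaps import places, exceptions

gmaps = googlemaps.Client(key='AnAPIKeyHere')

def find_stations():
    location, radius = '42.144735,-93.534612', 160935
    stations = []
    search_result = places.places_nearby(gmaps, location, radius, keyword='gas station')
    while True:
        stations += search_result['results']
        token = search_result.get('next_page_token')
        if not token:
            return stations
        for attempt in range(3):
            time.sleep(2)  # give the token time to become valid before reusing it
            try:
                search_result = places.places_nearby(gmaps, location, radius,
                                                     keyword='gas station', page_token=token)
                break
            except exceptions.ApiError as err:
                # token not ready yet -> INVALID_REQUEST; wait and try again
                if err.status != 'INVALID_REQUEST' or attempt == 2:
                    raise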

IndexError: list index out of range on selective inputs to a python function's parameter

This is a program to get the first five results from Google. I have it as a function. I have combined code from different sources and tailored the program to work. Oddly, it works only sometimes. Sometimes I get this error:
Traceback (most recent call last):
File "google.py", line 36, in <module>
run(i)
File "google.py", line 21, in run
link='http'+split_content[1].split(':http',1)[1].split('%',1)[0]
IndexError: list index out of range
It has worked every time on simple words. However, it fails on some names (which I unfortunately cannot reveal due to a privacy policy) in 'xxxxx xxxxx' format, yet when I test 'Lebron James', it works.
The while loop is from another person's program. Somewhere something is going wrong.
#!/usr/bin/python3
# this is the logic in order to get the first 5
# queries off of google.com
import requests
import sys
import webbrowser

def run(word_to_search):
    request = requests.get('http://google.com/search?q='+word_to_search)
    content = request.content.decode('UTF-8','replace')
    links = []
    while '<h3 class="r">' in content:
        content = content.split('<h3 class="r">', 1)[1]
        split_content = content.split('</h3>', 1)
        link = 'http'+split_content[1].split(':http',1)[1].split('%',1)[0]
        links.append(link)
        content = split_content[1]
    file = open("google.txt","a")
    file.write("\n\n"+word_to_search+"\n")
    for link in links[:5]:  # max number of links 5
        file.write(link + "\n")

list = ['Curly','Moe','Shorty','Laser Beams']
for i in list:
    run(i)
The successful program outputs this:
Curly
http://www.dictionary.com/browse/curly
http://www.dictionary.com/browse/curly
https://en.wikipedia.org/wiki/Curly_Howard
http://www.thesaurus.com/browse/curly
https://www.naturallycurly.com/
Moe
http://www.moe.org/
http://www.moe.org/tour_date/
https://www.facebook.com/moe.org/
https://en.wikipedia.org/wiki/Moe_(slang)
https://en.wikipedia.org/wiki/Moe_(band)
Shorty
http://www.urbandictionary.com/define.php
http://www.urbandictionary.com/define.php
https://en.wiktionary.org/wiki/shorty
http://www.mdaniels.com/shorty/
http://shortyawards.com/
Laser Beams
http://www.lazerbrody.typepad.com/
http://www.lazerbrody.typepad.com/
https://www.rp-photonics.com/laser_beams.html
https://en.wikipedia.org/wiki/Laser
http://spaceshipsandlaserbeams.com/
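For what it's worth, the traceback points at the line that takes [1] from split(':http', 1); an IndexError there means ':http' was not found in that chunk of HTML for some queries. Below is a hedged sketch of a defensive version of the loop, which skips such result blocks instead of crashing (this works around the symptom; it does not explain why Google's markup differs for those names):
import requests

def run(word_to_search):
    request = requests.get('http://google.com/search?q=' + word_to_search)
    content = request.content.decode('UTF-8', 'replace')
    links = []
    while '<h3 class="r">' in content:
        content = content.split('<h3 class="r">', 1)[1]
        split_content = content.split('</h3>', 1)
        pieces = split_content[1].split(':http', 1)
        if len(pieces) > 1:  # ':http' found, safe to index
            links.append('http' + pieces[1].split('%', 1)[0])
        # otherwise this result block has no ':http' marker; skip it
        content = split_content[1]
    return links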

Skype4Py - How to successfully add a contact?

I'm working to implement a few fun features with a SkypeBot, and one of the features I'd like to implement is the ability to add a new contact. While reviewing the Skype4Py docs, I note this method:
http://skype4py.sourceforge.net/doc/html/Skype4Py.client.Client-class.html#OpenAddContactDialog
I am using the following code to try to access this:
sky = Skype4Py.Skype()
client = Skype4Py.client.Client(sky)
sky.Attach()
client.OpenAddContactDialog("test")
However, when trying to utilize almost anything from Skype4Py.client.Client I get a timeout with the traceback:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "build/bdist.macosx-10.8-intel/egg/Skype4Py/client.py", line 164, in OpenDialog
self._Skype._DoCommand('OPEN %s' % tounicode(' '.join(params)))
File "build/bdist.macosx-10.8-intel/egg/Skype4Py/skype.py", line 276, in _DoCommand
self.SendCommand(command)
File "build/bdist.macosx-10.8-intel/egg/Skype4Py/skype.py", line 778, in SendCommand
self._Api.send_command(Command)
File "build/bdist.macosx-10.8-intel/egg/Skype4Py/api/darwin.py", line 395, in send_command
raise SkypeAPIError('Skype command timeout')
SkypeAPIError: Skype command timeout
I receive this timeout error on every method I try to access within the client class (e.g. OpenAuthorizationDialog, OpenCallHistoryTab, OpenContactsTab). Am I accessing this method incorrectly, or is the method perhaps not supported for newer versions of Skype? Any help with getting this working, or with a method that successfully adds contacts via Skype4Py, will be very much appreciated.
sky = Skype4Py.Skype()
sky.Attach()
client = Skype4Py.client.Client(sky)
client.OpenAddContactDialog("Torxed")
Trying a few things out, but I'm 99% sure that's the order in which you have to do things.
Otherwise you will time out, because the attachment needs time to attach before you start executing things against the API.
Also take a look at:
http://skype4py.sourceforge.net/doc/html/Skype4Py.user.User-class.html#SetBuddyStatusPendingAuthorization
http://skype4py.sourceforge.net/doc/html/Skype4Py.skype.SkypeEvents-class.html#UserAuthorizationRequestReceived
Also, you might be going about this the wrong way.
Adding a Skype user directly is not how Skype works. The flow is:
search
request add with a message
wait for authorization
So, try one of the following:
(one is an asynchronous way of searching and adding users as they pop up, the other will bunch your results)
http://skype4py.sourceforge.net/doc/html/Skype4Py.skype.Skype-class.html#AsyncSearchUsers
http://skype4py.sourceforge.net/doc/html/Skype4Py.skype.Skype-class.html#SearchForUsers
So try:
sky = Skype4Py.Skype()
sky.Attach()
print sky.SearchForUsers('Torxed')
That should get you a handle to add me, for instance.
Within the object that you receive, there will be an option to add me.
@Torxed's answer was right, but here's more information in case anyone wasn't able to make it the last mile.
I was able to add a contact in this way:
import Skype4Py
sky = Skype4Py.Skype()
sky.Attach()
requestMessage = "Please accept my request!"
searchResults = sky.SearchForUsers('echo123')
firstResult = searchResults[0]
firstResult.SetBuddyStatusPendingAuthorization(requestMessage)
Do be careful, though, as this merely adds the FIRST result returned by the search. If you have the exact username, it should be fine.
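To reduce the risk of adding the wrong person when the search returns several users, one option (a sketch using the same Skype4Py calls shown above; the exact-handle check is an assumption about what you want) is to filter by handle before sending the request:
import Skype4Py

sky = Skype4Py.Skype()
sky.Attach()

wanted = 'echo123'
for user in sky.SearchForUsers(wanted):
    if user.Handle == wanted:  # only add the exact handle we searched for
        user.SetBuddyStatusPendingAuthorization("Please accept my request!")
        break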
