I'm new to the Google Maps API and I'm not sure why this code isn't working. I have a list of 80 landmarks in a CSV file, and I'm trying to retrieve the latitude and longitude coordinates for each one.
I believe something may be wrong with how I'm connecting to the API. From my understanding, I should have 2,500 free requests per day, but I'm receiving a timeout error that makes me think I've already reached my limit.
Here is a snapshot of my dashboard
Code:
import pandas as pd
import googlemaps

# IMPORT DATASET
df = pd.read_csv('landmarks.csv')

# GOOGLE MAPS API KEY
gmaps_key = googlemaps.Client(key='MY KEY')

df['LAT'] = None
df['LON'] = None

for i in range(len(df)):
    try:
        geocode_result = gmaps_key.geocode(df.iat[i, 0])
        # the Geocoding API returns longitude under 'lng', not 'lon'
        lat = geocode_result[0]['geometry']['location']['lat']
        lon = geocode_result[0]['geometry']['location']['lng']
        df.iat[i, df.columns.get_loc('LAT')] = lat
        df.iat[i, df.columns.get_loc('LON')] = lon
    except (IndexError, KeyError):
        # no result for this landmark; leave LAT/LON as None
        pass

print(df)
Error Message:
Traceback (most recent call last):
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 253, in _request
    result = self._get_body(response)
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 276, in _get_body
    raise googlemaps.exceptions._RetriableRequest()
googlemaps.exceptions._RetriableRequest

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:/Users/JGrov/Google Drive/pythonProjects/Megalith Map/googleMapsAPI_Batch_Megaliths.py", line 16, in <module>
    geocode_result = gmaps_key.geocode(df.iat[i,0])
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 356, in wrapper
    result = func(*args, **kwargs)
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\geocoding.py", line 68, in geocode
    return client._request("/maps/api/geocode/json", params)["results"]
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 260, in _request
    extract_body, requests_kwargs, post_json)
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 260, in _request
    extract_body, requests_kwargs, post_json)
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 260, in _request
    extract_body, requests_kwargs, post_json)
  [Previous line repeated 9 more times]
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 203, in _request
    raise googlemaps.exceptions.Timeout()
googlemaps.exceptions.Timeout
Any help on this matter would be appreciated. Thank you.
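A note on what that traceback is actually saying: the repeated _request frames show the client retrying a retriable server response (such as a rate-limit reply) until its retry window ran out, which is the point where googlemaps.exceptions.Timeout is raised. Below is a minimal sketch that spaces requests out and fails soft on a timeout, reusing the df from above; timeout and retry_timeout are real googlemaps.Client parameters, while the half-second pause is an arbitrary choice, not a documented requirement.

import time

import googlemaps

gmaps_key = googlemaps.Client(key='MY KEY', timeout=10, retry_timeout=120)

for i in range(len(df)):
    try:
        geocode_result = gmaps_key.geocode(df.iat[i, 0])
    except googlemaps.exceptions.Timeout:
        geocode_result = []  # skip this landmark instead of aborting the batch
    # ... extract lat/lng exactly as in the loop above ...
    time.sleep(0.5)  # breathing room between requests (arbitrary)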
Related
I am trying to geocode addresses to latitude/longitude. My code:

from geopy.extra.rate_limiter import RateLimiter
from geopy.geocoders import Nominatim
import pandas as pd

# cong is a DataFrame with an 'Address' column, loaded earlier
locator = Nominatim(user_agent="myGeocoder_amh")
# 1 - convenient function to delay between geocoding calls
geocode = RateLimiter(locator.geocode, min_delay_seconds=5)
# 2 - create location column
cong['location'] = cong['Address'].apply(geocode)
# 3 - create point column from location (a (latitude, longitude, altitude) tuple)
cong['point'] = cong['location'].apply(lambda loc: tuple(loc.point) if loc else None)
# 4 - split point column into latitude, longitude and altitude columns
cong[['latitude', 'longitude', 'altitude']] = pd.DataFrame(cong['point'].tolist(), index=cong.index)
Results in the following error:
Traceback (most recent call last):
File "C:\Users\alexa\anaconda3\lib\site-packages\geopy\geocoders\base.py", line 368, in _call_geocoder
result = self.adapter.get_json(url, timeout=timeout, headers=req_headers)
File "C:\Users\alexa\anaconda3\lib\site-packages\geopy\adapters.py", line 438, in get_json
resp = self._request(url, timeout=timeout, headers=headers)
File "C:\Users\alexa\anaconda3\lib\site-packages\geopy\adapters.py", line 466, in _request
raise AdapterHTTPError(
geopy.adapters.AdapterHTTPError: Non-successful status code 403
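One hedged workaround, since the public Nominatim server enforces a strict usage policy (an absolute maximum of one request per second) and a 403 means it refused the request outright: let the RateLimiter absorb failures so one rejected row doesn't kill the whole apply. The retry parameters below are standard geopy RateLimiter options; the wait times are arbitrary choices.

from geopy.extra.rate_limiter import RateLimiter

geocode = RateLimiter(
    locator.geocode,
    min_delay_seconds=5,             # stay well under the 1 request/second policy
    max_retries=2,                   # retry each failing address twice
    error_wait_seconds=10.0,         # pause before retrying (arbitrary)
    swallow_exceptions=True,         # don't raise on failure
    return_value_on_exception=None,  # rows that still fail get None
)

cong['location'] = cong['Address'].apply(geocode)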
If I make a call for only one cryptocurrency it works, but for multiple it fails.
import pandas_datareader as pdr
...
crypto_df = pdr.DataReader('BTC-USD', data_source='yahoo', start='2015-01-01')
works fine
crypto_df = pdr.DataReader('ETH-USD', data_source='yahoo', start='2015-01-01')
also works fine
crypto_df = pdr.DataReader(['BTC-USD', 'ETH-USD'], data_source='yahoo', start='2015-01-01')
fails with the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/alex/.local/lib/python3.8/site-packages/pandas/util/_decorators.py", line 199, in wrapper
return func(*args, **kwargs)
File "/home/alex/.local/lib/python3.8/site-packages/pandas_datareader/data.py", line 376, in DataReader
return YahooDailyReader(
File "/home/alex/.local/lib/python3.8/site-packages/pandas_datareader/base.py", line 258, in read
df = self._dl_mult_symbols(self.symbols)
File "/home/alex/.local/lib/python3.8/site-packages/pandas_datareader/base.py", line 285, in _dl_mult_symbols
result = concat(stocks, sort=True).unstack(level=0)
File "/home/alex/.local/lib/python3.8/site-packages/pandas/core/frame.py", line 7349, in unstack
result = unstack(self, level, fill_value)
File "/home/alex/.local/lib/python3.8/site-packages/pandas/core/reshape/reshape.py", line 417, in unstack
return _unstack_frame(obj, level, fill_value=fill_value)
File "/home/alex/.local/lib/python3.8/site-packages/pandas/core/reshape/reshape.py", line 444, in _unstack_frame
return _Unstacker(
File "/home/alex/.local/lib/python3.8/site-packages/pandas/core/reshape/reshape.py", line 118, in __init__
self._make_selectors()
File "/home/alex/.local/lib/python3.8/site-packages/pandas/core/reshape/reshape.py", line 167, in _make_selectors
raise ValueError("Index contains duplicate entries, cannot reshape")
ValueError: Index contains duplicate entries, cannot reshape
This works as expected with stocks, but fails with cryptocurrency.
I'm confident this is not an issue on my side, but I am hoping someone can confirm. I will open a ticket with the developers if this is an unknown bug.
The multi-symbol call trips over duplicate index entries when it reshapes the combined frame (that is what the ValueError says). A workaround is to fetch each ticker separately and keep only the column you want:

# Fetching crypto data from Yahoo, one ticker at a time
import pandas as pd
from pandas_datareader import data as wb

tickers = ['BTC-USD', 'ETH-USD']
crypto_data = pd.DataFrame()
for t in tickers:
    crypto_data[t] = wb.DataReader(t, data_source='yahoo', start='2020-12-01')['Adj Close']

You were missing the ['Adj Close'] selection in this case.
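As a quick check on what the loop builds (nothing here beyond standard pandas):

print(crypto_data.head())
# Each ticker is a column of adjusted closes indexed by date, so the
# usual pandas operations apply, e.g. daily returns:
daily_returns = crypto_data.pct_change()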
I have to read from an OrientDB database. To test that everything works, I tried to read from the database with a SELECT statement, like this:

import pyorient

client = pyorient.OrientDB("adress", 2424)
session_id = client.connect("root", "password")
client.db_open("table", "root", "password")
print str(client.db_size())
client.query("SELECT * FROM L1_Req", 1)
The connection works fine, and so does the print str(client.db_size()) line. But client.query("SELECT * FROM L1_Req", 1) returns the following error message:
Traceback (most recent call last):
  File "testpy.py", line 9, in <module>
    client.query("SELECT * FROM L1_Req",1)
  File "C:\app\tools\python27\lib\site-packages\pyorient\orient.py", line 470, in query
    .prepare(( QUERY_SYNC, ) + args).send().fetch_response()
  File "C:\app\tools\python27\lib\site-packages\pyorient\messages\commands.py", line 144, in fetch_response
    super( CommandMessage, self ).fetch_response()
  File "C:\app\tools\python27\lib\site-packages\pyorient\messages\base.py", line 265, in fetch_response
    self._decode_all()
  File "C:\app\tools\python27\lib\site-packages\pyorient\messages\base.py", line 249, in _decode_all
    self._decode_header()
  File "C:\app\tools\python27\lib\site-packages\pyorient\messages\base.py", line 176, in _decode_header
    serialized_exception = self._decode_field( FIELD_STRING )
  File "C:\app\tools\python27\lib\site-packages\pyorient\messages\base.py", line 366, in _decode_field
    _decoded_string = self._orientSocket.read( _len )
  File "C:\app\tools\python27\lib\site-packages\pyorient\orient.py", line 164, in read
    buf = bytearray(_len_to_read)
MemoryError
I also tried some other SQL statements, like:

client.query("SELECT subSystem FROM L1_Req", 1)

I can't figure out why this happens. Can you help me?
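For context on where this dies: the traceback shows pyorient decoding a serialized exception from the server (_decode_header calls _decode_field) and then misreading the string length, so the bytearray(_len_to_read) allocation blows up. In other words, the query apparently failed server-side and the client could not even decode the error reply; a mismatch between the pyorient release and the server's binary protocol version is a common suspect, though that diagnosis is an assumption, not a confirmed cause. A minimal Python 2 sketch that at least keeps the script alive:

import pyorient

client = pyorient.OrientDB("adress", 2424)
session_id = client.connect("root", "password")
client.db_open("table", "root", "password")

try:
    for record in client.query("SELECT * FROM L1_Req", 1):
        print record
except MemoryError:
    # The reply could not be decoded, so the server-side error is lost.
    # Assumption to verify: does this pyorient release support the
    # OrientDB server's binary protocol version?
    print "could not decode server reply - check pyorient vs. OrientDB versions"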
So I'm trying to get Instagram photos that fit certain parameters, and I'm getting the following stack trace:
Traceback (most recent call last):
File "instagram_find_shows.py", line 83, in <module>
if __name__ == "__main__": main()
File "instagram_find_shows.py", line 48, in main
get_instagram_posts(show_name, show_time, coordinates)
File "instagram_find_shows.py", line 73, in get_instagram_posts
str(coordinates[1]), min_time, max_time)
File "C:\Users\User Name\Anaconda3\lib\site-packages\instagram\bind.py", line 197, in _call
return method.execute()
File "C:\Users\User Name\Anaconda3\lib\site-packages\instagram\bind.py", line 189, in execute
content, next = self._do_api_request(url, method, body, headers)
File "C:\Users\User Name\Anaconda3\lib\site-packages\instagram\bind.py", line 163, in _do_api_request
raise InstagramAPIError(status_code, content_obj['meta']['error_type'], content_obj['meta']['error_message'])
instagram.bind.InstagramAPIError: (400) OAuthPermissionsException-This request requires scope=public_content, but this access token is not authorized with this scope. The user must re-authorize your application with scope=public_content to be granted this permissions.
The code is as follows:
from datetime import timedelta
import calendar

from instagram.client import InstagramAPI

# insta_* values are module-level settings defined elsewhere in the script
def get_instagram_posts(name, time, coordinates):
    max_time_dt = time + timedelta(hours=3)
    min_time_dt = time - timedelta(hours=1)
    max_time = str(calendar.timegm(max_time_dt.timetuple()))
    min_time = str(calendar.timegm(min_time_dt.timetuple()))
    dist_rad_str = str(insta_dist_radius_m)
    count_str = str(insta_count)
    api = InstagramAPI(access_token=insta_access_token,
                       client_secret=insta_client_secret)
    r = api.media_search(name, count_str, str(coordinates[0]),
                         str(coordinates[1]), min_time, max_time)
    photos = []
    for media in r:
        photos.append('<img src="%s"/>' % media.images['thumbnail'].url)
    print(photos[0])
I can't figure out what to do. I'm literally just trying to run a simple test, not trying to cripple their API. Is there any way to do this within Instagram's parameters? Thanks so much!
Fixed by going to the following URL in the browser:
https://www.instagram.com/oauth/authorize?client_id=[CLIENT_ID]&redirect_uri=[REDIRECT_URI]&response_type=code&scope=basic+public_content+follower_list+comments+relationships+likes
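To round that out: after authorizing in the browser, Instagram redirects back with a code parameter, which the (legacy) API then exchanged for an access token carrying the requested scopes. A sketch of that exchange; the endpoint and field names follow Instagram's old OAuth documentation, and every placeholder value is an assumption to fill in:

import requests

# Placeholders throughout - substitute your real app credentials and the
# code appended to your redirect URI after the browser authorization step.
payload = {
    'client_id': 'CLIENT_ID',
    'client_secret': 'CLIENT_SECRET',
    'grant_type': 'authorization_code',
    'redirect_uri': 'REDIRECT_URI',
    'code': 'CODE_FROM_REDIRECT',
}

resp = requests.post('https://api.instagram.com/oauth/access_token', data=payload)
token = resp.json()['access_token']  # this token now carries the granted scopes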
I am using gdata in Python to read the rows of a specific worksheet from a public spreadsheet. I tried the following code:
import gdata.spreadsheet.service

client = gdata.spreadsheet.service.SpreadsheetsService()
key = 'xxxxxxxxxxxxxxxxxxxxxxxxxx'
worksheets_feed = client.GetWorksheetsFeed(key, visibility='public', projection='values')
# print worksheets_feed
for entry in worksheets_feed.entry:
    print entry.title.text
    worksheet_id = entry.id.text.rsplit('/', 1)[1]
    rows = client.GetListFeed(key, worksheet_id).entry
I am getting this error:
Traceback (most recent call last):
File "lib/scrapper.py", line 89, in <module>
start_it()
File "lib/scrapper.py", line 56, in start_it
rows = client.GetListFeed(key, worksheet_id).entry
File "/Library/Python/2.7/site-packages/gdata/spreadsheet/service.py", line 252, in GetListFeed
converter=gdata.spreadsheet.SpreadsheetsListFeedFromString)
File "/Library/Python/2.7/site-packages/gdata/service.py", line 1074, in Get
return converter(result_body)
File "/Library/Python/2.7/site-packages/gdata/spreadsheet/__init__.py", line 474, in SpreadsheetsListFeedFromString
xml_string)
File "/Library/Python/2.7/site-packages/atom/__init__.py", line 93, in optional_warn_function
return f(*args, **kwargs)
File "/Library/Python/2.7/site-packages/atom/__init__.py", line 127, in CreateClassFromXMLString
tree = ElementTree.fromstring(xml_string)
File "<string>", line 125, in XML
cElementTree.ParseError: no element found: line 1, column 0
Can somebody tell me where I am going wrong?
Try:

worksheet_feed = spreadsheet.GetWorksheetsFeed(spreadsheetId)
worksheetfeed = []
for worksheet in worksheet_feed.entry:
    # keep the segment after the last '/', which is the worksheet id
    worksheetfeed.append(worksheet.id.text.rsplit('/', 1)[1])

list_feed = spreadsheet.GetListFeed(spreadsheetId, worksheetfeed[0])  # get first worksheet

entryList = []
for entry in list_feed.entry:
    tempDict = {}
    for key in entry.custom:
        tempDict[str(key)] = str(entry.custom[key].text)
    entryList.append(tempDict)

where spreadsheetId has been defined and you have been previously authenticated.
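For completeness, a minimal sketch of the "previously authenticated" part, assuming the old ClientLogin flow that SpreadsheetsService supported (long deprecated; all credential values here are placeholders):

import gdata.spreadsheet.service

spreadsheet = gdata.spreadsheet.service.SpreadsheetsService()
spreadsheet.email = 'you@example.com'       # placeholder
spreadsheet.password = 'app-password'       # placeholder
spreadsheet.source = 'my-spreadsheet-app'   # any application identifier
spreadsheet.ProgrammaticLogin()             # performs the ClientLogin handshake

spreadsheetId = 'xxxxxxxxxxxxxxxxxxxxxxxxxx'  # the key from the sheet URL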