How to handle "not found" ticker in yfinance? - python

So I'm trying to fetch some stock data in a loop (not sure if I can pass an array), like this:
def getData(ticker):
    print(ticker)
    data = pdr.get_data_yahoo(ticker, start=start_date, end=today)
    dataname = ticker + '_' + str(today)
    files.append(dataname)
    SaveData(data, dataname)
But for some reason, some of the tickers I feed to pdr.get_data_yahoo() are not found, and Python throws this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/pandas_datareader/yahoo/daily.py", line 157, in _read_one_data
data = j["context"]["dispatcher"]["stores"]["HistoricalPriceStore"]
KeyError: 'HistoricalPriceStore'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "borsdata_api.py", line 65, in <module>
getData(row['ticker'])
File "borsdata_api.py", line 47, in getData
data = pdr.get_data_yahoo(ticker, start=start_date, end=today)
File "/usr/local/lib/python3.7/site-packages/pandas_datareader/data.py", line 82, in get_data_yahoo
return YahooDailyReader(*args, **kwargs).read()
File "/usr/local/lib/python3.7/site-packages/pandas_datareader/base.py", line 251, in read
df = self._read_one_data(self.url, params=self._get_params(self.symbols))
File "/usr/local/lib/python3.7/site-packages/pandas_datareader/yahoo/daily.py", line 160, in _read_one_data
raise RemoteDataError(msg.format(symbol, self.__class__.__name__))
pandas_datareader._utils.RemoteDataError: No data fetched for symbol ADDV-TO-1.ST using YahooDailyReader
Is it possible to just skip this iteration and move on to the next one in the list?

def getData(ticker):
    print(ticker)
    try:
        data = pdr.get_data_yahoo(ticker, start=start_date, end=today)
        dataname = ticker + '_' + str(today)
        files.append(dataname)
        SaveData(data, dataname)
    except Exception:
        pass  # or traceback.print_exc(), or traceback.format_exc() (requires import traceback)
        # print_exc() prints the traceback without re-raising the error.
        # format_exc() returns the traceback as a string.
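If you would rather skip only the "symbol not found" case and still let unrelated bugs surface, you can catch pandas_datareader's RemoteDataError specifically. A minimal sketch, assuming the same pdr, start_date, today, files and SaveData names from the question:

from pandas_datareader._utils import RemoteDataError

def getData(ticker):
    print(ticker)
    try:
        data = pdr.get_data_yahoo(ticker, start=start_date, end=today)
    except RemoteDataError:
        # Yahoo returned no data for this symbol; skip it and keep looping.
        print('No data for {}, skipping'.format(ticker))
        return
    dataname = ticker + '_' + str(today)
    files.append(dataname)
    SaveData(data, dataname)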

Related

Exception in Tkinter callback, KeyError: 0

Exception in Tkinter callback
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\range.py", line 351, in get_loc
return self._range.index(new_key)
ValueError: 0 is not in range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\tkinter\__init__.py", line 1892, in __call__
return self.func(*args)
File "<ipython-input-18-518ef7ebbd84>", line 4, in myClick2
a.plot(p1.total_payoff()[0],p1.total_payoff()[1])
File "<ipython-input-10-57724152e4a6>", line 59, in total_payoff
prices = self.option_payoff()[0]
File "<ipython-input-10-57724152e4a6>", line 47, in option_payoff
temppayoff += callpayoff(i,j.get_strike(),j.find_bidask()[1])*j.get_quantity()
File "<ipython-input-6-59b6ad5c0680>", line 28, in find_bidask
bid = data['bid'][0]
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\series.py", line 853, in __getitem__
return self._get_value(key)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\series.py", line 961, in _get_value
loc = self.index.get_loc(label)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\range.py", line 353, in get_loc
raise KeyError(key) from err
KeyError: 0
This is the error message I am getting when I try to execute a function by clicking a Tkinter button. The function is below; basically it takes some data (x, y) and plots it with matplotlib.
def myClick2():
    f = Figure(figsize=(4, 4), dpi=100)
    a = f.add_subplot(111)
    a.plot(p1.total_payoff()[0], p1.total_payoff()[1])
    a.grid(True, which='both')
    a.axhline(y=0, color='k')
    a.axvline(x=0, color='k')
    canvas = FigureCanvasTkAgg(f, master=root)
    canvas.draw()
    canvas.get_tk_widget().grid(row=8, column=1)
The traceback says that total_payoff, which calls option_payoff, which calls find_bidask, is leading to the error, specifically the line where I assign bid = data['bid'][0].
def find_bidask(self):
    if str.upper(self.cp) == 'C':
        data = self.data['calls']
    else:
        data = self.data['puts']
    data = data[data['contractSymbol'] == self.symbol].reset_index(drop=True)
    bid = data['bid'][0]
    ask = data['ask'][0]
However, when I run this separately outside of Tkinter, it produces no error, and ['bid'][0] is available as a value. I don't understand what is wrong with my code; is it something in the Tkinter myClick2 function that is wrong?
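The KeyError: 0 means the contractSymbol filter returned an empty DataFrame, so the reset index has no position 0. A hedged sketch of a more defensive find_bidask, assuming the same self.data, self.cp and self.symbol attributes from the question:

def find_bidask(self):
    # Pick the calls or puts table from the option chain, as in the question.
    data = self.data['calls'] if self.cp.upper() == 'C' else self.data['puts']
    data = data[data['contractSymbol'] == self.symbol].reset_index(drop=True)
    if data.empty:
        # No row matched this contract symbol; avoid KeyError: 0
        # and let the caller decide how to handle a missing quote.
        return None, None
    return data['bid'].iloc[0], data['ask'].iloc[0]

If this returns None when called from the Tkinter button but values when run standalone, the data held by the object likely differs between the two runs.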

Pandas AttributeError: 'DataFrame' object has no attribute 'Timestamp'

So I want to get the monthly sum with my script, but I always get an AttributeError, which I don't understand. The column Timestamp does indeed exist in my combined_csv. I know for sure that this line is causing the problem, since I tested all of my other code before.
AttributeError: 'DataFrame' object has no attribute 'Timestamp'
I'll appreciate any kind of help I can get. Thanks!
import os
import glob
import pandas as pd
# set working directory
os.chdir("Path to CSVs")
# find all csv files in the folder
# use glob pattern matching -> extension = 'csv'
# save result in list -> all_filenames
extension = 'csv'
all_filenames = [i for i in glob.glob('*.{}'.format(extension))]
# print(all_filenames)
# combine all files in the list
combined_csv = pd.concat([pd.read_csv(f, sep=';') for f in all_filenames])
# Format CSV
# Transform Timestamp column into datetime
combined_csv['Timestamp'] = pd.to_datetime(combined_csv.Timestamp)
# Read out first entry of every day of every month
combined_csv = round(combined_csv.resample('D', on='Timestamp')['HtmDht_Energy'].agg(['first']))
# To get the yield of day i have to subtract day 2 HtmDht_Energy - day 1 HtmDht_Energy
combined_csv["dailyYield"] = combined_csv["first"] - combined_csv["first"].shift()
# combined_csv.reset_index()
# combined_csv.index.set_names(["year", "month"], inplace=True)
combined_csv["monthlySum"] = combined_csv.groupby([combined_csv.Timestamp.dt.year, combined_csv.Timestamp.dt.month]).sum()
Output of combined_csv.columns
Index(['Timestamp', 'teHst0101', 'teHst0102', 'teHst0103', 'teHst0104',
'teHst0105', 'teHst0106', 'teHst0107', 'teHst0201', 'teHst0202',
'teHst0203', 'teHst0204', 'teHst0301', 'teHst0302', 'teHst0303',
'teHst0304', 'teAmb', 'teSolFloHexHst', 'teSolRetHexHst',
'teSolCol0501', 'teSolCol1001', 'teSolCol1501', 'vfSol', 'prSolRetSuc',
'rdGlobalColAngle', 'gSolPump01_roActual', 'gSolPump02_roActual',
'gHstPump03_roActual', 'gHstPump04_roActual', 'gDhtPump06_roActual',
'gMB01_isOpened', 'gMB02_isOpened', 'gCV01_posActual',
'gCV02_posActual', 'HtmDht_Energy', 'HtmDht_Flow', 'HtmDht_Power',
'HtmDht_Volume', 'HtmDht_teFlow', 'HtmDht_teReturn', 'HtmHst_Energy',
'HtmHst_Flow', 'HtmHst_Power', 'HtmHst_Volume', 'HtmHst_teFlow',
'HtmHst_teReturn', 'teSolColDes', 'teHstFloDes'],
dtype='object')
Traceback:
When I select it with
combined_csv["monthlySum"] = combined_csv.groupby([combined_csv['Timestamp'].dt.year, combined_csv['Timestamp'].dt.month]).sum()
Traceback (most recent call last):
File "D:\Users\wink\PycharmProjects\csvToExcel\main.py", line 28, in <module>
combined_csv["monthlySum"] = combined_csv.groupby([combined_csv['Timestamp'].dt.year, combined_csv['Timestamp'].dt.month]).sum()
File "D:\Users\wink\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\frame.py", line 3024, in __getitem__
indexer = self.columns.get_loc(key)
File "D:\Users\wink\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\indexes\base.py", line 3082, in get_loc
raise KeyError(key) from err
KeyError: 'Timestamp'
Traceback with mustafa's solution:
Traceback (most recent call last):
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\frame.py", line 3862, in reindexer
value = value.reindex(self.index)._values
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\util\_decorators.py", line 312, in wrapper
return func(*args, **kwargs)
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\frame.py", line 4176, in reindex
return super().reindex(**kwargs)
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\generic.py", line 4811, in reindex
return self._reindex_axes(
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\frame.py", line 4022, in _reindex_axes
frame = frame._reindex_index(
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\frame.py", line 4038, in _reindex_index
new_index, indexer = self.index.reindex(
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\indexes\multi.py", line 2492, in reindex
target = MultiIndex.from_tuples(target)
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\indexes\multi.py", line 175, in new_meth
return meth(self_or_cls, *args, **kwargs)
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\indexes\multi.py", line 531, in from_tuples
arrays = list(lib.tuples_to_object_array(tuples).T)
File "pandas\_libs\lib.pyx", line 2527, in pandas._libs.lib.tuples_to_object_array
ValueError: Buffer dtype mismatch, expected 'Python object' but got 'long long'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\winklerm\PycharmProjects\csvToExcel\main.py", line 28, in <module>
combined_csv["monthlySum"] = combined_csv.groupby([combined_csv.Timestamp.dt.year, combined_csv.Timestamp.dt.month]).sum()
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\frame.py", line 3163, in __setitem__
self._set_item(key, value)
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\frame.py", line 3242, in _set_item
value = self._sanitize_column(key, value)
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\frame.py", line 3888, in _sanitize_column
value = reindexer(value).T
File "C:\Users\winklerm\PycharmProjects\csvToExcel\venv\lib\site-packages\pandas\core\frame.py", line 3870, in reindexer
raise TypeError(
TypeError: incompatible index of inserted column with frame index
This line makes the Timestamp column the index of the combined_csv:
combined_csv = round(combined_csv.resample('D', on='Timestamp')['HtmDht_Energy'].agg(['first']))
and therefore you get an error when you try to access .Timestamp.
The remedy is to reset_index, so instead of the above line, you can try this:
combined_csv = round(combined_csv.resample('D', on='Timestamp')['HtmDht_Energy'].agg(['first'])).reset_index()
which will move the Timestamp column out of the index back into a regular column, so you can then access it.
Side note:
combined_csv["dailyYield"] = combined_csv["first"] - combined_csv["first"].shift()
is equivalent to
combined_csv["dailyYield"] = combined_csv["first"].diff()

Pandas datareader failure

I want to get all the S&P 500 stocks into a folder in CSV format.
While scanning the S&P 500 everything works great, but it seems that in some cases the Date index is missing because a stock doesn't exist or has no data for a specific period. I tried changing the start date and end date, but it had no effect. In an earlier post I was told to filter those cases with an exception, but since Python is new territory for me I was lost. Is there someone who can help me?
If this error occurs:
/home/mu351i/PycharmProjects/untitled/venv/bin/python /home/mu351i/PycharmProjects/untitled/get_sp500_beautifulsoup_intro.py
Traceback (most recent call last):
File "/home/mu351i/PycharmProjects/untitled/venv/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2897, in get_loc
return self._engine.get_loc(key)
File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Date'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mu351i/PycharmProjects/untitled/get_sp500_beautifulsoup_intro.py", line 44, in get_data_from_yahoo
df = web.DataReader (ticker, 'yahoo', start, end)
File "/home/mu351i/PycharmProjects/untitled/venv/lib/python3.7/site-packages/pandas/util/_decorators.py", line 208, in wrapper
return func(*args, **kwargs)
File "/home/mu351i/PycharmProjects/untitled/venv/lib/python3.7/site-packages/pandas_datareader/data.py", line 387, in DataReader
session=session,
File "/home/mu351i/PycharmProjects/untitled/venv/lib/python3.7/site-packages/pandas_datareader/base.py", line 251, in read
df = self._read_one_data(self.url, params=self._get_params(self.symbols))
File "/home/mu351i/PycharmProjects/untitled/venv/lib/python3.7/site-packages/pandas_datareader/yahoo/daily.py", line 165, in _read_one_data
prices["Date"] = to_datetime(to_datetime(prices["Date"], unit="s").dt.date)
File "/home/mu351i/PycharmProjects/untitled/venv/lib/python3.7/site-packages/pandas/core/frame.py", line 2995, in getitem
indexer = self.columns.get_loc(key)
File "/home/mu351i/PycharmProjects/untitled/venv/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2899, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1607, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1614, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Date'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mu351i/PycharmProjects/untitled/get_sp500_beautifulsoup_intro.py", line 57, in
get_data_from_yahoo()
File "/home/mu351i/PycharmProjects/untitled/get_sp500_beautifulsoup_intro.py", line 48, in get_data_from_yahoo
except RemoteDataError:
NameError: name 'RemoteDataError' is not defined
Process finished with exit code 1
How would you avoid this by changing this code?
import datetime as dt
import os
import pickle
import bs4 as bs
import pandas_datareader.data as web
import requests

def safe_sp500_tickers():
    resp = requests.get('https://en.wikipedia.org/wiki/List_of_S%26P_500_companies')
    soup = bs.BeautifulSoup(resp.text, 'lxml')
    table = soup.find('table', {'class': 'wikitable sortable'})
    tickers = []
    for row in table.findAll('tr')[1:]:
        ticker = row.findAll('td')[0].text.strip()
        tickers.append(ticker)
    with open('sp500tickers.pickle', 'wb') as f:
        pickle.dump(tickers, f)
    return tickers

safe_sp500_tickers()

def get_data_from_yahoo(reload_sp500=False):
    if reload_sp500:
        tickers = safe_sp500_tickers()
    else:
        with open('sp500tickers.pickle', 'rb') as f:
            tickers = pickle.load(f)
    if not os.path.exists('stock_dfs'):
        os.makedirs('stock_dfs')
    start = dt.datetime(1999, 1, 1)
    end = dt.datetime(2019, 12, 19)
    for ticker in tickers:
        try:
            if not os.path.exists('stock_dfs/{}.csv'.format(ticker)):
                df = web.DataReader(ticker, 'yahoo', start, end)
                df.to_csv('stock_dfs/{}.csv'.format(ticker))
            else:
                print("Ticker {} already available".format(ticker))
        except RemoteDataError:
            print("No information for ticker '%s'" % ticker)
            continue
        except KeyError:
            print("no Date for Ticker: " + ticker)
            continue

get_data_from_yahoo()
A commenter asked for a data sample; here is data from TSLA.csv:
Date,High,Low,Open,Close,Volume,Adj Close
2010-06-29,25.0,17.540000915527344,19.0,23.889999389648438,18766300,23.889999389648438
2010-06-30,30.420000076293945,23.299999237060547,25.790000915527344,23.829999923706055,17187100,23.829999923706055
2010-07-01,25.920000076293945,20.270000457763672,25.0,21.959999084472656,8218800,21.959999084472656
2010-07-02,23.100000381469727,18.709999084472656,23.0,19.200000762939453,5139800,19.200000762939453
2010-07-06,20.0,15.829999923706055,20.0,16.110000610351562,6866900,16.110000610351562
2010-07-07,16.6299991607666,14.979999542236328,16.399999618530273,15.800000190734863,6921700,15.800000190734863
2010-07-08,17.520000457763672,15.569999694824219,16.139999389648438,17.459999084472656,7711400,17.459999084472656
2010-07-09,17.899999618530273,16.549999237060547,17.579999923706055,17.399999618530273,4050600,17.399999618530273
2010-07-12,18.06999969482422,17.0,17.950000762939453,17.049999237060547,2202500,17.049999237060547
2010-07-13,18.639999389648438,16.899999618530273,17.389999389648438,18.139999389648438,2680100,18.139999389648438
2010-07-14,20.149999618530273,17.760000228881836,17.940000534057617,19.84000015258789,4195200,19.84000015258789
2010-07-15,21.5,19.0,19.940000534057617,19.889999389648438,3739800,19.889999389648438
2010-07-16,21.299999237060547,20.049999237060547,20.700000762939453,20.639999389648438,2621300,20.639999389648438
Please provide constructive feedback because I'm new here.
Thanks :)
You are missing an import. Add the following import at the top of your script:
from pandas_datareader._utils import RemoteDataError
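With that import in place, the except RemoteDataError: clause in get_data_from_yahoo resolves correctly and failed symbols are skipped instead of crashing the loop. A minimal, self-contained sketch of the pattern (the ticker list here is shortened for illustration):

import datetime as dt
import pandas_datareader.data as web
from pandas_datareader._utils import RemoteDataError

start, end = dt.datetime(1999, 1, 1), dt.datetime(2019, 12, 19)
for ticker in ['TSLA', 'NOT-A-REAL-TICKER']:  # illustrative list
    try:
        df = web.DataReader(ticker, 'yahoo', start, end)
    except (RemoteDataError, KeyError):
        # No data (or no 'Date' column) came back for this symbol; skip it.
        print("No information for ticker '%s'" % ticker)
        continue
    df.to_csv('{}.csv'.format(ticker))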
Another suggestion filters the Wikipedia constituent table by its 'Date first added' column:
import pandas as pd

df = pd.read_html(
    "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")[0]
sort = pd.DataFrame(df).sort_values(by=['Date first added'])
sort['Date first added'] = pd.to_datetime(sort['Date first added'])
start_date = '1-1-1999'
end_date = '11-12-2019'
mask = (sort['Date first added'] > start_date) & (
    sort['Date first added'] <= end_date)
sort = sort.loc[mask]
pd.DataFrame(sort).to_csv('result.csv', index=False)

Timeout error while batch geocoding with google maps API in python

I'm new to the Google Maps API and I'm not sure why this code isn't working. I have a list of 80 landmarks in a CSV file that I'm trying to retrieve the latitude and longitude coordinates for.
I believe something may be wrong with how I'm connecting to the API. From my understanding, I should have 2,500 free requests per day but I'm receiving a timeout error that makes me think I've already reached my limit.
Here is a snapshot of my dashboard
Code:
import pandas as pd
import googlemaps

# IMPORT DATASET
df = pd.read_csv('landmarks.csv')

# GOOGLE MAPS API KEY
gmaps_key = googlemaps.Client(key='MY KEY')

df['LAT'] = None
df['LON'] = None

for i in range(0, len(df), 1):
    geocode_result = gmaps_key.geocode(df.iat[i, 0])
    try:
        lat = geocode_result[0]['geometry']['location']['lat']
        lon = geocode_result[0]['geometry']['location']['lon']
        df.iat[i, df.comlumns.get_loc('LAT')] = lat
        df.iat[i, df.comlumns.get_loc('LON')] = lon
    except:
        lat = None
        lon = None

print(df)
Error Message:
Traceback (most recent call last):
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 253, in _request
    result = self._get_body(response)
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 276, in _get_body
    raise googlemaps.exceptions._RetriableRequest()
googlemaps.exceptions._RetriableRequest

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:/Users/JGrov/Google Drive/pythonProjects/Megalith Map/googleMapsAPI_Batch_Megaliths.py", line 16, in <module>
    geocode_result = gmaps_key.geocode(df.iat[i,0])
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 356, in wrapper
    result = func(*args, **kwargs)
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\geocoding.py", line 68, in geocode
    return client._request("/maps/api/geocode/json", params)["results"]
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 260, in _request
    extract_body, requests_kwargs, post_json)
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 260, in _request
    extract_body, requests_kwargs, post_json)
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 260, in _request
    extract_body, requests_kwargs, post_json)
  [Previous line repeated 9 more times]
  File "C:\Users\JGrov\Anaconda3\lib\site-packages\googlemaps\client.py", line 203, in _request
    raise googlemaps.exceptions.Timeout()
googlemaps.exceptions.Timeout
Any help on this matter would be appreciated. Thank you.
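Apart from quota, two things in the loop above stand out: the Geocoding API returns the longitude under the key 'lng' (not 'lon'), and df.comlumns is a typo for df.columns; both are currently swallowed by the bare except. A hedged sketch of the same loop with those fixed, plus a short pause between requests in case the retriable errors come from rate limiting:

import time

import googlemaps
import pandas as pd
from googlemaps import exceptions as gmaps_exceptions

df = pd.read_csv('landmarks.csv')
gmaps_key = googlemaps.Client(key='MY KEY')  # placeholder key, as in the question

df['LAT'] = None
df['LON'] = None

for i in range(len(df)):
    try:
        geocode_result = gmaps_key.geocode(df.iat[i, 0])
        location = geocode_result[0]['geometry']['location']
        df.iat[i, df.columns.get_loc('LAT')] = location['lat']
        df.iat[i, df.columns.get_loc('LON')] = location['lng']  # 'lng', not 'lon'
    except (IndexError, gmaps_exceptions.Timeout):
        # No result came back, or the client gave up retrying; leave LAT/LON as None.
        pass
    time.sleep(0.1)  # brief pause between requests

print(df)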

Handling exceptions in while loop - Python

Here is my code (almost full version for #cdhowie :)):
def getResult(method, argument=None):
    result = None
    while True:
        print('### loop')
        try:
            print('### try hard...')
            if argument:
                result = method(argument)
            else:
                result = method()
            break
        except Exception as e:
            print('### GithubException')
            if 403 == e.status:
                print('Warning: ' + str(e.data))
                print('I will try again after 10 minutes...')
            else:
                raise e
    return result

def getUsernames(locations, gh):
    usernames = set()
    for location in locations:
        print location
        result = getResult(gh.legacy_search_users, location)
        for user in result:
            usernames.add(user.login)
            print user.login,
    return usernames

# "main.py"
gh = Github()
locations = ['Washington', 'Berlin']
# "main.py", line 12 is below
usernames = getUsernames(locations, gh)
The problem is that the exception is raised, but I can't handle it. Here is the output:
### loop
### try hard...
Traceback (most recent call last):
File "main.py", line 12, in <module>
usernames = getUsernames(locations, gh)
File "/home/ciembor/projekty/github-rank/functions.py", line 39, in getUsernames
for user in result:
File "/usr/lib/python2.7/site-packages/PyGithub-1.8.0-py2.7.egg/github/PaginatedList.py", line 33, in __iter__
newElements = self.__grow()
File "/usr/lib/python2.7/site-packages/PyGithub-1.8.0-py2.7.egg/github/PaginatedList.py", line 45, in __grow
newElements = self._fetchNextPage()
File "/usr/lib/python2.7/site-packages/PyGithub-1.8.0-py2.7.egg/github/Legacy.py", line 37, in _fetchNextPage
return self.get_page(page)
File "/usr/lib/python2.7/site-packages/PyGithub-1.8.0-py2.7.egg/github/Legacy.py", line 48, in get_page
None
File "/usr/lib/python2.7/site-packages/PyGithub-1.8.0-py2.7.egg/github/Requester.py", line 69, in requestAndCheck
raise GithubException.GithubException(status, output)
github.GithubException.GithubException: 403 {u'message': u'API Rate Limit Exceeded for 11.11.11.11'}
Why doesn't it print ### GithubException?
Take a close look at the stack trace in the exception:
Traceback (most recent call last):
File "main.py", line 12, in <module>
usernames = getUsernames(locations, gh)
File "/home/ciembor/projekty/github-rank/functions.py", line 39, in getUsernames
for user in result:
File "/usr/lib/python2.7/site-packages/PyGithub-1.8.0-py2.7.egg/github/PaginatedList.py", line 33, in __iter__
newElements = self.__grow()
...
The exception is being thrown from code being called by the line for user in result: after getResult finishes executing. This means that the API you're using is using lazy evaluation, so the actual API request doesn't quite happen when you expect it to.
In order to catch and handle this exception, you'll need to wrap the code inside the getUsernames function with a try/except handler.
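A hedged sketch of what that could look like, reusing the retry idea from the question's getResult and the GithubException class shown in the traceback (the 10-minute wait mirrors the question's own message):

import time
from github.GithubException import GithubException

def getUsernames(locations, gh):
    usernames = set()
    for location in locations:
        result = getResult(gh.legacy_search_users, location)
        try:
            for user in result:  # pages are fetched lazily during this iteration
                usernames.add(user.login)
        except GithubException as e:
            if e.status == 403:  # rate limit exceeded mid-iteration
                print('Warning: ' + str(e.data))
                print('Waiting 10 minutes before the next location...')
                time.sleep(600)
            else:
                raise
    return usernames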

Categories

Resources