List indices must be integers error - python

I have the following error coming in my Python code:
if data['result'] == 0:
TypeError: list indices must be integers, not str
Following is the code :
data = urllib.urlencode(parameters)
req = urllib2.Request(url, data)
try:
    response = urllib2.urlopen(req)
except urllib2.URLError, e:
    self.redirect('/error')
json_post = response.read()
data = json.loads(json_post)
response_dict = simplejson.loads(json_post)
virustotal = VirusTotal()
if data['result'] == 0:
    virustotal_result = True
elif data['result'] == -2:
    self.response.out.write("API Request Rate Limit Exceeded<br/>")
elif data['result'] == -1:
    self.response.out.write("API Key provided is wrong<br/>")
elif data['result'] == 1:
    self.response.out.write("Time Stamp : %s<br/>" % data['report'][0])
I know that data is a list, so I tried indexing it with integers instead, but then the code raised an index-out-of-range error. Please help.

When you say data = json.loads(json_post), it sounds like you got a list, not the dict you seem to be expecting.
If this isn't the problem, try updating with the full traceback and the value of json_post.

You are getting an array of objects back from response.read(), which means you're getting a Python list. Try this:
data = json.loads(json_post)[0]
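To see why the [0] helps, here is a small sketch with a hypothetical response body shaped like a JSON array:

```python
import json

# hypothetical response body -- a JSON array wrapping a single object
json_post = '[{"result": 1, "report": ["2013-05-01 12:00:00"]}]'

data = json.loads(json_post)
print(type(data))                  # <class 'list'> -- so data['result'] raises TypeError

data = json.loads(json_post)[0]    # take the first object out of the array
print(data['result'])              # 1
```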

Related

pycurl.multicurl - Cant get my the returned results, only the object memory address

I have been trying to get this example code working https://fragmentsofcode.wordpress.com/2011/01/22/pycurl-curlmulti-example/ for a bit tonight and it has me stumped (not difficult).
m = pycurl.CurlMulti()
for url in urllist:
    response = io.StringIO()
    handle = pycurl.Curl()
    handle.setopt(pycurl.URL, url)
    handle.setopt(pycurl.WRITEDATA, response)
    req = (url, response)
    m.add_handle(handle)
    reqs.append(req)
# Perform multi-request.
# This code copied from pycurl docs, modified to explicitly
# set num_handles before the outer while loop.
SELECT_TIMEOUT = 1.0
num_handles = len(reqs)
while num_handles:
    ret = m.select(SELECT_TIMEOUT)
    if ret == -1:
        continue
    while 1:
        ret, num_handles = m.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
for req in reqs:
    # req[1].getvalue() contains response content
    print(req[1].getvalue())
I have tried the following prints
req[1]
req[1].getvalue()
req[1].getvalue().decode('ISO-8859-1')
The closest I have come is printing something like the following for all fetched URLs:
<_io.StringIO object at 0x7f19f931d9d8>
I have also tried using io.BytesIO with the same results.
Comments in the code are from the original author; I have changed some minor things that I think are version dependent and removed the related comments.
How can I print the contents of the object at that memory address rather than just the address?
EDIT:
print(req[1].getvalue()) raises the following error
print(req[1].getalue())
AttributeError: '_io.BytesIO' object has no attribute 'getalue'
so I squished it into variable like so
temp = req[1].getvalue()
print(temp)
and it just returned a b with single quotes like so:
b''
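For what it's worth, the buffering itself can be checked without pycurl. This sketch (plain io, hypothetical bytes) shows that print(buf) gives only the object's repr while getvalue() returns the buffered content, and that b'' means nothing was ever written into that particular buffer:

```python
import io

buf = io.BytesIO()
buf.write(b"<html>hello</html>")   # pycurl writes raw bytes into WRITEDATA like this

print(buf)                                    # repr only: <_io.BytesIO object at 0x...>
print(buf.getvalue())                         # b'<html>hello</html>' -- the buffered bytes
print(buf.getvalue().decode("iso-8859-1"))    # '<html>hello</html>' as str

empty = io.BytesIO()
print(empty.getvalue())                       # b'' -- nothing was ever written here
```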

How to overcome "TypeError: list indices must be integers or slices, not str" in for pa in data["result"]

I am facing the following problem: for pa in data["result"] raises the error "TypeError: list indices must be integers or slices, not str". Could you suggest what change I should make to the code so that it runs without this error?
with open("data/indicePA/indicePA.tsv", 'w') as f_indice_pa, open("data/indicePA/otherPA.tsv", 'w') as f_other:
    writer_indice_pa = csv.writer(f_indice_pa, delimiter='\t')
    writer_other_pa = csv.writer(f_other, delimiter='\t')
    count = 0
    res = ["cf", "cod_amm", "regione", "provincia", "comune", "indirizzo", "tipologia_istat", "tipologia_amm"]
    writer_indice_pa.writerow(res)
    writer_other_pa.writerow(["cf"])
    for pa in data["result"]:
        esito = pa["esitoUltimoTentativoAccessoUrl"]
        if esito == "successo":
            cf = pa["codiceFiscale"]
            if cf in cf_set_amm:
                try:
                    cod_amm = df_amm.loc[df_amm['Cf'] == cf].iloc[0]['cod_amm']
                    take0 = df_amm.loc[df_amm['cod_amm'] == cod_amm].iloc[0]
                    regione = take0['Regione'].replace("\t", "")
                    provincia = str(take0['Provincia']).replace("\t", "")
                    comune = take0['Comune'].replace("\t", "")
                    indirizzo = take0['Indirizzo'].replace("\t", "")
                    tipologia_istat = take0['tipologia_istat'].replace("\t", "")
                    tipologia_amm = take0['tipologia_amm'].replace("\t", "")
                    res = [cf, cod_amm, regione, provincia, comune, indirizzo, tipologia_istat, tipologia_amm]
                    writer_indice_pa.writerow(res)
                except:  # catch *all* exceptions
                    print("CF in df_amm", cf)
            elif cf in cf_set_serv_fatt:
                try:
                    cod_amm = df_serv_fatt.loc[df_serv_fatt['Cf'] == cf].iloc[0]['cod_amm']
                    take0 = df_amm.loc[df_amm['cod_amm'] == cod_amm].iloc[0]
                    regione = take0['Regione'].replace("\t", "")
                    provincia = str(take0['Provincia']).replace("\t", "")
                    comune = take0['Comune'].replace("\t", "")
                    indirizzo = take0['Indirizzo'].replace("\t", "")
                    tipologia_istat = take0['tipologia_istat'].replace("\t", "")
                    tipologia_amm = take0["tipologia_amm"].replace("\t", "")
                    res = [cf, cod_amm, regione, provincia, comune, indirizzo, tipologia_istat, tipologia_amm]
                    writer_indice_pa.writerow(res)
                except:  # catch *all* exceptions
                    # e = sys.exc_info()[0]
                    print("CF in df_serv_fatt", cf)
            else:
                # print(cf, " is not present")
                count = count + 1
                writer_other_pa.writerow([cf])
                # if count % 100 == 0:
                #     print(cf)
    print("Totale cf non presenti in IndicePA: ", count)
    f_indice_pa.flush()
    f_other.flush()
I expect to obtain the output "Totale cf non presenti in IndicePA: 1148", because this code has already run successfully before, but instead I hit this error. How can I overcome it? Is there any change I can make to the original code?
Thanks for your help in advance.
It is possible to find a wider explanation of the code at the following link: link resource
According to the documentation for the json module you are using (the conversion table at https://docs.python.org/2/library/json.html, section 18.2.2), a JSON array is decoded to a Python list. It seems your JSON document is being parsed as a list, which means its top level must be an array, like so:
[
    {
        "result": { someObject }
    }
]
A simple fix might be editing the JSON so it is parsed as a dict. For example:
{
    "result": {
        someObjectStuff
    }
}
Another fix could be accessing the first element of the list. If your JSON looks like the following:
[
    {
        "result": { someObject }
    }
]
then changing the code to
    for pa in data[0]["result"]:
might also fix the problem.
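To illustrate with the json module (a hypothetical document mirroring the structure above):

```python
import json

# hypothetical top-level array, as described above
doc = '[{"result": [{"codiceFiscale": "123"}]}]'
data = json.loads(doc)

# data is a list, so data["result"] raises TypeError;
# index the first element before keying into it
for pa in data[0]["result"]:
    print(pa["codiceFiscale"])    # 123
```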

How to collect a record by a parameter python

Python source :
@app.route('/pret/<int:idPret>', methods=['GET'])
def descrire_un_pret(idPret):
    j = 0
    for j in prets:
        if prets[j]['id'] == idPret:
            reponse = make_response(json.dumps(prets[j], 200))
            reponse.headers = {"Content-Type": "application/json"}
            return reponse
I would like to retrieve a record in the prets list by the idPret parameter. Nevertheless I get an error:
TypeError: list indices must be integers, not dict
j is not a number. j is one element in the prets list. Python loops are foreach loops. if j['id'] == idPret: would work:
for j in prets:
    if j['id'] == idPret:
        reponse = make_response(json.dumps(j), 200)
        reponse.headers = {"Content-Type": "application/json"}
        return reponse
I'd use a different name here, and use the flask.json.jsonify() function to create the response:
from flask import jsonify

for pret in prets:
    if pret['id'] == idPret:
        return jsonify(pret)
jsonify() takes care of converting to JSON and creating the response object with the right headers for you.
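As an aside, when only the first matching record is needed, next() with a generator expression avoids the explicit loop. A small sketch with hypothetical data:

```python
# hypothetical data standing in for the prets list
prets = [{"id": 1, "montant": 100}, {"id": 2, "montant": 250}]
idPret = 2

# returns the first matching record, or None when no id matches
pret = next((p for p in prets if p["id"] == idPret), None)
print(pret)    # {'id': 2, 'montant': 250}
```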
You are mistaking Python's for behavior for JavaScript's. When you write for j in prets: in Python, j is already an element of prets, not an index number.

How can I append an array of list in dynamodb using python

I want to append to a list attribute in DynamoDB using Python; I am using boto for this. I am able to build the list in Python and save it in a variable. Now I want to append that value to an item attribute (a list) which is currently empty, and I don't know how to do it. It would be great if anyone could help me with that.
result = self.User_Connection.scan(main_user__eq=self.username)
for connection_details in result:
    main_user = connection_details['main_user']
    connection_list = connection_details['connections']
    connection_list.append(frnd_user_id)
I want to add this connection_list.append(frnd_user_id) as an item.
I've tried this:
if user_connections['connections'] == None:
    self.User_Connection.put_item(data={'connections': set(frnd_user_id)})
else:
    self.User_Connection.update_item(Key={user_connections['id']},
                                     UpdateExpression="SET connections = list_append(connections, :i)",
                                     ExpressionAttributeValues={':i': [frnd_user_id]})
but it is not working, and gives this error:
'Table' has no attribute 'update_item'
I've also tried this:
result = self.User_Connection.scan(main_user__eq=self.username)
for connection_details in result:
    main_user = connection_details['main_user']
    main_userid = connection_details['id']
    connection_list = connection_details['connections']
    print connection_list
    if connection_details['connections'] == None:
        self.User_Connection.put_item(data={'id': main_userid,
                                            'main_user': self.username,
                                            'connections[]': frnd_user_id})
    else:
        item = self.User_Connection.get_item(id=connection_details['id'])
        for i in frnd_user_id:
            item[connection_list].append(i)
        item.save(overwrite=True)
This shows an error too:
unhashable type: 'list'
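Incidentally, the "unhashable type: 'list'" message points at item[connection_list]: connection_list is the list itself, so it is being used as a dictionary key, when the attribute *name* was presumably intended. A minimal sketch with a plain dict standing in for the boto item (hypothetical values) reproduces this:

```python
# stand-ins for the boto item and data (hypothetical values)
item = {"connections": ["user-1"]}
connection_list = item["connections"]      # this is a list, not a key
frnd_user_id = ["user-2", "user-3"]

try:
    item[connection_list]                  # a list cannot be a dict key
except TypeError as exc:
    print(exc)                             # unhashable type: 'list'

# the intended lookup keys the dict by the attribute name instead
for i in frnd_user_id:
    item["connections"].append(i)
print(item["connections"])                 # ['user-1', 'user-2', 'user-3']
```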

Dealing with HTTP IncompleteRead Error in Python

I am trying to understand how to handle an http.client.IncompleteRead error in the code below. I handle the error using the idea in this post. I thought the server might just be limiting how often I can access the data, but strangely I sometimes get an HTTP status code of 200 and the code below still ends up returning a None type. Is this because zipdata = e.partial is not returning anything when the error comes up?
def update(self, ABBRV):
    if self.ABBRV == 'cd':
        try:
            my_url = 'http://www.bankofcanada.ca/stats/results/csv'
            data = urllib.parse.urlencode({"lookupPage": "lookup_yield_curve.php",
                                           "startRange": "1986-01-01",
                                           "searchRange": "all"})
            binary_data = data.encode('utf-8')
            req = urllib.request.Request(my_url, binary_data)
            result = urllib.request.urlopen(req)
            print('status:: {},{}'.format(result.status, my_url))
            zipdata = result.read()
            zipfile = ZipFile(BytesIO(zipdata))
            df = pd.read_csv(zipfile.open(zipfile.namelist()[0]))
            df = pd.melt(df, id_vars=['Date'])
            return df
        # In case of http.client.IncompleteRead error
        except http.client.IncompleteRead as e:
            zipdata = e.partial
Thank You
Hmm... I've run your code 20 times without getting an incomplete read error. Why don't you just retry in the case of an incomplete read? On the other hand, if your IP is being blocked, it would make sense that they're not returning you anything. Your code could look something like this:
maxretries = 3
attempt = 0
while attempt < maxretries:
    try:
        pass  # http access code goes in here
    except http.client.IncompleteRead:
        attempt += 1
    else:
        break
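To make the retry pattern concrete, here is a self-contained sketch with a stand-in fetch() (hypothetical; it simulates two incomplete reads before succeeding), so the loop can be run without network access:

```python
import http.client

attempts = []

def fetch():
    # hypothetical stand-in for the urlopen/read code: fails twice, then succeeds
    attempts.append(1)
    if len(attempts) < 3:
        raise http.client.IncompleteRead(b"partial data")
    return b"complete data"

maxretries = 3
attempt = 0
result = None
while attempt < maxretries:
    try:
        result = fetch()
    except http.client.IncompleteRead as e:
        attempt += 1
        partial = e.partial   # bytes received before the connection dropped
    else:
        break

print(result)   # b'complete data' after two simulated failures
```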

Categories

Resources