Why am I getting this error when trying to print the keys from a dictionary?
import urllib
import urllib2
import json
ret = urllib2.urlopen(urllib2.Request('https://poloniex.com/public?command=returnTicker'))
a = json.loads(ret.read())
print a.keys()
ret = urllib2.urlopen(urllib2.Request('https://poloniex.com/public?command=return24Volume'))
b = json.loads(ret.read())
print b.keys()
The error is produced by the website; your code as such is OK. It produces a JSON object which apparently has the structure {"error": "..."}. Try printing it out to figure out what's wrong; you probably need some authentication tokens or similar.
There seems to be an API wrapper available; you should consider using it, or at least understanding it: http://pastebin.com/8fBVpjaj
It's featured directly on the Poloniex website, and it clearly shows the need for an API secret and key.
The error is originating from the site. The dict is loaded fine, and it expresses the error as its only key.
Try this:
ret = urllib2.urlopen(urllib2.Request('https://poloniex.com/public?command=return24hVolume'))
b = json.loads(ret.read())
print b.keys()
Notice the 'h' in return24hVolume.
The second URL returns:
{"error":"Invalid command."}
So as Reut Sharabani points out, you have to use 'h' in the URL:
https://poloniex.com/public?command=return24hVolume
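If you want the failure to be loud instead of silently getting back an error dict, here is a minimal defensive sketch (Python 2, matching the question's urllib2 usage; the fetch helper is just for illustration):
import json
import urllib2

def fetch(command):
    # The public endpoint answers a bad command with a normal HTTP response
    # whose JSON body is {"error": ...}, so check for that key explicitly.
    ret = urllib2.urlopen(urllib2.Request('https://poloniex.com/public?command=' + command))
    data = json.loads(ret.read())
    if 'error' in data:
        raise ValueError('API error: %s' % data['error'])
    return data

print fetch('returnTicker').keys()
print fetch('return24hVolume').keys()  # note the 'h'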
I am using the API from https://dev.whatismymmr.com, and I want to request just the closestRank, but I get KeyError: 'ranked.closestRank'. I can get the entire ['ranked'] object (which contains closestRank), but then I end up with a lot of information I don't need.
How can I end up with just the closest rank?
My code
import requests

LeagueName = input("Summoner name")
base = "https://eune.whatismymmr.com/api/v1/summoner?name="
Thething = base + LeagueName
print(Thething)

response = requests.get(Thething)
print(response.status_code)

MMR = response.json()
print(MMR['ranked.closestRank'])
The API command:
<queue>.closestRank (the queue is the game mode; it can be normal or ranked)
You can use the summoner name babada27 for testing.
Hope this is what you are looking for. Change the last line to:
print(MMR["ranked"]["closestRank"])
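The dotted <queue>.closestRank in the docs describes nesting, not a literal key: 'ranked' maps to another dict that contains 'closestRank'. If the summoner might have no ranked data, a hedged variant using .get() could look like this (the fallback message is just illustrative):
closest = MMR.get("ranked", {}).get("closestRank")
if closest is None:
    print("No ranked data for this summoner")
else:
    print(closest)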
My Python script searches for specific strings and lists the results, but I also want to print the two-letter country code for each result. When I try, it raises KeyError: 'country_code', even though the API says ocation.country_code. How can I achieve that?
#!/usr/bin/python
import shodan
SHODAN_API_KEY="xxxxxxxxxxxxxxxxxxxx"
api = shodan.Shodan(SHODAN_API_KEY)
try:
    # Search Shodan
    results = api.search('ProFTPd-1.3.3c')

    # Show the results
    for result in results['matches']:
        print '%s' % result['ip_str']
        print '%s' % result['country_code']
except shodan.APIError, e:
    print 'Error: %s' % e
I think this is the method you are using in Python:
https://github.com/achillean/shodan-python/blob/master/shodan/client.py#L324
and it triggers:
return self._request('/shodan/host/search', args)
Shodan API documentation:
https://developer.shodan.io/api
Check out the /shodan/host/search API.
I just saw that the answer is in your question, but you ate one letter from location (ocation).
Try this:
print '%s' % result['location']['country_code']
So the field you are looking for is there, but it lives in another, nested dictionary.
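If some matches might lack the nested field entirely, a defensive variant of the loop body using .get() could look like this (the 'unknown' default is just a placeholder):
for result in results['matches']:
    print '%s' % result['ip_str']
    # Fall back to an empty dict / placeholder if a match lacks location data
    location = result.get('location', {})
    print '%s' % location.get('country_code', 'unknown')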
I would recommend reading the API documentation carefully next time, and as Nofal Daud said, Python errors are self-explanatory: a KeyError on a dict means that the field is not there. Next time listen to Python; it will reveal the truth.
I have the following Python code, which works OK to query Reddit's API and look up the front page of different subreddits and their rising submissions.
from pprint import pprint
import requests
import json
import datetime
import csv
import time
subredditsToScan = ["Arts", "AskReddit", "askscience", "aww", "books", "creepy", "dataisbeautiful", "DIY", "Documentaries", "EarthPorn", "explainlikeimfive", "food", "funny", "gaming", "gifs", "history", "jokes", "LifeProTips", "movies", "music", "pics", "science", "ShowerThoughts", "space", "sports", "tifu", "todayilearned", "videos", "worldnews"]
ofilePosts = open('posts.csv', 'wb')
writerPosts = csv.writer(ofilePosts, delimiter=',')
ofileUrls = open('urls.csv', 'wb')
writerUrls = csv.writer(ofileUrls, delimiter=',')
for subreddit in subredditsToScan:
    front = requests.get(r'http://www.reddit.com/r/' + subreddit + '/.json')
    rising = requests.get(r'http://www.reddit.com/r/' + subreddit + '/rising/.json')

    risingData = rising.json()
    frontData = front.json()

    print(len(risingData['data']['children']))
    print(len(frontData['data']['children']))

    for i in range(0, len(risingData['data']['children'])):
        author = risingData['data']['children'][i]['data']['author']
        score = risingData['data']['children'][i]['data']['score']
        subreddit = risingData['data']['children'][i]['data']['subreddit']
        gilded = risingData['data']['children'][i]['data']['gilded']
        numOfComments = risingData['data']['children'][i]['data']['num_comments']
        linkUrl = risingData['data']['children'][i]['data']['permalink']
        timeCreated = risingData['data']['children'][i]['data']['created_utc']
        writerPosts.writerow([author, score, subreddit, gilded, numOfComments, linkUrl, timeCreated])
        writerUrls.writerow([linkUrl])

    for j in range(0, len(frontData['data']['children'])):
        author = frontData['data']['children'][j]['data']['author'].encode('utf-8').strip()
        score = frontData['data']['children'][j]['data']['score']
        subreddit = frontData['data']['children'][j]['data']['subreddit'].encode('utf-8').strip()
        gilded = frontData['data']['children'][j]['data']['gilded']
        numOfComments = frontData['data']['children'][j]['data']['num_comments']
        linkUrl = frontData['data']['children'][j]['data']['permalink'].encode('utf-8').strip()
        timeCreated = frontData['data']['children'][j]['data']['created_utc']
        writerPosts.writerow([author, score, subreddit, gilded, numOfComments, linkUrl, timeCreated])
        writerUrls.writerow([linkUrl])
It works well and scrapes the data accurately, but it constantly gets interrupted, seemingly at random, and crashes at runtime, saying:
Traceback (most recent call last):
  File "dataGather1.py", line 27, in <module>
    for i in range(0, len(risingData['data']['children'])):
KeyError: 'data'
I have no idea why this error occurs on and off rather than consistently. I thought maybe I was calling the API too often and it was cutting off my access, so I threw a sleep into my code, but that did not help. Any ideas?
When there is no data in the response from the API, there is no 'data' key in the dictionary, so you get a KeyError for some subreddits. You need to use a try/except.
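For example, a minimal sketch of that approach inside the question's loop (assuming you simply want to skip subreddits whose response carries no 'data'):
for subreddit in subredditsToScan:
    rising = requests.get(r'http://www.reddit.com/r/' + subreddit + '/rising/.json')
    risingData = rising.json()
    try:
        children = risingData['data']['children']
    except KeyError:
        # error payload (e.g. rate limiting) instead of a listing; skip it
        continue
    for child in children:
        print(child['data']['author'])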
The JSON you are parsing doesn't contain the 'data' element, thus you get an error. I think your hunch is correct, though: it is probably rate limiting, or you're asking for hidden/deleted entries.
Reddit is very strict about accessing their API without playing nice. Meaning you should register your app and send a meaningful User-Agent with your requests, and you should probably use the Python library for this kind of thing: https://praw.readthedocs.io/en/latest/
Without registering, in my experience the direct REST Reddit API is even stricter than the one-request-per-two-seconds rule they have (had?).
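As a sketch, sending a descriptive User-Agent with each request inside the question's loop would look like this (the agent string is a placeholder; identify your own app and Reddit username in yours):
headers = {'User-Agent': 'my-scraper/0.1 (by /u/your_username)'}
front = requests.get(r'http://www.reddit.com/r/' + subreddit + '/.json', headers=headers)
rising = requests.get(r'http://www.reddit.com/r/' + subreddit + '/rising/.json', headers=headers)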
Python raises a KeyError whenever a dict object is indexed (using the form a = adict[key]) and the key is not in the dictionary.
It seems that when you are getting this error, your data value is empty.
You might just check the length before you execute the for loop; if it's empty, the loop will just not run. Some error checking here might help:
size = len(risingData.get('data', {}).get('children', []))
if size:
    for i in range(0, size):
        …
This is my first time asking something here. I've been trying to access the YouTube API to get something for an experiment I'm doing. Everything's working so far; I just wanted to ask about this very inconsistent error I'm getting.
-----------
1
Title: All Movie Trailers of New York Comic-Con (2016) Power Rangers, John Wick 2...
Uploaded by: KinoCheck International
Uploaded on: 2016-10-12T14:43:42.000Z
Video ID: pWOH-OZQUj0
2
Title: Movieclips Trailers
Uploaded by: Movieclips Trailers
Uploaded on: 2011-04-01T18:43:14.000Z
Video ID: Traceback (most recent call last):
File "scrapeyoutube.py", line 24, in <module>
print "Video ID:\t", search_result['id']['videoId']
KeyError: 'videoId'
I tried getting the video ID ('videoId' as per the documentation), but for some reason the code works for the first result and then totally flops for the second one. It's weird because it only happens for this particular element; everything else ('description', 'publishedAt', etc.) works. Here's my code:
from apiclient.discovery import build
import json
import pprint
import sys
APINAME = 'youtube'
APIVERSION = 'v3'
APIKEY = 'secret teehee'
service = build(APINAME, APIVERSION, developerKey=APIKEY)

#volumes source ('public'), search query ('androide')
searchrequest = service.search().list(q='movie trailers', part='id, snippet', maxResults=25).execute()

searchcount = 0
print "-----------"
for search_result in searchrequest.get("items", []):
    searchcount += 1
    print searchcount
    print "Title:\t", search_result['snippet']['title']
    # print "Description:\t", search_result['snippet']['description']
    print "Uploaded by:\t", search_result['snippet']['channelTitle']
    print "Uploaded on:\t", search_result['snippet']['publishedAt']
    print "Video ID:\t", search_result['id']['videoId']
Hope you guys can help me. Thanks!
Use the 'get' method for the result:
result['id'].get('videoId')
Some elements don't have this key. If you use square brackets, Python throws a KeyError exception, but if you use the 'get' method, Python returns None for elements which don't have a videoId key.
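Applied to the question's loop, a minimal sketch that prints the ID only when it is present (the fallback text is just illustrative):
video_id = search_result['id'].get('videoId')
if video_id is not None:
    print "Video ID:\t", video_id
else:
    print "Video ID:\t", "(not a video)"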
The search() method returns channels and playlists in the results together with videos. That might be the cause of your problem.
I use their interactive playgrounds to learn the structure of the returned JSON, the functions, etc. For your question, I suggest visiting https://developers.google.com/youtube/v3/docs/search/list .
Make sure the kind of an item is "youtube#video" before you access that item's videoId.
Sample code:
...
for index in response["items"]:  # response is a JSON file I have got from the API
    tmp = {}  # temporary dict to insert into my custom JSON
    if index["id"]["kind"] == "youtube#video":
        tmp["videoID"] = index["id"]["videoId"]
...
This is a part of code from my personal project I am currently working on.
That's because, for the key "id", some results return:
{u'kind': u'youtube#playlist', u'playlistId': u'PLd0_QArxznVHnlvJp0ki5bpmBj4f64J7P'}
As you can see, there is no "videoId" key.
I'm using Python 2.7.11 on Windows to get JSON data from an API (data on trees in Warsaw, Poland, but never mind that). I want to generate an output CSV file with all the data provided by the API, for further analysis. I started with a script I used for another project (also discussed here on Stack Overflow and corrected for me by @Martin Taylor). That script didn't work, so I tried to modify it using my very basic understanding, googling around and applying the pdb debugger. At the moment, the result looks like this:
import pdb
import json
import urllib2
import csv
pdb.set_trace()
url = "https://api.um.warszawa.pl/api/action/datastore_search/?resource_id=ed6217dd-c8d0-4f7b-8bed-3b7eb81a95ba"
myfile = 'C:/dane/drzewa.csv'
csv_myfile = csv.writer(open(myfile, 'wb'))
cols = ['numer_adres', 'stan_zdrowia', 'y_wgs84', 'dzielnica', 'adres', 'lokalizacja', 'wiek_w_dni', 'srednica_k', 'pnie_obwod', 'miasto', 'jednostka', 'x_pl2000', 'wysokosc', 'y_pl2000', 'numer_inw', 'x_wgs84', '_id', 'gatunek_1', 'gatunek', 'data_wyk_pom']
csv_myfile.writerow(cols)
def api_iterate(myfile):
    global url
    while True:
        print url
        json_page = urllib2.urlopen(url)
        data = json.load(json_page)
        json_page.close()
        for data_object in data['result']['records']:
            csv_myfile.writerow([data_object[col] for col in cols])
        try:
            url = data['_links']['next']
        except KeyError as e:
            break

with open(myfile, 'wb'):
    api_iterate(myfile)
I'm a very fresh Python user, so I get confused all the time. I've now got to the point where, while reading the objects in the JSON dictionary, I get a KeyError associated with the 'x_wgs84' element. I suppose it has something to do with the fact that in the source URL this element is preceded by a U+FEFF Unicode character. I tried to get around this but got stuck and would appreciate assistance.
I suspect the code may be broken in several other ways as well; as I mentioned, I'm a very unskilled programmer (yet).
You need to use the key with the Unicode character in it. To see how, one easy way is to print the keys:
>>> import requests
>>> res = requests.get('https://api.um.warszawa.pl/api/action/datastore_search/?resource_id=ed6217dd-c8d0-4f7b-8bed-3b7eb81a95ba')
>>> data = res.json()
>>> records = data['result']['records']
>>> records[0]
{u'numer_adres': u'', u'stan_zdrowia': u'dobry', u'y_wgs84': u'52.21865', u'y_pl2000': u'5787241.04475524', u'adres': u'ul. ALPEJSKA', u'x_pl2000': u'7511793.96937063', u'lokalizacja': u'Ulica ALPEJSKA', u'wiek_w_dni': u'60', u'miasto': u'Warszawa', u'jednostka': u'Dzielnica Wawer', u'pnie_obwod': u'73', u'wysokosc': u'14', u'data_wyk_pom': u'20130709', u'dzielnica': u'Wawer', u'\ufeffx_wgs84': u'21.172584', u'numer_inw': u'D386200', u'_id': 125435, u'gatunek_1': u'Quercus robur', u'gatunek': u'd\u0105b szypu\u0142kowy', u'srednica_k': u'7'}
>>> records[0].keys()
[u'numer_adres', u'stan_zdrowia', u'y_wgs84', u'y_pl2000', u'adres', u'x_pl2000', u'lokalizacja', u'wiek_w_dni', u'miasto', u'jednostka', u'pnie_obwod', u'wysokosc', u'data_wyk_pom', u'dzielnica', u'\ufeffx_wgs84', u'numer_inw', u'_id', u'gatunek_1', u'gatunek', u'srednica_k']
>>> records[0][u'\ufeffx_wgs84']
u'21.172584'
As you can see, to get your key you need to write it as u'\ufeffx_wgs84', with the Unicode character that is causing the trouble.
Note: I don't know if you are using Python 2 or 3, but in Python 2 you might need to put a u before your string literal to declare it as a Unicode string.
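If you would rather keep the plain 'x_wgs84' spelling in your cols list, another option is to strip the BOM from every key as each record is read. A minimal sketch for Python 2.7 (the clean_keys helper is just for illustration):
def clean_keys(record):
    # Strip a leading U+FEFF (byte order mark) from every key.
    return {key.lstrip(u'\ufeff'): value for key, value in record.items()}

for data_object in data['result']['records']:
    row = clean_keys(data_object)
    csv_myfile.writerow([row[col] for col in cols])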