Receiving JSON dicts with Python [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
I'm pretty new to Python and I want to fetch a JSON dict and build a Python dict from it.
An example of those JSON dicts is this:
http://ajax.googleapis.com/ajax/services/search/web?v=2.0&q=python?cod=&start=2
So I need to fetch it and print all the "url" keys to the screen.
I have seen some things like this, but they did not work well; my code now is:
while page < 50:
    page_src = urllib.urlopen(search_url + '?cod=&start=' + str(page)).read()
    json_src = json.loads(page_src)
    for item in json_src['responseData']:
        sub_item = json_src['responseData']['results']
        for link in sub_item:
            for key in link:
                if key == u'"url"':
                    print link[key]
    page = page + 1
But when it executes I get:
TypeError: 'NoneType' object is not iterable
I don't know where I'm wrong; please help me.
Thank you all, guys.
TheV0iD

Check to make sure your URL is correct; the code worked for me. My only revision would be:
for item in json_src['responseData']['results']:
    print link[key]
Also, make sure your starting and ending values of page produce real URLs; you are getting the NoneType because no 'responseData' was found.
Also, what is your value of search_url? Are you including the ?v=2.0&q=python in it? If you messed up your URL at all, your NoneType is coming from trying to iterate through json_src['responseData']['results'] because there is no such thing.
EDIT:
The issue is that you reassign search_url in the loop. On the second iteration the URL becomes http://ajax.googleapis.com/ajax/services/search/web?v=2.0&q=python?cod=&start=0?cod=&start=1, with both suffixes appended. Simply change the search_url = to current_url =
Final code:
print "\n\n RESULTS:"
while page < 2:
    current_url = search_url + '?cod=&start=' + str(page)
    json_src = json.load(urllib.urlopen(current_url))
    print json_src
    results = json_src['responseData']['results']
    for result in results:
        print "\t" + result['url']
    page = page + 1
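A side note, not from the original answer: the AJAX Search API shown here has since been retired, but the NoneType problem is general. A small Python 3 sketch with a made-up payload, showing how dict.get avoids iterating over None when 'responseData' is missing:

```python
import json

# Made-up payload standing in for the (now retired) AJAX Search API response.
page_src = '{"responseData": {"results": [{"url": "http://example.com/a"}, {"url": "http://example.com/b"}]}}'

json_src = json.loads(page_src)
# .get() returns a default instead of raising, so a missing 'responseData'
# yields an empty list rather than "'NoneType' object is not iterable".
results = (json_src.get('responseData') or {}).get('results') or []
for result in results:
    print(result['url'])
```

The same `or {}` / `or []` guards also cover the case where the key exists but its value is null in the JSON.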

Related

KeyError When JSON Parsing [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 1 year ago.
I tried to access the 'gold_spent' key in a dictionary made from a JSON file.
Here is my code:
import json
import requests
response = requests.get("https://sky.shiiyu.moe/api/v2/profile/tProfile")
json_data = json.loads(response.text)
print(json_data['gold_spent'])
When I run this I get "KeyError: 'gold_spent'".
I don't know what I am doing wrong; any help would be appreciated.
The data you are looking for is nested. See below.
print(json_data['profiles']['590cedda63e145ea98d44015649aba30']['data']['misc']['auctions_buy']['gold_spent'])
output
46294255
You got the exception because gold_spent isn't a top-level key; you need to investigate the structure to find it. Accessing a non-existing key in a dictionary always ends with a KeyError exception.
import json
import requests
response = requests.get("https://sky.shiiyu.moe/api/v2/profile/tProfile")
json_data = json.loads(response.text)
print(json_data.keys())
# dict_keys(['profiles'])
print(json_data['profiles'].keys())
# dict_keys(['590cedda63e145ea98d44015649aba30'])
print(json_data['profiles']['590cedda63e145ea98d44015649aba30'].keys())
# dict_keys(['profile_id', 'cute_name', 'current', 'last_save', 'raw', 'items', 'data'])
print(json_data['profiles']['590cedda63e145ea98d44015649aba30']['data']['misc']['auctions_buy']['gold_spent'])
# 46294255
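If the nesting is deep or unfamiliar, a small recursive helper can locate the key instead of probing .keys() level by level. A sketch (the find_key helper is mine, not part of any standard API):

```python
def find_key(obj, target):
    """Depth-first search of nested dicts/lists; return the first value stored under `target`."""
    if isinstance(obj, dict):
        if target in obj:
            return obj[target]
        for value in obj.values():
            found = find_key(value, target)
            if found is not None:
                return found
    elif isinstance(obj, list):
        for item in obj:
            found = find_key(item, target)
            if found is not None:
                return found
    return None

# Trimmed-down stand-in for the profile JSON above.
data = {"profiles": {"abc": {"data": {"misc": {"auctions_buy": {"gold_spent": 46294255}}}}}}
print(find_key(data, "gold_spent"))  # -> 46294255
```

It returns the first match it encounters, which is enough for exploring a payload; for production code you would still address the value by its full path.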

Looping a function with its input being a url [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 3 years ago.
So I am trying to get into Python, and am using other examples that I find online to understand certain functions better.
I found a post online that shared a way to check prices on an item through CamelCamelCamel.
They had it set to request from a specific URL, so I decided to change it to user input instead.
How can I simply loop this function?
It runs fine once, as far as I know, but after the initial run I get 'Process finished with exit code 0', which isn't necessarily a problem.
For the script to perform how I would like it to, it would be nice if there were a break on, say, 'quit', but after it processes the URL that was given, I would like it to ask for a new URL.
I'm sure there's a way to check for a specific URL, i.e. this should only work for CamelCamelCamel, so as to limit it to that domain.
I'm more familiar with Batch, and have kind of gotten away with using batch to run my Python files to work around what I don't understand.
Personally, if I could, I would just mark the function as 'top:' and put goto top at the bottom of the script.
from bs4 import BeautifulSoup
import requests
print("Enter CamelCamelCamel Link: ")
plink = input("")
headers = {'User-Agent': 'Mozilla/5.0'}
r = requests.get(plink,headers=headers)
data = r.text
soup = BeautifulSoup(data,'html.parser')
table_data = soup.select('table.product_pane tbody tr td')
hprice = table_data[1].string
hdate = table_data[2].string
lprice = table_data[7].string
ldate = table_data[8].string
print ('High price-',hprice)
print ("[H-Date]", hdate)
print ('---------------')
print ('Low price-',lprice)
print ("[L-Date]", ldate)
Also, how could I find the difference between the date I obtain from either hdate or ldate and today/now? The dates I parsed are strings, and I got: TypeError: unsupported operand type(s) for +=: 'int' and 'str'.
This is really just for learning, any example works, It doesnt have to be that site in specific.
In Python, you have access to several different types of looping control structures, including:
while statements
while condition:  # will execute until condition is no longer True (or until break is called)
    <statements to execute while looping>
for statements
for i in range(10):  # will execute 10 times (or until break is called)
    <statements to execute while looping>
Each one has its strengths and weaknesses, and the documentation at Python.org is very thorough but easy to assimilate.
https://docs.python.org/3/tutorial/controlflow.html
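As a sketch of how a while loop could drive the script above: the loop below exits on 'quit', restricts input to camelcamelcamel.com, and answers the date-difference question with datetime.strptime. The helper names and the 'Nov 14, 2019' date format are my assumptions, not taken from the site:

```python
from datetime import datetime
from urllib.parse import urlparse

def is_camel_url(url):
    """Accept only links on camelcamelcamel.com (or its subdomains)."""
    host = urlparse(url).netloc.lower()
    return host == "camelcamelcamel.com" or host.endswith(".camelcamelcamel.com")

def days_since(date_string, fmt="%b %d, %Y"):
    """Parse a date string like 'Nov 14, 2019' (assumed format) and return days elapsed."""
    then = datetime.strptime(date_string, fmt)
    return (datetime.now() - then).days

def prompt_loop(ask=input):
    """Keep asking for links until the user types 'quit'."""
    while True:
        plink = ask("Enter CamelCamelCamel link (or 'quit'): ").strip()
        if plink.lower() == "quit":
            return
        if not is_camel_url(plink):
            print("Not a CamelCamelCamel link, try again.")
            continue
        # <- the BeautifulSoup scraping code from the question goes here
        print("Would scrape:", plink)

# prompt_loop()  # uncomment to run interactively
```

Passing the input function in as a parameter (ask=input) is just a convenience that makes the loop easy to exercise without a terminal.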

How to access webpage with variables in python [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 4 years ago.
I have a project and want to access a url by python. If i have variable1=1 and variable2=2, I want an output to be like this:
www.example.com/data.php?variable1=1&variable2=2
How do I achieve this? Thanks!
Check this out:
try:
    from urllib2 import urlopen  # python 2
except ImportError:
    from urllib.request import urlopen  # python 3

vars = ['variable1=1', 'variable2=2']
for i in vars:
    url = 'http://www.example.com/data.php?' + i
    response = urlopen(url)
    html = response.read()
    print(html)
The first four lines import some code we can use to make an HTTP request.
Then we create a list of variables named vars.
Then we pass each of those variables into a loop; that loop will run once for each item in vars.
Next we build the url given the current value in vars.
Finally we get the html at that address and print it to the terminal.
You can use a format operation on strings in Python.
For example:
variable1 = 1
variable2 = 2
url = 'www.example.com/data.php?variable1={}&variable2={}'.format(variable1, variable2)
Or, if you want to use the URL with requests, you can make a dict and pass it in the request like this:
import requests

url = 'http://www.example.com/data.php'
data = {'variable1': '1', 'variable2': '2'}
r = requests.get(url, params=data)
and it will make the request to this URL:
http://www.example.com/data.php?variable1=1&variable2=2
Try string formatting...
url = 'www.example.com/data.php?variable1={}&variable2={}'.format(variable1, variable2)
This means the two {} will be replaced with whatever you pass to .format(), which in this case is just the variables' values.
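The answers above build the query string by hand; the standard library's urllib.parse.urlencode can also assemble and escape it for you (Python 3 shown):

```python
from urllib.parse import urlencode

params = {'variable1': 1, 'variable2': 2}
# urlencode turns the dict into 'variable1=1&variable2=2', percent-escaping
# any characters that are not safe in a URL.
url = 'http://www.example.com/data.php?' + urlencode(params)
print(url)  # http://www.example.com/data.php?variable1=1&variable2=2
```

Unlike manual string formatting, urlencode handles values containing spaces, '&', or '=' correctly, so it is the safer habit once the values come from user input.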

Parsing JSON using nested for loop in python [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 5 years ago.
I am parsing a huge JSON file in Python, step by step, and I am stuck at a point where I can't figure out why the code is not running properly. My code is below.
I want to get the value of all the WHO_REGION entries for all the attr items in an array; I am not an expert in Python programming. Here is the JSON: "http://apps.who.int/gho/athena/data/COUNTRY.json"
import json
from pprint import pprint

mylabel = []
mylabel2 = []
with open('C:\Users\Syed Saad Ahmed\Desktop\FL\COUNTRY.json') as data_file:
    data = json.load(data_file)
for i in range(0, 246):
    mylabel.append(data["dimension"][0]["code"][i]["label"])
print mylabel
for j in range(0, 246):
    for k in range(0, 21):
        if data["dimension"][0]["code"][j]["attr"][k]["category"] == 'WHO_REGION':
            mylabel2.append(data["dimension"][0]["code"][j]["attr"][k]["value"])
print mylabel2
You can browse your JSON object using nested loops:
import json

obj = json.loads(data)
dimension_list = obj["dimension"]
for dimension in dimension_list:
    code_list = dimension["code"]
    for code in code_list:
        attr_list = code["attr"]
        for attr in attr_list:
            if attr["category"] == "WHO_REGION":
                print(attr["value"])
It is complex because each entry contains a list of something…
Of course, it's up to you to add some filtering.
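The same traversal can also be written as a generator, which keeps the nesting in one place and lets the caller decide what to do with each match (a variation on the loop above, with .get() guarding against missing keys):

```python
def who_regions(obj):
    """Yield the value of every attr whose category is WHO_REGION."""
    for dimension in obj.get("dimension", []):
        for code in dimension.get("code", []):
            for attr in code.get("attr", []):
                if attr.get("category") == "WHO_REGION":
                    yield attr["value"]

# Tiny stand-in for the WHO COUNTRY.json structure.
sample = {"dimension": [{"code": [{"attr": [
    {"category": "WHO_REGION", "value": "Africa"},
    {"category": "ISO", "value": "DZA"},
]}]}]}
print(list(who_regions(sample)))  # ['Africa']
```

Because it is lazy, the generator works just as well on the full 246-country file: list(who_regions(data)) collects everything, or you can stop early with next().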

Is there a Google Insights API? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
I've been looking for an API to automatically retrieve Google Insights information for part of another algorithm, but have been unable to find anything. The first result on Google delivers a site with a python plugin which is now out of date.
Does such an API exist, or has anyone written a plugin, perhaps for python?
As far as I can tell, there is no API available as of yet, and neither is there a working implementation of a method for extracting data from Google Insights. However, I have found a solution to my (slightly more specific) problem, which could really just be solved by knowing how many times certain terms are searched for.
This can be done by interfacing with the Google Suggest protocol for web-browser search bars. When you give it a word, it returns a list of suggested phrases as well as the number of times each phrase has been searched (I'm not sure about the time unit; presumably in the last year).
Here is some Python code for doing this, slightly adapted from code by odewahn1 at O'Reilly Answers and working on Python 2.6 and lower:
from sgmllib import SGMLParser
import urllib2
import urllib

# Define the class that will parse the suggestion XML
class PullSuggestions(SGMLParser):
    def reset(self):
        SGMLParser.reset(self)
        self.suggestions = []
        self.queries = []
    def start_suggestion(self, attrs):
        for a in attrs:
            if a[0] == 'data': self.suggestions.append(a[1])
    def start_num_queries(self, attrs):
        for a in attrs:
            if a[0] == 'int': self.queries.append(a[1])

# ENTER THE BASE QUERY HERE
base_query = ""  # This is the base query
base_query += "%s"

alphabet = "abcdefghijklmnopqrstuvwxyz"
for letter in alphabet:
    q = base_query % letter
    query = urllib.urlencode({'q': q})
    url = "http://google.com/complete/search?output=toolbar&%s" % query
    res = urllib2.urlopen(url)
    parser = PullSuggestions()
    parser.feed(res.read())
    parser.close()
    for i in range(0, len(parser.suggestions)):
        print "%s\t%s" % (parser.suggestions[i], parser.queries[i])
This at least solves the problem in part, but unfortunately it is still difficult to reliably obtain the number of searches for any specific word or phrase and impossible to obtain the search history of different phrases.
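A note on portability: sgmllib was removed in Python 3, so the parser above won't run there. The same toolbar-format XML could be handled with xml.etree.ElementTree instead; since the endpoint is long gone, this sketch parses a canned response in that format (the element and attribute names follow the old toolbar XML, which I cannot verify against a live service):

```python
import xml.etree.ElementTree as ET

# Canned toolbar-format response; the live endpoint no longer serves this.
xml_src = """<toplevel>
  <CompleteSuggestion>
    <suggestion data="python tutorial"/>
    <num_queries int="12345"/>
  </CompleteSuggestion>
</toplevel>"""

root = ET.fromstring(xml_src)
for cs in root.iter('CompleteSuggestion'):
    suggestion = cs.find('suggestion').get('data')
    queries = cs.find('num_queries').get('int')
    print("%s\t%s" % (suggestion, queries))
```

ElementTree gives you the parsed tree directly, so there is no need for the start_tag callback bookkeeping that SGMLParser required.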
I just started searching for it and found a good way to retrieve it using Python with the following script. Basically it passes a specialized query to Google's historical financial database.
from urllib2 import urlopen
from pandas import read_csv, DatetimeIndex

def get_index(gindex, startdate=20040101):
    """
    API wrapper for Google Domestic Trends data.
    https://www.google.com/finance/domestic_trends
    Available Indices:
    'ADVERT', 'AIRTVL', 'AUTOBY', 'AUTOFI', 'AUTO', 'BIZIND', 'BNKRPT',
    'COMLND', 'COMPUT', 'CONSTR', 'CRCARD', 'DURBLE', 'EDUCAT', 'INVEST',
    'FINPLN', 'FURNTR', 'INSUR', 'JOBS', 'LUXURY', 'MOBILE', 'MTGE',
    'RLEST', 'RENTAL', 'SHOP', 'TRAVEL', 'UNEMPL'
    """
    base_url = 'http://www.google.com/finance/historical?q=GOOGLEINDEX_US:'
    full_url = '%s%s&output=csv&startdate=%s' % (base_url, gindex, startdate)
    dframe = read_csv(urlopen(full_url), index_col=0)
    dframe.index = DatetimeIndex(dframe.index)
    dframe = dframe.sort_index(0)
    for col in dframe.columns:
        if len(dframe[col].unique()) == 1:
            dframe.pop(col)
    if len(dframe.columns) == 1 and dframe.columns[0] == 'Close':
        dframe.columns = [gindex]
    return dframe[gindex]
I couldn't find any documentation provided by Google, but Brad Jasper seems to have come up with some method for querying Insights for information. Note: I'm not sure if it still works... Good luck!
Use Python to Access Google Insights API
Sadly no; however, the Google AdWords API Keyword Estimator may solve your need.
