I am parsing a huge JSON file in Python, step by step, and I am stuck at a point where I can't figure out why the code is not running properly. My code is below.
I want to collect the value of WHO_REGION for every attr entry into an array; I am not an expert in Python programming. Here is the JSON: http://apps.who.int/gho/athena/data/COUNTRY.json
import json
from pprint import pprint

mylabel = []
mylabel2 = []

with open('C:\Users\Syed Saad Ahmed\Desktop\FL\COUNTRY.json') as data_file:
    data = json.load(data_file)

for i in range(0, 246):
    mylabel.append(data["dimension"][0]["code"][i]["label"])
print mylabel

for j in range(0, 246):
    for k in range(0, 21):
        if (data["dimension"][0]["code"][j]["attr"][k]["category"] == 'WHO_REGION'):
            mylabel2.append(data["dimension"][0]["code"][j]["attr"][k]["value"])
print mylabel2
You can browse your JSON object using nested loops:
import json

# `data` holds the JSON text, e.g. read from COUNTRY.json or downloaded from the URL above.
obj = json.loads(data)

dimension_list = obj["dimension"]
for dimension in dimension_list:
    code_list = dimension["code"]
    for code in code_list:
        attr_list = code["attr"]
        for attr in attr_list:
            if attr["category"] == "WHO_REGION":
                print(attr["value"])
It is a bit involved because each level contains a list of entries…
Of course, it's up to you to add some filtering.
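For completeness, here is a minimal sketch of the same idea that fetches the file straight from the WHO URL with requests and builds a country label → WHO_REGION mapping; the key names (dimension, code, label, attr, category, value) are simply the ones used in the question.

import requests

resp = requests.get("http://apps.who.int/gho/athena/data/COUNTRY.json")
obj = resp.json()

# Map every country label to its WHO_REGION value.
region_by_country = {}
for dimension in obj["dimension"]:
    for code in dimension["code"]:
        for attr in code["attr"]:
            if attr["category"] == "WHO_REGION":
                region_by_country[code["label"]] = attr["value"]

print(region_by_country)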
I tried to access the 'gold_spent' key in a dictionary made from a JSON file.
Here is my code:
import json
import requests
response = requests.get("https://sky.shiiyu.moe/api/v2/profile/tProfile")
json_data = json.loads(response.text)
print(json_data['gold_spent'])
When I run this I get "KeyError: 'gold_spent'".
I don't know what I am doing wrong, any help would be appreciated.
The data you are looking for is nested. See below.
print(json_data['profiles']['590cedda63e145ea98d44015649aba30']['data']['misc']['auctions_buy']['gold_spent'])
output
46294255
You got the exception because gold_spent is not a top-level key; you need to investigate the structure to find it. Accessing a key that does not exist in a dictionary always raises a KeyError.
import json
import requests
response = requests.get("https://sky.shiiyu.moe/api/v2/profile/tProfile")
json_data = json.loads(response.text)
print(json_data.keys())
# dict_keys(['profiles'])
print(json_data['profiles'].keys())
# dict_keys(['590cedda63e145ea98d44015649aba30'])
print(json_data['profiles']['590cedda63e145ea98d44015649aba30'].keys())
# dict_keys(['profile_id', 'cute_name', 'current', 'last_save', 'raw', 'items', 'data'])
print(json_data['profiles']['590cedda63e145ea98d44015649aba30']['data']['misc']['auctions_buy']['gold_spent'])
# 46294255
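If you would rather not dig with .keys() by hand, here is a small helper (a sketch, not part of any library) that walks arbitrarily nested dicts and lists and reports every path at which a key occurs:

def find_key(obj, key, path=()):
    """Yield the path of every occurrence of `key` in nested dicts/lists."""
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == key:
                yield path + (k,)
            yield from find_key(v, key, path + (k,))
    elif isinstance(obj, list):
        for i, v in enumerate(obj):
            yield from find_key(v, key, path + (i,))

for hit in find_key(json_data, 'gold_spent'):
    print(hit)
# e.g. ('profiles', '590cedda63e145ea98d44015649aba30', 'data', 'misc', 'auctions_buy', 'gold_spent')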
I have a large JSON file that contains thousands of category entries. I want to filter out the categories that have no subcategories nested inside; the absence of subcategories means the subCategories value is an empty list.
For example: https://paste.ubuntu.com/p/cHg8vnQyZm/
I'm new to Stack Overflow, sorry if I bothered you. I fixed the problem; I'll leave the code below. Thank you everyone, I wish you healthy days.
import ijson

def parser(filename):
    with open(filename, mode="r", encoding="utf-8") as json:
        parser = ijson.parse(json)
        prefixes = []
        for prefix, event, value in parser:
            if prefix.endswith(".subCategories") and not value:
                if prefix not in prefixes:
                    prefixes.append(prefix)
        nullCategories = []
        for rank in range(len(prefixes)):
            json.seek(0)
            for nullCategory in ijson.items(json, prefixes[rank]):
                nullCategories.append(nullCategory)
        return nullCategories

if __name__ == "__main__":
    lists = [category for category in parser("data.json") if category]
    for x in lists:
        print(x[0]["name"])
Sorry, I forgot to say something. My goal is not just to get subcategories with empty content. My main goal is to get the names of subcategories with empty content.
With ijson I managed to get all subcategories with empty content:
import ijson

def load_json(filename):
    with open(filename, "rb") as json:
        parser = ijson.parse(json)
        prefixes = []
        for prefix, event, value in parser:
            if prefix.endswith(".subCategories") and not value:
                prefixes.append(prefix)
        for x in prefixes:
            print(x)

load_json("data.json")
But I couldn't find a way to get the names of these categories.
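One way to get them (a sketch, assuming each category object has a name field that appears before its subCategories field, as in the pasted sample) is to remember the most recent name while streaming and record it whenever a subCategories array opens and immediately closes:

import ijson

def empty_subcategory_names(filename):
    """Collect the name of every category whose subCategories list is empty."""
    names = []
    with open(filename, "rb") as f:
        last_name = None
        prev = (None, None)
        for prefix, event, value in ijson.parse(f):
            if prefix.endswith(".name") and event == "string":
                last_name = value          # remember the name of the current category
            if (prefix.endswith(".subCategories") and event == "end_array"
                    and prev == (prefix, "start_array")):
                names.append(last_name)    # the array opened and closed with nothing inside
            prev = (prefix, event)
    return names

if __name__ == "__main__":
    for name in empty_subcategory_names("data.json"):
        print(name)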
I created a Python script which displays all the Wi-Fi passwords stored on the computer by calling external commands through the subprocess module. But when I executed the script I got an error instead of the result:
TypeError: argument of type 'int' is not iterable
Here is my code:
import subprocess

data = subprocess.check_output(['netsh', 'wlan', 'show', 'profiles'])
data.decode('utf-8').split('\n')
profiles = [x.split(':')[1][1:-1] for x in data if 'All User Profile' in x]

for i in profiles:
    results = subprocess.check_output(['netsh', 'wlan', 'show', 'profile', i, 'key=clear'])
    results.decode('utf-8').split('\n')
    results = [y.split(':')[1][1:-1] for y in results if 'Key Content' in y]
    try:
        print('{:<30}| {:<}'.format(i, results[0]))
    except IndexError:
        print('{:<30}| {:<}'.format(i, ''))
data.decode('utf-8').split('\n') produces the list you want, but it does not store that list back into data; you need to assign the result back to data and do the same for results:
data = data.decode('utf-8').split('\n')
results = results.decode('utf-8').split('\n')
There may be other issues, but this explains why what you thought was a list of strings is actually still the bytes object, so you are iterating over its bytes (ints) rather than over lines.
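Put together, a minimal corrected version of the script (unchanged apart from assigning the decoded, line-split output back to the variables) would look like this:

import subprocess

# Assign the decoded, line-split output back so we iterate over strings, not raw bytes.
data = subprocess.check_output(['netsh', 'wlan', 'show', 'profiles']).decode('utf-8').split('\n')
profiles = [x.split(':')[1][1:-1] for x in data if 'All User Profile' in x]

for i in profiles:
    results = subprocess.check_output(
        ['netsh', 'wlan', 'show', 'profile', i, 'key=clear']).decode('utf-8').split('\n')
    results = [y.split(':')[1][1:-1] for y in results if 'Key Content' in y]
    try:
        print('{:<30}| {:<}'.format(i, results[0]))
    except IndexError:
        print('{:<30}| {:<}'.format(i, ''))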
I shouldn't have split that statement into two lines like the following:
data = subprocess.check_output(['netsh', 'wlan', 'show', 'profiles'])
data.decode('utf-8').split('\n')
The statements above are what caused the error in my script; the solution is to write it like this:
data = subprocess.check_output(['netsh', 'wlan', 'show', 'profiles']).decode('utf-8').split('\n')
and now it works properly.
I'm using the requests library in Python to make a POST request, but I'm having a problem when I read the value from a spreadsheet.
The following code works (returns a 201 status code):
url = 'http://myport:8092//api/Accounts/1000/Users'
item = {"firstName": "John", "lastName": "Smith", "userName": "JSmith"}
r = requests.post(url, json = item)
print(r.status_code)
As soon as I read "item" from a cell in a spreadsheet instead, a 501 status code gets returned, even though printing "item" after reading it from the spreadsheet matches the value shown above.
I haven't been able to find a solution; the only thing I can think of is that the cell is being read as a string.
Do I need to convert it into a JSON object before I run the post?
501 is the status code for Not Implemented. It looks like the URL you're sending to doesn't accept POST requests. Is the URL correct?
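If the URL does turn out to be fine and the cell really is coming back as a string, one thing worth trying (a sketch, assuming the cell text is valid JSON with double-quoted keys) is to parse it into a dict before posting:

import json
import requests

url = 'http://myport:8092//api/Accounts/1000/Users'
cell_text = '{"firstName": "John", "lastName": "Smith", "userName": "JSmith"}'  # as read from the spreadsheet

item = json.loads(cell_text)       # str -> dict
r = requests.post(url, json=item)  # requests serializes the dict as the JSON body
print(r.status_code)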
I'm pretty new to Python and I want to fetch a JSON document and build a Python dict from it.
An example of such a JSON document is this:
http://ajax.googleapis.com/ajax/services/search/web?v=2.0&q=python?cod=&start=2
So I need to fetch it and print all the "url" keys to the screen.
I have tried something like the following, but it did not work well; my code now is:
while (page < 50):
    page_src = urllib.urlopen(search_url + '?cod=&start=' + str(page)).read()
    json_src = json.loads(page_src)
    for item in json_src['responseData']:
        sub_item = json_src['responseData']['results']
        for link in sub_item:
            for key in link:
                if (key == u'"url"'):
                    print link[key]
    page = page + 1
But when it is executed I get:
TypeError: 'NoneType' object is not iterable
I don't know where I'm going wrong, please help me.
Thank you all.
TheV0iD
Check to make sure your URL is correct; the code worked for me. My only revision would be:
for item in json_src['responseData']['results']:
    print item['url']
Also, make sure your starting and ending values of page produce real URLs; you are getting the NoneType because no 'responseData' was found.
Also, what is your value of search_url? Are you including the ?v=2.0&q=python part in it? If your URL is malformed at all, the NoneType comes from trying to iterate through json_src['responseData']['results'] when there is no such thing.
EDIT:
The issue is that you reassign search_url in the loop. On the second iteration the URL becomes http://ajax.googleapis.com/ajax/services/search/web?v=2.0&q=python?cod=&start=0?cod=&start=1, with both suffixes appended. Simply change the search_url = to cur_url =.
Final code:
print "\n\n RESULTS:"
while (page < 2):
current_url = search_url + '?cod=&start=' + str(page)
json_src = json.load(urllib.urlopen(search_url))
print json_src
results = json_src['responseData']['results']
for result in results:
print "\t" + result['url']
page = page + 1
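As a defensive variant (just a sketch; responseData can come back as null when the API rejects the request), the lookup can be guarded so a bad page no longer raises the TypeError:

response_data = json_src.get('responseData') or {}
results = response_data.get('results') or []
for result in results:
    print "\t" + result['url']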