Python -- get at JSON info that's written like XML

In Python, I usually handle simple JSON with this sort of template:
import urllib2
import json

url = "url"
response = urllib2.urlopen(url)
raw = response.read()
parsed = json.loads(raw)
and then get at the variables with calls like:
parsed["object name"]["value name"]
But, this works with JSON that's formatted roughly like:
{'object':{'index':'value', 'index':'value'}}
The JSON I just encountered is formatted like:
{'index':'value', 'index':'value'},{'index':'value', 'index':'value'}
so there are no names for me to reference the different blocks. Of course the blocks give different info, but have the same "keys" -- much like XML is usually formatted. Using my method above, how would I parse through this JSON?

The following is not valid JSON:
{'index':'value', 'index':'value'},{'index':'value', 'index':'value'}
whereas
[{'index':'value', 'index':'value'},{'index':'value', 'index':'value'}]
is valid (note that strict JSON also requires double quotes around strings, as in the examples below),
and the Python traceback confirms it:
import json
json_string = '{"a":"value", "b":"value"},{"a":"value", "b":"value"}'
parsed_json = json.loads(json_string)
print parsed_json
Traceback (most recent call last):
File "/Users/tron/Desktop/test3.py", line 3, in <module>
parsed_json = json.loads(json_string)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 369, in decode
raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 1 column 27 - line 1 column 54 (char 26 - 53)
[Finished in 0.0s with exit code 1]
Whereas if you do
json_string = '[{"a":"value", "b":"value"},{"a":"value", "b":"value"}]'
everything works fine.
If that is the case, you can treat the parsed result as a list of JSON objects, where parsed_json[0] is the first object, parsed_json[1] is the second, and so on.
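For example (a minimal sketch with made-up keys and values), wrapping the fragment in brackets turns it into a valid JSON array that json.loads can handle:

```python
import json

# Two JSON objects separated by a comma -- not valid JSON on its own
fragment = '{"a": 1, "b": 2},{"a": 3, "b": 4}'

# Wrapping it in brackets makes it a valid JSON array
parsed = json.loads("[" + fragment + "]")

print(parsed[0]["a"])  # prints 1
print(parsed[1]["b"])  # prints 4
```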
Otherwise, if you think this is an issue you "just have to deal with", here is one option:
Think of the ways the JSON can be malformed and write a simple function to account for them. For the case above, here is a hacky way you can deal with it.
import json

json_string = '{"a":"value", "b":"value"},{"a":"value", "b":"value"}'

def parseJson(string):
    parsed_json = None
    try:
        parsed_json = json.loads(string)
        print parsed_json
    except ValueError, e:
        print string, "didn't parse"
        if "Extra data" in str(e.args):
            newString = "[" + string + "]"
            print newString
            return parseJson(newString)
    return parsed_json
You could add more if/else to deal with various things you run into. I have to admit, this is very hacky and I don't think you can ever account for every possible mutation.
Good luck

The result must be a list of dicts:
[{'index1':'value1', 'index2':'value2'},{'index1':'value1', 'index2':'value2'}]
so you can reference entries by integer index: item[1]['index1']

Related

Valid (?) JSON data causing errors in Django, must be served to frontend as string and converted by JSON.parse() in javascript - why?

I have a JSON file hosted locally in my Django directory. It is fetched from that file to a view in views.py, where it is read in like so:
def Stops(request):
    json_data = open(finders.find('JSON/myjson.json'))
    data1 = json.load(json_data) # deserialises it
    data2 = json.dumps(data1) # json formatted string
    json_data.close()
    return JsonResponse(data2, safe=False)
Using JsonResponse without (safe=False) returns the following error:
TypeError: In order to allow non-dict objects to be serialized set the safe parameter to False.
Similarly, using json.loads(json_data.read()) instead of json.load gives this error:
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
This is confusing to me - I have validated the JSON using an online validator. When the JSON is sent to the frontend with safe=False, the resulting object that arrives is a string, even after calling .json() on it in javascript like so:
fetch("/json").then(response => {
    return response.json();
}).then(data => {
    console.log("data ", data); // <---- This logs a string to console
    ...
However going another step and calling JSON.parse() on the string converts the object to a JSON object that I can use as intended
data = JSON.parse(data);
console.log("jsonData", data); // <---- This logs a JSON object to console
But this solution doesn't strike me as a complete one.
At this point I believe the most likely explanation is that something is wrong with the source JSON (the file's character encoding?). Either that, or json.dumps() is not doing what I think it should, or I am misunderstanding Django's JsonResponse function in some way...
I've reached the limit of my knowledge on this subject. If you have any wisdom to impart, I would really appreciate it.
EDIT: As in the answer below by Abdul, I was reformatting the JSON into a string with the json.dumps(data1) line
Working code looks like:
def Stops(request):
    json_data = open(finders.find('JSON/myjson.json'))
    data = json.load(json_data) # deserialises it
    json_data.close()
    return JsonResponse(data, safe=False) # pass the python object here
Let's see the following lines of your code:
json_data = open(finders.find('JSON/myjson.json'))
data1 = json.load(json_data) # deserialises it
data2 = json.dumps(data1) # json formatted string
You open a file and get a file pointer in json_data, parse its content to get a Python object in data1, and then turn that back into a JSON string stored in data2. Somewhat redundant, right? Next you pass this JSON string to JsonResponse, which serializes it again, so the frontend receives a JSON string whose content is itself JSON -- a string inside a string.
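The double serialization can be reproduced with the json module alone, without Django (a minimal sketch; the data1 dict here is made up for illustration):

```python
import json

data1 = {"stop": "Main St", "id": 1}

# Serializing twice produces a JSON *string* whose content is itself JSON,
# which is why the frontend had to call JSON.parse() a second time.
once = json.dumps(data1)   # '{"stop": "Main St", "id": 1}'
twice = json.dumps(once)   # a quoted, escaped string literal

# Decoding the double-encoded payload once yields a str, not a dict
decoded = json.loads(twice)
print(type(decoded))              # <class 'str'>
print(type(json.loads(decoded)))  # <class 'dict'>
```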
Try the following code instead:
def Stops(request):
    json_data = open(finders.find('JSON/myjson.json'))
    data = json.load(json_data) # deserialises it
    json_data.close()
    return JsonResponse(data, safe=False) # pass the python object here
Note: function names in Python should ideally be in snake_case, not PascalCase, so instead of Stops you should use stops. See PEP 8 -- Style Guide for Python Code.

Python errors when trying to read and query a JSON file

I am trying to write a Python function as part of my job to be able to check the existence of data in a JSON file which I can only get by downloading it from a website. I am the only resource here with any coding or scripting experience (HTML, CSS & SQL) so this has fallen to me to sort out. I have no experience thus far with Python.
I am not allowed to change the structure or format of the JSON file, the format of it is:
{
    "naglowek": {
        "dataGenerowaniaDanych": "20210514",
        "liczbaTransformacji": "5000",
        "schemat": "RRRRMMDDNNNNNNNNNNBBBBBBBBBBBBBBBBBBBBBBBBBB"
    },
    "skrotyPodatnikowCzynnych": [
        "examplestring1",
        "examplestring2",
        "examplestring3",
        "examplestring4",
    ],
    "maski": [
        "examplemask1",
        "examplemask2",
        "examplemask3",
        "examplemask4"
    ]
}
I have tried numerous examples found online but none of them seem to work. From looking at various websites the Python code I have is:
import json

with open('20210514.json') as myfile:
    data = json.load(myfile)
print(data)

keyVal = 'examplestring2'
if keyVal in data:
    # Print the success message and the value of the key
    print("Data is found in JSON data")
else:
    # Print the message if the value does not exist
    print("Data is not found in JSON data")
But I am getting the errors below; I am a complete newbie to Python, so I am having trouble deciphering them:
D:\PycharmProjects\venv\Scripts\python.exe D:/PycharmProjects/json_test.py
Traceback (most recent call last):
File "D:\PycharmProjects\json_test.py", line 4, in <module>
data = json.load(myfile)
File "C:\Users\xyz\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 293, in load
return loads(fp.read(),
File "C:\Users\xyz\AppData\Local\Programs\Python\Python39\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Users\xyz\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\xyz\AppData\Local\Programs\Python\Python39\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 12 column 5 (char 921)
Process finished with exit code 1
Any help would be massively appreciated!
{
    "naglowek": {
        "dataGenerowaniaDanych": "20210514",
        "liczbaTransformacji": "5000",
        "schemat": "RRRRMMDDNNNNNNNNNNBBBBBBBBBBBBBBBBBBBBBBBBBB"
    },
    "skrotyPodatnikowCzynnych": [
        "examplestring1",
        "examplestring2",
        "examplestring3",
        "examplestring4"
    ],
    "maski": [
        "examplemask1",
        "examplemask2",
        "examplemask3",
        "examplemask4"
    ]
}
This should work. The problem is the comma after the last element of the "skrotyPodatnikowCzynnych" list, which the JSON parser can't handle. JavaScript itself (since ECMAScript 5) tolerates trailing commas in array literals, but the JSON format does not. So make sure there is no comma at the end of a list.
For your if-else statement to be correct, you'd have to change it to something like this:
keyVal = 'examplestring2'
keyName = 'skrotyPodatnikowCzynnych'
if keyName in data and keyVal in data[keyName]:
    # Print the success message and the value of the key
    print("Data is found in JSON data")
else:
    # Print the message if the value does not exist
    print("Data is not found in JSON data")
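Put together, the membership check might look like this (a sketch with a trimmed version of the sample data inlined instead of read from a file):

```python
import json

# Trimmed version of the sample file, with the trailing comma removed
sample = '''
{
    "naglowek": {"dataGenerowaniaDanych": "20210514"},
    "skrotyPodatnikowCzynnych": ["examplestring1", "examplestring2"],
    "maski": ["examplemask1"]
}
'''
data = json.loads(sample)

keyVal = 'examplestring2'
keyName = 'skrotyPodatnikowCzynnych'
# Check the list under the right top-level key, not the top-level dict itself
if keyName in data and keyVal in data[keyName]:
    print("Data is found in JSON data")
else:
    print("Data is not found in JSON data")
```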
Remove the trailing comma; the JSON specification does not allow trailing commas.
If you don't want to change the file structure, you can parse it with a YAML parser instead, which is more lenient:
import yaml

with open('20210514.json') as myfile:
    data = yaml.load(myfile, Loader=yaml.FullLoader)
print(data)
You also need to install PyYAML first:
https://pyyaml.org/

How to check for and discard invalid multi-line JSON log requests in log files?

I'm writing a script to parse some of our requests, and I need to be able to handle when we have a malformed or incomplete requests. So for example, a typical request would come in with the following format:
log-prefix: {JSON request data}\n
all on a single line, etc...
Then I found out that they have a character buffer limit of 1024 in their writer, so the requests could be spread across many lines, like so:
log-prefix: {First line of data
log-prefix: Second line of requests data
log-prefix: Final line of log data}\n
I'm able to handle this by calling next on the iterator I'm using, removing the prefixes, concatenating the request fragments, and passing the result to json.loads, which returns the dictionary I need for writing to a file.
I'm doing that in the following way:
lines = (line.strip('\n') for line in inf.readlines())
for line in lines:
    if not line.endswith('}'):
        bad_lines = [line]
        while not line.endswith('}'):
            line = next(lines)
            bad_lines.append(line)
        form_line = malformed_data_handler(bad_lines)
    else:
        form_line = parse_out_json(line)
And my functions used in the above code are:
def malformed_data_handler(lines: Sequence) -> dict:
    """
    Takes n malformed lines of bridge log data (where the JSON response has
    been split across n lines, all containing prefixes) and correctly
    delegates the parsing to parse_out_json before returning the concatenated
    result as a dictionary.

    :param lines: An iterable with malformed lines as the elements
    :return: A dictionary ready for writing.
    """
    logger.debug('Handling malformed data.')
    parsed = ''
    logger.debug(lines)
    print(lines)
    for line in lines:
        logger.info('{}'.format(line))
        parsed += parse_out_malformed(line)
    logger.debug(parsed)
    return json.loads(parsed, encoding='utf8')

def parse_out_json(line: str) -> dict:
    """
    Parses out the JSON response returned from the Apache Bridge logs. Takes a
    line and removes the prefix, returning a dictionary.

    :param line:
    :return:
    """
    data = slice(line.find('{'), None)
    return json.loads(line[data], encoding='utf8')

def parse_out_malformed(line: str) -> str:
    prefix = 'bridge-rails: '
    data = slice(line.find(prefix), None)
    parsed = line[data].replace(prefix, '')
    return parsed
So now to my problem, I've now found instances where the log data can look like this:
log-prefix: {First line of data
....
log-prefix: Last line of data (No closing brace)
log-prefix: {New request}
My first thought was to add a check like if '{' in line. But since I'm using a generator to process the lines (for scalability), I don't know that I've found one of these requests until I have already called next and pulled the line out of the generator. At that point I can't push it back, and I'm not sure how to efficiently tell my process to start again from that line and continue normally.
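One possible way around this (a sketch, not the asker's actual code; the prefix, sample lines, and function name below are invented for illustration): instead of calling next inside the loop, accumulate lines in a buffer and flush the buffer whenever a line opens a new request, so nothing ever has to be pushed back into the generator.

```python
import json

def parse_requests(lines, prefix="log-prefix: "):
    """Yield (request_text, was_complete) for each request found.

    A request that opens with '{' but is never closed before the next
    '{' appears is flagged as incomplete rather than silently dropped.
    """
    buffer = []
    for raw in lines:
        line = raw.rstrip("\n").replace(prefix, "", 1)
        if line.startswith("{") and buffer:
            # A new request began before the old one closed: flush as bad.
            yield "".join(buffer), False
            buffer = []
        buffer.append(line)
        if line.endswith("}"):
            yield "".join(buffer), True
            buffer = []
    if buffer:  # trailing incomplete request at end of file
        yield "".join(buffer), False

log = [
    'log-prefix: {"a": 1, ',
    'log-prefix: "b": 2}',
    'log-prefix: {"c": 3',   # never closed
    'log-prefix: {"d": 4}',
]
for text, complete in parse_requests(log):
    if complete:
        print(json.loads(text))
    else:
        print("discarding incomplete request:", text)
```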

Getting KeyError when parsing JSON containing three layers of keys, using Python

I'm building a Python program to parse some calls to a social media API into CSV and I'm running into an issue with a key that has two keys above it in the hierarchy. I get this error when I run the code with PyDev in Eclipse.
Traceback (most recent call last):
line 413, in <module>
main()
line 390, in main
postAgeDemos(monitorID)
line 171, in postAgeDemos
age0To17 = str(i["ageCount"]["sortedAgeCounts"]["ZERO_TO_SEVENTEEN"])
KeyError: 'ZERO_TO_SEVENTEEN'
Here's the section of the code I'm using for it. I have a few other functions built already that work with two layers of keys.
import urllib.request
import json

def postAgeDemos(monitorID):
    print("Enter the date you'd like the data to start on")
    startDate = input('The date must be in the format YYYY-MM-DD. ')
    print("Enter the date you'd like the data to end on")
    endDate = input('The date must be in the format YYYY-MM-DD. ')
    dates = "&start="+startDate+"&end="+endDate
    urlStart = getURL()
    authToken = getAuthToken()
    endpoint = "/monitor/demographics/age?id="
    urlData = urlStart+endpoint+monitorID+authToken+dates
    webURL = urllib.request.urlopen(urlData)
    fPath = getFilePath()+"AgeDemographics"+startDate+"&"+endDate+".csv"
    print("Connecting...")
    if (webURL.getcode() == 200):
        print("Connected to "+urlData)
        print("This query returns information in a CSV file.")
        csvFile = open(fPath, "w+")
        csvFile.write("postDate,totalPosts,totalPostsWithIdentifiableAge,0-17,18-24,25-34,35+\n")
        data = webURL.read().decode('utf8')
        theJSON = json.loads(data)
        for i in theJSON["ageCounts"]:
            postDate = i["startDate"]
            totalDocs = str(i["numberOfDocuments"])
            totalAged = str(i["ageCount"]["totalAgeCount"])
            age0To17 = str(i["ageCount"]["sortedAgeCounts"]["ZERO_TO_SEVENTEEN"])
            age18To24 = str(i["ageCount"]["sortedAgeCounts"]["EIGHTEEN_TO_TWENTYFOUR"])
            age25To34 = str(i["ageCount"]["sortedAgeCounts"]["TWENTYFIVE_TO_THIRTYFOUR"])
            age35Over = str(i["ageCount"]["sortedAgeCounts"]["THIRTYFIVE_AND_OVER"])
            csvFile.write(postDate+","+totalDocs+","+totalAged+","+age0To17+","+age18To24+","+age25To34+","+age35Over+"\n")
        print("File printed to "+fPath)
        csvFile.close()
    else:
        print("Server Error, No Data" + str(webURL.getcode()))
Here's a sample of the JSON I'm trying to parse.
{"ageCounts":[{"startDate":"2016-01-01T00:00:00","endDate":"2016-01-02T00:00:00","numberOfDocuments":520813,"ageCount":{"sortedAgeCounts":{"ZERO_TO_SEVENTEEN":3245,"EIGHTEEN_TO_TWENTYFOUR":4289,"TWENTYFIVE_TO_THIRTYFOUR":2318,"THIRTYFIVE_AND_OVER":70249},"totalAgeCount":80101}},{"startDate":"2016-01-02T00:00:00","endDate":"2016-01-03T00:00:00","numberOfDocuments":633709,"ageCount":{"sortedAgeCounts":{"ZERO_TO_SEVENTEEN":3560,"EIGHTEEN_TO_TWENTYFOUR":1702,"TWENTYFIVE_TO_THIRTYFOUR":2786,"THIRTYFIVE_AND_OVER":119657},"totalAgeCount":127705}}],"status":"success"}
Here it is again with line breaks so it's a little easier to read.
{"ageCounts":[{"startDate":"2016-01-01T00:00:00","endDate":"2016-01-02T00:00:00","numberOfDocuments":520813,"ageCount":
{"sortedAgeCounts":{"ZERO_TO_SEVENTEEN":3245,"EIGHTEEN_TO_TWENTYFOUR":4289,"TWENTYFIVE_TO_THIRTYFOUR":2318,"THIRTYFIVE_AND_OVER":70249},"totalAgeCount":80101}},
{"startDate":"2016-01-02T00:00:00","endDate":"2016-01-03T00:00:00","numberOfDocuments":633709,"ageCount":
{"sortedAgeCounts":{"ZERO_TO_SEVENTEEN":3560,"EIGHTEEN_TO_TWENTYFOUR":1702,"TWENTYFIVE_TO_THIRTYFOUR":2786,"THIRTYFIVE_AND_OVER":119657},"totalAgeCount":127705}}],"status":"success"}
I've tried removing ["sortedAgeCounts"] from the middle of
age0To17 = str(i["ageCount"]["sortedAgeCounts"]["ZERO_TO_SEVENTEEN"])
but I still get the same error. I've removed the 0-17 section to test the other age ranges and I get the same error for them as well. I also tried removing all the underscores from the JSON and then using keys without the underscores.
I've also tried moving the str() conversion from the call to where the output is printed, but the error persists.
Any ideas? Is this section not actually a JSON key (maybe a problem with the all caps), or am I just doing something dumb? Any other code improvements are welcome as well, but I'm stuck on this one.
Let me know if you need to see anything else. Thanks in advance for your help.
Edited (this works):
theJSON = json.loads(s)
for i in theJSON["ageCounts"]:
    print(str(i["ageCount"]["sortedAgeCounts"]["ZERO_TO_SEVENTEEN"]))
where s is a string containing your JSON.
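If the real responses sometimes omit a key, chained dict.get calls with defaults avoid the KeyError entirely (a sketch; the single-entry sample below is trimmed from the JSON above):

```python
import json

entry = json.loads('{"ageCount": {"sortedAgeCounts": {"ZERO_TO_SEVENTEEN": 3245}}}')

# .get returns a chosen default instead of raising KeyError
age0To17 = entry.get("ageCount", {}).get("sortedAgeCounts", {}).get("ZERO_TO_SEVENTEEN", 0)
missing = entry.get("ageCount", {}).get("sortedAgeCounts", {}).get("THIRTYFIVE_AND_OVER", 0)

print(age0To17)  # prints 3245
print(missing)   # prints 0 (the default)
```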

How to check if a string is a valid JSON in python

I have a python script providing command line / output in console on remote linux.
I have another script which is reading this output on local machine.
Output is in below format:
ABC: NEG
BCD: NEG
FGH: POS
{aa:bb:cc:dd:ee{"value":"30","type":"Tip 3","targetModule":"Target 3","configurationGroup":null,"name":"Configuration Deneme 3","description":null,"identity":"Configuration Deneme 3","version":0,"systemId":3,"active":true}}
Notice the last line is in JSON format; now I want to detect which lines of the output are JSON.
I tried
if "value" in line:
    json.loads(line)
but it does not parse the line, and even
json.dumps(line)
does not give the expected output.
You can use a try/except clause to check whether a string is actually JSON:
import json

line = '<what you think is json>'
try:
    json_line = json.loads(line)
except ValueError:
    print("not a json")
In your code above, the last line is not valid JSON. You can use a tool like JSONLint to verify whether your JSON is valid.
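The try/except can be wrapped in a small helper and applied line by line to filter the mixed output (a sketch; the sample lines are shortened versions of the output above):

```python
import json

def is_json(line):
    """Return True if the line parses as JSON, False otherwise."""
    try:
        json.loads(line)
    except ValueError:  # json.JSONDecodeError is a subclass of ValueError
        return False
    return True

output = [
    "ABC: NEG",
    "BCD: NEG",
    '{"value": "30", "type": "Tip 3"}',
]
json_lines = [line for line in output if is_json(line)]
print(json_lines)  # only the last line parses
```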
