Comparing values in a JSON file using PyCharm - python

New to reading and writing with Python and JSON, so go easy on me, but would appreciate any help!
OK so I am using PyCharm and am reading in a large JSON file with the following code, and I am able to print it out:
import json
from pprint import pprint

with open('Files/InstrumentPublished.json') as data_file:
    data = json.load(data_file)

pprint(data)
Within the JSON file I am reading there are 4 fields I want to look at: A_ID, B_ID, C_ID and D_ID. All these fields should have the same value (they are located in different places throughout the JSON file):
"A_ID": "12345",
......
"B_ID": "12345",
.......
"C_ID": "12345",
........
"D_ID": "12345",
Basically I need to add the following logic when reading the file:
Identify the following fields in the JSON file: A_ID, B_ID, C_ID and D_ID
Read/consume their value and compare them
If all values are equal, print true.
If any values differ, print false.
I have searched around the internet but can only find material on reading and writing JSON with Python, nothing on comparing individual fields.
Like I said any help would be appreciated!
Thanks!
Kirsty

When you read JSON in Python it becomes a dictionary, so you can access each value with square-bracket key lookup. Assuming the four IDs are top-level keys, here is the code:
import json

with open('Files/InstrumentPublished.json') as data_file:
    data = json.load(data_file)

all_ids_equal = data['A_ID'] == data['B_ID'] == data['C_ID'] == data['D_ID']
print(all_ids_equal)
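Since the question says the four IDs live in different places throughout the file, the one-liner above only works if they happen to be top-level keys. A minimal sketch of a recursive search that finds each key at any depth (the sample data here is invented for illustration; the key names are taken from the question):

```python
import json

def find_key(obj, key):
    """Recursively search nested dicts/lists for the first value of `key`."""
    if isinstance(obj, dict):
        if key in obj:
            return obj[key]
        for value in obj.values():
            found = find_key(value, key)
            if found is not None:
                return found
    elif isinstance(obj, list):
        for item in obj:
            found = find_key(item, key)
            if found is not None:
                return found
    return None

# Invented nested structure standing in for InstrumentPublished.json
data = json.loads('{"a": {"A_ID": "12345"}, "b": [{"B_ID": "12345"}], '
                  '"C_ID": "12345", "c": {"d": {"D_ID": "12345"}}}')

ids = [find_key(data, k) for k in ('A_ID', 'B_ID', 'C_ID', 'D_ID')]
print(len(set(ids)) == 1)  # True when all four values match
```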

Related

JSON File multiple roots

I have a JSON file I'm trying to manipulate in Python, but it seems the JSON formatting is not correct:
{{"ticket":{"id":"123", "name":"bill"}},
{"ticket":{"id":"1234", "name":"james"}}}
When I try to format it using a JSON formatter it gives me "Error: multiple root elements".
How can I fix it?
Update: with the suggestion from funqkey I updated the script:
import json

with open('ticketData8242020-6152021.json', 'r') as f:
    data = f.read()

data = json.loads(data)
There is something wrong with the file. I will attempt to remove the "ticket" object references from the file to fix it. Thanks everyone.
The problems here include:
- "ticket" needs to be in quotes
- When you have multiple objects, you need a list ([...]), not a dict
- You can't have an object with multiple "ticket" keys
I SUSPECT what you want is a list of objects, like this:
[{"id":"123", "name":"bill"}, {"id":"1234", "name":"james"}]
Or maybe a list of objects with one entry each, as funqkey suggested:
[{"ticket":{"id":"123", "name":"bill"}}, {"ticket":{"id":"1234", "name":"james"}}]
# Should look like this:
# [{"ticket": {"id": "123", "name": "bill"}}, {"ticket": {"id": "1234", "name": "james"}}]
import json

with open('ticketData8242020-6152021.json', 'r') as f:
    data = f.read()

data = json.loads(data)
In JSON, keys must be quoted using ". Therefore
{{ticket:{"id":"123", "name":"bill"}}, {ticket:{"id":"1234", "name":"james"}}}
is not valid JSON. Quoting the keys is not enough on its own, though: the outer braces form an object whose members are not key/value pairs, so the corrected version also needs a list as the outer container:
[{"ticket":{"id":"123", "name":"bill"}}, {"ticket":{"id":"1234", "name":"james"}}]
You can validate your JSON online: JSON Online Validator and Formatter - JSON Lint
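Once the file is a valid JSON array like the one above, json.load (or json.loads) returns a plain Python list of dictionaries that you can iterate directly, a minimal sketch:

```python
import json

raw = ('[{"ticket": {"id": "123", "name": "bill"}},'
       ' {"ticket": {"id": "1234", "name": "james"}}]')

tickets = json.loads(raw)  # a list of single-key dicts

for entry in tickets:
    ticket = entry["ticket"]
    print(ticket["id"], ticket["name"])  # 123 bill, then 1234 james
```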

Python - storing and changing variable in json

So I tried to store 'number' in JSON, but I still don't know how to update it. It should work something like this: on a command it reads the number from the JSON (some code), at the end it adds 1 to this number, and when you use the command again it shows the number increased by 1.
Can somebody help me with this?
You can keep your JSON data as a Python dictionary. If your data is stored as a JSON string, you can convert it to a dictionary with the json library.
from json import loads
# Data as JSON string
json_data = '{"number": 0}'
# Convert to a dictionary
python_dictionary = loads(json_data)
Afterwards, you can alter the JSON (now dictionary) values by changing them in the dictionary.
# Increment
python_dictionary["number"] += 1
Finally, when you need to use the data in JSON format again, you can convert it back like so:
# Convert back to json
from json import dumps
new_json = dumps(python_dictionary)
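To make the counter actually persist between command invocations, as the question asks, the usual pattern is read, increment, write back. A sketch (the filename counter.json is hypothetical):

```python
import json
import os

COUNTER_FILE = 'counter.json'  # hypothetical filename

def bump_counter():
    """Read the stored number, add 1, write it back, and return it."""
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            data = json.load(f)
    else:
        data = {'number': 0}  # first run: nothing stored yet

    data['number'] += 1

    with open(COUNTER_FILE, 'w') as f:
        json.dump(data, f)

    return data['number']

print(bump_counter())  # 1 on the first run, 2 on the next, and so on
```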

Json serializing and deserializing

I am comparing two nested dictionaries using the deepdiff package in Python. I want to store the result in a file, but it gives me an error:
'prettyordered set is not json serializable'
I tried converting it using 'to_dict': same error. I also tried converting it using 'to_json', which solves that problem, but it adds backslashes in the keys and values, and when reading the file back I get an error:
'json.decoder.JSONDecodeError'
from deepdiff import DeepDiff
import json

variable1 = {'key': 'key32', 'hello': 'hello1'}
variable2 = {"key3": 'key', 'hello': 'hello2'}

result = DeepDiff(variable1, variable2)
result = result.to_json()
print(result)

filename = 'json_serializable' + '.txt'
objects_file = 'D:\\' + filename
f = open(objects_file, 'w')
f.write(json.dumps(result))

with open('D:\\Registryvalues\\' + filename) as json_file:
    variable1 = json.load(json_file)
print(variable1)
I want to write and read that difference object that I am getting using deepdiff. Can somebody help me?
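The backslashes in the code above most likely come from encoding twice: to_json() already returns a JSON-formatted string, and calling json.dumps() on that string wraps it in a second layer of escaping, which then fails to round-trip. A minimal sketch of writing and reading the diff without double-encoding (the diff string here is a stand-in for DeepDiff output, and the filename is invented for illustration):

```python
import json

# Stand-in for DeepDiff(variable1, variable2).to_json():
# already a JSON-formatted string.
diff_json = ('{"values_changed": {"root[\'hello\']": '
             '{"new_value": "hello2", "old_value": "hello1"}}}')

with open('diff.json', 'w') as f:
    f.write(diff_json)        # write the string as-is; no json.dumps()

with open('diff.json') as f:
    diff = json.load(f)       # parses cleanly, no stray backslashes

print(diff['values_changed'])
```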

Mongoexport exporting invalid json files

I collected some tweets from the Twitter API and stored them in MongoDB. I exported the data to a JSON file without any issues, but when I wrote a Python script to read the JSON and convert it to a CSV, I got this traceback error:
json.decoder.JSONDecodeError: Extra data: line 367 column 1 (char 9745)
So, after digging around the internet I was pointed to check the actual JSON data in an online validator, which I did. This gave me the error of:
Multiple JSON root elements
from the site https://jsonformatter.curiousconcept.com/
Now, the problem is, I haven't found anything on the internet of how to handle that error. I'm not sure if it's an error with the data I've collected, exported, or if I just don't know how to work with it.
My end game with these tweets is to make a network graph. I was looking at either Networkx or Gephi, which is why I'd like to get a csv file.
Robert Moskal is right: if you can address the issue at the source and use the --jsonArray flag with mongoexport, the problem becomes easier. If you can't address it at the source, read the points below.
The code below will extract you the individual json objects from the given file and convert them to python dictionaries.
You can then apply your CSV logic to each individual dictionary.
If you are using the csv module on Python 2, I would say use the unicodecsv module instead, as it handles the unicode data in your JSON objects.
import json

with open('path_to_your_json_file', 'r') as infile:
    json_block = []
    for line in infile:
        json_block.append(line)
        if line.startswith('}'):
            json_dict = json.loads(''.join(json_block))
            json_block = []
            print(json_dict)
If you want to convert it to CSV using pandas you can use the below code:
import json
import pandas as pd

with open('path_to_your_json_file', 'r') as infile:
    json_block = []
    dictlist = []
    for line in infile:
        json_block.append(line)
        if line.startswith('}'):
            json_dict = json.loads(''.join(json_block))
            dictlist.append(json_dict)
            json_block = []

df = pd.DataFrame(dictlist)
df.to_csv('out.csv', encoding='utf-8')
If you want to flatten out the json object you can use pandas.io.json.json_normalize() method.
Elaborating on @MYGz's suggestion to use --jsonArray:
Your post doesn't show how you exported the data from mongo. If you use the following via the terminal, you will get valid json from mongodb:
mongoexport --collection=somecollection --db=somedb --jsonArray --out=validfile.json
Replace somecollection, somedb and validfile.json with your target collection, target database, and desired output filename respectively.
The following: mongoexport --collection=somecollection --db=somedb --out=validfile.json will NOT give you the results you are looking for because:
By default mongoexport writes data using one JSON document for every MongoDB document. Ref
A bit of a late reply, and I am not sure this was available when the question was posted. Anyway, there is now a simple way to import mongoexport JSON data:
df = pd.read_json(filename, lines=True)
mongoexport writes each line as a JSON object itself, rather than the whole file as one JSON document.
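The same line-delimited format (one JSON document per line, often called NDJSON) can also be parsed with the standard library alone, a minimal sketch with invented sample data:

```python
import io
import json

# Simulated mongoexport output: one JSON document per line.
ndjson = io.StringIO(
    '{"_id": "1", "text": "first tweet"}\n'
    '{"_id": "2", "text": "second tweet"}\n'
)

records = [json.loads(line) for line in ndjson if line.strip()]
print(records[0]["text"])  # first tweet
```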

Importing JSON in Python and Removing Header

I'm trying to write a simple JSON to CSV converter in Python for Kiva. The JSON file I am working with looks like this:
{"header":{"total":412045,"page":1,"date":"2012-04-11T06:16:43Z","page_size":500},"loans":[{"id":84,"name":"Justine","description":{"languages":["en"], REST OF DATA
The problem is, when I use json.load, I only get the strings "header" and "loans" in data, but not the actual information such as id, name, description, etc. How can I skip over everything until the [? I have a lot of files to process, so I can't manually delete the beginning in each one. My current code is:
import csv
import json

fp = csv.writer(open("test.csv", "w", newline=""))

f = open("loans/1.json")
data = json.load(f)
f.close()

for item in data:
    fp.writerow([item["name"]] + [item["posted_date"]] + OTHER STUFF)
Instead of
for item in data:
use
for item in data['loans']:
The header is stored in data['header'] and data itself is a dictionary, so you'll have to key into it in order to access the data.
data is a dictionary, so for item in data iterates the keys.
You probably want for loan in data['loans']:
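Putting the fix together, a self-contained sketch (the field names come from the question; the sample records and values are invented for illustration, and the CSV is written to an in-memory buffer just to keep the example runnable):

```python
import csv
import io
import json

# Trimmed stand-in for one Kiva file: a "header" object plus a "loans" list.
raw = ('{"header": {"total": 2, "page": 1},'
       ' "loans": ['
       '{"id": 84, "name": "Justine", "posted_date": "2012-04-11"},'
       '{"id": 85, "name": "Samuel", "posted_date": "2012-04-12"}]}')
data = json.loads(raw)

out = io.StringIO()
writer = csv.writer(out)
for loan in data['loans']:          # iterate the list, not the outer dict
    writer.writerow([loan['name'], loan['posted_date']])

print(out.getvalue())
```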
