JSON serializing and deserializing - Python

I am comparing two nested dictionaries using the deepdiff package in Python. I want to store the result in a file, but it gives me an error:
'PrettyOrderedSet is not JSON serializable'
I tried converting it using to_dict, same error. I also tried converting it using to_json; that solves the problem, but it adds backslashes in the keys and values, and when reading the file back I am not able to parse it, it gives me an error:
'json.decoder.JSONDecodeError'
from deepdiff import DeepDiff
import json

variable1 = {'key': 'key32', 'hello': 'hello1'}
variable2 = {'key3': 'key', 'hello': 'hello2'}
result = DeepDiff(variable1, variable2)
result = result.to_json()
print(result)

filename = 'json_serializable' + '.txt'
objects_file = 'D:\\Registryvalues\\' + filename
f = open(objects_file, 'w')
f.write(json.dumps(result))
f.close()

with open('D:\\Registryvalues\\' + filename) as json_file:
    variable1 = json.load(json_file)
    print(variable1)
I want to write and read the difference object that I am getting from deepdiff. Can somebody help me?
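For what it's worth, the backslashes come from serializing twice: to_json() already returns a JSON string, and passing that string through json.dumps() escapes all of its quotes. A minimal sketch of the likely fix, writing the to_json() output as-is and reading it back with json.load (the D:\Registryvalues path is taken from the question):

import json
from deepdiff import DeepDiff

variable1 = {'key': 'key32', 'hello': 'hello1'}
variable2 = {'key3': 'key', 'hello': 'hello2'}
diff = DeepDiff(variable1, variable2)

# to_json() already returns a JSON string - do not json.dumps() it again
with open('D:\\Registryvalues\\json_serializable.txt', 'w') as f:
    f.write(diff.to_json())

# json.load() parses the file straight back into a dictionary
with open('D:\\Registryvalues\\json_serializable.txt') as f:
    restored = json.load(f)
print(restored)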

Related

Python - storing and changing variable in json

So I tried to store 'number' in JSON, but I still don't know how I can update it. It should work something like this: on a command, the script reads the number from the JSON, and at the end it adds 1 to this number, so that when you use the command again it shows the number increased by 1.
Can somebody help me with this?
You can keep your JSON data as a Python dictionary. If your data is stored as JSON, then you can convert it to a dictionary with the json library.
from json import loads
# Data as JSON string
json_data = '{"number": 0}'
# Convert to a dictionary
python_dictionary = loads(json_data)
Afterwards, you can alter the JSON (now dictionary) values by changing them in the dictionary.
# Increment
python_dictionary["number"] += 1
Finally, when you need to use the data in JSON format again, you can convert it back like so:
# Convert back to json
from json import dumps
new_json = dumps(python_dictionary)
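To make the counter actually persist between runs, the updated JSON also has to be written back to the file. A minimal end-to-end sketch, assuming the counter lives in a file named counter.json (a hypothetical name for this example) that starts out containing {"number": 0}:

import json

COUNTER_FILE = 'counter.json'  # hypothetical filename for this example

# Read the current value from the file
with open(COUNTER_FILE) as f:
    data = json.load(f)

# Increment and write the new value back
data['number'] += 1
with open(COUNTER_FILE, 'w') as f:
    json.dump(data, f)

print(data['number'])  # increases by 1 on every run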

How to save multiple data at once in Python

I am running a script which takes, say, an hour to generate the data I want. I want to be able to save all of the relevant variables to some external file so I can fiddle with them later without having to run the hour-long calculation over again. Is there an easy way I can save all of the variables I need into one convenient file?
In Matlab I would just contain all of the results of the calculation in a single structure so that later I could just load results.mat and I would have everything I need stored as results.output1, results.output2 or whatever. What is the Python equivalent of this?
In particular, the data that I would like to save includes arrays of complex numbers, which seems to present difficulties for using things like json.
I suggest taking a look at the built-in shelve module, which provides a persistent, dictionary-like object and generally works with all native Python types, so you can do the following:
Write a complex number to some file (in my example it is named mydata) under the key n (keep in mind that keys should be strings).
import shelve

my_number = 2 + 7j
with shelve.open('mydata') as db:
    db['n'] = my_number
Later, retrieve that number from the given file:
import shelve

with shelve.open('mydata') as db:
    my_number = db['n']
    print(my_number)  # (2+7j)
You can use the pickle module in Python and its dump function to dump all your data into a file. You can reuse the data later. I suggest you read more about pickle.
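Since that answer gives no code, here is a minimal sketch of the pickle approach, assuming the results are gathered into one dictionary (mirroring the Matlab results struct from the question; results.pkl is a hypothetical filename):

import pickle

# Collect everything the calculation produced, complex numbers included
results = {'output1': [1 + 2j, 3 - 4j], 'output2': 'some other data'}

# Dump all variables to one file
with open('results.pkl', 'wb') as f:
    pickle.dump(results, f)

# Later: load everything back in one call
with open('results.pkl', 'rb') as f:
    results = pickle.load(f)
print(results['output1'])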
I would recommend a json file. With json you can assign values to keys, just like dictionaries in stock Python. The json module is part of the Python standard library, so there is nothing extra to install.
import json

data = {"var1": "abcde", "var2": "fghij"}
with open(path, "w") as file:
    json.dump(data, file, indent=2, ensure_ascii=False)
You can also load this from a file using the same API:
with open(path, "r") as file:
    data = json.load(file)
Edit: JSON can also handle the basic container types Python has (lists, dicts, strings, numbers), though not every Python type, so if you want to save an array you can just define it in the dict:
data = {"list1": ["ab", "cd", "ef"]}
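Because the question specifically mentions arrays of complex numbers, which json rejects out of the box, here is one hedged way around it: tag each complex value as a [real, imag] pair with a custom default hook and rebuild it on load (the helper names are made up for this example):

import json

def encode_complex(obj):
    # json calls this for any object it cannot serialize itself
    if isinstance(obj, complex):
        return {"__complex__": [obj.real, obj.imag]}
    raise TypeError(f"Not JSON serializable: {type(obj)}")

def decode_complex(d):
    # Invoked for every decoded JSON object; rebuilds tagged complex values
    if "__complex__" in d:
        real, imag = d["__complex__"]
        return complex(real, imag)
    return d

data = {"output1": [1 + 2j, 3 - 4j]}
text = json.dumps(data, default=encode_complex)
restored = json.loads(text, object_hook=decode_complex)
print(restored)  # {'output1': [(1+2j), (3-4j)]}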

How to get a python array from JSON

I can't figure out how to convert JSON to Python so that all the data ends up in an array.
I used the code below to extract the JSON, but the problem is that extracting strings from each new JSON data set becomes a new issue every time, due to the unequal number of columns.
import json
data = open('acndata_sessions.json')
json.load(data)
I also tried to use https://app.quicktype.io/, but the generated call
data_from_dict(json.loads(json_string))
doesn't work.
Data set: json
This question seems to have been asked before: Convert JSON array to Python list.
That answer uses json.loads instead of the json.load you use. Both are functions, but they are different.
I think you're looking for something like this:
import json

with open('myfile.json', 'r') as jsonFile:
    pythonJSON = json.load(jsonFile)
print(pythonJSON.keys())
json.loads() is used when you have a string. If the example above doesn't work with json.load(), try reading the file's contents and passing them to json.loads().
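To make the difference concrete, a small sketch contrasting the two (the filename and string here are made up for the example):

import json

# json.load reads from an open file object
with open('myfile.json') as f:
    from_file = json.load(f)

# json.loads parses a string that is already in memory
from_string = json.loads('{"key": "value"}')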
json.load already gives you a dictionary. You just have to use it and iterate through _items
import json

data = open('acndata_sessions.json')
data_dict = json.load(data)

# Load items from dictionary
items = data_dict['_items']

# Iterate over items
for item in items:
    # Print the item
    print(item)

    # Or if you want to further iterate on user inputs present in this item
    user_inputs = item['userInputs']

    # Check if there are any user inputs present in this item
    if user_inputs:
        for user_input in user_inputs:
            print(user_input)
Try checking this question. You can parse a json file like this:
import json

# read file
with open('example.json', 'r') as myfile:
    data = myfile.read()

# parse file
obj = json.loads(data)

# show values
print(str(obj['_items']))  # returns a dict
print(str(obj['_meta']))

NLTK JSON data loading error

I'm trying to load a JSON data file in order to analyze it using the nltk framework, but I get an AttributeError: 'list' object has no attribute 'keys'. I have tried deleting the "json" argument at the end, since the documentation states that the data type is autodetected from the file extension. I also tried deleting the database variable at the beginning, to no avail. Any ideas where I might be stumbling?
import json
import nltk
database = nltk.data.load("data.json", "json")
After hours of research, it turns out NLTK does not accept JSON files whose top-level structure is a list rather than a dict. In order to access the data, the uppermost structure must be a dictionary with keys.
import json

jsonfile = open('data.json')
jsonstr = jsonfile.read()
jdata = json.loads(jsonstr)[0]
This allows one to access the first element of the list, which contains a dictionary, just like every other element of the list. One solution is to separate the elements of the list and load the dicts one at a time. I also suspect that passing sort_keys=True while encoding the JSON might make the uppermost structure a dictionary.
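Following that suggestion, a short sketch of separating the elements and handling each dict one at a time (data.json is the file from the question):

import json

with open('data.json') as f:
    records = json.load(f)  # top-level structure is a list

# Each element of the list is a dictionary with its own keys
for record in records:
    print(record.keys())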

Mongoexport exporting invalid json files

I collected some tweets from the Twitter API and stored them in MongoDB. I exported the data to a JSON file and didn't have any issues there, until I tried to write a Python script to read the JSON and convert it to a CSV. I get this traceback error with my code:
json.decoder.JSONDecodeError: Extra data: line 367 column 1 (char 9745)
So, after digging around the internet I was pointed to check the actual JSON data in an online validator, which I did. This gave me the error of:
Multiple JSON root elements
from the site https://jsonformatter.curiousconcept.com/
Here are pictures of the 1st/2nd object beginning/end of the file:
or a link to the data here
Now, the problem is, I haven't found anything on the internet of how to handle that error. I'm not sure if it's an error with the data I've collected, exported, or if I just don't know how to work with it.
My end game with these tweets is to make a network graph. I was looking at either Networkx or Gephi, which is why I'd like to get a csv file.
Robert Moskal is right. If you can address the issue at the source and use the --jsonArray flag when running mongoexport, it will make the problem easier, I guess. If you can't address it at the source, then read the points below.
The code below will extract the individual JSON objects from the given file and convert them to Python dictionaries.
You can then apply your CSV logic to each individual dictionary.
If you are using the csv module, I would suggest the unicodecsv module instead, as it handles the unicode data in your JSON objects.
import json

with open('path_to_your_json_file', 'r') as infile:
    json_block = []
    for line in infile:
        json_block.append(line)
        if line.startswith('}'):
            json_dict = json.loads(''.join(json_block))
            json_block = []
            print(json_dict)
If you want to convert it to CSV using pandas, you can use the code below:
import json
import pandas as pd

with open('path_to_your_json_file', 'r') as infile:
    json_block = []
    dictlist = []
    for line in infile:
        json_block.append(line)
        if line.startswith('}'):
            json_dict = json.loads(''.join(json_block))
            dictlist.append(json_dict)
            json_block = []

df = pd.DataFrame(dictlist)
df.to_csv('out.csv', encoding='utf-8')
If you want to flatten out the JSON objects, you can use the pandas.io.json.json_normalize() method (exposed as pandas.json_normalize in newer pandas versions).
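For illustration, a minimal sketch of that flattening step using the top-level pandas.json_normalize from recent pandas versions (the records here are made-up, tweet-like dicts standing in for the dictlist built above):

import pandas as pd

# Hypothetical nested records standing in for the parsed tweets
dictlist = [{'user': {'name': 'a'}, 'text': 'hi'},
            {'user': {'name': 'b'}, 'text': 'yo'}]

# Nested fields become dotted column names, e.g. 'user.name'
df = pd.json_normalize(dictlist)
print(df.columns.tolist())  # ['text', 'user.name']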
Elaborating on MYGz's suggestion to use --jsonArray:
Your post doesn't show how you exported the data from mongo. If you use the following via the terminal, you will get valid json from mongodb:
mongoexport --collection=somecollection --db=somedb --jsonArray --out=validfile.json
Replace somecollection, somedb and validfile.json with your target collection, target database, and desired output filename respectively.
The following:
mongoexport --collection=somecollection --db=somedb --out=validfile.json
...will NOT give you the results you are looking for, because:
"By default mongoexport writes data using one JSON document for every MongoDB document." (Ref)
A bit of a late reply, and I am not sure this was available at the time the question was posted. Anyway, there is now a simple way to import the mongoexport JSON data:
df = pd.read_json(filename, lines=True)
mongoexport writes each document as a JSON object on its own line, instead of the whole file being one JSON document, which is exactly what lines=True expects.
