Export dictionary in JSON using Python - python

I have a JSON file (stored in database.txt) that I want to modify using a Python dictionary in the method addEvent():
def addEvent(eventName, start, end, place):
    newdict = {}  # make a new dictionary
    newdict["NAME"] = eventName
    newdict["START"] = start
    newdict["END"] = end
    newdict["place"] = place
    try:
        with open("database.txt", 'r') as file:
            content = file.read()
        dico = json.loads(content)  # make a dictionary from the original JSON file
        liste = dico["event"]
        liste.append(newdict)
        dico["event"] = liste  # combine the two dictionaries
        with open("database.txt", 'w') as file:
            file.write(str(dico))  # save my new JSON file
    except:
        ...
My problem:
I can run this method only once; the second time I get this error message:
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
The addEvent() method modifies my database.txt file: it no longer contains double quotes but apostrophes (single quotes), so I can't use dico = json.loads(content) a second time.
My question: Did I correctly save my JSON file? How can I keep the JSON format in my database.txt file (keep the double quotes)?

You produced Python syntax by converting the object with str(); Python strings can use either single or double quotes. To produce a JSON string, use json.dumps():
file.write(json.dumps(dico))
The json module has variants of json.loads() and json.dumps() that work directly on file objects; there is no need to read or write the file contents yourself. Just use the function names without the trailing s, json.load() and json.dump():
with open("database.txt", 'r') as file:
d = json.load(file)
and
with open("database.txt", 'w') as file:
json.dump(dico, file)
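Putting the two together, a minimal sketch of addEvent() using json.load()/json.dump() (assuming database.txt already contains an object with an "event" list):
import json

def addEvent(eventName, start, end, place):
    newdict = {"NAME": eventName, "START": start, "END": end, "place": place}
    with open("database.txt", 'r') as file:
        dico = json.load(file)     # parse the existing JSON object
    dico["event"].append(newdict)  # add the new event to the list
    with open("database.txt", 'w') as file:
        json.dump(dico, file)      # written with double quotes, so it stays valid JSON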

The problem is here:
file.write(str(dico)) #save my new JSON file
Use json.dumps instead:
file.write(json.dumps(dico)) #save my new JSON file

Related

How to read a file with dict objects not separated by commas into a dataframe, and is this data in JSON-LD format?

I have a JSON file (file1.json) that looks like this:
{"key":"value","key":"value"}
{"key":"value","key":"value"}
I can't read this file with pd.read_json('filepath/file1.json'); I get the error:
ValueError: Expected object or value
I tried opening the file and reading it in with open like so:
with open('~/filepath/file1.json') as f:
    json_data = pd.json_normalize(json.loads(f.read()), encodings='utf-8-sig')
But I get error:
json.decoder.JSONDecodeError: Unexpected UTF-8 BOM (decode using utf-8-sig): line 1 column 1 (char 0)
However, when I remove all rows except the top 2 from the file and add a comma like so:
{"key":"value","key":"value"},
{"key":"value","key":"value"}
I can read in the file as is with pd.read_json().
How can I read in the file with non-comma-separated dict objects, one per line?
Thanks in advance.
I believe you must use json.JSONDecoder()
Check this:
complex json file to csv in python
and this:
how to analyze json objects that are NOT separated by comma (preferably in Python)
You have to read the JSON, put the dicts in a list, and then use pd.json_normalize().
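For example, a minimal sketch along those lines, assuming one JSON object per line in file1.json and opening the file with utf-8-sig so the BOM is stripped:
import json
import pandas as pd

records = []
with open('file1.json', encoding='utf-8-sig') as f:  # utf-8-sig swallows the BOM
    for line in f:
        line = line.strip()
        if line:                              # skip blank lines
            records.append(json.loads(line))  # one dict per line

df = pd.json_normalize(records)               # flatten the list of dicts into a DataFrame
If the BOM were not in the way, pd.read_json('file1.json', lines=True) can also read newline-delimited JSON directly.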

How to convert a string/value in a file to proper JSON and add a new key?

I have a JSON file that contains a string. I want to update my JSON file with a key-value pair such that I can add a key corresponding to that string. I want to do this only if the file contains just a string/value.
At the same time, I want to add a new column id where id is a number that will automatically be updated based on the number of files provided.
I am not sure what to do in that case :(
How can we write a Python script to make the following changes?
Example:
File1.json contains
"I\nhave\na\ncat"
Expected output: (File1.json)
{id: "1", string :"I\nhave\na\ncat"}
File2.json
"I\nhave\na\ndream"
Expected output: (File2.json)
{id: "2", string :"I\nhave\na\ndream"}
To work with JSON data and files in Python, you use the json module. It has methods for:
load-ing JSON data from files and converting them to Python objects
dump-ing Python objects to files in JSON format
In Python, a JSON object is represented as a regular dictionary, so you simply have to read the string from the file, turn it into a dictionary, add any other key-value pairs you want (or otherwise modify the data), then dump it back to the file.
Now since you only want to do this conversion if the file contains only a string, you can first do json.load, then use isinstance to check if it was converted to a dictionary. If yes, then it's already in proper JSON so you don't have to do anything. If no, and it was converted to a string, then continue with the processing.
Lastly, since you want to overwrite the same file, open the file in "r+" mode for reading and writing.
import json

# Assume script and files are in the same directory
filepaths = [
    "File1.json",  # Contains "I\nhave\na\ncat"
    "File2.json",  # Contains "I\nhave\na\ndream"
    "File3.json",  # Correct JSON
]

# Process each file one-by-one; ids start at 1 to match the expected output
for file_id, filepath in enumerate(filepaths, start=1):
    with open(filepath, "r+") as f:
        contents = json.load(f)
        if isinstance(contents, dict):
            # This is already in JSON, nothing to fix
            continue
        # We need to fix the file
        data = {"id": str(file_id), "string": contents}
        f.seek(0)
        json.dump(data, f)

Trying to read my .json data in python but keep getting an error

I have simple code to read my JSON data in Python and convert it to a dictionary so I can use it for sqlite. However, I keep running into this error.
Code I ran:
import json

with open("users.json", "r") as f:
    json_str = f.read()
    json_value = json.loads(json_str)
    print(type(json_value))
I have a set of data like this:
{"_id":{"$od":"5f12"},"ready":true,"createdon":{"$date":1609687444800},"lastLogin":{"$date":1858},"position":"consumer","signUp":"Email","state":"WI"}
{"_id":{"$od":"5f12"},"ready":true,"createdon":{"$date":1609687444800},"lastLogin":{"$date":1858},"position":"consumer","signUp":"Email","state":"WI"}
The error I get:
Traceback (most recent call last):
  File "json_to_sqlite.py", line 5, in <module>
    json_value = json.loads(json_str)
  File "/Users/malaba/opt/anaconda3/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/Users/malaba/opt/anaconda3/lib/python3.8/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 186)
@rv.kvetch is right: the problem is that your users.json file has two distinct JSON strings, i.e. you have two dictionaries stored in your file when you can have at most one. An easy way to fix this is to wrap the dictionaries in a list (hence, you have one JSON object that holds all your other objects). The newly formatted users.json file would look something like this:
[{"_id":{"$od":"5f12"},"ready":true,"createdon":{"$date":1609687444800},"lastLogin":{"$date":1858},"position":"consumer","signUp":"Email","state":"WI"},
{"_id":{"$od":"5f12"},"ready":true,"createdon":{"$date":1609687444800},"lastLogin":{"$date":1858},"position":"consumer","signUp":"Email","state":"WI"}]
Please take note of the opening and closing brackets and the extra comma separating the dictionaries in your new file. Reading the JSON file would be the same procedure as before, but if you wanted to pull a specific dictionary out of the returned JSON, you would need to index the list accordingly. More details on the json module can be found here.
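For example, a small sketch of reading the wrapped file back and indexing it (the field name is taken from your sample data):
import json

with open("users.json", "r") as f:
    users = json.load(f)    # now a list of dicts

first = users[0]            # index the list to get a single record
print(first["state"])       # -> WI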
EDIT:
If you have a large data file, this sort of operation is not feasible to do manually, so you will have to get clever and look for additional structure in the file. For example, if you know that each dictionary in the file is separated by a newline and there are no other newline characters in your file (similar to the example you provided), then you can automate this conversion like so:
import json

with open("users.json", "r") as f:
    new = [json.loads(x) for x in f.read().splitlines()]

with open("users.json", "w") as f:
    json.dump(new, f)

Can I replace part of a string in a JSON key in Python?

This is my first question here. I'm new to Python and trying to figure some things out so I can set up an automatic 3D model processing chain that relies on data being stored in JSON files moving from one server to another.
The problem is that I need to store absolute paths to the files that are being processed, but these absolute paths should be modified in the original JSON files the first time they are processed.
Basically the JSON file comes in like this:
{
    "normaldir": "D:\\Outgoing\\1621_1\\",
    "projectdir": "D:\\Outgoing\\1622_2\\"
}
And I would like to rename the file paths to
{
    "normaldir": "X:\\Incoming\\1621_1\\",
    "projectdir": "X:\\Incoming\\1622_2\\"
}
What I've been trying to do is replace the first part of the path using this code, but it isn't working:
def processscan(scanfile):
    configfile = MonitorDirectory + scanfile
    with open(configfile, 'r+') as file:
        content = file.read()
        file.seek(0)
        content.replace("D:\\Outgoing\\", "X:\\Incoming\\")
        file.write(content)
However this was not working at all, so I tried interpreting the JSON file properly and replacing the key, using the code from here:
def processscan(scanfile):
    configfile = MonitorDirectory + scanfile
    with open(configfile, 'r+') as settingsData:
        settings = json.load(settingsData)
        settings['normaldir'] = 'X:\\Incoming\\1621_1\\'
        settings['projectdir'] = 'X:\\Incoming\\1622_2\\'
        settingsData.seek(0)  # rewind to beginning of file
        settingsData.write(json.dumps(settings, indent=2, sort_keys=True))  # write the updated version
        settingsData.truncate()  # truncate the remainder of the data in the file
This works perfectly; however, I'm replacing the whole path, so it won't really work for every JSON file that I need to process. What I would really like to do is take a JSON value corresponding to a file path, keep the last 8 characters, and replace the rest of the path with a new string, but I can't figure out how to do this using json in Python; as far as I can tell I can't edit part of a value.
Does anyone have a workaround for this?
Thanks!
Your replace logic fails because you need to reassign content to the new string; str.replace is not an in-place operation, it creates a new string:
content = content.replace("D:\\Outgoing\\", "X:\\Incoming\\")
Using the json approach, just do a replace too, using the current value:
settings['normaldir'] = settings['normaldir'].replace("D:\\Outgoing\\", "X:\\Incoming\\")
You would also want to truncate() before you write, or just reopen the file with w and dump/write the new value. If you really wanted to just keep the last 8 characters and prepend a string:
settings['normaldir'] = "X:\\Incoming\\" + settings['normaldir'][-8:]
Python comes with a json library.
With this library, you can read and write JSON files (or JSON strings).
Parsed data is converted to Python objects and vice versa.
To use the json library, simply import it:
import json
Say your data is stored in an input_data.json file.
input_data_path = "input_data.json"
You read the file like this:
import io

with io.open(input_data_path, mode="rb") as fd:
    obj = json.load(fd)
or, alternatively:
with io.open(input_data_path, mode="rb") as fd:
    content = fd.read()
    obj = json.loads(content)
Your data is automatically converted into Python objects; here you get a dict:
print(repr(obj))
# {u'projectdir': u'D:\\Outgoing\\1622_2\\',
# u'normaldir': u'D:\\Outgoing\\1621_1\\'}
note: I'm using Python 2.7 so you get the unicode string prefixed by "u", like u'projectdir'.
It's now easy to change the values for normaldir and projectdir:
obj["normaldir"] = "X:\\Incoming\\1621_1\\"
obj["projectdir"] = "X:\\Incoming\\1622_2\\"
Since obj is a dict, you can also use the update method like this:
obj.update({'normaldir': "X:\\Incoming\\1621_1\\",
            'projectdir': "X:\\Incoming\\1622_2\\"})
That way, you use a syntax similar to JSON.
Finally, you can write your Python object back to a JSON file:
output_data_path = "output_data.json"

with io.open(output_data_path, mode="wb") as fd:
    json.dump(obj, fd)
or, alternatively with indentation:
content = json.dumps(obj, indent=True)

with io.open(output_data_path, mode="wb") as fd:
    fd.write(content)
Remarks: reading/writing JSON objects is faster with a buffer (the content variable).
.replace returns a new string and doesn't change the original. But you should not treat JSON files as plain text files, so you can combine parsing the JSON with the replace:
def processscan(scanfile):
    configfile = MonitorDirectory + scanfile
    with open(configfile, 'rb') as settingsData:
        settings = json.load(settingsData)
    settings = {k: v.replace("D:\\Outgoing\\", "X:\\Incoming\\")
                for k, v in settings.items()}
    with open(configfile, 'wb') as settingsData:
        json.dump(settings, settingsData)

How to make python use double quotes while writing a string to a file

I have a list in python
L = [[u'2014-12-02', 727.75], [u'2014-12-01', 733.65]]
to be written to a text file.
I want the file to contain
[["2014-12-02", 727.75], ["2014-12-01", 733.65]]
If I write file.write(str(L)), then
[['2014-12-02', 727.75], ['2014-12-01', 733.65]]
will be written to the file.
Since the expected output is valid JSON, you can try:
import json
L = [[u'2014-12-02', 727.75], [u'2014-12-01', 733.65]]
with open("outfile", "w") as fdesc:
json.dump(L, fdesc)
You might want to add fdesc.write('\n') after the json.dump() call.
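Equivalently, if you prefer to build the string yourself before writing, json.dumps() produces the same double-quoted output (a small sketch, including the trailing newline mentioned above):
import json

L = [[u'2014-12-02', 727.75], [u'2014-12-01', 733.65]]
with open("outfile", "w") as fdesc:
    fdesc.write(json.dumps(L))  # [["2014-12-02", 727.75], ["2014-12-01", 733.65]]
    fdesc.write('\n')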
