I'm working with Python. I have a JSON structure in a dictionary and I have exported it to a file. Now I need to reload the structure from the file into a dictionary (in order to update it), but I'm experiencing some problems. This is my code:
# export the structure
with open('data.json', 'w') as f:
    data = {}
    data['test'] = '1'
    f.write(json.dumps(data))

# reload the structure
with open('data.json', 'r') as f:
    dict = {}
    dict = json.loads(f.read())
The error is: No JSON object could be decoded.
Try
with open('data.json', 'w') as f:
    f.write(json.dumps(data))

with open('data.json', 'r') as f:
    json.load(f)
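The key difference is that json.load reads from a file object directly, while json.loads parses a string you have already read. As a quick sanity check, a round trip might look like this (reloaded is just an illustrative name):

import json

data = {'test': '1'}

# write the dictionary out as JSON
with open('data.json', 'w') as f:
    json.dump(data, f)

# read it back into a new dictionary
with open('data.json', 'r') as f:
    reloaded = json.load(f)

print(reloaded['test'])  # prints 1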
I have the following list data I want to save in a json file to be accessed later:
data = [{"nomineesWidgetModel":{"title":"","description":"",
"refMarker":"ev_nom","eventEditionSummary":{"awards":[{"awardName":"Oscar","trivia":[]}]}}}]
If saved as txt:
for item in data:
    with open('./data/awards.txt', 'w', encoding='utf-8') as f:
        f.write(', '.join(str(item) for item in data))
Output:
{"nomineesWidgetModel":{"title":"","description":"","refMarker":"ev_nom",
"eventEditionSummary":{"awards":[{"awardName":"Oscar","trivia":[]}]}}}
But I get an error when opening the file later in Jupyter Notebook
If saved as JSON:
for item in data:
    with open('data.json', 'w', encoding='utf-8') as f:
        json.dump(item, f, ensure_ascii=False, indent=4)
Output with extra backslash:
"{\"nomineesWidgetModel\":{\"title\":\"\",\"description\":\"\",\"refMarker\":\"ev_nom\",
\"eventEditionSummary\":{\"awards\":[{\"awardName\":\"Oscar\",\"trivia\":[],}]}}
Is there a simpler way to do this without having to import the file and replace the extra slashes?
Just use json as usual:
import json
data = [{"nomineesWidgetModel":{"title":"","description":"", "refMarker":"ev_nom","eventEditionSummary":{"awards":[{"awardName":"Oscar","trivia":[]}]}}}]
with open('data.json', 'w') as f:
    json.dump(data, f, indent=4)
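To access it again later (for example in the notebook), the matching step is json.load; a minimal sketch, reusing the data.json written above:

import json

with open('data.json') as f:
    data = json.load(f)  # a list containing the single nomineesWidgetModel dict

print(data[0]['nomineesWidgetModel']['eventEditionSummary']['awards'][0]['awardName'])  # Oscar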
Thanks to @Alexander's explanation above, I was able to save the content I was scraping in a dict rather than a list, and then save it as JSON while iterating over the pages with:
with open('data.json', 'a') as file:
    json.dump(data, file, indent=1)
I have the code below to save a dictionary to csv in python. How can I load this back in?
There are a few answers on here, but none seem to work.
import csv

file = 'Location'
with open(file, 'w') as f:  # Just use 'w' mode in 3.x
    w = csv.DictWriter(f, mydic.keys())
    w.writeheader()
    w.writerow(mydic)
I would suggest saving the dictionary as a JSON file.
import json
with open('data.json', 'w') as fp:
    json.dump(mydic, fp)
To load the json file:
with open('data.json', 'r') as fp:
    mydic = json.load(fp)
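If you would rather keep the CSV you already wrote, a minimal sketch of the reverse step could use csv.DictReader (assuming the file holds one header row and one data row; note that every value comes back as a string):

import csv

file = 'Location'  # same placeholder path as in the question
with open(file, 'r', newline='') as f:
    reader = csv.DictReader(f)
    mydic = dict(next(reader))  # the single data row as a dict of strings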
I believe a way to deal with saving and loading a dictionary is to use pickle.
Saving:
import pickle
some_file = open("file", "wb")
my_dictionary = {"aa":"aa"}
pickle.dump(my_dictionary, some_file)
some_file.close()
Reading back:
import pickle
some_file = open("file", "rb")  # pickle data must be read in binary mode
my_dictionary = pickle.load(some_file)
some_file.close()
Please remember that using pickle is not safe when dealing with data received from an untrusted source.
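For completeness, the same round trip can be written with context managers so the files are closed automatically; a minimal sketch:

import pickle

my_dictionary = {"aa": "aa"}

with open("file", "wb") as out_file:
    pickle.dump(my_dictionary, out_file)

with open("file", "rb") as in_file:
    loaded = pickle.load(in_file)  # loaded == my_dictionary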
So I have a generated JSON file where each entry looks like this (there is a lot of it, just with unique IDs):
{
    "id": 1,
    "name": "name",
    "dep": "dep",
    "Title": "Title",
    "email": "email"
}
I'm trying to "append" a new field, but I get an error. My code:
with open('data.json', 'w') as file:
    json.dump(write_list, file)
    file.close()

with open('data.json', 'w') as json_file:
    entry = {'parentId': random.randrange(0, 487, 2)}
    json_file.append(entry, json_file)
    json_file.close()
Is there some way to add one more "key: value" pair to it after generating?
There are two issues:
You are using json.dump to generate a list, but you're not using json.load to re-create the Python data structure.
You're opening the file with the 'w' mode in the second open call, which truncates it.
Try breaking each step out on its own, separating mutating the data from writing it to disk.
with open('data.json', 'w') as file:
    json.dump(write_list, file)
    # file.close()  # manually closing files is unnecessary when using context managers

with open('data.json', 'r') as json_file:
    write_list = json.load(json_file)

entry = {'parentId': random.randrange(0, 487, 2)}
write_list.append(entry)

with open('data.json', 'w') as json_file:
    json.dump(write_list, json_file)
The steps to do what you want are as follows:
Parse the entire JSON file into a Python data structure.
Add the entry to it.
Serialize the data structure back to JSON.
Write the JSON back to the file.
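Put together, those steps could be wrapped in a small helper; this is just an illustrative sketch (add_entry is a made-up name, not part of the answer above):

import json

def add_entry(path, entry):
    # 1. parse the entire JSON file into a Python data structure
    with open(path, 'r') as f:
        items = json.load(f)
    # 2. add the entry
    items.append(entry)
    # 3. serialize back to JSON and 4. write it to the file
    with open(path, 'w') as f:
        json.dump(items, f)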
Also, after I implemented this with Tim McNamara's advice, I found a prettier way to add a new field to every JSON dict I have in the file:
for randomID in write_list:
    randomID['parentId'] = random.randrange(0, 487, 2)

with open('data.json', 'w') as file:
    json.dump(write_list, file)
Is there a way to append single JSON objects to a JSON file while in a for loop in Python? I would prefer not to store all my data in one giant JSON object and dump it all at once, as I am planning on performing millions of API requests. I would like to make a single API request, dump the result into a JSON file, and then move on to the next API request and dump that into the same JSON file.
The code below overwrites the JSON file; I am looking for something that appends.
for url in urls:
    r = sesh.get(url)
    data = r.json()
    with open('data.json', 'w') as outfile:
        json.dump(data, outfile)
Such that:
with open('data.json') as outfile:
    data = json.load(outfile)

type(data)
>> dict
r.json() looks something like this:
{'attribute1':1, 'attribute2':10}
Update
Well, since I don't have access to your API, I just placed some sample responses, in the format you supplied, inside an array.
import json
urls = ['{"attribute1":1, "attribute2":10}', '{"attribute1":67, "attribute2":32}', '{"attribute1":37, "attribute2":12}']

json_arr = []

for url in urls:
    data = json.loads(url)
    json_arr.append(data)

with open('data.json', 'w') as outfile:
    json.dump(json_arr, outfile)
Basically we keep an array and append each API response to that array. Then we can write the accumulated JSON to a file. Also, if you want to update the same JSON file on different executions of the code, you can just read the existing output file into an array at the beginning of the code, and then carry on with my example.
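That last idea might look roughly like this (a sketch, assuming data.json already holds a JSON array from a previous run):

import json
import os

json_arr = []
if os.path.exists('data.json'):
    with open('data.json') as infile:
        json_arr = json.load(infile)  # continue from the previously saved array

# ... append the new responses to json_arr as above, then re-dump the whole array
with open('data.json', 'w') as outfile:
    json.dump(json_arr, outfile)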
Change write mode to append
Try changing this:
with open('data.json', 'w') as outfile:
To this:
with open('data.json', 'a') as outfile:
The previous answer is surprisingly close to what you need to do.
So I will build upon it.
import json

json_arr = ['{"attribute1":1, "attribute2":10}', '{"attribute1":67, "attribute2":32}', '{"attribute1":37, "attribute2":12}']

with open('data.json', 'w') as outfile:
    outfile.write('[')

for i, element in enumerate(json_arr):
    with open('data.json', 'a') as outfile:  # append, so earlier objects are kept
        if i > 0:
            outfile.write(',')
        json.dump(json.loads(element), outfile)

with open('data.json', 'a') as outfile:
    outfile.write(']')
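If hand-managing the brackets and commas gets fiddly, another option (my own suggestion, not taken from the answers above) is to write one JSON object per line, the JSON Lines convention, so the file can be appended to inside the loop and read back object by object:

import json

# write: append one object per line as each API response arrives
record = {'attribute1': 1, 'attribute2': 10}
with open('data.jsonl', 'a') as outfile:  # data.jsonl is an assumed filename
    outfile.write(json.dumps(record) + '\n')

# read: rebuild the list of dicts later
with open('data.jsonl') as infile:
    records = [json.loads(line) for line in infile]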
In Python 3.3:
import json
peinaw = {"hi":4,"pordi":6}
json_data = open('data.json')
json.dump(peinaw, json_data)
json_data.close()
I get:
File "C:\Python33\lib\json\__init__.py", line 179, in dump
fp.write(chunk)
io.UnsupportedOperation: not writable
I tried the same thing in 2.7 and it works. Is there a different way in 3.3?
>>> import json
>>> peinaw = {"hi":4,"pordi":6}
>>> with open('data.json', 'w') as json_data: # 'w' to open for writing
...     json.dump(peinaw, json_data)
I used a with statement here, where the file is automatically .close()d at the end of the with block.
You are not opening the file for writing; the file is opened in read mode. To verify, do this:
json_data = open('data.json')
print(json_data)  # should work with 2.x and 3.x
To solve the problem, just open the file in write mode.
json_data = open('data.json', 'w')
Also, you should use the with statement when working with files.
with open('data.json', 'w') as json_data:
    json.dump(peinaw, json_data)
You need to open the file for writing; use the 'w' mode parameter:
json_data = open('data.json', 'w')