I want to read a JSON file in Python and add content to it while respecting the JSON format, but I can't manage to do it.
The JSON file "paises.json":
{
    "España": [
        {
            "Superficie": 505944,
            "Población": 47450795
        }
    ],
    "Francia": [
        {
            "Superficie": 675417,
            "Población": 67407241
        }
    ]
}
I want to read that file in Python and append new content to it. The code is:
import json
with open("paises.json", "r", encoding="utf-8") as paises:
    datos = json.load(paises)
print(datos.keys()) # Show dictionary
nombre_pais = input("\nIndique el nombre del país\n")
nueva_superficie = float(input("\nIndique la superficie\n"))
nueva_poblacion = int(input("\nIndique la Población\n"))
nuevo_contenido = {nombre_pais: {"Superficie": nueva_superficie, "Población": nueva_poblacion}}
datos.update(nuevo_contenido)
print(datos) # Show dictionary with new content
with open("paises.json", "a", encoding="utf-8") as paises:
    json.dump(nuevo_contenido, paises)
The result is not what I expected:
{
    "España": [
        {
            "Superficie": 505944,
            "Población": 47450795
        }
    ],
    "Francia": [
        {
            "Superficie": 675417,
            "Población": 67407241
        }
    ]
}
{"Portugal": {"Superficie": 92090, "Poblaci\u00f3n": 10295909}}
The formatting is not correct: a comma and a square bracket are missing, and the encoding of the accents is not right.
What can I do to correct it?
Thank you
You can't append new data directly to a JSON file, because that produces invalid JSON.
(The exception is when you deliberately want a multi-JSON file.)
With normal JSON you have to:
read all data from the file into memory,
append the new values in memory,
write all data (datos) back from memory to the file in write mode, not append mode.
with open("paises.json", "w", encoding="utf-8") as paises:
    json.dump(datos, paises)
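For completeness, here is a minimal sketch of the whole read-update-write flow, reusing your variable names. ensure_ascii=False keeps the accented characters readable instead of escaped, and indent=4 keeps the file formatted; wrapping the new entry in a one-element list is only an assumption made here so it matches the structure of the existing entries.

import json

# 1. Read all data from the file into memory
with open("paises.json", "r", encoding="utf-8") as paises:
    datos = json.load(paises)

# 2. Add the new values in memory
nombre_pais = input("\nIndique el nombre del país\n")
nueva_superficie = float(input("\nIndique la superficie\n"))
nueva_poblacion = int(input("\nIndique la Población\n"))
# The one-element list mirrors the existing entries; drop it if you don't want that
datos[nombre_pais] = [{"Superficie": nueva_superficie, "Población": nueva_poblacion}]

# 3. Write everything back in write mode ("w"), not append mode
with open("paises.json", "w", encoding="utf-8") as paises:
    json.dump(datos, paises, ensure_ascii=False, indent=4)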
I have a JSON file with 10,000 data entries like the one below.
{
    "1": {
        "name": "0",
        "description": "",
        "image": ""
    },
    "2": {
        "name": "1",
        "description": "",
        "image": ""
    },
    ...
}
I need to write each entry in this object into its own file.
For example, each output file should look like this:
1.json
{
    "name": "",
    "description": "",
    "image": ""
}
I have the following code, but I'm not sure how to proceed from here. Can anyone help with this?
import json
with open('sample.json', 'r') as openfile:
    # Reading from json file
    json_object = json.load(openfile)
You can use a for loop to iterate over all the fields in the outer object, and then create a new file for each inner object:
import json
with open('sample.json', 'r') as input_file:
    json_object = json.load(input_file)

for key, value in json_object.items():
    with open(f'{key}.json', 'w') as output_file:
        json.dump(value, output_file)
I have a JSON file that has an array of objects like this:
{
  "array": [
    {
      "foo1": "bar",
      "spam1": "eggs"
    },
    {
      "foo2": "bar",
      "spam2": "eggs"
    },
    {
      "foo3": "bar",
      "spam3": "eggs"
    }
  ]
}
What I'm trying to do in Python is read the JSON file, remove an element from the array, and then write the contents back to the file. I expect the file to be exactly the same, just without that element, but when I write the contents back they are corrupted in a weird way.
When I run this code:
import json
CONTENTS = {
    "array": [
        {
            "foo1": "bar",
            "spam1": "eggs"
        },
        {
            "foo2": "bar",
            "spam2": "eggs"
        },
        {
            "foo3": "bar",
            "spam3": "eggs"
        }
    ]
}

# Write that object to file
with open("file.json", "w") as file:
    json.dump(CONTENTS, file, indent=2)

# You can check here to see the file
input()

# Modify the file
with open("file.json", "r+") as file:
    contents = json.load(file)
    file.seek(0)
    print(contents)
    del contents["array"][-1]  # Delete the last object of the array
    print(contents)
    json.dump(contents, file, indent=2)
The file after the second open is exactly like this:
{
  "array": [
    {
      "foo1": "bar",
      "spam1": "eggs"
    },
    {
      "foo2": "bar",
      "spam2": "eggs"
    }
  ]
}{
      "foo3": "bar",
      "spam3": "eggs"
    }
  ]
}
As I said, I was expecting the file to be the same, just without the last object of the array, but instead it is... wrong.
Am I actually doing something wrong? I had no problem changing an object's field or appending an object to that same array in the same with block or with the same file descriptor.
My questions are: What am I doing wrong? Is the problem the fact that I read AND write to the file? How can I fix it, besides doing this:
with open("file.json", "r+") as file:
    contents = json.load(file)
    del contents["array"][-1]  # Delete the last object of the array

with open("file.json", "w") as file:
    json.dump(contents, file, indent=2)
After you overwrite the file you need to truncate it to remove the excess JSON at the end.
with open("file.json", "r+") as file:
    contents = json.load(file)
    file.seek(0)
    print(contents)
    del contents["array"][-1]  # Delete the last object of the array
    print(contents)
    json.dump(contents, file, indent=2)
    file.truncate()
I've been trying to figure out a way to store proxy data in JSON form. I know the easier way is to just take each proxy from the text box, save it to a file, and then load the information back from the file to access it, but I want to have groups that work with different types of IPs. Say, for example, one group uses proxy IPs from a certain provider and another group uses IPs from a different one; I need to store the IPs in their respective groups, which is why I think I need a JSON file that stores each set of proxies in its own JSON array. What I'm having trouble with is adding the IPs to the JSON array, as I am trying to loop over a transfer file containing the IPs and then add each one to the JSON array. As of now I tried this:
def save_proxy():
    proxy = pooled_data_default.get('1.0', 'end-2c')

    transfer_file = open('proxies.txt', 'w')
    transfer_file.write(proxy)
    transfer_file.close()

    transfer_file1 = open('proxies.txt', 'r')

    try:
        with open('proxy_groups.txt', 'r+') as file:
            proxy_group = json.load(file)
    except:
        proxy_group = []

    total = []
    for line in transfer_file1:
        line = transfer_file1.readline().strip()
        total.append(line)

    proxy_group.append({
        'group_name': pool_info.get(),
        'proxy': [{
            'proxy': total,
        }]
    }),

    with open('proxy_groups.txt', 'w') as outfile:
        json.dump(proxy_group, outfile, indent=4)
This doesn't work, but it was my attempt at taking each line from the file and adding it to the JSON array dynamically. Any help is appreciated.
EDIT: this is what is being outputted:
[
    {
        "group_name": "Defualt",
        "proxy": [
            {
                "proxy": [
                    "asdf",
                    ""
                ]
            }
        ]
    }
]
This was the input
wdsa
asdf
sfs
It seems that it is only selecting the middle one of the three. I thought that printing the list of them would work, but it still prints only the middle one and then a blank entry at the end.
As an example of my data, the input to the text box may be:
wkenwwins:1000:username:password
uwhsuh:1000:username:password
2ewswsd:1000:username:password
gfrfccv:1000:username:password
The selected group which I may want to save this to could be called 'Default'. I select Default, and clicking save should then add these inputs to the separate text file called 'proxies.txt', which it does. From that text file I then want to loop through each line and append each line to the JSON data, which it doesn't do. Here is what I expect it to look like in the JSON data:
[
    {
        "group_name": "Defualt",
        "proxy": [
            {
                "proxy": [
                    'ewswsd:1000:username:password',
                    'wkenwwins:1000:username:password',
                    'uwhsuh:1000:username:password'
                ]
            }
        ]
    }
]
So then, say I made two groups, the JSON data text file should look like this:
[
    {
        "group_name": "Defualt",
        "proxy": [
            {
                "proxy": [
                    'ewswsd:1000:username:password',
                    'wkenwwins:1000:username:password',
                    'uwhsuh:1000:username:password'
                ]
            }
        ]
    }
]
[
    {
        "group_name": "Test",
        "proxy": [
            {
                "proxy": [
                    'ewswsd:1000:username:password',
                    'wkenwwins:1000:username:password',
                    'uwhsuh:1000:username:password'
                ]
            }
        ]
    }
]
This is so I can access each group just by its group name.
The reason you only get the middle line (plus a blank entry) is that the loop mixes for line in transfer_file1 with transfer_file1.readline(), so the file advances twice per iteration and every other line is skipped. You can simplify save_proxy() as below:
def save_proxy():
    proxy = pooled_data_default.get('1.0', 'end-1c')

    # save the proxies to file
    with open('proxies.txt', 'w') as transfer_file:
        transfer_file.write(proxy)

    # load the proxy_groups if exists
    try:
        with open('proxy_groups.txt', 'r') as file:
            proxy_group = json.load(file)
    except:
        proxy_group = []

    proxy_group.append({
        'group_name': pool_info.get(),
        'proxy': proxy.splitlines()
    })

    with open('proxy_groups.txt', 'w') as outfile:
        json.dump(proxy_group, outfile, indent=4)
The output file proxy_groups.txt would look like below:
[
    {
        "group_name": "default",
        "proxy": [
            "wkenwwins:1000:username:password",
            "uwhsuh:1000:username:password"
        ]
    }
]
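Since you mention wanting to access each group by its group name, here is a minimal lookup sketch. The helper name get_group_proxies is just for illustration (not part of your code), and it assumes proxy_groups.txt has the structure shown above:

import json

def get_group_proxies(group_name):
    # Illustrative helper: load the list of group objects written by save_proxy()
    with open('proxy_groups.txt', 'r') as file:
        proxy_groups = json.load(file)
    # Return the proxy list of the first group whose name matches
    for group in proxy_groups:
        if group['group_name'] == group_name:
            return group['proxy']
    # No such group
    return []

# Usage: proxies = get_group_proxies('default')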
I have a JSON file that looks like this:
{
    "issueInfo" : [ {
        "cid" : 494960,
        "occurrences" : [ {
            "file" : "/components/applications/diag/_common/src/diag_il.c",
            "function" : "diag_il_u8StopLoopbackMicIn",
            "mainEventLineNumber" : 6018,
            "mainEventDescription" : "Assigning value \"10\" to \"u8ResData\" here, but that stored value is overwritten before it can be used.",
        } ],
        "triage" : {
            "classification" : "Unclassified"
        },
    }
I want to extract information such as cid, firstDetectedDateTime, file, function, mainEventLineNumber, mainEventDescription and classification. All of this information will be put into a CSV file. The following is my code:
import csv
import json
with open ("a.log","r") as file:
data=json.load(file)
f=csv.writer(open("test.csv", "w", newline=''))
f.writerow(["cid", "firstDetectedDateTime", "file", "function",
"mainEventLineNumber", "mainEventDescription", "classification"])
for data in file:
f.writerow(data["issueInfo"]["cid"],
data["issueInfo"]["firstDetectedDateTime"],
data["issueInfo"]["occurrences"]["file"],
data["issueInfo"]["occurrences"]["function"],
data["issueInfo"]["occurrences"]["mainEventLineNumber"],
data["issueInfo"]["occurrences"]["mainEventDescription"],
data["issueInfo"]["triage"]["classification"])
The error shown after I run the command is :
TypeError: string indices must be integers
Can anyone help me solve this problem? Thanks.
Check the type of data (it must be a dictionary). Also, firstDetectedDateTime is not a valid key in your JSON, so using it would raise a key error.
Try this,
import csv
import json
with open ("a.log","r") as file:
data=json.load(file)
f=csv.writer(open("test.csv", "w", newline=''))
f.writerow(["cid", "firstDetectedDateTime", "file", "function","mainEventLineNumber","mainEventDescription", "classification"])
f.writerow([data["issueInfo"][0]["cid"],
"",
data["issueInfo"][0]["occurrences"][0]["file"],
data["issueInfo"][0]["occurrences"][0]["function"],
data["issueInfo"][0]["occurrences"][0]["mainEventLineNumber"],
data["issueInfo"][0]["occurrences"][0]["mainEventDescription"],
data["issueInfo"][0]["triage"]["classification"]])
The output CSV looks like this:
cid,firstDetectedDateTime,file,function,mainEventLineNumber,mainEventDescription,classification
494960,,/components/applications/diag/_common/src/diag_il.c,diag_il_u8StopLoopbackMicIn,6018,"Assigning value ""10"" to ""u8ResData"" here, but that stored value is overwritten before it can be used.",Unclassified
If the input contains many JSON sets (e.g. data_sets here), keep the header row fixed and only change the portion below it:
for data in data_sets:
    f.writerow([data["issueInfo"][0]["cid"],
                "",
                data["issueInfo"][0]["occurrences"][0]["file"],
                data["issueInfo"][0]["occurrences"][0]["function"],
                data["issueInfo"][0]["occurrences"][0]["mainEventLineNumber"],
                data["issueInfo"][0]["occurrences"][0]["mainEventDescription"],
                data["issueInfo"][0]["triage"]["classification"]])
The json library in Python can parse JSON from strings or files; it parses the JSON into a Python dictionary or list.
json.loads() parses JSON string data, and the result can be used like a normal dictionary in Python, so we can access the values using keys.
import json
import csv
employee_data = '{"employee_details":[{"employee_name": "James", "email": "james#gmail.com", "job_profile": "Sr. Developer"},{"employee_name": "Smith", "email": "Smith#gmail.com", "job_profile": "Project Lead"}]}'

employee_parsed = json.loads(employee_data)
emp_data = employee_parsed['employee_details']

# open a file for writing
employ_data = open('..../EmployData.csv', 'w')

# create the csv writer object
csvwriter = csv.writer(employ_data)

count = 0
for emp in emp_data:
    if count == 0:
        header = emp.keys()
        csvwriter.writerow(header)
        count += 1
    csvwriter.writerow(emp.values())

employ_data.close()
I have a JSON file which is not in the correct format (I think?). I have these blocks of JSON, but in between there is a comma, as you can see below (,{).
How can I parse this file and extract only the JSON parts, excluding the commas?
{
    "maps":[
        {"id":"blabla","iscategorical":"0"},
        {"id":"blabla","iscategorical":"0"}
    ],
    "masks":
        {"id":"valore"},
    "om_points":"value",
    "parameters":
        {"id":"valore"}
}
,{
    "maps":[
        {"id":"blabla", "iscategorical":"0"},
        {"id":"blabla", "iscategorical":"0"}
    ],
    "masks":
        {"id":"valore"},
    "om_points":"value",
    "parameters":
        {"id":"valore"}
}
You can read the file as a string and wrap it in [...] to make it valid JSON.
import json
with open(fname, 'r') as fp:
    text = fp.read()
data = json.loads("["+text+"]")
Now data would contain a list of your json objects.
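As a quick usage sketch (using only the keys from the sample above), each element of data is then an ordinary dictionary:

# `data` is the list built above; each element is one of the JSON blocks
for obj in data:
    print(obj["om_points"])                        # "value"
    for entry in obj["maps"]:
        print(entry["id"], entry["iscategorical"])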