I want to print out some values from my JSON file, and here is part of the file:
{
"records" : [
{
"dateRep" : "02/05/2021",
"day" : "02",
"month" : "05",
"year" : "2021",
"cases" : 1655,
"deaths" : 16,
"countriesAndTerritories" : "Austria",
"geoId" : "AT",
"countryterritoryCode" : "AUT",
"popData2020" : "8901064",
"continentExp" : "Europe"
},
How can I print dateRep, cases, deaths, or any value I want, and put it in a variable so I can use it?
I am using this code to load my json file:
f = open('DailyUpdatedReport.json',)
data = json.load(f)
print(data)
My other issue: my JSON file needs to stay up to date and I don't know how to do that. Here is the URL for the JSON file I am using:
https://opendata.ecdc.europa.eu/covid19/nationalcasedeath_eueea_daily_ei/json/
You would do this just how you would get data from any other dictionary in Python. I'll put an example below, and you should be able to generalize from there. In response to your second question, I don't understand what you mean by needing to update your data. Couldn't you just copy and paste the data from the link you posted into a JSON file?
import json

with open('DailyUpdatedReport.json') as f:
    data = json.load(f)

# ['records']: enter the records entry in data
# [0]: go into the first record
# ['dateRep']: read a value from that record (in this case dateRep)
dateRep = data['records'][0]['dateRep']
print(dateRep)
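For the second part, keeping the file up to date, one option is to re-download it from the URL before loading it (a minimal sketch, assuming the endpoint serves the JSON file directly):

import json
import urllib.request

# Sketch only: fetch the latest report from the ECDC endpoint, overwriting the local copy.
url = "https://opendata.ecdc.europa.eu/covid19/nationalcasedeath_eueea_daily_ei/json/"
urllib.request.urlretrieve(url, "DailyUpdatedReport.json")

with open("DailyUpdatedReport.json") as f:
    data = json.load(f)

print(data["records"][0]["dateRep"])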
I have an existing question which may have been too big initially, so I am breaking it down into smaller questions that I can piece together. The related question is Python JSON transformation from explicit to generic by configuration.
I want to process some transformations using a configuration list: it would try each listed JSON path and, where it can successfully create an object, append that object to an array called characteristics.
INPUT DATA
This data is a single value from an explicit data payload which has over 2,500 values individually defined at schema level. It's unwieldy, hence we want to transform it into a more data-driven object which can be maintained through configuration. Unfortunately, we have no control over the input data, else we would ask for it to arrive in the preferred generic shape instead.
data = {
    "activities_acceptance" : {
        "contractors_sub_contractors" : {
            "contractors_subcontractors_engaged" : "yes"
        }
    }
}
CONFIGURATION JSON
This configuration example is used to create an object with a category and a type and, for this example, adds a single value. The set_value list may be used to add more than one mapped value from the origin data.
config = {
    "processing_configuration" : [
        {
            "origin_path" : "activities_acceptance.contractors_sub_contractors",
            "set_category" : "business-activities",
            "set_type" : "contractors-sub-contractors-engaged",
            "set_value" : [
                {
                    "use_value" : "contractors_subcontractors_engaged",
                    "set_value" : "value"
                }
            ]
        }
    ]
}
MANUAL METHOD in PYTHON
This script currently works but requires me to repeat it for every generic object I want to create. I need the configuration JSON to be looped through instead, which reduces the script and lets new data be added through configuration management alone.
# Create Business Characteristics
business_characteristics = {
    "characteristics" : []
}

# Create Characteristics - Business - Liability
try:
    acc_liability = {
        "category" : "business-activities",
        "type" : "contractors-sub-contractors-engaged",
        "description" : "",
        "value" : "",
        "details" : ""
    }
    acc_liability['value'] = data['line_of_businesses'][0]['assets']['commercial_operations'][0]['liability_asset']['acceptance']['contractors_and_subcontractors']['contractors_and_subcontractors_engaged']
    business_characteristics['characteristics'].append(acc_liability)
except:
    acc_liability = {}
What I'm trying to do is set the path for acc_liability['value'] using the configuration JSON, as shown below.
Note that I used a . separator for the entire path to avoid writing all the [''] brackets throughout the configuration file, so not only do I need the path read from the configuration, but each path segment also needs to be wrapped in ['']. If that complicates things from a script perspective, I'll just use the full path as I entered it in the manual version.
DYNAMIC VERSION - currently not working and need help with
# Create Business Characteristics
business_characteristics = {
    "characteristics" : []
}

# Create Characteristics - Business - Liability
try:
    acc_liability = {
        "category" : "",
        "type" : "",
        "description" : "",
        "value" : "",
        "details" : ""
    }
    acc_liability['category'] = config['processing_configuration'][0]['set_category']
    acc_liability['type'] = config['processing_configuration'][0]['set_type']
    acc_liability['value'] = data config['processing_configuration'][0]['origin_path'] + config['set_value']
    business_characteristics['characteristics'].append(acc_liability)
except:
    acc_liability = {}
EXPECTED OUTPUT
{
"characteristics": [
{
"category": "business-activities",
"type": "contractors-sub-scontractors-engaged",
"description": "",
"value": "YES",
"details": ""
}
]
}
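One way the dynamic version could be approached (a minimal sketch, not a definitive implementation) is to resolve each dotted origin_path with functools.reduce and build one characteristic per configuration entry. Note the sample data holds "yes" in lowercase, so the uppercase "YES" in the expected output would still need an extra step:

from functools import reduce

def get_by_path(obj, dotted_path):
    # Walk the nested dictionaries one key at a time, e.g. "a.b" -> obj["a"]["b"].
    return reduce(lambda node, key: node[key], dotted_path.split("."), obj)

business_characteristics = {"characteristics": []}

for rule in config["processing_configuration"]:
    try:
        origin = get_by_path(data, rule["origin_path"])
        characteristic = {
            "category": rule["set_category"],
            "type": rule["set_type"],
            "description": "",
            "value": "",
            "details": ""
        }
        # Copy each mapped value from the origin object into the characteristic.
        for mapping in rule["set_value"]:
            characteristic[mapping["set_value"]] = origin[mapping["use_value"]]
        business_characteristics["characteristics"].append(characteristic)
    except (KeyError, TypeError):
        # Skip configuration entries whose path is absent from this payload.
        continue

print(business_characteristics)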
I am currently attempting to edit a JSON file that has multiple listed dictionaries that use the same keys but different values. I would like to change specific keys (the same ones) in every dictionary within the file. How can I do this?
For Example:
"JSON_FILE" [
{"type" : "person", 'attributes" : { "Name" : "Jack, "Age" : 24, "Hight" : 6.2}}
{"type" : "person", "attributes" : { "Name" : "Brent", "Age" : 15, "Hight" : 5.6}}
{"type" : "person", "attributes" : { "Name" : "Matt", "Age" : 30, "Hight" : 4.9}} ]
I would like to change all the 'Name' keys to be labeled 'NAMES' and all the 'Hight' keys to be labeled 'HIGHT (ft)'.
I am using Python, and this is a data set of 100 dictionaries I am attempting to edit, so going through them one at a time is not efficient.
I'm assuming the schema is actually well-formed ("attributes" in quotes, use of double rather than single quotes, commas between objects in a list).
You can do the following to rename your fields:
import json

data = json.load(open('your_file_name.json', 'r'))

for data_entry in data:
    # Add new 'NAME' field, then delete old 'Name' field.
    data_entry['attributes']['NAME'] = data_entry['attributes']['Name']
    del data_entry['attributes']['Name']
    # Add new 'HIGHT' (sic) field, then delete old 'Hight' field.
    data_entry['attributes']['HIGHT'] = data_entry['attributes']['Hight']
    del data_entry['attributes']['Hight']

with open('your_file_name.json', 'w') as output_file:
    output_file.write(json.dumps(data))
If you have multiple keys under attributes to uppercase, you can do the following:
import json

file_path = "path/to/file"
fields_to_upper = ["Name", "Hight", "Age"]

with open(file_path, "r") as f:
    data = json.load(f)

for row in data:
    for field in fields_to_upper:
        row["attributes"][field.upper()] = row["attributes"].pop(field)

with open(file_path, "w") as f:
    f.write(json.dumps(data))
If you want to uppercase all the keys under attributes, try:
with open(file_path, "r") as f:
    data = json.load(f)

for row in data:
    # Copy the keys into a list first; adding and removing entries while
    # iterating over the live keys view is unsafe.
    for key in list(row["attributes"].keys()):
        row["attributes"][key.upper()] = row["attributes"].pop(key)

with open(file_path, "w") as f:
    f.write(json.dumps(data))
Let's say I have text from a log file with a format like this:
DEBUG: {\"id\":12311,\"pool_num\":\"4125441212441893\",\"full_name\":\"john doe\",\"mobile\":\"000000\","image_1\":\"upload\\/d7379280d549499dd9c948341298703ee.jpeg\",\"image_2\":\"upload\\/4a190fb8941a3d746cff01aa945b.jpeg\",\"image_3\":\"upload\\/3afd55aebb4d1461a4e15b9ac335dd92380.jpeg\"}
DEBUG: {\"id\":12312,\"pool_num\":\"89451222214511221\",\"full_name\":\"jane doe\",\"mobile\":\"000000\","image_1\":\"upload\\/d7379280d5494asdasd9c948341298123.jpeg\",\"image_2\":\"upload\\/4a190fb89asd123746cff01aa945b.jpeg\",\"image_3\":\"upload\\/3afd55aebb4dadasd15b9ac335dd9236661.jpeg\"}
DEBUG: {\"id\":12313,\"pool_num\":\"12312345612312312\",\"full_name\":\"smith doe\",\"mobile\":\"000000\","image_1\":\"upload\\/d7379280d549499dd9c948341298701551.jpeg\",\"image_2\":\"upload\\/123easfdsdagdfhdf213432123123.jpeg\",\"image_3\":\"upload\\/3afd55aebb4d1461a4e15b9ac335dd92380.jpeg\"}
DEBUG: {\"id\":12314,\"pool_num\":\"82123423444112345\",\"full_name\":\"adam doe\",\"mobile\":\"000000\","image_1\":\"upload\\/d7379280d549499dd9c9483412987666.jpeg\",\"image_2\":\"upload\\/asfda1234235we3rtsdasdasdah456.jpeg\",\"image_3\":\"upload\\/3afd55aebb4d1461a4e15b9ac335dd94216.jpeg\"}
Currently I can extract some data with this regex:
\b(?:pool_num|full_name|image_1|image_2|image_3)\\\":\\\"([^\"]+)
Demo: https://regex101.com/r/ZmXaVl/1
but the extracted text still contains "\\" and is not clean yet.
Question
I want to extract clean values for pool_num, full_name, image_1, image_2 and image_3 and save them to a .txt file in JSON format.
My expected output is :
[
{
"pool_num" : 4125441212441893,
"full_name" : "john doe",
"image_1" : "d7379280d549499dd9c948341298703ee.jpeg",
"image_2" : "4a190fb8941a3d746cff01aa945b.jpeg",
"image_3" : "3afd55aebb4d1461a4e15b9ac335dd92380.jpeg"
},
{
"pool_num" : 89451222214511221,
"full_name" : "jane doe",
"image_1" : "d7379280d5494asdasd9c948341298123.jpeg",
"image_2" : "4a190fb89asd123746cff01aa945b.jpeg",
"image_3" : "3afd55aebb4dadasd15b9ac335dd9236661.jpeg"
},
{
"pool_num" : 12312345612312312,
"full_name" : "smith doe",
"image_1" : "d7379280d549499dd9c948341298701551.jpeg",
"image_2" : "123easfdsdagdfhdf213432123123.jpeg",
"image_3" : "3afd55aebb4d1461a4e15b9ac335dd92380.jpeg"
},
{
"pool_num" : 82123423444112345,
"full_name" : "adam doe",
"image_1" : "d7379280d549499dd9c9483412987666.jpeg",
"image_2" : "asfda1234235we3rtsdasdasdah456.jpeg",
"image_3" : "3afd55aebb4d1461a4e15b9ac335dd94216.jpeg"
}
]
How do I get the desired output with the best Python approach?
Here is a possible solution that extracts the lines starting with 'DEBUG: ' from the log, then takes the JSON section of each line and parses it, as suggested by @Tomerikoo's comment.
This produces a list of dictionaries close to the expected output listed in the question (each record keeps all of its original fields).
This solution depends on the lines being preceded by 'DEBUG: '. It could be adjusted to parse lines with additional prefixes too.
If this approach works for the problem, it will be more resilient than some of the regular-expression based solutions.
import json
import pprint

pp = pprint.PrettyPrinter(indent=4)

mydata = []
# `log` is assumed to hold the raw text of the log file, e.g. log = open("file.log").read()
lines = log.split("\n")
for line in lines:
    if line.startswith("DEBUG: {"):
        json_string = line.split("DEBUG: ", 1)[1]
        # The sample log stores the JSON with escaped quotes and slashes,
        # so unescape those before parsing.
        json_string = json_string.replace('\\"', '"').replace('\\\\/', '/')
        mydata.append(json.loads(json_string))

pp.pprint(mydata)
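To also save the result to a .txt file in JSON format, as the question asks, something like this could follow (the output file name here is just a placeholder):

with open("output.txt", "w") as out_file:
    json.dump(mydata, out_file, indent=4)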
I have a json file that looks like this:
{
  "issueInfo" : [ {
    "cid" : 494960,
    "occurrences" : [ {
      "file" : "/components/applications/diag/_common/src/diag_il.c",
      "function" : "diag_il_u8StopLoopbackMicIn",
      "mainEventLineNumber" : 6018,
      "mainEventDescription" : "Assigning value \"10\" to \"u8ResData\" here, but that stored value is overwritten before it can be used."
    } ],
    "triage" : {
      "classification" : "Unclassified"
    }
  } ]
}
I want to extract information such as cid, firstDetectedDateTime, file, function, mainEventLineNumber, mainEventDescription and classification. All of this information needs to be put into a CSV file. The following is my code:
import csv
import json
with open ("a.log","r") as file:
data=json.load(file)
f=csv.writer(open("test.csv", "w", newline=''))
f.writerow(["cid", "firstDetectedDateTime", "file", "function",
"mainEventLineNumber", "mainEventDescription", "classification"])
for data in file:
f.writerow(data["issueInfo"]["cid"],
data["issueInfo"]["firstDetectedDateTime"],
data["issueInfo"]["occurrences"]["file"],
data["issueInfo"]["occurrences"]["function"],
data["issueInfo"]["occurrences"]["mainEventLineNumber"],
data["issueInfo"]["occurrences"]["mainEventDescription"],
data["issueInfo"]["triage"]["classification"])
The error shown after I run the command is :
TypeError: string indices must be integers
Can anyone help me solve this problem? Thanks.
Check the type of data (it must be a dictionary). Also, firstDetectedDateTime is not a valid key in your JSON, so that lookup would raise an error.
Try this:
import csv
import json

with open("a.log", "r") as file:
    data = json.load(file)

f = csv.writer(open("test.csv", "w", newline=''))
f.writerow(["cid", "firstDetectedDateTime", "file", "function",
            "mainEventLineNumber", "mainEventDescription", "classification"])
f.writerow([data["issueInfo"][0]["cid"],
            "",
            data["issueInfo"][0]["occurrences"][0]["file"],
            data["issueInfo"][0]["occurrences"][0]["function"],
            data["issueInfo"][0]["occurrences"][0]["mainEventLineNumber"],
            data["issueInfo"][0]["occurrences"][0]["mainEventDescription"],
            data["issueInfo"][0]["triage"]["classification"]])
The output CSV looks like:
cid,firstDetectedDateTime,file,function,mainEventLineNumber,mainEventDescription,classification
494960,,/components/applications/diag/_common/src/diag_il.c,diag_il_u8StopLoopbackMicIn,6018,"Assigning value ""10"" to ""u8ResData"" here, but that stored value is overwritten before it can be used.",Unclassified
If the file contains many such JSON objects (e.g. a list data_sets here), keep the header row fixed and only change the portion below it:
for data in data_sets:
    f.writerow([data["issueInfo"][0]["cid"],
                "",
                data["issueInfo"][0]["occurrences"][0]["file"],
                data["issueInfo"][0]["occurrences"][0]["function"],
                data["issueInfo"][0]["occurrences"][0]["mainEventLineNumber"],
                data["issueInfo"][0]["occurrences"][0]["mainEventDescription"],
                data["issueInfo"][0]["triage"]["classification"]])
The json library in Python can parse JSON from strings or files, producing a Python dictionary or list.
The json.loads() function parses a JSON string, and the result can be used as a normal Python dictionary, with values accessed by key.
import json
import csv

employee_data = '{"employee_details":[{"employee_name": "James", "email": "james#gmail.com", "job_profile": "Sr. Developer"},{"employee_name": "Smith", "email": "Smith#gmail.com", "job_profile": "Project Lead"}]}'

employee_parsed = json.loads(employee_data)
emp_data = employee_parsed['employee_details']

# open a file for writing
employ_data = open('..../EmployData.csv', 'w')

# create the csv writer object
csvwriter = csv.writer(employ_data)

count = 0
for emp in emp_data:
    if count == 0:
        header = emp.keys()
        csvwriter.writerow(header)
        count += 1
    csvwriter.writerow(emp.values())

employ_data.close()
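As an aside on the design, csv.DictWriter can handle the header for you; a rough equivalent using the same parsed emp_data might look like this (the output file name here is just a placeholder):

import csv

with open('EmployData.csv', 'w', newline='') as out_file:
    # DictWriter takes the column names once and writes each dict as a row.
    csvwriter = csv.DictWriter(out_file, fieldnames=emp_data[0].keys())
    csvwriter.writeheader()
    csvwriter.writerows(emp_data)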
I am trying to add something to a .json file.
This is what gets saved:
"106569102398611456" : {
"currentlocation" : "Pallet Town",
"name" : "Anthony",
"party" : [
{
"hp" : "5",
"level" : "1",
"pokemonname" : "bulbasaur"
}
],
"pokedollars" : 0
}
}
What I'm trying to do is make a command to add something else to the "party". Here is an example of what I want:
"106569102398611456" : {
"currentlocation" : "Pallet Town",
"name" : "Anthony",
"party" : [
{
"hp" : "5",
"level" : "1",
"pokemonname" : "bulbasaur"
},
{
"hp" : "3",
"level" : "1",
"pokemonname" : "squirtle"
}
],
"pokedollars" : 0
}
}
edit:
This is what I've attempted but I have no idea
def addPokemon(pokemon):
    pokemonName = convert(pokemon)
    for pokemon in players['party']:
        pokemon.append(pokemonName)
convert(pokemon) basically grabs the pokemon I type in and gives it a level and health to be added to the .json file.
To update a JSON file, write out the object to a temporary file and then replace the target file with the temporary file. Example:
import json
import os
import shutil
import tempfile

def rewriteJsonFile(sourceObj, targetFilePath, **kwargs):
    # Write the object out to a temporary file first.
    temp = tempfile.mkstemp()
    tempHandle = os.fdopen(temp[0], 'w')
    tempFilePath = temp[1]
    json.dump(sourceObj, tempHandle, **kwargs)
    tempHandle.close()
    # Replace the target file with the fully written temporary file.
    shutil.move(tempFilePath, targetFilePath)
This assumes that updates are happening serially. If updates are potentially happening in parallel, you'd need some kind of locking to ensure only one update is happening at a time. Although at that point, you're better off using a database like sqlite and returning queries in JSON format.
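For the party example above, a usage sketch might look like this (the players.json file name is an assumption; the record key comes from the snippet in the question):

# Load the saved data, append the new party member, then rewrite the file.
with open('players.json') as f:    # 'players.json' is a placeholder name
    players = json.load(f)

new_pokemon = {"hp" : "3", "level" : "1", "pokemonname" : "squirtle"}
players["106569102398611456"]["party"].append(new_pokemon)

rewriteJsonFile(players, 'players.json', indent=2)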