I have a file in which I have defined a Python dictionary:
dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First', 'Friends': {'Emil':1, 'Frank':0} };
I want to read this dictionary and use its elements in Python code. I've ended up with the following code:
#!/usr/bin/python
import sys
import ast
import re
from pprint import pprint as pp

def readObjFromFile(file):
    with open(file, "r") as myfile:
        data = myfile.read()
    # drop comment lines, flatten, and isolate the dict literal
    data = re.sub("^#.*", "", data, flags=re.MULTILINE)
    data = data.replace('\n', '')
    data = data.split("=", 1)[1].split(";", 1)[0].lstrip()
    return ast.literal_eval(data)

if __name__ == "__main__":
    if len(sys.argv[1:]) == 1:
        d = readObjFromFile(sys.argv[1])
        print(type(d))
        pp(d)
    else:
        print("Pass file from which object will be read")
        sys.exit(1)
This also works for larger dictionaries. I want to ask if there is a better way to do it. I am aware of the pickle module, but that is not what I want because it stores data in its own format. I was considering the modules used for parsing JSON, due to its similarity with Python dicts, but I do not know if that approach is safe.
I'm assuming that you don't have control over the file format and are being handed it from somewhere. It's close enough to JSON that I'd aim to make it JSON. I'd do something like the following:
import json
import re

def read_dict(path):
    lines = []
    with open(path) as f:
        for line in f:
            if line.startswith("#"):
                continue
            # keep only the right-hand side of the assignment, minus the semicolon
            l = line.strip().split("=", 1)[-1].rstrip(";")
            lines.append(re.sub("'", '"', l))
    return json.loads("".join(lines))
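As a self-contained sketch of that approach, here is the same transformation run on an in-memory copy of the question's file; note the quote swap is naive and would break if any value contained an apostrophe:

```python
import json

# Hypothetical in-memory stand-in for the file from the question.
raw = """# some comment
dict = {'Name': 'Zara', 'Age': 7, 'Friends': {'Emil': 1, 'Frank': 0}};
"""

lines = []
for line in raw.splitlines():
    if line.startswith("#"):
        continue
    # take the right-hand side of the assignment and drop the semicolon
    l = line.strip().split("=", 1)[-1].strip().rstrip(";")
    # naive quote swap: breaks if any value contains an apostrophe
    lines.append(l.replace("'", '"'))

d = json.loads("".join(lines))
print(d["Friends"]["Emil"])  # 1
```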
The file you have really represents a human-readable version of the dictionary: we can read it and see that there is a variable we want to assign to a dictionary. What you really want is to store the dict in a program-friendly format, and JSON is a perfect fit. Other formats include XML and YAML, but Python natively and easily reads JSON input.
If you look closely at your example, you see that the sample data set has a dictionary with a nested dictionary. JSON is built for these kinds of use cases. So a JSON file with your data above would look like:
{
    "Name": "Zara",
    "Age": 7,
    "Class": "First",
    "Friends": {
        "Emil": 1,
        "Frank": 0
    }
}
Note the sub-dictionary is part of the structure of the JSON file. Also note that double quotes are used.
Now you can easily have Python do your work for you:
import json
from pprint import pprint

with open(filename) as f:
    data = f.read()

d = json.loads(data)
pprint(d)
And you can access the sub-dictionary with:
d['Friends']
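For example, assuming d holds the decoded sample data from above, nested values are reached by chaining the keys:

```python
d = {"Name": "Zara", "Age": 7, "Class": "First",
     "Friends": {"Emil": 1, "Frank": 0}}

print(d["Friends"])          # the whole nested dict
print(d["Friends"]["Emil"])  # a single nested value: 1
```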
Related
I have a txt file that is of the following format (dictionary format):
{'EX1':'No Information Stored',
'EX2':'Foundation',
'EX3':'Foundation',
'EX4':'No Information Stored'}
Does anyone know how I would go about reading this into python to be able to use it like the dictionary that it is?
import json

with open('file.txt', 'r') as f:
    data = f.read()

data_as_dict = json.loads(data)
Text files with this structure are JSON, so you can use the json module. Note, though, that JSON requires double quotes: a file written with single quotes, like the one shown above, will make json.loads fail, in which case ast.literal_eval (covered below) is the right tool.
import json

def load_file(filename):
    with open(filename) as f:
        data = json.load(f)
    return data

This is a custom function that returns the dictionary you want.
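A quick usage sketch; the file is created on the fly here just to keep the example self-contained (the contents mirror the question's data):

```python
import json
import tempfile

def load_file(filename):
    with open(filename) as f:
        return json.load(f)

# write a throwaway JSON file so the call can be demonstrated
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"EX1": "No Information Stored", "EX2": "Foundation"}, tmp)

d = load_file(tmp.name)
print(d["EX2"])  # Foundation
```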
Using ast.literal_eval(). It can be used for the conversion of other Python literal types as well.
# importing the module
import ast

# reading the data from the file
with open('dictionary.txt') as f:
    data = f.read()

# reconstructing the data as a dictionary
d = ast.literal_eval(data)
print(d)
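This is exactly why literal_eval fits the file shown above: the single-quoted text is not valid JSON, but it is a valid Python literal. A small sketch (the sample string mirrors the question's file):

```python
import ast
import json

text = "{'EX1': 'No Information Stored', 'EX2': 'Foundation'}"

# json.loads rejects single-quoted strings...
try:
    json.loads(text)
except json.JSONDecodeError:
    print("not valid JSON")

# ...while ast.literal_eval parses Python literals safely,
# without executing arbitrary code
d = ast.literal_eval(text)
print(d['EX2'])  # Foundation
```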
There are two methods for this:
1. Method using json.load():
.load() is used to read directly from a file.
import json

with open('data.json') as f:
    data = json.load(f)
2. Method using json.loads():
.loads() is used to parse a string, so we need to read the file first to get one.
import json

with open('data.json') as f:
    data = json.loads(f.read())
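The two calls are interchangeable for file-like input; a sketch using io.StringIO in place of a real file to show they produce the same result:

```python
import io
import json

raw = '{"Name": "Zara", "Age": 7}'

d1 = json.load(io.StringIO(raw))   # .load: reads from a file-like object
d2 = json.loads(raw)               # .loads: parses a string directly

print(d1 == d2)  # True
```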
I'm trying to iterate over specific keys and values from a JSON file and write them to a new JSON file. I'm not getting any errors and the second JSON file is being created successfully, but what ends up in it is a whole object from the first JSON file.
def get_keys_values():
    json_file = open("get_failed_archives.json")
    json_data = json.load(json_file)
    for archive_data in json_data["data"]:
        archive_data["latestEvent"]["objectName"]
        archive_data["latestEvent"]["time"]
        archive_data["latestEvent"]["eventType"]
        archive_data["latestEvent"]["eventStatus"]
    with open("new_json_file.json", "w") as file:
        json.dump(archive_data, file)
Is there a way to write specific keys and values from one JSON file to another JSON file?
As stated by josh, a statement like archive_data["latestEvent"]["objectName"] functionally does nothing.
If you want to create a new JSON object (in your case a list of dicts, I think), the approach below should match what I think you want it to do.
I've added a pprint statement to print out what the result is.
import json
from pprint import pprint

def get_keys_values():
    with open("get_failed_archives.json") as json_file:
        json_data = json.load(json_file)

    new_data = []
    for archive_data in json_data["data"]:
        new_data.append({
            "objectName": archive_data["latestEvent"]["objectName"],
            "time": archive_data["latestEvent"]["time"],
            "eventType": archive_data["latestEvent"]["eventType"],
            "eventStatus": archive_data["latestEvent"]["eventStatus"]
        })

    pprint(new_data, indent=4)

    with open("new_json_file.json", "w") as file:
        json.dump(new_data, file)
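The four repeated lookups can also be collapsed into a comprehension over the wanted keys; a sketch assuming the same "latestEvent" structure (the sample data here is illustrative):

```python
wanted = ("objectName", "time", "eventType", "eventStatus")

# illustrative stand-in for the decoded JSON
json_data = {"data": [
    {"latestEvent": {"objectName": "a", "time": "t1",
                     "eventType": "copy", "eventStatus": "ok",
                     "extra": "ignored"}},
]}

# pick just the wanted keys from each record
new_data = [
    {k: item["latestEvent"][k] for k in wanted}
    for item in json_data["data"]
]
print(new_data)
```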
I wrote the following function that I want to apply to a json file:
import json

def myfunction(dictionary):
    # doing things
    return new_dictionary

data = """{
    #a json file as a dictionary
}"""

info = json.loads(data)
refined = myfunction(info)
new_data = json.dumps(refined)
print(new_data)
It works fine, but how do I do it when I want to load a file from my computer? json.loads takes a string as input and returns a dictionary, while json.dumps takes a dictionary as input and returns a string. I tried:
with open('C:\\Users\\SONY\\Desktop\\test.json', 'r', encoding="utf8") as data:
    info = json.loads(data)
But I get TypeError: the JSON object must be str, bytes or bytearray, not TextIOWrapper.
You are passing a file object instead of a string. To fix that, read the file first: json.loads(data.read()).
However, you can load JSON directly from a file using json.load(open('myFile', 'r')), or, in your case, json.load(data).
loads and dumps work on strings. If you want to work on files you should use load and dump instead.
Here is an example:
from json import dump, load

with open('myfile.json', 'r') as my_file:
    content = load(my_file)

# do stuff on content

with open('myoutput.json', 'w') as my_output:
    dump(content, my_output)
Using the json module for Python, I want to be able to output empty objects/arrays into my JSON. But this module seems to just ignore them.
Part of the legacy code I am working with, requires these objects to be defined even if empty.
This is the code:
import json
import sys

filename = sys.argv[1]

with open(filename) as json_file:
    json_decoded = json.load(json_file)

json_decoded['empty'] = []
json_decoded['not_empty'] = 'allocation'

with open(filename, 'w') as json_file:
    json.dump(json_decoded, json_file, sort_keys=True, indent=4)
So to summarise: I want the output JSON to keep the empty objects, but they are ignored.
Edit: An example of the JSON used in the sys.argv[1] is:
{
    "some_text": "hello",
    "use_google": false,
    "use_gravatar": false,
    "empty": [],
    "not_empty": "some more text"
}
The code basically reads the existing JSON into the json_decoded variable; we then slightly change the settings by just changing the array, and finally write the new data back to the file...
It all works fine, but the new JSON will be as follows:
{
    "some_text": "hello",
    "use_google": false,
    "use_gravatar": false,
    "not_empty": "allocation"
}
You can see it's missing the empty key, which is the problem.
Thanks in advance
The code worked fine. I was told it didn't, and I should have checked.
So the above code all works as it should.
Do I delete the question or leave it here for people to view? Do I accept this as the answer?
I am new to Python and I am playing with JSON data. I would like to retrieve the JSON data from a file and add to that data a JSON key-value "on the fly".
That is, my json_file contains JSON data as-like the following:
{"key1": {"key1A": ["value1", "value2"], "key1B": {"key1B1": "value3"}}}
I would like to add the "ADDED_KEY": "ADDED_VALUE" key-value part to the above data so to use the following JSON in my script:
{"ADDED_KEY": "ADDED_VALUE", "key1": {"key1A": ["value1", "value2"], "key1B": {"key1B1": "value3"}}}
I am trying to write something as-like the following in order to accomplish the above:
import json

json_data = open(json_file)
json_decoded = json.load(json_data)
# What do I have to do here?
json_data.close()
Your json_decoded object is a Python dictionary; you can simply add your key to that, then re-encode and rewrite the file:
import json

with open(json_file) as f:
    json_decoded = json.load(f)

json_decoded['ADDED_KEY'] = 'ADDED_VALUE'

with open(json_file, 'w') as f:
    json.dump(json_decoded, f)
I used the open file objects as context managers here (with the with statement) so Python automatically closes the file when done.
Data returned from json.loads() behaves just like native Python lists/dictionaries:
import json

with open("your_json_file.txt", 'r') as f:
    data = json.loads(f.read())  # data becomes a dictionary

# do things with data here
data['ADDED_KEY'] = 'ADDED_VALUE'

# and then just write the data back to the file
with open("your_json_file.txt", 'w') as f:
    f.write(json.dumps(data, sort_keys=True, indent=4, separators=(',', ': ')))
    # I added some options for pretty printing, play around with them!
For more info, check out the official docs.
You can do
json_decoded['ADDED_KEY'] = 'ADDED_VALUE'
OR
json_decoded.update({"ADDED_KEY":"ADDED_VALUE"})
which works nicely if you want to add more than one key/value pair.
Of course, you may want to check for the existence of ADDED_KEY first - depends on your needs.
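One way to sketch that existence check is dict.setdefault, which only inserts the value when the key is absent (the data here is illustrative):

```python
json_decoded = {"ADDED_KEY": "original"}

# setdefault leaves an existing value untouched...
json_decoded.setdefault("ADDED_KEY", "ADDED_VALUE")
print(json_decoded["ADDED_KEY"])  # original

# ...and inserts the default only when the key is missing
json_decoded.setdefault("OTHER_KEY", "ADDED_VALUE")
print(json_decoded["OTHER_KEY"])  # ADDED_VALUE
```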
And I assume you might want to save that data back to the file:
json.dump(json_decoded, open(json_file,'w'))