Edit & save dictionary in another python file - python

I have 2 Python files. file1.py has only one dictionary, and I would like to read and write to that dictionary from file2.py. Both files are in the same directory.
I'm able to read from it using import file1, but how do I write to that file?
Snippet:
file1.py (nothing additional in file1, apart from the following data)
dict1 = {
    'a' : 1,     # value is integer
    'b' : '5xy', # value is string
    'c' : '10xy',
    'd' : '1xy',
    'e' : 10,
}
file2.py
import file1
import json
print file1.dict1['a'] # this works fine
print file1.dict1['b']

# Now I want to update the values of 'a' and 'b', something like this:
dict2 = json.loads(data)
file1.dict1['a'] = dict2['some_int'] # int value
file1.dict1['b'] = dict2['some_str'] # string value
The main reason I'm using a dictionary and not a text file is that the new values to be updated come from JSON data, and converting that to a dictionary is simpler, saving me from string parsing each time I want to update dict1.
The problem is that when I update the values from dict2, I want those values to be written to dict1 in file1.
Also, the code runs on a Raspberry Pi, and I've SSHed into it from an Ubuntu machine.
Can someone please help me with how to do this?
EDIT:
file1.py could be saved in any other format, like .json or .txt. It was just my assumption that saving the data as a dictionary in a separate file would allow easy updates.
file1.py has to be a separate file; it is a configuration file, so I don't want to merge it into my main file.
The data for dict2 mentioned above comes from a socket connection at
dict2 = json.loads(data)
I want to update file1 with the data that comes from the socket connection.

If you are attempting to print the dictionary back to the file, you could use something like...
outFile = open("file1.py", "w")
outFile.write("dict1 = %s" % str(dict2))
outFile.close()
You might be better off having a json file, loading the object from it and writing the object back to it when it changes. You could then manipulate the json object in memory and serialize it simply.
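For example, a minimal sketch of that approach (the file name config.json is just a placeholder):
import json

# Load the configuration into memory (assumes config.json already exists)
with open("config.json") as f:
    config = json.load(f)

# Manipulate the object in memory
config['a'] = 42

# Serialize it back to the file
with open("config.json", "w") as f:
    json.dump(config, f, indent=4)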

I think you want to save the data from file1 into a separate .json file, then read the .json file in your second file. Here is what you can do:
file1.py
import json
dict1 = {
    'a' : 1,     # value is integer
    'b' : '5xy', # value is string
    'c' : '10xy',
    'd' : '1xy',
    'e' : 10,
}

with open("filepath.json", "w+") as f:
    json.dump(dict1, f)
This will dump the dictionary dict1 into a json file which is stored at filepath.json.
Then, in your second file:
file2.py
import json
with open("pathname.json") as f:
dict1 = json.load(f)
# dict1 = {
'a' : 1, # value is integer
'b' : '5xy', # value is string
'c' : '10xy',
'd' : '1xy',
'e' : 10,
}
dict1['a'] = dict2['some_int'] #int value
dict1['b'] = dict2['some_str'] #string value
Note: This will not change the values in your first file. However, if you need to access the changed values, you can dump your data into another json file, then load that json file again whenever you need the data.
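For example, a minimal sketch of writing the updated dictionary back out, reusing filepath.json from above:
with open("filepath.json", "w") as f:
    json.dump(dict1, f)
The next json.load of that file will then see the updated values.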

You should use the pickle library to save and load the dictionary: https://wiki.python.org/moin/UsingPickle
Here is the basic usage of pickle:
# Save a dictionary into a pickle file.
import pickle

favorite_color = { "lion": "yellow", "kitty": "red" }

pickle.dump( favorite_color, open( "save.p", "wb" ) )

# Load the dictionary back from the pickle file.
import pickle

favorite_color = pickle.load( open( "save.p", "rb" ) )
# favorite_color is now { "lion": "yellow", "kitty": "red" }
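A slightly safer variant of the same idea, as a sketch (not part of the linked recipe), uses with blocks so the file handles are closed automatically:
import pickle

favorite_color = {"lion": "yellow", "kitty": "red"}

# Save the dictionary
with open("save.p", "wb") as f:
    pickle.dump(favorite_color, f)

# Load it back
with open("save.p", "rb") as f:
    favorite_color = pickle.load(f)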

Finally, as @Zaren suggested, I used a JSON file instead of a dictionary in a Python file.
Here's what I did:
Modified file1.py to file1.json and stored the data with appropriate formatting.
From file2.py, I open file1.json when needed instead of import file1, and use json.load and json.dump on file1.json.
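A minimal sketch of what that read-update-write cycle can look like in file2.py (some_int and some_str are just the placeholder keys from the question):
import json

# Read the current configuration
with open("file1.json") as f:
    dict1 = json.load(f)

dict2 = json.loads(data)  # data arrives over the socket connection

dict1['a'] = dict2['some_int']
dict1['b'] = dict2['some_str']

# Persist the updated configuration
with open("file1.json", "w") as f:
    json.dump(dict1, f, indent=4)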

Related

Replace value in list of dictionary in JSON file

I have a JSON file containing a list of 9000 dictionaries.
Sample
[{"auftrags_id":348667,"vertrags_id":11699,"ursprungsauftrag":"",
"umsatz":0.28,"brutto":0.33,"vertrauensschadenhaftpflicht":"",
"stornoreserve":"","umsatzsteuer_betrag":"0.05","netto":0.28,
"steuerpflichtig":"0","art_der_rechnung":"Rechnung","vp_nummer":538},
{"auftrags_id":348668,"vertrags_id":11699,"ursprungsauftrag":"",
"umsatz":0.28,"brutto":0.33,"vertrauensschadenhaftpflicht":"",
"stornoreserve":"","umsatzsteuer_betrag":"0.05","netto":0.28,
"steuerpflichtig":"0","art_der_rechnung":"Rechnung","vp_nummer":538},
{"auftrags_id":349210,"vertrags_id":24894,"ursprungsauftrag":"X",
"umsatz":0.87,"brutto":1.04,"vertrauensschadenhaftpflicht":"X",
"stornoreserve":"X","umsatzsteuer_betrag":"0.17","netto":0.87,
"steuerpflichtig":"0","art_der_rechnung":"Rechnung","vp_nummer":538}]
To upload the JSON file to PostgreSQL, I need to replace the X with a value accepted as a float. I think replacing the value 'X' with '0.0001' in every dictionary would do it. Then there are values with no content, "", which I don't know how to handle; maybe also replace them with '0.0001', just for the purpose of uploading.
Desired output:
[{"auftrags_id":348667,"vertrags_id":11699,"ursprungsauftrag":0.0001,
"umsatz":0.28,"brutto":0.33,"vertrauensschadenhaftpflicht":0.0001,
"stornoreserve":0.0001,"umsatzsteuer_betrag":0.05,"netto":0.28,
"steuerpflichtig":0.0001,"art_der_rechnung":"Rechnung","vp_nummer":538},
{"auftrags_id":348668,"vertrags_id":11699,"ursprungsauftrag":0.0001,
"umsatz":0.28,"brutto":0.33,"vertrauensschadenhaftpflicht":0.0001,
"stornoreserve":0.0001,"umsatzsteuer_betrag":0.05,"netto":0.28,
"steuerpflichtig":0.0001,"art_der_rechnung":"Rechnung","vp_nummer":538},
{"auftrags_id":349210,"vertrags_id":24894,"ursprungsauftrag":0.0001,
"umsatz":0.87,"brutto":1.04,"vertrauensschadenhaftpflicht":0.0001,
"stornoreserve":0.0001,"umsatzsteuer_betrag":0.17,"netto":0.87,
"steuerpflichtig":"0","art_der_rechnung":"Rechnung","vp_nummer":538}]
I already have code to upload the file, but I need to clean the JSON file for PostgreSQL to accept it. I appreciate any help!
You could use sed as suggested in the comment under your question. It is a command for the Linux command line (shell). See an article about Linux shell here and about the sed command here.
The Python solution:
#!/usr/bin/python3
import json  # load the builtin JSON module

JSON_FILE_NAME = "dictionaries.json"  # the name of your file with those dictionaries
RESULT_FILE_NAME = "result.json"      # the name of the file that will be created

# Load the file contents into the variable dictionaries
with open(JSON_FILE_NAME, "r", encoding="utf8") as file:
    dictionaries = json.load(file)

result = []
for dictionary in dictionaries:  # loop over the dictionaries
    for key, value in dictionary.items():  # loop over the key and value pairs in the dictionary
        if value in ("", "X"):
            # if the value is an empty string or "X", change it to 0.0001
            dictionary[key] = 0.0001
    # append the dictionary to the result list
    result.append(dictionary)

# save the result to a file
with open(RESULT_FILE_NAME, "w", encoding="utf8") as file:
    json.dump(result, file)

"list indices must be integers or slices, not str" while manipulating data from JSON

I am trying to extract some data from JSON files, which all have the same structure, and then write the chosen data into a new JSON file. My goal is to create a new JSON file which is more or less a list of each JSON file in my folder with the data:
Filename, triggerdata, velocity {imgVel, trigVel}, coordinates.
In a further step of my programme, I will need this new splitTest1 for analysing the data of the different files.
I have the following code:
base_dir = 'mypath'

def createJsonFile():
    splitTest1 = {}
    splitTest1['20mm PSL'] = []
    for file in os.listdir(base_dir):
        # If file is a json, construct it's full path and open it, append all json data to list
        if 'json' in file:
            json_path = os.path.join(base_dir, file)
            json_data = pd.read_json(json_path, lines=True)
            if splitTest1[file]['20mm PSL'] == to_find:
                splitTest1['20mm PSL'].append({
                    'filename': os.path.basename(base_dir),
                    'triggerdata': ['rawData']['adcDump']['0B'],
                    'velocity': {
                        'imgVel': ['computedData']['particleProperties']['imgVelocity'],
                        'trigVel': ['computedData']['img0Properties']['coordinates']},
                    'coordinates': ['computedData']['img1Properties']['coordinates']})
    print(len(splitTest1))
When I run the code, I get this error:
'triggerdata': ['rawData']['adcDump']['0B'], TypeError: list indices must be integers or slices, not str
What is wrong with the code? How do I fix this?
This is my previous code, showing how I accessed that data without saving it in another JSON file:
with open('myJsonFile.json') as f0:
    d0 = json.load(f0)

y00B = d0['rawData']['adcDump']['0B']
x = np.arange(0, (2048 * 0.004), 0.004)  # in ms, 2048 Samples, 4us

def getData():
    return y00B, x

def getVel():
    imgV = d0['computedData']['particleProperties']['imgVelocity']
    trigV = d0['computedData']['trigger']['trigVelocity']
    return imgV, trigV
Basically, I am trying to put this last code snippet into a loop that reads all the JSON files in my folder and makes a new JSON file with a list of the names of these files and some other chosen data (like ['rawData']['adcDump']['0B'], etc.).
Hope this helps in understanding my problem better.
I assume what you want to do is take some data from several json files, compile it into a list, and write that into a new json file.
In order to get the data from your current json file, you'll need to add a "reference" to it in front of the indices (otherwise the code has no idea where it should get that data from). Like so:
base_dir = 'mypath'

def createJsonFile():
    splitTest1 = {}
    splitTest1['20mm PSL'] = []
    for file in os.listdir(base_dir):
        # If file is a json, construct it's full path and open it, append all json data to list
        if 'json' in file:
            json_path = os.path.join(base_dir, file)
            json_data = pd.read_json(json_path, lines=True)
            if splitTest1[file]['20mm PSL'] == to_find:
                splitTest1['20mm PSL'].append({
                    'filename': os.path.basename(base_dir),
                    'triggerdata': json_data['rawData']['adcDump']['0B'],
                    'velocity': {
                        'imgVel': json_data['computedData']['particleProperties']['imgVelocity'],
                        'trigVel': json_data['computedData']['img0Properties']['coordinates']},
                    'coordinates': json_data['computedData']['img1Properties']['coordinates']})
    print(len(splitTest1))
So basically what you need to do is add json_data in front of the indices.
Also, I suggest you write the variable json_path rather than base_dir into the 'filename' field.
I found the solution with the help of the post from Mattu475.
I had to add the reference in front of the indices and also change how I open the files found in my folder, with the following code:
with open(json_path) as f0:
    json_data = json.load(f0)
instead of pd.read_json(...).
Here is the full code:
def createJsonFile():
    splitTest1 = {}
    splitTest1['20mm PSL'] = []
    for file in os.listdir(base_dir):
        # If file is a json, construct it's full path and open it, append all json data to list
        if 'json' in file:
            print("filename: ", file)  # file is only the file name, the path not included
            json_path = os.path.join(base_dir, file)
            print("path: ", json_path)
            with open(json_path) as f0:
                json_data = json.load(f0)
            splitTest1['20mm PSL'].append({
                'filename': os.path.basename(json_path),
                'triggerdata': json_data['rawData']['adcDump']['0B'],
                # 'imgVel': json_data['computedData']['particleProperties']['imgVelocity'],
                'trigVel': json_data['computedData']['trigger']['trigVelocity'],
                # 'coordinatesImg0': json_data['computedData']['img0Properties']['coordinates'],
                # 'coordinatesImg1': json_data['computedData']['img1Properties']['coordinates']
            })
    return splitTest1
A few lines (the ones commented out) do not function 100% yet, but the rest works.
Thank you for your help!
The issue is with this line
'imgVel': ['computedData']['particleProperties']['imgVelocity'],
And the two lines that come after it. What's happening there is that you're creating a list with the string 'computedData' as its only element, and then trying to index it with the string 'particleProperties', which doesn't make sense: you can only index a list with integers. I can't really give you a "solution", but if you want imgVel to just be a list of those strings, you would do
'imgVel': ['computedData', 'particleProperties', 'imgVelocity']
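For illustration, a tiny hypothetical snippet that reproduces the error:
data = ['computedData']             # a list with one string element
value = data['particleProperties']  # TypeError: list indices must be integers or slices, not str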
Your dict value isn't legal Python.
'triggerdata': ['rawData']['adcDump']['0B']
The value doesn't make any sense: you make a list of a single string, then you try to index it with another string. You asked for element "adcDump" of the list ['rawData'], and there isn't any such syntax.
You cannot store arbitrary source code (your partial expression) as if it were a data value.
If you want help constructing a particular reference, then please post a focused question. Please review How to Ask from the intro tour.

Can you append to a dictionary from a foreign python file?

So I have a project I'm working on for fun, but it requires me to append to a dictionary from another Python file. In file1.py it will look like
Name: Eric <-- user input
Age: 27 <-- user input
and file2.py,
information = {'Eric':27}
I know that I can temporarily append to a dictionary while running the code, but it seems to reset after I close the program. For example, I recently saw this on a StackOverflow question:
d = {'key': 'value'}
print(d)
# {'key': 'value'}
d['mynewkey'] = 'mynewvalue'
print(d)
# {'key': 'value', 'mynewkey': 'mynewvalue'}
But this, too, resets after every run, so I thought the only way to save the dictionary is to write it to another file. Is there any way I can achieve this, or maybe a better alternative?
You can use JSON to save data to a file.
This will save the data stored in your dictionary to a file.
import json

my_dict = {"key": "value", "key2": "value2"}

with open("output_file.txt", "w") as file:
    json.dump(my_dict, file, indent=4)
To use that data again, you can load that file.
import json

with open("output_file.txt") as file:
    my_dict = json.load(file)

print(my_dict)  # Will print {"key": "value", "key2": "value2"}
JSON stands for JavaScript Object Notation, and it's a way to save data in a string format (a file).
So JSON can convert a string into data, if it is valid JSON:
import json
string_data = '{"key": "value"}'
dictionary = json.loads(string_data)
print(type(string_data)) # <class 'str'>
print(type(dictionary)) # <class 'dict'>

Update JSON objects with Python script for AWS

I have a bunch of JSON objects that I need to update in order to use the CLI for AWS.
Here is an example of the JSON format. I will need to update lbtest, lbtest-cookie-pol and 80 with different values.
{
"LoadBalancerName": "lbtest",
"PolicyName": "lbtest-cookie-pol",
"CookieExpirationPeriod":80
}
In some cases, there will be multiple values here for each Load Balancer Name.
The output will need to look like this:
{
"LoadBalancerName": "lbtest",
"PolicyName": "lbtest-cookie-pol",
"CookieExpirationPeriod":80
}
{
"LoadBalancerName": "lbtest",
"PolicyName": "lbtest-cookie-pol2",
"CookieExpirationPeriod":8080.
}
Suppose I had a CSV file with all these entries; what kind of Python script can I write to loop through them and print out my JSON output? The part where I am having issues is printing the nested JSON object. print doesn't seem to like the multiple lines or the curly braces that I have. I'm a newbie here, so I would appreciate any kind of solution.
You can use the json.dumps method and the options mentioned in its documentation.
For example, using the indent option, you get this on Python 2.7:
>>> dictionary = {
"LoadBalancerName": "lbtest",
"PolicyName": "lbtest-cookie-pol",
"CookieExpirationPeriod":80 } #a dictionary object made from csv
>>> print dictionary
{'PolicyName': 'lbtest-cookie-pol', 'CookieExpirationPeriod': 80, 'LoadBalancerName': 'lbtest'}
>>> import json
>>> jobj = json.dumps(dictionary,indent=4, separators=(',', ': '))
>>> print jobj
{
"PolicyName": "lbtest-cookie-pol",
"CookieExpirationPeriod": 80,
"LoadBalancerName": "lbtest"
}
>>> f = open(r'jtest.txt','w') # save our json object to file
>>> json.dump(dictionary, f, indent=4, separators=(',', ': '))
>>> f.close()
>>> f = open(r'jtest.txt','r') # read our object from file
>>> test = json.load(f)
>>> test
{u'PolicyName': u'lbtest-cookie-pol', u'CookieExpirationPeriod': 80, u'LoadBalancerName': u'lbtest'}
>>> dict(test)
{u'PolicyName': u'lbtest-cookie-pol', u'CookieExpirationPeriod': 80, u'LoadBalancerName': u'lbtest'}
Here is how our jtest.txt file looks (screenshot omitted).
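Since the question mentions reading the entries from a CSV file, here is a minimal sketch of that loop (the file name policies.csv, and the assumption that the CSV header matches the JSON keys, are both hypothetical):
import csv
import json

with open('policies.csv') as csvfile:
    for row in csv.DictReader(csvfile):
        obj = {
            "LoadBalancerName": row["LoadBalancerName"],
            "PolicyName": row["PolicyName"],
            "CookieExpirationPeriod": int(row["CookieExpirationPeriod"]),
        }
        # json.dumps handles the curly braces and multi-line output for you
        print(json.dumps(obj, indent=4, separators=(',', ': ')))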

Right way to load data from json file in python

I am trying to write code in Python and deploy it on Google App Engine. I am new to both of these things. I have JSON which contains the following:
[
{
"sentiment":-0.113568,
"id":455908588913827840,
"user":"ANI",
"text":"Posters put up against Arvind Kejriwal in Varanasi http://t.co/ZDrzjm84je",
"created_at":1.397532052E9,
"location":"India",
"time_zone":"New Delhi"
},
{
"sentiment":-0.467335,
"id":456034840106643456,
"user":"Kumar Amit",
"text":"Arvind Kejriwal's interactive session with Varansi Supporter and Opponent will start in short while ..Join at http://t.co/f6xI0l2dWc",
"created_at":1.397562153E9,
"location":"New Delhi, Patna.",
"time_zone":"New Delhi"
},
I am trying to load this data in Python. I have the following code for it:
data = simplejson.load(open('data/convertcsv.json'))
# print data
for row in data:
    print data['sentiment']
I am getting the following error: TypeError: list indices must be integers, not str
If I uncomment the print data line and remove the last two lines, I can see all the data in the console. I want to be able to do some computations on the sentiment and also search for some words in the text. But for that I need to know how to get it line by line.
If you'd like to clean it up a bit
import json

with open('data/convertcsv.json') as f:
    data = json.loads(f.read())

for row in data:
    print row['sentiment']
The with statement only leaves the file open while it's being used, then closes it automatically once the indented block under it has executed.
Try this:
import json

f = open('data/convertcsv.json')
data = json.loads(f.read())
f.close()

for row in data:
    print row['sentiment']
The issue is that you use data['sentiment'] instead of row['sentiment']; otherwise your code is fine:
with open('data/convertcsv.json', 'rb') as file:
    data = simplejson.load(file)

# print data
for row in data:
    print row['sentiment']  # <-- data is a list, use `row` here
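As a rough sketch of the question's follow-up goals (computing on the sentiment and searching the text), assuming the same data list and the Python 2 print syntax used above:
# Average sentiment over all rows
print sum(row['sentiment'] for row in data) / len(data)

# Rows whose text mentions a given word
for row in data:
    if 'Kejriwal' in row['text']:
        print row['user'], row['sentiment']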
