saving tkinter entry box text to a JSON format - python

I am trying to store tkinter entrybox text into a JSON format:
The expected output is:
{"objects": [{"neptun_code": "BVQYMZ", "result": "89", "mark": "4"}, {"neptun_code": "NHFKYM", "result": "95", "mark": "5"}]}
My output looks like this:
[{':', 'neptun_code', 'AUU4NA'}, {'result', ':', '98'}, {':', '5', 'mark'}]
[{':', 'neptun_code', 'BVQYMZ'}, {'result', ':', '86'}, {':', '5', 'mark'}]
my code looks like this:
def __sendData(self):
    self.list = []
    for i in range(len(self.entry)):
        self.list.append({self.entryNames[i], ":", self.entry[i].get()})
        self.entry[i].delete(0, END)
    self.counter += 1
    self.entries.append(self.list)
My tkinter GUI:

The solution is to create a dict() (e.g. jsondict) of what you want to save and then use the json module like this:
with open(file_path, 'w') as fp:
    json.dump(jsondict, fp, indent=4)
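For the code in the question, the scrambled output comes from the curly braces with commas: {self.entryNames[i], ":", self.entry[i].get()} builds a set literal, which is unordered, rather than a dict. A minimal sketch of __sendData built around a dict instead, assuming the same entryNames, entry, entries and counter attributes as in the question, plus import json and a file_path of your choice:
def __sendData(self):
    record = {}
    for i in range(len(self.entry)):
        # map each field name to the text currently in its entry box
        record[self.entryNames[i]] = self.entry[i].get()
        self.entry[i].delete(0, END)
    self.entries.append(record)
    self.counter += 1

    # write everything collected so far in the expected shape
    jsondict = {"objects": self.entries}
    with open(file_path, 'w') as fp:
        json.dump(jsondict, fp, indent=4)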

Related

Python How to add nested fields to Yaml file

I need to modify a YAML file and add several fields. I am using the ruamel.yaml package.
First I load the YAML file:
data = yaml.load(file_name)
I can easily add new simple fields, like-
data['prop1'] = "value1"
The problem I face is that I need to add a nested dictionary incorporate with array:
prop2:
  prop3:
    - prop4:
        prop5: "Some title"
        prop6: "Some more data"
I tried to define-
record_to_add = dict(prop2 = dict(prop3 = ['prop4']))
This is working, but when I try to add beneath it prop5 it fails-
record_to_add = dict(prop2 = dict(prop3 = ['prop4'= dict(prop5 = "Value")]))
I get
SyntaxError: expression cannot contain assignment, perhaps you meant "=="?
What am I doing wrong?
The problem has little to do with ruamel.yaml. This:
['prop4'= dict(prop5 = "Value")]
is invalid Python, as a list ([ ]) expects comma-separated values. You would need to use something like:
record_to_add = dict(prop2 = dict(prop3 = dict(prop4= [dict(prop5 = "Some title"), dict(prop6='Some more data'),])))
As your program is incomplete I am not sure if you are using the old API or not. Make sure to use
import ruamel.yaml
yaml = ruamel.yaml.YAML()
and not
import ruamel.yaml as yaml
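A minimal sketch of the whole round trip with the new API, assuming a placeholder file name config.yaml and the nested structure from the question:
import ruamel.yaml

yaml = ruamel.yaml.YAML()
with open("config.yaml") as fp:      # "config.yaml" is a placeholder path
    data = yaml.load(fp)

data['prop1'] = "value1"             # simple scalar field, as in the question
# prop3 is a list holding one mapping keyed by prop4
data['prop2'] = {'prop3': [{'prop4': {'prop5': "Some title",
                                      'prop6': "Some more data"}}]}

with open("config.yaml", "w") as fp:
    yaml.dump(data, fp)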
It's because of having ['prop4'= <> ]. Instead, record_to_add = dict(prop2 = dict(prop3 = [dict(prop4 = dict(prop5 = "Value"))])) should work.
Another alternative would be:
import yaml
data = {
    "prop1": {
        "prop3": [{
            "prop4": {
                "prop5": "some title",
                "prop6": "some more data"
            }
        }]
    }
}

with open(filename, 'w') as outfile:
    yaml.dump(data, outfile, default_flow_style=False)

Open JSON files and edit their structure

I have produced a couple of json files after scraping a few elements. The structure for each file is as follows:
us.json
{'Pres': 'Biden', 'Vice': 'Harris', 'Secretary': 'Blinken'}
uk.json
{'1st Min': 'Johnson', 'Queen':'Elizabeth', 'Prince': 'Charles'}
I'd like to know how I could edit the structure of each dictionary inside the JSON file to get an output as follows:
[
{"title": "Pres",
"name": "Biden"}
,
{"title": "Vice",
"name": "Harris"}
,
{"title": "Secretary",
"name": "Blinken"}
]
As far as I can work out (I'm a beginner, studying only for a few weeks), I first need to run a loop to open each file, then generate a list of dictionaries, and finally modify each dictionary to change the structure. This is what I have; it is NOT WORKING because it always overwrites with the same keys.
import os
import json

list_of_dicts = []
for filename in os.listdir("DOCS/Countries Data"):
    with open(os.path.join("DOCS/Countries Data", filename), 'r', encoding='utf-8') as f:
        text = f.read()
        country_json = json.loads(text)
        list_of_dicts.append(country_json)

for country in list_of_dicts:
    newdict = country
    lastdict = {}
    for key in newdict:
        lastdict = {'Title': key}
        for value in newdict.values():
            lastdict['Name'] = value
    print(lastdict)
Extra bonus if you could also show me how to generate an ID number for each entry. Thank you very much.
This looks like a task for a list comprehension; I would do it the following way:
import json
us = '{"Pres": "Biden", "Vice": "Harris", "Secretary": "Blinken"}'
data = json.loads(us)
us2 = [{"title":k,"name":v} for k,v in data.items()]
us2json = json.dumps(us2)
print(us2json)
output
[{"title": "Pres", "name": "Biden"}, {"title": "Vice", "name": "Harris"}, {"title": "Secretary", "name": "Blinken"}]
data is a dict; .items() provides key-value pairs, which I unpack into k and v (see tuple unpacking).
You can do this easily by writing a simple function like the one below:
import uuid
def format_dict(data: dict):
    return [dict(title=title, name=name, id=str(uuid.uuid4())) for title, name in data.items()]
where you split the items into different objects and add an identifier for each using uuid.
The full code can be modified like this:
import uuid
import os
import json

def format_dict(data: dict):
    return [dict(title=title, name=name, id=str(uuid.uuid4())) for title, name in data.items()]

list_of_dicts = []
for filename in os.listdir("DOCS/Countries Data"):
    with open(os.path.join("DOCS/Countries Data", filename), 'r', encoding='utf-8') as f:
        country_json = json.load(f)
        list_of_dicts.append(format_dict(country_json))

# list_of_dicts contains all file contents
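If you also want each reshaped list written back to disk, a small sketch that reuses the format_dict helper from the answer above; the "_out.json" suffix is an arbitrary choice, not something from the question:
import os
import json

src = "DOCS/Countries Data"
for filename in os.listdir(src):
    with open(os.path.join(src, filename), 'r', encoding='utf-8') as f:
        country_json = json.load(f)
    reshaped = format_dict(country_json)   # [{"title": ..., "name": ..., "id": ...}, ...]
    # write the new structure next to the original file, under an "_out.json" name
    out_path = os.path.join(src, os.path.splitext(filename)[0] + "_out.json")
    with open(out_path, 'w', encoding='utf-8') as f:
        json.dump(reshaped, f, indent=4, ensure_ascii=False)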

Dictionary from a String with particular structure

I am using Python 3 to read this file and convert it to a dictionary.
I have this string from a file and I would like to know how I could create a dictionary from it.
[User]
Date=10/26/2003
Time=09:01:01 AM
User=teodor
UserText=Max Cor
UserTextUnicode=392039n9dj90j32
[System]
Type=Absolute
Dnumber=QS236
Software=1.1.1.2
BuildNr=0923875
Source=LAM
Column=OWKD
[Build]
StageX=12345
Spotter=2
ApertureX=0.0098743
ApertureY=0.2431899
ShiftXYZ=-4.234809e-002
[Text]
Text=Here is the Text files
DataBaseNumber=The database number is 918723
..... (There are more than 1000 lines per file) ...
In the text I have "Name=Something", and I would like to convert it as follows:
{'Date': '10/26/2003',
 'Time': '09:01:01 AM',
 'User': 'teodor',
 'UserText': 'Max Cor',
 'UserTextUnicode': '392039n9dj90j32', .......}
The word between [ ] can be removed, like [User], [System], [Build], [Text], etc...
In some fields there is only the first part of the string:
[Colors]
Red=
Blue=
Yellow=
DarkBlue=
What you have is an ordinary properties file. You can use this example (Java) to read the values into a map:
try (InputStream input = new FileInputStream("your_file_path")) {
    Properties prop = new Properties();
    prop.load(input);
    // prop.getProperty("User") == "teodor"
} catch (IOException ex) {
    ex.printStackTrace();
}
EDIT:
For a Python solution, refer to the answered question.
You can use configparser to read .ini or .properties files (the format you have).
import configparser
config = configparser.ConfigParser()
config.read('your_file_path')
# config['User'] == {'Date': '10/26/2003', 'Time': '09:01:01 AM'...}
# config['User']['User'] == 'teodor'
# config['System'] == {'Type': 'Absolute', ...}
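Since the question wants the [Section] headers dropped and everything in one flat dictionary, here is a small sketch building on configparser. Note that configparser lowercases keys by default; setting config.optionxform = str before reading keeps the original case.
import configparser

config = configparser.ConfigParser()
config.optionxform = str          # keep key case as-is (Date, Time, ...)
config.read('your_file_path')

# merge every [Section] into a single flat dict, discarding the section names
# (a key that appears in two sections keeps the value from the last section read)
flat = {}
for section in config.sections():
    flat.update(config[section])

print(flat)   # {'Date': '10/26/2003', 'Time': '09:01:01 AM', ...}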
This can easily be done in Python, assuming your file is named test.txt.
It will also work for lines with nothing after the = as well as lines with multiple =.
d = {}
with open('test.txt', 'r') as f:
    for line in f:
        line = line.strip()          # Remove any space or newline characters
        parts = line.split('=')      # Split around the `=`
        if len(parts) > 1:
            d[parts[0]] = '='.join(parts[1:])   # rejoin so values containing '=' keep it
print(d)
Output:
{
"Date": "10/26/2003",
"Time": "09:01:01 AM",
"User": "teodor",
"UserText": "Max Cor",
"UserTextUnicode": "392039n9dj90j32",
"Type": "Absolute",
"Dnumber": "QS236",
"Software": "1.1.1.2",
"BuildNr": "0923875",
"Source": "LAM",
"Column": "OWKD",
"StageX": "12345",
"Spotter": "2",
"ApertureX": "0.0098743",
"ApertureY": "0.2431899",
"ShiftXYZ": "-4.234809e-002",
"Text": "Here is the Text files",
"DataBaseNumber": "The database number is 918723"
}
I would suggest doing some cleaning to get rid of the [ ] lines.
After that you can split the remaining lines on the "=" separator and then convert them to a dictionary.
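A compact sketch of that suggestion, assuming the same test.txt file as above: skip the [Section] lines and split each remaining line on the first "=" only.
d = {}
with open('test.txt') as f:
    for line in f:
        line = line.strip()
        # drop blank lines, [Section] headers, and anything without '='
        if not line or line.startswith('[') or '=' not in line:
            continue
        key, _, value = line.partition('=')   # split on the first '=' only
        d[key] = value
print(d)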

How to write line by line request output

I am trying to write the JSON output from my Python request to a file line by line. I already checked some similar issues on Stack Overflow, in the question write to file line by line python, without success.
Here is the code:
myfile = open("data.txt", "a")
for item in pretty_json["geonames"]:
    print(item["geonameId"], item["name"])
    myfile.write("%s\n" % item["geonameId"] + "https://www.geonames.org/" + item["name"])
myfile.close()
Here is the output from my pretty_json["geonames"]:
{
"adminCode1": "FR",
"lng": "7.2612",
"geonameId": 2661847,
"toponymName": "Aeschlenberg",
"countryId": "2658434",
"fcl": "P",
"population": 0,
"countryCode": "CH",
"name": "Aeschlenberg",
"fclName": "city, village,...",
"adminCodes1": {
"ISO3166_2": "FR"
},
"countryName": "Switzerland",
"fcodeName": "populated place",
"adminName1": "Fribourg",
"lat": "46.78663",
"fcode": "PPL"
}
Then, as the output saved in my data.txt, I'm getting:
11048419
https://www.geonames.org/Aïre2661847
https://www.geonames.org/Aeschlenberg2661880
https://www.geonames.org/Aarberg6295535
The expected result should be something like:
Aïre , https://www.geonames.org/11048419
Aeschlenberg , https://www.geonames.org/2661847
Aarberg , https://www.geonames.org/2661880
Could writing the output as CSV be a solution?
Regards.
Using the csv module.
Ex:
import csv

with open("data.txt", "a") as myfile:
    writer = csv.writer(myfile)            # Create Writer Object
    for item in pretty_json["geonames"]:   # Iterate list
        writer.writerow([item["name"], "https://www.geonames.org/{}".format(item["geonameId"])])  # Write row.
If I understand correctly, you want the same screen output in your file. That's easy. If you are on Python 3, just add file=myfile to your print call:
print (item["geonameId"],item["name"], file=myfile)
Just compose a proper printing format for the needed items:
...
for item in pretty_json["geonames"]:
    print("{}, https://www.geonames.org/{}".format(item["name"], item["geonameId"]))
Sample output:
Aeschlenberg, https://www.geonames.org/2661847
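To get that output into data.txt instead of (or as well as) the screen, the same format string can be written to the file; a minimal sketch, assuming pretty_json is already loaded as in the question:
with open("data.txt", "a", encoding="utf-8") as myfile:
    for item in pretty_json["geonames"]:
        # one "name, https://www.geonames.org/<id>" line per record
        myfile.write("{}, https://www.geonames.org/{}\n".format(item["name"], item["geonameId"]))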

Add "entry" to JSON File with Python

I need to modify a JSON file with Python. As I'm working with Python (and JSON) for the first time, I read some articles about it, but didn't understand it completely.
I managed to import a JSON file into Python, as some kind of array (or list?).
JSON looks like this:
{
"sources":[{
"id":100012630,
"name":"Activity Login Page",
"category":"NAM/Activity",
"automaticDateParsing":true,
"multilineProcessingEnabled":false,
"useAutolineMatching":false,
"forceTimeZone":true,
"timeZone":"Europe/Brussels",
"filters":[],
"cutoffTimestamp":1414364400000,
"encoding":"UTF-8",
"pathExpression":"C:\\NamLogs\\nam-login-page.log*",
"blacklist":[],
"sourceType":"LocalFile",
"alive":true
},{
"id":100001824,
"name":"localWinEvent",
"category":"NAM/OS/EventLog",
"automaticDateParsing":true,
"multilineProcessingEnabled":false,
"useAutolineMatching":false,
"forceTimeZone":false,
"filters":[],
"cutoffTimestamp":1409090400000,
"encoding":"UTF-8",
"logNames":["Security","Application","System","Others"],
"sourceType":"LocalWindowsEventLog",
"alive":true
},{
"id":100001830,
"name":"localWinPerf",
"category":"NAM/OS/Perf",
"automaticDateParsing":false,
"multilineProcessingEnabled":false,
"useAutolineMatching":false,
"forceTimeZone":false,
"filters":[],
"cutoffTimestamp":0,
"encoding":"UTF-8",
"interval":60000,
"wmiQueries":[{
"name":"NAMID Service",
"query":"SELECT * FROM Win32_PerfRawData_PerfProc_Process WHERE Name = 'tomcat7'"
},{
"name":"CPU",
"query":"select * from Win32_PerfFormattedData_PerfOS_Processor"
},{
"name":"Logical Disk",
"query":"select * from Win32_PerfFormattedData_PerfDisk_LogicalDisk"
},{
"name":"Physical Disk",
"query":"select * from Win32_PerfFormattedData_PerfDisk_PhysicalDisk"
},{
"name":"Memory",
"query":"select * from Win32_PerfFormattedData_PerfOS_Memory"
},{
"name":"Network",
"query":"select * from Win32_PerfFormattedData_Tcpip_NetworkInterface"
}],
"sourceType":"LocalWindowsPerfMon",
"alive":true
},
Now, as I have hundreds of files like this, I wrote a foreach over the whole directory:
for filename in os.listdir('./json/'):
    with open('./json/'+filename) as data_file:
        sources = json.load(data_file)
Now I would need something like a foreach source in sources again, which adds a row (or an entry, or whatever a "line" in JSON is called) to every source (something like collectorName=fileName) and then overwrites the old file with the new one.
The JSON would then look like this:
{
"sources":[{
"id":100012630,
"name":"Activity Login Page",
"category":"NAM/Activity",
"automaticDateParsing":true,
"multilineProcessingEnabled":false,
"useAutolineMatching":false,
"forceTimeZone":true,
"timeZone":"Europe/Brussels",
"filters":[],
"cutoffTimestamp":1414364400000,
"encoding":"UTF-8",
"pathExpression":"C:\\NamLogs\\nam-login-page.log*",
"blacklist":[],
"sourceType":"LocalFile",
"alive":true,
"collectorName":"Collector2910"
},{
"id":100001824,
"name":"localWinEvent",
"category":"NAM/OS/EventLog",
"automaticDateParsing":true,
"multilineProcessingEnabled":false,
"useAutolineMatching":false,
"forceTimeZone":false,
"filters":[],
"cutoffTimestamp":1409090400000,
"encoding":"UTF-8",
"logNames":["Security","Application","System","Others"],
"sourceType":"LocalWindowsEventLog",
"alive":true,
"collectorName":"Collector2910"
},{.....
I hope I could explain my issue and I'd be happy if someone could help me out (even with a totally different solution).
Thanks in advance
Michael
Here's one way to do it:
import os
import json

for filename in os.listdir('./json/'):
    sources = None
    with open('./json/'+filename) as data_file:
        sources = json.load(data_file)
    sourcelist = sources['sources']
    for i, s in enumerate(sourcelist):
        sources['sources'][i]['collectorName'] = 'Collector' + str(i)
    with open('./json/'+filename, 'w') as data_file:
        data_file.write(json.dumps(sources))
for filename in os.listdir('./json/'):
    with open('./json/'+filename) as data_file:
        datadict = json.load(data_file)
    # At this point you have a plain python dict.
    # This dict has a 'sources' key, pointing to
    # a list of dicts. What you want is to add
    # a 'collectorName': filename key:value pair
    # to each of these dicts
    for record in datadict["sources"]:
        record["collectorName"] = filename
    # now you just have to serialize your datadict back
    # to json and write it back to the file - which is
    # in fact a single operation
    with open('./json/'+filename, "w") as data_file:
        json.dump(datadict, data_file)
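One optional tweak, not required by either answer above: pass indent when dumping so the rewritten files stay human-readable.
with open('./json/'+filename, "w") as data_file:
    json.dump(datadict, data_file, indent=4)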
