I have a JSON file containing three fields: two are strings and the third contains a list of values.
{ "STREAM": "stream",
"BASIS_STREAM": "basis",
"PATHS": "[/opt/path1,/opt/path2]"
}
Now I load that JSON
with open('/pathToJsonFile.json', 'r') as f:
    data = json.load(f)
Now I want to get those values.
stream = str(data["STREAM"])
basis = str(data["BASIS_STREAM"])
paths = data["PATHS"]
The issue is that paths is also treated as a string, although I need to use it as a list. I am converting the other fields with str because of Unicode. The code must run on Python 2.
Thanks a lot!
Say you have a file called data.json with the following contents:
{
    "STREAM": "stream",
    "BASIS_STREAM": "basis",
    "PATHS": "[/opt/path1,/opt/path2]"
}
Maybe you could use str.split after calling json.load:
with open('data.json', 'r') as f:
    data = json.load(f)
print 'data = %s' % data
stream = str(data['STREAM'])
basis = str(data['BASIS_STREAM'])
paths = [str(u_s) for u_s in data['PATHS'][1:-1].split(',')]
print 'stream = %s' % stream
print 'basis = %s' % basis
print 'paths = %s' % paths
Output:
data = {u'PATHS': u'[/opt/path1,/opt/path2]', u'BASIS_STREAM': u'basis', u'STREAM': u'stream'}
stream = stream
basis = basis
paths = ['/opt/path1', '/opt/path2']
Your /opt/path1 and /opt/path2 would need to be in quotation marks to be parsed as a list. If your PATHS value always follows a template such as "[/XXX,/YYY,/ZZZ,/TTT,/KKK]", the following code should also help. It rewrites the value as "['/XXX','/YYY','/ZZZ','/TTT','/KKK']" so that it can be converted to a list with the ast library. Please see the code below:
import json
import ast

with open("text_text.json") as f:
    data = json.load(f)

print(data["PATHS"])  # the original string value

# Insert quotes around each path so the string becomes a valid list literal.
for i in data["PATHS"]:
    if i == "[":
        data["PATHS"] = data["PATHS"].replace("[", "['")
    elif i == ",":
        data["PATHS"] = data["PATHS"].replace(",/", "','/")
    elif i == "]":
        data["PATHS"] = data["PATHS"].replace("]", "']")

print(type(data["PATHS"]))
print(data["PATHS"])  # now a string that can be parsed as a list
data_paths = ast.literal_eval(data["PATHS"])  # ast converts the str to a list
print(data_paths)  # 'list' data
print(type(data_paths))
Running this prints the PATHS value before and after the conversion, followed by the resulting list and its type. The same approach also works if your PATHS value has more entries, for example "[/XXX,/YYY,/ZZZ,/TTT,/KKK]".
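Separately, if you control whatever produces this JSON, the simplest fix is to store PATHS as a real JSON array, e.g. "PATHS": ["/opt/path1", "/opt/path2"]; json.load then gives you a list directly and no string surgery is needed. A minimal Python 2 sketch under that assumption:
import json

with open('/pathToJsonFile.json', 'r') as f:
    data = json.load(f)

# PATHS is already a list here; str() only drops the u'' prefix on Python 2
paths = [str(p) for p in data["PATHS"]]
print paths  # ['/opt/path1', '/opt/path2']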
So I am looking to create a script to make a mod for a game using Python. The script needs to copy all files from one directory to another, then alter those files to add a new attribute after a specific line. The issue I am having is that this game uses custom coding based on JSON-like formatting in a .txt file. I know how to do most of this; however, adding the new data is not something I can get to work.
My end goal will be to be able to do this to any file, so other mod authors can use it to add the data to their mods without needing to do it manually. I also want to try to make this script do more advanced things, but that is another goal that can wait till I get this bit working.
Sample data:
The line I need to add is position_priority = ###. The ### will be different based on what the building does (building categories).
Sample code I need to alter:
building_name_number = {
    base_build_time = 60
    base_cap_amount = 1
    category = pop_assembly
    <more code>
}
I need to put the new data just after building_name_number. This exact name will be unique; the only thing that will always be the same is that it starts with building. So regex is what I have been trying to use, but I have never dealt with regex, so I can't get it to work.
My current code:
if testingenabled:
    workingdir = R"E:/Illusives-Mods/Stellaris/Building Sorting"
    pattern = "^building_"
    Usortingindex = sortingindex["sorting_pop_assembly"]
    print(f"Testing Perameters: Index: {Usortingindex}, Version: {__VERSION__}, Working DIR: {workingdir}")
    # os.chdir(stellaris_buildings_path)
    os.chdir(workingdir)

for file in os.listdir(workingdir):
    if fnmatch.fnmatch(file, "*.txt"):
        print("File found")
        with open(file, "r+", encoding="utf-8") as openfiledata:
            alllines = openfiledata.read()
            for line in alllines:
                if line == re.match(r'(^building_)', line, re.M):
                    print("found match")
                    # print(f"{sorting_attrib}{Usortingindex}")
                    # print("position_priority = 200")
                    openfiledata.write("\n" + sorting_attrib + Usortingindex + "\n")
                    break
I am not getting any errors with this code, but it doesn't work.
I am using Python 3.9.6.
EDIT:
This is the code before the script runs:
allow = {
    hidden_trigger = {
        OR = {
            owner = { is_ai = no }
            NAND = {
                free_district_slots = 0
                free_building_slots <= 1
                free_housing <= 0
                free_jobs <= 0
            }
        }
    }
}
This is after it runs:
allow = {
    hidden_trigger = {
        OR = {
            owner = {
                is_ai = false
            }
            NAND = {
                free_district_slots = 0
                free_building_slots = {
                    value = 1
                    operand = <=
                }
                free_housing = {
                    value = 0
                    operand = <=
                }
                free_jobs = {
                    value = 0
                    operand = <=
                }
            }
        }
    }
}
The output must be the same as the input, at least in terms of the operators
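For reference, the line-by-line insertion described above (adding position_priority right after any line that starts with building_) could look something like the sketch below. This is only a hedged illustration of that single step: it keeps the file as plain text and deliberately ignores the operator-rewriting problem shown in the EDIT; sorting_attrib and Usortingindex are the question's own variables, and the file name in the usage comment is hypothetical.
import re

def add_priority(path, attrib, index):
    # Read the whole file, insert attrib + index right after each line that
    # starts with "building_", and write the result back as plain text.
    with open(path, "r", encoding="utf-8") as f:
        lines = f.readlines()
    out = []
    for line in lines:
        out.append(line)
        if re.match(r"^building_", line):
            out.append("\t" + attrib + str(index) + "\n")
    with open(path, "w", encoding="utf-8") as f:
        f.writelines(out)

# Hypothetical usage with the question's variables:
# add_priority("some_buildings.txt", sorting_attrib, Usortingindex)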
If you kept the data as JSON, you could read it all into Python (as a dictionary), search for and add items in that dictionary, and write the new dictionary back to JSON.
import json

text = '''{
    "building_name_number": {
        "base_build_time": 60,
        "base_cap_amount": 1,
        "category": "pop_assembly"
    },
    "building_other": {}
}'''

data = json.loads(text)

for key in data.keys():
    if key.startswith('building_'):
        data[key]["position_priority"] = 'some_value'

print(json.dumps(data, indent=4))
Result:
{
    "building_name_number": {
        "base_build_time": 60,
        "base_cap_amount": 1,
        "category": "pop_assembly",
        "position_priority": "some_value"
    },
    "building_other": {
        "position_priority": "some_value"
    }
}
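To complete the read-modify-write cycle described above, the modified dictionary can then be written back out; a minimal sketch (the output file name is just an example):
with open('buildings_modified.json', 'w') as f:
    json.dump(data, f, indent=4)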
I found the module paradox-reader, which can convert this file format to JSON. Using code from its file paradoxReader.py, I created an example which converts the string to a Python dictionary, adds a value, and converts it back to something similar to the original file. It may need more code in encode(), though:
import json
import re

def decode(data):
    data = re.sub(r'#.*', '', data)  # Remove comments
    data = re.sub(r'(?<=^[^\"\n])*(?<=[0-9\.\-a-zA-Z])+(\s)(?=[0-9\.\-a-zA-Z])+(?=[^\"\n]*$)', '\n', data, flags=re.MULTILINE)  # Separate one-line lists
    data = re.sub(r'[\t ]', '', data)  # Remove tabs and spaces
    definitions = re.findall(r'(#\w+)=(.+)', data)  # replace #variables with value
    if definitions:
        for definition in definitions:
            data = re.sub(r'^#.+', '', data, flags=re.MULTILINE)
            data = re.sub(definition[0], definition[1], data)
    data = re.sub(r'\n{2,}', '\n', data)  # Remove excessive new lines
    data = re.sub(r'\n', '', data, count=1)  # Remove the first new line
    data = re.sub(r'{(?=\w)', '{\n', data)  # reformat one-liners
    data = re.sub(r'(?<=\w)}', '\n}', data)  # reformat one-liners
    data = re.sub(r'^[\w-]+(?=[\=\n><])', r'"\g<0>"', data, flags=re.MULTILINE)  # Add quotes around keys
    data = re.sub(r'([^><])=', r'\1:', data)  # Replace = with : but not >= or <=
    data = re.sub(r'(?<=:)(?!-?(?:0|[1-9]\d*)(?:\.\d+)?(?:[eE][+-]?\d+)?)(?!\".*\")[^{\n]+', r'"\g<0>"', data)  # Add quotes around string values
    data = re.sub(r':"yes"', ':true', data)  # Replace yes with true
    data = re.sub(r':"no"', ':false', data)  # Replace no with false
    data = re.sub(r'([<>]=?)(.+)', r':{"value":\g<2>,"operand":"\g<1>"}', data)  # Handle < > >= <=
    data = re.sub(r'(?<![:{])\n(?!}|$)', ',', data)  # Add commas
    data = re.sub(r'\s', '', data)  # remove all white space
    data = re.sub(r'{(("[a-zA-Z_]+")+)}', r'[\g<1>]', data)  # make lists
    data = re.sub(r'""', r'","', data)  # Add commas to lists
    data = re.sub(r'{("\w+"(,"\w+")*)}', r'[\g<1>]', data)
    data = re.sub(r'((\"hsv\")({\d\.\d{1,3}(,\d\.\d{1,3}){2}})),', r'{\g<2>:\g<3>},', data)  # fix hsv objects
    data = re.sub(r':{([^}{:]*)}', r':[\1]', data)  # if there's no : between list elements, replace {} with []
    data = re.sub(r'\[(\w+)\]', r'"\g<1>"', data)
    data = re.sub(r'\",:{', '":{', data)  # Fix user_empire_designs
    data = '{' + data + '}'
    return json.loads(data)

def encode(data):
    text = json.dumps(data, indent=4)
    text = text[2:-2]
    text = text.replace('"', '').replace(':', ' =').replace(',', '')
    return text
# ----------
text = '''building_name_number = {
    base_build_time = 60
    base_cap_amount = 1
    category = pop_assembly
}'''
data = decode(text)
data['building_name_number']['new_item'] = 123
text = encode(data)
print(text)
Result:
building_name_number = {
    base_build_time = 60
    base_cap_amount = 1
    category = pop_assembly
    new_item = 123
}
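As a hedged follow-up, the loop idea from the JSON answer above can be applied to the decoded dictionary before encoding it back, so every top-level building_ block gets the new attribute (the value 200 is just an example):
original = '''building_name_number = {
    base_build_time = 60
    base_cap_amount = 1
    category = pop_assembly
}'''

data = decode(original)
for key in data:
    if key.startswith('building_'):
        data[key]['position_priority'] = 200  # example value

print(encode(data))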
I'm trying to process a JSON file like the one below and extract its data in the output format shown underneath, for further processing.
JSON file:
{
    "application_robotics-2.1.610.80350109": [
        "/home/machine_process/application_robotics/services/linear_service/4.106.50109987/robotics.yaml",
        "/home/machine_process/application_robotics/services/linear_service/4.106.50109987/application_robotics-4.106.50109987.zip"
    ],
    "web_robotics-3.116.50100987": [
        "/home/machine_process/application_robotics/services/web_robotics/3.116.50100987/robotics.yaml",
        "/home/machine_process/application_robotics/services/web_robotics/3.116.50100987/web_robotics-3.116.50100987.zip"
    ]
}
Expected output format
name = "application_robotics-2.1.610.80350109" # where name is a variable to be used in the other portion of the code.
yaml = "/home/machine_process/application_robotics/services/linear_service/4.106.50109987/robotics.yaml" # where yaml is a variable.
zip = "/home/machine_process/application_robotics/services/linear_service/4.106.50109987/application_robotics-4.106.50109987.zip" # where zip is a variable.
The same format applies to the other entries.
Below is the snippet of code I've come up with, but I'm not quite getting the logic right. Any help would be really appreciated. Thanks.
with concurrent.futures.ProcessPoolExecutor() as executor:
    with open(file_path, "r") as input_json:
        json_data = json.load(input_json)
    for key, value in json_data.items():
        name = json_data[key]
        yaml = json_data[value]
        zip = json_data[value]
        file_location = os.path.dirname(tar)
        futures = executor.submit(
            other_function_name, yaml, zip, file_location, name
        )
        results.append(futures)
Current Output:
['home/machine_process/application_robotics/services/linear_service/4.106.50109987/robotics.yaml', '/home/machine_process/application_robotics/services/linear_service/4.106.50109987/application_robotics-4.106.50109987.zip']
Since name corresponds to the keys, yaml to the first element of each list, and zip_ to the second element (note that zip is a Python builtin, so avoid using it as a variable name), we can unpack them directly as we loop over the dictionary and pass them to the executor.
with concurrent.futures.ProcessPoolExecutor() as executor:
    with open(file_path, "r") as input_json:
        json_data = json.load(input_json)
    for name, (yaml, zip_) in json_data.items():
        file_location = os.path.dirname(tar)
        futures = executor.submit(other_function_name, yaml, zip_, file_location, name)
        results.append(futures)
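If you also need the return values of other_function_name, one option is to wait on the collected futures afterwards; a small hedged sketch building on the loop above (results and other_function_name come from the question):
import concurrent.futures

# After the submit loop, collect each task's result as it finishes.
for future in concurrent.futures.as_completed(results):
    output = future.result()  # re-raises any exception from the worker
    print(output)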
I'm trying to append text to a file at a specific location. I want to create a program which takes input from the user (name, image, id) and adds it to this file:
names = []
images = []
id = 0
url = ['https://somewebsiteUsingId10.com',
       'https://somewebsiteUsingId20.com']

if id == 5:
    names.append("Testing Names")
    images.append("Testing Images")
elif id == 0:
    names.append("Testing one names")
    images.append("Testing one Images")
I want the modified file to look like this:
names = []
images = []
id = 0
url = ['https://somewebsiteUsingId20.com',
       'https://somewebsiteUsingId10.com',
       'https://somewebsiteUsingId50.com']

if id == 5:
    names.append("Testing Names")
    images.append("Testing Images")
elif id == 0:
    names.append("Testing one names")
    images.append("Testing one Images")
elif id == 50:
    names.append("User input")
    images.append("User Input")
Thanks!
In cases like this, a good course of action is to put the variable data in a configuration file.
On start-up, your program reads the configuration file and processes it.
Another program can update the configuration file.
Python has the json module in its standard library. This supports lists and dicts, so it is a good match for Python data structures.
Say you write a file urls.json, looking like this:
[
    "https://somewebsiteUsingId20.com",
    "https://somewebsiteUsingId10.com",
    "https://somewebsiteUsingId50.com"
]
In your program you can then do:
import json

with open("urls.json") as f:
    urls = json.load(f)
The variable urls now points to a list containing the aforementioned URLs.
Writing the config data goes about the same:
urls = [
    "https://www.parrot.org",
    "https://www.ministryofsillywalks.org",
    "https://www.cheese.net",
]

with open("newurls.json", "w") as f:
    json.dump(urls, f, indent=4)
The file newurls.json now contains:
[
    "https://www.parrot.org",
    "https://www.ministryofsillywalks.org",
    "https://www.cheese.net"
]
Note that JSON is pretty flexible; you are not limited to strings:
import datetime

config = {
    'directories': ["https://www.parrot.org", "https://www.ministryofsillywalks.org"],
    'saved': str(datetime.datetime.now()),
    'count': 12
}

with open("configuration.json", "w") as cf:
    json.dump(config, cf, indent=4)
This would result in something like:
{
    "directories": [
        "https://www.parrot.org",
        "https://www.ministryofsillywalks.org"
    ],
    "saved": "2022-02-07 21:21:14.787420",
    "count": 12
}
(You'd get another date/time, of course.)
The only major downside to JSON files is that they don't allow comments. If you need comments, use another format, such as INI-style files handled by the configparser module.
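For illustration, a minimal hedged sketch of that alternative (the file name urls.ini and its keys are just examples):
# urls.ini (example contents):
#   ; comments are allowed in INI files
#   [urls]
#   first = https://somewebsiteUsingId20.com
#   second = https://somewebsiteUsingId10.com

import configparser

parser = configparser.ConfigParser()
parser.read("urls.ini")
urls = list(parser["urls"].values())
print(urls)  # ['https://somewebsiteUsingId20.com', 'https://somewebsiteUsingId10.com']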
Note that there are other methods like shelve and read&eval but those have potential safety issues.
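Coming back to JSON and the data in the question, one hedged way to lay it out is a single object keyed by id, so new entries from user input can be appended without editing the Python source at all (the file name entries.json and its keys are assumptions):
import json

# Existing entries, e.g. {"0": {"name": "...", "image": "...", "url": "..."}}
with open("entries.json") as f:
    entries = json.load(f)

new_id = input("id: ")
entries[new_id] = {
    "name": input("name: "),
    "image": input("image: "),
    "url": "https://somewebsiteUsingId{}.com".format(new_id),
}

with open("entries.json", "w") as f:
    json.dump(entries, f, indent=4)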
So I'm trying to set up JSON so I can store data in between user sessions, like a name, but I don't know how to add or change a specific value in an external JSON file, for example {"name": ""}. How do I fill in that "" in the JSON file using Python?
I have already tried to use dumps, and all the tutorials use dumps.
The JSON in another file:
{
    "human_name": "",
    "oracle_name": "",
    "human_age": "",
    "human_gender": "",
    "oracle_gender": ""
}
The Python:
import json

with open('data.json', '+') as filedata:
    data = filedata.read()
used_data = json.loads(data)

if str(used_data(['human_name'])) == "":
    print("what is your name")
    name = input()
    json.dumps(name)
if str(used_data(['oracle_name'])) == "":
    print("what is my name")
    oracle_name = input()
    json.dumps(oracle_name)

print(str(['human_name']))
The expected result is that when I print the data it displays the input, but when I run it I get:
File "rember.py", line 3, in <module>
    with open('data.json', '+') as filedata:
ValueError: Must have exactly one of create/read/write/append mode and at most one plus
Try this code.
json.loads loads the entire JSON string as a Python dict object. The values in a dict are changed or added using dict[key] = value; you can't call a dict object to change its values.
The json.dumps method serializes an object to a JSON-formatted str, which you can then write into the same file or a different file, based on your requirement.
import json

with open('data.json', 'r') as filedata:
    data = filedata.read()

used_data = json.loads(data)

if used_data['human_name'] == "":
    print("what is your name")
    name = input()
    used_data['human_name'] = name

if used_data['oracle_name'] == "":
    print("what is my name")
    oracle_name = input()
    used_data['oracle_name'] = oracle_name

print(used_data)

with open('data.json', 'w') as filewrite:
    filewrite.write(json.dumps(used_data, indent=4))
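After answering the two prompts (say with "steve" and "maria", values chosen only for illustration), data.json would end up looking something like this:
{
    "human_name": "steve",
    "oracle_name": "maria",
    "human_age": "",
    "human_gender": "",
    "oracle_gender": ""
}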
Basically, what you need to do is load the JSON file as a dictionary, add the value, and save it.
import json

with open('./data.json', 'r') as f:
    d = json.load(f)

d['human_name'] = 'steve'
d['oracle_name'] = 'maria'

with open('./data.json', 'w') as f:
    json.dump(d, f, indent=4)
Below is the JSON structure I am pulling from my online weather station. I am also including a json-to-csv Python script that is supposed to convert JSON data to CSV output, but it only returns a KeyError. I want to pull data from "current_observation" only.
{
    "response": {
        "features": {
            "conditions": 1
        }
    },
    "current_observation": {
        "display_location": {
            "latitude":"40.466442",
            "longitude":"-85.362709",
            "elevation":"280.4"
        },
        "observation_time_rfc822":"Fri, 26 Jan 2018 09:40:16 -0500",
        "local_time_rfc822":"Sun, 28 Jan 2018 11:22:47 -0500",
        "local_epoch":"1517156567",
        "local_tz_short":"EST",
        "weather":"Clear",
        "temperature_string":"44.6 F (7.0 C)",
    }
}
import csv, json, sys

inputFile = open("pywu.cache.json", 'r')    # open json file
outputFile = open("CurrentObs.csv", 'w')    # load csv file

data = json.load(inputFile)  # load json content
inputFile.close()            # close the input file

output = csv.writer(outputFile)  # create a csv.writer
output.writerow(data[0].keys())  # header row
for row in data:
    output.writerow(row.values())  # values row
What's the best method to retrieve the temperature string and convert to .csv format? Thank you!
import pandas as pd
df = pd.read_json("pywu.cache.json")
df = df.loc[["local_time_rfc822", "weather", "temperature_string"],"current_observation"].T
df.to_csv("pywu.cache.csv")
Maybe pandas can be of help for you. The .read_json() function creates a nice DataFrame, from which you can easily choose the desired rows and columns, and it can save as CSV as well.
To add latitude and longitude to the CSV line, you can do this:
df = pd.read_json("pywu.cache.csv")
df = df.loc[["local_time_rfc822", "weather", "temperature_string", "display_location"],"current_observation"].T
df = df.append(pd.Series([df["display_location"]["latitude"], df["display_location"]["longitude"]], index=["latitude", "longitude"]))
df = df.drop("display_location")
df.to_csv("pywu.cache.csv")
To print the location as numeric values, you can do this:
df = pd.to_numeric(df, errors="ignore")
print(df['latitude'], df['longitude'])
This will find all keys (e.g. "temperature_string") specified inside of the json blob and then write them to a csv file. You can modify this code to get multiple keys.
import csv, json, sys

def find_deep_value(d, key):
    # Find the value of a key hidden within a dict[dict[...]]
    # Modified from https://stackoverflow.com/questions/9807634/find-all-occurrences-of-a-key-in-nested-python-dictionaries-and-lists
    # @param d: dictionary to search through
    # @param key: key to find
    if key in d:
        yield d[key]
    for k in d.keys():
        if isinstance(d[k], dict):
            for j in find_deep_value(d[k], key):
                yield j

inputFile = open("pywu.cache.json", 'r')  # open json file
outputFile = open("mypws.csv", 'w')       # load csv file

data = json.load(inputFile)  # load json content
inputFile.close()            # close the input file

output = csv.writer(outputFile)  # create a csv.writer

# Gives you a list of temperature_strings from within the json
temps = list(find_deep_value(data, "temperature_string"))
output.writerow(temps)
outputFile.close()
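As a hedged sketch of the "multiple keys" extension mentioned above, the same helper can be called once per field and the results written as one labelled row (the key names come from the sample JSON, and the file names match the snippets above):
import csv, json

keys = ["local_time_rfc822", "weather", "temperature_string"]

with open("pywu.cache.json") as f:
    data = json.load(f)

with open("mypws.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(keys)  # header row
    # next() takes the first match for each key, or "" if the key is absent
    writer.writerow([next(find_deep_value(data, k), "") for k in keys])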