I'm trying to append text to a file at a specific location.
I want to create a program that takes input from the user (name, image, id) and adds it to this file:
names = []
images = []
id = 0
url = ['https://somewebsiteUsingId10.com',
       'https://somewebsiteUsingId20.com']
if id == 5:
    names.append("Testing Names")
    images.append("Testing Images")
elif id == 0:
    names.append("Testing one names")
    images.append("Testing one Images")
I want the modified file to look like this:
names = []
images = []
id = 0
url = ['https://somewebsiteUsingId20.com',
       'https://somewebsiteUsingId10.com',
       'https://somewebsiteUsingId50.com']
if id == 5:
    names.append("Testing Names")
    images.append("Testing Images")
elif id == 0:
    names.append("Testing one names")
    images.append("Testing one Images")
elif id == 50:
    names.append("User input")
    images.append("User Input")
Thanks!
In cases like this, a good course of action is to put the variable data in a configuration file.
On start-up, your program reads the configuration file and processes it.
Another program can update the configuration file.
Python has the json module in its standard library. This supports lists and dicts, so it is a good match for Python data structures.
Say you write a file urls.json, looking like this:
[
"https://somewebsiteUsingId20.com",
"https://somewebsiteUsingId10.com",
"https://somewebsiteUsingId50.com"
]
In your program you can then do:
import json
with open("urls.json") as f:
    urls = json.load(f)
The variable urls now points to a list containing the aforementioned URLs.
Writing the config data works much the same way:
urls = [
"https://www.parrot.org",
"https://www.ministryofsillywalks.org",
"https://www.cheese.net",
]
with open("newurls.json", "w") as f:
json.dump(urls, f, indent=4)
The file newurls.json now contains:
[
"https://www.parrot.org",
"https://www.ministryofsillywalks.org",
"https://www.cheese.net"
]
Note that JSON is pretty flexible; you are not limited to strings:
import datetime
config = {
    'directories': ["https://www.parrot.org", "https://www.ministryofsillywalks.org"],
    'saved': str(datetime.datetime.now()),
    'count': 12
}
with open("configuration.json", "w") as cf:
    json.dump(config, cf, indent=4)
This would result in something like:
{
"directories": [
"https://www.parrot.org",
"https://www.ministryofsillywalks.org"
],
"saved": "2022-02-07 21:21:14.787420",
"count": 12
}
(You'd get another date/time, of course.)
The only major downside to JSON files is that they don't allow comments. If you need comments, use another format, for example INI-style files handled by the configparser module.
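A minimal sketch of reading such a file with configparser (the file name, section, and option below are made up for illustration):
import configparser

# settings.ini might contain:
#   [urls]
#   ; comments like this one are allowed in INI files
#   homepage = https://www.parrot.org
config = configparser.ConfigParser()
config.read("settings.ini")
homepage = config["urls"]["homepage"]  # values are read back as plain strings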
Note that there are other approaches, such as shelve or reading the file and calling eval() on it, but those have potential safety issues.
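Coming back to your original question, a minimal sketch of the update program could look like this (the file name entries.json and the id-keyed layout are assumptions, not something taken from your file):
import json

# entries.json maps each id to its name, image and url, e.g.
# {"50": {"name": "User input", "image": "User Input", "url": "https://somewebsiteUsingId50.com"}}
with open("entries.json") as f:
    entries = json.load(f)

# Take new values from the user and store them under their id
new_id = input("id: ")
entries[new_id] = {
    "name": input("name: "),
    "image": input("image: "),
    "url": "https://somewebsiteUsingId" + new_id + ".com",
}

# Write the updated configuration back to disk
with open("entries.json", "w") as f:
    json.dump(entries, f, indent=4)
Your main program then reads entries.json at start-up instead of carrying the growing if/elif chain in its source.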
Related
I'm trying to process a JSON file like the one below and extract its data into the output format shown below for further processing.
json file
{
"application_robotics-2.1.610.80350109": [
"/home/machine_process/application_robotics/services/linear_service/4.106.50109987/robotics.yaml",
"/home/machine_process/application_robotics/services/linear_service/4.106.50109987/application_robotics-4.106.50109987.zip"
],
"web_robotics-3.116.50100987": [
"/home/machine_process/application_robotics/services/web_robotics/3.116.50100987/robotics.yaml",
"/home/machine_process/application_robotics/services/web_robotics/3.116.50100987/web_robotics-3.116.50100987.zip"
]
}
Expected output format
name = "application_robotics-2.1.610.80350109" # where name is a variable to be used in the other portion of the code.
yaml = "/home/machine_process/application_robotics/services/linear_service/4.106.50109987/robotics.yaml" # where yaml is a variable.
zip = "/home/machine_process/application_robotics/services/linear_service/4.106.50109987/application_robotics-4.106.50109987.zip" # where zip is a variable.
The same format applies to the other entries.
Below is the code snippet I've come up with; I'm not quite getting the logic right. Any help would be really appreciated. Thanks.
with concurrent.futures.ProcessPoolExecutor() as executor:
    with open(file_path, "r") as input_json:
        json_data = json.load(input_json)
    for key, value in json_data.items():
        name = json_data[key]
        yaml = json_data[value]
        zip = json_data[value]
        file_location = os.path.dirname(tar)
        futures = executor.submit(
            other_function_name, yaml, zip, file_location, name
        )
        results.append(futures)
Current Output:
['home/machine_process/application_robotics/services/linear_service/4.106.50109987/robotics.yaml', '/home/machine_process/application_robotics/services/linear_service/4.106.50109987/application_robotics-4.106.50109987.zip']
Since name corresponds to the keys, yaml to the first element of each list, and zip_ to the second element (note that zip is a Python builtin, so avoid using it as a variable name), we can unpack them directly as we loop over the dictionary and pass them to the executor.
with concurrent.futures.ProcessPoolExecutor() as executor:
    with open(file_path, "r") as input_json:
        json_data = json.load(input_json)
    for name, (yaml, zip_) in json_data.items():
        file_location = os.path.dirname(tar)
        futures = executor.submit(other_function_name, yaml, zip_, file_location, name)
        results.append(futures)
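If you also need the return values of other_function_name, you can collect them once the loop is done, for example (a small sketch, assuming the function returns something you want to inspect):
for future in concurrent.futures.as_completed(results):
    print(future.result())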
I have a JSON file containing three fields: two are strings and the third is a field containing a list of values.
{ "STREAM": "stream",
"BASIS_STREAM": "basis",
"PATHS": "[/opt/path1,/opt/path2]"
}
Now I load that JSON
with open('/pathToJsonFile.json', 'r') as f:
    data = json.load(f)
Now I want to get those values.
stream=str(data["STREAM"])
basis=str(data["BASIS_STREAM"])
paths=data["BASE_PATHS"]
The issue is that paths is also treated as a string, although I have to use it as a list. I am converting the other fields with the str function because of Unicode. The code must be in Python 2.
Thanks a lot!
Say you have a file called data.json with the following contents:
{
"STREAM": "stream",
"BASIS_STREAM": "basis",
"PATHS": "[/opt/path1,/opt/path2]"
}
Maybe you could use str.split after calling json.load:
import json

with open('data.json', 'r') as f:
    data = json.load(f)
print 'data = %s' % data
stream = str(data['STREAM'])
basis = str(data['BASIS_STREAM'])
paths = [str(u_s) for u_s in data['PATHS'][1:-1].split(',')]
print 'stream = %s' % stream
print 'basis = %s' % basis
print 'paths = %s' % paths
Output:
data = {u'PATHS': u'[/opt/path1,/opt/path2]', u'BASIS_STREAM': u'basis', u'STREAM': u'stream'}
stream = stream
basis = basis
paths = ['/opt/path1', '/opt/path2']
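As an aside, if the format of the file is under your control, storing PATHS as a real JSON array avoids the string parsing entirely (a sketch of an alternative layout, not your current file):
{
    "STREAM": "stream",
    "BASIS_STREAM": "basis",
    "PATHS": ["/opt/path1", "/opt/path2"]
}
json.load then returns data['PATHS'] as a list directly (of unicode strings in Python 2, which you can pass through str as you already do for the other fields).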
Your /opt/path1 and /opt/path2 would need to be in quotation marks to be converted into a list directly. If your PATHS value always follows a template such as "[/XXX,/YYY,/ZZZ,/TTT,/KKK]", the following code should also help. It rewrites the data as "['/XXX','/YYY','/ZZZ','/TTT','/KKK']" so that it can easily be converted to a list using the ast library:
import json
import ast

with open("text_text.json") as f:
    data = json.load(f)

print(data["PATHS"])  # your data, still a plain string
# Insert quotes around each path so the string becomes a valid list literal
for i in data["PATHS"]:
    if i == "[":
        data["PATHS"] = data["PATHS"].replace("[", "['")
    elif i == ",":
        data["PATHS"] = data["PATHS"].replace(",/", "','/")
    elif i == "]":
        data["PATHS"] = data["PATHS"].replace("]", "']")

print(type(data["PATHS"]))
print(data["PATHS"])  # now a string that can be converted to a list
data_paths = ast.literal_eval(data["PATHS"])  # ast is used to convert the str to a list
print(data_paths)  # 'list' data
print(type(data_paths))
Running this on your example data prints the rewritten string "['/opt/path1','/opt/path2']" and then the resulting list ['/opt/path1', '/opt/path2']. It also works if your PATHS value contains more entries.
I am reading through a .json file and parsing some of the data to save into an Object. There are only 2000 or so items within the JSON that I need to iterate over, but the script I currently have running takes a lot longer than I'd like.
data_file = 'v1/data/data.json'
user = User.objects.get(username='lsv')
format = Format(format='Limited')
format.save()
lost_cards = []
lost_cards_file = 'v1/data/LostCards.txt'
with open(data_file) as file:
    data = json.load(file)
    for item in data:
        if item['model'] == 'cards.cardmodel':
            if len(Card.objects.filter(name=item['fields']['name'])) == 0:
                print(f"card not found: {item['fields']['name']}")
                lost_cards.append(item['fields']['name'])
            try:
                Rating(
                    card=Card.objects.get(name=item['fields']['name'], set__code=item['fields']['set']),
                    rating=item['fields']['rating'],
                    reason=item['fields']['reason'],
                    format=format,
                    rator=user
                ).save()
            except Exception as e:
                print(e, item['fields']['name'], item['fields']['set'])
                break

with open(lost_cards_file, 'w') as file:
    file.write(str(lost_cards))
The code is working as expected, but it's taking a lot longer than I'd like. I'm hoping there is a built-in JSON or iterator function that could accelerate this process.
There is. It's called the json module.
with open(data_file, 'r') as input_file:
    dictionary_from_json = json.load(input_file)
should do it.
I have a Python file with a lot of code and lists. I need to add an element to one particular list.
My questions are: how can I load a particular list from the .py file, and how can I add an element to it?
Here is my code:
import os, datetime, json
login1 = os.environ["login"].split('\\')[1].strip()
login = login1.split('.')[0].strip()
USERID = str(os.environ['USERID'])
header = login + USERID + '.json'
with open("d:\\python\\monitor.py", "r") as infile:
data = infile.readlines()
#here I need to load a "dev_personal_files" list and append additional element
Content of the monitor.py file:
#some python code
dev_personal_files = [
    'Yura.json',
    'Sasha.json'
]
staging_files = [
    #'ple.json',
    'retailReleaseServer.json',
    'topaz.json',
    'ple2.json',
    #'klub.json',
    'gaabtMX.json'
]
staging_files2 = [
    'retailDemo.json',
    'resort.json',
    'jhnkljkl.json',
    'hbjk,nm,.json',
    'bnbnj,jnk,.json'
]
#some python code
What I want the "dev_personal_files" list (the list inside the monitor.py file) to look like after adding the new record:
dev_personal_files = [
    'NEWRECORD.json',
    'Yura.json',
    'Sasha.json'
]
Getting a list (or any object) from a .py file is done by importing the file:
import mypyfile # <- leave off the .py
importedlist = mypyfile.dev_personal_files
Or you can just import the object itself:
from mypyfile import dev_personal_files
dev_personal_files.extend(my_list_of_extra_items)
However, the changes to the list will be lost when you end your Python session.
If you want to permanently store changes to the list, save it in a format like json, not in a py file.
import json
# Read data from the file
with open('myfile.json') as f:
    my_list = json.load(f)
# Add an item to your list
my_list.append('foo')
# Save data to the file
with open('myfile.json', 'w') as f:
    json.dump(my_list, f)
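Your monitor.py can then load the list from that file at start-up instead of hard-coding it (a sketch; the file name dev_personal_files.json is an assumption):
import json

# in monitor.py, instead of the hard-coded list:
with open('dev_personal_files.json') as f:
    dev_personal_files = json.load(f)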
I am playing with cgi (a file upload form); I receive the files as storage objects and store them in the input variable.
This is the simple iteration:
for file in input:
    filepath = ....
    filename, fileext = os.path.splitext(filepath)
    file_real_name = ....
    file_size = ....
    file_type = ...
    file_url = ....
    file_short_name = ...
    file_show_link = ....
    # etc
It would be easy if it were only one file, but what if I have more than one?
How can I have another value, like uploaded_files, that holds all the information from the iteration, so that I can access each uploaded file together with all the information gathered above?
I tried to read the docs but I can't wrap my head around some iteration concepts yet, sorry :)
You want to use a data structure to hold your data. Depending on the complexity, you may want to simply use a list of dictionaries:
files = []
for file in input:
    files.append({
        "path": get_path(file),
        "name": get_name(file),
        "size": get_size(file),
        ...
    })
Or, if you find you need to perform lots of operations on your data, you might want to make your own class and make a list of objects:
class SomeFile:
    def __init__(self, path, name, size, ...):
        self.path = path
        ...

    def do_something_with_file(self):
        ...

files = []
for file in input:
    files.append(SomeFile(get_path(file), get_name(file), get_size(file), ...))
Note that here you are following a pattern of building up a list by iterating over an iterator. You can do this efficiently using a list comprehension, e.g:
[{"path": get_path(file), "name": get_name(file), ...} for file in input]
Also note that file and input are really bad variable names, as they will mask the builtins file() and input().
# Build a list of dicts, one dict per iteration
results = []
for i in range(5):
    file_data = {}
    file_data['a'] = i
    file_data['b'] = i**2
    results.append(file_data)
print results