I'm working with *.cfg files. The file can be read in a text editor like gedit and has this format:
% some comments
VAR_1= 1
%
% More comments
ANOTHER_VAR= -8
%
% comments again
VAR_THE_COMEBACK= 10
I want to create multiple config files, changing only VAR_1= 1...2...3.........10. I managed to read the *.cfg file without any extra imports in Python, but I can't find a way to change just this parameter, save the file, and create another one with a different value for VAR_1.
My code so far is really simple:
import os
os.chdir('/home/leonardo/Desktop')
f = open('file.cfg','r') #if I replace r by w I erase the file ....
a = f.read()
print a.find('1')
a.replace('1','2') #I tried this but. ... :(
f.close()
Any tips?
Thank you for the help!
Untested code, but you will get the idea:
with open('file.cfg', 'r') as f:
    contents_by_line = f.readlines()

# find the line that holds the variable
for var_index, line in enumerate(contents_by_line):
    if line.startswith("VAR_"):
        break
else:
    raise RuntimeError("VAR_ not found in file")

for var_i, new_cfg_file in ((2, "file2.cfg"),
                            (3, "file3.cfg")):  # add files as you want
    with open(new_cfg_file, "w") as fout:
        for i, line in enumerate(contents_by_line):
            if i == var_index:
                fout.write("VAR_1= %d\n" % var_i)  # keep the space after '=' as in the original file
            else:
                fout.write(line)
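If you want all ten variants rather than spelling them out by hand, the pair list can be built with a range; a small sketch (the fileN.cfg naming is just my assumption):
# build (value, file name) pairs for VAR_1 = 2 ... 10
targets = [(v, "file%d.cfg" % v) for v in range(2, 11)]
Then iterate over targets instead of the hand-written tuple above.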
Thank you guys for all the help. I changed my approach to the problem: since the lines are all basically the 'same', I just build a new line and substitute it with a replace function I found here on Stack Overflow. I hope it helps someone someday.
This script will automate a series of CFD simulations for my final project at college. It creates a series of folders with some simulation conditions in their names, copies the mesh file and the new config file into each folder, and the last line will run the code, but I still need to work on the base simulation setup before running the script.
The code is still in a preliminary phase; I'll change it to be more readable and easier to modify.
Thank you guys!
Any tip is welcome, I'm trying to improve my coding skills :)
The code:
import fileinput
import os
import shutil
import subprocess
import sys

class stuff:
    root_folder = '/home/leonardo/Desktop/testzone'
    mini_mach = 0.3
    maxi_mach = 1.3
    number_steps = 3
    increment = (maxi_mach - mini_mach) / number_steps
    config_file = 'inv_NACA0012.cfg'
    parameter_1 = 'MACH_NUMBER= 0.8'
    parameter_2 = 'CONV_NUM_METHOD_ADJFLOW= JST'
    init_pa = 'MACH_NUMBER= '    # use a space after '='
    init_pa2 = 'CONV_NUM_METHOD_ADJFLOW= '    # use a space after '='
    airfoil = 'NACA0012'
    command1 = 'parallel_computation.py -f '    # use a space before the last quote
    command2 = '-n 2'
    mesh_file = 'mesh_NACA0012_inv.su2'

class modify:
    def replaceAll(self, files, searchExp, replaceExp):
        # fileinput with inplace=1 redirects stdout into the file being edited
        for line in fileinput.input(files, inplace=1):
            if searchExp in line:
                line = line.replace(searchExp, replaceExp)
            sys.stdout.write(line)

mod = modify()
stuff = stuff()

for i in xrange(stuff.number_steps):
    # format the current Mach number with two decimals
    mach_name = stuff.airfoil + '_mach_' + '%.2f' % stuff.mini_mach
    folder_name = stuff.root_folder + '/' + mach_name
    print 'creating ...' + folder_name
    os.makedirs(folder_name)
    file_father = stuff.root_folder + '/' + stuff.config_file
    shutil.copy2(file_father, folder_name)
    mesh_father = stuff.root_folder + '/' + stuff.mesh_file
    shutil.copy2(mesh_father, folder_name)
    os.chdir(folder_name)
    pre_mod_file = mach_name + '.cfg'
    os.renames(stuff.config_file, pre_mod_file)
    new_parameter_1 = stuff.init_pa + '%.2f' % stuff.mini_mach
    new_parameter_2 = stuff.init_pa2 + '%.2f' % stuff.mini_mach
    mod.replaceAll(pre_mod_file, stuff.parameter_1, new_parameter_1)
    mod.replaceAll(pre_mod_file, stuff.parameter_2, new_parameter_2)
    stuff.mini_mach += stuff.increment
    #subprocess.check_call(stuff.command1 + pre_mod_file + ' ' + stuff.command2)
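A note on the commented-out last line: subprocess.check_call wants either an argument list or shell=True with a single string, so when it gets re-enabled it could look roughly like this (a sketch, assuming parallel_computation.py is executable and on the PATH and that '-n 2' stays fixed):
# sketch: pass the command as an argument list instead of one long string
subprocess.check_call(['parallel_computation.py', '-f', pre_mod_file, '-n', '2'])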
I want to have a new file created each time a button is pressed.
To make things easier I chose to use the time to make unique file names.
But for some reason this only creates the file once, and writes to it the next time:
import os
import time

timestr = time.strftime("%H-%M-%S")
dpath = os.path.join(os.path.expanduser("~"), "Documents")
if not os.path.exists(dpath):
    os.makedirs(dpath)
fpath = os.path.join(dpath, timestr + ".bat")
# sf1, sf2, sf3 are defined elsewhere in the program
open(fpath, "w+").write("""netsh interface ipv4 set address name="Ethernet" static """ +
                        str(sf1) + " " + str(sf2) + " " + str(sf3))
Please do the following to solve your problem. Follow the comments and feel free to ask questions.
from datetime import datetime
import time

def read_to_file_once(list_of_strings):
    # get the current date and time
    filename = "myfile" + datetime.today().strftime("%Y_%m_%d_%H_%M_%S")
    with open(filename, mode="a") as f:  # append mode "a" creates the file if it does not exist
        for line in list_of_strings:
            f.write(str(line) + "\n")

if __name__ == "__main__":
    read_to_file_once([111, 1112, 3434])
    time.sleep(2)
    read_to_file_once([888, "ABC", 3434])
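As an aside, the timestamp above has one-second resolution, so two calls within the same second will append to the same file. If that matters for your use case (just an assumption on my part), %f adds microseconds:
from datetime import datetime

# %f appends microseconds, so even rapid successive calls get distinct names
filename = "myfile" + datetime.today().strftime("%Y_%m_%d_%H_%M_%S_%f")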
So I have my main Python script, which I run and which essentially passes three arguments (-p, -e and -d) to another Python script. I have been using subprocess to do this, which I understand.
What I want to achieve is, rather than using subprocess, to import the second file 'generate_json.py' and pass the three arguments to its main() function. How can I pass the three arguments like I do in my subprocess call?
My code for my main script is as follows:
import os
import generate_json as gs

def get_json_location(username=os.getlogin()):
    first = "/Users/"
    last = "/Desktop/data-code/Testdata"
    result = first + username + last
    return result
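To illustrate what I mean, something roughly like this is what I'm after (hypothetical, since generate_json.main() currently reads its arguments from the command line):
# hypothetical call, assuming main() were refactored to take the three values directly
gs.main(parameter_file=get_json_location(),
        export_data_file="some_file.yaml",   # placeholder
        export_date="2019-01-01")            # placeholder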
Assuming that the script files do not have to be used individually, i.e. generate_json.py run on its own from the command line, I think a cleaner approach would be to wrap the generate_json.py functions in a class.
In this case I renamed generate_json.py to ConfigurationHandler.py.
import os
import json
from functions import read_config


class ConfigurationHandler(object):
    def __init__(self, new_parameter_file, new_export_data_file, new_export_date):
        self._parameter_file = new_parameter_file
        self._export_data_file = new_export_data_file
        self._export_date = new_export_date
        self._parsed_configuration = self._read_configuration()
        self._perform_some_action1()
        self._perform_some_action2()

    def _read_configuration(self):
        """Uses the lower-level function `read_config` in functions.py to read the configuration file."""
        parsed_configuration = read_config(self._export_data_file)
        return parsed_configuration

    def _perform_some_action1(self):
        pass

    def _perform_some_action2(self):
        # Logic code for parsing goes here.
        pass

    def get_config(self):
        """Returns the configuration."""
        return [self._parameter_file, self._parsed_configuration, self._export_date]

    def json_work(self):
        cfg = self.get_config()[0]   # json location
        data = self.get_config()[1]  # export_agent_core_agent.yaml
        date = self.get_config()[2]  # synthetic data folder - YYYY-MM-DD
        if not date:
            date = ""
        else:
            date = date + "/"
        json_location = cfg  # json data path
        json_database = data["config"]["database"]
        json_collection = data["config"]["collection"]
        json_path = "{0}/{1}{2}/{3}/{3}.json".format(json_location, date, json_database, json_collection)
        json_base_name = json_database + "/" + json_collection + "/" + os.path.basename(json_path)  # json filename
        current_day = date
        with open('dates/' + current_day + '.json', 'a') as file:
            data = {}
            if os.path.exists(json_path):
                json_file_size = str(os.path.getsize(json_path))  # json file size
                print("File Name:" " " + json_base_name + " " "Exists " + "\n")
                print("File Size:" " " + json_file_size + " " "Bytes " "\n")
                print("Writing to file")
                # if json_path is not False:
                data['File Size'] = int(json_file_size)
                data['File Name'] = json_base_name
                json.dump(data, file, sort_keys=True)
                file.write('\n')
            else:
                print(json_base_name + " " "does not exist")
                print("Writing to file")
                data['File Name'] = json_base_name
                data['File Size'] = None
                json.dump(data, file, sort_keys=True)
                file.write('\n')
Then in main.py
from ConfigurationHandler import ConfigurationHandler


def main():
    # Drive the program from here and add the functionality together.
    # Routine to do some work here and get the required variables
    parameter_file = "some_parameter"
    export_data_file = "some_file.yaml"
    new_export_date = "iso_8601_date_etc"

    conf_handl = ConfigurationHandler(parameter_file, export_data_file, new_export_date)
    configuration = conf_handl.get_config()
    conf_handl.json_work()


if __name__ == '__main__':
    main()
In the project, you should aim to have only one main function and split up the functionality accordingly.
It will be much easier to change parts of the program later on when everything is split out evenly.
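One small detail to watch: json_work() appends into a dates/ directory, so that folder has to exist before the first run. A minimal sketch (assuming it should live next to the script):
import os

os.makedirs('dates', exist_ok=True)  # json_work() writes dates/<date>.json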
So far I have got the following:
from genrate_jsonv2 import ConfigurationHandler
import os
import argparse


def get_json_location(username=os.getlogin()):
    first = "/Users/"
    last = "/Desktop/data-code/Testdata"
    result = first + username + last
    return result


def get_config():
    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--export-date", action="store", required=True)
    args = parser.parse_args()
    return [args.export_date]


yml_directory = os.listdir('yaml')
yml_directory.remove('export_config.yaml')
data = get_config()[0]


def main():
    for yml in yml_directory:
        parameter_file = get_json_location
        export_data_file = yml
        new_export_date = data
        conf_handl = ConfigurationHandler(parameter_file, export_data_file, new_export_date)
        configuration = conf_handl.get_config()
        conf_handl.json_work()


if __name__ == '__main__':
    main()
The issue is that for export_data_file I don't really want to be passing a file path; I would rather have it loop through each file name in the yaml directory. When I do that I get an error saying 'Error reading config file'.
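A likely cause (an assumption, since the traceback isn't shown): os.listdir('yaml') returns bare file names, so read_config ends up looking for 'foo.yaml' in the working directory instead of inside yaml/. Joining the directory onto each name may be all that's needed:
# sketch: hand over the full path rather than the bare file name
for yml in yml_directory:
    export_data_file = os.path.join('yaml', yml)  # e.g. 'yaml/foo.yaml'
    # ... rest of the loop body unchanged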
I am trying to make a program that writes a new file each time it runs.
For example:
I run the program once. The folder is empty so it adds a file to the folder named "Test_Number_1.txt"
I run the program for the second time. The folder has one file, so it sees that file, scans for another one, finds none, and creates a new file named "Test_Number_2.txt"
This is what I had in mind, but the code won't leave the while loop. I am still new to programming so excuse my inefficient coding haha.
import time

memory = # something that changes each time I run the program
print(memory)
print("<-<<----<<<---------<+>--------->>>---->>->")
found_new = False
which_file = 0
while not found_new:
    try:
        file = open("path_to_folder/Test_Number_" + str(which_file) + ".txt", "a")
    except FileNotFoundError:
        which_file += 1
        file_w = open("path_to_folder/Test_Number_" + str(which_file) + ".txt", "w")
        found_new = True
        break
    print("Looked", which_file, "times.")
    which_file += 1
    time.sleep(1)

file = open("path_to_folder/Test_Number_" + str(which_file) + ".txt", "a")
file.write(memory)
file.close()
print("Done.")
I put in the time.sleep(1) to delay the process in case of a bug, so that my computer didn't get overloaded, and thank goodness, because the program just keeps adding more and more files until I force quit it.
One simple solution
from os.path import isfile

def file_n(n):
    return "Test_number_" + str(n) + ".txt"

n = 0
while isfile(file_n(n)):
    n += 1

f = open(file_n(n), "w")
f.write("data...")
f.close()
The problem is that if many instances of that same program run at the same time, some files may be overwritten.
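If concurrent runs really are a concern, one way around the race is to let the operating system do the existence check atomically with exclusive-creation mode 'x' (a sketch along the same lines as above):
def file_n(n):
    return "Test_number_" + str(n) + ".txt"

n = 0
while True:
    try:
        # 'x' creates the file and raises FileExistsError if it already exists,
        # so two processes can never grab the same name
        with open(file_n(n), "x") as f:
            f.write("data...")
        break
    except FileExistsError:
        n += 1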
So I'm writing a script to take large csv files and divide them into chunks. The lines in each file are formatted as follows:
01/07/2003,1545,12.47,12.48,12.43,12.44,137423
Where the first field is the date. The next field to the right is a time value. These data points are at minute granularity. My goal is to fill files with 8 days worth of data, so I want to write all the lines from a file for 8 days worth into a new file.
Right now, the program only writes one line per "chunk" rather than all the lines. The code is shown below, and screenshots show how the chunk directories are made as well as a file and its contents.
For reference, the screenshot shows day 8, and 1559 means it stored the last line right before the mod condition became true. So I'm thinking everything is getting overwritten somehow, since only the last values are being stored.
import os
import time

CWD = os.getcwd()
WRITEDIR = CWD + "/Divided Data/"
if not os.path.exists(WRITEDIR):
    os.makedirs(WRITEDIR)
FILEDIR = CWD + "/SP500"
os.chdir(FILEDIR)

valid_files = []
filelist = open("filelist.txt", 'r')
for file in filelist:
    cur_file = open(file.rstrip() + ".csv", 'r')
    cur_file.readline()  # skip first line
    prev_day = ""
    count = 0
    chunk_count = 1
    for line in cur_file:
        day = line[3:5]
        WDIR = WRITEDIR + "Chunk"
        cur_dir = os.getcwd()
        path = WDIR + " " + str(chunk_count)
        if not os.path.exists(path):
            os.makedirs(path)
        if day != prev_day:
            # print(day)
            prev_day = day
            count += 1
            # Create new directory
            if count % 8 == 0:
                chunk_count += 1
                PATH = WDIR + " " + str(chunk_count)
                if not os.path.exists(PATH):
                    os.makedirs(PATH)
                print("Chunk count: " + str(chunk_count))
                print("Global count: " + str(count))
        temp_path = WDIR + " " + str(chunk_count)
        os.chdir(temp_path)
        fname = file.rstrip() + str(chunk_count) + ".csv"
        with open(fname, 'w') as f:
            try:
                f.write(line + '\n')
            except:
                print("Could not write to file. \n")
        os.chdir(cur_dir)
        if chunk_count >= 406:
            continue
    cur_file.close()
    # count += 1
The answer is in the comment but let me give it here so that your question is answered.
You're opening your file in 'w' mode which overwrites all the previously written content. You need to open it in the 'a' (append) mode:
fname = file.rstrip()+str(chunk_count)+".csv"
with open(fname, 'a') as f:
See more on the open function and its modes in the Python documentation. It specifically mentions that mode 'w+' truncates the file.
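To make the difference concrete, a tiny standalone demo (the file name is arbitrary):
with open("demo.txt", "w") as f:
    f.write("first\n")
with open("demo.txt", "w") as f:   # 'w' truncates: "first" is gone
    f.write("second\n")
with open("demo.txt", "a") as f:   # 'a' appends after "second"
    f.write("third\n")
print(open("demo.txt").read())     # prints "second" then "third"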
I need to compare text files with two other files and then get the result as an output. So I taught myself enough to write the following script, which works fine and compares all of the files in a specific directory; however, I have multiple directories with text files inside. What I need is to compare all of the text files in all of the directories and have an output file for each directory. Is there a way to improve the code below to do that?
import glob
import os
import sys

sys.stdout = open("citation.txt", "w")

for filename in glob.glob('journal*.txt'):
    f1 = open(filename, 'r')
    f1data = f1.readlines()
    f2 = open('chem.txt')
    f2data = f2.readlines()
    f3 = open('bio.txt')
    f3data = f3.readlines()
    chem = 0
    bio = 0
    total = 0
    for line1 in f1data:
        i = 0
        for line2 in f2data:
            if line1 in line2:
                i += 1
                total += 1
                chem += 1
        if i > 0:
            print 'chem ' + line1 + "\n"
        for line3 in f3data:
            if line1 in line3:
                i += 1
                total += 1
                bio += 1
        if i > 0:
            print 'bio ' + line1 + "\n"
    print filename
    print total
    print 'bio ' + str(bio)
    print 'chem ' + str(chem)
Thanks in advance!
Just use a list of directories and a for loop
directories = ['folder1','folder2',...]

for i, folder in enumerate(directories):
    sys.stdout = open("citation{}.txt".format(i), "w")
    ...
    [put the rest of your code here]
This will name the output files citation0.txt, citation1.txt, and so on, but you can use another format if you want, just by changing how that name is built.
And if you want each citation.txt to go into the actual directory, just change your code to this:
for folder in directories:
    citation = os.path.join(folder, "citation.txt")
    sys.stdout = open(citation, "w")
This will create a path for a new citation.txt file in each directory as the loop runs. Make sure to import os at the start of your file, if you haven't already.
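And if you'd rather not hard-code the folder names at all, the whole loop could look roughly like this, discovering subdirectories automatically and globbing inside each one (the folder layout is an assumption on my part):
import glob
import os
import sys

# assume every subfolder of the current directory holds journal*.txt files
directories = [d for d in os.listdir('.') if os.path.isdir(d)]

for folder in directories:
    sys.stdout = open(os.path.join(folder, "citation.txt"), "w")
    for filename in glob.glob(os.path.join(folder, 'journal*.txt')):
        # ... run the existing comparison code here
        pass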