I have a function f2(a, b).
It is only ever called by a minimize algorithm, which evaluates the function with different values of a and b on each iteration. I would like to store these iterations in Excel for plotting.
Is it possible to extract these values easily (I only need to paste them all into Excel or a text file)? A conventional return or print won't work within f2. Is there some other way to extract the values of a and b to a public list in the main body?
The algorithm may iterate dozens or hundreds of times.
So far I have tried:
Printing to the console (this data can't easily be pasted into Excel)
Writing to a CSV file within f2, though the CSV file gets overwritten within the function on each call.
Append the values to a global list.
values = []

def f2(a, b):
    values.append((a, b))
    # todo: actual function logic goes here
Then you can look at values in the main scope once you're done iterating.
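For example, here is a minimal sketch of how this might look with scipy.optimize.minimize (assuming that is the minimizer in use); the quadratic objective and the iterations.csv filename are just placeholders, and the collected pairs are written to a CSV file that Excel can open:

import csv
from scipy.optimize import minimize

values = []

def f2(a, b):
    values.append((a, b))                 # record every evaluation
    return (a - 1) ** 2 + (b - 2) ** 2    # placeholder objective

# minimize passes a single parameter vector, so unpack it into a and b
minimize(lambda p: f2(p[0], p[1]), x0=[0.0, 0.0])

# dump all recorded (a, b) pairs to a CSV file for Excel
with open("iterations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["a", "b"])
    writer.writerows(values)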
Writing to a CSV file within f2, though the CSV file gets overwritten within the function on each call.
Not if you open the file in append mode:
with open("file.csv", "a") as myfile:
    myfile.write(f"{a},{b}\n")
I'm working on a program that, in short, creates a CSV file and compares it line by line to a previously generated CSV file (created on a previous run of the script), and logs the differences between the two.
I'm having a weird issue where csv.DictReader does not appear to read all rows of the NEW log - but it DOES read all rows of the OLD log. What makes this issue even more puzzling is that if I run the program again, it creates a new log and will now read the previous log all the way to the end.
Here's what I mean if that made no sense:
Run program to generate LOG A
Run program again to generate LOG B
csv.DictReader does not read all the way through LOG B, but it DOES read all the way through LOG A
Run program again to generate LOG C
csv.DictReader does NOT read all the way through LOG C, but it DOES read all the way through LOG B (which it previously didn't, although no information in the CSV file has changed)
Here's the relevant function:
def compare_data(newLog, oldLog):
    # this function compares the new scan log to the old log to determine if, and by how much,
    # memory space has changed within each directory
    # both arguments should be filenames of the logs
    # use the csv library to create dictionary objects out of the files and read through them
    newReader = csv.DictReader(newLog)
    oldReader = csv.DictReader(oldLog)
    oldDirs, newDirs = [], []

    # write data from the new log into a list of dictionaries
    for row in newReader:
        newLogData = {}
        newLogData['ScanDate'] = row['ScanDate']
        newLogData['Directory'] = row['Directory']
        newLogData['Size'] = int(row['Size'])
        newDirs.append(newLogData)

    # write data from the old log into another list of dictionaries
    for row in oldReader:
        oldLogData = {}
        oldLogData['ScanDate'] = row['ScanDate']
        oldLogData['Directory'] = row['Directory']
        oldLogData['Size'] = int(row['Size'])
        oldDirs.append(oldLogData)

    # now compare data between the two dict lists to determine what's changed
    for newDir in newDirs:
        for oldDir in oldDirs:
            dirMatch = False
            if newDir['Directory'] == oldDir['Directory']:
                # we have linked dirs; now check to see if their size has changed
                dirMatch = True
                if newDir['Size'] != oldDir['Size']:
                    print(newDir['Directory'], 'has changed in size! It used to be', oldDir['Size'],
                          'bytes, now it\'s', newDir['Size'], 'bytes! Hype')

    # now that we have determined changes, check for unique dirs
    # unique meaning either a dir deleted since the last scan, or a new dir added since the last scan
    find_unique_dirs(oldDirs, newDirs)
Based on the fact that the old log gets read in its entirety, I don't think it would be an issue of open quotes in filenames or anything like that.
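For reference, csv.DictReader is normally given an open file object rather than a filename string; a minimal sketch of that usage, assuming a hypothetical new_log.csv with the ScanDate, Directory and Size columns used above:

import csv

# DictReader iterates over an open file object and yields one dict per row,
# keyed by the header row (here: ScanDate, Directory, Size)
with open("new_log.csv", newline="") as f:
    newReader = csv.DictReader(f)
    newDirs = [{'ScanDate': row['ScanDate'],
                'Directory': row['Directory'],
                'Size': int(row['Size'])} for row in newReader]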
I have a problem similar to the one described below in a different question:
Reading from a CSV file while it is being written to
Unfortunately the solution is not explained.
I'd like to create a script that dynamically plots some variables from a .csv file. The .csv is updated every time a sensor registers something.
My basic idea was to read the file at a fixed interval and, if the number of rows has increased, update the plot with the new values.
How can I proceed?
I am not that experienced with csv, but take this logic:
import csv

def writeandRead(one_row, path="file.csv"):
    # "a" appends to the file; if you want to overwrite instead, just change "a" to "w"
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(one_row)
    # read the whole file back so the caller sees everything written so far
    with open(path, "r", newline="") as f:
        return list(csv.reader(f))

for row in rows:  # rows can be any iterable of row lists
    print(writeandRead(row))
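The polling idea from the question could look roughly like this; a minimal sketch, assuming a matplotlib interactive plot and a two-column CSV of sensor readings (the sensor.csv filename and the column layout are placeholders):

import csv
import time
import matplotlib.pyplot as plt

plt.ion()                      # interactive mode so the plot can be refreshed
fig, ax = plt.subplots()
last_row_count = 0

while True:
    with open("sensor.csv", newline="") as f:     # placeholder filename
        rows = list(csv.reader(f))
    if len(rows) > last_row_count:                # only redraw when new rows have arrived
        last_row_count = len(rows)
        ys = [float(r[1]) for r in rows]          # assumes the value is in the second column
        ax.clear()
        ax.plot(ys)
        fig.canvas.draw()
        fig.canvas.flush_events()
    time.sleep(1.0)                               # fixed polling period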
I am trying to save variables to a text file after I run a program, read them in a different module, and then call them back in the original program. The point of this is to make plots of 4 different outcomes of the main program.
My attempt at the code:
# main program
from numpy import array

a = array([[0.05562032, 0.05386903, 0.05216994, 0.03045489, 0.03029977, 0.03014554],
           [0.        , 0.00175129, 0.00345037, 0.15353227, 0.1536874 , 0.15384163]])
# save parameters in an external file
save_paramaters = open('save.txt','w')
save_paramaters.write(str(a))
save_paramaters.close()
I open the txt file in another Python module and save it as a variable, after correcting the file manually (replacing spaces with commas):
#new program
dat = "save.txt"
b = open(dat, "r")
c = array(b.read())
In the main program I now call the variable with
a = array([[0.05562032, 0.05386903, 0.05216994, 0.03045489, 0.03029977, 0.03014554],
           [0.        , 0.00175129, 0.00345037, 0.15353227, 0.1536874 , 0.15384163]])
# save parameters in an external file
save_paramaters = open('save.txt','w')
save_paramaters.write(str(a))
save_paramaters.close()
#open the variable
from program import c
from matplotlib.pyplot import figure, plot
#and try to plot it
plot(c[1][:], label ='results2')
plot(c[0][:], label ='results1')
File "/Example.py", line 606, in example
plot(c[1][:], label ='results2') #model
IndexError: too many indices for array
If you want to save an array, you can't just save it as text and expect Python to figure it out. When you read it back, you're reading it as text (as a string), and that's all your program can know.
If you want to save complex objects you have several other options:
You can save text (as you do) but parse it manually when reading it to turn it back into an array. This is hard to write without bugs, and it gets even harder for anything more complicated than an array.
You can save it using pickle - while this is a good solution for almost all objects, the file created wouldn't be readable to humans, and that's perhaps not what you want.
A good middle ground is to save objects as JSON - this is a standard for most datatypes and would work beautifully for dicts and lists and tuples (but will fail with more complex objects), and more importantly, it will be readable to humans such as yourself.
Let's say you go with JSON. You save a list like this:
import json

with open('save.txt', 'w') as f:
    json.dump(your_object, f)
As simple as that. To read back the list:
import json

with open('save.txt', 'r') as f:
    your_new_object = json.load(f)
This is fairly simple, isn't it? Notice that I used a with statement to open the files to make sure they close properly, which is also simpler to write. Using pickle is fairly similar and even has the same syntax, but objects are saved as bytes rather than text (so you have to use the 'rb' and 'wb' file modes to read and write, respectively).
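A minimal sketch of the pickle equivalent, reusing the same your_object as above (save.pkl is just an example filename):

import pickle

# save: note the binary "wb" mode
with open('save.pkl', 'wb') as f:
    pickle.dump(your_object, f)

# load: binary "rb" mode
with open('save.pkl', 'rb') as f:
    your_new_object = pickle.load(f)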
To do the same thing with a numpy array, we can also use numpy.save:
np.save('save', your_numpy_array)
And we read it back (with a npy extension) with numpy.load:
your_array = np.load('save.npy')
In terms of readability, the saved file is semi-readable (less readable than JSON, more readable than pickle).
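Putting this together for the question's array, a minimal sketch (the 'save' filename is just an example); because np.load returns a real 2-D array, the indexing used in the plot calls works:

import numpy as np
from matplotlib.pyplot import plot, legend, show

a = np.array([[0.05562032, 0.05386903, 0.05216994, 0.03045489, 0.03029977, 0.03014554],
              [0.        , 0.00175129, 0.00345037, 0.15353227, 0.1536874 , 0.15384163]])

np.save('save', a)           # writes save.npy
c = np.load('save.npy')      # c is a 2-D array again

plot(c[0], label='results1')
plot(c[1], label='results2')
legend()
show()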
So I have the same piece of code inside 10 separate .ipynb files with different names, and let's say the code is as follows:
x = 1+1
Pretty simple stuff, but I want to change the variable x to y. Is there any way, using Python, to loop through each .ipynb file and do some sort of find and replace anywhere it sees x, changing it to y? Or will I have to open each file in Jupyter Notebook and make the change manually?
I have never tried this before, but .ipynb files are simply JSON. They pretty much function like nested dictionaries. Each cell is contained within the key 'cells', and then 'cell_type' tells you if the cell is code. You then access the contents of a code cell (the code part) with the 'source' key.
In a notebook I am writing I can look for a particular piece of code like this:
import json

with open('UW_Demographics.ipynb') as f:
    ff = json.load(f)

for cell in ff['cells']:
    if cell['cell_type'] == 'code':
        for elem in cell['source']:
            if "pd.read_csv('UWdemographics.csv')" in elem:
                print("OK")
You can iterate over your ipynb files, identify the code you want to change using the above, change it and save using json.dump in the normal way.
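A minimal sketch of that loop, assuming the notebooks live in the current directory and that a plain textual replacement of the question's line is really what's wanted:

import glob
import json

for path in glob.glob('*.ipynb'):
    with open(path) as f:
        nb = json.load(f)

    for cell in nb['cells']:
        if cell['cell_type'] == 'code':
            # 'source' is a list of strings, one per line of the cell
            cell['source'] = [line.replace('x = 1+1', 'y = 1+1') for line in cell['source']]

    with open(path, 'w') as f:
        json.dump(nb, f, indent=1)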
I'm currently trying to make an automation script for writing new files from a master where there are two strings I want to replace (x1 and x2) with values from a 21 x 2 array of numbers (namely, [[0,1000],[50,950],[100,900],...,[1000,0]]). Additionally, with each double replacement, I want to save that change as a unique file.
Here's my script as it stands:
import numpy

lines = []
x1x2 = numpy.array([[0,1000],[50,950],[100,900],...,[1000,0]])

for i,j in x1x2:
    with open("filenamexx.inp") as infile:
        for line in infile:
            linex1 = line.replace('x1',str(i))
            linex2 = line.replace('x2',str(j))
            lines.append(linex1)
            lines.append(linex2)
    with open("filename"+str(i)+str(j)+".inp", 'w') as outfile:
        for line in lines:
            outfile.write(line)
With my current script there are a few problems. First, the string replacements are being done separately, i.e. I end up with a new file that contains the contents of the master file twice where one line has the first change and then the next will reflect the second separately. Second, with each subsequent iteration, the new files have the contents of the previous file prepended (i.e. filename100900.inp will contain its unique contents as well as the contents of both filename01000.inp and filename50950.inp before it). Anyone think they can take a crack at solving my problem?
Note: I've looked at using regex module solutions (something like this: https://www.safaribooksonline.com/library/view/python-cookbook-2nd/0596007973/ch01s19.html) in order to do multiple replacements in a single pass, but I'm not sure if the way I'm indexing is translatable to a dictionary object.
I'm not sure I understood the second issue, but you can use replace more than once on the same string, so:
s = "x1 stuff x2"
s = s.replace('x1',str(1)).replace('x2',str(2))
print(s)
will output:
1 stuff 2
No need to do this twice for the two different variables. As for the second issue, it seems you're not resetting the lines variable before starting to write a new file. So once you finish writing a file, just add:
lines = []
That should be enough to solve both issues.
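Putting both fixes together, a sketch of how the loop might look, keeping the question's filenames and assuming the array is filled in where the original used "..." (only a few pairs are shown here):

import numpy

x1x2 = numpy.array([[0, 1000], [50, 950], [100, 900], [1000, 0]])  # extend with the remaining pairs

for i, j in x1x2:
    lines = []  # reset for every output file so the previous file's contents aren't carried over
    with open("filenamexx.inp") as infile:
        for line in infile:
            # chain the replacements so each output line carries both substitutions
            lines.append(line.replace('x1', str(i)).replace('x2', str(j)))
    with open("filename" + str(i) + str(j) + ".inp", 'w') as outfile:
        outfile.writelines(lines)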