I have an array (popt) that is updated in a loop, and I want to save it to a file.
for k in range(1, N):
    # here I make the "popt" array, which depends on k
    np.savetxt('test.out', popt)
Because the file is overwritten on every iteration, only the last popt is saved. How can I save all of the data, not only the last array?
You're directly specifying a file to the savetxt() function. I'd manually open the file and pass a file handle; this way you can tell Python to append to the file.
with open('test.out', 'a') as f_handle:
    np.savetxt(f_handle, popt)
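Put into the loop from the question, that might look something like this sketch (N and the computation of popt are placeholders here, and test.out should be removed or truncated before a fresh run, since append mode keeps old contents):
import numpy as np

N = 5                                             # placeholder for the real number of iterations
with open('test.out', 'a') as f_handle:           # open once, in append mode
    for k in range(1, N):
        popt = np.array([k, k ** 2], dtype=float) # placeholder for the real popt
        np.savetxt(f_handle, popt.reshape(1, -1)) # appends one row per iteration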
If you just want to add your matrix to an existing ASCII file, you can open the file in append mode and give the file handle to numpy.savetxt:
f_handle = open('test.out', 'a')
np.savetxt(f_handle, popt)
f_handle.close()
Or rather, if you intend to save the file only once after the loop has finished, move the np.savetxt() call out of the loop, i.e. de-indent it once. The extra indentation might even have happened by mistake.
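If the goal really is a single write at the end while still keeping every iteration, another common pattern, not spelled out in the answers above, is to collect the arrays and stack them before one savetxt call; a minimal sketch, assuming every popt is a 1-D array of the same length:
import numpy as np

N = 5                                   # placeholder loop bound
all_popt = []
for k in range(1, N):
    popt = np.array([k, 2.0 * k])       # placeholder for the fitted parameters
    all_popt.append(popt)

np.savetxt('test.out', np.vstack(all_popt))   # one row per iteration, written once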
I saved a numpy array in .npy format on disk. I load it using np.load(), but I don't know how to save the changes I make back to disk.
There are two options you could explore. The first: if you know the position of the change within the file, you can overwrite it in place:
file = open("path/to/file", "rb+")
file.seek(position)
file.seek(file.tell()). # There seems to be a bug in python which requires you to do this
file.write("new information") # Overwriting contents
Also see here for why the file.seek(file.tell()) call is needed.
The second is to save the modified array itself:
import numpy as np

myarray = np.load("/path/to/my.npy")
myarray[10] = 50.0  # any new value
np.save("/path/to/my.npy", myarray)
I'm trying to pluck values out of many HDF5 files and store them in a list.
import h5py
h = [h5py.File('filenum_%s.h5' % (n),'r')['key'][10][10] for n in range(100)]
This list comprehension collects the values at grid point (10, 10) in the 'key' dataset from the HDF5 files filenum_0.h5 through filenum_99.h5.
It works, except that it stops around the 50th element with the error:
IOError: unable to open file (File accessibilty: Unable to open file)
even though I know the file exists and it can be opened if I haven't opened many other files. I think I get the error because too many files have been opened.
Is there a way to close the files within this list comprehension?
Or, is there a more effective way to build the list I want?
Doing it the way you're doing it, you don't control when each file is closed.
You can control that, but not with a one-liner. You need an auxiliary function which returns the data and closes the file (using a context manager is even better, as h5py files support that, I just checked):
def get_data(n):
    with h5py.File('filenum_%s.h5' % (n), 'r') as f:
        return f['key'][10][10]
then
h = [get_data(n) for n in range(100)]
You could make the get_data function more generic by not hardcoding the 10 & 'key' arguments of course.
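A possible sketch of such a generic version (the parameter names and defaults here are only illustrative):
import h5py

def get_data(filename, key='key', index=(10, 10)):
    """Return the value at `index` of dataset `key` in `filename`."""
    with h5py.File(filename, 'r') as f:
        return f[key][index]

h = [get_data('filenum_%s.h5' % n) for n in range(100)]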
For the sake of argument, you could do everything in one single terrible list comprehension like this:
import h5py
h = [(f['key'][10][10], f.close())[0]
     for f in (h5py.File('filenum_%s.h5' % (n), 'r') for n in range(100))]
But I would strongly advise against something like that, and instead prefer an auxiliary function or some other approach, such as a plain loop.
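For completeness, the plain-loop alternative, equivalent to the get_data approach above, would be something like:
import h5py

h = []
for n in range(100):
    with h5py.File('filenum_%s.h5' % n, 'r') as f:   # closed as soon as the block exits
        h.append(f['key'][10][10])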
I'm currently trying to make an automation script for writing new files from a master where there are two strings I want to replace (x1 and x2) with values from a 21 x 2 array of numbers (namely, [[0,1000],[50,950],[100,900],...,[1000,0]]). Additionally, with each double replacement, I want to save that change as a unique file.
Here's my script as it stands:
import numpy

lines = []
x1x2 = numpy.array([[0,1000],[50,950],[100,900],...,[1000,0]])
for i, j in x1x2:
    with open("filenamexx.inp") as infile:
        for line in infile:
            linex1 = line.replace('x1', str(i))
            linex2 = line.replace('x2', str(j))
            lines.append(linex1)
            lines.append(linex2)
    with open("filename" + str(i) + str(j) + ".inp", 'w') as outfile:
        for line in lines:
            outfile.write(line)
With my current script there are a few problems. First, the string replacements are being done separately, i.e. I end up with a new file that contains the contents of the master file twice: one copy of each line has the first replacement and the next copy reflects the second. Second, with each subsequent iteration the new files have the contents of the previous files prepended (i.e. filename100900.inp contains its unique contents as well as the contents of both filename01000.inp and filename50950.inp before it). Anyone think they can take a crack at solving my problem?
Note: I've looked at regex-module solutions (something like this: https://www.safaribooksonline.com/library/view/python-cookbook-2nd/0596007973/ch01s19.html) in order to do multiple replacements in a single pass, but I'm not sure if the way I'm indexing is translatable to a dictionary object.
I'm not sure I fully understood the second issue, but for the first one: you can call replace more than once on the same string, so:
s = "x1 stuff x2"
s = s.replace('x1',str(1)).replace('x2',str(2))
print(s)
which will output:
1 stuff 2
There's no need to do this twice for the two different variables. As for the second issue, it seems you're not resetting the lines variable before starting to write a new file. So once you finish writing a file, just add:
lines = []
It should be enough to solve these issues.
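Putting both fixes together, the loop might look roughly like this sketch (it assumes the master file filenamexx.inp from the question, and generates the 21 x 2 table instead of typing it out):
x1x2 = [(i, 1000 - i) for i in range(0, 1001, 50)]       # the 21 x 2 table of values

for i, j in x1x2:
    lines = []                                           # reset for every output file
    with open("filenamexx.inp") as infile:
        for line in infile:
            lines.append(line.replace('x1', str(i)).replace('x2', str(j)))
    with open("filename" + str(i) + str(j) + ".inp", 'w') as outfile:
        outfile.writelines(lines)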
I have a function f2(a, b)
It is only ever called by a minimization algorithm, which iterates the function for different values of a and b each time. I would like to store these iterations in Excel for plotting.
Is it possible to extract these values easily (I only need to paste them all into Excel or a text file)? A conventional return or print won't work within f2. Is there some other way to extract the values of a and b to a list in the main body?
The algorithm may iterate dozens or hundreds of times.
So far I have tried:
Printing to the console (I can't paste this data into Excel easily).
Writing to a CSV file within f2, but the CSV file gets overwritten within the function each time.
Append the values to a global list.
values = []

def f2(a, b):
    values.append((a, b))
    # TODO: actual function logic goes here
Then you can look at values in the main scope once you're done iterating.
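If the end goal is Excel, the collected list can then be dumped to a CSV file once the optimisation has finished; a minimal sketch using the standard csv module (the filename is arbitrary):
import csv

with open("iterations.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["a", "b"])      # header row
    writer.writerows(values)         # one (a, b) pair per call to f2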
Writing to a CSV file within f2, but the CSV file gets overwritten within the function each time.
Not if you open the file in append mode:
with open("file.csv", "a") as myfile:
I am unable to write all the values of the output to a file. Kindly help.
import numpy as np

theta=10
sigma=np.linspace(0,10,300)
Re=np.linspace(5,100,300)
file = open("New values sigma5.txt", "w")
for i in np.arange(0,300):
    mu=np.sqrt(Re[i]*sigma)
    A=(mu-1)*np.exp(mu)+(mu+1)*np.exp(-mu)
    B=2*mu*(theta-1)
    C=(A/B)
    D1=np.exp(mu)/2*(mu+sigma)
    D2=np.exp(-mu)/2*(mu-sigma)
    D3=mu**2
    D4=np.exp(-sigma)
    D5=sigma
    D6=mu**2-sigma**2
    D7=D3*D4
    D8=D5*D6
    H=D7/D8
    D9=(1/sigma)
    D=D1-D2+H-D9
    K1=C-D
    K2=np.delete(K1,0)
    K3=np.nonzero(K2>0)
    K33=np.array(K3)
    K4=np.shape(K3)
    K5=len(K33.T)
    K6=K5
    K7=sigma[K6]
    K77=np.array(K7)
    print K77
file.write(K77)
print(K77)
file.close()
The output is given by K77. With the present form of the code, I only get to see the last value of K77 in the file; I don't see the other ones.
You may want to append the data. Try opening the file in append mode, using "a" instead of "w" in
file = open("New values sigma5.txt", "w")
Currently you are overwriting the file content. Using append mode, the new data is added to the end of the file.
The other problem I see is that you presumably want to save data to the file during every iteration, so file.write(K77) should be inside the for loop, for example:
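A minimal sketch of those two changes combined (the computation of K77 is replaced by a placeholder, and note that write() needs a string, so the value is converted first):
import numpy as np

file = open("New values sigma5.txt", "a")   # "a" appends instead of overwriting
for i in np.arange(0, 300):
    K77 = np.array(0.1 * i)                 # placeholder for the real computation of K77
    file.write(str(K77) + "\n")             # write() needs a string; one value per line
file.close()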
Assuming that you want to capture each of the 300 values for K77, you need to change
...
    K77=np.array(K7)
    print K77
file.write(K77)
....
to (notice the different indentation)
...
    K77=np.array(K7)
    print K77
    file.write(K77)
....
This will write K77 to the file on each iteration of the loop.