I am unable to write all values of the output to a file. Kindly help.
import numpy as np

theta = 10
sigma = np.linspace(0, 10, 300)
Re = np.linspace(5, 100, 300)
file = open("New values sigma5.txt", "w")
for i in np.arange(0, 300):
    mu = np.sqrt(Re[i] * sigma)
    A = (mu - 1) * np.exp(mu) + (mu + 1) * np.exp(-mu)
    B = 2 * mu * (theta - 1)
    C = A / B
    D1 = np.exp(mu) / 2 * (mu + sigma)
    D2 = np.exp(-mu) / 2 * (mu - sigma)
    D3 = mu**2
    D4 = np.exp(-sigma)
    D5 = sigma
    D6 = mu**2 - sigma**2
    D7 = D3 * D4
    D8 = D5 * D6
    H = D7 / D8
    D9 = 1 / sigma
    D = D1 - D2 + H - D9
    K1 = C - D
    K2 = np.delete(K1, 0)
    K3 = np.nonzero(K2 > 0)
    K33 = np.array(K3)
    K4 = np.shape(K3)
    K5 = len(K33.T)
    K6 = K5
    K7 = sigma[K6]
    K77 = np.array(K7)
print(K77)
file.write(K77)
file.close()
The output is given by K77. With the present form of the code, I only get to see the last value of K77; I don't see the other ones.
You may want to append the data. Try opening the file in append mode, using "a" instead of "w" in

file = open("New values sigma5.txt", "w")

Currently you are overwriting the file content; in append mode, the new data is added to the end of the file.
The other problem I see is that you want to save data to the file during every iteration, so file.write(K77) should be inside the for loop.
Assuming that you want to capture each of the 300 values for K77, you need to change

...
    K77 = np.array(K7)
print(K77)
file.write(K77)
...

to (notice the different indentation)

...
    K77 = np.array(K7)
    print(K77)
    file.write(K77)
...

This will write one value to the file per iteration.
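One further catch that the snippet above glosses over: file.write() expects a string, and K77 is a NumPy array, so the call will raise a TypeError. A minimal sketch of the end of the loop body with the conversion added, assuming you want one value per line:

    K77 = np.array(K7)
    print(K77)
    # file.write() needs a string, so convert the array value first
    file.write(str(K77) + "\n")
file.close()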
I am fairly new to programming, so bear with me!
We have a task at school in which we must clean up three text files ("Balance1", "Saving", and "Withdrawal") and append them together into a new file. These files are just names and sums of money listed one per line, but some of it is jumbled. This is my code for the first file, Balance1:
with open('Balance1.txt', 'r+') as f:
    f_contents = f.readlines()
    # Then I start cleaning up the lines. Here I edit Anna's savings to an integer.
    f_contents[8] = "Anna, 600000"
    # Here I delete the blank lines and edit in the 50000 to Philip.
    del f_contents[3]
    del f_contents[3]
In the original text file Anna's savings is written like this: "Anna, six hundred thousand", and we have to make it look clean, so it's rather "NAME, SUM" (as an integer). When I print this as a list it looks good, but after I have done this with all three files I try to append them together into a file called "Balance.txt" like this:
filenames = ["Balance1.txt", "Saving.txt", "Withdrawal.txt"]
with open("Balance.txt", "a") as outfile:
    for filename in filenames:
        with open(filename) as infile:
            contents = infile.read()
            outfile.write(contents)
When I check the new text file "Balance", the three files have been appended together, but just as they were in the beginning, without my edits. So it is not "cleaned up". Can anyone help me understand why this happens, and what I have to do so that the edited, clean versions are appended?
In the first part, where you do the "editing" of the Balance1.txt file, this is what happens:
You open the file
You load its data into memory
You edit the in-memory data
And voila.
You never persisted the changes to any file on disk. So when, in the second part, you read the contents of all the files, you read the data that was originally there.
So if you want to concatenate the edited data, you have two choices:
Pre-process the data by creating 3 final, correct files (editing Balance1.txt and persisting it to another file, say Balance1_fixed.txt), and then, in the second part, concatenate ["Balance1_fixed.txt", "Saving.txt", "Withdrawal.txt"]. Total of 4 data file openings; more IO.
Use only the second loop you have, and correct the contents before writing them to the outfile, as sketched below. You can use readlines() like you did first, edit the specific lines, and then use writelines(). Total of 3 data file openings; less IO than the previous option.
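A minimal sketch of the second option, assuming the same edits as in the question (the index values 8 and 3 are taken from there):

filenames = ["Balance1.txt", "Saving.txt", "Withdrawal.txt"]
with open("Balance.txt", "a") as outfile:
    for filename in filenames:
        with open(filename) as infile:
            lines = infile.readlines()
        if filename == "Balance1.txt":
            # same clean-up as in the question: fix Anna's line, drop the blank lines
            lines[8] = "Anna, 600000\n"
            del lines[3]
            del lines[3]
        outfile.writelines(lines)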
Sorry if the question is not well formulated; I will reformulate it if necessary.
I have a file with an array that I filled with data from an online JSON database, and I imported this array into another file to use its data.
# file1
import json
from urllib.request import urlopen

response = urlopen(url1)
a = []
data = json.loads(response.read())
for i in range(len(data)):
    a.append(data[i]['name'])
# file2
from file1 import a
# ... do something with "a" ...
Does importing the array mean I'm re-filling the array each time I use it in file2?
If that is the case, what can I do to keep the data from the array without rebuilding it on every import?
If you saved a to a file and then read it back, you would not need to rebuild a -- you could just open the file. For example, here's one way to open a text file and get the text from it:
# set a variable to be the open file
OpenFile = open(file_path, "r")
# set a variable to be everything read from the file, then you can act on that variable
file_guts = OpenFile.read()
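More concretely for the list in the question, a sketch using the json module to persist a once and load it wherever it is needed (the file name names.json is a made-up example, and a refers to the list built in file1):

import json

# in file1, after building the list once, persist it:
with open("names.json", "w") as f:
    json.dump(a, f)

# in file2, instead of "from file1 import a", load the saved copy:
with open("names.json") as f:
    a = json.load(f)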
From the Modules section of the Python docs (link), you can read:
When you run a Python module with
python fibo.py <arguments>
the code in the module will be executed, just as if you imported it
This means that importing a module behaves the same as running it as a regular Python script, unless you guard the top-level code with an if __name__ == "__main__": check, as mentioned right after this quotation.
Also, if you think about it: the import opens a URL, reads from it, and then does some operations. How can you be sure that the content you read now is the same as the content you read the first time?
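A minimal sketch of that guard, with the download from the question moved into a function (the URL and file name are placeholders, not from the original post):

# file1.py
import json
from urllib.request import urlopen

url1 = "https://example.com/data.json"  # placeholder URL

def fetch_names(url):
    # download the JSON array and collect the 'name' field of each entry
    data = json.loads(urlopen(url).read())
    return [item['name'] for item in data]

if __name__ == "__main__":
    # this block runs only when file1.py is executed directly,
    # not when another module does "import file1"
    print(fetch_names(url1))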
I have a JSON file whose size is about 5 GB. I neither know how the JSON file is structured nor the names of the roots in the file. I'm not able to load the file on my local machine because of its size, so I'll be working on high-performance servers.
I need to load the file in Python and print the first N lines to understand the structure and proceed further with data extraction. Is there a way to load and print the first few lines of the JSON in Python?
If you want to do it in Python, you can do this:

N = 3
with open("data.json") as f:
    for i in range(N):
        print(f.readline(), end='')
You can also use the head command to display the first N lines of the file (for example, head -n 20 data.json), to get a sample of the JSON and see how it is structured.
Then use this sample to work on your data extraction.
I am trying to write to a CSV file, but when I check the file, only the last set of data is displayed. How can this be fixed?
My original code is:
highscores = open("Highscores.csv", "w")
toPrint = (name + "," + score + "\n")
for z in toPrint:
    highscores.write(z)
highscores.close()
and I also tried this:
toPrint = []
output = (name + "," + score + "\n")
toPrint.append(output)
for z in toPrint:
    highscores.write(z)
highscores.close()
You need to open the file in append mode instead of write mode. Your code will remain the same; only the following line changes:
highscores=open("Highscores.csv","a")
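Wrapping that in a with block also makes sure the file is closed. A minimal sketch, assuming name and score are strings (the example values are illustrations, not from the original post):

name, score = "Alice", "42"  # example values, just for illustration

# "a" appends a new row on each run instead of overwriting the file
with open("Highscores.csv", "a") as highscores:
    highscores.write(name + "," + score + "\n")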
I have an array (popt) that is updated in a loop, and I want to save it to a file.
for k in range(1, N):
    # here I make the "popt" array, which depends on k
    np.savetxt('test.out', popt)
Because of the overwrite problem, only the last updated popt is saved. How can I save all the data from this array, not only the last version?
for k in range(1, N):
    # here I make the "popt" array, which depends on k
    np.savetxt('test.out', popt)
You're passing a file name directly to the savetxt() function, so every call reopens and overwrites the file. I'd manually open the file and pass a file handle; this way you can tell Python to append to the file.
with open('test.out', 'a') as f_handle:
    np.savetxt(f_handle, popt)
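Put together with the loop, a minimal sketch (the dummy popt is just for illustration; depending on your Python/NumPy versions you may need to open the file in binary append mode, 'ab'):

import numpy as np

N = 5  # stand-in loop bound, just for illustration

# open once, in append mode, and reuse the handle inside the loop
with open('test.out', 'a') as f_handle:
    for k in range(1, N):
        popt = np.array([k, 2.0 * k])  # stand-in for the popt computed at step k
        # each call appends the rows of popt to the open handle
        np.savetxt(f_handle, popt.reshape(1, -1))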
If you just want to add your matrix to an existing ASCII file, you can open the file in append mode and give the file handle to numpy.savetxt:

f_handle = open('test.out', 'a')
np.savetxt(f_handle, popt)
f_handle.close()
Or, if you only intend to save the array once after the loop has finished, move the np.savetxt() call out of the loop by de-indenting it once. The extra indentation might even have happened by mistake.