I'm trying to save data from an array called Sevol. This matrix has 100 rows and 1000 columns, so each row Sevol[i] has 1000 elements (len(Sevol[i]) == 1000) and Sevol[0][0] is the first element of the first row.
I tried to save a row of this array with the command
np.savetxt(path + '/data_Sevol.txt', Sevol[i], delimiter=" ")
It works fine. However, I would like the file itself to keep the array layout. For example, currently the file is being saved like this in Notepad:
And I would like the data to remain organized, as for example in this file:
Is there an argument in the np.savetxt function or something I can do to better organize the text file?
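(Not from the original thread, just a hedged sketch of what might help: np.savetxt writes a 1D array one value per line, so the layout depends on the array's shape and the fmt argument. The file names and precision below are placeholders.)

import numpy as np

Sevol = np.random.rand(100, 1000)  # illustrative stand-in for the real data

# A 1D slice is written one value per line; reshaping it to a 1 x 1000
# row keeps all 1000 values on a single, space-separated line.
np.savetxt('data_Sevol_row.txt', Sevol[0].reshape(1, -1), fmt='%.6f', delimiter=' ')

# Saving the whole 2D array keeps the 100 x 1000 grid layout,
# one matrix row per line of the text file.
np.savetxt('data_Sevol_full.txt', Sevol, fmt='%.6f', delimiter=' ')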
I need my program to take one cell in a csv that is a date and save it as a variable, then read the next cell in the column and save it as the next variable. The rest of the program does a calculation with the dates, but I am not sure how to save each date as a variable. (The part that is commented out is what it would look like if a user did the input themselves, but I would like it read in from the csv.)
You can convert a CSV file to a NumPy array with:
from numpy import genfromtxt
my_data = genfromtxt('my_file.csv', delimiter=',')
And then you can just iterate through the created array to get the elements. Be careful about how the delimiter is defined; here it's a comma, but sometimes there isn't one, so just make sure you know how your CSV data are delimited.
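If those cells hold dates, a numeric parse will turn them into NaN, so it may help to read them as strings first. A minimal sketch, assuming the file name, a single header row, the dates sitting in the first of several columns, and the '%Y-%m-%d' date format, all of which are placeholders:

from datetime import datetime
from numpy import genfromtxt

# Read everything as text so the date strings survive the parse.
rows = genfromtxt('my_file.csv', delimiter=',', dtype=str, skip_header=1)

# Turn each cell of the first column into a datetime object.
dates = [datetime.strptime(cell, '%Y-%m-%d') for cell in rows[:, 0]]

first_date, second_date = dates[0], dates[1]
print((second_date - first_date).days)  # example calculation with the dates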
So basically I have multiple Excel files (with different names) in a folder and I want to copy the same cell (for example B3) from all of the files, create a column in a new Excel file, and put all the values there.
The file above is what I want to import (multiple files like that). I want to copy the names and emails and save them to a new file like the one below.
So you want to read multiple files, get a specific cell and then create a new data frame and save it as a new Excel file:
import glob

import pandas as pd

cells = []
for f in glob.glob("*.xlsx"):
    data = pd.read_excel(f, 'Sheet1')   # read Sheet1 of each workbook in the folder
    cells.append(data.iloc[3, 5])       # grab cell F4 (row index 3, column index 5)
pd.Series(cells).to_excel('file.xlsx')  # write the collected values as one column
In my particular example I took cell F4 (row=3, col=5). You can obviously take any other cell you like, or even more than one cell, saving each to its own list and combining the lists at the end (see the sketch after the summary below). You could also have more complex logic where you check one cell to decide which other cell to look at next.
The key point is that you want to iterate through a bunch of files and for each of them:
read the file
extract whatever data you are interested in
set this data aside somewhere
Once you've gone through all the files combine all the data in any way that you like and then save it to disk in a format of your choice.
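Applied to the example in the question (names and emails), that pattern might look like the sketch below; the cell positions, sheet name, and output file name are assumptions, not taken from the original files:

import glob

import pandas as pd

names, emails = [], []
for f in glob.glob("*.xlsx"):
    data = pd.read_excel(f, 'Sheet1')
    names.append(data.iloc[3, 5])   # assumed cell holding the name
    emails.append(data.iloc[4, 5])  # assumed cell holding the email

# Combine the two lists into one table and write it to a new file,
# one row per source workbook.
pd.DataFrame({'Name': names, 'Email': emails}).to_excel('combined.xlsx', index=False)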
I am generating some arrays for a simulation and I want to save them in a JSON file. I am using the jsonpickle library.
The problem is that the arrays I need to save can be very large in size (hundreds of MB up to some GB). Thus, I need to save each array to the JSON file immediately after its generation.
Basically, I am generating multiple independent large arrays, storing them in another array, and saving them to the JSON file after all of them have been generated:
import numpy as np
import jsonpickle
import jsonpickle.ext.numpy as jsonpickle_numpy

jsonpickle_numpy.register_handlers()  # lets jsonpickle serialize NumPy arrays

N = 1000  # Number of arrays
z_NM = np.zeros((64000, 1), dtype=complex)           # Large array
z_NM_array = np.zeros((N, 64000, 1), dtype=complex)  # Array of z_NM arrays
for i in range(N):
    z_NM[:, 0] = GenerateArray()  # Generate array and store it in z_NM_array
    z_NM_array[i] = z_NM

# Write data to JSON file
data = {"z_NM_array": z_NM_array}
outdata = jsonpickle.encode(data)
with open(filename, "wb+") as f:
    f.write(outdata.encode("utf-8"))
I was wondering if it is instead possible to append the new data to the existing JSON file, by writing each array to the file immediately after its generation, inside the for loop? If so, how? And how can it be read back? Maybe using a library different from jsonpickle?
I know I could save each array in a separate file, but I'm wondering if there's a solution that lets me use a single file. I also have some settings in the dict which I want to save along with the array.
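One workable compromise, not jsonpickle but plain json written one record per line (JSON Lines), keeps everything in a single file while letting you write each array as soon as it is generated. A sketch, where GenerateArray and the settings dict stand in for the real ones:

import json

import numpy as np

N = 1000
settings = {"N": N, "length": 64000}  # example settings saved alongside the arrays

with open("simulation.jsonl", "w") as f:
    f.write(json.dumps({"settings": settings}) + "\n")  # first line: the settings
    for i in range(N):
        z_NM = GenerateArray()  # the user's generator, assumed to return a complex array
        # json cannot store complex numbers directly, so store real/imag parts as lists.
        record = {"index": i,
                  "real": np.real(z_NM).ravel().tolist(),
                  "imag": np.imag(z_NM).ravel().tolist()}
        f.write(json.dumps(record) + "\n")  # one array per line, written immediately

# Reading it back: parse the file line by line instead of loading it all at once.
with open("simulation.jsonl") as f:
    settings = json.loads(next(f))["settings"]
    for line in f:
        rec = json.loads(line)
        z = np.array(rec["real"]) + 1j * np.array(rec["imag"])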
I would like to edit one item of my CSV file and then save the file. Let's say I have a CSV file that reads:
A,B,C,D
1,Today,3,4
5,Tomorrow,7,8
and when I do something in the program, I want it to alter the CSV file so that it reads:
A,B,C,D
1,Yesterday,3,4
5,Tomorrow,7,8
I've tried two approaches:
First, I tried using pandas.read_csv to open the file in memory, alter it using iloc, and then save the CSV file using csv.writer. Predictably, this resulted in the format of the CSV file being drastically altered (a seemingly arbitrary number of spaces between adjacent entries in a row). This meant that subsequent attempts to edit the CSV file would fail.
Example:
A B C D
1 Yesterday 3 4
5 Tomorrow 7 8
The second attempt was similar to the first in that I opened the CSV file and edited it, but I then converted it into a string before using csv.writer. The issue with this method is that it saves all of the entries in one row in the CSV file, and I cannot figure out how to tell the program to start a new row.
Example:
A,B,C,D,1,Yesterday,3,4,5,Tomorrow,7,8
Is there a proper way to edit one entry in a particular row of a CSV file in Python?
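For what it's worth, a minimal sketch of a round trip that stays entirely in pandas (no csv.writer), which keeps the comma-separated, one-record-per-line layout; the file name is illustrative:

import pandas as pd

df = pd.read_csv('data.csv')                   # the A,B,C,D header is inferred automatically
df.loc[df['B'] == 'Today', 'B'] = 'Yesterday'  # edit just the one entry
df.to_csv('data.csv', index=False)             # write back with commas, one row per line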
Simple problem, but maybe a tricky answer:
The problem is how to handle a huge .txt file with pytables.
I have a big .txt file, with MILLIONS of lines, short lines, for example:
line 1 23458739
line 2 47395736
...........
...........
The content of this .txt file must be saved into a pytable. OK, that's easy: nothing else needs to be done with the info in the txt file, just copy it into PyTables, and now we have a pytable with, for example, 10 columns and millions of rows.
The problem comes up when, from the content of the .txt file, 10 columns x millions of lines are directly generated in the pytable BUT, depending on the data on each line of the .txt file, new columns must be created in the pytable. So how can this be handled efficiently?
Solution 1: first copy the whole text file, line by line, into the pytable (millions of rows), then iterate over each row of the pytable (millions again) and, depending on the values, generate the new columns needed.
Solution 2: read the .txt file line by line, do whatever is needed, calculate the new values, and then send all the info to the pytable.
Solution 3: ... any other efficient and faster solution?
I think the basic problem here is one of conceptual model. PyTables' Tables only handle regular (or structured) data. However, the data that you have is irregular or unstructured, in that the structure is determined as you read the data. Said another way, PyTables needs the column description to be known completely by the time that create_table() is called. There is no way around this.
Since in your problem statement any line may add a new column, you have no choice but to do this in two full passes through the data: (1) read through the data and determine the columns, and (2) write the data to the table. In pseudocode:
import tables as tb

cols = {}

# discover columns
d = open('data.txt')
for line in d:
    for col in line.split():
        if col not in cols:
            cols[col] = tb.Float64Col()  # placeholder column type; use whatever fits your data

# write table
d.seek(0)
f = tb.open_file(...)
t = f.create_table(..., description=cols)
for line in d:
    row = line_to_row(line)  # line_to_row() is left to you: turn a text line into a table row
    t.append(row)
d.close()
f.close()
Obviously, if you knew the table structure ahead of time you could skip the first loop and this would be much faster.
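For comparison, a minimal sketch of the single-pass version when the structure is known up front; the column names, types, and the way each line is parsed are assumptions:

import tables as tb

# Assumed fixed structure: one short text label and one integer value per line.
class Line(tb.IsDescription):
    label = tb.StringCol(16)
    value = tb.Int64Col()

f = tb.open_file('data.h5', 'w')
t = f.create_table('/', 'lines', Line)

row = t.row
with open('data.txt') as d:
    for line in d:
        label, value = line.rsplit(None, 1)  # e.g. "line 1" and "23458739"
        row['label'] = label
        row['value'] = int(value)
        row.append()

t.flush()
f.close()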