Converting CSV to numpy NPY efficiently - python

How to convert a .csv file to .npy efficiently?
I've tried:
import numpy as np
filename = "myfile.csv"
vec = np.loadtxt(filename, delimiter=",")
np.save(f"{filename}.npy", vec)
While the above works for smallish files, the actual .csv file I'm working on has ~12 million lines with 1024 columns, and it takes quite a long time to load everything into RAM before converting it to the .npy format.
Q (Part 1): Is there some way to load/convert a .csv to .npy efficiently for a large CSV file?
The above code snippet is similar to the answer from Convert CSV to numpy, but that won't work for a ~12M x 1024 matrix.
Q (Part 2): If there isn't any way to load/convert a .csv to .npy efficiently, is there some way to iteratively read the .csv file into an .npy file efficiently?
Also, there's an answer here https://stackoverflow.com/a/53558856/610569 that saves the csv file as a numpy array iteratively. But it seems like np.vstack isn't the best solution when reading the file. The accepted answer there suggests hdf5, but the format is not the main objective of this question, and the hdf5 format isn't desired in my use case since I have to read it back into a numpy array afterwards.
Q (Part 3): If parts 1 and 2 are not possible, are there other efficient storage options (e.g. tensorstore) that can store the data and convert it efficiently to a numpy array when the saved format is loaded?
There is another library, tensorstore, that seems to handle arrays efficiently and supports conversion to a numpy array when read: https://google.github.io/tensorstore/python/tutorial.html. But somehow there isn't any information on how to save the tensor/array without knowing the exact dimensions; all of the examples seem to include configurations like 'dimensions': [1000, 20000].
Unlike HDF5, tensorstore doesn't seem to have read-overhead issues when converting to numpy; from the docs:
Conversion to an numpy.ndarray also implicitly performs a synchronous read (which hits the in-memory cache since the same region was just retrieved)

Nice question; informative in itself.
I understand you want to have the whole data set/array in memory, eventually, as a NumPy array. I assume, then, that you have enough (RAM) memory to host such an array -- 12M x 1K.
I don't know specifically how np.loadtxt (genfromtxt) operates behind the scenes, so I will tell you how I would do it (after trying what you did).
Reasoning about memory...
Notice that a simple boolean array will cost ~12 GBytes of memory:
>>> print("{:.1E} bytes".format(
...     np.array([True]).itemsize * 12E6 * 1024
... ))
1.2E+10 bytes
And this is for a boolean data type. Most likely, you have -- what -- a dataset of integers, floats? The size may increase quite significantly:
>>> np.array([1], dtype=bool).itemsize
1
>>> np.array([1], dtype=int).itemsize
8
>>> np.array([1], dtype=float).itemsize
8
It's a lot of memory (which you know, just want to emphasize).
At this point, I would like to point out the possibility of swap being used. You may have enough physical (RAM) memory in your machine, but if there isn't enough free memory, your system will use swap memory (i.e., disk) to keep the system stable and get the work done. The cost you pay is clear: reading/writing from/to the disk is very slow.
My point so far is: check the data type of your dataset, estimate the size of your future array, and guarantee you have that minimum amount of RAM memory available.
I/O text
Considering you do have all the (RAM) memory necessary to host the whole numpy array: I would then loop over the whole (~12M lines) text file, filling the pre-existing array row-by-row.
More precisely, I would have the (big) array already instantiated before start reading the file. Only then, I would read each line, split the columns, and give it to np.asarray and assign those (1024) values to each respective row of the output array.
The looping over the file is slow, yes. The thing here is that you limit (and control) the amount of memory being used. Roughly speaking, the big objects consuming your memory are the "output" (big) array and the "line" (1024) array. Sure, a considerable amount of memory is consumed on each iteration by the temporary objects created while reading the (text!) values, splitting them into list elements and casting them to an array. Still, that remains largely constant across the whole ~12M lines.
So, the steps I would go through are (a rough code sketch follows the list):
0) estimate and guarantee enough RAM memory available
1) instantiate (np.empty or np.zeros) the "output" array
2) loop over "input.txt" file, create a 1D array from each line "i"
3) assign the line values/array to row "i" of "output" array
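A minimal sketch of steps 1-3, assuming the row/column counts are known up front and the values are floats (both are assumptions here; adjust the shape and dtype to your data):

import numpy as np

n_rows, n_cols = 12_000_000, 1024        # assumed known in advance
filename = "myfile.csv"

# step 1: instantiate the "output" array
out = np.empty((n_rows, n_cols), dtype=np.float64)

# steps 2-3: read each line, split the columns, assign to row i
with open(filename) as f:
    for i, line in enumerate(f):
        out[i, :] = np.asarray(line.split(","), dtype=np.float64)

np.save("myfile.npy", out)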
Sure enough, you can even make it parallel: if on one hand text files cannot be randomly (r/w) accessed, on the other hand you can easily split them (see How can I split one text file into multiple *.txt files?) and -- if fun is at the table -- have them read in parallel, if that time is critical.
Hope that helps.

TL;DR
Exporting to a format other than .npy seems inevitable unless your machine is able to handle the size of the data in memory, as described in @Brandt's answer.
Reading the data, then processing it (Kinda answering Q part 2)
To handle a data size larger than what the RAM can handle, one would often resort to libraries that perform "out-of-core" computation, e.g. turicreate.SFrame, vaex or dask. These libraries are able to lazily load the .csv files into dataframes and process them by chunks when evaluated.
from turicreate import SFrame
filename = "myfile.csv"
sf = SFrame.read_csv(filename)
sf.apply(...) # Trying to process the data
or
import vaex
filename = "myfile.csv"
df = vaex.from_csv(filename,
                   convert=True,
                   chunk_size=50_000_000)
df.apply(...)
Converting the read data into numpy array (kinda answering Q part 1)
While out-of-core libraries can read and process the data efficiently, converting into numpy is an "in-memory" operation; the machine needs to have enough RAM to fit all the data.
The turicreate.SFrame.to_numpy documentation writes:
Converts this SFrame to a numpy array
This operation will construct a numpy array in memory. Care must be taken when size of the returned object is big.
And the vaex documentation writes:
In-memory data representations
One can construct a Vaex DataFrame from a variety of in-memory data representations.
And dask actually implements its own array object, which is simpler than a numpy array; see the best practices at https://docs.dask.org/en/stable/array-best-practices.html. But going through the docs, it seems the formats dask arrays are saved in are not .npy but various other formats.
Writing the file into non-.npy versions (answering Q Part 3)
Given that numpy arrays are inevitably in-memory, trying to save the data into one single .npy isn't the most viable option.
Different libraries seem to have different solutions for storage. E.g.
vaex saves the data into hdf5 by default if the convert=True argument is set when data is read through vaex.from_csv()
sframe saves the data into their own binary format
dask exposes export functions such as to_hdf() and to_parquet()

In its latest version (4.14) vaex supports "streaming", i.e. lazy loading of CSV files. It uses pyarrow under the hood so it is super fast. Try something like:
df = vaex.open('my_file.csv')
# or
df = vaex.from_csv_arrow('my_file.csv', lazy=True)
Then you can export to a bunch of formats as needed, or keep working with it like that (it is surprisingly fast). Of course, it is better to convert to some kind of binary format.
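For example, a rough export sketch (export_hdf5 and export are the usual vaex exporters; treat the exact call names as an assumption for your vaex version):

df.export_hdf5('my_file.hdf5')        # or, e.g., df.export('my_file.parquet')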

import numpy as np
import pandas as pd
# Define the input and output file names
csv_file = 'data.csv'
npy_file = 'data.npy'
# Create dummy data
data = np.random.rand(10000, 100)
df = pd.DataFrame(data)
df.to_csv(csv_file, index=False)
# Define the chunk size
chunk_size = 1000
# Read the header row and get the number of columns
header = pd.read_csv(csv_file, nrows=0)
num_cols = len(header.columns)
# Initialize an empty array to store the data
data = np.empty((0, num_cols))
# Loop over the chunks of the csv file
for chunk in pd.read_csv(csv_file, chunksize=chunk_size):
    # Convert the chunk to a numpy array
    chunk_array = chunk.to_numpy()
    # Append the chunk to the data array
    data = np.append(data, chunk_array, axis=0)
np.save(npy_file, data)
# Load the npy file and check the shape
npy_data = np.load(npy_file)
print('Shape of data before conversion:', data.shape)
print('Shape of data after conversion:', npy_data.shape)

I'm not aware of any existing function or utility that directly and efficiently converts csv files into npy files. With "efficient" I guess you primarily mean with low memory requirements.
Writing an npy file iteratively is indeed possible, with some extra effort. There's already a question on SO that addresses this, see:
save numpy array in append mode
For example using the NpyAppendArray class from Michael's answer you can do:
import numpy as np
from npy_append_array import NpyAppendArray

with open('data.csv') as csv, NpyAppendArray('data.npy') as npy:
    for line in csv:
        row = np.fromstring(line, sep=',')
        npy.append(row[np.newaxis, :])
The NpyAppendArray class updates the npy file header on every call to append, which is a bit much for your 12M rows. Maybe you could update the class to (optionally) only write the header on close. Or you could easily batch the writes:
batch_lines = 128
with open('data.csv') as csv, NpyAppendArray('data.npy') as npy:
    done = False
    while not done:
        batch = []
        for count, line in enumerate(csv):
            row = np.fromstring(line, sep=',')
            batch.append(row)
            if count + 1 >= batch_lines:
                break
        else:
            done = True
        if batch:  # guard against an empty final batch
            npy.append(np.array(batch))
(code is not tested)

Related

h5py file subset taking more space than parent file?

I have an existing h5py file that I downloaded which is ~18G in size. It has a number of nested datasets within it:
h5f = h5py.File('input.h5', 'r')
data = h5f['data']
latlong_data = data['lat_long'].value
I want to be able to do some basic min/max scaling of the numerical data within latlong, so I want to put it in its own h5py file for easier use and lower memory usage.
However, when I try to write it out to its own file:
out = h5py.File('latlong_only.h5', 'w')
out.create_dataset('latlong', data=latlong)
out.close()
The output file is incredibly large. It's still not done writing to disk and is already ~85GB in size. Why is the data being written to the new file not compressed?
Could be h5f['data/lat_long'] is using compression filters (and you aren't). To check the original dataset's compression settings, use this line:
print (h5f['data/latlong'].compression, h5f['data/latlong'].compression_opts)
After writing my answer, it occurred to me that you don't need to copy the data to another file to reduce the memory footprint. Your code reads the dataset into an array, which is not necessary in most use cases. An h5py dataset object behaves similarly to a NumPy array. Instead, use this line: ds = h5f1['data/latlong'] to create a dataset object (instead of an array) and use it "like" it's a NumPy array. FYI, .value is a deprecated method to return the dataset as an array; use this syntax instead: arr = h5f1['data/latlong'][()]. Loading the dataset into an array also requires more memory than using an h5py object (which could be an issue with large datasets).
There are other ways to access the data. My suggestion to use dataset objects is one way. Your method (extracting the data to a new file) is another way. I am not fond of that approach because you now have 2 copies of the data; a bookkeeping nightmare. Another alternative is to create external links from the new file to the existing 18GB file. That way you have a small file that links to the big file (and no duplicate data). I describe that method in this post: How can I combine multiple .h5 files? (Method 1: Create External Links).
If you still want to copy the data, here is what I would do. Your code reads the dataset into an array, then writes the array to the new file (uncompressed). Instead, copy the dataset using h5py's group .copy() method; it will retain compression settings and attributes.
See below:
with h5py.File('input.h5', 'r') as h5f1, \
     h5py.File('latlong_only.h5', 'w') as h5f2:
    h5f1.copy(h5f1['data/latlong'], h5f2, 'latlong')

Loading speed vs memory: how to efficiently load large arrays from h5 file

I have been facing the following issue: I have to loop over num_objects = 897 objects, and for every single one of them I have to use num_files = 2120 h5 files. The files are very large, each being 1.48 GB, and the content of interest to me is 3 arrays of floats of size 256 x 256 x 256 contained in every file (v1, v2 and v3). That is to say, the loop looks like:
for i in range(num_objects):
    ...
    for j in range(num_files):
        # some operation with the three 256 x 256 x 256 arrays in each file
and my current way of loading them is by doing the following in the innermost loop:
f = h5py.File('output_'+str(q)+'.h5', 'r')
key1 = np.array(f['key1'])
v1 = key1[:,:,:,0]
v2 = key1[:,:,:,1]
v3 = key1[:,:,:,2]
The above option of loading the files every time for every object is obviously very slow. On the other hand, loading all files at once and importing them in a dictionary results in excessive use of memory and my job gets killed. Some diagnostics:
The above way takes 0.48 seconds per file, per object, resulting in a total of 10.5 days (!) spent only on this operation.
I tried exporting key1 to npz files, but it was actually slower by 0.7 seconds per file.
I exported v1, v2 and v3 individually for every file to npz files (i.e. 3 npz files for each h5 file), but that saved me only 1.5 days in total.
Does anyone have some other idea/suggestion I could try to be fast and at the same time not limited by excessive memory usage?
If I understand, you have 2120 .h5 files. Do you only read the 3 arrays in dataset f['key1'] for each file (or are there other datasets)? If you only/always read f['key1'], that's a bottleneck you can't program around. Using an SSD will help (because I/O is faster than with an HDD). Otherwise, you will have to reorganize your data. The amount of RAM on your system will determine the number of arrays you can read simultaneously. How much RAM do you have?
You might gain a little speed with a small code change. v1=key1[:,:,:,0] returns v1 as an array (same for v2 and v3). There's no need to read dataset f['key1'] into an array; doing that doubles your memory footprint. (BTW, is there a reason to convert your arrays to a dictionary?)
The process below only creates 3 arrays by slicing v1, v2, v3 from the h5py f['key1'] object. It will reduce your memory footprint for each loop by 50%.
f = h5py.File('output_'+str(q)+'.h5', 'r')
key1 = f['key1']
## key1 is returned as an h5py dataset OBJECT, not an array
v1 = key1[:,:,:,0]
v2 = key1[:,:,:,1]
v3 = key1[:,:,:,2]
On the HDF5 side, since you always slice out the last axis, your chunk parameters might improve I/O. However, if you want to change the chunk shape, you will have to recreate the .h5 files. So, that will probably not save time (at least in the short-term).
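If you do decide to rewrite the files, a rough sketch of rechunking one dataset to favour the [:,:,:,k] access pattern might look like this (the file/dataset names follow the question; the chunk shape and compression are assumptions you would want to tune):

import h5py

with h5py.File('output_0.h5', 'r') as src, \
     h5py.File('output_0_rechunked.h5', 'w') as dst:
    data = src['key1'][()]                      # read this one dataset into memory
    # chunk along the last axis so slicing [:, :, :, k] touches fewer chunks
    dst.create_dataset('key1', data=data,
                       chunks=(64, 64, 64, 1), compression='gzip')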
Agreed with @kcw78; in my experience the bottleneck is usually that you load too much data compared to what you need. So don't convert the dataset to an array unless you need to, and slice only the part you need (do you need the whole [:,:,:,0] or just a subset of it?).
Also, if you can, change the order of the loops, so that you open each file only once.
for j in range(num_files):
    ...
    for i in range(num_objects):
        # some operation with the three 256 x 256 x 256 arrays in each file
Another approach would be to use an external tool (like h5copy) to extract the data you need into a much smaller dataset, and then read it in Python, to avoid Python's overhead (which might not be that much, to be honest).
Finally, you could use multiprocessing to take advantage of your CPU cores, as long as your tasks are relatively independent from one another. You have an example here.
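A minimal sketch of that last idea, assuming one worker per file; the file naming follows the question, and the mean() reductions are only placeholders for the real per-file operation:

import h5py
import numpy as np
from multiprocessing import Pool

num_files = 2120        # from the question

def process_file(q):
    # hypothetical per-file work: open one file, slice the three arrays, reduce them
    with h5py.File('output_' + str(q) + '.h5', 'r') as f:
        key1 = f['key1']                 # dataset object, no full read
        v1 = key1[:, :, :, 0]
        v2 = key1[:, :, :, 1]
        v3 = key1[:, :, :, 2]
        return v1.mean(), v2.mean(), v3.mean()   # placeholder operation

if __name__ == '__main__':
    with Pool(processes=8) as pool:
        results = pool.map(process_file, range(num_files))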
h5py provides a data representation class that supports most NumPy data types and traditional NumPy operations like slicing, as well as a variety of descriptive attributes such as shape and size.
There is a cool option that you can use for representing datasets in chunks, using the create_dataset method with the chunks keyword enabled:
dset = f.create_dataset("chunked", (1000, 1000), chunks=(100, 100))
The benefit of this is that you can easily resize your dataset and, as in your case, you would only have to read a chunk and not the whole 1.48 GB of data.
But beware of chunking implications: different data sizes work best with different chunk sizes. There is an autochunk option that selects the chunk size automatically rather than through trial and error:
dset = f.create_dataset("autochunk", (1000, 1000), chunks=True)
Cheers

When does zarr compress a chunk and push it to the underlying storage system?

I'm reading data from a large text file (a VCF) into a zarr array. The overall flow of the code is
with zarr.LMDBStore(...) as store:
    array = zarr.create(..., chunks=(1000,1000), store=store, ...)
    for line_num, line in enumerate(text_file):
        array[line_num, :] = process_data(line)
I'm wondering - when does zarr compress the modified chunks of the array and push them to the underlying store (in this case LMDB)? Does it do that every time a chunk is updated (i.e. each line)? Or does it wait till a chunk is filled/evicted from memory before doing that? Assuming that I need to process each line separately in a for loop (that there aren't efficient array operations to use here due to the nature of the data and processing), is there any optimization I should do here with regards to how I feed the data into Zarr?
I just don't want Zarr running compression on each modified chunk every line when each chunk will be modified 1000 times before being complete and ready to save to disk.
Thanks!
Every time you execute this line:
array[line_num, :] = process_data(line)
...zarr will (1) figure out which chunks overlap the array region you want to write to, (2) retrieve those chunks from the store, (3) decompress the chunks, (4) modify the data, (5) compress the modified chunks, (6) write the modified compressed chunks to the store.
This will happen regardless of what type of underlying storage you are using.
If you have created an array with chunks that are more than one row tall, then this will likely be inefficient, resulting in each chunk being read, decompressed, updated, compressed and written many times.
A better strategy would be to parse your input file in blocks of N lines, where N is equal to the number of rows in each chunk of the output array, so that each chunk is only compressed and written once.
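A sketch of that block-wise strategy, buffering 1000 lines to match the chunks=(1000, 1000) from the question; the shape, the input file name and process_data are placeholders standing in for the question's own setup:

import numpy as np
import zarr

n_rows, n_cols = 100_000, 1000       # placeholders for the real dimensions
rows_per_chunk = 1000                # matches chunks=(1000, 1000) above

def process_data(line):
    # placeholder for the question's per-line parsing
    return np.zeros(n_cols)

with zarr.LMDBStore('data.lmdb') as store:
    array = zarr.create(shape=(n_rows, n_cols), chunks=(rows_per_chunk, 1000),
                        dtype='f8', store=store)
    buffer, start = [], 0
    with open('input.vcf') as text_file:
        for line in text_file:
            buffer.append(process_data(line))
            if len(buffer) == rows_per_chunk:
                # one whole chunk-row of data: compressed and written exactly once
                array[start:start + rows_per_chunk, :] = np.asarray(buffer)
                start += rows_per_chunk
                buffer = []
    if buffer:                       # write the final partial block
        array[start:start + len(buffer), :] = np.asarray(buffer)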
If by VCF you mean Variant Call Format files, you might want to look at the vcf_to_zarr function implementation in scikit-allel.
I believe the LMDB store (as far as I can tell) will write/compress every time you assign.
You could aggregate your rows in an in-memory Zarr and then assign for each block.
There could be a "batch" option to the datasets but it has not been implemeted yet as far as I can tell.

How to split large data and rejoin later

My code generates a list of numpy arrays of size (1, 1, n, n, m, m), where n may vary from 50-100 and m from 5-10 depending on the case at hand. The length of the list itself may go up to 10,000, and it is written/dumped using pickle at the end of the code. For cases at the higher end of these numbers, or when file sizes go beyond 5-6 GB, I get an Out of Memory error. Below is a made-up example of the situation:
import pickle
import numpy as np

list, list_length = [], 1000
n = 100
m = 3
for i in range(0, list_length):
    list.append(np.random.random((1, 1, n, n, m, m)))
file_path = 'C:/Users/Desktop/Temp/'
with open(file_path, 'wb') as file:
    pickle.dump(list, file)
I am looking for a way that helps me to
split the data so that I can get rid of memory error, and
rejoin the data in the original form when needed later
All I could think of is:
for i in range(0, list_length):
    data = np.random.random((1, 1, n, n, m, m))
    file_path = 'C:/Users/Desktop/Temp/' + str(i)
    with open(file_path, 'wb') as file:
        pickle.dump(data, file)
and then combine using:
combined_list = []
for i in range(0, list_length):
    file_path = 'C:/Users/Desktop/Temp/single' + str(i)
    with open(file_path, 'rb') as file:
        data = pickle.load(file)
        combined_list.append(data)
Using this way, the file size certainly reduces due to multiple files, but that also increases processing time due to multiple file I/O operations.
Is there a more elegant and better way to do this?
Using savez, savez_compressed, or even things like h5py can be useful, as @tel mentioned, but that takes extra effort trying to "reinvent" a caching mechanism. There are two easier ways to process larger-than-memory ndarrays, if applicable:
The easiest way is of course to enable the pagefile (or whatever it's called) on Windows or swap on Linux (not sure about the OS X counterpart). This creates virtual memory large enough that you don't need to worry about memory at all; the OS will save to / load from disk accordingly.
If the first way is not applicable because you don't have admin rights, etc., numpy provides another way: np.memmap. This function maps an ndarray to disk such that you can index it just as if it were in memory. Technically the I/O is done directly against the hard disk, but the OS will cache accordingly.
For the second way, you can create a disk-backed ndarray using:
np.memmap('yourFileName', dtype='float32', mode='w+', offset=0, shape=2**32)
This creates a 16 GB float32 array (containing 4G numbers) in no time. You can then do I/O on it. A lot of numpy functions have an out parameter; set it accordingly so that the output is written into the memory-mapped array directly rather than being built in memory and then copied to disk.
If you want to save a list of ndarrays using the second method, either create a lot of memmaps, or concatenate them into a single array.
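For example, a sketch of concatenating the question's list into one on-disk array (the file name is a placeholder, and the shapes are just the question's example values):

import numpy as np

list_length, n, m = 1000, 100, 3     # values from the question's example

# one big disk-backed array with an extra leading axis for the list index
out = np.memmap('combined.dat', dtype='float64', mode='w+',
                shape=(list_length, 1, 1, n, n, m, m))

for i in range(list_length):
    out[i] = np.random.random((1, 1, n, n, m, m))   # stands in for the real data
out.flush()

# later: reopen read-only with the same shape/dtype to "rejoin" the data
combined = np.memmap('combined.dat', dtype='float64', mode='r',
                     shape=(list_length, 1, 1, n, n, m, m))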
Don't use pickle to store large data; it's not an efficient way to serialize anything. Instead, use numpy's built-in serialization formats/functions via numpy.savez_compressed and numpy.load.
System memory isn't infinite, so at some point you'll still need to split your files (or use a heavier-duty solution such as the one provided by the h5py package). However, if you were able to fit the original list into memory, then savez_compressed and load should do what you need.
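A minimal sketch of that approach for the question's list of arrays (the shapes are just the question's example values, and the file name is a placeholder):

import numpy as np

arrays = [np.random.random((1, 1, 100, 100, 3, 3)) for _ in range(10)]  # stand-in data

# save each list element under its own key (arr_0, arr_1, ...) in one compressed .npz
np.savez_compressed('data.npz', *arrays)

# rejoin later; each array is only decompressed when accessed
with np.load('data.npz') as npz:
    combined_list = [npz[f'arr_{i}'] for i in range(len(npz.files))]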

Large, sparse list of lists giving MemoryError when calling np.array(data)

I have a large matrix of 0s and 1s that is mostly 0s. It is initially stored as a list of 25 thousand other lists, each of which is about 2000 ints long.
I am trying to put these into a numpy array, which is what another piece of my program takes. So I run training_data = np.array(data), but this raises a MemoryError.
Why is this happening? I'm assuming it is too much memory for the program to handle (which is surprising to me..), but if so, is there a better way of doing this?
A (short) integer takes two bytes to store. You want 25,000 lists, each with 2,000 integers; that gives
25000*2000*2/1000000 = 100 MB
This works fine on my computer (4GB RAM):
>>> import numpy as np
>>> x = np.zeros((25000,2000),dtype=int)
Are you able to instantiate the above matrix of zeros?
Are you reading the file into a Python list of lists and then converting that to a numpy array? That's a bad idea; it will at least double the memory requirements. What is the file format of your data?
For sparse matrices scipy.sparse provides various alternative datatypes which will be much more efficient.
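For instance, a rough sketch for the mostly-zero 25000 x 2000 matrix, using lil_matrix for incremental construction and csr_matrix for computation (the filled entries are placeholders):

import numpy as np
from scipy import sparse

rows, cols = 25000, 2000
mat = sparse.lil_matrix((rows, cols), dtype=np.int8)

# fill only the non-zero entries, e.g. while reading your data row by row
mat[0, 5] = 1
mat[3, 17] = 1

training_data = mat.tocsr()      # efficient format for downstream arithmetic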
EDIT: responding to the OP's comment.
I have 25000 instances of some other class, each of which returns a list of length about 2000. I want to put all of these lists returned into the np.array.
Well, you're somehow going over 8GB! To solve this, don't do all this manipulation in memory. Write the data to disk one class at a time, then delete the instances and read the file back in with numpy.
First do
with open(..., "wb") as f:
f = csv.writer(f)
for instance in instances:
f.writerow(instance.data)
This will write all your data into a large-ish CSV file. Then, you can just use np.loadtxt:
numpy.loadtxt(open(..., "rb"), delimiter=",")
