Loading Large Files as Dictionary - python

My first question on stackoverflow :)
I am trying to load pretrained GloVe vectors and create a dictionary with words as keys and the corresponding vectors as values. I used the usual naive method for this:
fp = open(wordEmbdfile)
self.wordVectors = {}
# Create wordVector dictionary
for aline in fp:
    w = aline.rstrip().split()
    self.wordVectors[w[0]] = w[1:]
fp.close()
I see huge memory pressure in Activity Monitor, and eventually, after running for an hour or two, it crashes.
I am going to try splitting in multiple smaller files and create multiple dictionaries.
In the meantime I have the following questions:
To read the word2vec file, is it better to read the gzipped file using gzip.open, or to uncompress it first and then read it with plain open?
The word vector file has text in the first column and floats in the rest; would it be more efficient to use numpy's genfromtxt or loadtxt?
I intend to save this dictionary using cPickle, and I know loading it is going to be hard too. I read the suggestion to use shelve; how does it compare to cPickle in loading time and access time? Maybe it's better to spend some more time loading with cPickle if that improves future accesses (assuming cPickle does not crash with 8G RAM). Does anyone have a suggestion on this?
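For reference, here is a sketch of one memory-saving variant of the loop above (this assumes the file is plain whitespace-separated GloVe text and that the path is the wordEmbdfile from the code; the function name is made up for illustration). Storing each vector as a float32 NumPy array instead of a list of Python strings cuts memory use considerably:

```python
import gzip

import numpy as np


def load_glove(path):
    """Load GloVe-style vectors as {word: float32 array}.

    A float32 array per word uses far less memory than a list of
    Python string objects.
    """
    # Handle both gzipped and plain text files transparently.
    opener = gzip.open if path.endswith(".gz") else open
    vectors = {}
    with opener(path, "rt") as fp:
        for line in fp:
            parts = line.rstrip().split()
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors
```
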
Thanks!

Related

Save periodically gathered data with python

I periodically receive data (every 15 minutes) and hold it in a numpy array of roughly 50 columns; the number of rows varies, usually around 100-200.
Before, I only analyzed this data and tossed it, but now I'd like to start saving it so that I can compute statistics later.
I have considered saving it to a csv file, but it did not seem right to keep appending so many of these big 2D arrays to a csv file.
I've looked at serialization options, particularly pickle and numpy's .tobytes(), but in both cases I run into an issue: I have to track the number of arrays stored. I've seen people write the number as the first thing in the file, but I don't know how I could keep incrementing it while the file is still open (the program that gathers the data runs practically non-stop). Constantly opening the file, reading the number, rewriting it, seeking to the end to write new data, and closing the file again doesn't seem very efficient.
I feel like I'm missing some vital information and have not been able to find it. I'd love it if someone could show me something I can not see and help me solve the problem.
Saving to a csv file might not be a good idea in this case; think about the accessibility and availability of your data. Using a database will be better: you can easily update your data and control the amount of data you store.
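As a sketch of the database route using Python's built-in sqlite3 (the table name, column names, and function name here are invented for illustration), each 15-minute batch can simply be inserted as rows, so nothing needs to be tracked or rewritten; the database keeps the count itself:

```python
import sqlite3

import numpy as np


def append_batch(db_path, batch):
    """Append one 2-D batch (rows = samples) to a SQLite table.

    The table is created on first use; each later call just inserts
    rows, so the row count never has to be tracked by hand.
    """
    conn = sqlite3.connect(db_path)
    ncols = batch.shape[1]
    cols = ", ".join(f"c{i} REAL" for i in range(ncols))
    conn.execute(f"CREATE TABLE IF NOT EXISTS samples ({cols})")
    placeholders = ", ".join("?" * ncols)
    conn.executemany(
        f"INSERT INTO samples VALUES ({placeholders})",
        batch.tolist(),
    )
    conn.commit()
    conn.close()
```
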

Python: Can I write to a file without loading its contents in RAM?

Got a big data-set that I want to shuffle. The entire set won't fit into RAM, so it would be good if I could open several files (e.g. hdf5, numpy) simultaneously, loop through my data chronologically, and randomly assign each data point to one of the piles (then afterwards shuffle each pile).
I'm really inexperienced with working with data in python so I'm not sure if it's possible to write to files without holding the rest of its contents in RAM (been using np.save and savez with little success).
Is this possible in h5py or numpy and, if so, how could I do it?
Memory-mapped files will allow for what you want. They create a numpy array which leaves the data on disk, loading it only as needed. The complete manual page is here. However, the easiest way to use them is to pass mmap_mode='r+' or mmap_mode='w+' in the call to np.load, which leaves the file on disk (see here).
I'd suggest using advanced indexing. If you have data in a one-dimensional array arr, you can index it using a list, so arr[[0, 3, 5]] will give you the 0th, 3rd, and 5th elements of arr. That will make selecting the shuffled versions much easier. Since this will overwrite the data, you'll need to open the files on disk read-only and create copies (using mmap_mode='w+') to put the shuffled data in.
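A minimal sketch of the memmap-plus-advanced-indexing idea described above (the function name, file paths, and chunk size are placeholders; this assumes the data is a single .npy file whose first axis indexes the data points):

```python
import numpy as np


def shuffle_on_disk(src_path, dst_path, chunk=10_000):
    """Shuffle a .npy file without loading it fully into RAM."""
    # mmap_mode="r" leaves the source data on disk, read-only;
    # slices are loaded only as they are touched.
    src = np.load(src_path, mmap_mode="r")
    # Create a same-shaped on-disk destination array for the copy.
    dst = np.lib.format.open_memmap(
        dst_path, mode="w+", dtype=src.dtype, shape=src.shape
    )
    # A random permutation of the row indices defines the shuffle.
    order = np.random.permutation(len(src))
    for start in range(0, len(src), chunk):
        idx = order[start:start + chunk]  # advanced (list) indexing
        dst[start:start + chunk] = src[idx]
    dst.flush()
```
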

proper method to save serialized data incrementally

This must be a very standard problem that also must have a standard solution:
What is the correct way to incrementally save feature vectors extracted from data, rather than accumulating all the vectors from the entire dataset and then saving them all at once?
In more detail:
I have written a script for extracting custom text features (e.g. next_token, prefix-3, is_number) from text documents. After extraction is done I end up with one big list of scipy sparse vectors. Finally I pickle that list so I can store it space-efficiently and load it time-efficiently when I want to train a model. But the problem is that I am limited by my RAM here: I can make that list of vectors only so big before it, or the pickling, exceeds my RAM.
Of course incrementally appending string representations of these vectors would be possible: one could accumulate k vectors, append them to a text file, and clear the list again for the next k vectors. But storing the vectors as strings would be space-inefficient and would require parsing the representations on loading. That does not sound like a good solution.
I could also pickle sets of k vectors and end up with a whole bunch of pickle files of k vectors each. But that sounds messy.
So this must be a standard problem with a more elegant solution. What is the right method to solve this? Is there maybe even some existing functionality in scikit-learn for this kind of thing already, that I overlooked?
I found this: How to load one line at a time from a pickle file?
But it does not work with Python3.
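For what it's worth, repeatedly pickling to, and unpickling from, a single open file does work in Python 3, so batches of k vectors can be appended without tracking a count; pickle.load simply raises EOFError when the file is exhausted. A sketch (the function names and batch contents are illustrative only):

```python
import pickle


def dump_batches(path, batches):
    """Write each batch as its own pickle record in one file."""
    with open(path, "wb") as f:
        for batch in batches:
            pickle.dump(batch, f)


def load_batches(path):
    """Yield the pickled batches back one at a time.

    pickle.load raises EOFError once the file is exhausted, which
    doubles as the end-of-stream signal, so no count is needed.
    """
    with open(path, "rb") as f:
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                return
```
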

Best structure for on-disk retrieval of large data using Python?

I basically have a large (multi-terabyte) dataset of text (it's in JSON but I could change it to dict or dataframe). It has multiple keys, such as "group" and "user".
Right now I'm filtering the data by reading through the entire text for these keys. It would be far more efficient to have a structure where I filter and read only the key.
Doing the above would be trivial if it fit in memory, and I could use standard dict/pandas methods and hash tables. But it doesn't fit in memory.
There must be an off the shelf system for this. Can anyone recommend one?
There are discussions about this, but some of the better ones are old. I'm looking for the simplest off the shelf solution.
I suggest you split your large file into multiple small files using readlines(CHUNK), and then process them one by one.
I worked with large JSON files: at the beginning, processing took 45 seconds per file and my program ran for 2 days, but once I split the files, it finished in only 4 hours.
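A sketch of that chunked-reading idea (the function name and chunk size are arbitrary; readlines with a size hint stops after roughly that many bytes, but always on a line boundary, so lines are never split):

```python
def process_in_chunks(path, chunk_bytes=64 * 1024 * 1024):
    """Read a large line-oriented file a chunk at a time.

    readlines(hint) returns complete lines totalling roughly
    hint bytes, so memory use stays bounded regardless of the
    total file size.
    """
    results = []
    with open(path) as f:
        while True:
            lines = f.readlines(chunk_bytes)
            if not lines:
                break
            # Placeholder per-chunk work: just count the lines.
            results.append(len(lines))
    return results
```
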

How to read a big (3-4GB) file that doesn't have newlines into a numpy array?

I have a 3.3gb file containing one long line. The values in the file are comma separated and either floats or ints. Most of the values are 10. I want to read the data into a numpy array. Currently, I'm using numpy.fromfile:
>>> import numpy
>>> f = open('distance_matrix.tmp')
>>> distance_matrix = numpy.fromfile(f, sep=',')
but that has been running for over an hour now and it's currently using ~1 Gig memory, so I don't think it's halfway yet.
Is there a faster way to read in large data that is on a single line?
This should probably be a comment... but I don't have enough reputation to put comments in.
I've used hdf files, via h5py, of sizes well over 200 gigs with very little processing time, on the order of a minute or two, for file accesses. In addition the hdf libraries support mpi and concurrent access.
This means that, assuming you can format your original one-line file as an appropriately hierarchical hdf file (e.g. make a group for every "large" segment of data), you can use HDF's built-in capabilities to process your data on multiple cores, exploiting mpi to pass whatever data you need between them.
You need to be careful with your code and understand how mpi works in conjunction with hdf, but it'll speed things up no end.
Of course all of this depends on putting the data into an hdf file in a way that allows you to take advantage of mpi... so maybe not the most practical suggestion.
Consider dumping the data using some binary format. See something like http://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html
This way it will be much faster because you don't need to parse the values.
If you can't change the file type (it's not the result of one of your programs) then there's not much you can do about it. Make sure your machine has lots of RAM (at least 8GB) so that it doesn't need to use swap at all. Defragmenting the hard drive might help as well, or using an SSD drive.
An intermediate solution might be a C++ binary to do the parsing and then dump it in a binary format. I don't have any links for examples on this one.
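A sketch of the parse-once, load-fast approach behind the numpy.save suggestion above (the function name and file paths are placeholders): pay the slow text-parsing cost a single time, then use NumPy's binary format for every later load.

```python
import numpy as np


def convert_to_npy(text_path, npy_path):
    """One-time, slow parse of the comma-separated single-line file,
    then save it in NumPy's binary .npy format for fast reloads."""
    data = np.fromfile(text_path, sep=",")
    np.save(npy_path, data)


# Afterwards, loading is a straight binary read with no parsing:
# distance_matrix = np.load("distance_matrix.npy")
```
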
