In-memory database with fallback to disk on OOM - Python

I have lots of data to operate on (write, sort, read). This data can potentially be larger than main memory and doesn't need to be stored permanently.
Is there any kind of library/database that can store this data for me in memory, with an automatic fallback to disk if the system runs into an OOM situation? The API and storage type are unimportant as long as it can store basic Python types (str, int, list, date and ideally dict).

Python's built-in sqlite3 caches file-system writes.
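As a minimal sketch of that suggestion: sqlite3 accepts the special path ":memory:" for a purely in-memory database, and switching to a file-backed database (which the OS then pages as needed) only requires changing that path. The table name and columns here are illustrative, not from the question.

```python
import sqlite3

# ":memory:" keeps the whole database in RAM; swap in a file path
# (e.g. "data.db") to store it on disk instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value INTEGER)")
conn.executemany("INSERT INTO kv VALUES (?, ?)", [("a", 1), ("b", 2)])

rows = conn.execute("SELECT key, value FROM kv ORDER BY key").fetchall()
print(rows)  # [('a', 1), ('b', 2)]
conn.close()
```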

I will go for the in-memory solution and let the OS swap. I can still replace the storage component if this really becomes a problem. Thanks agf.

Related

Loading large files into memory with Python

When doing work with large files and datasets (usually 1 or 2 GB+), the process is killed due to running out of RAM. What tools and methods are available to save memory while still allowing the necessary operations, such as iterating over the entire file and accessing and assigning other large variables? Because I need read access to the entire file, I am unsure of solutions to this problem. Thanks for any help.
For reference, the project I am currently encountering this problem in is right here (dev branch).
Generally, you can use memory-mapped files to map a section of virtual memory onto a storage device. This enables you to operate on a memory-mapped space that would not fit in RAM. Note that this is significantly slower than RAM, though (there is no free lunch). You can do this quite transparently with numpy.memmap; alternatively, there is the standard-library mmap module. For the sake of performance, operate on chunks of the memory-mapped section and read/write each chunk once.
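A small sketch with the standard-library mmap module (file name and size are made up for illustration; a real dataset would be far larger than the 1 MiB used here):

```python
import mmap
import os
import tempfile

# Create a file-backed buffer; the OS pages data in and out on demand,
# so the working set can exceed available RAM.
path = os.path.join(tempfile.mkdtemp(), "scratch.bin")
size = 1024 * 1024  # 1 MiB for the sketch
with open(path, "wb") as f:
    f.truncate(size)  # allocate the file without writing data

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), size)
    mm[0:5] = b"hello"      # writes go through the page cache to disk
    chunk = bytes(mm[0:5])  # reads fault the pages back in as needed
    mm.close()

print(chunk)  # b'hello'
```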

Why does Python's multiprocessing module only support two data types?

According to Python's multiprocessing documentation:
Data can be stored in a shared memory map using Value or Array.
Is shared memory treated differently than memory that is typically allocated to a process? Why does Python only support two data structures?
I'm guessing it has to do with garbage collection and is perhaps along the same reasons GIL exists. If this is the case, how/why are Value and Array implemented to be an exception to this?
I'm not remotely an expert on this, so this is definitely not a complete answer. There are a couple of things I think are at play here:
Processes have their own memory space, so if we share "normal" variables between processes, each process will end up with its own copy when it tries to write (perhaps using copy-on-write semantics).
Shared memory needs some sort of abstraction or primitive as it exists outside of process memory (SOURCE)
Value and Array are, by default, thread- and process-safe for concurrent use: access is guarded by locks, and they handle allocation in shared memory as well as protecting it :)
So the attached documentation answers "yes" to:
is shared memory treated differently than memory that is typically allocated to a process?
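To make the Value/Array behavior above concrete, here is a small sketch (the counter/array names and values are invented for illustration): a child process mutates both shared objects, and the parent sees the changes without any pickling of the data back and forth.

```python
from multiprocessing import Array, Process, Value

def increment(counter, arr):
    # Both objects live in shared memory, so the child's writes are
    # visible to the parent. Value's lock guards concurrent updates.
    with counter.get_lock():
        counter.value += 1
    for i in range(len(arr)):
        arr[i] *= 2

if __name__ == "__main__":
    counter = Value("i", 0)            # shared C int, lock-protected
    arr = Array("d", [1.0, 2.0, 3.0])  # shared C double array
    p = Process(target=increment, args=(counter, arr))
    p.start()
    p.join()
    print(counter.value, list(arr))  # 1 [2.0, 4.0, 6.0]
```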

Does this mechanism use the Buffer or the Cache?

As far as I know, a buffer is something that has yet to be "written" to disk, while a cache is something that has been "read" from the disk and stored for later use.
But for this mechanism: in Python, when a piece of memory is no longer being used, there is an area that the system keeps for the next use instead of releasing it immediately.
I am wondering does this area belong to the Buffer or the Cache?
Thanks.
As far as I understand, the mechanism you mentioned is related to Python's memory management and garbage collection.
This isn't related to buffering or caching data. Cache and buffer are different things, both used to reduce disk-related operations (reading or writing data to disk).
Python's memory mechanism talks about allocating memory from the operating system.
You can read more about Python's garbage collector here and the difference between cache and buffer here.

Fastest way to save and load a large dictionary in Python

I have a relatively large dictionary. How do I know the size? Well, when I save it using cPickle, the file grows to approx. 400 MB. cPickle is supposed to be much faster than pickle, but loading and saving this file just takes a lot of time. I have a dual-core 2.6 GHz laptop with 4 GB RAM running Linux. Does anyone have any suggestions for faster saving and loading of dictionaries in Python? Thanks
Use the protocol=2 option of cPickle. The default protocol (0) is much slower, and produces much larger files on disk.
If you just want to work with a larger dictionary than memory can hold, the shelve module is a good quick-and-dirty solution. It acts like an in-memory dict, but stores itself on disk rather than in memory. shelve is based on cPickle, so be sure to set your protocol to anything other than 0.
The advantages of a database like sqlite over cPickle will depend on your use case. How often will you write data? How many times do you expect to read each datum that you write? Will you ever want to perform a search of the data you write, or load it one piece at a time?
If you're doing write-once, read-many, and loading one piece at a time, by all means use a database. If you're doing write once, read once, cPickle (with any protocol other than the default protocol=0) will be hard to beat. If you just want a large, persistent dict, use shelve.
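A quick sketch of the shelve suggestion above (the file path and the stored keys are invented for illustration): entries are pickled to disk one by one, so the full dict never has to fit in RAM, and the protocol is set explicitly to avoid the slow default.

```python
import os
import pickle
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cache")

# Writes are pickled entry by entry; setting protocol avoids the
# slow, verbose protocol 0 default.
with shelve.open(path, protocol=pickle.HIGHEST_PROTOCOL) as db:
    db["answer"] = 42
    db["items"] = [1, 2, 3]

# Reopen later and read individual entries without loading everything.
with shelve.open(path) as db:
    value = db["answer"]
    items = db["items"]

print(value, items)  # 42 [1, 2, 3]
```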
I know it's an old question, but as an update for those who are still looking for an answer:
The protocol argument was updated in Python 3, and there are now even faster and more efficient options (i.e. protocol=3 and protocol=4) which might not work under Python 2.
You can read about it more in the reference.
In order to always use the best protocol supported by the Python version you're using, you can simply use pickle.HIGHEST_PROTOCOL. The following example is taken from the reference:
import pickle
# ...
with open('data.pickle', 'wb') as f:
    # Pickle the 'data' dictionary using the highest protocol available.
    pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
Sqlite
It might be worthwhile to store the data in a SQLite database. Although there will be some development overhead when refactoring your program to work with SQLite, it also becomes much easier and more performant to query the data.
You also get transactions, atomicity, serialization, compression, etc. for free.
Depending on what version of Python you're using, you might already have sqlite built-in.
I have tried this for many projects and concluded that shelve is faster than pickle in saving data. Both perform the same at loading data.
shelve is in fact a dirty solution.
You have to be very careful with it: if you do not close a shelve file after opening it, or if your code is interrupted for any reason between opening and closing it, the file has a high chance of getting corrupted (resulting in frustrating KeyErrors). That is really annoying, given that those of us using shelve do so precisely to store LARGE dict files that clearly also took a long time to construct.
And that is why shelve is a dirty solution... It's still faster, though. So!
You may try compressing your dictionary (with some restrictions, see this post); it will be efficient if disk access is the bottleneck.
That is a lot of data...
What kind of contents does your dictionary have? If it is only primitive or fixed datatypes, maybe a real database or a custom file format is the better option?

How to deserialize 1GB of objects into Python faster than cPickle?

We've got a Python-based web server that unpickles a number of large data files on startup using cPickle. The data files (pickled using HIGHEST_PROTOCOL) are around 0.4 GB on disk and load into memory as about 1.2 GB of Python objects -- this takes about 20 seconds. We're using Python 2.6 on 64-bit Windows machines.
The bottleneck is certainly not disk (it takes less than 0.5s to actually read that much data), but memory allocation and object creation (there are millions of objects being created). We want to reduce the 20s to decrease startup time.
Is there any way to deserialize more than 1GB of objects into Python much faster than cPickle (like 5-10x)? Because the execution time is bound by memory allocation and object creation, I presume using another unpickling technique such as JSON wouldn't help here.
I know some interpreted languages have a way to save their entire memory image as a disk file, so they can load it back into memory all in one go, without allocation/creation for each object. Is there a way to do this, or achieve something similar, in Python?
Try the marshal module - it's internal (used by the byte-compiler) and intentionally not advertised much, but it is much faster. Note that it doesn't serialize arbitrary instances like pickle, only built-in types (I don't remember the exact constraints; see the docs). Also note that the format isn't stable across Python versions.
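A minimal sketch of the marshal suggestion (the sample dict is invented for illustration): it round-trips built-in types only, which matches the constraint the answer describes.

```python
import marshal

data = {"ids": list(range(5)), "name": "example"}

# marshal only handles built-in types (dict, list, str, int, ...),
# but for those it is typically faster than pickle. The format is not
# guaranteed stable across Python versions, so use it for caches only.
blob = marshal.dumps(data)
restored = marshal.loads(blob)
print(restored == data)  # True
```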
If you need to initialize multiple processes and can tolerate one process always loaded, there is an elegant solution: load the objects in one process, and then do nothing in it except forking processes on demand. Forking is fast (copy on write) and shares the memory between all processes. [Disclaimers: untested; unlike Ruby, Python ref counting will trigger page copies so this is probably useless if you have huge objects and/or access a small fraction of them.]
If your objects contain lots of raw data like numpy arrays, you can memory-map them for much faster startup. pytables is also good for these scenarios.
If you'll only use a small part of the objects, then an OO database (like Zope's) can probably help you. Though if you need them all in memory, you will just waste lots of overhead for little gain. (never used one, so this might be nonsense).
Maybe other python implementations can do it? Don't know, just a thought...
Are you load()ing the pickled data directly from the file? What about loading the file into memory first and then doing the load?
I would start by trying cStringIO(); alternatively, you may try writing your own version of StringIO that uses buffer() to slice the memory, which would reduce the needed copy() operations (cStringIO may still be faster, but you'll have to try).
There are sometimes huge performance bottlenecks when doing these kinds of operations, especially on the Windows platform; Windows is somehow very unoptimized for doing lots of small reads, while UNIXes cope quite well. If load() does lots of small reads, or if you are calling load() several times to read the data, this will help.
I haven't used cPickle (or Python), but in cases like this I think the best strategy is to
avoid unnecessary loading of the objects until they are really needed - say, load after startup on a different thread. Actually, it's usually better to avoid unnecessary loading/initialization at any time, for obvious reasons. Google 'lazy loading' or 'lazy initialization'. If you really need all the objects to do some task before server startup, then maybe you can implement a custom manual deserialization method; in other words, implement something yourself using your intimate knowledge of the data, which can help you squeeze better performance out of it than a general-purpose tool would.
Did you try sacrificing pickling efficiency by not using HIGHEST_PROTOCOL? It isn't clear what performance costs are associated with that protocol, but it might be worth a try.
Impossible to answer this without knowing more about what sort of data you are loading and how you are using it.
If it is some sort of business logic, maybe you should try turning it into a pre-compiled module;
If it is structured data, can you delegate it to a database and only pull what is needed?
Does the data have a regular structure? Is there any way to divide it up and decide what is required and only then load it?
I'll add another answer that might be helpful - if you can, try defining __slots__ on the class that is most commonly created. This may be a little limiting and sometimes impossible, but in my tests it cut initialization time roughly in half.
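To illustrate what __slots__ changes (the Plain/Slotted class names and attributes are invented for this sketch): a slotted class drops the per-instance __dict__, which shrinks each object and speeds up attribute creation when millions of instances are built.

```python
class Plain:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Slotted:
    # __slots__ replaces the per-instance __dict__ with fixed storage,
    # reducing memory per object and speeding up instance creation.
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

p, s = Plain(1, 2), Slotted(1, 2)
print(hasattr(p, "__dict__"), hasattr(s, "__dict__"))  # True False
```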
