How to track memory for a Python script

We have a system that has only one interpreter. Many user scripts run through this interpreter, and we want to put a cap on each script's memory usage. There is only one process, and that process invokes a tasklet for each script. Since we only have one interpreter and one process, we don't know of a way to cap each script's memory usage. What is the best way to do this?

I don't think that it's possible at all. Your question implies that the memory used by your tasklets is completely separated, which is probably not the case. Python optimizes small objects such as integers: as far as I know, every 3 in your code uses the same object, which is not a problem because it is immutable. So if two of your tasklets use the same (small?) integer, they are already sharing memory. ;-)
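You can see that sharing directly if you're curious; this relies on a CPython implementation detail (small integers are cached), not on anything the language guarantees:

```python
# CPython caches small integers (roughly -5..256), so separate names bound
# to the same small value refer to the same object. This is an
# implementation detail of CPython, not a language guarantee.
a = 3
b = 3
print(a is b)           # True on CPython: both names point to the cached int 3
print(id(a) == id(b))   # same object identity, i.e. shared memory
```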

Memory is separated at the OS process level. There's no easy way to tell which tasklet, or even which thread, a particular object belongs to.
Also, there's no easy way to add a custom bookkeeping allocator that would analyze which tasklet or thread is allocating a piece of memory and prevent it from allocating too much. It would also need to plug into the garbage-collection code to discount objects that are freed.
Unless you're keen to write a custom Python interpreter, using a process per task is your best bet.
You don't even need to kill and respawn the interpreters every time you need to run another script. Pool several interpreters and only kill the ones that grow past a certain memory threshold after running a script. Limit the interpreters' memory consumption by means provided by the OS if you need to.
If you need to share large amounts of common data between the tasks, use shared memory; for smaller interactions, use sockets (with a messaging layer on top of them as needed).
Yes, this might be slower than your current setup. But judging from your use of Python, I suppose these scripts don't do any time-critical computing anyway.
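A minimal sketch of that approach, assuming a POSIX system (the resource module is not available on Windows); user_script.py and the 256 MB cap are placeholders:

```python
import multiprocessing
import resource

def run_limited(script_path, max_bytes):
    # Cap this child process's address space (POSIX only). Allocations
    # beyond the limit raise MemoryError inside the child, leaving the
    # parent and the other scripts untouched.
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))
    with open(script_path) as f:
        code = compile(f.read(), script_path, "exec")
    exec(code, {"__name__": "__main__"})

if __name__ == "__main__":
    p = multiprocessing.Process(
        target=run_limited,
        args=("user_script.py", 256 * 1024 * 1024),  # placeholder script and cap
    )
    p.start()
    p.join()
    print("exit code:", p.exitcode)  # non-zero if the script died, e.g. on MemoryError
```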

Related

Does the Python multiprocessing module need a multi-core CPU?

Do I need a multi-core CPU to take advantage of the Python multiprocessing module?
Also, can someone tell me how it works under the hood?
multiprocessing asks the OS to launch one or more new processes, running the same version of Python and the same version of your script. It can also set up pipes or other ways of sharing data directly between them.
It usually works like magic; when you peek under the hood, sometimes it looks like sausage being made, but you can usually understand the sausage grinders. The multiprocessing docs do a great job explaining things further (they're long, but then there's a lot to explain). And if you need even more under-the-hood knowledge, the docs link to the source, which is pretty readable Python code. If you have a specific question after reading, come back to SO and ask a specific question.
Meanwhile, you can get some of the benefits of multiprocessing without multiple cores.
The main benefit—the reason the module was designed—is parallelism for speed. And obviously, without 4 cores, you aren't going to cut your time down to 25%. But sometimes, you actually can get a bit of speedup even with a single core, especially if that core has "hyperthreading" or similar technologies. I've seen times come down to 80%, or even 60%. More commonly, they'll go up to 108% instead (because you did get a small benefit from hyperthreading, but the overhead cost was higher than the gain). But try it with your code and see.
Meanwhile, you get all of the side benefits:
Concurrency: You can run multiple tasks at once without them blocking each other. Of course threads, asyncio, and other techniques can do this too.
Isolation: You can run multiple tasks at once without the risk of one of them changing data that another one wasn't expecting to change.
Crash protection: If a child task segfaults, only that task is affected. (Well, you still have to be careful of any side effects—if it crashed in the middle of writing a file that another task expects to be in a consistent shape, you're still in trouble.)
You can also use the multiprocessing module without multiple processes. Sometimes you just want the higher-level API of the module, but you want to use it with threads; multiprocessing.dummy does that. And you can switch back and forth in a couple lines of code to test it both ways. Or you can use the higher-level concurrent.futures.ProcessPoolExecutor wrapper, if its model fits what you want to do. Besides often being simpler, it lets you switch between threads and processes by just changing one word in one line.
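For instance, the multiprocessing.dummy swap is literally one import line (a minimal sketch; square is just a stand-in task):

```python
from multiprocessing import Pool          # real processes
# from multiprocessing.dummy import Pool  # same API, backed by threads

def square(n):
    # stand-in for a real task
    return n * n

if __name__ == "__main__":
    with Pool(4) as pool:
        print(pool.map(square, range(10)))
```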
Also, redesigning your program around multiprocessing takes you a step closer to further redesigning it as a distributed system that runs on multiple separate machines. It forces you to deal with questions like how your tasks communicate without being able to share everything, without forcing you to deal with further questions like how they communicate without reliable connections.

Python multi-threads reading a read-only memory buffer, bypassing the GIL

I have many files on disk that need to be read. The first option is to use multiple threads; it performs very well on an SSD (when a thread blocks on I/O, it releases the GIL).
But I want to achieve similar or faster speed without an SSD, so I pre-load the files into memory (e.g. stored in a dict), and every thread reads the file contents from memory. Unfortunately, perhaps because of the GIL, there is a lock on the dict, so this is even slower than loading the files from the SSD!
So my question is: is there any solution that can create a read-only memory buffer without a lock/the GIL? Like a ramdisk, or something else?
In short, no.
Even though Python (CPython in particular) is a multithreaded language, at any instant the interpreter can run only one piece of Python code. Therefore, if your pure-Python program does not contain blocking I/O (e.g. it only accesses a lock-free memory buffer), it will degrade to a single-threaded program no matter what you do. In fact the performance will be worse than an actual single-threaded program, because there is overhead in synchronizing with the other threads.
(Special thanks to Graham Dumpleton!) One solution is to write a C extension for CPython and release the GIL when entering the "realm of C". Just be careful: you can't touch Python objects without the protection of the GIL, otherwise you will cause subtle bugs or outright crashes.
There are implementations without a GIL, for example Jython; Cython (not to be confused with CPython) also lets you release the GIL in compiled extension code. You can try using them. But keep in mind that writing a correct multithreaded program is hard, and writing a fast multithreaded program is even harder. My suggestion is to write a multi-process program instead of a multithreaded one, and pass data via IPC (for example ZeroMQ, which is easy to use and lightweight).
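If you go the multi-process route, a minimal sketch of handing work to another process over ZeroMQ might look like this (it assumes the third-party pyzmq package is installed; the endpoint and the messages are placeholders):

```python
import multiprocessing
import zmq  # third-party: pip install pyzmq

def worker(endpoint):
    ctx = zmq.Context()
    pull = ctx.socket(zmq.PULL)
    pull.connect(endpoint)
    while True:
        msg = pull.recv_json()
        if msg.get("stop"):
            break
        print("worker got:", msg)   # placeholder for real per-message work

if __name__ == "__main__":
    endpoint = "tcp://127.0.0.1:5557"   # placeholder endpoint
    proc = multiprocessing.Process(target=worker, args=(endpoint,))
    proc.start()

    ctx = zmq.Context()
    push = ctx.socket(zmq.PUSH)
    push.bind(endpoint)
    for i in range(3):
        push.send_json({"task": i})
    push.send_json({"stop": True})
    proc.join()
```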
Let me add a few points to HKTonyLee's answer.
So Python has this GIL, but it is released when doing, for example, file I/O. This means you can read files in parallel. Since, from a process's point of view, there is no such thing as a file, only file descriptors (assuming POSIX), whatever you read does not have to be stored on the disk.
All in all, if you move your files to (for example) tmpfs, a ramdisk, or any equivalent, you should obtain even better performance than with an SSD. Note however the risk: if you need to modify a file, you may lose the update.
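A minimal sketch of that setup, assuming a Linux-style tmpfs mount at /dev/shm (the source file names are placeholders); each thread spends its time in read(), which releases the GIL:

```python
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

SRC_FILES = ["data/a.bin", "data/b.bin"]   # placeholder source files
RAM_DIR = "/dev/shm/filecache"             # tmpfs on most Linux systems

def read_file(path):
    # open()/read() are blocking I/O calls, so the GIL is released while
    # the kernel serves the data (straight from RAM for tmpfs files).
    with open(path, "rb") as f:
        return path, len(f.read())

if __name__ == "__main__":
    os.makedirs(RAM_DIR, exist_ok=True)
    ram_paths = []
    for src in SRC_FILES:
        dst = os.path.join(RAM_DIR, os.path.basename(src))
        shutil.copy(src, dst)              # one-time copy into the RAM-backed fs
        ram_paths.append(dst)
    with ThreadPoolExecutor(max_workers=8) as pool:
        for path, size in pool.map(read_file, ram_paths):
            print(path, size)
```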

How to efficiently fan out large chunks of data into multiple concurrent sub-processes in Python?

[I'm using Python 3.5.2 (x64) in Windows.]
I'm reading binary data in large blocks (on the order of megabytes) and would like to efficiently fan that data out to 'n' concurrent Python sub-processes (each process will deal with the data in a unique and computationally expensive way).
The data is read-only, and each sequential block will not be considered to be "processed" until all the sub-processes are done.
I've focused on shared memory (Array (locked / unlocked) and RawArray): Reading the data block from the file into a buffer was quite quick, but copying that block to the shared memory was noticeably slower.
With queues, there would be a lot of redundant data copying relative to shared memory; I chose shared memory because it involves one copy versus 'n' copies of the data.
Architecturally, how would one handle this problem efficiently in Python 3.5?
Edit: I've gathered two things so far: memory mapping in Windows is cumbersome because of the pickling involved to make it happen, and multiprocessing.Queue (more specifically, JoinableQueue) is faster though not (yet) optimal.
Edit 2: One other thing I've gathered is, if you have lots of jobs to do (particularly in Windows, where spawn() is the only option and is costly too), creating long-running parallel processes is better than creating them over and over again.
Suggestions - preferably ones that use multiprocessing components - are still very welcome!
In Unix this might be tractable because fork() is used for multiprocessing, but in Windows the fact that spawn() is the only way it works really limits the options. However, this is meant to be a multi-platform solution (which I'll use mainly in Windows) so I am working within that constraint.
I could open the data source in each subprocess, but depending on the data source that can be expensive in terms of bandwidth or prohibitive if it's a stream. That's why I've gone with the read-once approach.
Shared memory via mmap and an anonymous memory allocation seemed ideal, but to pass the object to the subprocesses would require pickling it - but you can't pickle mmap objects. So much for that.
Shared memory via a Cython module may or may not be possible, but it's almost certainly prohibitive - and it raises the question of whether a more appropriate language should be used for the task.
Shared memory via the shared Array and RawArray functionality was costly in terms of performance.
Queues worked the best - but the internal I/O due to what I think is pickling in the background is prodigious. However, the performance hit for a small number of parallel processes wasn't too noticeable (this may be a limiting factor on faster systems though).
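For reference, the queue-based shape I ended up with looks roughly like this (a minimal sketch: block sizes, the worker count, and the per-worker processing are all placeholders):

```python
import multiprocessing as mp

def worker(worker_id, q):
    while True:
        block = q.get()
        if block is None:          # sentinel: shut down
            q.task_done()
            break
        _ = len(block)             # placeholder for this worker's unique, expensive processing
        q.task_done()

if __name__ == "__main__":
    n_workers = 3
    queues = [mp.JoinableQueue(maxsize=2) for _ in range(n_workers)]
    procs = [mp.Process(target=worker, args=(i, q))
             for i, q in enumerate(queues)]
    for p in procs:
        p.start()

    for _ in range(5):                  # stand-in for reading blocks from the source
        block = bytes(4 * 1024 * 1024)  # placeholder 4 MB block
        for q in queues:                # note: the block is pickled once per worker
            q.put(block)
        for q in queues:
            q.join()                    # a block counts as processed only when every worker is done
    for q in queues:
        q.put(None)
    for p in procs:
        p.join()
```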
I will probably re-factor this in another language for a) the experience! and b) to see if I can avoid the I/O demands the Python Queues are causing. Fast memory caching between processes (which I hoped to implement here) would avoid a lot of redundant I/O.
While Python is widely applicable, no tool is ideal for every job and this is just one of those cases. I learned a lot about Python's multiprocessing module in the course of this!
At this point it looks like I've gone as far as I can go with standard CPython, but suggestions are still welcome!

How to choose between the different concurrency methods available in Python?

There are different ways of doing concurrency in Python; below is a simple list:
process-based: subprocess.Popen, multiprocessing.Process, old-fashioned os.system, os.popen, os.exec*
thread-based: threading.Thread
microthread-based: greenlet
I know the difference between thread-based concurrency and process-based concurrency, and I know some (but not too much) about GIL's impact in CPython's thread support.
For a beginner who wants to implement some level of concurrency, how should one choose between them? Or, what's the general difference between them? Are there any more ways to do concurrency in Python?
I'm not sure if I'm asking the right question, please feel free to improve this question.
The reason all three of these mechanisms exist is that they have different strengths and weaknesses.
First, if you have huge numbers of small, independent tasks, and there's no sensible way to batch them up (typically, this means you're writing a C10k server, but that's not the only possible case), microthreads win hands down. You can only run a few hundred OS threads or processes before everything either bogs down or just fails. So, either you use microthreads, or you give up on automatic concurrency and start writing explicit callbacks or coroutines. This is really the only time microthreads win; otherwise, they're just like OS threads except a few things don't work right.
Next, if your code is CPU-bound, you need processes. Microthreads are an inherently single-core solution; threads in Python generally can't parallelize well because of the GIL; processes get as much parallelism as the OS can handle. So, processes will let your 4-core system run your code 4x as fast; nothing else will. (In fact, you might want to go farther and distribute across separate computers, but you didn't ask about that.) But if your code is I/O-bound, core-parallelism doesn't help, so threads are just as good as processes.
If you have lots of shared, mutable data, things are going to be tough. Processes require explicitly putting everything into sharable structures, like using multiprocessing.Array in place of list, which gets nightmarishly complicated. Threads share everything automatically—which means there are race conditions everywhere. Which means you need to think through your flow very carefully and use locks effectively. With processes, an experienced developer can build a system that works on all of the test data but has to be reorganized every time you give it a new set of inputs. With threads, an experienced developer can write code that runs for weeks before accidentally and silently scrambling everyone's credit card numbers.
Whichever of those two scares you more—do that one, because you understand the problem better. Or, if it's at all possible, step back and try to redesign your code to make most of the shared data independent or immutable. This may not be possible (without making things either too slow or too hard to understand), but think about it hard before deciding that.
If you have lots of independent data or shared immutable data, threads clearly win. Processes need either explicit sharing (like multiprocessing.Array again) or marshaling. multiprocessing and its third-party alternatives make marshaling pretty easy for the simple cases where everything is picklable, but it's still not as simple as just passing values around directly, and it's also a lot slower.
Unfortunately, most cases where you have lots of immutable data to pass around are the exact same cases where you need CPU parallelism, which means you have a tradeoff. And the best answer to this tradeoff may be OS threads on your current 4-core system, but processes on the 16-core system you have in 2 years. (If you organize things around, e.g., multiprocessing.pool.ThreadPool or concurrent.futures.ThreadPoolExecutor, you can trivially switch to Pool or ProcessPoolExecutor later—or even with a runtime configuration switch—and that pretty much solves the problem. But this isn't always possible.)
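To illustrate, with concurrent.futures the switch really is one word (a minimal sketch; crunch is a stand-in for real CPU-bound work):

```python
from concurrent.futures import ProcessPoolExecutor
# from concurrent.futures import ThreadPoolExecutor   # the one-word alternative

def crunch(n):
    # stand-in for real CPU-bound work
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Swap ProcessPoolExecutor for ThreadPoolExecutor (or back) and
    # nothing else in the code needs to change.
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(crunch, [100_000] * 8)))
```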
Finally, if your application inherently requires an event loop (e.g., a GUI app or a network server), pick the framework you like first. Coding with, say, PySide vs. wx, or twisted vs. gevent, is a bigger difference than coding with microthreads vs. OS threads. And, once you've picked the framework, see how much you can take advantage of its event loop where you thought you needed real concurrency. For example, if you need some code to run every 30 seconds, don't start a thread (micro- or OS) for that, ask the framework to schedule it however it wants.
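For example, with an asyncio-style event loop, a job that must run every 30 seconds is simply scheduled on the loop instead of getting its own thread (a minimal sketch; the printed message stands in for the real work):

```python
import asyncio

async def periodic_job():
    # placeholder for the code that must run every 30 seconds; it lives
    # on the event loop, so no extra thread is needed
    while True:
        print("periodic work")
        await asyncio.sleep(30)

async def main():
    job = asyncio.create_task(periodic_job())
    await asyncio.sleep(95)      # stand-in for the rest of the application
    job.cancel()

asyncio.run(main())
```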

Creating an in-memory cache that persists between executions

I'm developing a Python command-line utility that potentially involves rather large queries against a set of files. It's a reasonably finite list of queries (think indexed DB columns). To improve in-process performance, I can generate sorted/structured lists, maps and trees once, and hit those repeatedly, rather than hit the file system each time.
However, these caches are lost when the process ends, and need to be rebuilt every time the script runs, which dramatically increases the runtime of my program. I'd like to identify the best way to share this data between multiple executions of my command, which may be concurrent, one after another, or with significant delays between executions.
Requirements:
Must be fast - any sort of per-execution processing should be minimized; this includes disk IO and object construction.
Must be OS agnostic (or at least be able to hook into similar underlying behaviors on Unix/Windows, which is more likely).
Must allow reasonably complex querying / filtering - I don't think a key/value map will be good enough
Does not need to be up-to-date - (briefly) stale data is perfectly fine, this is just a cache, the actual data is being written to disk separately.
Can't use a heavyweight daemon process, like MySQL or MemCached - I want to minimize installation costs, and asking each user to install these services is too much.
Preferences:
I'd like to avoid any sort long running daemon process at all, if possible.
While I'd like to be able to update the cache quickly, rebuilding the whole cache on update isn't the end of the world, fast reads are much more important than fast writes.
In my ideal fantasy world, I'd be able to directly keep Python objects around between executions, sort of like Java threads (like Tomcat requests) sharing singleton data store objects, but I realize that may not be possible. The closer I can get to that though, the better.
Candidates:
SQLite in memory
SQLite on its own doesn't seem fast enough for my use case, since it's backed by disk and would therefore have to read from the file on every execution. Perhaps this isn't as bad as it seems, but it seems necessary to persistently keep the database in memory. SQLite allows DBs to use memory as storage, but those DBs are destroyed upon program exit and cannot be shared between instances.
Flat file database loaded into memory with mmap
On the opposite end of the spectrum, I could write the caches to disk, then load them into memory with mmap, and share the same memory space between separate executions. However, it's not clear to me what happens to the mmap if all processes exit. It's ok if the mmap is eventually flushed from memory, but I'd want it to stick around for a little bit (30 seconds? a few minutes?) so a user can run commands one after another, and the cache can be reused. This example seems to imply that there needs to be an open mmap handle, but I haven't found any exact description of when memory-mapped files get dropped from memory and need to be reloaded from disk.
I think I could implement this, if mmap objects do stick around after exit, but it feels very low level, and I imagine someone's already got a more elegant solution implemented. I'd hate to start building this only to realize I've been rebuilding SQLite. On the other hand, it feels like it would be very fast, and I could make optimizations given my specific use case.
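For reference, the read side of what I have in mind would look roughly like this (query_cache.bin is a placeholder for the pre-built flat cache); whether the pages stay warm after every process exits is up to the OS page cache rather than the mmap object itself:

```python
import mmap

CACHE_FILE = "query_cache.bin"   # placeholder: flat cache file built earlier

with open(CACHE_FILE, "rb") as f:
    # Map the file read-only; pages are loaded lazily and shared with any
    # other process that maps the same file.
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = mm[:16]              # slice without copying the whole file
        offset = mm.find(b"INDEX")    # example lookup inside the mapping
        print(len(mm), header, offset)
```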
Share Python objects between processes using Processing
The Processing package indicates "Objects can be shared between processes using ... shared memory". Looking through the rest of the docs, I didn't see any further mention of this behavior, but that sounds very promising. Can anyone direct me to more information?
Store data on a RAM disk
My concern here is OS-specific capabilities, but I could create a RAM disk and then simply read/write to it as I please (SQLite?). The fs.memoryfs package seems like a promising alternative to work with multiple OSs, but the comments imply a fair number of limitations.
I know pickle is an efficient way to store Python objects, so it might have speed advantages over any sort of manual data storage. Can I hook pickle into any of the above options? Would that be better than flat files or SQLite?
I know there's a lot of questions related to this, but I did a fair bit of digging and couldn't find anything directly addressing my question with regards to multiple command line executions.
I fully admit, I may be way overthinking this. I'm just trying to get a feel for my options, and if they're worthwhile or not.
Thank you so much for your help!
I would just do the simplest thing that might possibly work. ...which in your case would likely just be to dump to a pickle file. If you find it's not fast enough, try something more involved (like memcached or SQLite). Donald Knuth says "Premature optimization is the root of all evil"!
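Something like this is usually enough to start with (a minimal sketch; cache.pkl and the expensive build step are placeholders):

```python
import os
import pickle

CACHE_PATH = "cache.pkl"   # placeholder location for the dumped structures

def build_structures():
    # placeholder for the expensive sorted lists / maps / trees build
    return {"index": {str(i): i for i in range(100_000)}}

def load_structures():
    # Reuse the pickle when it exists; rebuild and re-dump it otherwise.
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, "rb") as f:
            return pickle.load(f)
    data = build_structures()
    with open(CACHE_PATH, "wb") as f:
        pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)
    return data

if __name__ == "__main__":
    structures = load_structures()
    print(structures["index"]["4242"])
```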
