I'm using the Jython 2.5.1 implementation of Python to write a script that repeatedly invokes another process via subprocess.Popen, using PIPE to pipe stdout and stderr back to the parent process and stdin to the child process. After several hundred loop iterations, I seem to run out of file descriptors.
The Python subprocess documentation mentions very little about freeing file descriptors, other than the close_fds option, which isn't described very clearly (Why should there be any file descriptors besides 0, 1 and 2 open in the first place?). I'm assuming that in CPython, reference counting takes care of the resource freeing issue. What's the proper way to make sure all descriptors get freed when one is done with a Popen object in Jython?
Edit: Just in case it makes a difference, this is a multithreaded program, so there are several Popen processes running simultaneously.
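For reference, here is a minimal sketch of one loop iteration with the explicit cleanup I'm currently attempting; the argument list and payload are placeholders, and the explicit close() calls are my assumption about what's needed in Jython, where there is no CPython-style reference counting to release the pipes promptly:

    import subprocess

    def run_once(args, payload):
        """One loop iteration: spawn the child, feed stdin, collect stdout/stderr."""
        p = subprocess.Popen(args,
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        try:
            out, err = p.communicate(payload)  # drains both pipes, then waits
            return p.returncode, out, err
        finally:
            # Close the pipe handles explicitly rather than waiting for
            # garbage collection to get around to them.
            for f in (p.stdin, p.stdout, p.stderr):
                if f is not None:
                    f.close()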
This only answers part of your question, but my understanding is that, when you spawn a new process, it normally inherits all the handles of the parent process. That includes such things as open files and sockets that you're listening on.
On UNIX, that's a side-effect of using 'fork', which duplicates the current process and all of its handles before loading the new executable. On Windows it's more explicit, but Python does it anyway, to try to match the behavior across platforms as much as possible.
The close_fds option, when True, closes all these inherited handles after spawning the subprocess, so the new executable starts with a clean slate. But if your subprocesses are run one at a time, and terminating when they're done, then this shouldn't be the problem.
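As a concrete sketch of what I mean (close_fds is a real Popen parameter; the command itself is a placeholder); note that under Python 2 on Windows you cannot combine close_fds=True with redirecting the standard handles:

    import subprocess

    # Ask Popen to close every inherited descriptor above 2 in the child,
    # so handles leaked in the parent don't pile up in each subprocess.
    p = subprocess.Popen(["some_command", "--flag"],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         close_fds=True)
    out, err = p.communicate("input data")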
I was reading the descriptions of the two start methods in the Python docs:
spawn
The parent process starts a fresh Python interpreter process. The child process will only inherit those resources necessary to run the process object's run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver.
[Available on Unix and Windows. The default on Windows and macOS.]
fork
The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic.
[Available on Unix only. The default on Unix.]
And my question is:
Is it that fork is much quicker because it does not try to identify which resources to copy?
Is it that, since fork duplicates everything, it "wastes" many more resources compared to spawn?
There's a tradeoff between the three multiprocessing start methods:
fork is faster because it does a copy-on-write of the parent process's entire virtual memory including the initialized Python interpreter, loaded modules, and constructed objects in memory.
But fork does not copy the parent process's threads. Thus any locks that other threads in the parent were holding remain locked in the child with no owning thread to release them, ready to cause a deadlock the moment code tries to acquire one of them. Any native library that uses threads will likewise be left in a broken state after the fork.
The copied Python modules and objects might be useful or they might needlessly bloat every forked child process.
The child process also "inherits" OS resources like open file descriptors and open network ports. Those can also lead to problems but Python works around some of them.
So fork is fast, unsafe, and maybe bloated.
However these safety problems might not cause trouble depending on what the child process does.
spawn starts a Python child process from scratch without the parent process's memory, file descriptors, threads, etc. Technically, spawn forks a duplicate of the current process, then the child immediately calls exec to replace itself with a fresh Python, then asks Python to load the target module and run the target callable.
So spawn is safe, compact, and slower since Python has to load, initialize itself, read files, load and initialize modules, etc.
However it might not be noticeably slower compared to the work that the child process does.
forkserver forks a duplicate of the current Python process that trims down to approximately a fresh Python process. This becomes the "fork server" process. Then each time you start a child process, it asks the fork server to fork a child and run its target callable.
Those child processes all start out compact and without stuck locks.
forkserver is more complicated and not well documented. Bojan Nikolic's blog post explains more about forkserver and its secret set_forkserver_preload() method to preload some modules. Be wary of using an undocumented method, esp. before the bug fix in Python 3.7.0.
So forkserver is fast, compact, and safe, but it's more complicated and not well documented.
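If you want to pick a start method explicitly rather than rely on the platform default, a minimal sketch (the work function is just a placeholder) looks like this:

    import multiprocessing as mp

    def work(x):
        return x * x

    if __name__ == "__main__":
        # get_context gives an isolated context with the chosen start method;
        # mp.set_start_method("spawn") would instead set it globally (callable once).
        ctx = mp.get_context("spawn")   # or "fork" / "forkserver" where available
        with ctx.Pool(4) as pool:
            print(pool.map(work, range(10)))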
[The docs aren't great on all this so I've combined info from multiple sources and made some inferences. Do comment on any mistakes.]
Is it that fork is much quicker because it does not try to identify which resources to copy?
Yes, it's much quicker. The kernel can clone the whole process and only copies memory pages (as whole pages) when they are modified. Shipping resources over to a new process and booting the interpreter from scratch are not necessary.
Is it that, since fork duplicates everything, it "wastes" many more resources compared to spawn?
Fork on modern kernels only does "copy-on-write", and it only affects memory pages that actually change. The caveat is that in CPython even merely iterating over an object counts as a "write", because the object's reference count gets incremented.
If you have long-running processes with lots of small objects in use, this can mean you waste more memory than with spawn. Anecdotally, I recall Facebook claiming that memory usage dropped considerably after switching from "fork" to "spawn" for their Python processes.
Note this question is not the same as Python Subprocess.Popen from a thread, because that question didn't ask for an explanation of why it is OK.
If I understand correctly, subprocess.Popen() creates a new process by forking the current process and calling execv to run the new program.
However, if the current process is multithreaded and we call subprocess.Popen() in one of the threads, won't it duplicate all the threads in the current process (because it calls the fork() syscall)? If that's the case, even though these duplicated threads will be wiped out by the execv syscall, there's a time window in which the duplicated threads can do a bunch of nasty things.
A case in point is gtest_parallel.py, where the program creates a bunch of threads in execute_tasks(), and in each thread task_manager.run_task(task) calls task.run(), which calls subprocess.Popen() to run a task. Is this OK?
The question applies to other fork-in-thread programs, not just Python.
Forking only results in the calling thread being active in the child, not all threads. Most of the pitfalls related to forking in a multi-threaded program involve mutexes held by other threads that will never be released in the fork. When you're using Popen, you're going to launch some unrelated process once you execv, so that's not really a concern. There is a warning in the Popen docs about being careful with multiple threads and the preexec_fn parameter, which runs before the execv call happens:
Warning: The preexec_fn parameter is not safe to use in the presence of threads in your application. The child process could deadlock before exec is called. If you must use it, keep it trivial! Minimize the number of libraries you call into.
I'm not aware of any other pitfalls to watch out for with Popen, at least in recent versions of Python. Python 2.7's subprocess module does seem to have flaws that can cause issues with multi-threaded applications, however.
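To make the pattern concrete, here is a minimal sketch of calling Popen from several worker threads without preexec_fn; the echo command is just a placeholder:

    import subprocess
    import threading

    def run_task(cmd):
        # Each thread spawns and waits on its own child; no preexec_fn is used,
        # so nothing runs in the child between fork() and exec().
        proc = subprocess.Popen(cmd,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        print(cmd, proc.returncode)

    threads = [threading.Thread(target=run_task, args=(["echo", str(i)],))
               for i in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()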
I am using the multiprocessing module to fork child processes. Since on forking the child process gets the address space of the parent process, I am getting the same logger in the parent and the child. I want to clear the child process's address space of any values carried over from the parent. I've learned that multiprocessing does a fork() at a lower level but not an exec(). I want to know whether it is good to use multiprocessing in my situation, or whether I should go for an os.fork() and os.exec() combination, or whether there is some other solution?
Thanks.
Since multiprocessing is running a function from your program as if it were a thread function, it definitely needs a full copy of your process' state. That means doing fork().
Using a higher-level interface provided by multiprocessing is generally better. At least you should not care about the fork() return code yourself.
os.fork() is a lower-level function providing less service out of the box, though you certainly can use it for anything multiprocessing is used for... at the cost of partially reimplementing multiprocessing's code. So, I think, multiprocessing should be OK for you.
However, if your process's memory footprint is too large to duplicate (or if you have other reasons to avoid forking, such as open connections to databases, open log files, etc.), you may have to make the function you want to run in a new process a separate Python program. Then you can run it using subprocess, pass parameters to its stdin, capture its stdout and parse the output to get results.
Update: the os.exec...() family of functions is hard to use for most purposes, since it replaces your process with the spawned one (if you run the same program that is already running, it will restart from the very beginning, keeping no in-memory data). However, if you really do not need to continue the parent process's execution, exec() may be of some use.
From my personal experience: os.fork() is used very often to create daemon processes on Unix; I often use subprocess (communicating through stdin/stdout); I have almost never used multiprocessing; and not once have I needed os.exec...().
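A minimal sketch of the subprocess approach described above, assuming a hypothetical worker.py that reads JSON parameters from stdin and prints a JSON result to stdout:

    import json
    import subprocess
    import sys

    # Run the worker as a separate Python program, pass parameters via stdin,
    # and read its result from stdout.
    params = {"n": 42}
    proc = subprocess.Popen([sys.executable, "worker.py"],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate(json.dumps(params).encode())
    result = json.loads(out)
    print(result)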
You can just rebind the logger in the child process to its own. I don't know about other OSes, but on Linux forking doesn't duplicate the entire memory footprint (as Ellioh mentioned); it uses the "copy-on-write" concept. So until you change something in the child process, it stays in the memory scope of the parent process. For instance, you can fork 100 child processes (that don't write into memory, only read) and check the overall memory usage. It will not be parent_memory_usage * 100, but much less.
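For the logger itself, a minimal sketch of rebinding it inside the child's target function (the logger name and file path are placeholders):

    import logging
    import multiprocessing
    import os

    def child_target():
        # Reconfigure logging inside the child so it stops sharing the parent's handlers.
        logger = logging.getLogger("myapp")
        logger.handlers = []                       # drop any handlers inherited via fork
        logger.addHandler(logging.FileHandler("child_%d.log" % os.getpid()))
        logger.setLevel(logging.INFO)
        logger.info("running in child pid %d", os.getpid())

    if __name__ == "__main__":
        p = multiprocessing.Process(target=child_target)
        p.start()
        p.join()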
How do I correctly fork a child process in Twisted that does not use anything from Twisted but does use data from the parent process (e.g. to process a "snapshot" of some data from the parent and write it to a file, without blocking)?
It seems that if I do anything like a clean shutdown in the child process after os.fork(), it closes some of the sockets/descriptors in the parent process; the only way I see to avoid that is os.kill(os.getpid(), signal.SIGKILL), which does seem like a bad idea (though not directly problematic).
(Additionally, if a dict is changed in the parent process, can it change in the child process too? A quick test shows that it doesn't. OS/kernels are Debian stable/sid.)
IReactorProcess.spawnProcess (usually available as from twisted.internet import reactor; reactor.spawnProcess) can spawn a process running any available executable on your system. The subprocess does not need to use Twisted, or, indeed, even be in Python.
Do not call os.fork yourself. As you've discovered, it has lots of very peculiar interactions with process state, that spawnProcess will manage for you.
Among the problems with os.fork are:
Forking copies your current process state, but doesn't copy the state of threads. This means that any thread in the middle of modifying some global state will leave things half-broken, possibly holding some locks which will never be released. Don't run any threads in your application? Have you audited every library you use, every one of its dependencies, to ensure that none of them have ever or will ever use a background thread for anything?
You might think you're only touching certain areas of your application's memory, but thanks to Python's reference counting, any object you even peripherally look at (or that is present on the stack) may have its reference count incremented or decremented. Incrementing or decrementing a refcount is a write operation, which means the whole page containing that object (not just the one object) gets copied into your forked process. So forked processes in Python tend to accumulate a much larger copied set than, say, forked C programs.
Many libraries, famously all of the libraries that make up the systems on macOS and iOS, cannot handle fork() correctly and will simply crash your program if you attempt to use them after fork but before exec.
There's a flag for telling file descriptors to close on exec - but no such flag to have them close on fork. So any files (including log files, and again, any background temp files opened by libraries you might not even be aware of) can get silently corrupted or truncated if you don't manage access to them carefully.
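As a sketch of the spawnProcess route (the write_snapshot.py script and the snapshot data are placeholders), the child receives the snapshot of parent data over its stdin and can write it to a file without blocking the parent's reactor:

    import sys
    from twisted.internet import protocol, reactor

    class SnapshotWriter(protocol.ProcessProtocol):
        def __init__(self, snapshot):
            self.snapshot = snapshot

        def connectionMade(self):
            # Feed the snapshot of parent data to the child over its stdin.
            self.transport.write(self.snapshot)
            self.transport.closeStdin()

        def processEnded(self, reason):
            print("child finished:", reason.value)

    snapshot = b"serialized parent data"
    reactor.spawnProcess(SnapshotWriter(snapshot), sys.executable,
                         args=[sys.executable, "write_snapshot.py"], env=None)
    reactor.run()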
In Python, is there a way to invoke a new process, hand it the same context, such as the standard I/O streams, close the current process, and give control to the invoked process? This would effectively 'replace' the process.
I have a program whose behavior I want to repeat. However, it uses a third-party library, and it seems that the only way that I can truly kill threads invoked by that library is to exit() my python process.
Plus, it seems like it could help manage memory.
You may be interested in os.execv() and friends:
These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions.
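A minimal sketch of handing control over (the replacement.py script and its argument are placeholders):

    import os
    import sys

    # Flush anything buffered in this process before it is replaced; the new
    # program inherits stdin/stdout/stderr and keeps the same process id.
    sys.stdout.flush()
    sys.stderr.flush()
    os.execv(sys.executable, [sys.executable, "replacement.py", "--resume"])
    # Never reached: execv only "returns" by raising OSError on failure.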