Multiprocessing within a Linux daemon written in Python

I have a Linux daemon (based on the Python module python-daemon) that needs to spawn two processes (consider a producer and a consumer) from the multiprocessing module to handle some concurrent I/O (the producer reads from an input stream and the consumer uploads the data using Python requests).
As per the Python docs (https://docs.python.org/2/library/multiprocessing.html), daemonic processes are not allowed to start child processes. How can I handle this? Are there any documents or examples of this approach?
Please advise.
Context:
I have tried the threading module, but due to the GIL the consumer rarely gets a chance to execute. I also looked into Tornado and gevent, but either would require rewriting a lot of the code.

I think there is some confusion here. The docs say that only a process created via the multiprocessing module and marked as daemonic cannot create child processes. But a process created by python-daemon is a normal Linux daemon.
A Linux daemon is a process running in the background (the python-daemon library creates such a process); it can have child processes.
Only a daemonic process created by the multiprocessing library (one whose daemon attribute is set to True) cannot create child processes.
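A minimal sketch of what that means in practice, assuming the python-daemon library's DaemonContext; the producer/consumer bodies and the queue payload are illustrative placeholders:

    import daemon
    import multiprocessing

    def producer(queue):
        # Placeholder: read items from the input stream and enqueue them.
        for item in range(10):
            queue.put(item)
        queue.put(None)  # sentinel telling the consumer to stop

    def consumer(queue):
        # Placeholder: upload each item, e.g. with the requests library.
        while True:
            item = queue.get()
            if item is None:
                break

    def main():
        queue = multiprocessing.Queue()
        # The daemon attribute stays False (the default), so these are
        # ordinary child processes of the daemonized parent and may spawn
        # children of their own; only multiprocessing's daemon=True
        # forbids that.
        p = multiprocessing.Process(target=producer, args=(queue,))
        c = multiprocessing.Process(target=consumer, args=(queue,))
        p.start()
        c.start()
        p.join()
        c.join()

    with daemon.DaemonContext():
        main()

The key point: daemonizing the parent with DaemonContext puts no multiprocessing restriction on it, because that restriction applies only to Process objects whose daemon flag you set yourself.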

Related

Pause multiple Python processes from multiprocessing.process and resume them using FastAPI

I was trying to work something out with 3-4 Python processes that run in parallel. I start/stop the service and interact with it using FastAPI. I wanted to add the ability to pause a process (from multiprocessing.process) from the FastAPI dashboard and resume it again later in some way. Is there a way to temporarily halt a Python process? So far I have only managed to terminate the process altogether. Thanks!
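One hedged approach (an assumption, not an answer from the thread): on POSIX systems a child process can be paused and resumed with SIGSTOP/SIGCONT. A sketch, with the worker body as a placeholder:

    import multiprocessing
    import os
    import signal
    import time

    def worker():
        # Placeholder for the real work loop.
        while True:
            time.sleep(1)

    if __name__ == "__main__":
        p = multiprocessing.Process(target=worker)
        p.start()
        time.sleep(3)
        os.kill(p.pid, signal.SIGSTOP)  # pause: the process stops being scheduled
        time.sleep(3)
        os.kill(p.pid, signal.SIGCONT)  # resume where it left off
        time.sleep(3)
        p.terminate()
        p.join()

In a FastAPI app, the same two os.kill() calls could be exposed as pause/resume endpoints keyed by the stored pid; note this does not work on Windows.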

Profiling background threads of a Python app using vmprof

I'm trying to configure profiling of a Python application (running under pypy2.7 v7.1.1) using vmprof.
If the application is run via pypy -m vmprof ..., the resulting profile file contains samples from all threads (main and background). However, I need to enable and disable the profiler in a running process, so I'm doing this with the vmprof.enable()/vmprof.disable() functions in a signal handler. The problem is that the resulting file only contains samples from the main thread.
Is there a way to profile all threads of a running application using vmprof?
I ended up recreating the background threads when the profiler starts.
It is important to spawn the new threads from the main thread while the profiler is running. If the new background threads are spawned from the old background threads, the new threads will still not be profiled.
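A sketch of that workaround, assuming each background thread checks a threading.Event to know when to stop; worker_fn and the (thread, event) tuple layout are illustrative, not part of the original answer:

    import threading
    import vmprof

    def start_profiling(outfile, old_threads, worker_fn):
        # Ask the existing background threads to finish (assumes each
        # checks its stop event periodically), then join them.
        for thread, stop_event in old_threads:
            stop_event.set()
            thread.join()
        vmprof.enable(outfile.fileno())
        # Respawn the workers *from the main thread*, so the profiler
        # samples them as well.
        new_threads = []
        for _ in old_threads:
            stop_event = threading.Event()
            thread = threading.Thread(target=worker_fn, args=(stop_event,))
            thread.start()
            new_threads.append((thread, stop_event))
        return new_threads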

How to communicate with an external Python process? (not a subprocess)

The other Python process was launched externally; only its process identifier is known. This external process is not a subprocess launched from a Python process. The path to both processes could be the same. How do I communicate with that process? How can I easily send Python data types between these processes?
Best regards,
Czarek
If you can accept communicating between the processes over a TCP connection, you could use ZeroMQ: http://zeromq.org/
See these threads for examples:
interprocess communication in python
how to communicate two separate python processes?
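A minimal pyzmq sketch of that suggestion: two independently launched processes talk over TCP, and send_pyobj/recv_pyobj (which pickle under the hood) carry Python data types. The port number is an arbitrary choice:

    # server.py -- run in one independently launched process
    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://127.0.0.1:5555")  # the port is an arbitrary choice
    while True:
        obj = sock.recv_pyobj()        # any picklable Python object
        sock.send_pyobj({"echo": obj})

    # client.py -- run in the other process, knowing only the endpoint
    import zmq

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect("tcp://127.0.0.1:5555")
    sock.send_pyobj([1, "two", 3.0])
    print(sock.recv_pyobj())

Because the processes rendezvous on a TCP endpoint rather than a parent/child pipe, it makes no difference that neither one launched the other.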

Strategies or support for making parts of a Twisted application reloadable?

I've written a specialized JSON-RPC server and have just started working my way up into the application logic, and I'm finding it a tad annoying to constantly have to stop/restart the server to make certain changes.
Previously I had a handler that ran at intervals, compared module modification timestamps against the previous check, and reloaded modules as needed. Unfortunately I don't trust it to work correctly now.
Is there a way for a reactor to stop and restart itself in a manner similar to Paster's Reloadable HTTPServer?
Shipped with Twisted is the twisted.python.rebuild module, so that is probably a good place to start.
Also see this SO question: Checking for code changes in all imported python modules
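A tiny sketch of how rebuild is used; 'myapp.handlers' is a hypothetical module name:

    from twisted.python.rebuild import rebuild
    import myapp.handlers

    rebuild(myapp.handlers)  # re-import the module in place, without restarting the reactor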
You could write something similar to Paster's reloader, which would work like this:
Start your main function, and before importing / using any Twisted code, fork/spawn a subprocess.
In the subprocess, run your Twisted application.
In the main process, run your code that checks for changed files. If code has changed, reload the subprocess.
However, the issue here is that, unlike a development webserver, most Twisted apps have a lot more state, and flat-out killing/restarting the process is a bad idea; you may lose that state.
There is a way to do it cleanly:
When you spawn the Twisted app, use subprocess.Popen() or similar to get stdin/stdout pipes. Then, in your subprocess, use the Twisted reactor to listen on stdin (there is code for this in Twisted: see twisted.internet.stdio, which lets you have a Protocol that talks to a stdio transport in the usual Twisted non-blocking manner).
Finally, when you decide it's time to reload, write something to the stdin of the subprocess telling it to shut down. Your Twisted code can then respond and shut down gracefully. Once it has cleanly quit, your master process can just spawn it again.
(Alternatively, you can use signals to achieve this, but that may not be OS-portable.)
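A hedged sketch of that clean-shutdown channel; the "shutdown" token, file names, and placement of the cleanup logic are illustrative choices:

    # child.py -- the Twisted app; listens on stdin for a shutdown command
    from twisted.internet import reactor, stdio
    from twisted.protocols import basic

    class ControlProtocol(basic.LineReceiver):
        delimiter = b"\n"

        def lineReceived(self, line):
            if line.strip() == b"shutdown":
                # Clean up application state here, then stop the reactor.
                reactor.stop()

    stdio.StandardIO(ControlProtocol())
    reactor.run()

    # parent.py -- watches for changes and cycles the child
    import subprocess

    proc = subprocess.Popen(["python", "child.py"], stdin=subprocess.PIPE)
    # ... detect a source change, then ask the child to shut down cleanly:
    proc.stdin.write(b"shutdown\n")
    proc.stdin.flush()
    proc.wait()
    # spawn the subprocess again here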

Can I send SIGINT to a Python subprocess on Windows?

I've got a Python script managing a gdb process on Windows, and I need to be able to send a SIGINT to the spawned process in order to halt the target process (managed by gdb).
It appears that only SIGTERM is available in Win32, but clearly if I run gdb from the console and press Ctrl+C, it thinks it's receiving a SIGINT. Is there a way I can fake this so that the functionality is available on all platforms?
(I am using the subprocess module, and python 2.5/2.6)
Windows doesn't have the Unix signals IPC mechanism.
I would look at sending a Ctrl-C to the gdb process.
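One way to approximate that (an assumption on top of the original answer, and it requires Python 2.7+ rather than the asker's 2.5/2.6): start gdb in its own process group and deliver CTRL_BREAK_EVENT, since CTRL_C_EVENT is unreliable across process groups. gdb generally treats the console break like an interrupt:

    import signal
    import subprocess

    # Start gdb in its own process group so the break event does not
    # also hit the managing script's console.
    proc = subprocess.Popen(
        ["gdb", "--args", "target.exe"],
        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
    )
    # Later, to interrupt the target that gdb is running:
    proc.send_signal(signal.CTRL_BREAK_EVENT)

Both constants are Windows-only; on POSIX platforms the same proc.send_signal(signal.SIGINT) call works directly, so the platform difference can be hidden behind one helper.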
