I executed a Python program and ran the command "ps -mH -p 10934" (the number was that process's ID), and found only one thread in the process. A Java process, by contrast, starts more than 20 threads, such as the GC daemon, the management daemon, and so on.
Why does a Python process have only one thread? How does Python do its garbage collection?
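As a rough illustration (assuming CPython): memory is reclaimed by reference counting plus a generational cycle collector that runs synchronously inside the interpreter, so no dedicated GC threads show up:

import gc
import threading

print(threading.active_count())  # typically 1: CPython runs no background GC threads
print(gc.get_threshold())        # thresholds of the generational cycle collector
gc.collect()                     # the cycle collector can also be run on demand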
I would like to keep another Python process (a Python function) running even after the main process has completed. Can this be done without using subprocess?
Currently, if I run a non-daemonic process, the main process automatically joins it (i.e. waits for it) before exiting.
If I set the process to be a daemon, then the child process is terminated once the main process completes.
How do I have another process keep running in the background, even after the main process is complete?
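A minimal sketch of the two behaviors described above (assuming the multiprocessing module; the worker function is a placeholder):

import multiprocessing
import time

def worker():
    time.sleep(5)
    print("child finished")

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.daemon = False   # default: the main process joins p at exit and waits for it
    # p.daemon = True  # p would instead be terminated when the main process exits
    p.start()
    print("main process exiting")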
I am using pexpect to run a start command on an in-house application. The start command starts a number of processes. While the processes are starting one by one in the background everything looks good, but when the 'start' process finishes and the pexpect process ends, the processes that were started also die.
import pexpect

log = open('foo.log', 'wb')   # stand-in for the original, unshown log object
child = pexpect.spawn('foo start')
child.logfile = log
child.wait()
For this scenario, I can use nohup and it works as expected.
child = pexpect.spawn('bash -c "nohup foo start"')
However, there is also an installer for the same in-house application with the same issue: part of the installation starts the processes. The installer is interactive and requires input, so nohup will not work.
How can I prevent the processes that are started by the installer from dying when the pexpect session ends?
Note: The start and install processes work fine when executed from a standard terminal session. They are not tied to the session in any way.
I couldn't find much about it in the documentation, but including the ignore_sighup=True option in the spawn command fixed my issue. The ignored-SIGHUP disposition is inherited across fork and exec, so the processes the installer starts also survive the hangup signal sent when the pexpect pty closes.
child = pexpect.spawn('foo start', ignore_sighup=True)
I am working on Unix systems and have a GUI application that in turn spawns a couple of other processes. These processes are required to run independently of the parent process (the GUI application). Basically, when the GUI crashes or is closed, the child processes should keep running.
One approach could be to daemonize the processes. Here is a useful answer that runs a process in the background through double forking.
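Roughly, the double-fork approach looks like this (a minimal sketch with error handling omitted; spawn_detached and target are hypothetical names):

import os

def spawn_detached(target):
    # First fork: the calling (GUI) process reaps the short-lived
    # intermediate child and carries on.
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)
        return
    os.setsid()    # new session: detach from the controlling terminal
    # Second fork: the intermediate child exits, orphaning the grandchild,
    # which is reparented to init and can never reacquire the terminal.
    if os.fork() > 0:
        os._exit(0)
    target()       # runs independently of the GUI
    os._exit(0)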
What I would like to ask is whether it is possible to get the same result using a terminal multiplexer like tmux or GNU Screen. I am not sure how these terminal multiplexers create and maintain shell sessions, but the basic idea would be for the GUI application to use tmux or screen to create a shell session and run the child processes within it. Would that make the child processes independent of the parent process?
Thanks in advance!
It should work if your GUI runs something like this:
tmux new-session -s test -d vim
which creates a detached session named "test", running the "vim" command. The session can then be attached with:
tmux attach-session -t test
Because the processes run under the tmux server rather than under the GUI, they keep running even if the GUI crashes or exits.
I have a Linux daemon (based on the Python module python-daemon) that needs to spawn two processes (consider a producer and a consumer) from the multiprocessing module to handle some concurrent I/O (the producer reads from an input stream and the consumer uploads the data using Python requests).
As per the Python docs (https://docs.python.org/2/library/multiprocessing.html), daemonic processes are not allowed to start child processes. How can I handle this? Are there any documents or examples of this approach? Please advise.
Context:
I have tried the threading module, but due to the GIL, the consumer rarely gets a chance to execute. I also looked into tornado and gevent, but either would require rewriting a lot of the code.
I think there is some confusion here. The documentation says only that a process created from Python and marked as a daemon cannot create child processes. But your python-daemon process is a normal Linux daemon.
Linux daemon: a process running in the background (the python-daemon library creates such a process); these can have child processes.
Only a daemonic process created with the multiprocessing library (daemon=True) cannot create child processes.
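A minimal sketch of that combination (assuming python-daemon's DaemonContext API; the producer and consumer bodies are placeholders):

import daemon
import multiprocessing

def producer(queue):
    queue.put("data")    # read from the input stream in real code

def consumer(queue):
    print(queue.get())   # upload the data with requests in real code

# DaemonContext turns this script into a normal Linux daemon; the
# multiprocessing children below are non-daemonic (daemon=False is the
# default), so starting them from inside the context is allowed.
with daemon.DaemonContext():
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=producer, args=(q,))
    c = multiprocessing.Process(target=consumer, args=(q,))
    p.start()
    c.start()
    p.join()
    c.join()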
I need to spawn off a process in Python that allows the calling process to exit while the child is still running. What is an effective way to do this?
Note: I'm running in a UNIX environment.
Terminating the parent process does not terminate child processes in Unix-like operating systems, so you don't need to do anything special. Just start your subprocesses with subprocess.Popen and terminate the main process. The orphaned processes will automatically be adopted by init.
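For example (a minimal sketch, with sleep standing in for the real child command):

import subprocess

# Start a long-running child; the parent can exit right away.
# Once orphaned, the child is adopted by init and keeps running.
subprocess.Popen(["sleep", "60"])
print("parent exiting; child keeps running")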