Using parallel python 1.6.4, I spawn a subprocess.Popen command on a remote server. For whatever reason, the command isn't completing in a timely manner, i.e., within the socket_timeout I've set. In this case, I expected parallel python to fail, kill the remote process, and maybe raise an exception. Instead, the long process keeps running, and the ppserver quietly spawns another one!
How can I configure ppserver to fail?
Short of that, I suppose I have to set a timer and destroy the job_server to make it close out and clean up the bad process?
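Something like the following is what I have in mind: a plain watchdog timer that destroys the job server if the run takes too long (untested sketch; the server address, timeout, and function names are placeholders, and destroy() tears down the whole server, not just the stuck job).

import threading
import pp

job_server = pp.Server(ppservers=("remote-host:60000",))

def kill_job_server():
    # destroy() kills the ppworkers and closes the connection; everything
    # submitted to this server goes away, not just the misbehaving job.
    job_server.destroy()

watchdog = threading.Timer(120, kill_job_server)   # overall deadline in seconds
watchdog.start()

job = job_server.submit(long_running_function, (some_arg,))   # placeholder job
result = job()      # blocks here; if the watchdog fires first, the server is torn down
watchdog.cancel()   # finished in time, nothing to kill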
Related
I'm developing a python script that runs as a daemon in a linux environment. If and when I need to issue a shutdown/restart operation to the device, I want to do some cleanup and log data to a file to persist it through the shutdown.
I've looked around regarding Linux shutdown and I can't find anything detailing which signal, if any, is sent to applications at shutdown/restart time. I assumed SIGTERM, but my tests (which are not very good tests) seem to disagree with this.
When Linux is shutting down (and this is slightly dependent on what kind of init scripts you are using), it first sends SIGTERM to all processes to shut them down, and then, I believe, it sends SIGKILL to force them to close if they don't respond to SIGTERM.
Please note, however, that your script may not receive the SIGTERM - init may send this signal to the shell it's running in instead and it could kill python without actually passing the signal on to your script.
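If the SIGTERM does reach your script, a minimal handler for the cleanup-and-log case might look like this (the log path is just a placeholder):

import signal
import sys

def handle_shutdown(signum, frame):
    # Persist whatever you need before the system finishes shutting down.
    with open("/var/tmp/mydaemon-shutdown.log", "a") as f:   # placeholder path
        f.write("received signal %d, cleaning up\n" % signum)
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_shutdown)

# ... daemon main loop ...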
Hope this helps!
In a project I am working on, there is some code that starts up a long-running process using sudo:
subprocess.Popen(['sudo', '/usr/bin/somecommand', ...])
I would like to clean up this process when the parent exits. Currently, the subprocess keeps running when the parent exits (re-attached to init, of course).
I am not sure of the best solution to this problem. The code is limited to only running certain commands via sudo, and granting blanket authority to run sudo kill would be sketchy at best.
I don't have an open pipe to the child process that I can close (the child process is not reading from stdin), and I am not able to modify the code of the child process.
Are there any other mechanisms that might work in this situation?
First of all, I'll just answer the question, even though I don't think it's a good thing to do, because it's what you asked for. I would wrap that child process in a small program that listens on stdin. You can then sudo that program; it can run the command directly (it is already privileged, so no further sudo is needed), it knows the child's pid, and it has the rights needed to kill the process when you ask it to through stdin.
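A minimal sketch of such a wrapper (the command and file names are illustrative): the parent starts it with sudo and keeps a pipe to its stdin; writing 'stop' to that pipe, or simply closing it when the parent exits, makes the wrapper terminate the child.

#!/usr/bin/env python
# wrapper.py -- started by the parent as: sudo ./wrapper.py
import subprocess
import sys

# Already running as root here, so the command itself needs no sudo.
child = subprocess.Popen(['/usr/bin/somecommand'])

for line in sys.stdin:
    if line.strip() == 'stop':
        break
# Falling out of the loop also covers EOF on stdin, i.e. the parent went away.
child.terminate()
child.wait()

On the parent side you would then do something like subprocess.Popen(['sudo', './wrapper.py'], stdin=subprocess.PIPE) and write 'stop' to, or just close, that pipe when it is time to clean up.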
However, such a situation generally means passwordless sudo and poor security. The most common technique is to lower your program's privileges, not to elevate them. In that case you create a runner program that is started by the superuser; it starts your main program with lowered privileges and listens on a pipe for communication. When a command needs to be run, your main program tells the runner, and the runner does the job. When the command needs to be terminated, you again tell the runner via the pipe.
The common rules are:
If you need superuser rights, you should give them to the very parent process.
If a child process needs to do a privileged operation, it asks the top-level process to do it on its behalf.
The top-level process should be kept as small as possible and do as little as possible. The larger it is, the more holes in security it creates.
That's what many applications do. The first example that comes to mind is the Apache web server (at least on *nix), which has a small top-level program and pre-forked worker processes that do not run as root/wheel/whatever-else-is-the-superuser-username.
This will raise OSError: [Errno 1] Operation not permitted on the last line:
p = subprocess.Popen(['sudo', '/usr/bin/somecommand', ...], stdout=subprocess.PIPE)
print p.stdout.read()
p.terminate()
Assuming sudo will not ask for a password, one workaround is to make a shell script which calls sudo …
#!/bin/sh
sudo /usr/bin/somecommand
… and then do this in Python:
p = subprocess.Popen("/path/to/script.sh", cwd="/path/to", stdout=subprocess.PIPE)
print p.stdout.read()
p.terminate()
Updated post:
I have a python web application running on a port. It is used to monitor some other processes, and one of its features is to allow users to restart their own processes. The restart is done by invoking a bash script, which proceeds to restart those processes and run them in the background.
The problem is, whenever I kill off the python web application after I have used it to restart any user's processes, those processes will take over the port used by the python web application in a round-robin fashion, so I am unable to restart the python web application because the port is already bound. As a result, I must kill off the processes involved in the restart until nothing occupies the port the python web application uses.
Everything is ok except for those processes occupying the port. That is really undesirable.
Processes that may be restarted:
redis-server
newrelic-admin run-program (which spawns another web application)
a python worker process
UPDATE (6 June 2013): I have managed to solve this problem. Look at my answer below.
Original Post:
I have a python web application running on a port. This python program has a function that calls a bash script. The bash script spawns a few background processes, then exits.
The problem is, whenever I kill the python program, the background processes spawned by the bash script will take over and occupy that same port.
Specifically the subprocesses are:
a redis server (with daemonize = true in the configuration file)
newrelic-admin run-program (spawns a web application)
a python worker process
Update 2: I've tried running these with nohup. Only the python worker process doesn't attempt to take over the port after I kill the python web application. The redis server and newrelic-admin still do.
I observed this problem when I was using subprocess.call in the python program to run the bash script. I've tried a double fork method in the python program before running the bash script, but it results in the same problem.
How can I prevent any processes spawned from the bash script from taking over the port?
Thank you.
Update: My intention is that those processes spawned by the bash script should continue running if the python application is killed off. Currently, they do continue running after I kill the python application. The problem is that, when I do, the processes spawned by the bash script start to take over the port in a round-robin fashion.
Update 3: Based on the output I see from 'pstree' and 'ps -axf', processes 1 and 2 (the redis server and the web app spawned by newrelic-admin run-program) are not child processes of the python web application. This makes it even weirder that they take over the port the python web application occupies when I kill it... Does anyone know why?
Just some background on the methods I've tried to solve my above problem, before I go on to the answer proper:
subprocess.call
subprocess.Popen
execve
the double fork method along with one of the above (http://code.activestate.com/recipes/278731-creating-a-daemon-the-python-way/)
By the way, none of the above worked for me. Whenever I killed off the web application that executes the bash script (which in turn spawns some background processes, which we shall denote as Q from now on), the processes in Q would, in round-robin fashion, take over the port occupied by the web application, so I had to kill them one by one before I could restart my web application.
After many days of living with this problem and moving on to other parts of my project, I thought back to some random Stack Overflow posts and other articles on the Internet, and recalled from my own experience ssh'ing into a remote machine, starting a detached screen session, logging out, and logging in again some time later to discover the screen session still alive.
So I thought, hey, what the heck, nothing has worked so far, so I might as well try using screen to see if it can solve my problem. And to my great surprise and joy, it did! So I am posting this solution in the hope of helping those who are facing the same issue.
In the bash script, I simply started each process inside a named, detached screen session. For instance, for the redis application, I might start it like this:
screen -dmS redisScreenName redis-server redis.conf
So those processes keep running in the detached screen sessions they were started in. In this case, I did not daemonize the redis process.
To kill the screen process, I used:
screen -S redisScreenName -X quit
However, this does not kill the redis-server. So I had to kill it separately.
Now, in the python web application, I can just use subprocess.call to execute the bash script, which will spawn detached screen sessions (using 'screen -dmS') which run the processes I want to spawn. And when I kill off the python web application, none of the spawned processes take over its port. Everything works smoothly.
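For completeness, the Python side of this ends up being nothing more than (the script path is illustrative):

import subprocess

# The bash script only runs a few `screen -dmS <name> <command>` lines and
# exits, so this call returns quickly and the screen sessions outlive the web app.
subprocess.call(['/path/to/start_processes.sh'])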
I've been building a performance test suite to exercise a server. Right now I run this by hand but I want to automate it. On the target server I run a python script that logs server metrics and halts when I hit enter. On the tester machine I run a bash script that iterates over JMeter tests, setting timestamps and naming logs and executing the tests.
I want to tie these together so that the bash script drives the whole process, but I am not sure how best to do this. I can start my python script via ssh, but how to halt it when a test is done? If I can do it in ssh then I don't need to mess with existing configuration and that is a big win. The python script is quite simple and I don't mind rewriting it if that helps.
The easiest solution is probably to make the Python script respond to signals. Of course, you can just SIGKILL the script if it doesn't require any cleanup, but having the script actually handle a shutdown request seems cleaner. SIGHUP is a popular choice; see the signal module docs.
You can send a signal with the kill command so there is no problem sending the signal through ssh, provided you know the pid of the script. The usual solution to this problem is to put the pid in a file in /var/run when you start the script up. (If you've got a Debian/Ubuntu system, you'll probably find that you have the start-stop-daemon utility, which will do a lot of the grunt work here.)
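Writing the pid file is a couple of lines in the script (the path below is just the usual convention; use one the script can actually write to), and the bash side can then run kill over ssh against whatever pid it reads from that file.

import os

PIDFILE = "/var/run/perfmon.pid"   # placeholder; pick a path the script may write

with open(PIDFILE, "w") as f:
    f.write(str(os.getpid()))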
Another approach, which is a bit more code-intensive, is to create a fifo (named pipe) in some known location, and use it basically like you are currently using stdin: the server waits for input from the pipe, and when it gets something it recognizes as a command, it executes the command ("quit", for example). That might be overkill for your purpose, but it has the advantage of being a more articulated communications channel than a single hammer-hit.
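A rough sketch of the fifo approach on the logging side (the path and the 'quit' command are arbitrary choices; the other end just has to echo into the same path, e.g. over ssh):

import os

FIFO_PATH = "/tmp/perfmon.fifo"   # placeholder; both sides must agree on it

if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

done = False
while not done:
    # open() blocks until the other end writes something, e.g.:
    #   echo quit > /tmp/perfmon.fifo
    with open(FIFO_PATH) as fifo:
        for line in fifo:
            if line.strip() == "quit":
                done = True
                break

# stop the metric logging, flush files, and exit here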
I have a python script that does some jobs. I use multiprocessing.Pool to have a few workers do some commands for me.
My problem is terminating the script. When I press Ctrl-C, I would like every worker to immediately clean up its experiment (which is some custom code, or actually even a subprocess command, not just releasing locks or memory) and stop.
I know that I can catch Ctrl-C with a signal handler. How can I make all currently running workers of a multiprocessing.Pool terminate while still running their cleanup command?
Pool.terminate() will not be useful, because the processes will be terminated without cleaning up.
How about trying the atexit standard module?
It allows you to register a function that will be executed upon termination.
Are you working on Unix? If so, why not catch SIGTERM in the subprocesses? In fact, the documentation of Process.terminate() reads:
Terminate the process. On Unix this is done using the SIGTERM signal
(I have not tested this.)
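A rough, untested sketch of what catching SIGTERM in the workers could look like (work() and the cleanup are placeholders, and the details of how Ctrl-C reaches pool workers vary between platforms and Python versions):

import signal
import sys
import time
from multiprocessing import Pool

def cleanup_and_exit(signum, frame):
    # Replace this with the real experiment cleanup or subprocess teardown.
    print "worker caught SIGTERM, cleaning up"
    sys.exit(0)

def init_worker():
    # Let the parent decide what to do about Ctrl-C; workers react to SIGTERM only.
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    signal.signal(signal.SIGTERM, cleanup_and_exit)

def work(item):          # placeholder task
    time.sleep(30)
    return item

if __name__ == "__main__":
    pool = Pool(4, initializer=init_worker)
    result = pool.map_async(work, range(8))
    try:
        result.get(9999)   # a timeout keeps get() interruptible by Ctrl-C
    except KeyboardInterrupt:
        # terminate() sends SIGTERM to each worker on Unix, so the handler
        # above gets a chance to run its cleanup before the worker exits.
        pool.terminate()
        pool.join()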