I am trying to kill a process tree using this shell command:
kill -TERM -- -3333
so in python I use subprocess:
subprocess.call(['kill', '-TERM', '--', '-3333'])
the process is terminated as expected but I get this message:
ERROR: garbage process ID "--".
Usage:
kill pid ... Send SIGTERM to every process listed.
kill signal pid ... Send a signal to every process listed.
kill -s signal pid ... Send a signal to every process listed.
kill -l List all signal names.
kill -L List all signal names in a nice table.
kill -l signal Convert between signal numbers and names.
Why do I get this message and what am I doing wrong?
I am using Python 2.6.5 on Ubuntu 10.04.
You are passing the kill command an argument it doesn't recognise. You could simply drop the --:
subprocess.call(['kill', '-TERM', '-3333'])
You should probably pass the PID without the leading dash as well: if -- is not supported, a negative PID (meaning a process group) probably isn't either, at which point you'd be signalling just the single process.
Note that you are not executing this through a shell. While your shell probably has its own kill command implementation, Python asks the OS to run the first kill binary executable found on the PATH instead. The shell built-in may accept --, but that's not the command you are executing here.
If you must use the shell built-in, then you'll have to set shell=True and pass in a string command line:
subprocess.call('kill -TERM -- -3333', shell=True)
This uses /bin/sh; you can set a different shell to run the command through with the executable argument:
subprocess.call('kill -TERM -- -3333', shell=True, executable='/bin/bash')
Last but not least, you may not need the kill command at all. Python can send signals directly with the os.kill() function:
import os, signal
os.kill(3333, signal.SIGTERM)
and the os.killpg() function can send a signal to a process group:
import os, signal
os.killpg(3333, signal.SIGTERM)
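To tie this back to subprocess: if you start the child in its own process group, you can later signal the whole group from Python without shelling out to kill at all. A minimal sketch, with sleep standing in for a real worker command:

```python
import os
import signal
import subprocess

# Start the child in a new session so it gets its own process group;
# 'sleep 60' here is just a placeholder for the real command.
p = subprocess.Popen(['sleep', '60'], preexec_fn=os.setsid)

pgid = os.getpgid(p.pid)
os.killpg(pgid, signal.SIGTERM)  # signal every process in the group
p.wait()
```

On Python 3.2 and later you can pass start_new_session=True instead of preexec_fn=os.setsid.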
Objective:
On MacOS, I want to run the powermetrics utility from a python script in the background.
The python script will continue to execute some other work while the powermetrics utility logs the power statistics in the background.
When the work is done, I want to stop the powermetrics utility running in the background.
Problem
powermetrics requires sudo privileges to run.
I've looked at previous Stack Overflow answers and haven't found a way to kill a sudo-started process cleanly, i.e. powermetrics continues to run and append to the output file (specified with the -u parameter). Other SO posts:
Killing sudo-started subprocess in python
Python how to kill root subprocess
Code:
Here's my current test code
cmd = "sudo powermetrics -i 1000 --samplers cpu_power,gpu_power -a -n 20 --hide-cpu-duty-cycle --show-usage-summary --show-extra-power-info -u " + outputFile
pr = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE,
                      stderr=subprocess.PIPE, universal_newlines=True)
print("Process spawned with PID: %s" % pr.pid)
pgid = os.getpgid(pr.pid)
print("Process spawned with Group ID: %s" % pgid)
time.sleep(5)
os.system("sudo pkill -9 -P " + str(pgid))
I see several issues:
A zombie process is just an entry in the process table (no memory, no CPU), but you wrote that the process continues to run.
The unconditional SIGKILL (-9) signal is for "runaway" processes, i.e. when regular ways to stop them have failed. Read the docs for the process you want to stop. If unsure, send the default SIGTERM first, wait a few seconds for cleanup, and only if the process has not stopped, kill it with SIGKILL.
The -P option is documented as "Only match processes whose parent process ID is listed", but your process is using the group ID, not the process ID. Did you mean -G instead?
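Putting the SIGTERM-then-SIGKILL advice into practice, here is a small sketch of a helper that asks politely first and escalates only if the process survives. The sleep command stands in for powermetrics, and the timeout value is an arbitrary choice:

```python
import subprocess
import time

def stop_gracefully(proc, timeout=5.0):
    # Ask politely first with SIGTERM
    proc.terminate()
    deadline = time.time() + timeout
    while time.time() < deadline:
        if proc.poll() is not None:  # the process has exited
            return proc.returncode
        time.sleep(0.1)
    proc.kill()  # SIGKILL only as a last resort
    return proc.wait()

p = subprocess.Popen(['sleep', '60'])
rc = stop_gracefully(p)
```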
I'm writing a stop routine for a start-up service script:
do_stop()
{
rm -f $PIDFILE
pkill -f $DAEMON || return 1
return 0
}
The problem is that pkill (same with killall) also matches the process representing the script itself and it basically terminates itself. How to fix that?
You can explicitly filter out the current PID from the results:
kill $(pgrep -f $DAEMON | grep -v ^$$\$)
To correctly use the -f flag, be sure to supply the whole path to the daemon rather than just a substring. That will prevent you from killing the script (and eliminate the need for the above grep) and also from killing all other system processes that happen to share the daemon's name.
pkill -f accepts a full-blown regex, so rather than pkill -f $DAEMON you should use:
pkill -f "^$DAEMON"
to make sure a process is killed only when its command line starts with the given daemon name.
A better solution is to save the PID (process ID) in a file when you start the process. To stop the process, just read the file to get the PID to be stopped/killed.
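A minimal sketch of that PID-file approach in Python; the file path is hypothetical, and a real service script would also handle a missing or stale file:

```python
import os
import signal

PIDFILE = '/tmp/mydaemon.pid'  # hypothetical location

def write_pidfile(pid):
    # Record the daemon's PID at startup
    with open(PIDFILE, 'w') as f:
        f.write(str(pid))

def stop_from_pidfile(sig=signal.SIGTERM):
    # Read the saved PID and signal exactly that process
    with open(PIDFILE) as f:
        pid = int(f.read().strip())
    os.kill(pid, sig)
    os.remove(PIDFILE)
```

Because the signal goes to one recorded PID, there is no risk of matching the stop script itself.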
Judging by your question, you're not hard over on using pgrep and pkill, so here are some other options commonly used.
1) Use killproc from /etc/init.d/functions or /lib/lsb/init-functions (whichever is appropriate for your distribution and version of Linux). If you're writing a service script, you may already be including this file if you used one of the other services as an example.
Usage: killproc [-p pidfile] [ -d delay] {program} [-signal]
The main advantage to using this is that it sends SIGTERM, waits to see if the process terminates and sends SIGKILL only if necessary.
2) You can also use the secret sauce of killproc, which is to find the process IDs to kill using pidof, which has a -o option for excluding a particular process. The argument for -o could be $$, the current process ID, or %PPID, a special variable that pidof interprets as the script calling pidof. Finally, if the daemon is a script, you'll need the -x option so that you're killing the script by its name rather than killing bash or python.
for pid in $(pidof -o %PPID -x progd); do
kill -TERM $pid
done
You can see an example of this in the article Bash: How to check if your script is already running
I have the following function that is used to execute system commands in Python:
def engage_command(command=None):
    #os.system(command)
    return os.popen(command).read()
I am using the os module instead of the subprocess module because I am dealing with a single environment in which I am interacting with many environment variables etc.
How can I use Bash with this type of function instead of the default sh shell?
output = subprocess.check_output(command, shell=True, executable='/bin/bash')
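Applied to the function in the question, a sketch that keeps the same shape but runs through bash; note that check_output raises an exception on a non-zero exit status, unlike os.popen():

```python
import subprocess

def engage_command(command=None):
    # shell=True runs the string through a shell;
    # executable makes that shell bash instead of /bin/sh
    return subprocess.check_output(command, shell=True,
                                   executable='/bin/bash',
                                   universal_newlines=True)
```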
os.popen() is implemented in terms of subprocess module.
I am dealing with a single environment in which I am interacting with many environment variables etc.
each os.popen(cmd) call creates a new /bin/sh process, to run cmd shell command.
Perhaps, it is not obvious from the os.popen() documentation that says:
Open a pipe to or from command cmd
"open a pipe" does not communicate clearly: "start a new shell process with a redirected standard input or output" -- your could report a documentation issue.
If there is any doubt; the source confirms that each successful os.popen() call creates a new child process
the child can't modify its parent process environment (normally).
Consider:
import os
#XXX BROKEN: it won't work as you expect
print(os.popen("export VAR=value; echo ==$VAR==").read())
print(os.popen("echo ==$VAR==").read())
Output:
==value==
====
==== means that $VAR is empty in the second command because the second command runs in a different /bin/sh process from the first one.
To run several bash commands inside a single process, put them in a script or pass as a string:
output = check_output("\n".join(commands), shell=True, executable='/bin/bash')
Example:
#!/usr/bin/env python
from subprocess import check_output
output = check_output("""
export VAR=value; echo ==$VAR==
echo ==$VAR==
""", shell=True, executable='/bin/bash')
print(output.decode())
Output:
==value==
==value==
Note: $VAR is not empty here.
If you need to generate new commands dynamically (based on the output of previous commands), that creates several issues, some of which can be fixed using the pexpect module: code example.
I try to send command from python shell to Ubuntu OS to define process existed on particular port and kill it:
port = 8000
os.system("netstat -lpn | grep %s" % port)
Output:
tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN 22000/python
Then:
os.system("kill -SIGTERM 22000")
but got following trace
sh: 1: kill: Illegal option -S
For some reason the command cannot be passed to the OS with the full signal name -SIGTERM, only with -S. I can kill this process directly from a terminal without any problem, so it seems to be a Python or os issue... How can I run the kill command from Python?
Any help is appreciated
os.system uses sh to execute the command, not bash which you get in a terminal. The kill builtin in sh requires giving the signal names without the SIG prefix. Change your os.system command line to kill -TERM 22000.
[EDIT] As #DJanssens suggested, using os.kill is a better option than calling the shell for such a simple thing.
You could try using
import signal
os.kill(process.pid, signal.SIGKILL)
documentation can be found here.
you could also use signal.CTRL_C_EVENT, which corresponds to the CTRL+C keystroke event (note that it is available on Windows only).
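For completeness, a sketch of extracting the PID from a netstat line like the one above and signalling it directly, so no shell-specific kill is involved; the line format is assumed to match the output shown in the question:

```python
import os
import signal

def pid_from_netstat_line(line):
    # The last column looks like "22000/python";
    # take the part before the slash as the PID
    return int(line.split()[-1].split('/')[0])

line = 'tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN 22000/python'
pid = pid_from_netstat_line(line)
# os.kill(pid, signal.SIGTERM)  # commented out: don't signal a real PID here
```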
I can't figure out how to close a bash shell that was started via Popen. I'm on windows, and trying to automate some ssh stuff. This is much easier to do via the bash shell that comes with git, and so I'm invoking it via Popen in the following manner:
p = Popen('"my/windows/path/to/bash.exe" | git clone or other commands')
p.wait()
The problem is that after bash runs the commands I pipe into it, it doesn't close. It stays open causing my wait to block indefinitely.
I've tried stringing an "exit" command at the end, but it doesn't work.
p = Popen('"my/windows/path/to/bash.exe" | git clone or other commands && exit')
p.wait()
But still, infinite blocking on the wait. After it finishes its task, it just sits at a bash prompt doing nothing. How do I force it to close?
Try Popen.terminate(); it might help kill your process. If you only have synchronous commands to run, use them directly with subprocess.call() instead.
for example
import subprocess
subprocess.call(["c:\\program files (x86)\\git\\bin\\git.exe",
"clone",
"repository",
"c:\\repository"])
0
Following is an example of using a pipe, but this is a little overcomplicated for most use cases and makes sense only if you are talking to a service that needs interaction (at least in my opinion).
p = subprocess.Popen(["c:\\program files (x86)\\git\\bin\\git.exe",
"clone",
"repository",
"c:\\repository"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
print p.stderr.read()
fatal: destination path 'c:\repository' already exists and is not an empty directory.
print p.wait()
128
This can be applied to ssh as well
To kill the process tree, you could use taskkill command on Windows:
Popen("TASKKILL /F /PID {pid} /T".format(pid=p.pid))
As #Charles Duffy said, your bash usage is incorrect.
To run a command using bash, use -c parameter:
p = Popen([r'c:\path\to\bash.exe', '-c', 'git clone repo'])
In simple cases, you could use subprocess.check_call instead of Popen().wait():
import subprocess
subprocess.check_call([r'c:\path\to\bash.exe', '-c', 'git clone repo'])
The latter command raises an exception if bash process returns non-zero status (it indicates an error).
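To illustrate that behaviour, a small sketch using /bin/false as a stand-in for a command that always fails; the raised CalledProcessError carries the exit status:

```python
import subprocess

try:
    subprocess.check_call(['/bin/false'])
except subprocess.CalledProcessError as e:
    status = e.returncode  # the non-zero exit status (1 for /bin/false)
```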