Terminate a gnome-terminal opened with subprocess - python

Using subprocess and the command 'gnome-terminal -e bash' I can open up a gnome-terminal as desired (and have it stick around). This is done with either
p=subprocess.Popen(['gnome-terminal', '-e', 'bash'])
or
p=subprocess.Popen(['gnome-terminal -e bash'], shell=True)
but I cannot close the terminal using p.terminate() or p.kill(). From what I understand, this is a little trickier when using shell=True but I did not expect to run into problems otherwise.

To terminate a terminal and its children (in the same process group):
#!/usr/bin/env python
import os
import signal
import subprocess
p = subprocess.Popen(['gnome-terminal', '--disable-factory', '-e', 'bash'],
                     preexec_fn=os.setpgrp)
# do something here...
os.killpg(p.pid, signal.SIGINT)
--disable-factory is used to avoid re-using an active terminal so that we can kill the newly created terminal via the subprocess handle.
os.setpgrp puts gnome-terminal in its own process group so that os.killpg() can be used to send a signal to the whole group.

You should be able to do this workaround:
get the process id
kill the process
Working Solution: Close gnome-terminal-server
As suggested by @j-f-sebastian in the comments, gnome-terminal
just sends a request (to gnome-terminal-server) to start a new terminal and exits immediately -- there is nothing to kill; the process is already dead (and the newly created processes are not descendants: the new bash process is a child of gnome-terminal-server, not of gnome-terminal).
import subprocess
import os, signal
import time
p=subprocess.Popen(['gnome-terminal -e bash'], stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
print "this is going to be closed in 3 sec"
time.sleep(3)
# this returns the pids of all running bash instances as a single string
bash_pids = subprocess.check_output(["pidof", "bash"])
# take the most recently opened bash instance (pidof lists the newest pid first)
pid_to_kill = bash_pids.split(" ")[0]
os.kill(int(pid_to_kill), signal.SIGTERM)
My solution follows this logic:
run gnome-terminal
get the process id of the most recently opened bash instance
kill that process id
Broken solutions
These solutions might work in simpler cases:
Solution 1
import subprocess
import os, signal
p=subprocess.Popen(['gnome-terminal -e bash'], shell=True)
p_pid = p.pid # get the process id
os.kill(p_pid, signal.SIGKILL)
To choose an appropriate signal to pass instead of SIGKILL, you can refer to the signal documentation. E.g.
On Windows, signal() can only be called with SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, or SIGTERM
For Unix there is a fairly extensive list of signals to choose from.
For a better overview of os.kill, you can refer to its documentation.
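To illustrate, a gentler pattern is to send SIGTERM first and only fall back to SIGKILL if the process survives. A minimal sketch (Unix only, reusing p_pid from the snippet above):
import os, signal, time

os.kill(p_pid, signal.SIGTERM)        # polite request to exit
time.sleep(1)
try:
    os.kill(p_pid, 0)                 # signal 0 only checks whether the pid still exists
    os.kill(p_pid, signal.SIGKILL)    # still alive, so force it
except OSError:
    pass                              # already gone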
Solution 2
An alternative method useful for Unix could be:
import subprocess
import os, signal
p=subprocess.Popen(['gnome-terminal -e bash'], stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
os.killpg(os.getpgid(p.pid), signal.SIGTERM)
It seems that your process opens child processes that prevent the parent from being closed. By adding a session id to the parent process, you should be able to fix it.
Solution 3
import subprocess, psutil

def kill(p_pid):
    process = psutil.Process(p_pid)
    for proc in process.children(recursive=True):  # get_children() in older psutil releases
        proc.kill()
    process.kill()

p = subprocess.Popen(['gnome-terminal -e bash'], shell=True)
try:
    p.wait(timeout=3)
except subprocess.TimeoutExpired:
    kill(p.pid)
This solution requires psutil.
Solution 4
According to askubuntu, it seems that the best way to close a gnome-terminal instance is to execute a bash command like:
killall -s {signal} gnome-terminal
where {signal} simulates Alt + F4.
You can try to do it using pexpect:
p = pexpect.spawn(your_cmd_here)
p.send('^F4')

I wanted to add this snippet for anyone who is running Linux Ubuntu and trying to open a subprocess, run a script, and terminate it after a time.sleep().
I found a litany of solutions that would open a window but not close it, or that would open a window and close it but wouldn't run the script inside the terminal.
There was no exact answer, so I had to hack together several solutions, as I am a novice when it comes to subprocess/shell.
This snippet was able to open a subprocess, run the script, and terminate the subprocess after 10 seconds had passed. Again, this was built on the shoulders of giants. I hope this saves someone time; cheers.
import os
import signal
import subprocess
import time
command = 'python3 Rmonitor.py'
p = subprocess.Popen(['gnome-terminal', '--disable-factory', '--', 'bash', '-c', command],
                     preexec_fn=os.setpgrp)
time.sleep(10)
os.killpg(p.pid, signal.SIGINT)

Related

Opens a process with Popen, can't close it (need to run a ROS command in cmd)

I need to save some image files from my simulation at different times, so my idea was to open a subprocess, save some image files, and then close it.
import subprocess
cmd = "rosrun pcl_ros pointcloud_to_pcd input:=camera/depth/points"
proc = subprocess.Popen(cmd, shell=True)
When it comes to closing I tried different things:
import os
import signal
import subprocess
cmd = "rosrun pcl_ros pointcloud_to_pcd input:=camera/depth/points"
pro = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                       shell=True, preexec_fn=os.setsid)
os.killpg(os.getpgid(pro.pid), signal.SIGTERM)
The command did not execute, so this doesn't work for me. I also tried a solution with psutil and that didn't work either...
You probably don't need shell=True here, which is the cause of your problems. I suspect that when you kill the process group in your second snippet, the shell process is killed before the process you want to run has a chance to start...
Try to pass the parameters as a list of strings (so you don't need shell=True), wait a bit, and use terminate on the Popen object. You don't need process group, or psutil to kill the process & its children, just plain old terminate() on the process object does the trick.
cmd = ["rosrun","pcl_ros","pointcloud_to_pcd","input:=camera/depth/points"]
proc = subprocess.Popen(cmd)
time.sleep(1) # maybe needed to wait the process to do something useful
proc.terminate()
Note that proc.terminate() asks the process to exit gracefully (SIGTERM), whereas proc.kill() just kills it outright (SIGKILL); there's a difference under Un*x systems, not under Windows.
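If a graceful terminate() might hang, a common escalation pattern is to wait briefly and then kill. A sketch of that idea (wait(timeout=...) requires Python 3.3+):
import subprocess
import time

cmd = ["rosrun", "pcl_ros", "pointcloud_to_pcd", "input:=camera/depth/points"]
proc = subprocess.Popen(cmd)
time.sleep(1)                 # let it do something useful
proc.terminate()              # ask nicely (SIGTERM)
try:
    proc.wait(timeout=5)      # give it a few seconds to exit
except subprocess.TimeoutExpired:
    proc.kill()               # escalate to SIGKILL
    proc.wait()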
Another plea for "do not use shell=True unless forced at gunpoint".

How to run a background process and do *not* wait?

My goal is simple: kick off rsync and DO NOT WAIT.
Python 2.7.9 on Debian
Sample code:
rsync_cmd = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1)
rsync_cmd2 = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3} &".format(remote_user, remote_server, file1, file1)
rsync_path = "/usr/bin/rsync"
rsync_args = shlex.split("-a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1))
#subprocess.call(rsync_cmd, shell=True) # This isn't supposed to work but I tried it
#subprocess.Popen(rsync_cmd, shell=True) # This is supposed to be the solution but not for me
#subprocess.Popen(rsync_cmd2, shell=True) # Adding my own shell "&" to background it, still fails
#subprocess.Popen(rsync_cmd, shell=True, stdin=None, stdout=None, stderr=None, close_fds=True) # This doesn't work
#subprocess.Popen(shlex.split(rsync_cmd)) # This doesn't work
#os.execv(rsync_path, rsync_args) # This doesn't work
#os.spawnv(os.P_NOWAIT, rsync_path, rsync_args) # This doesn't work
#os.system(rsync_cmd2) # This doesn't work
print "DONE"
(I've commented out the execution commands only because I'm actually keeping all of my trials in my code so that I know what I've done and what I haven't done. Obviously, I would run the script with the right line uncommented.)
What happens is this...I can watch the transfer on the server and when it's finished, then I get a "DONE" printed to the screen.
What I'd like to have happen is a "DONE" printed immediately after issuing the rsync command and for the transfer to start.
Seems very straight-forward. I've followed details outlined in other posts, like this one and this one, but something is preventing it from working for me.
Thanks ahead of time.
(I have tried everything I can find in StackExchange and don't feel like this is a duplicate because I still can't get it to work. Something isn't right in my setup and need help.)
Here is a verified example, run in the Python REPL:
>>> import subprocess
>>> import sys
>>> p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(100)'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT); print('finished')
finished
How to verify that via another terminal window:
$ ps aux | grep python
Output:
user 32820 0.0 0.0 2447684 3972 s003 S+ 10:11PM 0:00.01 /Users/user/venv/bin/python -c import time; time.sleep(100)
Popen() starts a child process -- it does not wait for it to exit. You have to call the .wait() method explicitly if you want to wait for the child process. In that sense, all subprocesses are background processes.
On the other hand, the child process may inherit various properties/resources from the parent such as open file descriptors, the process group, its controlling terminal, some signal configuration, etc. -- this may prevent ancestor processes from exiting (e.g., Python subprocess .check_call vs .check_output), or the child may die prematurely on Ctrl-C (the SIGINT signal is sent to the foreground process group) or if the terminal session is closed (SIGHUP).
To disassociate the child process completely, you should make it a daemon. Sometimes something in between could be enough e.g., it is enough to redirect the inherited stdout in a grandchild so that .communicate() in the parent would return when its immediate child exits.
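For the rsync case above, a middle-ground sketch for Unix (the source and destination are placeholders; os.setsid puts rsync in its own session so it is not tied to the parent's terminal):
import os
import shlex
import subprocess

# placeholder source/destination -- substitute your own
rsync_cmd = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' user@host:'/remote/file' /local/file"

with open(os.devnull, 'wb') as devnull:
    subprocess.Popen(shlex.split(rsync_cmd),
                     stdout=devnull, stderr=devnull,   # don't inherit the parent's stdout/stderr
                     preexec_fn=os.setsid)             # new session: survives Ctrl-C and terminal close
print("DONE")   # printed immediately; rsync keeps running in the background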
I encountered a similar issue while working with QNX devices and wanted a sub-process that runs independently of the main process and keeps running even after the main process terminates.
Here's the solution I found that actually works: creationflags=subprocess.DETACHED_PROCESS:
import subprocess
import time
pid = subprocess.Popen(["python", r"path_to_script\turn_ecu_on.py"],  # raw string so \t is not treated as a tab escape
                       creationflags=subprocess.DETACHED_PROCESS)
time.sleep(15)
print("Done")
Link to the doc: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
On Ubuntu the following command keeps running even after the Python app exits.
import os

url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)

python - subprocess.Popen().pid returns the pid of the parent script

I am trying to run a Python script from another Python script, and getting its pid so I can kill it later.
I tried subprocess.Popen() with the argument shell=True, but the pid attribute returns the pid of the parent script, so when I try to kill the subprocess, it kills the parent.
Here is my code:
proc = subprocess.Popen(" python ./script.py", shell=True)
pid_ = proc.pid
.
.
.
# later in my code
os.system('kill -9 %s'%pid_)
#IT KILLS THE PARENT :(
shell=True starts a new shell process. proc.pid is the pid of that shell process. kill -9 kills the shell process, making the grandchild python process an orphan.
If the grandchild python script can spawn its own child processes and you want to kill the whole process tree then see How to terminate a python subprocess launched with shell=True:
#!/usr/bin/env python
import os
import signal
import subprocess
proc = subprocess.Popen("python script.py", shell=True, preexec_fn=os.setsid)
# ...
os.killpg(proc.pid, signal.SIGTERM)
If script.py does not spawn any processes then use @icktoofay's suggestion: drop shell=True, use a list argument, and call proc.terminate() or proc.kill() -- the latter always works eventually:
#!/usr/bin/env python
import subprocess
proc = subprocess.Popen(["python", "script.py"])
# ...
proc.terminate()
If you want to run your parent script from a different directory, you might need a get_script_dir() function.
Consider importing the python module and running its functions, using its objects (perhaps via multiprocessing), instead of running it as a script. Here's a code example that demonstrates get_script_dir() and multiprocessing usage.
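A minimal sketch of the multiprocessing idea, assuming script.py is refactored to expose a main() function (hypothetical name):
import multiprocessing
import script            # hypothetical: the code from script.py as an importable module

proc = multiprocessing.Process(target=script.main)
proc.start()
print(proc.pid)          # the worker's own pid, no shell in between
# ... later ...
proc.terminate()         # sends SIGTERM to the worker
proc.join()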
So run it directly without a shell:
proc = subprocess.Popen(['python', './script.py'])
By the way, you may want to consider changing the hardcoded 'python' to sys.executable. Also, you can use proc.kill() to kill the process rather than extracting the PID and using that; furthermore, even if you did need to kill by PID, you could use os.kill to kill the process rather than spawning another command.
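Putting those suggestions together, a sketch might look like:
import sys
import subprocess

proc = subprocess.Popen([sys.executable, './script.py'])   # no shell, so proc.pid is script.py itself
# ... later ...
proc.kill()        # or proc.terminate() for a gentler SIGTERM
proc.wait()        # reap the child so it does not linger as a zombie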

Python: Terminate subprocess = Success, but it's still running (?)

I have a simple script that calls another python script as a subprocess. I can confirm the subprocess is started and I can grab its PID.
When I attempt to terminate the subprocess (on Windows), I get the SUCCESS message against the correct PID, but Windows Task Manager shows the second python.exe process still running.
Any suggestions for accomplishing this task on Windows? I'll be extending this to also work on OSX and Linux eventually:
Simplified:
#!/usr/bin/env python
import os, sys
import subprocess
from subprocess import Popen, PIPE, STDOUT, check_call
pyTivoPath = r"c:\pyTivo\pyTivo.py"
print "\nmyPID: %d" % os.getpid()
## Start pyTivo ##
py_process = subprocess.Popen(pyTivoPath, shell=True, stdout=PIPE, stderr=subprocess.STDOUT)
print "newPID: %s" % py_process.pid
## Terminate pyTivo ##
#py_process.terminate() - for nonWin (?)
py_kill = subprocess.Popen("TASKKILL /PID "+ str(py_process.pid) + " /f")
raw_input("\nPress Enter to continue...")
Note: Python2.7 required, psutils not available
In my implementation, the following actually creates TWO processes in Windows ("cmd.exe" and "python.exe").
py_process = subprocess.Popen(pyTivoPath, shell=True, stdout=PIPE, stderr=subprocess.STDOUT)
Noticing the "python.exe" process is a child of the "cmd.exe" process, I added the "/T" (tree kill) switch to my TASKKILL:
py_kill = subprocess.Popen("TASKKILL /PID "+ str(py_process.pid) + " /f /t")
This results in the desired effect to effectively KILL the python subprocess.
Two processes are created because you call Popen with shell=True. It looks like the only reason you need to use a shell is so you make use of the file association with the interpreter. To resolve your issue you could also try:
from subprocess import Popen, PIPE, STDOUT
pyTivoPath = r"c:\pyTivo\pyTivo.py"
cmd = r'c:\Python27\python.exe "{}"'.format(pyTivoPath)
# start process
py_process = Popen(cmd, stdout=PIPE, stderr=STDOUT)
# kill process
py_process.terminate()
Use the /F (Force) switch on the TASKKILL command. Lots of Windows commands do not have useful return values, and I don't recall whether TASKKILL returns a useful one.
Sorry, overlooked your /F.
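For what it's worth, TASKKILL does set an exit code (0 on success in my experience), so a sketch that checks it, reusing py_process from the snippet above, could look like:
import subprocess

rc = subprocess.call("TASKKILL /PID " + str(py_process.pid) + " /f /t")
if rc == 0:
    print "TASKKILL reported success"
else:
    print "TASKKILL failed with exit code %d" % rc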
You could try calling the win32 api directly.
import win32api
win32api.TerminateProcess(int(process._handle), -1)
Found the ActiveState page for this. Documents a number of kill methods, including the Win32 approach above.
There are also a number of reasons why Windows will not allow you to terminate a process. Common reasons are permissions and buggy drivers that have pending I/O requests that don't respond to the kill signal properly.
There are some programs, e.g. ProcessHacker, that are more enthusiastic about killing processes, but I don't know the technical details for certain, though I suspect forced closing of open file handles etc. and then calling Terminate are involved.
You can have similar issues on Linux, i.e., no permission to kill the process or the process not reacting to the termination signal. It is easier to resolve on Linux though: if kill -9 does not work, the process is usually stuck in an uninterruptible kernel sleep (e.g., waiting on I/O), since SIGKILL itself cannot be caught or ignored, and that is a rarer condition.
0) You could use TASKKILL /T to kill both cmd.exe and the python interpreter.
1) If you change your process creation to start the python process directly (instead of invoking the .py file and relying on cmd to launch it), with the script name as a command argument, you will get the PID you expect when you create the process.
2) You could use TASKKILL /IM to kill the process by name, but the name will be the python interpreter and it could kill unintended processes; see the sketch below.
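A sketch of option 2, for completeness; note the caveat above that /IM matches every python.exe on the machine:
import subprocess

# /IM matches by image name, so this can kill unrelated interpreters -- use with care
subprocess.call("TASKKILL /IM python.exe /F")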

is there a way to start/stop linux processes with python?

I want to be able to start a process and then be able to kill it afterwards
Here's a little python script that starts a process, checks whether it is running, waits a while, kills it, waits for it to terminate, then checks again. It uses the 'kill' command. Version 2.6 of the python subprocess module has a kill function; this was written on 2.5.
import subprocess
import time
proc = subprocess.Popen(["sleep", "60"], shell=False)
print 'poll =', proc.poll(), '("None" means process not terminated yet)'
time.sleep(3)
subprocess.call(["kill", "-9", "%d" % proc.pid])
proc.wait()
print 'poll =', proc.poll()
The timed output shows that it was terminated after about 3 seconds, and not 60 as the call to sleep suggests.
$ time python prockill.py
poll = None ("None" means process not terminated yet)
poll = -9
real 0m3.082s
user 0m0.055s
sys 0m0.029s
Have a look at the subprocess module.
You can also use low-level primitives like fork() via the os module.
http://docs.python.org/library/os.html#process-management
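A minimal sketch of the fork()/exec approach (POSIX only; the sleep command is just a stand-in for the process you want to manage):
import os
import signal
import time

pid = os.fork()
if pid == 0:
    # child: replace this process image with the command to run
    os.execvp("sleep", ["sleep", "60"])
else:
    # parent: let the child run briefly, then stop it and reap it
    time.sleep(3)
    os.kill(pid, signal.SIGTERM)
    os.waitpid(pid, 0)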
A simple function that uses the subprocess module:
import subprocess

def CMD(cmd):
    p = subprocess.Popen(cmd, shell=True,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
                         close_fds=False)
    return (p.stdin, p.stdout, p.stderr)
See the docs for the fork() primitive and the subprocess, multiprocessing, and threading modules.
If you need to interact with the sub-process at all, I recommend the pexpect module. You can send input to the process, receive (or "expect") output in return, and you can close the process (with force=True to send SIGKILL).
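A minimal pexpect sketch (assuming pexpect is installed; the command and the expected output are just placeholders):
import pexpect

child = pexpect.spawn("bash", ["-c", "echo ready; sleep 60"])
child.expect("ready")          # block until the expected output appears
child.close(force=True)        # force=True falls back to SIGKILL if the process won't exit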
