How to run a background process and *not* wait? - python

My goal is simple: kick off rsync and DO NOT WAIT.
Python 2.7.9 on Debian
Sample code:
rsync_cmd = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1)
rsync_cmd2 = "/usr/bin/rsync -a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3} &".format(remote_user, remote_server, file1, file1)
rsync_path = "/usr/bin/rsync"
rsync_args = shlex.split("-a -e 'ssh -i /home/myuser/.ssh/id_rsa' {0}@{1}:'{2}' {3}".format(remote_user, remote_server, file1, file1))
#subprocess.call(rsync_cmd, shell=True) # This isn't supposed to work but I tried it
#subprocess.Popen(rsync_cmd, shell=True) # This is supposed to be the solution but not for me
#subprocess.Popen(rsync_cmd2, shell=True) # Adding my own shell "&" to background it, still fails
#subprocess.Popen(rsync_cmd, shell=True, stdin=None, stdout=None, stderr=None, close_fds=True) # This doesn't work
#subprocess.Popen(shlex.split(rsync_cmd)) # This doesn't work
#os.execv(rsync_path, rsync_args) # This doesn't work
#os.spawnv(os.P_NOWAIT, rsync_path, rsync_args) # This doesn't work
#os.system(rsync_cmd2) # This doesn't work
print "DONE"
(I've commented out the execution commands only because I'm actually keeping all of my trials in my code so that I know what I've done and what I haven't done. Obviously, I would run the script with the right line uncommented.)
What happens is this: I can watch the transfer on the server, and only when it's finished do I get a "DONE" printed to the screen.
What I'd like to have happen is a "DONE" printed immediately after issuing the rsync command, and for the transfer to start.
Seems very straightforward. I've followed details outlined in other posts, like this one and this one, but something is preventing it from working for me.
Thanks ahead of time.
(I have tried everything I can find on StackExchange and don't feel this is a duplicate because I still can't get it to work. Something isn't right in my setup and I need help.)

Here is a verified example for the Python REPL:
>>> import subprocess
>>> import sys
>>> p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(100)'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT); print('finished')
finished
How to verify that via another terminal window:
$ ps aux | grep python
Output:
user 32820 0.0 0.0 2447684 3972 s003 S+ 10:11PM 0:00.01 /Users/user/venv/bin/python -c import time; time.sleep(100)

Popen() starts a child process; it does not wait for it to exit. You have to call the .wait() method explicitly if you want to wait for the child process. In that sense, all subprocesses are background processes.
On the other hand, the child process may inherit various properties/resources from the parent such as open file descriptors, the process group, its control terminal, and some signal configuration. This may prevent ancestor processes from exiting (e.g., Python subprocess .check_call vs .check_output), or the child may die prematurely on Ctrl-C (SIGINT is sent to the foreground process group) or when the terminal session is closed (SIGHUP).
To disassociate the child process completely, you should make it a daemon. Sometimes something in between is enough, e.g., redirecting the inherited stdout in a grandchild so that .communicate() in the parent returns when its immediate child exits.
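For the rsync case in the question, something in between is usually enough. Here is a minimal sketch (POSIX, Python 2.7 as in the question; the remote spec and file paths are placeholders): redirect the standard streams to os.devnull and start the child in its own session so it survives Ctrl-C and hangups:
import os
import subprocess

devnull = open(os.devnull, 'r+b')
subprocess.Popen(
    ['/usr/bin/rsync', '-a', '-e', 'ssh -i /home/myuser/.ssh/id_rsa',
     'user@server:/remote/file', '/local/file'],
    stdin=devnull, stdout=devnull, stderr=devnull,
    preexec_fn=os.setsid,  # new session: no controlling terminal
    close_fds=True)
devnull.close()  # the child keeps its own duplicated descriptors
print "DONE"  # prints immediately; rsync keeps running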

I encountered a similar issue while working with QNX devices and wanted a sub-process that runs independently of the main process and keeps running even after the main process terminates.
Here's the solution I found that actually works, using creationflags=subprocess.DETACHED_PROCESS:
import subprocess
import time
proc = subprocess.Popen(["python", r"path_to_script\turn_ecu_on.py"],  # raw string: \t would otherwise be parsed as a tab
                        creationflags=subprocess.DETACHED_PROCESS)
time.sleep(15)
print("Done")
Link to the doc: https://docs.python.org/3/library/subprocess.html#subprocess.Popen
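A variant of the same idea (a sketch I have not verified on that setup): combine DETACHED_PROCESS with CREATE_NEW_PROCESS_GROUP and drop the standard streams, so the child shares neither the console nor Ctrl-C events with the parent; "child_script.py" is a placeholder:
import subprocess

flags = subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
proc = subprocess.Popen(
    ["python", "child_script.py"],
    creationflags=flags,  # Windows-only flags (DETACHED_PROCESS needs Python 3.7+)
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print("Detached PID:", proc.pid)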

On Ubuntu the following commands keep working even after the Python app exits.
url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
cmd = f"mpv '{url}' && zenity --info --text 'you have watched {url}' &"
os.system(cmd)
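os.system() blocks until the shell returns; the trailing & is what makes the shell return immediately. A slightly more defensive sketch (same mpv/zenity commands, assumed installed) adds nohup and redirects the streams so the job also survives a closed terminal:
import os

url = "https://www.youtube.com/watch?v=t3kcqTE6x4A"
# nohup + redirection: the job ignores SIGHUP and holds no terminal fds
cmd = f"nohup sh -c \"mpv '{url}' && zenity --info --text 'done'\" >/dev/null 2>&1 &"
os.system(cmd)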

Related

Output garbled when launching multiple ssh-sessions with pseudo-tty (need remote process to exit when ssh disconnects/is killed)

I have a python script that opens multiple concurrent pseudo-tty ssh sessions to a server. My problem is that the output is garbled:
for i in range(0, 3):
    subprocess.Popen(
        "ssh -tt -q myserver 'echo 11; echo 22; echo 33; echo 44;'",
        shell=True
    )
Output:
11
22
33
44
11
22
33
44
11
22
33
44
The output varies. Sometimes it works, but most of the time I get those weird indentations. In reality I want to launch remote python processes (a locust load gen slave), but I've simplified it to just use echo.
Things I've tried:
universal_newlines=True, bufsize=1 (doesn't help)
remove -tt (fixes the output but has the undesired side effect of remote processes not dying right away if python/ssh is terminated)
piping to cat -e to get hidden characters (for debugging):
11^M$
22^M$
33^M$
44^M$
11$
22$
33$
44$
11$
22$
33$
44$
I'm not sure if it is even a Python issue or just an SSH issue. My guess is that I need to use some sort of line buffering, but I don't know how :-/
I'm on MacOS Mojave, and I've tried both in iTerm2 and Terminal if that matters.
Edit: I'm not sure it is related, but the problem appears to occur more frequently if I ensure python keeps running until the ssh session has terminated (by adding time.sleep(10) at the end of the script)
edit 2: I tried @FLemaitre's solution (not using -tt and killing explicitly), and it works in the simple case, but not when spawning locust:
proc = subprocess.Popen(
    "ssh servername 'locust --slave --master-port 7777 --no-web -f locustfile.py & read; kill $!'",
    shell=True,
    stdin=subprocess.PIPE,
)
time.sleep(10)
proc.kill()
proc.wait()
On the remote a bash -c locust --slave ... process is started. It dies when ssh is killed, but locust itself (a child of the above process) does not :-/
I can reproduce the issue systematically with the following script:
import subprocess
import time

if __name__ == "__main__":
    for i in range(0, 10):
        proc = subprocess.Popen(
            "ssh -tt -q localhost 'echo 11; echo 22; echo 33; '",
            shell=True
        )
    time.sleep(4)
And I think the issue is not related to Python. These multiple ssh sessions with pseudo-TTYs seem to conflict with each other. Eventually, the terminal used to run this script ends up broken as well (whereas it wasn't sourced):
>cat test2.py
import subprocess
import time
import atexit
... etc ...
I checked the documentation and this -t option seems to do much more than what you are actually trying to achieve. When I remove the second t and the -q options, I sometimes (not often) get a cryptic error message stating that something went wrong (but I no longer manage to reproduce it). I checked with google but without much success. Still, I'm convinced that this option is overkill and I would rather focus on the undying processes. This one issue is well known:
Starting a process over ssh using bash and then killing it on sigint
The second answer is your -tt option, but the best answer suits your example very well and is superior (with -tt you solve ssh's propagation of the termination but do not tackle the same issue between Python and its subprocess). For example:
import subprocess
import time

if __name__ == "__main__":
    for i in range(0, 10):
        proc = subprocess.Popen(
            "ssh localhost 'sleep 90 & read ; kill $!'",
            shell=True,
            stdin=subprocess.PIPE
        )
    time.sleep(40)
With this solution, stdin is shared by all actors (python, the python subprocess, the ssh process, the sleep process), and its closure at any point in the chain is detected by the final business process, triggering a graceful shutdown.
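To see that shutdown path explicitly, here is a sketch that closes stdin by hand instead of relying on the Python process exiting; the remote read returns on EOF and kill $! takes down the sleep:
import subprocess
import time

proc = subprocess.Popen(
    "ssh localhost 'sleep 90 & read; kill $!'",
    shell=True,
    stdin=subprocess.PIPE,
)
time.sleep(5)
proc.stdin.close()  # remote `read` hits EOF, so `kill $!` runs
proc.wait()         # reap the local ssh process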
Edit with locust:
I gave it a quick try and the issue was that a simple 'kill' is ignored by the slave (looks like an issue on locust's side). It seems to work with a 'kill -9':
import subprocess
import time

if __name__ == "__main__":
    for i in range(0, 2):
        proc = subprocess.Popen(
            "ssh localhost 'python -m locust --slave --no-web -f ~devsup/users/flemaitre/tmp/locust_config.py & read ; kill -9 $!'",
            shell=True,
            stdin=subprocess.PIPE
        )
    time.sleep(40)

Terminate a gnome-terminal opened with subprocess

Using subprocess and the command 'gnome-terminal -e bash' I can open up a gnome-terminal as desired (and have it stick around). This is done with either
p=subprocess.Popen(['gnome-terminal', '-e', 'bash'])
or
p=subprocess.Popen(['gnome-terminal -e bash'], shell=True)
but I cannot close the terminal using p.terminate() or p.kill(). From what I understand, this is a little trickier when using shell=True but I did not expect to run into problems otherwise.
To terminate a terminal and its children (in the same process group):
#!/usr/bin/env python
import os
import signal
import subprocess
p = subprocess.Popen(['gnome-terminal', '--disable-factory', '-e', 'bash'],
                     preexec_fn=os.setpgrp)
# do something here...
os.killpg(p.pid, signal.SIGINT)
--disable-factory is used to avoid re-using an active terminal so that we can kill the newly created terminal via the subprocess handle
os.setpgrp puts gnome-terminal in its own process group so that os.killpg() can be used to send a signal to the whole group
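One small addition to the snippet above (a sketch): wait on the child after signalling so it does not linger as a zombie inside a long-running parent:
import os
import signal
import subprocess

p = subprocess.Popen(['gnome-terminal', '--disable-factory', '-e', 'bash'],
                     preexec_fn=os.setpgrp)
# do something here...
os.killpg(p.pid, signal.SIGINT)
p.wait()  # collect the exit status so no zombie is left behind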
You should be able to do this workaround:
get the process id
kill the process
Working Solution: Close gnome-terminal-server
As suggested by @j-f-sebastian in the comment, gnome-terminal
just sends the request (to gnome-terminal-server) to start a new terminal and exits immediately -- there is nothing to kill: the process is already dead (and newly created processes are not descendants: the new bash process is a child of gnome-terminal-server, not gnome-terminal).
import subprocess
import os, signal
import time
p=subprocess.Popen(['gnome-terminal -e bash'], stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
print "this is going to be closed in 3 sec"
time.sleep(3)
# this line returns the list of bash instance pids as a string
bash_pids = subprocess.check_output(["pidof", "bash"])
# pidof lists the most recently started instance first
pid_to_kill = bash_pids.split(" ")[0]
os.kill(int(pid_to_kill), signal.SIGTERM)
My solution follows this logic:
run gnome-terminal
get the latest bash instance opened process id
kill this process id
Broken solutions
These solutions might work in simpler cases:
Solution 1
import subprocess
import os, signal
p=subprocess.Popen(['gnome-terminal -e bash'], shell=True)
p_pid = p.pid # get the process id
os.kill(p_pid, signal.SIGKILL)
To choose the appropriate signal to pass instead of SIGKILL you can refer to the signal documentation. E.g.
On Windows, signal() can only be called with SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, or SIGTERM
For Unix there is quite an extensive list of signals to choose from.
For a better overview of os.kill, you can refer to its documentation.
Solution 2
An alternative method useful for Unix could be:
import subprocess
import os, signal
p=subprocess.Popen(['gnome-terminal -e bash'], stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
os.killpg(os.getpgid(p.pid), signal.SIGTERM)
It seems that your process is opening child processes that prevent the parent from being closed. By adding a session id to your parent process, you should be able to fix it.
Solution 3
import subprocess
import psutil

def kill(p_pid):
    process = psutil.Process(p_pid)
    for proc in process.children(recursive=True):  # get_children() in psutil < 2.0
        proc.kill()
    process.kill()

p = subprocess.Popen(['gnome-terminal -e bash'], shell=True)
try:
    p.wait(timeout=3)
except subprocess.TimeoutExpired:
    kill(p.pid)
This solution requires psutil.
Solution 4
According to askubuntu, it seems that the best way to close a gnome terminal instance would be to execute a bash command like:
killall -s {signal} gnome-terminal
where {signal} simulates Alt + F4.
You can try to do it using pexpect:
p = pexpect.spawn(your_cmd_here)
p.send('^F4')
I wanted to add this snippet for anyone who is running on Linux Ubuntu and trying to open a subprocess, run a script, and terminate it after a time.sleep().
I found a litany of solutions that would open a window, but not close it. Or a solution would open a window, and close it, but wouldn't run the script inside the terminal.
There was no exact answer so I had to hack together several solutions, as I am a novice when it comes to subprocess/shell.
This snippet was able to open a subprocess, run the script, and when 10 seconds had passed the subprocess was terminated. Again, this was built on the shoulders of giants. I hope this saves someone time; cheers.
import os
import signal
import subprocess
import time
command = 'python3 Rmonitor.py'
p = subprocess.Popen(['gnome-terminal', '--disable-factory', '--', 'bash', '-c', command],
                     preexec_fn=os.setpgrp)
time.sleep(10)
os.killpg(p.pid, signal.SIGINT)

Python: Terminate subprocess = Success, but it's still running (?)

I have a simple script that calls another python script as a subprocess. I can confirm the subprocess is started and I can grab its PID.
When I attempt to terminate the subprocess (in win), I get the SUCCESS message against the correct PID, but Windows task manager shows the 2nd python.exe process to still be running.
Any suggestions to accomplish this task in Win? I'll be extending this to also work in OSX and Linux eventually:
Simplified:
#!/usr/bin/env python
import os, sys
import subprocess
from subprocess import Popen, PIPE, STDOUT, check_call
pyTivoPath="c:\pyTivo\pyTivo.py"
print "\nmyPID: %d" % os.getpid()
## Start pyTivo ##
py_process = subprocess.Popen(pyTivoPath, shell=True, stdout=PIPE, stderr=subprocess.STDOUT)
print "newPID: %s" % py_process.pid
## Terminate pyTivo ##
#py_process.terminate() - for nonWin (?)
py_kill = subprocess.Popen("TASKKILL /PID "+ str(py_process.pid) + " /f")
raw_input("\nPress Enter to continue...")
Note: Python 2.7 required, psutil not available
In my implementation, the following actually creates TWO processes in Windows ("cmd.exe" and "python.exe").
py_process = subprocess.Popen(pyTivoPath, shell=True, stdout=PIPE, stderr=subprocess.STDOUT)
Noticing the "python.exe" process is a child of the "cmd.exe" process, I added the "/T" (tree kill) switch to my TASKKILL:
py_kill = subprocess.Popen("TASKKILL /PID "+ str(py_process.pid) + " /f /t")
This results in the desired effect: the python subprocess is effectively killed.
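For reference, a sketch of the same tree kill that fails loudly if TASKKILL reports an error (py_process as in the snippet above):
import subprocess

# /T kills the cmd.exe wrapper plus its python.exe child; /F forces it.
# check_call raises CalledProcessError on a nonzero TASKKILL exit code.
subprocess.check_call("TASKKILL /PID {0} /F /T".format(py_process.pid))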
Two processes are created because you call Popen with shell=True. It looks like the only reason you need to use a shell is so you can make use of the file association with the interpreter. To resolve your issue you could also try:
from subprocess import Popen, PIPE, STDOUT
pyTivoPath = "c:\pyTivo\pyTivo.py"
cmd = r'c:\Python27\python.exe "{}"'.format(pyTivoPath)
# start process
py_process = Popen(cmd, stdout=PIPE, stderr=STDOUT)
# kill process
py_process.terminate()
Use the /F (Force) switch on the TASKKILL command. Lots of Windows commands do not have useful return values; I don't recall whether TASKKILL's is useful.
Sorry, overlooked your /F.
You could try calling the win32 api directly.
import win32api
win32api.TerminateProcess(int(process._handle), -1)
Found the ActiveState page for this. Documents a number of kill methods, including the Win32 approach above.
There are also a number of reasons why Windows will not allow you to terminate a process. Common reasons are permissions and buggy drivers with pending I/O requests that don't respond to the kill signal properly.
There are some programs, e.g. ProcessHacker, that are more enthusiastic about killing processes, but I don't know the technical details for certain, though I suspect forced closing of open file handles etc. and then calling Terminate are involved.
You can have similar issues on Linux, i.e., no permission to kill the process, or the process ignoring the termination signal. It is easier to resolve on Linux, though: kill -9 (SIGKILL) cannot be caught or ignored, and if even that does not work the process is usually stuck in an uninterruptible kernel wait.
0) You could use TASKKILL /T to kill CMD and the Python interpreter.
1) If you change your process creation to create the python process directly (instead of invoking the .py and relying on cmd to launch) with the script name as command argument you will get the PID you expect when you create the process.
2) You could use TASKKILL /IM to kill the process by name, but the name will be the python interpreter and it could kill unintended processes.

How do I fork a new process with independent stdout, stderr, and stdin?

I have read most of the related questions about subprocess and os.fork(), including all the discussions about the double forking trick. However, none of those solutions appear to work correctly for my scenario.
I want to fork a new process and allow the parent to terminate (normally) without screwing up the child's stdin, stdout, and stderr and without killing the child.
My first attempt was to use subprocess.Popen().
#!/usr/bin/python
from subprocess import call,Popen
Popen("/bin/bash", shell=True)
call("echo Hello > /tmp/FooBar", shell=True)
This fails because the child process is killed once the parent exits. I am aware of creationflags but that is Windows-specific and I am running on Linux. Note that the above code works beautifully if we simply keep the parent process alive by adding an infinite loop to the end of it. This is undesirable because the parent is already finished with its job and there's no real reason for it to stick around.
The second attempt was to use os.fork().
#!/usr/bin/python
from subprocess import call
from os import fork
try:
    pid = fork()
    if pid > 0:
        pass
    else:  # child process will start interactive process
        call("/bin/bash", shell=True)
except:
    print "Forking failed!"
call("echo Hello > /tmp/FooBar", shell=True)
Here, the child process no longer dies with the parent, but after the parent's death the child can no longer read input and write output.
Thus, I want to know how to fork a new process with utterly independent stdout, stderr, and stdin. Independence means that the parent process can terminate (normally), and the child process (whether it is bash or tmux or any other interactive program) behaves exactly as though the parent program had not terminated. To be even more precise, consider the following variation of the original program.
#!/usr/bin/python
from subprocess import call,Popen
Popen("/bin/bash", shell=True)
call("echo Hello > /tmp/FooBar", shell=True)
while True:
    pass
The above code has all the behaviors I seek, but it keeps the Python process alive artificially. I am trying to achieve this exact behavior, without the Python process being alive.
Caveat: I am running these applications over ssh, so spawning a new GUI window is not a viable solution.
Desired Behavior:
I run the python code.
I get a shiny new bash shell that works exactly like the bash shell I started with.
The file /tmp/FooBar is created.
The original Python script finishes.
I continue on with my shiny new bash shell, and the output of ps aux | grep python does not include the Python script I just ran.
Your first example has an extra unnecessary call to Popen(), as the call convenience function will just execute its commands in a shell and exit, so it would work if you just ran:
from subprocess import call, Popen
call("echo Hello > /tmp/FooBar", shell=True)
However if you want to send a series of commands to a shell then the shell needs to be opened with stdin attached to a pipe so it doesn't get mixed up with the parent's stdin (which is effectively what you're asking for in terms of obtaining independent stdio):
p = Popen("/bin/bash", shell=True, stdin = subprocess.PIPE)
p.stdin.write("echo Hello > /tmp/FooBar\n")
p.stdin.write("date >> /tmp/FooBar\n")
p.terminate()
If you want to also control the output then you redirect stdout and stderr to PIPE (i.e. stdout = subprocess.PIPE, stderr = subprocess.PIPE) and then call p.stdout.read() to obtain output as needed.
To allow the process to continue running after Python exits, one can add the & operator to the end of the command so it continues running in the background, e.g.:
call("nc -l 2000 > /tmp/nc < /dev/null &", shell=True)  # stdin from /dev/null so the backgrounded job holds no terminal fds
To have a process run with both stdin and stdout still connected one can use shell redirection. To maintain access to stdin one can create a named pipe using mkfifo, e.g.:
call("mkfifo /tmp/pipe",shell=True)
call("tail -f /tmp/pipe > /tmp/out &",shell=True)
To provide input on stdin one just sends data to the pipe, e.g. from the shell:
$ echo 'test' > /tmp/pipe
I recently encountered this problem and found a solution that might help: use creationflags to tell Popen that the child process should not inherit the parent process console, so that it has its own stdout, stdin and stderr as if it were a parent process.
subprocess.Popen("/bin/bash", creationflags=subprocess.CREATE_NEW_CONSOLE)
Popen supports the creationflags keyword argument according to the docs:
creationflags, if given, can be one or more of the following flags:
CREATE_NEW_CONSOLE
CREATE_NEW_PROCESS_GROUP
ABOVE_NORMAL_PRIORITY_CLASS
BELOW_NORMAL_PRIORITY_CLASS
HIGH_PRIORITY_CLASS
IDLE_PRIORITY_CLASS
NORMAL_PRIORITY_CLASS
REALTIME_PRIORITY_CLASS
CREATE_NO_WINDOW
DETACHED_PROCESS
CREATE_DEFAULT_ERROR_MODE
CREATE_BREAKAWAY_FROM_JOB
The ones you are interested in are DETACHED_PROCESS and CREATE_NEW_CONSOLE. I've used CREATE_NEW_CONSOLE; what it does is spawn a new process as if it were a parent. I used Python 3.5 on Windows. In Python 3.7 DETACHED_PROCESS was added and it's documented to do the same as CREATE_NEW_CONSOLE.
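Note that these creationflags exist only on Windows. A rough POSIX counterpart (a sketch, not an exact equivalent) is to start the child in its own session:
import subprocess

# start_new_session=True (Python 3.2+) calls setsid() in the child, giving
# it its own session and no controlling terminal
subprocess.Popen(["sleep", "100"], start_new_session=True)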

Python Popen waiting while it shouldn't (bg and output redirected)

When I run directly in a terminal:
echo "useful"; sleep 10 &> /tmp/out.txt & echo "more";
I get both outputs while sleep goes on in the background. I want this same behaviour with Popen (Python 2.7):
p = Popen('echo "useful"; sleep 10 &> /tmp/out.txt & echo "more";', shell = True, stdout = PIPE, stderr = PIPE)
print p.communicate()
It was my understanding that a background process with redirected stdout and stderr would achieve this, but it does not; I have to wait for sleep. Can someone explain?
I need the other output so changing stdout/stderr arguments in Python is not a solution.
EDIT: I understand now that the behaviour I want (get the output, but stop reading when there is no more output rather than when the command has completed) is not possible from Python.
However, the behaviour appears more or less automatically when using ssh:
ssh 1.2.3.4 "echo \'useful\'; cd ~/dicp/python; nohup sleep 5 &> /tmp/out.txt & echo \'more\';"
(I can ssh to this address without password). So it's not entirely impossible by working around Python; now I need a way to do it without ssh...
That's because the shell process still has to wait for the background process to finish.
You don't normally realize this is happening because you are usually working in the shell where you backgrounded something. You put a process in the background so you can get control of the shell again and continue to work with it.
In other words, the background process is relative to the shell, not your Python process.
As Martijn Pieters points out, this is not how Python behaves (or is meant to behave). However, because the desired behaviour appears when running the command through ssh with nohup, I found this similar trick:
p = Popen('bash -c "echo \'useful\'; cd ~/dicp/python; nohup sleep 5 &> /tmp/out.txt & echo \'more\';"', shell = True, stdout = PIPE, stderr = PIPE)
print p.communicate()
So if I understand correctly, this starts a new shell (bash -c), which then starts a process not attached to it (nohup). The shell terminates as soon as all other processes complete, but the nohup process keeps running. Desired behaviour achieved!
Maybe not pretty and probably not efficient, but it works.
EDIT: assuming, of course, that you are using bash. A more general answer is welcome.
EDIT2: actually, if my explanation is correct, I am not sure why nohup does not detach the process even if not using bash -c... Seems like bash -c would be redundant, just detach it from the shell started by Popen, but that does not work.
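For completeness, one more sketch that avoids ssh and bash -c entirely. A likely culprit is that &> is a bashism: under Debian's /bin/sh (dash) the backgrounded sleep keeps the inherited pipe descriptors open, so communicate() waits for EOF. Redirecting all three of the background job's streams portably lets communicate() return as soon as the foreground commands finish:
from subprocess import Popen, PIPE

p = Popen('echo "useful"; sleep 10 > /tmp/out.txt 2>&1 < /dev/null & echo "more";',
          shell=True, stdout=PIPE, stderr=PIPE)
print p.communicate()[0]  # "useful\nmore\n", without the 10 s wait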
