I've been reading up on a lot of documentation but am still not sure what I'm doing wrong.
So I have a separate shell script that fires up a server separate from the one I'm working on. Once the server is connected, I want to run ls and that's it. However, for some reason stdin=subprocess.PIPE is preventing the Popen call from terminating so that the next line can execute. For example, because the code is stuck, I'll press Ctrl+C, but I'll get an error saying that wait() got a keyboard interrupt. Here's an example:
import subprocess
from time import sleep
p1 = subprocess.Popen("run_server",
                      stdout=subprocess.PIPE,
                      stdin=subprocess.PIPE)
#sleep(1)
p1.wait()
p1.communicate(input="ls")[0]
If I replace p1.wait() with sleep(1), the communicate command does run and displays ls, but the script that runs the server detects EOF on the tty and terminates itself. I must have some kind of wait between Popen and communicate, because otherwise the server script terminates for the same reason.
p.wait() does not return until the child process is dead. While the parent script is stuck on the p.wait() call, your child process expects input at the same time -- deadlock. Then you press Ctrl+C in the shell; it sends SIGINT to all processes in the foreground process group, which kills both your parent Python script and the run_server subprocess.
You should drop the .wait() call:
#!/usr/bin/env python
from subprocess import Popen, PIPE
p = Popen(["run_server"], stdout=PIPE, stdin=PIPE)
output = p.communicate(b"ls")[0]
Or in Python 3.4+:
#!/usr/bin/env python3
from subprocess import check_output
output = check_output(["run_server"], input=b"ls")
If you want to run several commands then pass them all at once:
input = "\n".join(["ls", "cmd2", "etc"]) # with universal_newlines=True
As you know from reading the subprocess docs, p.communicate() waits for the child process to exit and therefore should be called at most once. As with .wait(), the child process is dead after .communicate() has returned.
The fact that your traceback says you were stuck in wait() when you pressed Ctrl+C means that the next line was executing -- that next line is wait(). wait() won't return until your p1 process returns, but it seems your p1 process won't return until you send it a command, 'ls' in your case. Try sending the command and then calling wait():
import subprocess
from time import sleep
p1 = subprocess.Popen("run_server",
                      stdout=subprocess.PIPE,
                      stdin=subprocess.PIPE)
#sleep(1)
p1.communicate(input="ls")[0]
p1.wait()
Otherwise, make sure your "run_server" script terminates so your script can advance past p1.wait().
There's this external python script that I would like to call.
It provides an async mode so that it returns the task id before it completes the whole process.
The mechanism works well when I execute it on the command line: the task id is printed to stdout immediately. But the main process actually forks a subprocess to do the backend job, so when I try to capture the task id from a bash script, it hangs until the subprocess finishes. It's not async at all.
So my question is, how can I get the main process output immediately instead of waiting for the subprocess complete?
e.g.
$ ./cmd args
{"task": 1}
$ x=`./cmd args`
<< it hangs until the entire process completes and returns all results at once.
{"task": 1} {"task": 1} {"actual_result": "xxx"}
# It's the same using Python
import subprocess
p = subprocess.Popen(["cmd", "args"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
<< stuck here as well
I would not call this a fork -- see https://stackoverflow.com/questions/49627957/what-is-the-difference-between-subprocess-popen-and-os-fork for the difference. So you want to let the subprocess run while the main process prints something.
communicate() blocks on IO, which is your main problem. You can get rid of the PIPE and just let the subprocess print to stdout, or to any file object. More aggressively, adding 'nohup' to the front of the child command frees the parent process to exit without worrying about the child, though it has the side effect of detaching the child from the current shell.
If you insist that the parent program manage all the printing, use poll() to check the status of the child process before you communicate() with it.
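If the task id is the first line the command prints, a minimal sketch (the command name is the question's placeholder; this assumes the child writes and flushes the id line before starting the backend job) is to read just that line instead of calling communicate():

import subprocess

p = subprocess.Popen(["cmd", "args"], stdout=subprocess.PIPE)
task_line = p.stdout.readline()  # returns as soon as the first line arrives
print(task_line)                 # e.g. b'{"task": 1}\n'
# ... do other work while the child keeps running in the background ...
p.wait()  # eventually reap the child to avoid a zombie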
I'm trying to run a simple script on Windows in the same shell.
When I run
subprocess.call(["python.exe", "a.py"], shell=False)
It works fine.
But when I run
subprocess.Popen(["python.exe", "a.py"], shell=False)
It opens a new shell, and shell=False has no effect.
a.py just prints a message to the screen.
First, calling Popen with shell=False doesn't mean that the underlying python won't try to open a window/console. It just means that the current Python instance executes python.exe directly rather than through a system shell (cmd or sh).
Second, Popen returns a handle on the process, and you have to perform a wait() on this handle for it to end properly or you could generate a defunct process (depending on the platform you're running on). I suggest that you try
p = subprocess.Popen(["python.exe", "a.py"], shell=False)
return_code = p.wait()
to wait for process termination and get return code.
Note that Popen is a very bad way to run processes in the background. The best way would be to use a separate thread:
import subprocess
import threading
def run_it():
    subprocess.call(["python.exe", "a.py"], shell=False)
t = threading.Thread(target=run_it)
t.start()
# do your stuff
# in the end
t.join()
I need to periodically check the stdout of a running process. For example, the process is tail -f /tmp/file, which is spawned in the python script. Then every x seconds, the stdout of that subprocess is written to a string and further processed. The subprocess is eventually stopped by the script.
To parse the stdout of a subprocess, I have used check_output until now, which doesn't seem to work here, as the process is still running and doesn't produce definite output.
>>> from subprocess import check_output
>>> out = check_output(["tail", "-f", "/tmp/file"])
#(waiting for tail to finish)
It should be possible to use threads for the subprocesses, so that the output of multiple subprocesses may be processed (e.g. tail -f /tmp/file1, tail -f /tmp/file2).
How can I start a subprocess, periodically check and process its stdout and eventually stop the subprocess in a multithreading friendly way? The python script runs on a Linux system.
The goal is not to continuously read a file, the tail command is an example, as it behaves exactly like the actual command used.
edit: I didn't think this through, the file did not exist. check_output now simply waits for the process to finish.
edit2: An alternative method, with Popen and PIPE appears to result in the same issue. It waits for tail to finish.
>>> from subprocess import Popen, PIPE, STDOUT
>>> cmd = 'tail -f /tmp/file'
>>> p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
>>> output = p.stdout.read()
#(waiting for tail to finish)
Your second attempt is 90% correct. The only issue is that you are attempting to read all of tail's stdout at once, after it has finished. However, tail is intended to run (indefinitely?) in the background, so you really want to read its stdout line by line:
from subprocess import Popen, PIPE, STDOUT
p = Popen(["tail", "-f", "/tmp/file"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
for line in p.stdout:
    print(line)
I have removed the shell=True and close_fds=True arguments. The first is unnecessary and potentially dangerous, while the second is just the default.
Remember that file objects are iterable over their lines in Python. The for loop will run until tail dies, but it will process each line as it appears, as opposed to read, which will block until tail dies.
If I create an empty file in /tmp/file, start this program and begin echoing lines into the file using another shell, the program will echo those lines. You should probably replace print with something a bit more useful.
Here is an example of commands I typed after starting the code above:
Command line
$ echo a > /tmp/file
$ echo b > /tmp/file
$ echo c >> /tmp/file
Program Output (From Python in a different shell)
b'a\n'
b'tail: /tmp/file: file truncated\n'
b'b\n'
b'c\n'
In the case that you want your main program to be responsive while you respond to the output of tail, start the loop in a separate thread. You should make this thread a daemon so that it does not prevent your program from exiting even if tail is not finished. You can have the thread open the subprocess or you can just pass in the standard output to it. I prefer the latter approach since it gives you more control in the main thread:
from subprocess import Popen, PIPE, STDOUT
from threading import Thread

def deal_with_stdout():
    for line in p.stdout:
        print(line)

p = Popen(["tail", "-f", "/tmp/file"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
t = Thread(target=deal_with_stdout, daemon=True)
t.start()
t.join()
The code here is nearly identical, with the addition of a new thread. I added a join() at the end so the program would behave well as an example (join waits for the thread to die before returning). You probably want to replace that with whatever processing code you would normally be running.
If your thread is complex enough, you may also want to inherit from Thread and override the run method instead of passing in a simple target.
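A minimal sketch of that subclassing approach (the StdoutReader name and stream parameter are illustrative, not part of the original answer):

from subprocess import Popen, PIPE, STDOUT
from threading import Thread

class StdoutReader(Thread):
    def __init__(self, stream):
        super().__init__(daemon=True)  # daemon: don't block program exit
        self.stream = stream

    def run(self):
        # replace print with your actual per-line processing
        for line in self.stream:
            print(line)

p = Popen(["tail", "-f", "/tmp/file"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
StdoutReader(p.stdout).start()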
I've got a command that I'm wrapping in script and spawning from a Python script using subprocess.Popen. I'm trying to make sure it dies if the user issues a SIGINT.
I could figure out if the process was interrupted in a least two ways:
A. Die if the wrapped command has a non-zero exit status (doesn't work, because script seems to always return 0)
B. Do something special with SIGINT in the parent Python script rather than simply interrupting the subprocess. I've tried the following:
import sys
import signal
import subprocess
def interrupt_handler(signum, frame):
    print "While there is a 'script' subprocess alive, this handler won't execute"
    sys.exit(1)

signal.signal(signal.SIGINT, interrupt_handler)

for n in range(10):
    print "Going to sleep for 2 seconds...Ctrl-C to exit the sleep cycles"
    # exit 1 if we make it to the end of our sleep
    cmd = ['script', '-q', '-c', "sleep 2 && (exit 1)", '/dev/null']
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    while True:
        if p.poll() is not None:
            break
    # Exiting on a non-zero exit status would suffice
    print "Exit status (script always exits zero, despite what happened to the wrapped command):", p.returncode
I'd like hitting Ctrl-C to exit the python script. What's happening instead is the subprocess dies and the script continues.
The subprocess is by default part of the same process group, and only the foreground process group receives signals generated by the terminal, so there are a couple of different solutions.
Setting stdin to a PIPE (instead of inheriting it from the parent process) will prevent the child process from receiving the signals associated with it:
subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
Detaching from the parent's process group, so the child no longer receives the signals:

import os

def preexec_function():
    os.setpgrp()

subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=preexec_function)
Explicitly ignoring the signal in the child process:

import signal

def preexec_function():
    signal.signal(signal.SIGINT, signal.SIG_IGN)

subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, preexec_fn=preexec_function)
This might however be overridden by the child process.
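As a side note, on Python 3.2+ the process-group trick can also be expressed without a preexec function via the start_new_session flag, which calls setsid() in the child:

subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, start_new_session=True)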
First thing: there is a send_signal() method on the Popen object. If you want to send a signal to a process you've launched, use this method to send it.
Second thing: there is a deeper problem with the way you're setting up communication with your subprocess and then, um, not communicating with it. You cannot safely tell the subprocess to send its output to subprocess.PIPE and then not read from the pipe. UNIX pipes are buffered (the capacity is typically 64 KiB on modern Linux), and if the subprocess fills up the buffer while the process on the other end doesn't read the buffered data, the subprocess will block (locking up, from an observer's perspective) on its next write to the pipe. So, the usual pattern when using subprocess.PIPE is to call communicate() on the Popen object.
It is not mandatory to use subprocess.PIPE if you want data back from the subprocess. A neat trick is to use the tempfile.TemporaryFile class to make an unnamed temp file (really, it opens a file and immediately deletes the inode from the file system, so you have access to the file but no one else can open it). You can do something like:
import time
import tempfile
from subprocess import Popen

with tempfile.TemporaryFile() as iofile:
    p = Popen(cmd, stdout=iofile, stderr=iofile)
    while True:
        if p.poll() is not None:
            break
        else:
            time.sleep(0.1)  # without some sleep, this polling is VERY busy...
Then you can read the contents of your temporary file (seek to the beginning of it before you do, to be sure you're at the beginning) when you know the subprocess has exited, instead of using pipes. The pipe buffering problem won't be a problem if the subprocess's output is going to a file (temporary or not).
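For instance, still inside the with block, once poll() reports the child has exited:

    iofile.seek(0)          # rewind to the start of the captured output
    output = iofile.read()  # everything the subprocess wrote, as bytes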
Here is a riff on your code sample that I think does what you want. The signal handler just repeats the signals being trapped by the parent process (in this example, SIGINT and SIGTERM) to all current subprocesses (there should only ever be one in this program) and sets a module-level flag saying to shutdown at the next opportunity. Since I'm using subprocess.PIPE I/O, I call communicate() on the Popen object.
#!/usr/bin/env python
from subprocess import Popen, PIPE
import signal
import sys
current_subprocs = set()
shutdown = False
def handle_signal(signum, frame):
    # send the signal received to the subprocesses
    global shutdown
    shutdown = True
    for proc in current_subprocs:
        if proc.poll() is None:
            proc.send_signal(signum)

signal.signal(signal.SIGINT, handle_signal)
signal.signal(signal.SIGTERM, handle_signal)

for _ in range(10):
    if shutdown:
        break
    cmd = ["sleep", "2"]
    p = Popen(cmd, stdout=PIPE, stderr=PIPE)
    current_subprocs.add(p)
    out, err = p.communicate()
    current_subprocs.remove(p)
    print "subproc returncode", p.returncode
And calling it (with a Ctrl-C in the third 2 second interval):
% python /tmp/proctest.py
subproc returncode 0
subproc returncode 0
^Csubproc returncode -2
This hack will work, but it's ugly...
Change the command to this:
success_flag = '/tmp/success.flag'
cmd = ['script', '-q', '-c', "sleep 2 && touch " + success_flag, '/dev/null']
And put (with import os at the top of the script)

if os.path.isfile(success_flag):
    os.remove(success_flag)
else:
    break  # the flag was never created, so the wrapped command failed

at the end of the for loop.
If you have no python processing to do after your process is spawned (like in your example), then the easiest way is to use os.execvp instead of the subprocess module. Your subprocess is going to completely replace your python process, and will be the one catching SIGINT directly.
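A minimal sketch reusing the command from the question (note that execvp replaces the current process, so no Python code after the call will run):

import os

# the current Python process becomes 'script'; SIGINT now goes straight to it
os.execvp('script', ['script', '-q', '-c', 'sleep 2 && (exit 1)', '/dev/null'])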
I found a -e switch in the script man page:

-e      Return the exit code of the child process. Uses the same format
        as bash termination: on signal termination, the exit code is 128+n.

The 128+n format means the child was terminated by signal n. So modifying your cmd to be
cmd = ['script', '-e', '-q', '-c', "sleep 2 && (exit 1)", '/dev/null']

and putting

if p.returncode == 130:
    break

at the end of the for loop seems to do what you want.
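For reference, the 130 comes from SIGINT being signal number 2:

import signal

# a child killed by signal n exits with status 128 + n
print(128 + signal.SIGINT)  # 130 after Ctrl-C (SIGINT == 2)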
If I spawn a new subprocess in python with a given command (let's say I start the python interpreter with the python command), how can I send new data to the process (via STDIN)?
Use the standard subprocess module. You use subprocess.Popen() to start the process, and it will run in the background (i.e. at the same time as your Python program). When you call Popen(), you probably want to set the stdin, stdout and stderr parameters to subprocess.PIPE. Then you can use the stdin, stdout and stderr fields on the returned object to write and read data.
Untested example code:
from subprocess import Popen, PIPE
# Run "cat", which is a simple Linux program that prints it's input.
process = Popen(['/bin/cat'], stdin=PIPE, stdout=PIPE)
process.stdin.write(b'Hello\n')
process.stdin.flush()
print(repr(process.stdout.readline())) # Should print 'Hello\n'
process.stdin.write(b'World\n')
process.stdin.flush()
print(repr(process.stdout.readline())) # Should print 'World\n'
# "cat" will exit when you close stdin. (Not all programs do this!)
process.stdin.close()
print('Waiting for cat to exit')
process.wait()
print('cat finished with return code %d' % process.returncode)
Don't.
If you want to send commands to a subprocess, create a pty and then fork the subprocess with one end of the pty attached to its STDIN.
Here is a snippet from some of my code:
import pty
from subprocess import Popen

RNULL = open('/dev/null', 'r')
WNULL = open('/dev/null', 'w')
master, slave = pty.openpty()
print parsedCmd
self.subp = Popen(parsedCmd, shell=False, stdin=RNULL,
                  stdout=WNULL, stderr=slave)
In this code, the pty is attached to stderr because it receives error messages rather than sending commands, but the principle is the same.
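To drive a subprocess's stdin the same way, attach the slave end to stdin and write to the master. A minimal sketch (using cat as a stand-in for a real interactive program):

import os
import pty
from subprocess import Popen

master, slave = pty.openpty()
p = Popen(['cat'], stdin=slave, stdout=slave, close_fds=True)
os.close(slave)               # the parent keeps only the master end
os.write(master, b'hello\n')  # this arrives on the child's stdin
print(os.read(master, 1024))  # cat's output (plus the pty's own echo)
p.terminate()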