If I spawn a new subprocess in python with a given command (let's say I start the python interpreter with the python command), how can I send new data to the process (via STDIN)?
Use the standard subprocess module. You use subprocess.Popen() to start the process, and it will run in the background (i.e. at the same time as your Python program). When you call Popen(), you probably want to set the stdin, stdout and stderr parameters to subprocess.PIPE. Then you can use the stdin, stdout and stderr fields on the returned object to write and read data.
Untested example code:
from subprocess import Popen, PIPE
# Run "cat", which is a simple Linux program that prints it's input.
process = Popen(['/bin/cat'], stdin=PIPE, stdout=PIPE)
process.stdin.write(b'Hello\n')
process.stdin.flush()
print(repr(process.stdout.readline())) # Should print 'Hello\n'
process.stdin.write(b'World\n')
process.stdin.flush()
print(repr(process.stdout.readline())) # Should print 'World\n'
# "cat" will exit when you close stdin. (Not all programs do this!)
process.stdin.close()
print('Waiting for cat to exit')
process.wait()
print('cat finished with return code %d' % process.returncode)
Don't.
If you want to send commands to a subprocess, create a pty and then fork the subprocess with one end of the pty attached to its STDIN.
Here is a snippet from some of my code:
import pty
from subprocess import Popen

RNULL = open('/dev/null', 'r')
WNULL = open('/dev/null', 'w')
master, slave = pty.openpty()
print(parsedCmd)
self.subp = Popen(parsedCmd, shell=False, stdin=RNULL,
                  stdout=WNULL, stderr=slave)
In this code, the pty is attached to stderr because it receives error messages rather than sending commands, but the principle is the same.
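A rough sketch of the stdin variant (untested, Unix-only; parsedCmd and WNULL are reused from the snippet above as placeholders): attach the pty slave to the child's stdin and write commands through the master.
import os
import pty
from subprocess import Popen

master, slave = pty.openpty()
subp = Popen(parsedCmd, shell=False, stdin=slave,
             stdout=WNULL, stderr=WNULL)
os.close(slave)                       # the child now holds its own copy of the slave
os.write(master, b'some command\n')   # shows up on the child's stdin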
My python script (python 3.4.3) calls a bash script via subprocess.
OutPST = subprocess.check_output(cmd,shell=True)
It works, but the problem is that I only get half of the data. The subprocess I call spawns another subprocess itself, and my guess is that when this "sub-subprocess" sends EOF, my program thinks that's it and check_output returns.
Does anyone have an idea how to get all the data?
You should use subprocess.run() unless you really need fine-grained control over talking to the process via its stdin (or need to do something else while the process is running instead of blocking until it finishes). It makes capturing output very easy:
from subprocess import run, PIPE
result = run(cmd, stdout=PIPE, stderr=PIPE)
print(result.stdout)
print(result.stderr)
If you want to merge stdout and stderr (like how you'd see it in your terminal if you didn't do any redirection), you can use the special destination STDOUT for stderr:
from subprocess import STDOUT
result = run(cmd, stdout=PIPE, stderr=STDOUT)
print(result.stdout)
I need to periodically check the stdout of a running process. For example, the process is tail -f /tmp/file, which is spawned in the python script. Then every x seconds, the stdout of that subprocess is written to a string and further processed. The subprocess is eventually stopped by the script.
To parse the stdout of a subprocess, I have used check_output until now, which doesn't seem to work here, as the process is still running and doesn't produce a definite output.
>>> from subprocess import check_output
>>> out = check_output(["tail", "-f", "/tmp/file"])
#(waiting for tail to finish)
It should be possible to use threads for the subprocesses, so that the output of multiple subprocesses may be processed (e.g. tail -f /tmp/file1, tail -f /tmp/file2).
How can I start a subprocess, periodically check and process its stdout and eventually stop the subprocess in a multithreading friendly way? The python script runs on a Linux system.
The goal is not to continuously read a file, the tail command is an example, as it behaves exactly like the actual command used.
edit: I didn't think this through, the file did not exist. check_output now simply waits for the process to finish.
edit2: An alternative method, with Popen and PIPE appears to result in the same issue. It waits for tail to finish.
>>> from subprocess import Popen, PIPE, STDOUT
>>> cmd = 'tail -f /tmp/file'
>>> p = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
>>> output = p.stdout.read()
#(waiting for tail to finish)
Your second attempt is 90% correct. The only issue is that you are attempting to read all of tail's stdout at once, after it has finished. However, tail is intended to run (indefinitely?) in the background, so you really want to read its stdout line by line:
from subprocess import Popen, PIPE, STDOUT
p = Popen(["tail", "-f", "/tmp/file"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
for line in p.stdout:
    print(line)
I have removed the shell=True and close_fds=True arguments. The first is unnecessary and potentially dangerous, while the second is just the default.
Remember that file objects are iterable over their lines in Python. The for loop will run until tail dies, but it will process each line as it appears, as opposed to read, which will block until tail dies.
If I create an empty file in /tmp/file, start this program and begin echoing lines into the file using another shell, the program will echo those lines. You should probably replace print with something a bit more useful.
Here is an example of commands I typed after starting the code above:
Command line
$ echo a > /tmp/file
$ echo b > /tmp/file
$ echo c >> /tmp/file
Program Output (From Python in a different shell)
b'a\n'
b'tail: /tmp/file: file truncated\n'
b'b\n'
b'c\n'
In the case that you want your main program to be responsive while you respond to the output of tail, start the loop in a separate thread. You should make this thread a daemon so that it does not prevent your program from exiting even if tail is not finished. You can have the thread open the subprocess or you can just pass in the standard output to it. I prefer the latter approach since it gives you more control in the main thread:
from subprocess import Popen, PIPE, STDOUT
from threading import Thread

def deal_with_stdout():
    for line in p.stdout:
        print(line)

p = Popen(["tail", "-f", "/tmp/file"], stdin=PIPE, stdout=PIPE, stderr=STDOUT)
t = Thread(target=deal_with_stdout, daemon=True)
t.start()
t.join()
The code here is nearly identical, with the addition of a new thread. I added a join() at the end so the program would behave well as an example (join waits for the thread to die before returning). You probably want to replace that with whatever processing code you would normally be running.
If your thread is complex enough, you may also want to inherit from Thread and override the run method instead of passing in a simple target.
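A minimal sketch of that subclass approach (reusing the p from above; the class name is made up):
from threading import Thread

class StdoutWatcher(Thread):
    def __init__(self, stdout):
        super().__init__(daemon=True)
        self.stdout = stdout

    def run(self):
        # Runs in the background once start() is called.
        for line in self.stdout:
            print(line)  # replace with your real processing

t = StdoutWatcher(p.stdout)
t.start()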
I have a C program (I'm not the author) that reads from stderr. I call it using subprocess.Popen as below. Is there any way to write to the stderr of the subprocess?
proc = subprocess.Popen(['./std.bin'],stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
Yes, maybe, but you should be aware of the irregularity of writing to the standard output or standard error of a subprocess. The vast majority of processes only write to these streams, and almost none actually try to read from them (because in almost all cases there's nothing to read).
What you could try is to open a socket and supply that as the stderr argument.
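A hedged sketch of that idea (untested, Unix-only; it assumes ./std.bin really does read commands from its stderr as described):
import socket
import subprocess

# Create a connected pair of sockets; hand one end to the child as its stderr.
parent_sock, child_sock = socket.socketpair()
proc = subprocess.Popen(['./std.bin'],
                        stdout=subprocess.PIPE,
                        stderr=child_sock.fileno())
child_sock.close()             # the child now holds its own copy
parent_sock.sendall(b'abc\n')  # this arrives on the child's fd 2 (stderr)
parent_sock.close()
print(proc.communicate()[0])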
What you most probably want to do is the opposite, to read from the stderr from the subprocess (the subprocesses writes, you read). That can be done by just setting it to subprocess.PIPE and then access the stderr attribute of the subprocess:
proc = subprocess.Popen(['./std.bin'], stderr=subprocess.PIPE)
for l in proc.stderr:
    print(l)
Note that you can specify more than one of stdin, stdout and stderr as subprocess.PIPE. This does not mean that they will be connected to the same pipe (subprocess.PIPE is not an actual file, just a placeholder indicating that a pipe should be created). If you do this, however, you should take care to avoid deadlocks; that can for example be done by using the communicate method (you can inspect the source of the subprocess module to see what communicate does if you want to do it yourself).
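A minimal sketch of that safe pattern, assuming a hypothetical child ./some_child that reads stdin and writes to both output streams:
import subprocess

proc = subprocess.Popen(['./some_child'],          # hypothetical command
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
# communicate() feeds stdin and drains stdout/stderr concurrently,
# so neither side can block forever on a full pipe.
out, err = proc.communicate(input=b'some input\n')
print(out)
print(err)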
If the child process reads from stderr (note: normally stderr is opened for output):
#!/usr/bin/env python
"""Read from *stderr*, write to *stdout* reversed bytes."""
import os
os.write(1, os.read(2, 512)[::-1])
then you could provide a pseudo-tty (so that all streams point to the same place), to work with the child as if it were a normal subprocess:
#!/usr/bin/env python
import sys
import pexpect # $ pip install pexpect
child = pexpect.spawnu(sys.executable, ['child.py'])
child.sendline('abc') # write to the child
child.expect(pexpect.EOF)
print(repr(child.before))
child.close()
Output
u'abc\r\n\r\ncba'
You could also use subprocess + pty.openpty() instead of pexpect.
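A rough standard-library equivalent (a sketch, untested; the echo you see in the output comes from the pty's line discipline): give the child one pty slave for all three streams and talk to it through the master.
import os
import pty
import sys
from subprocess import Popen

master, slave = pty.openpty()
p = Popen([sys.executable, 'child.py'],
          stdin=slave, stdout=slave, stderr=slave, close_fds=True)
os.close(slave)              # only the child needs the slave end now
os.write(master, b'abc\n')   # the child reads this from fd 2
print(os.read(master, 512))  # echoed input plus, once the child answers, its reply
os.close(master)
p.wait()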
Or you could write code specific to the weird stderr behavior:
#!/usr/bin/env python
import os
import sys
from subprocess import Popen, PIPE
r, w = os.pipe()
p = Popen([sys.executable, 'child.py'], stderr=r, stdout=PIPE,
universal_newlines=True)
os.close(r)
os.write(w, b'abc') # write to subprocess' stderr
os.close(w)
print(repr(p.communicate()[0]))
Output
'cba'
for line in proc.stderr:
    sys.stdout.write(line)
This prints the stderr of the subprocess (proc must be created with stderr=subprocess.PIPE, and with universal_newlines=True if you want text lines). Hope it answers your question.
I've been reading up on a lot of documentations but am still not sure what I'm doing wrong.
So I have a separate shell script that fires up a server other than the one I'm working on. Once the server is connected, I want to run ls and that's it. However, for some reason stdin=subprocess.PIPE is preventing the Popen command from terminating so that the next line can execute. For example, because the code is stuck I'll press Ctrl+C, but I'll get an error saying that wait() got a keyboard interrupt. Here's an example code:
import subprocess
from time import sleep
p1 = subprocess.Popen("run_server",
stdout = subprocess.PIPE,
stdin = subprocess.PIPE)
#sleep(1)
p1.wait()
p1.communicate(input = "ls")[0]"
If I replace p1.wait() with sleep(1), the communicate command does run and displays ls, but the script that runs the server detects EOF on the tty and terminates itself. I must have some kind of wait between Popen and communicate, because otherwise the server script terminates for the same reason.
p.wait() does not return until the child process is dead. While the parent script is stuck on the p.wait() call, your child process expects input at the same time -- deadlock. Then you press Ctrl+C in the shell; it sends SIGINT to all processes in the foreground process group, which kills both your parent Python script and the run_server subprocess.
You should drop the .wait() call:
#!/usr/bin/env python
from subprocess import Popen, PIPE
p = Popen(["run_server"], stdout=PIPE, stdin=PIPE)
output = p.communicate(b"ls")[0]
Or in Python 3.4+:
#!/usr/bin/env python3
from subprocess import check_output
output = check_output(["run_server"], input=b"ls")
If you want to run several commands then pass them all at once:
input = "\n".join(["ls", "cmd2", "etc"]) # with universal_newlines=True
As you know from reading the subprocess docs, p.communicate() waits for the child process to exit and therefore should be called at most once. As with .wait(), the child process is dead after .communicate() has returned.
The fact that your traceback says you were stuck in wait() when you pressed Ctrl+C means the next line is executing: the next line is wait(). wait() won't return until your p1 process returns. However, it seems your p1 process won't return until you send it a command, 'ls' in your case. Try sending the command and then calling wait():
import subprocess
from time import sleep
p1 = subprocess.Popen("run_server",
stdout = subprocess.PIPE,
stdin = subprocess.PIPE)
#sleep(1)
p1.communicate(input = "ls")[0]
p1.wait()
Otherwise, make sure your "run_server" script terminates so your script can advance past p1.wait().
When I execute a python script using subprocess.Popen(script, shell=True) in another python script, is it possible to alert python when the script completes running before executing other functions?
On a side note, can I get real-time output of the executed python script?
I can only get output from it by doing command > output.txt, but that's only after the whole process ends. stdout does not give any output.
When you create a subprocess with Popen, it returns a subprocess.Popen object that has several methods for accessing subprocess status and data:
You can use poll() to determine whether a subprocess has finished: it returns None while the process is still running, and the process's return code once it has ended.
Output from the script can be retrieved with communicate() (note that communicate() itself waits for the process to finish).
You can combine these two to create a script that monitors output from a subprocess and waits until it's ready as follows:
import subprocess

p = subprocess.Popen(["python", "script.py"], stdout=subprocess.PIPE)
while p.poll() is None:
    # communicate() blocks until the process exits and returns all of its output.
    (stdout, stderr) = p.communicate()
    print(stdout)
You want to wait for the Popen to end? Have you tried simply this:
popen = subprocess.Popen(script, shell=True)
popen.wait()
Have you considered importing the external Python script as a module instead of spawning a subprocess?
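For instance, assuming the other script is importable as other_script and exposes a main() function (both names are hypothetical):
import other_script   # hypothetical module name for the external script

other_script.main()   # runs in-process; no subprocess needed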
As for the real-time output: try python -u ...
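A minimal sketch of reading that output line by line (script.py stands in for your actual script):
from subprocess import Popen, PIPE

# -u makes the child's stdout unbuffered, so lines arrive as they are printed.
p = Popen(["python", "-u", "script.py"], stdout=PIPE)
for line in p.stdout:
    print(line)   # process each line as soon as the child emits it
p.wait()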