Recently I've been messing around with Popen. I spawned a process in the background that writes its output to a TemporaryFile:
import subprocess
import tempfile

f = tempfile.TemporaryFile()
p = subprocess.Popen(["gatttool"], stdin=subprocess.PIPE, stdout=f)
It works like this: I send a command to the process via stdin and read the temporary file a bit later. And it's non-blocking, so I can perform other tasks.
The problem is that gatttool sometimes generates output by itself (for example, notifications), and I'm looking for a way to read that output without blocking on the temporary file.
My questions:
1) Is it safe to read the output (around 50 lines) from the TemporaryFile and hope that the subprocess gracefully waits for me to read that data, or will it terminate?
2) Is there an elegant way to have a callback function called on every event on the TemporaryFile (instead of a thread that wakes up every second and reads the data)?
Actually the resolution is very simple: create a pipe and use it as gatttool's output. The read end of that pipe goes to a thread, which reads the output line by line and parses each line. I checked it and it works. Please lock this question down.
import os
import subprocess

# Create a pipe. "gatt_in" is where "gatttool" will be dumping its output.
# We read that output from the other end of the pipe, "gatt_out".
gatt_out, gatt_in = os.pipe()
gatt_process = subprocess.Popen(["gatttool", "your parameters"], stdin=subprocess.PIPE,
                                stdout=gatt_in)
Now every time I want to send a command to gatttool I do this:
gatt_process.stdin.write(b"some command\n")
gatt_process.stdin.flush()
The result of this command will appear in gatt_out. In my case, this is handled in another thread.
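A minimal sketch of that reader thread (my illustration, not the author's exact code; `handle_line` is a hypothetical stand-in for the real parser, and writing to `gatt_in` below simulates what gatttool would do):

```python
import os
import threading

lines = []

def handle_line(line):
    # Placeholder parser -- the real code would match notifications here.
    lines.append(line)

def watch_pipe(fd):
    # Wrap the raw read end of the pipe in a file object so we can
    # iterate over it line by line; readline() blocks only this thread.
    with os.fdopen(fd) as pipe:
        for line in pipe:
            handle_line(line.rstrip("\n"))

gatt_out, gatt_in = os.pipe()
reader = threading.Thread(target=watch_pipe, args=(gatt_out,), daemon=True)
reader.start()

# In the real program gatttool's stdout is attached to gatt_in;
# simulate a notification here:
os.write(gatt_in, b"Notification handle = 0x0025\n")
os.close(gatt_in)   # EOF ends the reader loop
reader.join()
```

The daemon flag means a stuck reader won't keep the interpreter alive at exit.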
To provide input to and read output from a child process, you could use subprocess.PIPE:
from subprocess import Popen, PIPE

p = Popen(['gatttool', 'arg 1', 'arg 2'], stdin=PIPE, stdout=PIPE)

# provide input
p.stdin.write(b'input data')
p.stdin.close()

# read output incrementally (in "real time")
for line in iter(p.stdout.readline, b''):
    print(line.decode(), end='')
p.stdout.close()
p.wait()
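If the main thread must never block at all, a common variant (a sketch of mine, with `cat` standing in for the real child) is to pump the pipe into a queue from a background thread and poll the queue:

```python
from queue import Queue, Empty
from subprocess import Popen, PIPE
from threading import Thread

def enqueue_output(pipe, queue):
    # readline() may block, but only in this background thread;
    # the main thread polls the queue instead.
    for line in iter(pipe.readline, b''):
        queue.put(line)
    pipe.close()

p = Popen(['cat'], stdin=PIPE, stdout=PIPE)
q = Queue()
Thread(target=enqueue_output, args=(p.stdout, q), daemon=True).start()

p.stdin.write(b'hello\n')
p.stdin.close()

# Poll without blocking the main thread (beyond the chosen timeout):
try:
    line = q.get(timeout=5)
except Empty:
    line = None
p.wait()
```

`q.get_nowait()` works too if you want a pure non-blocking poll inside an event loop.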
I would like to launch a process, let it run for some time, and then read its output (the last output is fine; I don't need everything). I tried to use the following code:
def connect(interface):
    proc = subprocess.Popen(['mycommand', interface], stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    time.sleep(10)
    proc.terminate()
    output, err = proc.communicate()
    print(output)
Unfortunately, it gets stuck every time it reads the output. I also tried using proc.read() instead of communicate(), but it didn't solve the problem.
What's the best way to handle output in this case?
Many thanks in advance!
After some research, I found that the issue comes from buffering. As the subprocess module documentation states:
This will deadlock when using stdout=PIPE or stderr=PIPE and the child
process generates enough output to a pipe such that it blocks waiting
for the OS pipe buffer to accept more data. Use Popen.communicate()
when using pipes to avoid that.
There are two solutions:
Use the bufsize argument to set the buffer to a size large enough to store all the output generated during the time you wait.
Use readline() to read the output and drain the buffer while you wait. If you don't need the output, just discard it.
I chose the second approach; my code is as follows:
def connect(interface):
    stdout = []
    timeout = 60
    start_time = time.time()
    proc = subprocess.Popen(['mycommand', interface], stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, bufsize=1,
                            universal_newlines=True)
    while time.time() < start_time + timeout:
        line = proc.stdout.readline()
        stdout.append(line)
    proc.terminate()
    print(''.join(stdout))
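For completeness, a variant worth knowing (a sketch assuming Python 3.3+, where communicate() accepts a timeout): let communicate() drain the pipes for you and terminate the child on TimeoutExpired. `cmd` here is any command list, not a specific tool:

```python
import subprocess

def connect_with_timeout(cmd, timeout=10):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            universal_newlines=True)
    try:
        # communicate() reads both pipes concurrently, so the child
        # can never deadlock on a full pipe buffer.
        output, err = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.terminate()
        # Collect whatever the child managed to write before terminate().
        output, err = proc.communicate()
    return output
```

This avoids the manual readline loop entirely, at the cost of getting the output only once the timeout fires or the process exits.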
I would like to run several commands in the same shell. After some research I found that I could keep a shell open using the process object returned by Popen, then write to its stdin and read from its stdout. I tried implementing it as such:
process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE, universal_newlines=True)
process.stdin.write('ls -al\n')
process.stdin.flush()
out = ' '
while out != '':
    out = process.stdout.readline().rstrip('\n')
    print(out)
Not only is my solution ugly, it doesn't work. out is never empty because it hangs on the readline(). How can I successfully end the while loop when there is nothing left to read?
Use iter to read data in real time:
for line in iter(process.stdout.readline, ""):
    print(line, end="")
If you just want to write to stdin and get the output, you can use communicate, which makes the process end:
process = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE, universal_newlines=True)
out, err = process.communicate('ls -al\n')
Or simply get the output using check_output:
from subprocess import check_output
out = check_output(["ls", "-al"])
The command you're running in a subprocess is sh, so the output you're reading is sh's output. Since you didn't indicate to the shell it should quit, it is still alive, thus its stdout is still open.
You can perhaps write exit to its stdin to make it quit, but be aware that in any case, you get to read things you don't need from its stdout, e.g. the prompt.
Bottom line, this approach is flawed to start with...
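If you must keep a single shell alive across commands anyway, one workaround for the "when is the command done?" problem is the sentinel trick (a sketch of mine, not from the answers above): have the shell echo a known marker after each command and read until you see it:

```python
from subprocess import Popen, PIPE

SENTINEL = "__CMD_DONE__"

def run_in_shell(shell, cmd):
    # Ask the shell to echo a marker once the command finishes,
    # then read lines until the marker shows up.
    shell.stdin.write(cmd + "; echo " + SENTINEL + "\n")
    shell.stdin.flush()
    lines = []
    for line in iter(shell.stdout.readline, ""):
        if line.rstrip("\n") == SENTINEL:
            break
        lines.append(line.rstrip("\n"))
    return lines

sh = Popen(["/bin/sh"], stdin=PIPE, stdout=PIPE, universal_newlines=True)
out1 = run_in_shell(sh, "echo one")
out2 = run_in_shell(sh, "echo two")
sh.stdin.close()
sh.wait()
```

This keeps shell state (cd, environment variables) between commands, though it can still block if a single command floods the pipe before printing the marker.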
I'm writing a script that executes a list of processes and concatenates all of their output into the input of another process. I've condensed my script into a test case using echo and cat as stand-ins for the actual processes.
#!/usr/bin/python
import os,subprocess
(pipeOut, pipeIn) = os.pipe()
catProcess = subprocess.Popen("/bin/cat", stdin = pipeOut)
for line in ["First line", "Last line"]:
    subprocess.call(["/bin/echo", line], stdout=pipeIn)
os.close(pipeIn)
os.close(pipeOut)
catProcess.wait()
The program works as expected, except that the call to catProcess.wait() hangs (presumably because it's still waiting for more input). Passing close_fds=True to Popen or call doesn't seem to help, either.
Is there a way to close catProcess's stdin so it exits gracefully? Or is there another way to write this program?
Passing close_fds=True to catProcess helps on my system.
You don't need to create the pipe explicitly:
#!/usr/bin/python
from subprocess import Popen, PIPE, call
cat = Popen("cat", stdin=PIPE)
for line in ["First line", "Last line"]:
    call(["echo", line], stdout=cat.stdin)
cat.communicate() # close stdin, wait
From Python on Linux, I want to start a subprocess, wait until it prints one line on its standard output, then continue with the rest of my Python script. If I do:
from subprocess import *
proc = Popen(my_process, stdout=PIPE)
proc.stdout.readline()
# Now continue with the rest of my script
Will my process eventually block if it writes a lot to its stdout, because the pipe fills up?
Ideally, I'd like the rest of the output to go to the standard output of my script. Is there a way to change the stdout of the subprocess from PIPE to my standard output after it starts?
I'm guessing I'll have to spawn a separate thread just to read from my process's stdout and print to my own, but I'd like to avoid that if there's a simpler solution.
You could stop the process after the readline:
proc.terminate()
The readline method should not block even if the line is particularly large; it pulls data directly out of the pipe buffer and into userspace. If the data remained in the pipe buffer, there's a good chance it would block the spawned process, but Python has to take the data out of the pipe buffer before it can examine it for the end of line.
Or you could read characters off the pipe directly, which prevents any possible buffering issues:
from subprocess import *

proc = Popen(my_process, stdout=PIPE)
c = b''
while c != b'\n':
    c = proc.stdout.read(1)
    if not c:  # EOF arrived before any newline
        break
# Now complete the rest of the program....
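As for sending the rest of the child's output to your own stdout after that first line: a pipe cannot be re-pointed at your terminal once created, so the forwarding thread is hard to escape. A sketch (`printf` here is just a stand-in for the real process):

```python
import sys
import threading
from subprocess import Popen, PIPE

proc = Popen(["printf", "ready\\nmore output\\n"], stdout=PIPE)

first_line = proc.stdout.readline()   # block until the first line arrives

def forward(pipe):
    # Copy everything else from the pipe to our own stdout.
    for chunk in iter(lambda: pipe.read(4096), b""):
        sys.stdout.buffer.write(chunk)
        sys.stdout.buffer.flush()
    pipe.close()

t = threading.Thread(target=forward, args=(proc.stdout,), daemon=True)
t.start()
proc.wait()
t.join()
```

The thread costs almost nothing here: it spends its life blocked in read() and exits when the child closes the pipe.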
If I spawn a new subprocess in python with a given command (let's say I start the python interpreter with the python command), how can I send new data to the process (via STDIN)?
Use the standard subprocess module. You use subprocess.Popen() to start the process, and it will run in the background (i.e. at the same time as your Python program). When you call Popen(), you probably want to set the stdin, stdout and stderr parameters to subprocess.PIPE. Then you can use the stdin, stdout and stderr fields on the returned object to write and read data.
Untested example code:
from subprocess import Popen, PIPE
# Run "cat", a simple Linux program that copies its input to its output.
process = Popen(['/bin/cat'], stdin=PIPE, stdout=PIPE)
process.stdin.write(b'Hello\n')
process.stdin.flush()
print(repr(process.stdout.readline())) # Should print b'Hello\n'
process.stdin.write(b'World\n')
process.stdin.flush()
print(repr(process.stdout.readline())) # Should print b'World\n'
# "cat" will exit when you close stdin. (Not all programs do this!)
process.stdin.close()
print('Waiting for cat to exit')
process.wait()
print('cat finished with return code %d' % process.returncode)
Don't.
If you want to send commands to a subprocess, create a pty and then fork the subprocess with one end of the pty attached to its STDIN.
Here is a snippet from some of my code:
import pty
from subprocess import Popen

RNULL = open('/dev/null', 'r')
WNULL = open('/dev/null', 'w')
master, slave = pty.openpty()
print(parsedCmd)
self.subp = Popen(parsedCmd, shell=False, stdin=RNULL,
                  stdout=WNULL, stderr=slave)
In this code, the pty is attached to stderr because it receives error messages rather than sending commands, but the principle is the same.
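A minimal sketch of the stdin direction described above (my illustration, not the author's code; it assumes a Unix system, and `cat` stands in for a real interactive program):

```python
import os
import pty
from subprocess import Popen, PIPE

# The slave end of the pty becomes the child's stdin; the parent
# writes "commands" into the master end.
master, slave = pty.openpty()

proc = Popen(["cat"], stdin=slave, stdout=PIPE)
os.close(slave)                  # parent keeps only the master end

os.write(master, b"hello over a pty\n")
line = proc.stdout.readline()    # cat copies the line to its stdout

os.write(master, b"\x04")        # Ctrl-D: EOF in canonical mode
proc.wait()
os.close(master)
```

Unlike a plain pipe, the pty makes the child believe it is talking to a terminal, which is what coaxes many interactive programs into line-by-line behavior.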