Python subprocess: get all stdout data after process has terminated

I am using subprocess in Python to invoke another executable, write some data to its stdin (closing the stream once everything is written, which is how the subprocess knows it has received everything), and then retrieve all of its stdout data after it has terminated - which it will after some period of time.
In pseudo code:
Open subprocess
write to its stdin
let program finish
retrieve anything it spat out to stdout
I have tried the following:
import subprocess
p = subprocess.Popen([cmd],
                     stdout=subprocess.PIPE, stdin=subprocess.PIPE)
p.stdin.write(str(data))
p.stdin.close()
p.wait()
result = p.communicate()[0]
However I get the following stack trace:
result = p.communicate()[0]
File "/usr/lib64/python2.7/subprocess.py", line 800, in communicate .
return self._communicate(input)
File "/usr/lib64/python2.7/subprocess.py", line 1396, in _communicate
self.stdin.flush()
ValueError: I/O operation on closed file
Please advise.

Use communicate: it writes your data to stdin, closes the stream, and reads everything from stdout until the process exits. Calling it after you have already closed stdin yourself is what raises the ValueError.
import subprocess
p = subprocess.Popen([cmd], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
result = p.communicate(data)[0]
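As a self-contained illustration (a sketch using tr as a stand-in, since the original cmd isn't shown):
import subprocess

# tr upper-cases its stdin here; substitute your own command.
p = subprocess.Popen(['tr', 'a-z', 'A-Z'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
result = p.communicate(b'some data\n')[0]  # b'...' works on Python 2 and 3
print(result)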

Related

Running a list of command strings with subprocess popen and getting the output

I'm trying to run multiple UNIX commands in a Python script, like this:
import subprocess
import time

cmds = ['sleep 3', 'uptime', 'time ls -l /']
p = subprocess.Popen(cmds, stdout=subprocess.PIPE, shell=True)
while p.poll() is None:
    time.sleep(0.5)
tempdata = p.stdout.read()
print(tempdata)
However my output does not contain all output and doesn't seem to run all the commands. Setting shell=False also causes an error.
Traceback (most recent call last):
File "task1.py", line 32, in ?
p = subprocess.Popen(commands,stdout=subprocess.PIPE,stderr=subprocess.PIPE,shell=False)
File "/usr/lib64/python36/subprocess.py", line 550, in __init__
errread, errwrite)
File "/usr/lib64/python36/subprocess.py", line 996, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
When you create a new process, you don't pass it a list of commands to run; rather, you pass it a single command -- either as a string (with shell=True) or as a list of args (with shell=False).
import subprocess

cmds = ['sleep 1', 'uptime', 'ls -l /']
for cmd in cmds:
    stdout = subprocess.check_output(cmd, shell=True)
    print('\n# {}'.format(cmd))
    print(stdout)
If you just want to collect stdout, subprocess.check_output() might be simpler than Popen() -- but either approach will work, depending on what you need to do with the process.
Your problem is 'sleep 3': it is what causes the error in your traceback; when I removed it, the rest worked.
To run all of these:
cmds = ['sleep 3', 'uptime','time ls -l /']
You have to call Popen for each of them:
import subprocess
import time

for cmd in cmds:
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
    while p.poll() is None:
        time.sleep(0.5)
    output = p.stdout.read()
Or simpler:
for cmd in cmds:
    # check_output captures stdout itself; it does not accept a stdout= argument
    output = subprocess.check_output(cmd, shell=True)
As for the second question: this captures everything written to stdout. To capture stderr as well, redirect it into subprocess.PIPE too, as sketched below.
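A minimal sketch of that (note that the shell's time builtin writes its report to stderr, which is presumably why it was missing from the captured output):
import subprocess

# Capture stderr separately; stderr=subprocess.STDOUT would merge it into stdout instead.
p = subprocess.Popen('time ls -l /', shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = p.communicate()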

Reading output from terminal using subprocess.run

I'm writing a Python script to parse a value from a JSON file, run a tool called Googler with a couple of arguments (including the value from the JSON file), and then save the tool's output to a file (CSV preferred, but that's for another day).
So far the code is:
import json
import os
import subprocess
import time
with open("test.json") as json_file:
json_data = json.load(json_file)
test = (json_data["search_term1"]["apparel"]["biba"])
#os.system("googler -N -t d1 "+test) shows the output, but can't write to a file.
result= subprocess.run(["googler", "-N","-t","d1",test], stdout=subprocess.PIPE, universal_newlines=True)
print(result.stdout)
When I run the above script, nothing happens, the terminal just sits blank until I send a keyboard interrupt and then I get this error:
Traceback (most recent call last):
File "script.py", line 12, in <module>
result= subprocess.run(["googler", "-N","-t","d1",test], stdout=subprocess.PIPE, universal_newlines=True)
File "/usr/lib/python3.5/subprocess.py", line 695, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "/usr/lib/python3.5/subprocess.py", line 1059, in communicate
stdout = self.stdout.read()
KeyboardInterrupt
I tried replacing the test variable with a string - same error. The same line works with something like ["ls", "-l", "/dev/null"].
How do I extract the output of this tool and write it to a file?
Your googler command works in interactive mode. It never exits, so your program is stuck.
You want googler to run the search, print the output and then exit.
From the docs, I think --np (or --noprompt) is the right parameter for that. I didn't test it.
result = subprocess.run(["googler", "-N", "-t", "d1", "--np", test], stdout=subprocess.PIPE, universal_newlines=True)
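To then get the output into a file, as the question asked, writing result.stdout should be all that's needed (a sketch; the filename is just an example):
with open("results.txt", "w") as f:  # hypothetical output path
    f.write(result.stdout)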

Analyze output of subprocess.Popen when piping to file

How can I analyze the output of my command, which I pipe to file, in real-time while the file is written?
This is what I have so far:
with open('output.log', 'w') as out:
    command = ['pg_dump', 'myDB']
    p = subprocess.Popen(command, stdout=out, stderr=subprocess.STDOUT)
    for line in iter(p.stdout.readline, b''):
        sys.stdout.flush()
        print(">>> " + line.rstrip())
But this generates the following error:
Traceback (most recent call last):
File "pipe-to-file.py", line 95, in <module>
for line in iter(p.stdout.readline, b''):
AttributeError: 'NoneType' object has no attribute 'readline'
Why is p.stdout equal to None here?
You have to use subprocess.PIPE for the stdout argument in order to get a file object; otherwise it will be None. That's why p.stdout is None in your code - you redirected the child's stdout to the log file instead of a pipe.
From the docs:
Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
If you want to write stdout to a file while analyzing the output then you can use something like this.
with open('log', 'ab+') as out:
    p = subprocess.Popen('ls', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    std_out, std_error = p.communicate()

    # Do something with std_out
    # ...

    # Write to the file
    out.write(std_out)

    # You can use `splitlines()` to iterate over the lines.
    for line in std_out.splitlines():
        print(line)
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = p.communicate()
Now you have your output and error.
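Neither snippet analyzes the output in real time while the file is written, though, which is what was asked. A minimal sketch of that (using the question's pg_dump command, reading the pipe line by line and writing the log file ourselves instead of handing it to Popen):
import subprocess
import sys

with open('output.log', 'wb') as out:
    p = subprocess.Popen(['pg_dump', 'myDB'],
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for line in iter(p.stdout.readline, b''):
        out.write(line)                         # keep the on-disk copy
        print(">>> " + line.rstrip().decode())  # analyze as it arrives
        sys.stdout.flush()
    p.wait()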

python subprocess Popen hangs

OpenSolaris derivate (NexentaStor), python 2.5.5
I've seen numerous examples, and many seem to indicate that the problem is a deadlock. I'm not writing to stdin, so I think the problem is that one of the shell commands exits prematurely.
What's executed in Popen is:
ssh <remotehost> "zfs send tank/dataset#snapshot | gzip -9" | gzip -d | zfs recv tank/dataset
In other words: log in to a remote host, send a replication stream of a storage volume piped through gzip, and pipe that into zfs recv, which writes it to a local datastore.
I've seen the explanation about buffers, but I'm definitely not filling those up, and gzip is bailing out prematurely, so I think process.wait() never sees an exit.
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
process.wait()
if process.returncode == 0:
    for line in process.stdout:
        stdout_arr.append([line])
    return stdout_arr
else:
    return False
Here's what happens when I run it and interrupt it:
# ./zfs_replication.py
gzip: stdout: Broken pipe
^CKilled by signal 2.
Traceback (most recent call last):
File "./zfs_replication.py", line 155, in <module>
Exec(zfsSendRecv(dataset, today), LOCAL)
File "./zfs_replication.py", line 83, in Exec
process.wait()
File "/usr/lib/python2.5/subprocess.py", line 1184, in wait
pid, sts = self._waitpid_no_intr(self.pid, 0)
File "/usr/lib/python2.5/subprocess.py", line 1014, in _waitpid_no_intr
return os.waitpid(pid, options)
KeyboardInterrupt
I also tried the Popen.communicate() method, but that too hangs if gzip bails out. In this case the last part of my command (zfs recv) exits because the local dataset has been modified, so the incremental replication stream will not be applied. Even once that is fixed, there has got to be a way of dealing with gzip's broken pipes?
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
    dosomething()
else:
    dosomethingelse()
And when run:
cannot receive incremental stream: destination tank/repl_test has been modified
since most recent snapshot
gzip: stdout: Broken pipe
^CKilled by signal 2.
Traceback (most recent call last):
File "./zfs_replication.py", line 154, in <module>
Exec(zfsSendRecv(dataset, today), LOCAL)
File "./zfs_replication.py", line 83, in Exec
stdout, stderr = process.communicate()
File "/usr/lib/python2.5/subprocess.py", line 662, in communicate
stdout = self._fo_read_no_intr(self.stdout)
File "/usr/lib/python2.5/subprocess.py", line 1025, in _fo_read_no_intr
return obj.read()
KeyboardInterrupt

data stream python subprocess.check_output exe from another location

I would like to run an exe from this directory: /home/pi/pi_sensors-master/bin/Release/
The exe is normally run by typing mono i2c.exe, and it runs fine.
I would like to get this output in python which is in a completely different directory.
I know that I should use subprocess.check_output to capture the output as a string.
I tried to implement this in python:
import subprocess
import os
cmd = "/home/pi/pi_sensors-master/bin/Release/"
os.chdir(cmd)
process=subprocess.check_output(['mono i2c.exe'])
print process
However, I received this error:
The output would usually be a data stream with a new number each time, is it possible to capture this output and store it as a constantly changing variable?
Any help would be greatly appreciated.
Your command syntax is incorrect, which is actually generating the exception. You want to call mono i2c.exe, so your command list should look like:
subprocess.check_output(['mono', 'i2c.exe']) # Notice the comma separation.
Try the following:
import subprocess

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
print subprocess.check_output(['mono', executable])
The sudo is not a problem as long as you give the full path to the file and you are sure that running the mono command as sudo works.
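If the exe cares about its working directory, the cwd parameter avoids the os.chdir dance (a sketch, assuming the same paths as above):
import subprocess

# cwd= runs the child in that directory without changing the parent's.
output = subprocess.check_output(
    ['mono', 'i2c.exe'],
    cwd="/home/pi/pi_sensors-master/bin/Release/")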
I can generate the same error by running ls -l the same way:
>>> subprocess.check_output(['ls -l'])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/subprocess.py", line 537, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
However when you separate the command from the options:
>>> subprocess.check_output(['ls', '-l'])
# outputs my entire folder contents which are quite large.
I strongly advise you to use a subprocess.Popen object to deal with external processes, and Popen.communicate() to get the data from both stdout and stderr. This way you should not run into blocking problems.
import subprocess

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
proc = subprocess.Popen(['mono', executable],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    # timeout= requires Python 3.3+; times out after 15 seconds.
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
Or you can read the output in a loop if you want a 'data stream' of sorts - an approach adapted from this question:
from subprocess import Popen, PIPE

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
p = Popen(["mono", executable], stdout=PIPE, bufsize=1)
for line in iter(p.stdout.readline, b''):
    print line,
p.communicate()  # close p.stdout, wait for the subprocess to exit
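One caveat with that loop: in Python 3, bufsize=1 (line buffering) only takes effect in text mode, so a Python 3 variant might look like this (a sketch, not from the original answer):
from subprocess import Popen, PIPE

executable = "/home/pi/pi_sensors-master/bin/Release/i2c.exe"
p = Popen(["mono", executable], stdout=PIPE, bufsize=1,
          universal_newlines=True)  # text mode, so line buffering applies
for line in p.stdout:
    print(line, end='')
p.wait()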
