Capture stdout of subprocess.Popen when an exception is raised - Python

How can I capture the stdout/stderr of a subprocess.Popen call to CMD when an exception is raised?
Code snippet:
p_cmd = subprocess.Popen(CMD, bufsize=0, shell=True, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(cmd_stdo, cmd_stde) = p_cmd.communicate(timeout=60)
If CMD times out by running longer than 60 seconds, how can I still get its output into cmd_stdo or cmd_stde?
I tried reading them in a try/except block, but they are empty.
And I am quite sure that CMD produces output when it runs.

In case of a timeout, the variables cmd_stdo and cmd_stde are never assigned, because the exception is raised before the assignment happens.
To make sure stdout and stderr are captured even in case of the exception, I'd capture to (temporary) files and read them into variables afterwards.
import subprocess
from tempfile import TemporaryFile as Tmp
CMD = 'echo "before sleep" ; sleep 7 ; echo "after sleep"'

with Tmp() as out, Tmp() as err:
    p_cmd = subprocess.Popen(CMD, bufsize=0, shell=True, stdin=None,
                             stdout=out, stderr=err)
    timed_out = False
    try:
        p_cmd.wait(timeout=5)
    except subprocess.TimeoutExpired:
        timed_out = True
        p_cmd.kill()  # stop the child, otherwise it keeps running
    out.seek(0)  # rewind the files before reading them back
    err.seek(0)
    str_out = out.read()
    str_err = err.read()

print('Has timed out:', timed_out)
print('Stdout:', str_out)
print('Stderr:', str_err)
(When trying this out, play with sleep and timeout times in the code to make time-out happen or not)
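For reference, the pattern shown in the subprocess documentation handles this without temporary files: on timeout, kill the child and call communicate() a second time to drain whatever reached the pipes before the kill. A minimal sketch along those lines:
import subprocess

p = subprocess.Popen(CMD, bufsize=0, shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    cmd_stdo, cmd_stde = p.communicate(timeout=60)
except subprocess.TimeoutExpired:
    p.kill()  # stop the child, then drain what it already wrote to the pipes
    cmd_stdo, cmd_stde = p.communicate()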

Since I don't have enough reputation I have to ask for clarification this way.
Are you trying to run it on Windows or Linux?

Related

Subprocess Timeout in Python

I am trying to check the headers of a website, and the code works perfectly fine. For the case where the website does not respond within a reasonable amount of time, I added a timeout, and that works too.
Unfortunately the command does not take parameters, and I am stuck there. Any suggestions would be highly appreciated.
import subprocess
from threading import Timer
kill = lambda process: process.kill()
c1='curl -H'
cmd = [c1, 'google.com']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
my_timer = Timer(10, kill, [p])
try:
    my_timer.start()
    stdout, stderr = p.communicate()
    print stdout
finally:
    print stderr
    my_timer.cancel()
Error while running:
OSError: [Errno 2] No such file or directory
However if I change c1 as shown below, it works fine.
c1='curl'
With
c1='curl'
use
cmd = [c1, '-H','google.com']
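More generally, shlex.split turns a full command string into the argument list Popen expects, so each option and its value become separate elements (the header value below is purely a hypothetical example):
import shlex
import subprocess

# hypothetical header value, for illustration only
cmd = shlex.split("curl -H 'Cache-Control: no-cache' google.com")
# -> ['curl', '-H', 'Cache-Control: no-cache', 'google.com']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)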

Capturing *all* terminal output of a program called from Python

I have a program which can be executed as
./install.sh
This installs a bunch of stuff, with quite a lot of activity happening on screen.
Now, I am trying to execute it via
p = subprocess.Popen(executable, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
With the hope that all the activity happening on the screen is captured in out (or err). However, content is printed directly to the terminal while the process is running, and not captured into out or err, which are both empty after the process is run.
What could be happening here? How can this content be captured?
In general, what you're doing is already sufficient to channel all output to your variables.
One exception to that is if the program you're running is using /dev/tty to connect directly to its controlling terminal, and emitting output through that terminal rather than through stdout (FD 1) and stderr (FD 2). This is commonly done for security-sensitive IO such as password prompts, but rarely seen otherwise.
As a demonstration that this works, you can copy-and-paste the following into a Python shell exactly as given:
import subprocess
executable = ['/bin/sh', '-c', 'echo stdout; echo stderr >&2']
p = subprocess.Popen(executable, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print "---"
print "output: ", out
print "stderr: ", err
...by contrast, for a demonstration of the case that doesn't work:
import subprocess
executable = ['/bin/sh', '-c', 'echo uncapturable >/dev/tty']
p = subprocess.Popen(executable, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print "---"
print "output: ", out
In this case, content is written to the TTY directly, not to stdout or stderr. This content cannot be captured without using a program (such as script or expect) that provides a fake TTY. So, to use script:
import subprocess
executable = ['script', '-q', '/dev/null',
              '/bin/sh', '-c', 'echo uncapturable >/dev/tty']
p = subprocess.Popen(executable, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print "---"
print "output: ", out

Get stdout in case of success and stdout+stderr in case of failure

try:
    output = subprocess.check_output(command, shell=True)
except subprocess.CalledProcessError as exc:
    logger.error('There was an error while ...: \n%s',
                 exc.output)
    raise
What is the easiest way to do the following:
Call a process using subprocess module.
If the program exited normally, put into output variable its standard output.
If the program exited abnormally, get its standard output and error.
import subprocess
process = subprocess.Popen(command, shell=True,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.wait()  # wait for the command to finish
                # (beware: with PIPE, this can deadlock if the child fills the pipe buffer)
output = process.stdout.read()
if process.poll():  # a non-zero exit code signals failure
    error = process.stderr.read()
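On Python 3.7+, subprocess.run with capture_output=True expresses the same idea more compactly and avoids the wait()-with-PIPE deadlock, since it uses communicate() internally; a sketch:
import subprocess

result = subprocess.run(command, shell=True, capture_output=True)
output = result.stdout        # standard output, captured in both cases
if result.returncode != 0:    # abnormal exit
    error = result.stderr     # stderr is available alongside result.stdout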

Python's subprocess.Popen object hangs gathering child output when child process does not exit

When a process exits abnormally or not at all, I still want to be able to gather what output it may have generated up until that point.
The obvious solution to this example code is to kill the child process with an os.kill, but in my real code, the child is hung waiting for NFS and does not respond to a SIGKILL.
#!/usr/bin/python
import subprocess
import os
import time
import signal
import sys

child_script = """
#!/bin/bash
i=0
while [ 1 ]; do
    echo "output line $i"
    i=$(expr $i \+ 1)
    sleep 1
done
"""

childFile = open("/tmp/childProc.sh", 'w')
childFile.write(child_script)
childFile.close()

cmd = ["bash", "/tmp/childProc.sh"]
finish = time.time() + 3
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                     stdin=subprocess.PIPE)
while p.poll() is None:
    time.sleep(0.05)
    if finish < time.time():
        print "timed out and killed child, collecting what output exists so far"
        out, err = p.communicate()
        print "got it"
        sys.exit(0)
In this case, the print statement about timing out appears, and the Python script never exits or progresses. Does anybody know how I can do this differently and still get output from my child process?
The problem is that bash does not respond to CTRL-C (SIGINT) when it is not connected to a terminal.
Switching to SIGHUP or SIGTERM seems to do the trick:
cmd = ["bash", 'childProc.sh']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,
                     close_fds=True)
time.sleep(3)
print 'killing pid', p.pid
os.kill(p.pid, signal.SIGTERM)
print "timed out and killed child, collecting what output exists so far"
out = p.communicate()[0]
print "got it", out
Outputs:
killing pid 5844
timed out and killed child, collecting what output exists so far
got it output line 0
output line 1
output line 2
Here's a POSIX way of doing it without the temporary file. I realize that subprocess is a little superfluous here, but since the original question used it...
import subprocess
import os
import time
import signal
import sys
pr, pw = os.pipe()
pid = os.fork()
if pid:  # parent
    os.close(pw)
    cmd = ["bash"]
    finish = time.time() + 3
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         stdin=pr, close_fds=True)
    while p.poll() is None:
        time.sleep(0.05)
        if finish < time.time():
            os.kill(p.pid, signal.SIGTERM)
            print "timed out and killed child, collecting what output exists so far"
            out, err = p.communicate()
            print "got it: ", out
            sys.exit(0)
else:  # child
    os.close(pr)
    child_script = """
#!/bin/bash
while [ 1 ]; do
    ((++i))
    echo "output line $i"
    sleep 1
done
"""
    os.write(pw, child_script)
There are good tips in another stackoverflow question: How do I get 'real-time' information back from a subprocess.Popen in python (2.5)
Most of the hints in there work with pipe.readline() instead of pipe.communicate() because the latter only returns at the end of the process.
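A minimal sketch of that readline-based approach, assuming cmd is the argument list from the question:
import subprocess

p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
    print(line.rstrip().decode())  # handle each line as soon as it arrives
p.wait()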
I had the exact same problem. I ended up fixing the issue (after scouring Google and finding many related problems) by simply setting the following parameters when calling subprocess.Popen (or .call):
stdout=None
and
stderr=None
There are many problems with these functions, but in my specific case I believe the stdout pipe was being filled up by the process I was calling, resulting in a blocking condition. By setting these to None (as opposed to something like subprocess.PIPE), I believe this is avoided.
Hope this helps someone.
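In other words, leaving the streams at their defaults means the child inherits the parent's stdout/stderr, so no pipe can fill up; a minimal sketch, with cmd standing in for the actual command:
import subprocess

# stdout=None / stderr=None (the defaults): the child writes straight to the
# parent's terminal, so no pipe buffer can fill up and block the child
p = subprocess.Popen(cmd, stdout=None, stderr=None)
p.wait()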

catching stdout in realtime from subprocess

I want to subprocess.Popen() rsync.exe in Windows, and print the stdout in Python.
My code works, but it doesn't catch the progress until a file transfer is done! I want to print the progress for each file in real time.
Using Python 3.1 now since I heard it should be better at handling IO.
import subprocess, time, os, sys
cmd = "rsync.exe -vaz -P source/ dest/"
p, line = True, 'start'
p = subprocess.Popen(cmd,
                     shell=True,
                     bufsize=64,
                     stdin=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdout=subprocess.PIPE)
for line in p.stdout:
    print(">>> " + str(line.rstrip()))
    p.stdout.flush()
Some rules of thumb for subprocess:
Never use shell=True. It needlessly invokes an extra shell process to call your program.
When calling processes, arguments are passed around as lists. sys.argv in python is a list, and so is argv in C. So you pass a list to Popen to call subprocesses, not a string.
Don't redirect stderr to a PIPE when you're not reading it.
Don't redirect stdin when you're not writing to it.
Example:
import subprocess, time, os, sys
cmd = ["rsync.exe", "-vaz", "-P", "source/" ,"dest/"]
p = subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
print(">>> " + line.rstrip())
That said, it is probable that rsync buffers its output when it detects that it is connected to a pipe instead of a terminal. This is the default behavior: when connected to a pipe, programs must explicitly flush stdout to get real-time results; otherwise the standard C library will buffer the output.
To test for that, try running this instead:
cmd = [sys.executable, 'test_out.py']
and create a test_out.py file with the contents:
import sys
import time
print ("Hello")
sys.stdout.flush()
time.sleep(10)
print ("World")
Executing that subprocess should give you "Hello" and wait 10 seconds before giving "World". If that happens with the python code above and not with rsync, that means rsync itself is buffering output, so you are out of luck.
A solution would be to connect direct to a pty, using something like pexpect.
I know this is an old topic, but there is a solution now: call rsync with the option --outbuf=L. Example:
cmd = ['rsync', '-arzv', '--backup', '--outbuf=L', 'source/', 'dest']
p = subprocess.Popen(cmd,
                     stdout=subprocess.PIPE)
for line in iter(p.stdout.readline, b''):
    print '>>> {}'.format(line.rstrip())
Depending on the use case, you might also want to disable the buffering in the subprocess itself.
If the subprocess will be a Python process, you could do this before the call:
os.environ["PYTHONUNBUFFERED"] = "1"
Or alternatively pass this in the env argument to Popen.
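For example, a sketch (cmd stands in for the actual command line):
import os
import subprocess

env = dict(os.environ, PYTHONUNBUFFERED="1")  # copy the environment rather than mutating it
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env)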
Otherwise, if you are on Linux/Unix, you can use the stdbuf tool. E.g. like:
cmd = ["stdbuf", "-oL"] + cmd
See also here about stdbuf or other options.
On Linux, I had the same problem of getting rid of the buffering. I finally used "stdbuf -o0" (or unbuffer from expect) to get rid of the PIPE buffering.
from subprocess import Popen, PIPE

proc = Popen(['stdbuf', '-o0'] + cmd, stdout=PIPE, stderr=PIPE)
stdout = proc.stdout
I could then use select.select on stdout.
See also https://unix.stackexchange.com/questions/25372/
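A sketch of such a select-based read loop (POSIX only; proc and stdout follow the snippet above, and a final read after exit would catch any remainder):
import os
import select
import sys

# drain stdout as data becomes available, without blocking on full lines
while proc.poll() is None:
    ready, _, _ = select.select([stdout], [], [], 1.0)
    if ready:
        chunk = os.read(stdout.fileno(), 4096)
        if chunk:
            sys.stdout.write(chunk.decode())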
for line in p.stdout:
...
always blocks until the next line-feed.
For "real-time" behaviour you have to do something like this:
while True:
    inchar = p.stdout.read(1)
    if inchar:  # neither empty string nor None
        print(str(inchar), end='')  # or end=None to flush immediately
    else:
        print('')  # flush for implicit line-buffering
        break
The while-loop is left when the child process closes its stdout or exits.
read()/read(-1) would block until the child process closed its stdout or exited.
Your problem is:
for line in p.stdout:
    print(">>> " + str(line.rstrip()))
    p.stdout.flush()
The iterator itself has extra buffering.
Try reading like this instead:
while True:
    line = p.stdout.readline()
    if not line:
        break
    print line
You cannot get stdout to print unbuffered to a pipe (unless you can rewrite the program that prints to stdout), so here is my solution:
Redirect stdout to stderr, which is not buffered. '<cmd> 1>&2' should do it. Open the process as follows: myproc = subprocess.Popen('<cmd> 1>&2', shell=True, stderr=subprocess.PIPE)
You cannot distinguish stdout from stderr, but you get all output immediately.
Hope this helps anyone tackling this problem.
To avoid buffering of output, you might want to try pexpect:
import pexpect

child = pexpect.spawn(launchcmd, args, timeout=None)
while True:
    try:
        child.expect('\n')
        print(child.before)
    except pexpect.EOF:
        break
PS: I know this question is pretty old; I am still providing the solution that worked for me.
PPS: I got this answer from another question.
p = subprocess.Popen(command,
                     bufsize=0,
                     stdout=subprocess.PIPE,  # needed so there is a stream to read
                     universal_newlines=True)
I am writing a GUI for rsync in Python and had the same problem. It troubled me for several days until I found this in the Python documentation:
If universal_newlines is True, the file objects stdout and stderr are opened as text files in universal newlines mode. Lines may be terminated by any of '\n', the Unix end-of-line convention, '\r', the old Macintosh convention or '\r\n', the Windows convention. All of these external representations are seen as '\n' by the Python program.
It seems that rsync outputs '\r' while a transfer is in progress.
If you run something like this in a thread and save ffmpeg_time in an attribute you can access from elsewhere, it works very nicely:
import re
import subprocess

input = 'path/input_file.mp4'
output = 'path/output_file.mp4'
command = ("ffmpeg -y -v quiet -stats -i \"" + str(input) +
           "\" -metadata title=\"#alaa_sanatisharif\" -preset ultrafast "
           "-vcodec copy -r 50 -vsync 1 -async 1 \"" + output + "\"")
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                           universal_newlines=True, shell=True)
for line in process.stdout:
    reg = re.search(r'\d\d:\d\d:\d\d', line)  # grab the HH:MM:SS progress stamp
    ffmpeg_time = reg.group(0) if reg else ''
    print(ffmpeg_time)
Change the stdout from the rsync process to be unbuffered.
p = subprocess.Popen(cmd,
                     shell=True,
                     bufsize=0,  # 0=unbuffered, 1=line-buffered, else buffer-size
                     stdin=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     stdout=subprocess.PIPE)
I've noticed that there is no mention of using a temporary file as an intermediate. The following gets around the buffering issues by outputting to a temporary file, and allows you to parse the data coming from rsync without connecting to a pty. I tested the following on a Linux box, and the output of rsync tends to differ across platforms, so the regular expressions to parse the output may vary:
import subprocess, time, tempfile, re
pipe_output, file_name = tempfile.mkstemp()  # mkstemp returns (fd, path)
cmd = ["rsync", "-vaz", "-P", "/src/", "/dest"]
p = subprocess.Popen(cmd, stdout=pipe_output,
                     stderr=subprocess.STDOUT)
while p.poll() is None:
    # p.poll() returns None while the program is still running
    # sleep for 1 second
    time.sleep(1)
    last_line = open(file_name).readlines()
    # it's possible that it hasn't output yet, so continue
    if len(last_line) == 0:
        continue
    last_line = last_line[-1]
    # Matching "[bytes downloaded] number% [speed] number:number:number"
    match_it = re.match(".* ([0-9]*)%.* ([0-9]*:[0-9]*:[0-9]*).*", last_line)
    if not match_it:
        continue
    # in this case, the percentage is stored in match_it.group(1),
    # time in match_it.group(2). We could do something with it here...
In Python 3, here's a solution that takes a command from the command line and delivers nicely decoded strings in real time as they are received.
Receiver (receiver.py):
import subprocess
import sys
cmd = sys.argv[1:]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
for line in p.stdout:
    print("received: {}".format(line.rstrip().decode("utf-8")))
Example simple program that could generate real-time output (dummy_out.py):
import time
import sys
for i in range(5):
    print("hello {}".format(i))
    sys.stdout.flush()
    time.sleep(1)
Output:
$ python receiver.py python dummy_out.py
received: hello 0
received: hello 1
received: hello 2
received: hello 3
received: hello 4
