I have a relatively big program whose progress I print to the console, with each function producing a single line of output: it shows "Doing something..." while the function is running (sometimes with a percentage bar) and turns into "Doing something... Done" when the function finishes successfully. I use '\r', line clearing, etc. to make the progress bar look nice. However, when an error occurs, its message continues on the same line, which I want to avoid. For example, I have this code:
import os, sys, subprocess

def some_function(filename):
    print('Doing something... ', end='')
    sys.stdout.flush()
    with open(os.devnull, 'wb') as devnull:
        check = subprocess.call(['ls', filename], stdout=devnull)
    if check != 0:
        sys.exit(1)
    print('Done')

some_function('some.file')
It produces the following output, depending on whether an error occurs:
Doing something... Done
or
Doing something... ls: some.file: No such file or directory
And what I want to see in the case of error:
Doing something...
ls: some.file: No such file or directory
Is there some general way to introduce a newline in the output when an error occurs (which can be an internal or user-defined exception as well)?
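For reference, a minimal sketch of the kind of '\r'-based status line described above (hypothetical code, not the actual program):

import sys, time

sys.stdout.write('Doing something...   0%')
sys.stdout.flush()
for pct in range(10, 101, 10):
    time.sleep(0.1)                                        # stand-in for real work
    sys.stdout.write('\rDoing something... %3d%%' % pct)   # '\r' rewrites the line in place
    sys.stdout.flush()
sys.stdout.write('\rDoing something... Done \n')           # trailing space overwrites the old '%'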
call does not raise an exception on a non-zero exit, so you can instead capture the error output yourself by piping stderr through subprocess:
import os, sys, subprocess

def some_function(filename):
    print('Doing something... ', end='')
    sys.stdout.flush()
    with open(os.devnull, 'wb') as devnull:
        check = subprocess.Popen(['ls', filename], stdout=devnull,
                                 stderr=subprocess.PIPE)
        stdout, stderr = check.communicate()
        if stderr:
            print("\n{}".format(stderr.decode("utf-8")))
            sys.exit(1)
    print('Done')
try:
    ...  # do stuff that might cause an error
except:
    print()   # start a fresh line before the traceback appears
    raise
If an error is raised inside the try block, this will print a new line, and then re-raise the caught exception.
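For an error that actually raises, the same pattern looks like this (a hypothetical example using check_call, which raises CalledProcessError on a non-zero exit):

import subprocess

try:
    subprocess.check_call(['false'])   # 'false' always exits non-zero
except subprocess.CalledProcessError:
    print()   # break the status line before the traceback
    raise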
Edit
As has been pointed out in the comments, in this case the error message is not being generated by a raised exception, so I defer to Padraic's answer.
Related
I'm trying to run some commands using the Python subprocess library. Some of my commands might get stuck in loops and block the Python script, so I'm using check_output() with a timeout argument to limit their running time. When a command takes too long, the function raises a TimeoutExpired error. What I want is to get whatever the command managed to output before being killed by the timeout.
I've caught the error with "except sp.TimeoutExpired as e:". I read in the docs that e.output should give me what I want: "Output of the child process if this exception is raised by check_output(). Otherwise, None." However, I don't get anything in the output.
Here is what I did:
import subprocess as sp

def run_ls():   # the original snippet uses `return`, so it lives inside a function
    try:
        out = sp.check_output('ls', stderr=sp.STDOUT,
                              universal_newlines=True, timeout=1)
    except sp.TimeoutExpired as e:
        print('output: ' + e.output)
    else:
        return out
Let's say the folder I'm working with is huge, so 1 second isn't enough to ls all its files and the TimeoutExpired error is raised. I'd still like to store what the script managed to get before the timeout. Does someone have an idea?
Found a solution, posting it here in case someone is interested.
In Python 3, the run method allows you to get the output.
With the parameters shown in the example below, TimeoutExpired carries the output produced before the timeout in its stdout attribute:
import subprocess as sp

for cmd in [['ls'], ['ls', '/does/not/exist'], ['sleep', '5']]:
    print('Running', cmd)
    try:
        out = sp.run(cmd, timeout=3, check=True, stdout=sp.PIPE, stderr=sp.STDOUT)
    except sp.CalledProcessError as e:
        print(e.stdout.decode() + 'Returned error code ' + str(e.returncode))
    except sp.TimeoutExpired as e:
        print(e.stdout.decode() + 'Timed out')
    else:
        print(out.stdout.decode())
Possible output:
Running ['ls']
test.py
Running ['ls', '/does/not/exist']
ls: cannot access '/does/not/exist': No such file or directory
Returned error code 2
Running ['sleep', '5']
Timed out
I hope it helps someone.
The output of subprocess.check_output() looks like this at the moment:
CalledProcessError: Command '['foo', ...]' returned non-zero exit status 1
Is there a way to get a better error message?
I want to see stdout and stderr.
Redirect STDERR to STDOUT.
Example from the interpreter:
>>> try:
... subprocess.check_output(['ls','-j'], stderr=subprocess.STDOUT)
... except subprocess.CalledProcessError as e:
... print('error>', e.output, '<')
...
This will print:
error> b"ls: invalid option -- 'j'\nTry `ls --help' for more information.\n" <
Explanation
From check_output documentation:
To also capture standard error in the result, use
stderr=subprocess.STDOUT
Don't use check_output(), use Popen and Popen.communicate() instead:
>>> proc = subprocess.Popen(['cmd', '--optional-switch'],
...                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> output, errors = proc.communicate()
Here output is data from stdout and errors is data from stderr.
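A fuller, self-contained sketch of the same idea (the command and flag here are just examples): capture both streams and report them only when the command fails.

import subprocess

proc = subprocess.Popen(['ls', '--bad-flag'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, errors = proc.communicate()
if proc.returncode != 0:
    print('stdout:', output.decode())
    print('stderr:', errors.decode())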
Since I don't want to write more code just to get a good error message, I wrote subx.
From the docs:
subprocess.check_output() vs subx.call()
Look, compare, think, and decide which message helps you more.
subprocess.check_output()::
CalledProcessError: Command '['cat', 'some-file']' returned non-zero exit status 1
subx.call()::
SubprocessError: Command '['cat', 'some-file']' returned non-zero exit status 1:
stdout='' stderr='cat: some-file: No such file or directory'
... especially if the code fails in a production environment where reproducing the error is not easy, subx can help you spot the source of the failure.
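A hypothetical usage sketch based on the docs quoted above (the exact exception class subx raises is an assumption here, so it is caught broadly):

import subx

try:
    subx.call(['cat', 'some-file'])
except Exception as exc:
    print(exc)   # the message already carries stdout and stderr, as shown above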
In my opinion, this is a perfect scenario for sys.excepthook! You just have to filter the exceptions you want specially formatted in the if statement. This solution covers every exception in your code without having to refactor everything!
#!/usr/bin/env python
import sys
import subprocess

# Create the exception handler function
def my_excepthook(type, value, traceback):
    # Check if the exception type name is CalledProcessError
    if type.__name__ == "CalledProcessError":
        # value.output is bytes on Python 3, so decode it before concatenating
        output = value.output
        if isinstance(output, bytes):
            output = output.decode()
        # Format the error properly
        sys.stderr.write("Error: " + type.__name__ +
                         "\nCommand: " + value.cmd +
                         "\nOutput: " + output.strip() + "\n")
    # Else we format the exception normally
    else:
        sys.stderr.write(str(value))

# Attach every exception to the function my_excepthook
sys.excepthook = my_excepthook

# Reproduce the exception
subprocess.check_output("dir /f", shell=True, stderr=subprocess.STDOUT)
You can modify the output as you wish; here is the actual output:
Error: CalledProcessError
Command: dir /f
Output: Invalid switch - "f".
I am trying to print a list of tuples formatted on my stdout. For this, I use the str.format method. Everything works fine, but when I pipe the output to see the first lines using the head command, an IOError occurs.
Here is my code:
# creating the data
data = []
for i in range(0, 1000):
    pid = 'pid%d' % i
    uid = 'uid%d' % i
    pname = 'pname%d' % i
    data.append((pid, uid, pname))

# find the max-length string for each field
pids, uids, pnames = zip(*data)
max_pid = len("%s" % max(pids))
max_uid = len("%s" % max(uids))
max_pname = len("%s" % max(pnames))

# my template for the formatted strings
template = "{0:%d}\t{1:%d}\t{2:%d}" % (max_pid, max_uid, max_pname)

# print the formatted output to stdout
for pid, uid, pname in data:
    print template.format(pid, uid, pname)
And here is the error I get after running the command: python myscript.py | head
Traceback (most recent call last):
File "lala.py", line 16, in <module>
print template.format(pid, uid, pname)
IOError: [Errno 32] Broken pipe
Can anyone help me on this?
I tried to put the print in a try-except block to handle the error, but after that there was another message in the console:
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
I also tried to flush the data immediately through consecutive sys.stdout.write and sys.stdout.flush calls, but nothing happened.
head reads the first lines from your script's stdout and then exits, closing its end of the pipe. This causes print to fail: internally it writes to sys.stdout, whose reading end is now closed.
You can simply catch the IOError and exit silently:
try:
    for pid, uid, pname in data:
        print template.format(pid, uid, pname)
except IOError:
    # stdout is closed, no point in continuing
    # Attempt to close them explicitly to prevent cleanup problems:
    try:
        sys.stdout.close()
    except IOError:
        pass
    try:
        sys.stderr.close()
    except IOError:
        pass
The behavior you are seeing is linked to the buffered output implementation in Python 3. The problem can be avoided by using the -u option or by setting the environment variable PYTHONUNBUFFERED=x. See the man pages for more information on -u.
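The transcript below uses a small script, testprint.py. The original isn't shown, but a minimal stand-in that matches the output would be something like:

# testprint.py -- assumed reconstruction: print many lines and
# report the type of any exception that escapes the loop.
from __future__ import print_function
import sys

try:
    for i in range(100000):
        print(i)
except BaseException as e:
    print('Exc:', type(e), file=sys.stderr)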
$ python2.7 testprint.py | echo
Exc: <type 'exceptions.IOError'>
$ python3.5 testprint.py | echo
Exc: <class 'BrokenPipeError'>
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>
BrokenPipeError: [Errno 32] Broken pipe
$ python3.5 -u testprint.py | echo
Exc: <class 'BrokenPipeError'>
$ export PYTHONUNBUFFERED=x
$ python3.5 testprint.py | echo
Exc: <class 'BrokenPipeError'>
In general, I try to catch the most specific exception I can get away with. In this case it is BrokenPipeError:
try:
    # I usually call a function here that generates all my output:
    for pid, uid, pname in data:
        print(template.format(pid, uid, pname))
except BrokenPipeError as e:
    pass  # Ignore. Something like head is truncating output.
finally:
    sys.stderr.close()
If this is at the end of execution, I find I only need to close sys.stderr. If I don't close sys.stderr, I'll get a BrokenPipeError but without a stack trace.
This seems to be the minimum fix for writing tools that output to pipelines.
Had this problem with Python 3 and debug logging piped into head as well. If your script talks to the network or does file IO, simply dropping IOErrors is not a good solution. Despite the mentions here, I was not able to catch BrokenPipeError for some reason.
Found a blog post talking about restoring the default signal handler for sigpipe: http://newbebweb.blogspot.com/2012/02/python-head-ioerror-errno-32-broken.html
In short, you add the following to your script before the bulk of the output:
if log.isEnabledFor(logging.DEBUG):  # optional; `log` is your script's logger
    # restore the default SIGPIPE handler, so a broken pipe
    # terminates the process quietly instead of raising
    from signal import signal, SIGPIPE, SIG_DFL
    signal(SIGPIPE, SIG_DFL)
This seems to happen with head, but not with other programs such as grep; as mentioned, head closes stdout. If you don't use head with the script often, it may not be worth worrying about.
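For completeness (an addition beyond this answer), the CPython signal module documentation recommends an alternative that keeps SIGPIPE ignored: catch BrokenPipeError once at the top level and point stdout at devnull before exiting, so the interpreter's final flush doesn't raise a second error:

import os
import sys

def main():
    for i in range(100000):   # stand-in for the real output loop
        print(i)

if __name__ == '__main__':
    try:
        main()
    except BrokenPipeError:
        # Python flushes the standard streams at exit; redirect stdout
        # to devnull so that flush doesn't trigger another BrokenPipeError.
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, sys.stdout.fileno())
        sys.exit(1)   # conventional non-zero exit for EPIPE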
I'm using the plumbum python library (http://plumbum.readthedocs.org/) as a replacement for shell scripts.
There's a command I want to run; when it fails, it outputs the path to a file I'm interested in:
$ slow_cmd
Working.... 0%
Working.... 5%
Working... 15%
FAIL. Check log/output.log for details
I want to run the program in the foreground to watch its progress:
from plumbum import FG
from plumbum.cmd import slow_cmd

try:
    f = slow_cmd & FG
except Exception, e:
    print "Something went wrong."
    # Need the error output from f to get the log file :(
When slow_cmd fails, it throws an exception (which I can catch), but I cannot obtain the error output from the exception or from the f future object. If I don't run slow_cmd in the FG, the exception contains all of the output and I can read the file path from there.
the problem is, FG redirects the output straight to your program's stdout; see https://github.com/tomerfiliba/plumbum/blob/master/plumbum/commands.py#L611
when output is redirected this way, it doesn't go through plumbum's machinery, so you won't get it in the exception object. if you're willing to block until slow_cmd finishes, a better solution would be to read from stdout yourself. here's a sketch:
lines = []
p = slow_cmd.popen()
while p.poll() is None:
    line = p.stdout.readline()
    lines.append(line)
    print line
if p.returncode != 0:
    print "see log file..."
a more elegant solution would be to write your own ExecutionModifier (like FG) that duplicates the output streams; let's call it TEE (after http://en.wikipedia.org/wiki/Tee_(command)). i haven't tested it, but it should do the trick (minus selecting on stdout/err):
class TEE(ExecutionModifier):
    def __init__(self, retcode=0, dupstream=sys.stdout):
        ExecutionModifier.__init__(self, retcode)
        self.dupstream = dupstream

    def __rand__(self, cmd):
        p = cmd.popen()
        stdout = []
        stderr = []
        while p.poll() is None:
            # note: you should probably select() on the two pipes, or make
            # the pipes nonblocking, otherwise readline would block
            so = p.stdout.readline()
            se = p.stderr.readline()
            if so:
                stdout.append(so)
                self.dupstream.write(so)
            if se:
                stderr.append(se)
                self.dupstream.write(se)
        stdout = "".join(stdout)
        stderr = "".join(stderr)
        if p.returncode != self.retcode:
            raise ProcessExecutionError(p.argv, p.returncode, stdout, stderr)
        return stdout, stderr
try:
    stdout, stderr = slow_cmd & TEE()
except ProcessExecutionError as e:
    pass  # find the log file, etc.
Is there a variant of subprocess.call that can run the command without printing to standard out, or a way to block its standard out messages?
Yes. Redirect its stdout to /dev/null.
process = subprocess.call(["my", "command"], stdout=open(os.devnull, 'wb'))
Often that kind of chatter comes on stderr, so you may want to silence that too. Since Python 3.3, subprocess.call has this feature directly:
To suppress stdout or stderr, supply a value of DEVNULL.
Usage:
import subprocess
rc = subprocess.call(args, stderr=subprocess.DEVNULL, stdout=subprocess.DEVNULL)
If you are still on Python 2:
import os, subprocess
with open(os.devnull, 'wb') as shutup:
rc = subprocess.call(args, stdout=shutup, stderr=shutup)
This is a recipe I use a lot: call subprocess and collect the output; when the command succeeds, discard the output, but when it fails, print it.
import subprocess as sp
import sys

if "print" in __builtins__.__dict__:
    prn = __builtins__.__dict__["print"]
else:
    def prn(*args, **kwargs):
        """
        prn(value, ..., sep=' ', end='\\n', file=sys.stdout)

        Works just like the print function in Python 3.x but can be used in 2.x.
        Prints the values to a stream, or to sys.stdout by default.

        Optional keyword arguments:
        file: a file-like object (stream); defaults to the current sys.stdout.
        sep:  string inserted between values, default a space.
        end:  string appended after the last value, default a newline.
        """
        sep = kwargs.get("sep", ' ')
        end = kwargs.get("end", '\n')
        file = kwargs.get("file", sys.stdout)
        s = sep.join(str(x) for x in args) + end
        file.write(s)

def rc_run_cmd_basic(lst_cmd, verbose=False, silent=False):
    if silent and verbose:
        raise ValueError("cannot specify both verbose and silent as true")

    p = sp.Popen(lst_cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
    tup_output = p.communicate()

    s_cmd = ' '.join(lst_cmd)
    if verbose:
        prn()
        prn("command: '%s'\n" % s_cmd)
        if 0 != p.returncode:
            prn()
            prn("Command failed with code %d:" % p.returncode)
        else:
            prn("Command succeeded! code %d" % p.returncode)

    if verbose:
        prn("Output for: " + s_cmd)
        prn(tup_output[0])
        prn()

    if not silent and 0 != p.returncode:
        prn("Error output for: " + s_cmd)
        prn(tup_output[1])
        prn()

    return p.returncode
I use subprocess.check_output in such cases and drop the return value. You might want to add a comment to your code stating why you are using check_output in place of check_call. check_output is also nicer when a failure occurs and you are interested in the error output. Example code below; the output is shown only if you uncomment the print line. If the command fails, an exception is raised.
import subprocess
ret = subprocess.check_output(["cat", "/tmp/1"])
#print ret
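And a sketch of the failure case (the path here is hypothetical), where the raised exception carries the command's combined output:

import subprocess

try:
    subprocess.check_output(["cat", "/tmp/does-not-exist"],
                            stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    print("failed with code %d: %s" % (e.returncode, e.output))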