I am using the code below to get the output of a shell command.
import subprocess

exitcode, err, out = 0, None, None
try:
    out = subprocess.check_output(cmd, shell=True, universal_newlines=True)
except subprocess.CalledProcessError as e:
    exitcode, err = e.returncode, e.output
print("x{} e{} o{}".format(exitcode, err, out))
When a valid command is passed as cmd, such as echo hello, the program runs fine and gives output like (0, None, "hello\n").
But if I pass an invalid command, I expect the error message to end up in err; instead it gets printed directly to the terminal. For example, if I pass ls -lrt foo as cmd, the output is
anirban#desktop> python mytest.py
ls: cannot access foo: No such file or directory
x2 e oNone
So I want ls: cannot access foo: No such file or directory to end up in err. How do I do that?
To capture the error output, you need to pass another argument to subprocess.check_output(): set stderr=subprocess.STDOUT. This channels the stderr output into e.output.
subprocess.check_output() is a wrapper over subprocess.run(). It makes life easier by passing in some sensible defaults, one of which is stdout=subprocess.PIPE. That directs the standard output of the command you are running back to your program. Similarly, you can direct the standard error output back to your program by passing stderr=subprocess.PIPE; in that case the error text ends up in e.stderr (where e is the exception) instead of e.output.
Let me know if this is not clear.
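Applied to the snippet from the question, a minimal sketch looks like this (ls -lrt foo is just a command that is expected to fail):

```python
import subprocess

cmd = "ls -lrt foo"  # a command expected to fail
exitcode, err, out = 0, None, None
try:
    out = subprocess.check_output(cmd, shell=True, universal_newlines=True,
                                  stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    # stderr was merged into stdout, so the error message lands in e.output
    exitcode, err = e.returncode, e.output
print("x{} e{} o{}".format(exitcode, err, out))
```

With this change, err holds the ls error message and nothing is printed directly to the terminal by the child process.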
I would like to execute system calls from my Python script such that the output of the call appears on the terminal where the script is running and is also captured in a logfile. However, I am unable to make this work for interactive system calls.
I started with the following code, which does not capture the output in a logfile, but otherwise works correctly (it displays output on the terminal and accepts input typed there) for both basic commands such as system('echo HEYA') and interactive commands such as system('python'):
def system(cmd):
    log.info(f"Running: {cmd}")
    try:
        subprocess.run(cmd, check=True, shell=True, executable=SHELL)
    except subprocess.CalledProcessError as err:
        log.error(err.output)
Notes: log is a logger (created using standard logging module). SHELL variable holds a shell of my choice.
Now I modify the above to redirect the process output to the terminal as well as a logfile on disk, in real time:
def system(cmd):
    log.info(f"Running: {cmd}")
    try:
        process = subprocess.Popen(cmd,
                                   shell=True, executable=SHELL, universal_newlines=True,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        while True:
            o = process.stdout.readline().rstrip('\n')
            if o == '' and process.poll() is not None:
                break
            if o:
                syslog.info(o)
        ret = process.poll()
        if ret:
            log.error(f"Failed with exit code: {ret}")
        else:
            log.info("Done")
    except:
        err = sys.exc_info()[0]
        log.error(err)
        raise
Notice that I use a different logger (syslog) to redirect outputs to logfile and terminal. The only reason for this, is I want system command outputs formatted differently from other messages generated in the script.
The second version of the system function works for something like system('echo HEYA'), but not for interactive calls like system('python'). Any suggestions on what I may be doing wrong, and how I can get this to work?
Based on an earlier post of similar nature:
https://stackoverflow.com/a/651718/5150258
I was able to get this to work partially by using the first form of the system definition together with a tee:
tee = subprocess.Popen(["tee", "sys_cmds.log"], stdin=subprocess.PIPE)
os.dup2(tee.stdin.fileno(), sys.stdout.fileno())
os.dup2(tee.stdin.fileno(), sys.stderr.fileno())
def system(cmd):
    log.info(f"Running: {cmd}")
    try:
        subprocess.run(cmd, check=True, shell=True, executable=SHELL)
    except subprocess.CalledProcessError as err:
        log.error(err.output)
However, this isn't a perfect solution, since I lose formatting control of the output: it goes to the file directly, without passing through the logger object.
I am making a program that adds additional functionality to the standard command shell in Windows. For instance, typing google followed by keywords will open a new tab with Google search for those keywords, etc. Whenever the input doesn't refer to a custom function I've created, it gets processed as a shell command using subprocess.call(rawCommand, shell=True).
Since I'd like to anticipate when my input isn't a valid command and return something like f"Invalid command: {rawCommand}", how should I go about doing that?
So far I've tried subprocess.call(rawCommand, shell=True), which returns the exit code but also lets the command print its output. That looks like this:
>>> from subprocess import call
>>> a, b = call("echo hello!", shell=1), call("xyz arg1 arg2", shell=1)
hello!
'xyz' is not recognized as an internal or external command,
operable program or batch file.
>>> a
0
>>> b
1
I'd like to simply receive that exit code. Any ideas on how I can do this?
Should you one day want to deal with encoding errors, get back the result of the command you're running, have a timeout, or decide which exit codes other than 0 should not trigger errors (I'm looking at you, Java runtime!), here's a complete function that does the job:
import os
from logging import getLogger
import subprocess

logger = getLogger()

def command_runner(command, valid_exit_codes=None, timeout=300, shell=False, encoding='utf-8',
                   windows_no_window=False, **kwargs):
    """
    Whenever we can, we need to avoid shell=True in order to preserve better security
    Runs system command, returns exit code and stdout/stderr output, and logs output on error
    valid_exit_codes is a list of codes that don't trigger an error
    windows_no_window will hide the command window (works with Microsoft Windows only)
    Accepts subprocess.check_output arguments
    """
    # Set default values for kwargs
    errors = kwargs.pop('errors', 'backslashreplace')  # Don't let encoding issues make you mad
    universal_newlines = kwargs.pop('universal_newlines', False)
    creationflags = kwargs.pop('creationflags', 0)
    if windows_no_window:
        creationflags = creationflags | subprocess.CREATE_NO_WINDOW
    try:
        # universal_newlines=True makes netstat command fail under windows
        # timeout does not work under Python 2.7 with subprocess32 < 3.5
        # decoder may be unicode_escape for dos commands or utf-8 for powershell
        output = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=shell,
                                         timeout=timeout, universal_newlines=universal_newlines,
                                         encoding=encoding, errors=errors,
                                         creationflags=creationflags, **kwargs)
    except subprocess.CalledProcessError as exc:
        exit_code = exc.returncode
        try:
            output = exc.output
        except Exception:
            output = "command_runner: Could not obtain output from command."
        if exit_code in (valid_exit_codes if valid_exit_codes is not None else [0]):
            logger.debug('Command [%s] returned with exit code [%s]. Command output was:' % (command, exit_code))
            if isinstance(output, str):
                logger.debug(output)
            return exc.returncode, output
        else:
            logger.error('Command [%s] failed with exit code [%s]. Command output was:' %
                         (command, exc.returncode))
            logger.error(output)
            return exc.returncode, output
    # OSError if not a valid executable
    except (OSError, IOError) as exc:
        logger.error('Command [%s] failed because of OS [%s].' % (command, exc))
        return None, exc
    except subprocess.TimeoutExpired:
        logger.error('Timeout [%s seconds] expired for command [%s] execution.' % (timeout, command))
        return None, 'Timeout of %s seconds expired.' % timeout
    except Exception as exc:
        logger.error('Command [%s] failed for unknown reasons [%s].' % (command, exc))
        logger.debug('Error:', exc_info=True)
        return None, exc
    else:
        logger.debug('Command [%s] returned with exit code [0]. Command output was:' % command)
        if output:
            logger.debug(output)
        return 0, output
Usage:
exit_code, output = command_runner('whoami', shell=True)
Some shells have a syntax-checking mode (e.g., bash -n), but that’s the only form of error that’s separable from “try to execute the commands and see what happens”. Defining a larger class of “immediate” errors is a fraught proposition: if echo hello; ./foo is invalid because foo can’t be found as a command, what about false && ./foo, which will never try to run it, or cp /bin/ls foo; ./foo, which may succeed (or might fail to copy)? What about eval $(configure_shell); foo which might or might not manipulate PATH so as to find foo? What about foo || install_foo, where the failure might be anticipated?
As such, anticipating failure is not possible in any meaningful sense: your only real option is to capture the command’s output/error (as mentioned in the comments) and report them in some useful way.
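As a sketch of that capture-and-report approach (the command names here are just examples), subprocess.run makes both streams available so you can report failures however you like:

```python
import subprocess

def try_command(raw_command):
    """Run a shell command; report its exit code and anything it wrote to stderr."""
    result = subprocess.run(raw_command, shell=True,
                            capture_output=True, universal_newlines=True)
    if result.returncode != 0:
        print(f"Invalid command: {raw_command}")
        print(result.stderr.strip())
    return result.returncode

try_command("echo hello")     # exit code 0, nothing reported
try_command("xyz arg1 arg2")  # nonzero exit code, stderr is reported
```

capture_output=True requires Python 3.7+; on older versions, pass stdout=subprocess.PIPE, stderr=subprocess.PIPE instead.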
I'm trying to run some commands using the python library subprocess. Some of my commands might get stuck in loops and block the python script. Therefore I'm using check_output() with a timeout argument to limit the commands in time. When the command takes too much time, the function raise a TimeoutExpired error. What I want to do is get what the command has been able to run before being killed by the timeout.
I catch the error with "except sp.TimeoutExpired as e:". I read in the docs that e.output should give me what I want: "Output of the child process if this exception is raised by check_output(). Otherwise, None." However, I don't get anything in the output.
Here is what I did:
import subprocess as sp

try:
    out = sp.check_output('ls', stderr=sp.STDOUT, universal_newlines=True, timeout=1)
except sp.TimeoutExpired as e:
    print('output: ' + e.output)
else:
    print(out)
Let's say the folder I'm working with is huge, so 1 second isn't enough to ls all its files; the TimeoutExpired error is therefore raised. I'd still like to store whatever the command managed to output before being killed. Does someone have an idea?
Found a solution, posting it here in case someone is interested.
In Python 3, the run function makes it possible to get the output.
With the parameters shown in the example, TimeoutExpired carries the output produced before the timeout in its stdout attribute:
import subprocess as sp

for cmd in [['ls'], ['ls', '/does/not/exist'], ['sleep', '5']]:
    print('Running', cmd)
    try:
        out = sp.run(cmd, timeout=3, check=True, stdout=sp.PIPE, stderr=sp.STDOUT)
    except sp.CalledProcessError as e:
        print(e.stdout.decode() + 'Returned error code ' + str(e.returncode))
    except sp.TimeoutExpired as e:
        print(e.stdout.decode() + 'Timed out')
    else:
        print(out.stdout.decode())
Possible output:
Running ['ls']
test.py
Running ['ls', '/does/not/exist']
ls: cannot access '/does/not/exist': No such file or directory
Returned error code 2
Running ['sleep', '5']
Timed out
I hope it helps someone.
The output of subprocess.check_output() looks like this at the moment:
CalledProcessError: Command '['foo', ...]' returned non-zero exit status 1
Is there a way to get a better error message?
I want to see stdout and stderr.
Redirect STDERR to STDOUT.
Example from the interpreter:
>>> try:
... subprocess.check_output(['ls','-j'], stderr=subprocess.STDOUT)
... except subprocess.CalledProcessError as e:
... print('error>', e.output, '<')
...
This prints:
error> b"ls: invalid option -- 'j'\nTry `ls --help' for more information.\n" <
Explanation
From check_output documentation:
To also capture standard error in the result, use
stderr=subprocess.STDOUT
Don't use check_output(); use Popen and Popen.communicate() instead:
>>> proc = subprocess.Popen(['cmd', '--optional-switch'],
...                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> output, errors = proc.communicate()
Here output is the data from stdout and errors is the data from stderr. Note that communicate() only returns the streams you redirected with subprocess.PIPE; without those arguments, both values would be None.
Since I don't want to write more code just to get a good error message, I wrote subx.
From the docs:
subprocess.check_output() vs subx.call()
Look, compare, think, and decide which message helps you more.
subprocess.check_output():
CalledProcessError: Command '['cat', 'some-file']' returned non-zero exit status 1
subx.call():
SubprocessError: Command '['cat', 'some-file']' returned non-zero exit status 1:
stdout='' stderr='cat: some-file: No such file or directory'
... especially if the code fails in a production environment where
reproducing the error is not easy, subx can help you spot the
source of the failure.
In my opinion, that's a perfect scenario for sys.excepthook! You just have to filter which exceptions you want specially formatted in the if statement. With this solution, it covers every exception in your code without having to refactor everything!
#!/usr/bin/env python
import sys
import subprocess

# Create the exception handler function
def my_excepthook(type, value, traceback):
    # Check if the exception type name is CalledProcessError
    if type.__name__ == "CalledProcessError":
        # Format the error properly (output is bytes by default, so decode it)
        sys.stderr.write("Error: " + type.__name__ + "\nCommand: " + value.cmd +
                         "\nOutput: " + value.output.decode().strip())
    # Else we format the exception normally
    else:
        sys.stderr.write(str(value))

# We attach every exception to the function my_excepthook
sys.excepthook = my_excepthook

# We reproduce the exception
subprocess.check_output("dir /f", shell=True, stderr=subprocess.STDOUT)
You can modify the output as you wish; here is the actual output:
Error: CalledProcessError
Command: dir /f
Output: Invalid switch - "f".
I'm attempting to call an outside program from my python application, but it shows no output and fails with error 127. Executing the command from the command line works fine. (and I am in the correct working directory)
def buildContris(self, startUrl, reportArray):
    urls = []
    for row in reportArray:
        try:
            url = subprocess.check_output(["casperjs", "casper.js", startUrl, row[0]], shell=True)
            print(url)
            urls.append(url)
            break
        except subprocess.CalledProcessError as e:
            print("Error: " + str(e.returncode) + " Output:" + e.output.decode())
    return urls
Each loop iteration prints the following error (I've also checked e.cmd; it's correct, but long, so I've omitted it here):
Error: 127 Output:
SOLUTION:
The following code works
app = subprocess.Popen(["./casperjs/bin/casperjs", "casper.js", startUrl, row[0]],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                       env={"PATH": "/usr/local/bin/:/usr/bin"}, universal_newlines=True)
out, errs = app.communicate()
Try adding the full path to casperjs in your subprocess.check_output() call.
Edit: Answering your 2nd question. My apologies for the formatting, as I'm on an iPad.
I think you should try Popen instead of check_output so that you can specify environment variables:
p = subprocess.Popen(["/path/to/casperjs", "casper.js", startUrl, row[0]], env={"PATH": "/path/to/phantomjs"})
url, err = p.communicate()
shell=True changes the interpretation of the first argument (args) in check_output() call, from the docs:
On Unix with shell=True, ... If args is a
sequence, the first item specifies the command string, and any
additional items will be treated as additional arguments to the shell
itself. That is to say, Popen does the equivalent of:
Popen(['/bin/sh', '-c', args[0], args[1], ...])
Exit status 127 might mean that the shell couldn't find the casperjs program, or that casperjs itself exited with that code.
To fix the code, drop shell=True and specify the full path to the casperjs program, e.g.:
url = check_output(["./casperjs", "casper.js", startUrl, row[0]])
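To see why the list form with shell=True misbehaves, compare the two calls below; only the first list item reaches the shell as the command string (a sketch runnable on Unix):

```python
import subprocess

# With shell=True, only args[0] becomes the shell command string;
# the remaining list items are passed as arguments to /bin/sh itself.
broken = subprocess.check_output(["echo", "hello"], shell=True,
                                 universal_newlines=True)
print(repr(broken))   # just a newline: "hello" never reached echo

# Without shell=True, the list is the argv of the program, as intended.
fixed = subprocess.check_output(["echo", "hello"], universal_newlines=True)
print(repr(fixed))
```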
Try adding the path explicitly, in this way. If the file to call is in the same directory as your script (change __file__ if not):
cwd = os.path.dirname(os.path.realpath(__file__))
a = subprocess.check_output(["./casper.js", startUrl, row[0]], cwd=cwd)
If you're experiencing this kind of nonsense on macOS: don't use aliases. I lost half a day on that. So, change:
subprocess.check_output(
    "scribus-ng -g -ns -py {0} {1}".format(script_path, id),
    stderr=subprocess.STDOUT,
    shell=True)
to
subprocess.check_output(
    "/Applications/Scribus.app/Contents/MacOS/Scribus -g -ns -py {0} {1}".format(script_path, id),
    stderr=subprocess.STDOUT,
    shell=True)
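Rather than hardcoding the absolute path, you can resolve it at runtime. shutil.which looks the program up on PATH; shell aliases are a feature of your interactive shell and are invisible to subprocess. (run_tool is a hypothetical helper name, and "echo" stands in for any real program.)

```python
import shutil
import subprocess

def run_tool(name, *args):
    """Resolve a program on PATH and run it, without relying on shell aliases."""
    path = shutil.which(name)
    if path is None:
        raise FileNotFoundError(f"{name} not found on PATH")
    return subprocess.check_output([path, *args], stderr=subprocess.STDOUT,
                                   universal_newlines=True)

print(run_tool("echo", "hello"))
```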