For example, I'm trying to run a shell command from Python:
from subprocess import run
command = f'ffmpeg -y -i "{video_path}" "{frames_path}/%d.png"'
run(command, shell=True, check=True)
but if it fails I just get subprocess.CalledProcessError: Command 'ffmpeg ...' returned non-zero exit status 127. How can I get the full ffmpeg error message?
It's the check=True kwarg that makes run() raise a CalledProcessError whenever the command exits with a non-zero status. Just remove check=True, and it will stop raising. (Incidentally, exit status 127 usually means the shell could not find the ffmpeg executable at all.) If you want the STDERR printed by ffmpeg, pass capture_output=True. Then the resulting CompletedProcess object will have a .stderr attribute containing the command's STDERR as a bytes object. Use bytes.decode() to turn it into a normal string:
from subprocess import run
command = f'ffmpeg -y -i "{video_path}" "{frames_path}/%d.png"'
proc = run(command, shell=True, capture_output=True)
out = proc.stdout.decode() # stores the output of stdout
err = proc.stderr.decode() # stores the output of stderr
print(err)
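Alternatively, if you want to keep check=True so failures still raise, the CalledProcessError itself carries the captured streams when capture_output=True is used. A sketch, using a stand-in failing command (ffmpeg is not assumed to be installed here):

```python
import subprocess
import sys

# Stand-in for a failing ffmpeg run: writes a message to stderr and exits non-zero.
command = [sys.executable, "-c",
           "import sys; sys.stderr.write('something went wrong'); sys.exit(1)"]

try:
    subprocess.run(command, capture_output=True, check=True)
except subprocess.CalledProcessError as e:
    # With capture_output=True, e.stderr holds the command's STDERR as bytes.
    print("exit status:", e.returncode)
    print("stderr:", e.stderr.decode())
```

This way the failure still interrupts your program, but the full error text is available in the exception handler.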
Related
I want to run the command ffprobe -i test.m4a -show_entries format=duration -v quiet -of csv="p=0". It works in the terminal and returns exit code 0, but running it with subprocess, i.e.
subprocess.check_output(['ffprobe', '-i', 'test.m4a', '-show_entries', 'format=duration', '-v', 'quiet', '-of', 'csv="p=0"'])
raises a CalledProcessError - {Command} returned non-zero exit status 1. I tried running this command in a try/except block and printing the error details, but it just prints an empty byte string b''.
One way to debug the issue is adding the -report argument:
subprocess.check_output(['ffprobe', '-i', 'output.mp4', '-show_entries', 'format=duration', '-v', 'quiet', '-of', 'csv="p=0"', '-report'])
-report is used for creating a log file with name like ffprobe-20220811-232043.log.
The log file shows the following error:
[csv @ 00000213297fe640] Failed to set option '"p' with value '0"' provided to writer context
The log file also shows that the executed "shell command" is:
ffprobe -i output.mp4 -show_entries "format=duration" -v quiet -of "csv=\"p=0\"" -report
The solution is to remove the quotes from "p=0":
subprocess.check_output(['ffprobe', '-i', 'output.mp4', '-show_entries', 'format=duration', '-v', 'quiet', '-of', 'csv=p=0'])
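The underlying reason: with a list of arguments there is no shell to strip the quotes, so ffprobe receives them literally. A quick way to see what a shell would actually pass along is shlex.split:

```python
import shlex

# The shell strips the quotes before the program ever sees them:
print(shlex.split('ffprobe -of csv="p=0"'))
# ['ffprobe', '-of', 'csv=p=0']

# So when building the argument list yourself, write 'csv=p=0' directly;
# passing 'csv="p=0"' keeps the quote characters, which is what made ffprobe fail.
```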
I recommend using subprocess.run instead of check_output.
subprocess.run(command, stdout=output, encoding="utf-8")
command is the variable that holds the command you want to run; a single string works with shell=True, otherwise use a list of arguments separated by commas.
stdout=output means the output is written into the file object output, which you open beforehand.
encoding="utf-8" makes sure the captured output is decoded from bytes into text.
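A minimal sketch of that pattern (the file name output.txt and the stand-in command are just examples):

```python
import subprocess
import sys

# Open the file beforehand and let run() write the command's stdout into it.
with open("output.txt", "w") as output:
    subprocess.run(
        [sys.executable, "-c", "print('hello')"],  # stand-in for your command
        stdout=output,
        encoding="utf-8",
    )

with open("output.txt") as f:
    print(f.read())  # hello
```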
I'm trying to run a process with subprocess and print its entire output if and only if an exception occurs.
Where I was before:
try:
    proc = subprocess.run(
        command,
        capture_output=True,
        check=True,
        text=True,
    )
except subprocess.CalledProcessError as error:
    print(error.output)
This did not work.
Output when subprocess.CalledProcessError occurs:
b''
Replacing capture_output with stdout=subprocess.PIPE resulted in everything being printed regardless of whether an exception occurred or not, and error.output was still empty.
So I experimented:
This prints everything I would see if I executed the command in the command-line.
subprocess.run(
    command,
    stdout=subprocess.PIPE,
)
This prints out nothing.
proc = subprocess.run(
    command,
    capture_output=True,
)
print(proc.stdout.decode())
I also tried subprocess.check_output(), which as far as I know does the same as subprocess.run() with the flags I set in the first code snippet.
What am I missing here? Thanks.
Addendum
import subprocess
command = ['pandoc', 'file']
try:
    proc = subprocess.run(
        command,
        capture_output=True,
        check=True,
    )
except subprocess.CalledProcessError as error:
    print('Exception:')
    print(error.output)
This is an MWE with the specific program I want to run (pandoc).
Output
$ pandoc file
pandoc: file: openBinaryFile: does not exist (No such file or directory)
$ ./samplecode.py
Exception:
b''
So the exception gets triggered, but the output object is empty.
It seems that the error message is present in error.stderr and not in error.output. I tried your example (with an ls of a non-existent file):
import subprocess
command = ['ls', 'file']
try:
    proc = subprocess.run(
        command,
        check=True,
        capture_output=True,
        text=True
    )
except subprocess.CalledProcessError as error:
    print('Exception:')
    print('output : ' + error.output)
    print('stderr : ' + error.stderr)
The output is the following:
Exception:
output :
stderr : ls: file: No such file or directory
Hope it helps.
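If you would rather have everything in one place, you can fold stderr into the captured stdout, so that error.output contains both streams. A sketch with a hypothetical stand-in failing command (not pandoc):

```python
import subprocess
import sys

# Stand-in command that writes to both streams and exits non-zero.
command = [sys.executable, "-c",
           "import sys; print('on stdout'); sys.stderr.write('on stderr'); sys.exit(2)"]

try:
    subprocess.run(
        command,
        check=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # fold stderr into the captured stdout
        text=True,
    )
except subprocess.CalledProcessError as error:
    merged = error.output  # now contains both streams
    print('output : ' + merged)
```

Note that capture_output=True cannot be combined with stderr=subprocess.STDOUT, which is why stdout=subprocess.PIPE is used explicitly here.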
I believe what you mean to use is stderr=subprocess.PIPE, which captures the process's error output so you can print it yourself.
Example:
process = subprocess.Popen(['ls', 'myfile.txt'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(output, error) = process.communicate()
if error:
    print(error)
My Python script needs to invoke a program, detect if it failed (e.g., result != 0), and send the output of the program both to stdout as normal and to a log file.
My default shell is bash. I'm using Python 2.7.9
To send output to both stdout and a file I'd normally use tee:
result = subprocess.call('some_program --an-option | tee -a ' + logfile , shell=True)
However, the pipe in bash will return true even if the first command fails, so this approach fails to detect if the command fails.
If I try to use set -o pipefail in the command (so that the result will indicate if the first command fails) like this:
result = subprocess.call('set -o pipefail && some_program --an_option | tee -a ' + logfile , shell=True)
I get the error /bin/sh: 1: set: Illegal option -o pipefail
Is there a way in python to invoke a command, send the output to both the normal stdout console and a logfile, and still detect if the command failed?
Note: we have to continue sending some_program's output to stdout since stdout is being sent to a websocket.
I get the error /bin/sh: 1: set: Illegal option -o pipefail
Pass executable='/bin/bash' otherwise /bin/sh is used.
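With that change, the pipefail approach from the question works as intended, because bash (unlike /bin/sh on some systems) understands set -o pipefail. A sketch, assuming /bin/bash exists and using false | cat as a stand-in pipeline:

```python
import subprocess

# 'false | cat' normally exits 0 (the status of the last command in the pipeline);
# with pipefail, bash propagates the failure of the first command instead.
result = subprocess.call(
    'set -o pipefail && false | cat',
    shell=True,
    executable='/bin/bash',
)
print(result)  # non-zero, because `false` failed
```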
You could implement tee in pure Python:
#!/usr/bin/env python2
import sys
from subprocess import Popen, PIPE

chunk_size = 1 << 13
p = Popen(["some_program", "--an-option"], stdout=PIPE, bufsize=1)
with p.stdout, open('logfile', 'ab') as logfile:
    # read the child's stdout in chunks, writing each chunk to both destinations
    for chunk in iter(lambda: p.stdout.read(chunk_size), b''):
        sys.stdout.write(chunk)
        logfile.write(chunk)
if p.wait() != 0:
    raise RuntimeError("some_program failed")
My preference would be to send stdout to a pipe, and then read the pipe in the Python code. The Python code can write to stdout, a file, etc. as required. It would also let you set shell=False, since setting it to True is a potential security issue, as mentioned in the documentation.
However, the pipe in bash will return true even if the first command
fails, so this approach fails to detect if the command fails.
That is not quite true.
I think what you mean is: the exit status of 'some_program --an-option | tee -a ' + logfile is always the exit status of the last command in the pipeline (tee), even when an earlier part fails.
In general, combining multiple commands (with && or ||) or connecting them with pipes makes the returned exit status an unreliable indicator of which part failed.
Note that in some_program --an-option | tee -a logfile, tee only writes what it receives, so if some_program fails early the logfile will contain little or nothing.
Anyway, the best way to build a pipeline with subprocess is to create Popen objects and handle stdout and stdin yourself:
import subprocess as sp
STATUS_OK = 0
logfile = '/tmp/test.log'
commands = {
    'main': 'ls /home',
    'pipe_to': 'tee -a ' + logfile,
}

process = sp.Popen(commands['main'], shell=True, stdout=sp.PIPE)

# explicitly wait until the command terminates, setting its exit status code
process.wait()

if process.returncode == STATUS_OK:
    stdoutdata = process.communicate()[0]
    # pipe the last command's output to the "tee" command
    sp.Popen(commands['pipe_to'], stdin=sp.PIPE, shell=True).communicate(stdoutdata)
else:
    # do something when the command ('ls /home' in this case) fails
    pass
That is it!
In the last Popen, we invoke Popen.communicate() to send the output of the ls command to the tee command's STDIN.
In the Python docs there's a short section called Replacing shell pipeline; you may want to take a look.
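The same idea can be sketched with two Popen objects chained by their pipes, using python itself as a portable stand-in for the two commands:

```python
import sys
from subprocess import Popen, PIPE

# First command: produces some output.
p1 = Popen([sys.executable, '-c', "print('hello pipeline')"], stdout=PIPE)
# Second command: reads the first one's stdout, like `first | second` in a shell.
p2 = Popen([sys.executable, '-c',
            'import sys; sys.stdout.write(sys.stdin.read().upper())'],
           stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # allow p1 to receive SIGPIPE if p2 exits early
out = p2.communicate()[0]
print(out.decode())  # HELLO PIPELINE
```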
I have this code from https://pymotw.com/2/subprocess/
I'm not sure how to interpret the code: in the check_output call, 1>&2 redirects the echo's output to stderr, but the stderr=subprocess.STDOUT parameter sends stderr back to stdout.
output = subprocess.check_output(
    'echo to stdout; echo to stderr 1>&2; exit 1',
    shell=True,
    stderr=subprocess.STDOUT,
)
print "*****************"
print 'Have %d bytes in output' % len(output)
print output
Running the code, the print statements are not executed, meaning nothing is captured.
What is this code trying to accomplish?
EDIT
Based on the answer and comments, I was able to run this code:
try:
    output = subprocess.check_output(
        'echo to stdout; echo to stderr 1>&2; exit 1',
        shell=True,  # "No such file or directory" error without it; maybe 1>&2 requires shell=True
        stderr=subprocess.STDOUT,
    )
except subprocess.CalledProcessError as e:
    print "*****************"
    print 'Have %d bytes in output' % len(e.output)
    print e.output
this output:
*****************
Have 20 bytes in output
to stdout
to stderr
However, when I commented out the stderr=subprocess.STDOUT line, I got instead
to stderr
*****************
Have 10 bytes in output
to stdout
EDIT2
I tested more with the stderred library (https://github.com/sickill/stderred), which makes a shell show characters written to stderr in red.
When I executed this code (with the redirection commented out), I saw to stderr in BLACK, which implies it went through stdout.
output = subprocess.check_output(
'echo to stdout; echo to stderr 1>&2; exit 1',
shell=True,
#stderr=subprocess.STDOUT,
)
From this, I guess (correct me if I'm wrong) that with stderr=subprocess.STDOUT, check_output redirects the subprocess's stderr into its stdout, so the error message travels through stdout rather than stderr.
The 1>&2 shell redirection applies only to the echo command it appears on. It tells the shell to send that echo's output to the shell's stderr stream.
The python code stderr=subprocess.STDOUT tells the subprocess module that you want the process's stderr stream to be the same file descriptor as its stdout stream so that you will read whatever the process writes to either stream interleaved together in one stream.
The exit 1 in the shell command means that the shell exits with an error (non-zero) status.
The purpose of the code is to demonstrate that the python function subprocess.check_output will check the exit status and raise an exception when it is non-zero.
If the exit code was non-zero it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute and output in the output attribute.
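In Python 3 syntax, catching the exception makes those attributes visible; a minimal sketch using the same demo command:

```python
import subprocess

try:
    subprocess.check_output(
        'echo to stdout; echo to stderr 1>&2; exit 1',
        shell=True,
        stderr=subprocess.STDOUT,
    )
except subprocess.CalledProcessError as e:
    rc = e.returncode        # 1, from the `exit 1`
    captured = e.output      # both echoes, interleaved into one stream
    print('return code:', rc)
    print(captured.decode())
```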
Your description of:
Running the code, the print commands are not executed
is a bit misleading since you neglect to mention the output that does occur:
Traceback (most recent call last):
File "t.py", line 6, in <module>
stderr=subprocess.STDOUT,
File "/usr/lib/python2.7/subprocess.py", line 573, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command 'echo to stdout; echo to stderr 1>&2; exit 1' returned non-zero exit status 1
I am using the code below to run the git command git tag -l --contains ad0beef66e5890cde6f0961ed03d8bc7e3defc63. If I run this command standalone I see the required output, but through the program below it doesn't work. Does anyone have any input on what could be wrong?
from subprocess import check_call,Popen,PIPE
revtext = "ad0beef66e5890cde6f0961ed03d8bc7e3defc63"
proc = Popen(['git', 'tag', '-l', '--contains', revtext ],stdout=PIPE ,stderr=PIPE)
(out, error) = proc.communicate()
print "OUT"
print out
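The snippet above captures stderr but never prints it. When a command "works in the terminal" but not through subprocess, printing stderr and the return code alongside stdout usually reveals why. A sketch of that, with a hypothetical stand-in command so it runs anywhere (substitute your git invocation):

```python
import sys
from subprocess import Popen, PIPE

# Stand-in for ['git', 'tag', '-l', '--contains', revtext]: writes to both streams.
proc = Popen([sys.executable, '-c',
              "import sys; print('a-tag'); sys.stderr.write('warning: detail')"],
             stdout=PIPE, stderr=PIPE)
out, error = proc.communicate()
print('OUT:', out.decode())
print('ERR:', error.decode())          # the part the original code never printed
print('return code:', proc.returncode)
```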