Python subprocess returncode takes on different values - python

I'm running a python script demo.py which is as follows:
#!/usr/bin/env python
from subprocess import Popen, PIPE, CalledProcessError
try:
    process = Popen(["/root/script.sh"], stdout=PIPE, stderr=PIPE)
    process_out, process_err = process.communicate()
    return_code = process.returncode
    if process_out:
        print "output:", process_out
    if process_err:
        print "error:", process_err
    print "return code:", return_code
except CalledProcessError as e:
    print "CalledProcessError:", e
except Exception, fault:
    print "fault:", fault
script.sh is as follows:
#!/bin/bash
cd /root/
mkdir foo
cd foo
cat << EOF > bar.txt
random text
EOF
The directory foo already exists. Hence, script.sh should fail and it does. demo.py correctly catches the error in process_err and prints it:
error: mkdir: cannot create directory `foo': File exists
But the value of return_code is still 0 (which indicates a successful run).
If my script.sh is as follows:
#!/bin/bash
cd /root/
mkdir foo
process_err contains the same error message, but now the value of return_code is 1.
Where is the problem?
Please also suggest scenarios in which process.returncode takes on values other than 0 and 1.

This is not a problem with Python, but with your Bash script, which continues executing after mkdir fails. This is how Bash works by default; you have to tell it to exit when it encounters an error.
Use:
#!/bin/bash
set -e
Or, if you can't change the Bash script and you can only change the Python code:
process = Popen(['bash', '-e', '/root/script.sh'], stdout = PIPE, stderr = PIPE)
From help set:
-e Exit immediately if a command exits with a non-zero status.
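As for the second part of the question: returncode simply mirrors the child's exit status, so it can be any value the script passes to exit (0-255 for a shell script), 127 when a shell cannot find the command, and, on POSIX, a negative number when the child is killed by a signal. A minimal sketch illustrating this (not part of the answer above; assumes bash and a POSIX system):
from subprocess import Popen, PIPE

p = Popen(["bash", "-c", "exit 42"], stdout=PIPE, stderr=PIPE)
p.communicate()
print "exit 42 ->", p.returncode        # 42: whatever the script passed to exit

p = Popen(["sleep", "60"], stdout=PIPE, stderr=PIPE)
p.terminate()                           # send SIGTERM to the child
p.communicate()
print "terminated ->", p.returncode     # -15 on POSIX: negative of the signal number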

Related

How can I get the full error output if subprocess.run fails?

For example, I'm trying to run a bash command from Python:
from subprocess import run
command = f'ffmpeg -y -i "{video_path}" "{frames_path}/%d.png"'
run(command, shell=True, check=True)
but if it fails I just get subprocess.CalledProcessError: Command 'ffmpeg ...' returned non-zero exit status 127. How can I get the full ffmpeg error message?
It's the check=True kwarg that's causing it to throw a CalledProcessError. Just remove check=True, and it will stop throwing the error. If you want to print the STDERR printed by ffmpeg, you can use capture_output=True. Then, the resulting CompletedProcess object will have a .stderr member that contains the STDERR of your command, encoded as a bytes-like string. Use str.decode() to turn it into a normal string:
from subprocess import run
command = f'ffmpeg -y -i "{video_path}" "{frames_path}/%d.png"'
proc = run(command, shell=True, capture_output=True)
out = proc.stdout.decode() # stores the output of stdout
err = proc.stderr.decode() # stores the output of stderr
print(err)
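Alternatively, you can keep check=True and still read the captured stderr, since CalledProcessError carries the child's output when the streams were captured. A minimal sketch of that variant (assuming Python 3.6+ and the same video_path/frames_path variables from the question):
from subprocess import run, PIPE, CalledProcessError

command = f'ffmpeg -y -i "{video_path}" "{frames_path}/%d.png"'
try:
    run(command, shell=True, check=True, stdout=PIPE, stderr=PIPE)
except CalledProcessError as e:
    # e.stderr holds whatever ffmpeg wrote to stderr before failing
    print(e.stderr.decode())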

Run cmd file using python

I have a cmd file "file.cmd" containing hundreds of lines of commands.
Example
pandoc --extract-media -f docx -t gfm "sample1.docx" -o "sample1.md"
pandoc --extract-media -f docx -t gfm "sample2.docx" -o "sample2.md"
pandoc --extract-media -f docx -t gfm "sample3.docx" -o "sample3.md"
I am trying to run these commands using a script so that I don't have to go to a file and click on it.
This is my code, and it results in no output:
import os

file1 = open('example.cmd', 'r')
Lines = file1.readlines()
# print(Lines)
for i in Lines:
    print(i)
    os.system(i)
You don't need to read the cmd file line by line. You can simply try the following:
import os
os.system('myfile.cmd')
or using the subprocess module:
import subprocess
proc = subprocess.Popen(['myfile.cmd'], shell = True, close_fds = True)
stdout, stderr = proc.communicate()
Example:
myfile.cmd:
@ECHO OFF
ECHO Greetings From Python!
PAUSE
script.py:
import os
os.system('myfile.cmd')
The cmd will open with:
Greetings From Python!
Press any key to continue ...
You can debug the issue by checking the return code:
import os
return_code=os.system('myfile.cmd')
assert return_code == 0 #asserts that the return code is 0 indicating success!
Note: os.system works by calling system() in C, which can only take up to 65,533 arguments after a command (a 16-bit limitation). Passing one more argument results in the return code 32512 (which implies exit code 127).
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function (os.system('command')).
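For example, a rough subprocess-based equivalent of the os.system call above (a sketch, assuming Python 3.5+ and the same myfile.cmd) that also raises if the script exits with a non-zero code:
import subprocess

# check=True raises CalledProcessError when myfile.cmd exits non-zero
subprocess.run('myfile.cmd', shell=True, check=True)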
Since it is a command file (.cmd), only the shell can run it, so the shell argument must be set to True. And since you are setting shell to True, the command needs to be in string form, not a list.
Use the Popen method to spawn a new process and communicate() to wait on that process (you can also give it a timeout). If you wish to communicate with the child process, provide the PIPEs (see my example, but you don't have to!).
The code below is for Python 3.3 and beyond:
import subprocess
try:
    proc = subprocess.Popen('myfile.cmd', shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    outs, errs = proc.communicate(timeout=15)  # timing out the execution, just if you want, you don't have to!
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
For older Python versions:
import time

proc = subprocess.Popen('myfile.cmd', shell=True)
t = 10
while proc.poll() is None and t >= 0:
    print('Still waiting')
    time.sleep(1)
    t -= 1
proc.kill()
In both cases (either Python version), if you don't need the timeout feature and you don't need to interact with the child process, then just use:
proc = subprocess.Popen('myfile.cmd', shell=True)
proc.communicate()
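On Python 3.5 and later, subprocess.run wraps this Popen/communicate/kill pattern, including the timeout handling, in a single call. A minimal sketch, assuming the same myfile.cmd:
import subprocess

try:
    # run() waits for the process and kills it itself if the timeout expires
    proc = subprocess.run('myfile.cmd', shell=True,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          timeout=15)
    print(proc.returncode, proc.stdout)
except subprocess.TimeoutExpired:
    print('myfile.cmd did not finish within 15 seconds')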

How to use python subprocess.call, sending a copy of stdout to a logfile, while detecting the result of the first command

My Python script needs to invoke a program, detect whether it failed (e.g., result != 0), and send the program's output both to stdout as normal and to a log file.
My default shell is bash. I'm using Python 2.7.9
To send output to both stdout and a file I'd normally use tee:
result = subprocess.call('some_program --an-option | tee -a ' + logfile , shell=True)
However, the pipe in bash will return true even if the first command fails, so this approach fails to detect if the command fails.
If I try to use set -o pipefail in the command (so that the result will indicate if the first command fails) like this:
result = subprocess.call('set -o pipefail && some_program --an_option | tee -a ' + logfile , shell=True)
I get the error /bin/sh: 1: set: Illegal option -o pipefail
Is there a way in python to invoke a command, send the output to both the normal stdout console and a logfile, and still detect if the command failed?
Note: we have to continue sending some_program's output to stdout since stdout is being sent to a websocket.
I get the error /bin/sh: 1: set: Illegal option -o pipefail
Pass executable='/bin/bash' otherwise /bin/sh is used.
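With that change, the one-liner from the question should work as intended; a sketch (assuming the same some_program and logfile):
import subprocess

result = subprocess.call('set -o pipefail && some_program --an-option | tee -a ' + logfile,
                         shell=True, executable='/bin/bash')
if result != 0:
    print 'some_program (or tee) failed with exit code', result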
You could implement tee in pure Python:
#!/usr/bin/env python2
import sys
from subprocess import Popen, PIPE, CalledProcessError
chunk_size = 1 << 13
p = Popen(["some_program", "--an-option"], stdout=PIPE, bufsize=1)
with p.stdout, open('logfile', 'ab') as logfile:
    for chunk in iter(lambda: p.stdout.read(chunk_size), b''):
        sys.stdout.write(chunk)
        logfile.write(chunk)
if p.wait() != 0:
    raise CalledProcessError(p.returncode, "some_program")
My preference would be to send stdout to a pipe, and then read the pipe in the Python code. The Python code can write to stdout, a file, etc. as required. It would also enable you to set shell=False, as setting it to True is a potential security issue, as mentioned in the documentation.
However, the pipe in bash will return true even if the first command
fails, so this approach fails to detect if the command fails.
That is not true.
But I think you mean: the exit status of 'some_program --an-option | tee -a ' + logfile is always 0, even when some part of the pipeline fails.
Well, chaining multiple commands (with && or ||) or connecting commands together via pipes makes the reported exit status unreliable.
Regardless, in the command some_program --an-option | tee -a logfile, the logfile is not written to if some_program fails, so you don't need to worry about the exit code.
Anyway, the best way to do a pipe with subprocess is to create Popen objects and handle stdout and stdin yourself:
import subprocess as sp
STATUS_OK = 0
logfile = '/tmp/test.log'
commands = {
    'main'   : 'ls /home',
    'pipe_to': 'tee -a ' + logfile
}
process = sp.Popen(commands['main'], shell=True, stdout=sp.PIPE)
# explicitly wait until the command terminates, which sets the exit status code
process.wait()
if process.returncode == STATUS_OK:
    stdoutdata = process.communicate()[0]
    # pipe the last command's output to the "tee" command
    sp.Popen(commands['pipe_to'], stdin=sp.PIPE, shell=1).communicate(stdoutdata)
else:
    # do something when the command ('ls /home' in this case) fails
    pass
That is it!
In the last Popen we invoke Popen.communicate() to send the output of the ls command to tee's STDIN.
In the Python docs there's a tiny tutorial called Replacing shell pipeline; you may want to take a look.
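Applied to this question, that pattern looks roughly like the sketch below (my adaptation, not a quote from the tutorial; assumes the same logfile as above): the two processes are connected with a real pipe, and the first command's exit status is checked separately from tee's.
from subprocess import Popen, PIPE

p1 = Popen(['ls', '/home'], stdout=PIPE)
p2 = Popen(['tee', '-a', logfile], stdin=p1.stdout)   # tee inherits our stdout
p1.stdout.close()        # allow p1 to receive SIGPIPE if tee exits early
p2.communicate()         # wait for tee; output appears on stdout and in logfile
if p1.wait() != 0:
    # the first command failed; tee's exit status does not mask it
    pass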

popen command not giving required output

I am using the code below to run the git command "git tag -l --contains ad0beef66e5890cde6f0961ed03d8bc7e3defc63". If I run this command standalone I see the required output, but through the program below it doesn't work. Does anyone have any inputs on what could be wrong?
from subprocess import check_call,Popen,PIPE
revtext = "ad0beef66e5890cde6f0961ed03d8bc7e3defc63"
proc = Popen(['git', 'tag', '-l', '--contains', revtext ],stdout=PIPE ,stderr=PIPE)
(out, error) = proc.communicate()
print "OUT"
print out

Incorrect exit code in python when calling windows script

I don't seem to be getting the correct exit code from subprocess.call on Windows.
import subprocess
exit_code = subprocess.call(['ant.bat', 'fail'])
print exit_code # prints 0
Doing the same thing directly on the Windows command line returns something other than 0:
> echo %errorlevel%
0
> ant fail
> echo %errorlevel%
1
Shouldn't both calls return the same value? Am I doing something wrong?
In the worst case, how do I check the value of %errorlevel% in my python script?
UPDATE:
I tried something like this to get the errorlevel value:
environment = os.environ.copy()
cmd = subprocess.Popen(['ant.bat', 'fail'], env = environment)
for key, value in environment.items():
    print '%s = %s' % (key, value)
However I do not see errorlevel in that dictionary (os.getenv['errorlevel'] also fails).
A process exit code and the errorlevel environment variable aren't the same:
ant.bat:
if "%1"=="batch_fail" exit /B 1
if "%1"=="proc_fail" exit 1
>>> import subprocess
>>> subprocess.call(['ant.bat', 'batch_fail'])
0
>>> subprocess.call(['ant.bat', 'proc_fail'])
1
batch_fail will set the errorlevel to 1, but that's no longer available after the shell exits. proc_fail, however, sets the process exit code to 1. The only solution that comes to mind is a wrapper batch file that calls ant.bat and sets the process exit code according to the errorlevel:
ant_wrapper.bat:
@echo off
call ant.bat %1
if errorlevel 1 exit 1
>>> subprocess.call(['ant_wrapper.bat'])
0
>>> subprocess.call(['ant_wrapper.bat', 'batch_fail'])
1
>>> subprocess.call(['ant_wrapper.bat', 'proc_fail'])
1
Edit:
Your update got me thinking about an alternate approach using Popen. You can run the batch file via cmd's /K option, which will run a command without exiting. Then simply send exit %errorlevel% via stdin, and communicate():
#test errorlevel==1
>>> p = subprocess.Popen(['cmd', '/K', 'ant.bat', 'batch_fail'],
...                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> stdoutdata, stderrdata = p.communicate(b'exit %errorlevel%\r\n')
>>> p.returncode
1
#test errorlevel==0
>>> p = subprocess.Popen(['cmd', '/K', 'ant.bat'],
...                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> stdoutdata, stderrdata = p.communicate(b'exit %errorlevel%\r\n')
>>> p.returncode
0
I was able to get the correct behavior by using the batch call command, like
cmd = [os.environ['COMSPEC'], '/c', 'call', bat_file]
try:
    subprocess.check_call(cmd)
except subprocess.CalledProcessError:
    # Error handling code
    pass
(I used subprocess.check_call but subprocess.call ought to work the same way).
It's also always a good idea to put if errorlevel 1 exit 1 after every command in your batch script, to propagate the errors (roughly the equivalent of bash's set -e).
os.system('ant.bat fail') does exactly what you want. It does return the errorlevel.
