Incorrect exit code in Python when calling a Windows script - python

I don't seem to be getting the correct exit code from subprocess.call on Windows.
import subprocess
exit_code = subprocess.call(['ant.bat', 'fail'])
print exit_code # prints 0
Running the same command directly in a Windows console gives a non-zero errorlevel:
> echo %errorlevel%
0
> ant fail
> echo %errorlevel%
1
Shouldn't the values from both calls give the same value? Am I doing something wrong?
In the worst case, how do I check the value of %errorlevel% in my python script?
UPDATE:
I tried something like this to get the errorlevel value:
environment = os.environ.copy()
cmd = subprocess.Popen(['ant.bat', 'fail'], env=environment)
for key, value in environment.items():
    print '%s = %s' % (key, value)
However I do not see errorlevel in that dictionary (os.getenv('errorlevel') also fails).

A process exit code and the errorlevel environment variable aren't the same:
ant.bat:
if "%1"=="batch_fail" exit /B 1
if "%1"=="proc_fail" exit 1
>>> import subprocess
>>> subprocess.call(['ant.bat', 'batch_fail'])
0
>>> subprocess.call(['ant.bat', 'proc_fail'])
1
batch_fail will set the errorlevel to 1, but that's no longer available after the shell exits. proc_fail, however, sets the process exit code to 1. The only solution that comes to mind is a wrapper batch file that calls ant.bat and sets the process exit code according to the errorlevel:
ant_wrapper.bat:
@echo off
call ant.bat %1
if errorlevel 1 exit 1
>>> subprocess.call(['ant_wrapper.bat'])
0
>>> subprocess.call(['ant_wrapper.bat', 'batch_fail'])
1
>>> subprocess.call(['ant_wrapper.bat', 'proc_fail'])
1
Edit:
Your update got me thinking about an alternate approach using Popen. You can run the batch file via cmd's /K option, which will run a command without exiting. Then simply send exit %errorlevel% via stdin, and communicate():
#test errorlevel==1
>>> p = subprocess.Popen(['cmd', '/K', 'ant.bat', 'batch_fail'],
...                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> stdoutdata, stderrdata = p.communicate(b'exit %errorlevel%\r\n')
>>> p.returncode
1
#test errorlevel==0
>>> p = subprocess.Popen(['cmd', '/K', 'ant.bat'],
...                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> stdoutdata, stderrdata = p.communicate(b'exit %errorlevel%\r\n')
>>> p.returncode
0

I was able to get the correct behavior by using the batch call command, like this:
cmd = [os.environ['COMSPEC'], '/c', 'call', bat_file]
try:
    subprocess.check_call(cmd)
except subprocess.CalledProcessError:
    pass  # error handling code here
(I used subprocess.check_call, but subprocess.call ought to work the same way.)
It's also always a good idea to put if errorlevel 1 exit 1 after every command in your batch script, to propagate the errors (roughly the equivalent of bash's set -e).

os.system('ant.bat fail') does exactly what you want. It does return the errorlevel.
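More generally, subprocess.call returns the child process's exit code directly. A minimal, platform-neutral sketch of that behavior (a Python child process stands in for ant.bat here, purely for illustration):

```python
import subprocess
import sys

# A stand-in child process that exits with code 3
# (substitute for the ant.bat example above).
rc = subprocess.call([sys.executable, '-c', 'raise SystemExit(3)'])
print(rc)
```

The same propagation works for any executable that sets its process exit code, which is exactly what the `exit 1` (as opposed to `exit /B 1`) form does in a batch file.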

Related

Run cmd file using python

I have a cmd file "file.cmd" containing hundreds of lines of commands.
Example
pandoc --extract-media -f docx -t gfm "sample1.docx" -o "sample1.md"
pandoc --extract-media -f docx -t gfm "sample2.docx" -o "sample2.md"
pandoc --extract-media -f docx -t gfm "sample3.docx" -o "sample3.md"
I am trying to run these commands using a script so that I don't have to go to a file and click on it.
This is my code, and it results in no output:
import os

file1 = open('example.cmd', 'r')
Lines = file1.readlines()
# print(Lines)
for i in Lines:
    print(i)
    os.system(i)
You don't need to read the cmd file line by line. You can simply try the following:
import os
os.system('myfile.cmd')
or using the subprocess module:
import subprocess
p = subprocess.Popen(['myfile.cmd'], shell=True, close_fds=True)
stdout, stderr = p.communicate()
Example:
myfile.cmd:
@ECHO OFF
ECHO Greetings From Python!
PAUSE
script.py:
import os
os.system('myfile.cmd')
The cmd will open with:
Greetings From Python!
Press any key to continue ...
You can debug the issue by checking the return code:
import os
return_code = os.system('myfile.cmd')
assert return_code == 0  # a return code of 0 indicates success
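One caveat worth knowing: on POSIX systems os.system returns an encoded wait status, not the raw exit code, so the value needs decoding there. A small sketch (the Python child is a stand-in for myfile.cmd):

```python
import os
import sys

# Run a child that exits with code 7 (stand-in for a real script).
status = os.system('"%s" -c "raise SystemExit(7)"' % sys.executable)

# On Windows os.system returns the exit code itself; on POSIX it
# returns a wait status that must be decoded with os.WEXITSTATUS.
exit_code = status if os.name == 'nt' else os.WEXITSTATUS(status)
print(exit_code)
```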
Note: os.system works by calling system() in C and can only take up to 65,533 arguments after a command (a 16-bit limitation). Giving one more argument will result in the return code 32512 (which implies the exit code 127).
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function (os.system('command')).
Since it is a command file (.cmd) that only the shell can run, the shell argument must be set to True. And since you are setting shell to True, the command needs to be in string form, not a list.
Use the Popen method to spawn a new process and communicate() to wait on that process (you can give it a timeout as well). If you wish to communicate with the child process, provide the PIPEs (see my example, but you don't have to!).
The code below is for Python 3.3 and beyond:
import subprocess
try:
    proc = subprocess.Popen('myfile.cmd', shell=True,
                            stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    outs, errs = proc.communicate(timeout=15)  # timing out the execution, just if you want; you don't have to!
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
For older Python versions:
import time
import subprocess

proc = subprocess.Popen('myfile.cmd', shell=True)
t = 10
while proc.poll() is None and t >= 0:
    print('Still waiting')
    time.sleep(1)
    t -= 1
if proc.poll() is None:  # only kill it if it hasn't finished
    proc.kill()
In both cases (either Python version), if you don't need the timeout feature and you don't need to interact with the child process, then just use:
proc = subprocess.Popen('myfile.cmd', shell=True)
proc.communicate()
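On Python 3.5 and later the same pattern is more concise with subprocess.run, which accepts a timeout directly. A sketch (the Python child process here is a stand-in for myfile.cmd):

```python
import subprocess
import sys

try:
    proc = subprocess.run(
        [sys.executable, '-c', 'print("hello from child")'],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        timeout=15)
    out = proc.stdout
except subprocess.TimeoutExpired:
    out = None
print(out)
```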

python check_output prints but doesn't store in var

I usually use subprocess.check_output, quite simply:
process = subprocess.check_output("ps aux", shell=True)
print process #display the list of process
If I fear there is something in stderr, I use it like this:
process = subprocess.check_output("ps aux 2> /dev/null", shell=True)
print process #display the list of process
But I have a problem with nginx -V:
modules = subprocess.check_output("nginx -V", shell=True) #display the result
print modules #empty
modules = subprocess.check_output("nginx -V 2> /dev/null", shell=True) #display nothing
print modules #empty
Why is the command nginx -V behaving differently (printing everything to stderr)? How can I easily design a workaround with `subprocess.check_output`?
The way to redirect standard error to standard output in the shell is 2>&1 but you would do well to avoid using the shell at all here.
p = subprocess.Popen(['nginx', '-V'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if out == '':
    modules = err
else:
    modules = out
If you have a more recent Python, also consider switching to subprocess.run()
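Another option that keeps the check_output style is folding stderr into stdout with stderr=subprocess.STDOUT. A sketch, with a Python child standing in for a program like nginx -V that writes only to stderr:

```python
import subprocess
import sys

# Child writes its "version banner" to stderr only, like `nginx -V`.
child = 'import sys; sys.stderr.write("nginx version: fake/1.0\\n")'
out = subprocess.check_output([sys.executable, '-c', child],
                              stderr=subprocess.STDOUT)
print(out)
```

With stderr merged into stdout, check_output captures everything in one byte string, so no shell redirection is needed.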

Python subprocess returncode takes on different values

I'm running a python script demo.py which is as follows:
#!/usr/bin/env python
from subprocess import Popen, PIPE, CalledProcessError
try:
    process = Popen(["/root/script.sh"], stdout=PIPE, stderr=PIPE)
    process_out, process_err = process.communicate()
    return_code = process.returncode
    if process_out:
        print "output:", process_out
    if process_err:
        print "error:", process_err
    print "return code:", return_code
except CalledProcessError as e:
    print "CalledProcessError:", e
except Exception, fault:
    print "fault:", fault
script.sh is as follows:
#!/bin/bash
cd /root/
mkdir foo
cd foo
cat << EOF > bar.txt
random text
EOF
The directory foo already exists. Hence, script.sh should fail and it does. demo.py correctly catches the error in process_err and prints it:
error: mkdir: cannot create directory `foo': File exists
But the value of return_code is still 0 (which indicates successful run).
If my script.sh is as follows:
#!/bin/bash
cd /root/
mkdir foo
process_err prints the same error message but now the value of return_code is 1.
Where is the problem?
Please also suggest scenarios in which process.returncode takes on values other than 0 and 1.
This is not a problem with Python, but with your Bash script, which continues execution after mkdir fails. That is how Bash works by default; you have to tell it to exit when it encounters an error.
Use:
#!/bin/bash
set -e
Or, if you can't change the Bash script and you can only change the Python code:
process = Popen(['bash', '-e', '/root/script.sh'], stdout = PIPE, stderr = PIPE)
From help set:
-e Exit immediately if a command exits with a non-zero status.
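The effect of -e is easy to verify from Python (this sketch assumes a POSIX system with bash available):

```python
import subprocess

# Without -e, bash keeps going after the failing `false`; the script's
# status is that of the last command (`true`), i.e. 0.
rc_plain = subprocess.call(['bash', '-c', 'false; true'])

# With -e, bash stops at the first failing command and that command's
# status becomes the script's exit status.
rc_e = subprocess.call(['bash', '-ec', 'false; true'])

print(rc_plain, rc_e)
```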

How to use python subprocess.call, sending a copy of stdout to a logfile, while detecting the result of the first command

My python script needs to invoke a program, detect if it failed (eg, result != 0) and send the output of the program to both stdout like normal plus a log file.
My default shell is bash. I'm using Python 2.7.9
To send output to both stdout and a file I'd normally use tee:
result = subprocess.call('some_program --an-option | tee -a ' + logfile , shell=True)
However, the pipe in bash will return true even if the first command fails, so this approach fails to detect if the command fails.
If I try to use set -o pipefail in the command (so that the result will indicate if the first command fails) like this:
result = subprocess.call('set -o pipefail && some_program --an_option | tee -a ' + logfile , shell=True)
I get the error /bin/sh: 1: set: Illegal option -o pipefail
Is there a way in python to invoke a command, send the output to both the normal stdout console and a logfile, and still detect if the command failed?
Note: we have to continue sending some_program's output to stdout since stdout is being sent to a websocket.
I get the error /bin/sh: 1: set: Illegal option -o pipefail
Pass executable='/bin/bash' otherwise /bin/sh is used.
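A sketch of that fix (assuming a POSIX system with bash at /bin/bash):

```python
import subprocess

# `false | cat` exits 0 under plain sh semantics; with pipefail the
# pipeline's status reflects the failing left-hand command instead.
rc = subprocess.call('set -o pipefail && false | cat',
                     shell=True, executable='/bin/bash')
print(rc)
```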
You could implement tee in pure Python:
#!/usr/bin/env python2
import sys
from subprocess import Popen, PIPE

chunk_size = 1 << 13
p = Popen(["some_program", "--an-option"], stdout=PIPE, bufsize=1)
with p.stdout, open('logfile', 'ab') as logfile:
    for chunk in iter(lambda: p.stdout.read(chunk_size), b''):
        sys.stdout.write(chunk)
        logfile.write(chunk)
if p.wait() != 0:
    raise RuntimeError("some_program failed")
My preference would be to send stdout to a pipe and then read the pipe in the Python code. The Python code can write to stdout, a file, etc. as required. It would also enable you to set shell=False, as setting it to True is a potential security issue, as mentioned in the documentation.
However, the pipe in bash will return true even if the first command
fails, so this approach fails to detect if the command fails.
That is not true.
But I think you mean: the exit status of 'some_program --an_option | tee -a ' + logfile is always 0, even when some part of the pipeline fails.
Well, combining multiple commands (with && or ||) or connecting commands together via pipes makes the reported exit status unreliable.
Regardless, in the command some_program --an_option | tee -a logfile, nothing is written to the logfile if some_program fails without producing output, so you may not need to worry about the exit code.
Anyway, the best way to do a pipe along with subprocess is to create Popen objects and handle stdout and stdin yourself:
import subprocess as sp

STATUS_OK = 0
logfile = '/tmp/test.log'
commands = {
    'main': 'ls /home',
    'pipe_to': 'tee -a ' + logfile
}

process = sp.Popen(commands['main'], shell=True, stdout=sp.PIPE)
# communicate() reads the output and waits until the command terminates,
# setting the exit status (calling wait() alone before reading can
# deadlock when stdout is a pipe)
stdoutdata = process.communicate()[0]
if process.returncode == STATUS_OK:
    # pipe the last command's output to the "tee" command
    sp.Popen(commands['pipe_to'], stdin=sp.PIPE, shell=True).communicate(stdoutdata)
else:
    # do something when the command ('ls /home' in this case) fails
    pass
That is it!
In the last Popen we invoke Popen.communicate() to send the output of the ls command to the tee command's STDIN.
In the Python docs there's a small section called Replacing shell pipeline; you may want to take a look.
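For reference, the Replacing shell pipeline pattern from the docs looks roughly like this (Python child processes stand in for ls and tee, purely for illustration):

```python
import subprocess
import sys

# First process: produces some output (stand-in for `ls /home`).
p1 = subprocess.Popen([sys.executable, '-c', 'print("file1")'],
                      stdout=subprocess.PIPE)
# Second process: consumes p1's output (stand-in for `tee`).
p2 = subprocess.Popen(
    [sys.executable, '-c', 'import sys; sys.stdout.write(sys.stdin.read())'],
    stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # allow p1 to receive SIGPIPE if p2 exits early
out, _ = p2.communicate()
print(out)
```

Connecting the processes' pipes directly avoids buffering the whole output in Python between the two stages.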

Can python script know the return value of C++ main function in the Android environment

There are several ways of calling C++ executable programs. For example, we can use
def run_exe_return_code(run_cmd):
    process = subprocess.Popen(run_cmd, stdout=subprocess.PIPE, shell=True)
    (output, err) = process.communicate()
    exit_code = process.wait()
    print output
    print err
    print exit_code
    return exit_code
to run a C++ executable program: run_exe_return_code('abc'), where abc is built from the following C++ code:
int main()
{
return 1;
}
In the above code the program's return value is 1, and if we run this Python script on Linux, the return value seen by the Python script is always 1. In the Android environment, however, it seems that the exit code returned to the above Python script is 0, which means success. Is there a solution where the Python script can know the return value of the main function in the Android environment?
By the way, in the Android environment I use adb shell abc instead of abc to run the program.

For your Android problem you can use fb-adb, which "propagates program exit status instead of always exiting with status 0" (preferred), or use this workaround (hackish... not recommended for production use):
def run_exe_return_code(run_cmd):
    process = subprocess.Popen(run_cmd + '; echo $?', stdout=subprocess.PIPE, shell=True)
    (output, err) = process.communicate()
    exit_code = process.wait()
    print output
    print err
    print exit_code
    return exit_code
Note that the last process's exit code is echoed, so get it from the output, not from the exit code of adb.
$? expands to the last command's exit code, so printing it allows you to access it from Python.
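The same echo-$? trick can be demonstrated with an ordinary POSIX shell (bash here stands in for adb shell, which is the assumption):

```python
import subprocess

# Append `; echo $?` so the child's exit code appears on stdout,
# mirroring the adb workaround above. The overall shell status is 0
# because `echo` succeeds, so check_output does not raise.
out = subprocess.check_output('false; echo $?', shell=True,
                              executable='/bin/bash')
print(out)
```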
As to your original question:
I can not reproduce this. Here is a simple example:
Content of .c file:
reut#reut-VirtualBox:~/pyh$ cat c.c
int main() {
return 1;
}
Compile (to a.out by default...):
reut#reut-VirtualBox:~/pyh$ gcc c.c
Content of .py file:
reut#reut-VirtualBox:~/pyh$ cat tstc.py
#!/usr/bin/env python
import subprocess
def run_exe_return_code(run_cmd):
    process = subprocess.Popen(run_cmd, stdout=subprocess.PIPE)
    (output, err) = process.communicate()
    exit_code = process.wait()
    print output
    print err
    print exit_code
run_exe_return_code('./a.out')
Test:
reut#reut-VirtualBox:~/pyh$ ./tstc.py
None
1
exit_code is 1 as expected.
Notice that the return value is always an integer. You may want the output which you can get by using subprocess.check_output:
Run command with arguments and return its output as a byte string.
Example:
>>> subprocess.check_output(["echo", "Hello World!"])
'Hello World!\n'
Note: If the return value is 1, which signals an error, a CalledProcessError exception will be raised (which is usually a good thing since you can respond to it).
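That exception carries the exit code, so you can respond to it. A minimal sketch, with a Python child standing in for the failing program:

```python
import subprocess
import sys

try:
    subprocess.check_output([sys.executable, '-c', 'raise SystemExit(2)'])
    code = 0
except subprocess.CalledProcessError as e:
    # e.returncode holds the child's exit code; e.output holds
    # whatever it printed before exiting.
    code = e.returncode
print(code)
```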
I think you can try commands.getstatusoutput, like this:
import commands
status, result = commands.getstatusoutput(run_cmd)
print result
Yes, you can!
The simple version of the code you submitted would be:
import subprocess
exit_code = subprocess.call('./a.out')
print exit_code
with ./a.out the program compiled from:
int main(){
return 3;
}
Test:
python testRun.py
3
Ah, and note that shell=True can be a security hazard.
https://docs.python.org/2/library/subprocess.html
def run_exe_return_code(run_cmd):
    process = subprocess.Popen(run_cmd, stdout=subprocess.PIPE, shell=True)
See this answer: https://stackoverflow.com/a/5631819/902846. Adapting it to your example, it would look like this:
def run_exe_return_code(run_cmd):
    process = subprocess.Popen(run_cmd, stdout=subprocess.PIPE, shell=True)
    (output, err) = process.communicate()
    process.wait()
    print output
    print err
    print process.returncode
    return process.returncode
The short summary is that you can use Popen.wait, Popen.poll, or Popen.communicate as appropriate to cause the return code to be updated and then check the return code with Popen.returncode afterwards.
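A quick illustration of that sequence (a Python child process stands in for the real command):

```python
import subprocess
import sys

p = subprocess.Popen([sys.executable, '-c', 'raise SystemExit(5)'],
                     stdout=subprocess.PIPE)
p.communicate()      # waits for the child and reaps it
print(p.returncode)  # now populated with the child's exit code
```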
Also see the Python docs for Popen: https://docs.python.org/2/library/subprocess.html
def run_exe_android_return_code(run_cmd):
    # adb shell '{your command here} > /dev/null 2>&1; echo $?'
    process = subprocess.Popen(run_cmd, stdout=subprocess.PIPE, shell=True)
    (output, err) = process.communicate()
    # drop the trailing '\r\n', then keep only the last line,
    # which is the echoed exit status
    pos1 = output.rfind('\n')
    output = output[:pos1-1]
    pos2 = output.rfind('\n')
    output = output[pos2+1:]
    print output
    return output
This is the Python script that is used to check the return value of running an executable on Android.
def run_android_executable(full_path,executable):
    executable = full_path + '/' + executable
    run_cmd = 'adb shell \'LD_LIBRARY_PATH=' + full_path + ':$LD_LIBRARY_PATH ' + executable + '; echo $?\''
    print run_cmd
    error_code = run_exe_android_return_code(run_cmd)
    print 'the error code is'
    print error_code
    if error_code == '1':
        print 'the executable returns error'
    else:
        print 'the executable runs smoothly'
This is the script that is used to run the executable. It is a little different from Reut Sharabani's answer, and it works.
