I am trying to retrieve the error (most importantly the exit code) raised by any command-line execution from within the Python script itself. I am using subprocess for this. When I execute a wrong command it throws an error in the terminal as usual, but then the Python script stops executing and I can't store the error.
Look at the code below. p_status is supposed to store the exit code, but the script stops after the error is thrown in the terminal, before anything is printed.
process = subprocess.Popen([<command>], stdout=subprocess.PIPE)
output = process.communicate()
p_status = process.wait()
print(p_status)
I went through different solutions and tried all of them but couldn't get the required result.
I solved this problem using the following code:
try:
    subprocess.Popen([<command>], stdout=subprocess.PIPE)
except OSError as error:
    print(error.errno)  # OS error number from the failed launch, not the command's exit code
P.S - credits #karolch
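For reference, a minimal sketch (an assumption-based illustration, not the poster's exact code) of the two distinct failure modes: an OSError when the command cannot be launched at all, versus a nonzero returncode when the command runs but fails.

```python
import subprocess

def run_and_get_status(cmd):
    """Run cmd (a list of strings) and return its exit code.

    Returns the OS error number instead if the command could not
    be launched at all (e.g. the executable does not exist).
    """
    try:
        process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    except OSError as error:
        # Launch failure: the executable itself was not found or not runnable.
        return error.errno
    process.communicate()      # wait for the child and drain the pipe
    return process.returncode  # exit code of the command

print(run_and_get_status(["ls", "/nonexistent-path"]))  # nonzero exit code
```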
I'm trying to write a simple program to scan some Bluetooth devices (beacons with advertising), and I have a problem with subprocess.Popen:
p1 = subprocess.Popen(['timeout', '10s', 'hcitool', 'lescan'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p1.wait()
output, error = p1.communicate()
print("stdout: {}".format(output))
print("stderr: {}".format(error))
The output and error variables are empty!
If I remove stdout=subprocess.PIPE, stderr=subprocess.PIPE from the Popen call I can see the right result in the console, and if I change the command to ['ls', '-l'] it works fine and I see the result in the variables.
I've tried subprocess.run (with the timeout) and it is the same.
If I don't use the timeout, the command obviously never ends.
I can't use pybluez, and my Python version is 3.7.
Can someone help me?
Solved using ['timeout', '-s', 'INT', '10s', 'hcitool', 'lescan'] as the command instead of ['timeout', '10s', 'hcitool', 'lescan'].
Maybe in the second case the process was not terminated cleanly (timeout sends SIGTERM by default, while -s INT sends SIGINT, the same signal as Ctrl-C), so I didn't receive the output.
Thank you the same.
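The fix above can be sketched as a small wrapper (a sketch under the assumption that the coreutils timeout binary is available; hcitool itself only exists on a Linux box with BlueZ installed):

```python
import subprocess

def run_with_timeout(cmd, seconds):
    """Wrap cmd with coreutils `timeout -s INT` so the child receives
    SIGINT (as if the user pressed Ctrl-C) instead of the default SIGTERM."""
    wrapped = ['timeout', '-s', 'INT', '%ds' % seconds] + cmd
    result = subprocess.run(wrapped, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    return result.stdout, result.stderr

# e.g. on a Linux box with BlueZ:
# stdout, stderr = run_with_timeout(['hcitool', 'lescan'], 10)
```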
I am trying to determine the best way to execute something on the command line using Python. I have accomplished this with subprocess.Popen() on individual files. However, I am trying to determine the best way to do this many times with numerous different files. I am not sure whether I should create a batch file and then execute that on the command line, or whether I am simply missing something in my code. Novice coder here, so I apologize in advance. The script below returns a returncode of 1 when I use a loop, but 0 when not in a loop. What is the best approach for the task at hand?
def check_output(command, console):
    if console == True:
        process = subprocess.Popen(command)
    else:
        process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
    output, error = process.communicate()
    returncode = process.poll()
    return returncode, output, error
for file in fileList.split(";"):
    ...code to create command string...
    returncode, output, error = check_output(command, False)
    if returncode != 0:
        print("Process failed")
        sys.exit()
EDIT: An example command string looks like this:
C:\Path\to\executable.exe -i C:\path\to\input.ext -o C:\path\to\output.ext
Try using the commands module (only available before Python 3):
>>> import commands
>>> commands.getstatusoutput('ls /bin/ls')
(0, '/bin/ls')
Your code might look like this
import sys
import commands

def runCommand(command):
    ret, output = commands.getstatusoutput(command)
    if ret != 0:
        sys.stderr.writelines("Error: " + output)
    return ret

for file in fileList.split(';'):
    commandStr = ""
    # Create command string
    if runCommand(commandStr):
        print("Command '%s' failed" % commandStr)
        sys.exit(1)
You are not entirely clear about the problem you are trying to solve. If I had to guess why your command is failing in the loop, it's probably the way you handle the console=False case.
If you are merely running commands one after another, then it is probably easiest to cast aside Python and stick your commands into a bash script. I assume you merely want to check errors and abort if one of the commands fails.
#!/bin/bash
function abortOnError(){
    "$@"
    local rc=$?
    if [ $rc -ne 0 ]; then
        echo "The command $1 failed with error code $rc"
        exit 1
    fi
}
abortOnError ls /randomstringthatdoesnotexist
echo "Hello World" # This will never print, because we aborted
Update: OP updated his question with sample data that indicate he is on Windows.
You can get bash for Windows through cygwin or various other packages, but it may make more sense to use PowerShell if you are on Windows. Unfortunately, I do not have a Windows box, but there should be a similar mechanism for error checking. Here is a reference for PowerShell error handling.
You might consider using subprocess.call
from subprocess import call

for file_name in file_list:
    call_args = 'command ' + file_name
    call_args = call_args.split()  # because call takes a list of strings
    call(call_args)
call() returns the command's exit code: 0 for success, nonzero for failure.
What your code is trying to accomplish is to run a command on a file and exit the script if there's an error. subprocess.check_output accomplishes this: if the subprocess exits with a nonzero code, it raises a subprocess.CalledProcessError. Depending on whether you want to handle errors explicitly, your code would look something like this:
for file in fileList.split(";"):
    ...code to create command string...
    subprocess.check_output(command, shell=True)
Which will execute the command and print the shell error message if there is one, or
for file in fileList.split(";"):
    ...code to create command string...
    try:
        subprocess.check_output(command, shell=True)
    except subprocess.CalledProcessError:
        ...handle errors...
        sys.exit(1)
Which will print the shell error code and exit, as in your script.
I want to make a simple Windows executable-launching program, implemented simply with os.system('./calc.exe') in Python, or WinExec(...) / CreateProcess(...) in the Windows API.
This would be a VERY simple and easy task.
However, I want to receive the detailed error report if my child process crashes.
I know I can get the exit code as the return value of functions such as subprocess.call() in Python.
But when a Windows binary crashes, I can see a detailed error report containing the name of the crashed module, the violation code (0xC0000005, etc.), the offset within the crashed module, the time, and so on.
How can I get this information from the parent process, and what would be the easiest and simplest way to implement this?
Thank you in advance.
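Not a full crash report, but the crash code itself is recoverable by the parent: on Windows, a process killed by an unhandled exception exits with the NTSTATUS value (e.g. 0xC0000005 for an access violation), which subprocess surfaces as a negative returncode. A minimal sketch of decoding it (the module name and offset would still require the Windows debugging or Error Reporting APIs, which are beyond this snippet):

```python
def describe_exit(returncode):
    """Render a subprocess return code; on Windows, negative values
    usually correspond to NTSTATUS crash codes."""
    status = returncode & 0xFFFFFFFF  # reinterpret as unsigned 32-bit
    if status >= 0xC0000000:          # NTSTATUS "error" severity range
        return "crashed with status 0x%08X" % status
    return "exited normally with code %d" % returncode

# 0xC0000005 (access violation) surfaces as -1073741819:
print(describe_exit(-1073741819))  # crashed with status 0xC0000005
```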
I haven't tested this, but something like this should do the trick:
import logging
import subprocess

cmd = "ls -al /directory/that/does/not/exist"  # <- or Windows equivalent
logging.info(cmd)
try:
    process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except OSError as err:
    logging.error(err)
else:
    (stdout, stderr) = process.communicate()
    logging.debug(stdout)
    if stderr:
        logging.error(stderr)
A job in Jenkins calls my Python script, in which I want to call make to compile my UT code, and call sys.exit(1) if a compilation error like "make: *** [ ] Error 1" occurs.
I also need to print output in real time.
Here's my python script:
make_process = subprocess.Popen("make clean all;", shell=True, stdout=subprocess.PIPE, stderr=sys.stdout.fileno())
while True:
    line = make_process.stdout.readline()
    if not line:
        break
    print line,  # print the output to the console in real time
    sys.stdout.flush()
But how do I capture the make error and make this Python script fail?
Note:
make_process.wait() is 0 even when a make error happens.
this answer doesn't work for me:
Running shell command from Python and capturing the output
Update:
It turned out to be a makefile issue. See the comments on the accepted answer. But for this python question, pentadecagon gave the right answer.
You can check the return value of the make process with
make_process.poll()
This returns None if the process is still running, or the exit code once it has finished. If you just want the output to end up on the console, there is no need to capture it manually: the output goes to the console anyway, and you can do it like this:
make_process = subprocess.Popen("make clean all", shell=True, stderr=subprocess.STDOUT)
if make_process.wait() != 0:
    something_went_wrong()
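Putting the two requirements together (streaming the output line by line and failing on a nonzero exit code), a sketch, assuming Python 3:

```python
import subprocess
import sys

def run_and_stream(cmd):
    """Run cmd through the shell, echo its output in real time,
    and return its exit code."""
    process = subprocess.Popen(cmd, shell=True,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT,
                               universal_newlines=True)
    for line in process.stdout:
        print(line, end='')  # echo to the console as it arrives
        sys.stdout.flush()
    return process.wait()    # exit code once the stream is drained

# e.g. in the Jenkins-invoked script:
# if run_and_stream("make clean all") != 0:
#     sys.exit(1)           # let the Jenkins job fail
```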
Well, I have a Python script running on Mac OS X. Now I need to modify it to support updating my SVN working copy to a specified time. However, after some research I've found that SVN commands only support updating the working copy to a specified revision.
So I wrote a function to grab the information from the command svn log XXX, to find the revision corresponding to the specified time. Here is my solution:
process = os.popen('svn log XXX')
print process.readline()
print process.readline()
process.close()
To keep the problem simple, I just print the first two lines of the output. However, when I executed the script, I got the error message: svn: Write error: Broken pipe
I think the reason I got the message is that the svn command was still writing when I closed the pipe, so the error arose.
Can anyone help me solve the problem, or give me an alternative solution to reach the goal? Thanks!
I get that error whenever I use svn log | head too; it's not Python-specific. Try something like:
from subprocess import PIPE, Popen

process = Popen(['svn', 'log', 'XXX'], stdout=PIPE, stderr=PIPE)
print process.stdout.readline()
print process.stdout.readline()
to suppress the stderr. You could also just use
stdout, stderr = Popen('svn log XXX | head -n2', stdout=PIPE, stderr=PIPE, shell=True).communicate()
print stdout
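The broken pipe comes from svn writing to a pipe whose reader has gone away. A sketch of reading just the first few lines and then shutting the child down cleanly (using yes as a stand-in for a long-running command like svn log):

```python
from subprocess import PIPE, Popen

def head_of_command(cmd, n):
    """Return the first n lines of cmd's output, then stop the child
    so it does not keep writing to a closed pipe."""
    process = Popen(cmd, stdout=PIPE, stderr=PIPE, universal_newlines=True)
    lines = [process.stdout.readline().rstrip('\n') for _ in range(n)]
    process.terminate()  # stop the producer
    process.wait()       # reap it so no zombie is left behind
    return lines

print(head_of_command(['yes', 'hello'], 2))  # ['hello', 'hello']
```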
Please use pysvn. It is quite easy to use. Or use subprocess.
Does your error still occur if you finally do:
print process.read()
And it is better to wait for the process to finish when you use os.popen or subprocess.