wrap python os.system method

I have the following Perl module for wrapping CORE::system in Perl scripts:
package system_wrapper;

sub check_system {
    my ($cmd) = @_;
    my $err = CORE::system($cmd);
    if ($err != 0) {
        print "Error occurred when executing: $cmd. Exiting.\n";
        exit(-1);
    }
}

*CORE::GLOBAL::system = \&check_system;

1;
__END__
I'm attempting to achieve the same thing in Python. I can't work out how to extend the decorator syntax described here to this os method.
I would like calls to the wrapped method to be exactly the same as the unwrapped.
i.e. status = os.system("mycmd" + " myarg")

You can just monkey-patch os.system: rename the real os.system to something else, then create a function that uses it and assign that function to os.system:
import os
import sys

def my_os_system(cmd):
    err = os._system(cmd)
    if err != 0:
        print "Error occurred when executing: %s. Exiting." % cmd
        sys.exit(-1)

os._system = os.system
os.system = my_os_system
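If you'd rather keep the decorator syntax from the linked question, a minimal sketch along the same lines (Python 2 print to match the code above; exit_on_error is an illustrative name, not a standard API):
import os
import sys

def exit_on_error(func):
    # Decorator: wrap any command-runner so a nonzero status aborts the script
    def wrapper(cmd):
        status = func(cmd)
        if status != 0:
            print "Error occurred when executing: %s. Exiting." % cmd
            sys.exit(-1)
        return status
    return wrapper

os.system = exit_on_error(os.system)

# Calls look exactly like the unwrapped version:
status = os.system("mycmd" + " myarg")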

Related

Exit if the called python script encounters an error

I have a central python script that calls various other python scripts and looks like this:
os.system("python " + script1 + args1)
os.system("python " + script2 + args2)
os.system("python " + script3 + args3)
Now, I want to exit from my central script if any of the sub-scripts encounter an error.
What is happening with the current code: say script1 encounters an error. The console displays that error, and then the central script moves on to call script2, and so on. I want to display the encountered error and exit the central script immediately.
What is the best way to do this?
Overall this is a terrible way to execute a series of commands from within Python. However, here's a minimal way to handle it:
import os
import sys

for script, args in some_tuple_of_commands:
    exit_code = os.system("python " + script + args)
    if exit_code > 0:
        print("Error %d running 'python %s %s'" % (exit_code, script, args),
              file=sys.stderr)
        sys.exit(exit_code)
But honestly, this is all horrible. It's almost always a bad idea to concatenate strings and pass them to your shell for execution from within any programming language.
Look at the subprocess module for much more sane handling of subprocesses in Python.
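For example, a minimal sketch of the loop above using subprocess.check_call with argument lists (assuming Python 3, and that some_tuple_of_commands holds (script, argument-list) pairs):
import subprocess
import sys

for script, args in some_tuple_of_commands:
    try:
        # A list of arguments never touches the shell
        subprocess.check_call([sys.executable, script] + list(args))
    except subprocess.CalledProcessError as e:
        print("Error %d running %s" % (e.returncode, script), file=sys.stderr)
        sys.exit(e.returncode)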
Also consider trying the sh or pexpect third-party modules, depending on what you're trying to do with input or output.
You can try subprocess:
import subprocess
import sys

try:
    output = subprocess.check_output("python test.py", shell=True)
    print(output)
except subprocess.CalledProcessError as e:
    print(e)
    sys.exit(1)

print("hello world")
I don't know if it's ideal for you, but enclosing these commands in a function seems a good idea to me:
I am using the fact that when a process exits with an error, os.system(cmd) returns a nonzero status (on Unix the child's exit code sits in the high byte of the status, so an exit code of 1 shows up as 256); on success it returns 0.
import os
import sys

def runscripts():
    if os.system("python " + script1 + args1): return -1  # script1 failed; stop here
    if os.system("python " + script2 + args2): return -2  # script2 failed
    if os.system("python " + script3 + args3): return -3  # script3 failed
    return 0

runscripts()

# or, if you want to exit the main program
if runscripts(): sys.exit(1)
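To see why a failing child shows up as 256: on Unix, os.system returns the raw wait status, with the child's exit code in the high byte. A minimal sketch of decoding it (POSIX only, reusing script1 and args1 from the question):
import os

status = os.system("python " + script1 + args1)  # raw wait status, e.g. 256
if os.WIFEXITED(status):
    print("exit code: %d" % os.WEXITSTATUS(status))  # 256 >> 8 == 1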
Invoking the operating system like that is a security breach waiting to happen. One should use the subprocess module, because it is more powerful and does not invoke a shell (unless you specifically tell it to). In general, avoid invoking the shell whenever possible (see this post).
You can do it like this:
import subprocess
import sys
# create a list of commands
# each command to subprocess.run must be a list of arguments, e.g.
# ["python", "echo.py", "hello"]
cmds = [("python " + script + " " + args).split()
for script, args in [(script1, args1), (script2, args2), (script3,
args3)]]
def captured_run(arglist):
"""Run a subprocess and return the output and returncode."""
proc = subprocess.run( # PIPE captures the output
arglist, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return proc.stdout, proc.stderr, proc.returncode
for cmd in cmds:
stdout, stderr, rc = captured_run(cmd)
# do whatever with stdout, stderr (note that they are bytestrings)
if rc != 0:
sys.exit(rc)
If you don't care about the output, just remove the subprocess.PIPE stuff and return only the returncode from the function. You may also want to add a timeout to the execution, see the subprocess docs linked above for how to do that.
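For instance, a timeout on the call above might look like this (a sketch reusing cmd from the loop; the 30-second limit is an arbitrary choice):
import subprocess
import sys

try:
    proc = subprocess.run(cmd, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE, timeout=30)
except subprocess.TimeoutExpired:
    # run() kills the child and waits for it before re-raising
    sys.exit("command timed out")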

Can a Python script know the return value of a C++ main function in the Android environment

There are several ways of calling C++ executable programs. For example, we can use
import subprocess

def run_exe_return_code(run_cmd):
    process = subprocess.Popen(run_cmd, stdout=subprocess.PIPE, shell=True)
    (output, err) = process.communicate()
    exit_code = process.wait()
    print output
    print err
    print exit_code
    return exit_code
to run a C++ executable: run_exe_return_code('abc'), where abc is built from the following C++ code:
int main()
{
    return 1;
}
In the above code the program's return value is 1, and when we run this Python script on Linux, the return value seen by the script is indeed always 1. However, in the Android environment the exit code seen by the above Python script seems to always be 0, which means success. Is there a way for the Python script to know the return value of main in the Android environment?
By the way, in the Android environment I use adb shell abc instead of abc to run the program.
For your android problem you can use fb-adb which "propagates program exit status instead of always exiting with status 0" (preferred), or use this workaround (hackish... not recommended for production use):
def run_exe_return_code(run_cmd):
    process = subprocess.Popen(run_cmd + '; echo $?', stdout=subprocess.PIPE, shell=True)
    (output, err) = process.communicate()
    exit_code = process.wait()
    print output
    print err
    print exit_code
    return exit_code
Note that the last process's exit code is echoed, so get it from the output, not from the exit_code of adb. $? expands to the exit status of the last command, so printing it lets you read it from Python.
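A sketch of pulling that echoed status out of the output (Python 2 to match the code above; run_adb_return_code is an illustrative name, and it assumes the echoed status is the last line of stdout):
import subprocess

def run_adb_return_code(run_cmd):
    # Append '; echo $?' so the device-side exit status rides along on stdout
    process = subprocess.Popen(run_cmd + '; echo $?',
                               stdout=subprocess.PIPE, shell=True)
    (output, err) = process.communicate()
    # The last non-empty line is the echoed status; the rest is real output
    lines = output.strip().splitlines()
    return int(lines[-1])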
As to your original question:
I cannot reproduce this. Here is a simple example:
Content of .c file:
reut@reut-VirtualBox:~/pyh$ cat c.c
int main() {
    return 1;
}
Compile (to a.out by default...):
reut@reut-VirtualBox:~/pyh$ gcc c.c
Content of .py file:
reut@reut-VirtualBox:~/pyh$ cat tstc.py
#!/usr/bin/env python
import subprocess

def run_exe_return_code(run_cmd):
    process = subprocess.Popen(run_cmd, stdout=subprocess.PIPE)
    (output, err) = process.communicate()
    exit_code = process.wait()
    print output
    print err
    print exit_code

run_exe_return_code('./a.out')
Test:
reut@reut-VirtualBox:~/pyh$ ./tstc.py
None
1
exit_code is 1 as expected.
Notice that the return value is always an integer. If you want the program's output, you can get it by using subprocess.check_output:
Run command with arguments and return its output as a byte string.
Example:
>>> subprocess.check_output(["echo", "Hello World!"])
'Hello World!\n'
Note: if the return code is nonzero, which signals an error, a CalledProcessError exception will be raised (which is usually a good thing, since you can respond to it).
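For example (a sketch; any nonzero exit status raises the exception, not only 1):
import subprocess

try:
    output = subprocess.check_output('./a.out')
except subprocess.CalledProcessError as e:
    print("a.out failed with exit code %d" % e.returncode)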
I think you can try commands.getstatusoutput (Python 2 only), like this:
import commands
status, result = commands.getstatusoutput(run_cmd)
print result
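Note that the commands module was removed in Python 3; a sketch of the equivalent there, using subprocess.getstatusoutput:
# Python 3 replacement for commands.getstatusoutput
import subprocess

status, result = subprocess.getstatusoutput(run_cmd)
print(result)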
Yes, you can!
The simple version of the code you submitted would be:
import subprocess

exit_code = subprocess.call('./a.out')
print exit_code
with ./a.out the program compiled from:
int main() {
    return 3;
}
Test:
python testRun.py
3
Ah, and note that shell=True can be a security hazard.
https://docs.python.org/2/library/subprocess.html
def run_exe_return_code(run_cmd):
    process = subprocess.Popen(run_cmd, stdout=subprocess.PIPE, shell=True)
See this answer: https://stackoverflow.com/a/5631819/902846. Adapting it to your example, it would look like this:
def run_exe_return_code(run_cmd):
    process = subprocess.Popen(run_cmd, stdout=subprocess.PIPE, shell=True)
    (output, err) = process.communicate()
    process.wait()
    print output
    print err
    print process.returncode
    return process.returncode
The short summary is that you can use Popen.wait, Popen.poll, or Popen.communicate as appropriate to cause the return code to be updated and then check the return code with Popen.returncode afterwards.
Also see the Python docs for Popen: https://docs.python.org/2/library/subprocess.html
import subprocess

def run_exe_android_return_code(run_cmd):
    # adb shell '{your command here} > /dev/null 2>&1; echo $?'
    process = subprocess.Popen(run_cmd, stdout=subprocess.PIPE, shell=True)
    (output, err) = process.communicate()
    # drop the trailing line ending (adb shell output uses \r\n)
    pos1 = output.rfind('\n')
    output = output[:pos1-1]
    # keep only the last remaining line: the echoed exit status
    pos2 = output.rfind('\n')
    output = output[pos2+1:]
    print output
    return output
This is the Python script that is used to check the return value of running an executable on Android.
def run_android_executable(full_path, executable):
    executable = full_path + '/' + executable
    run_cmd = 'adb shell \'LD_LIBRARY_PATH=' + full_path + ':$LD_LIBRARY_PATH ' + executable + '; echo $?\''
    print run_cmd
    error_code = run_exe_android_return_code(run_cmd)
    print 'the error code is'
    print error_code
    if error_code == '1':
        print 'the executable returns error'
    else:
        print 'the executable runs smoothly'
This is the script that is used to run the executable. It is a little different from Reut Sharabani's answer, and it works.

Bidirectional node/python communication

I'm trying to implement simple bidirectional communication between node and a spawned Python process.
Python:
import sys

for l in sys.stdin:
    print "got: %s" % l
Node:
var spawn = require('child_process').spawn;
var child = spawn('python', ['-u', 'ipc.py']);
child.stdout.on('data', function(data){ console.log("stdout: " + data); });

var i = 0;
setInterval(function(){
    console.log(i);
    child.stdin.write("i = " + i++ + "\n");
}, 1000);
Using -u on Python forces unbuffered I/O, so I would expect to see the output (I've also tried sys.stdout.flush()), but I don't. I know I can use child.stdin.end(), but that prevents me from writing data later.
Your Python code crashes with TypeError: not all arguments converted during string formatting at the line
print "got: " % l
You ought to write
print "got: %s" % l
You can see the errors that Python outputs by doing:
var child = spawn('python', ['-u', 'ipc.py'],
                  { stdio: [ 'pipe', 'pipe', 2 ] });
on Node.js, that is, pipe only standard output but let the standard error go to Node's stderr.
Even with these fixes, and even accounting for -u, iteration via sys.stdin.__iter__ will be buffered. To work around it, use .readline instead:
for line in iter(sys.stdin.readline, ''):
    print "got: %s" % line
    sys.stdout.flush()
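For reference, a Python 3 rendering of the same workaround, where print is a function and can flush itself (a sketch):
import sys

for line in iter(sys.stdin.readline, ''):
    print("got: %s" % line, flush=True)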

ImportError: No module named seqfmt

I am running a python script from Groovy via:
Process p = Runtime.getRuntime().exec("python /Users/afrieden/Projects/hgvs/hgvs/tests/test_gsg_variants.py");
String s = null;
BufferedReader stdInput = new BufferedReader(new InputStreamReader(p.getInputStream()));
BufferedReader stdError = new BufferedReader(new InputStreamReader(p.getErrorStream()));

System.out.println("Here is the standard output of the command:\n");
while ((s = stdInput.readLine()) != null) {
    System.out.println(s);
}

// read any errors from the attempted command
System.out.println("Here is the standard error of the command (if any):\n");
while ((s = stdError.readLine()) != null) {
    System.out.println(s);
}
However, it calls what looks like a Cython library, seqfmt (seqfmt.c and seqfmt.pyx). I have added its locations to sys.path:
import sys
sys.path.append("/Users/afrieden/Projects/hgvs/build/lib/")
sys.path.append("/Users/afrieden/pythonLib/pygr-0.8.2/")
sys.path.append("/Users/afrieden/pythonLib/pygr-0.8.2/pygr/seqfmt.pyx")
sys.path.append("/Users/afrieden/pythonLib/pygr-0.8.2/pygr/seqfmt.c")
import hgvs
import csv
import hgvs.utils
from pygr.seqdb import SequenceFileDB
Any thoughts on how I can get it to run? Thanks!
EDIT:
It does work with python from the command line just fine.
Simplifying your script slightly, does this work:
def proc = [ 'bash', '-c', 'python /Users/afrieden/Projects/hgvs/hgvs/tests/test_gsg_variants.py' ].execute()
StringWriter out = new StringWriter()
StringWriter err = new StringWriter()
proc.waitForProcessOutput( out, err )
println 'Here is the standard output of the command:'
println out.toString()
println 'Here is the standard error of the command (if any):'
println err.toString()

Executing many sub processes in groovy fails

I need to create a script that calls an application (a C++ binary) 4000 times. The application takes some arguments and for each call writes a zip file to disk, so when the script has executed, 4000 zip files will have been written to disk. The application supports multiple threads.
I first created a bash script that does the job, and it works fine. But now I need the script to be platform-independent, so I have tried to port it to Groovy, something like this:
for (int i = 1; i <= 4000; i++) {
    def command = """myExecutable
                     a=$argA
                     b=$outDir"""

    def proc = command.execute()  // Call *execute* on the string
    proc.waitFor()                // Wait for the command to finish

    // Obtain status and output
    println "return code: ${proc.exitValue()}"
    println "stderr: ${proc.err.text}"
    println "stdout: ${proc.in.text}"  // *out* from the external program is *in* for groovy
    println "iteration : " + i
}
But after 381 zip files have been written to disk, the script just hangs. Do I need to close the process after each call, or something similar?
Here:
http://groovy.codehaus.org/Process+Management
it says that it's known that java.lang.Process might hang or deadlock. Is it a no-go to do something like this in Groovy?
I will also give it a try in Python to see if it gives the same problems.
It might be the output stream blocking:
(1..<4000).each { i ->
println "iteration : $i"
def command = """myExecutable
a=$argA
b=$outDir"""
def proc = command.execute()
// Consume the outputs from the process and pipe them to our output streams
proc.consumeProcessOutput( System.out, System.err )
// Wait for the command to finish
proc.waitFor()
// Obtain status
println "return code: ${proc.exitValue()}"
}
Yes, you should close the streams belonging to the process.
Or, as @tim_yates says, use consumeProcessOutput, or, in a concurrent solution, waitForProcessOutput, which closes them for you.
For parallel computation you could use something like this:
import groovyx.gpars.GParsPool

GParsPool.withPool(8) { // Start a pool with 8 threads.
    (1..4000).toList().eachParallel {
        def p = "myExecutable a=$argA b=$outDir".execute()
        def sout = new StringBuffer()
        def serr = new StringBuffer()
        p.waitForProcessOutput(sout, serr)

        synchronized (System.out) {
            println "return code: ${p.exitValue()}"
            println "stderr: $serr"
            println "stdout: $sout"
            println "iteration $it"
        }
    }
}
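Since the question mentions trying Python next, here is a rough Python sketch of the same parallel fan-out using concurrent.futures; myExecutable, argA, and outDir are placeholders carried over from the Groovy code:
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_one(i):
    # Capturing both streams avoids the pipe-buffer deadlock described above
    proc = subprocess.run(
        ["myExecutable", "a=%s" % argA, "b=%s" % outDir],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return i, proc.returncode

with ThreadPoolExecutor(max_workers=8) as pool:  # 8 threads, as in the Groovy pool
    for i, rc in pool.map(run_one, range(1, 4001)):
        print("iteration %d return code: %d" % (i, rc))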
