I want to embed C++ in a Python application, without using the Boost library.
If a C++ function fails an assertion, I want to catch it and print an error in my Python application, or get some detailed information such as the line number in the Python script that triggered the error. Most importantly, I want Python execution to continue afterwards.
How can I do this? I can't find any functions for getting detailed assertion information in the Python C API or in C++.
C++ Code
#include <cassert>

// extern "C" prevents name mangling so ctypes can find sum() by name;
// __declspec(dllexport) exports it from the Windows DLL.
extern "C" __declspec(dllexport) void sum(int iA, int iB)
{
    assert(iA + iB > 10);
}
Python Code
from ctypes import *

mydll = WinDLL("C:\\Users\\cppwrapper.dll")

try:
    mydll.sum(10, 3)
except:
    print "exception occurred"

# Control should go to the user when an exception occurs: if they answer yes,
# continue with the code below, otherwise abort execution.
# I need help with this part as well.

import re

for test_string in ['555-1212', 'ILL-EGAL']:
    if re.match(r'^\d{3}-\d{4}$', test_string):
        print test_string, 'is a valid US local phone number'
    else:
        print test_string, 'rejected'
Thanks in advance.
This can't really be done in exactly the way you describe (as was also pointed out in the comments).
Once the assertion fails and SIGABRT is sent to the process, what happens next is in the operating system's hands, and generally the process will be killed.
The simplest way to recover from a process being killed is to have it launched by an external process: a secondary Python script, or a shell script. In bash, for instance, it's easy to launch another process, check whether it terminates normally or is aborted, log the result, and continue.
For instance, here's some bash code that executes the command line $command, logs the standard error channel to a log file, and checks the return code (which will be 134, i.e. 128 + 6, for SIGABRT), doing something different in each case:
$command 2> error.log
error_code="$?"
if check_errs $error_code; then
# Do something...
return 0
else
# Do something else...
return 1
fi
Where check_errs is some other subroutine that you would write.
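If you would rather keep the supervising logic in Python, the same pattern can be written with the subprocess module. Below is a minimal sketch under the assumption that the DLL call lives in a hypothetical worker script named worker.py; on POSIX systems a negative returncode means the child was killed by that signal (so -6 is SIGABRT), while on Windows a failed assert typically surfaces as exit code 3 from abort():

import subprocess

# worker.py is a hypothetical script that loads the DLL and calls sum().
proc = subprocess.Popen(["python", "worker.py"], stderr=subprocess.PIPE)
_, err = proc.communicate()

if proc.returncode == 0:
    print "worker finished normally"
else:
    print "worker died with code %s, stderr: %s" % (proc.returncode, err)
    # Ask the user whether to continue, as the question requires.
    if raw_input("continue anyway? [y/n] ").lower() != "y":
        raise SystemExit(1)

# ...execution continues here (e.g. the phone-number checks above).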
Related
I have a Python script running in the build phase of a Jenkins job, in the "Execute shell" area.
The problem is that if the script fails, I still see the build as successful. I did check, and Python uses something similar to the return code of a shell command (the one you query with $?), although I can't figure out where to put the call so that "any" failure in the Python script triggers that return code.
import sys
..... code here
....functions
sys.exit(-1)
I need to call sys.exit(-1), but where do you put it in the Python code? So far I can only handle it using try blocks, putting sys.exit(-1) in the except part, but this adds a lot of code, since I have a lot of functions in the script.
Is there a global location that I can use to trigger the failure, so that the Jenkins job fails?
import sys

def main():
    try:
        do_the_work()
        sys.exit(0)  # success
    except:
        # insert code to log and debug the problem
        sys.exit(-1)
In words: if do_the_work returns, call sys.exit(0). If do_the_work raises any exception which it does not itself handle, call sys.exit(-1). Inside do_the_work, do not return except on success; raise an exception on any unrecoverable error state, for example:
class DoSomethingError(Exception):
    pass

...
ok = do_something()
if not ok:
    print ("do_something error return")
    raise DoSomethingError
Logging and debugging: search Stack Overflow for Python answers on how to obtain and log an error traceback from a caught exception.
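For instance, a minimal sketch of that logging step, using the standard traceback module inside the top-level handler shown above:

import sys
import logging
import traceback

def main():
    try:
        do_the_work()
        sys.exit(0)  # success
    except Exception:
        # format_exc() renders the full traceback of the active exception
        logging.critical("unhandled error:\n%s", traceback.format_exc())
        sys.exit(-1)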
This question is a duplicate; please see the similar questions below, which are already answered.
How/When does Execute Shell mark a build as failure in Jenkins?
Jenkins Build Script exits after Google Test execution
I am writing a tool in Python and C. The Python script reads a configuration file, performs some validation, and makes several calls to the C program.
system: RHEL 5.7,
python: 2.7.6,
gcc: 4.5.2
Some of the parameters to the called C program are the paths of input files. There is one case where the input file path is the same for several C program invocations. In this case only the first call succeeds, and the returncode from the python subprocess module is '-11'.
I am not sure how to progress this. For a start, I cannot find documentation indicating what '-11' might mean as an exit status. It does not appear among the 'standard' codes in /usr/include/sysexits.h. I am guessing that the code could also be interpreted as 0xf5 or 245, since exit codes are, I believe, really signed 8-bit values.
I have added debug to the start of the C program to print out the arguments it was called with, but nothing appears for the failed invocations. I can understand how the C might fail re-opening a file which was read on the previous invocation (perhaps), but the code doesn't even get that far!
So, where is the exit code coming from? Is it from the (bash) environment which the python subprocess module presumably uses to invoke the C program? Is it from the C runtime for the C program before it even reaches main?
I suppose I could progress this by moving the 'loop' down into the C code, so that it only gets called once per input file path, but that still does not explain this behaviour. Could someone please explain how I can determine the cause of this error? Thanks.
(FWIW) calling from Python:
import subprocess

try:
    subprocess.check_call(args)
except subprocess.CalledProcessError as e:
    print e
Entry to the C:
printf( "\n--- swizzle\n\nargs:\n" );
for ( int i = 0; i < argc; i++ ) printf( "- %s\n", argv[ i ]);
Error output:
Command '[..]' returned non-zero exit status -11
Return code -11 means "segmentation fault". In the subprocess module, a negative return code means that the process was terminated by a signal; -11 means it was signal 11, which is SIGSEGV.
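If you want the wrapper to report that in a readable form, you can map the negative code back to a signal name with the standard signal module; a minimal sketch, reusing the args list from the question:

import signal
import subprocess

try:
    subprocess.check_call(args)
except subprocess.CalledProcessError as e:
    if e.returncode < 0:
        # Build a {number: 'SIGNAME'} map from the signal module's constants.
        names = dict((getattr(signal, n), n) for n in dir(signal)
                     if n.startswith('SIG') and not n.startswith('SIG_'))
        print 'killed by %s' % names.get(-e.returncode, -e.returncode)
    else:
        print 'exited with status %d' % e.returncode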
NB: I have not attempted to reproduce the problem described below under Windows, or with versions of Python other than 2.7.3.
The most reliable way to elicit the problem in question is to pipe the output of the following test script through : (under bash):
try:
    for n in range(20):
        print n
except:
    pass
I.e.:
% python testscript.py | :
close failed in file object destructor:
sys.excepthook is missing
lost sys.stderr
My question is:
How can I modify the test script above to avoid the error message when the script is run as shown (under Unix/bash)?
(As the test script shows, the error cannot be trapped with a try-except.)
The example above is, admittedly, highly artificial, but I'm running into the same problem sometimes when the output of a script of mine is piped through some 3rd party software.
The error message is certainly harmless, but it is disconcerting to end-users, so I would like to silence it.
EDIT: The following script, which differs from the original one above only in that it redefines sys.excepthook, behaves exactly like the one given above.
import sys

STDERR = sys.stderr

def excepthook(*args):
    print >> STDERR, 'caught'
    print >> STDERR, args

sys.excepthook = excepthook

try:
    for n in range(20):
        print n
except:
    pass
How can I modify the test script above to avoid the error message when the script is run as shown (under Unix/bash)?
You will need to prevent the script from writing anything to standard output. That means removing any print statements and any use of sys.stdout.write, as well as any code that calls those.
The reason this is happening is that you're piping a nonzero amount of output from your Python script to something which never reads from standard input. This is not unique to the : command; you can get the same result by piping to any command which doesn't read standard input, such as
python testscript.py | cd .
Or for a simpler example, consider a script printer.py containing nothing more than
print 'abcde'
Then
python printer.py | python printer.py
will produce the same error.
When you pipe the output of one program into another, the output produced by the writing program gets backed up in a buffer, and waits for the reading program to request that data from the buffer. As long as the buffer is nonempty, any attempt to close the writing file object is supposed to fail with an error. This is the root cause of the messages you're seeing.
The specific code that triggers the error is in the C language implementation of Python, which explains why you can't catch it with a try/except block: it runs after the contents of your script have finished processing. Basically, while Python is shutting itself down, it attempts to close stdout, but that fails because there is still buffered output waiting to be read. So Python tries to report this error as it normally would, but sys.excepthook has already been removed as part of the finalization procedure, so that fails. Python then tries to print a message to sys.stderr, but that has already been deallocated, so again it fails. The reason you see the messages on the screen at all is that the Python code contains a contingency fprintf to write some output directly to the file pointer, even if Python's output object doesn't exist.
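A corollary is that you can surface the failure while your code is still running, where a try/except can see it, by flushing inside the loop. A minimal sketch:

import errno
import sys

try:
    for n in range(20):
        print n
        sys.stdout.flush()  # force the write now, so EPIPE is raised here
except IOError as e:
    if e.errno == errno.EPIPE:
        sys.exit(0)  # the reader went away; exit quietly
    raise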
Technical details
For those interested in the details of this procedure, let's take a look at the Python interpreter's shutdown sequence, which is implemented in the Py_Finalize function of pythonrun.c.
After invoking exit hooks and shutting down threads, the finalization code calls PyImport_Cleanup to finalize and deallocate all imported modules. The next-to-last task performed by this function is removing the sys module, which mainly consists of calling _PyModule_Clear to clear all the entries in the module's dictionary - including, in particular, the standard stream objects (the Python objects) such as stdout and stderr.
When a value is removed from a dictionary or replaced by a new value, its reference count is decremented using the Py_DECREF macro. Objects whose reference count reaches zero become eligible for deallocation. Since the sys module holds the last remaining references to the standard stream objects, when those references are unset by _PyModule_Clear, they are then ready to be deallocated.[1]
Deallocation of a Python file object is accomplished by the file_dealloc function in fileobject.c. This first invokes the Python file object's close method using the aptly-named close_the_file function:
ret = close_the_file(f);
For a standard file object, close_the_file(f) delegates to the C fclose function, which sets an error condition if there is still data to be written to the file pointer. file_dealloc then checks for that error condition and prints the first message you see:
if (!ret) {
    PySys_WriteStderr("close failed in file object destructor:\n");
    PyErr_Print();
}
else {
    Py_DECREF(ret);
}
After printing that message, Python then attempts to display the exception using PyErr_Print. That delegates to PyErr_PrintEx, and as part of its functionality, PyErr_PrintEx attempts to access the Python exception printer from sys.excepthook.
hook = PySys_GetObject("excepthook");
This would be fine if done in the normal course of a Python program, but in this situation, sys.excepthook has already been cleared.[2] Python checks for this error condition and prints the second message as a notification.
if (hook && hook != Py_None) {
    ...
} else {
    PySys_WriteStderr("sys.excepthook is missing\n");
    PyErr_Display(exception, v, tb);
}
After notifying us about the missing excepthook, Python then falls back to printing the exception info using PyErr_Display, which is the default method for displaying a stack trace. The very first thing this function does is try to access sys.stderr.
PyObject *f = PySys_GetObject("stderr");
In this case, that doesn't work because sys.stderr has already been cleared and is inaccessible.[3] So the code invokes fprintf directly to send the third message to the C standard error stream.
if (f == NULL || f == Py_None)
    fprintf(stderr, "lost sys.stderr\n");
Interestingly, the behavior is a little different in Python 3.4+ because the finalization procedure now explicitly flushes the standard output and error streams before builtin modules are cleared. This way, if you have data waiting to be written, you get an error that explicitly signals that condition, rather than an "accidental" failure in the normal finalization procedure. Also, if you run
python printer.py | python printer.py
using Python 3.4 (after putting parentheses on the print statement of course), you don't get any error at all. I suppose the second invocation of Python may be consuming standard input for some reason, but that's a whole separate issue.
[1] Actually, that's a lie. Python's import mechanism caches a copy of each imported module's dictionary, which is not released until _PyImport_Fini runs, later in the implementation of Py_Finalize, and that's when the last references to the standard stream objects disappear. Once the reference count reaches zero, Py_DECREF deallocates the objects immediately. But all that matters for the main answer is that the references are removed from the sys module's dictionary and then deallocated sometime later.
[2] Again, this is because the sys module's dictionary is cleared completely before anything is really deallocated, thanks to the attribute caching mechanism. You can run Python with the -vv option to see all the module's attributes being unset before you get the error message about closing the file pointer.
[3] This particular piece of behavior is the only part that doesn't make sense unless you know about the attribute caching mechanism mentioned in the previous footnotes.
I ran into this sort of issue myself today and went looking for an answer. I think a simple workaround here is to ensure you flush stdout first, so the broken-pipe error is raised inside the script, where the try...except can catch it, rather than during interpreter shutdown. For example:
--- a/testscript.py
+++ b/testscript.py
@@ -9,5 +9,6 @@ sys.excepthook = excepthook
 try:
     for n in range(20):
         print n
+    sys.stdout.flush()
 except:
     pass
Then with this script nothing happens, as the exception (IOError: [Errno 32] Broken pipe) is suppressed by the try...except.
$ python testscript.py | :
$
Your program throws an exception that cannot be caught using a try/except block. To catch it, override the function sys.excepthook:
import sys
sys.excepthook = lambda *args: None
From documentation:
sys.excepthook(type, value, traceback)
When an exception is raised and uncaught, the interpreter calls
sys.excepthook with three arguments, the exception class, exception
instance, and a traceback object. In an interactive session this
happens just before control is returned to the prompt; in a Python
program this happens just before the program exits. The handling of
such top-level exceptions can be customized by assigning another
three-argument function to sys.excepthook.
Illustrative example:
import sys
import logging
import traceback

def log_uncaught_exceptions(exception_type, exception, tb):
    logging.critical(''.join(traceback.format_tb(tb)))
    logging.critical('{0}: {1}'.format(exception_type, exception))

sys.excepthook = log_uncaught_exceptions
I realize that this is an old question, but I found it in a Google search for the error. In my case it was a coding error. One of my last statements was:
print "Good Bye"
The solution was simply fixing the syntax to:
print ("Good Bye")
[Raspberry Pi Zero, Python 2.7.9]
I have inherited some code which is periodically (randomly) failing due to an Input/Output error being raised during a call to print. I am trying to determine the cause of the exception being raised (or at least, better understand it) and how to handle it correctly.
When executing the following line of Python (in a 2.6.6 interpreter, running on CentOS 5.5):
print >> sys.stderr, 'Unable to do something: %s' % command
The exception is raised (traceback omitted):
IOError: [Errno 5] Input/output error
For context, this is generally what the larger function is trying to do at the time:
from subprocess import Popen, PIPE
import sys

def run_commands(commands):
    for command in commands:
        try:
            out, err = Popen(command, shell=True, stdout=PIPE, stderr=PIPE).communicate()
            print >> sys.stdout, out
            if err:
                raise Exception('ERROR -- an error occurred when executing this command: %s --- err: %s' % (command, err))
        except:
            print >> sys.stderr, 'Unable to do something: %s' % command

run_commands(["ls", "echo foo"])
The >> syntax is not particularly familiar to me; it's not something I use often, and I understand that it is perhaps the least preferred way of writing to stderr. However, I don't believe the alternatives would fix the underlying problem.
From the documentation I have read, IOError 5 is often misused, and somewhat loosely defined, with different operating systems using it to cover different problems. The best I can see in my case is that the python process is no longer attached to the terminal/pty.
As best I can tell, nothing is disconnecting the process from the stdout/stderr streams - the terminal is still open, for example, and everything 'appears' to be fine. Could it be caused by the child process terminating in an unclean fashion? What else might be a cause of this problem - or what other steps could I introduce to debug it further?
In terms of handling the exception, I can obviously catch it, but I'm assuming this means I won't be able to print to stdout/stderr for the remainder of execution? Can I reattach to these streams somehow - perhaps by resetting sys.stdout to sys.__stdout__, etc.? In this case, not being able to write to stdout/stderr is not considered fatal, but if it is an indication of something starting to go wrong I'd rather bail early.
I guess ultimately I'm at a bit of a loss as to where to start debugging this one...
I think it has to do with the terminal the process is attached to. I got this error when I ran a Python process in the background and closed the terminal in which I had started it:
$ myprogram.py
Ctrl-Z
$ bg
$ exit
The problem was that I had started a non-daemonized process on a remote server and logged out (closing the terminal session). A solution was to start a screen/tmux session on the remote server and start the process within that session. Detaching the session and logging out then keeps the terminal associated with the process. This works at least in the *nix world.
I had a very similar problem. I had a program that was launching several other programs using the subprocess module. Those subprocesses would then print output to the terminal. What I found was that when I closed the main program, it did not terminate the subprocesses automatically (as I had assumed); rather, they kept running. So if I terminated the main program and then the terminal it had been launched from*, the subprocesses no longer had a terminal attached to their stdout, and would throw an IOError. Hope this helps you.
*NB: it must be done in this order. If you just kill the terminal, that (for some reason) kills both the main program and the subprocesses.
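If you want the children to die with the main program instead of being orphaned, one approach is to keep the Popen handles and terminate them from an atexit hook. A minimal sketch (the spawn/_cleanup names are illustrative, and atexit will not run if the parent is killed with SIGKILL):

import atexit
import subprocess

children = []

def spawn(cmd):
    p = subprocess.Popen(cmd)
    children.append(p)
    return p

def _cleanup():
    # Terminate any child still running when the parent exits.
    for p in children:
        if p.poll() is None:
            p.terminate()

atexit.register(_cleanup)

spawn(["sleep", "60"])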
I just got this error because the filesystem I was writing files to had run out of space. Not sure if this is at all applicable to your situation.
I'm new here, so please forgive if I slip up a bit when it comes to the code detail.
Recently I was able to figure out what causes the I/O error of the print statement when the terminal associated with the run of the Python script is closed.
It is because the string to be printed to stdout/stderr is too long. In this case, the "out" string is the culprit.
To fix this problem (without having to keep the terminal open while running the Python script), simply read the "out" string line by line, and print it line by line, until the end of the string is reached. Something like:
# assuming "out" is a file-like object (e.g. p.stdout) rather than a plain string
while True:
    ln = out.readline()
    if not ln:
        break
    print ln.rstrip("\n")  # print without the trailing newline
The same problem occurs if you print an entire list of strings to the screen. Simply print the list one item at a time.
Hope that helps!
The problem is that you've closed the stdout pipe which Python is attempting to write to when print() is called.
This can be caused by running a script in the background using & and then closing the terminal session (i.e. closing stdout):
$ python myscript.py &
$ exit
One solution is to redirect stdout to a file when running in the background.
Example
$ python myscript.py > /var/log/myscript.log 2>&1 &
$ exit
No errors on print()
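The same idea can be applied from inside the script: if stdout is not a terminal at startup, send output to a log file instead. A minimal sketch (the log path is illustrative, and this only helps when the script starts detached, not when the terminal disappears later):

import sys

if not sys.stdout.isatty():
    # Not attached to a terminal: write to a log file instead.
    sys.stdout = open("/tmp/myscript.log", "a")

print "this goes to the terminal or to the log file"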
It can happen when your shell crashes while print is trying to write data to it.
In my case, I just restarted the service and the issue disappeared; I don't know why.
My issue was the same OSError Input/output error, with Odoo.
After I restarted the service, it disappeared.
I've got an Apache2/web2py server running using the wsgi handler functionality. Within one of the controllers, I am trying to run an external executable to perform some processing on 2 files.
My approach to this is to use the subprocess module to kick off the executable. I have simplified the code to a bare-bones implementation with little success.
from subprocess import *
p = Popen(("echo", "Hello"), shell=False)
ret = p.wait()
print "Process ended with status %s" % ret
When running the above code on its own (creating a new file and running it via the Python command line), it works exactly as expected.
However, as soon as I place the exact same code into my web2py controller, the external process stops working. Instead of the process returning with code 0 as is expected in the above example, it always returns -6 and "Hello" is not printed to stdout.
After doing some digging, I found that negative results from p.wait() imply that a signal caused the process to end abnormally. And, according to some docs I found, -6 corresponds to the SIGABRT signal.
I would have expected this signal to be a result of some poorly executed code in my child process. However, since this is only running echo (and since it works outside of web2py) I have my doubts that the child process is signalling itself.
Is there some web2py limitation/configuration that causes Popen() requests to always fail? If so, how can I modify my logic so that the controller (or whatever) is actually able to spawn this external process?
** EDIT: It looks like web2py applications may not like the subprocess module. According to a reply in the web2py mailing list:
"You should not use subprocess in a web2py application (if you really need too, look into the admin/controllers/shell.py) but you can use it in a web2py program running from shell (web2py.py -R myprogram.py)."
I will be checking out some options based on the note here and see if any solution presents itself.
In the end, the best solution I was able to come up with involved setting up a simple XML-RPC server and calling the functions from that:
my_server.py
#my_server.py
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
from subprocess import *

def echo_fn():
    p = Popen(("echo", "hello"), shell=False)
    ret = p.wait()
    print "Process ended with status %s" % ret
    return True  # RPC server doesn't like to return None

def main():
    server = SimpleXMLRPCServer(("localhost", 12345), SimpleXMLRPCRequestHandler)
    server.register_function(echo_fn, "echo_fn")
    while True:
        server.handle_request()

if __name__ == "__main__":
    main()
web2py_controller.py
#web2py_controller.py
import xmlrpclib

def run_echo():
    proc_srvr = xmlrpclib.ServerProxy("http://localhost:12345")
    proc_srvr.echo_fn()
I'll be honest: I'm not a Python or SimpleXMLRPCServer guru, so the overall code may not be up to best-practice standards. However, going this route did allow me to, in effect, call a subprocess from a controller in web2py.
(Note, this was a quick and dirty simplification of the code that I have in my project. I have not validated it is in a working state, so it may require some tweaks.)