Trigger failure in Jenkins when anything in a Python script fails - python

I have a Python script running in the build phase of Jenkins, in the Execute Shell section.
The problem is that if the script fails, the build still shows as successful. I did check, and Python uses something similar to the return code of a shell command (the one you read with $?), but I can't figure out where to put the call so that "any" failure in the Python script triggers that return code.
import sys
# ... code here
# ... functions
sys.exit(-1)
I need to call sys.exit(-1), but where should it go in the Python code? So far I can only handle this with try blocks, putting sys.exit(-1) in the except part, but that adds a lot of code, since I have a lot of functions in the script.
Is there a single, global place I can use to trigger the failure, so that the Jenkins job fails?

import sys

def main():
    try:
        do_the_work()
        sys.exit(0)  # success
    except:
        # insert code to log and debug the problem
        sys.exit(-1)
In words: if do_the_work returns, call sys.exit(0). If do_the_work raises any exception that it does not handle itself, call sys.exit(-1). Inside do_the_work, only return on success; raise an exception for any unrecoverable error state, for example:
class DoSomethingError(Exception):
    pass

# ...
ok = do_something()
if not ok:
    print("do_something error return")
    raise DoSomethingError
Logging and debugging: search Stack Overflow for Python answers on how to obtain and log an error traceback from a caught exception.
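For reference, here is a minimal sketch of that logging step, reusing the do_the_work placeholder from above (the logger name and message are arbitrary). Logger.exception records the message together with the full traceback of the exception currently being handled:

import logging
import sys

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("build")

def main():
    try:
        do_the_work()
        sys.exit(0)  # success
    except Exception:
        # log.exception() writes the message plus the traceback of the active exception
        log.exception("Build step failed")
        sys.exit(-1)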

This question is a duplicate; please see the similar questions below, which have already been answered.
How/When does Execute Shell mark a build as failure in Jenkins?
Jenkins Build Script exits after Google Test execution

How to avoid Python Subprocess stopping execution

I have a Python program that processes a lot of files, and one step is done through a .jar file.
I currently have something like this:
import subprocess

for row in rows:
    try:
        subprocess.check_call(f'java -jar ffdec/ffdec.jar -export png "{out_dir}/" "{row[0]}.swf"', stdout=subprocess.DEVNULL)
    except (OSError, subprocess.SubprocessError, subprocess.CalledProcessError):
        print(f"Error on {row[0]}")
        continue
That works fine for executing the OS command (I'm on Windows 10) without stopping on errors.
However, there is one specific error that stops the execution of my Python program.
I think it is because the .jar file doesn't really stop and keeps running in the background, preventing Python from continuing.
Is there a way to call a command in Python and run it asynchronously, or skip it after a timeout of 20 seconds?
I could also write a Java program to run that part of the process, but for convenience I'd prefer to keep everything in Python.
Just in case, here is the error that stops my program (all others are properly caught by the try/except):
févr. 25, 2021 8:05:00 AM com.jpexs.decompiler.flash.console.ConsoleAbortRetryIgnoreHandler handle
GRAVE: Error occured
java.util.EmptyStackException
at java.util.Stack.peek(Unknown Source)
at com.jpexs.decompiler.flash.exporters.commonshape.SVGExporter.addUse(SVGExporter.java:230)
at com.jpexs.decompiler.flash.timeline.Timeline.toSVG(Timeline.java:1043)
at com.jpexs.decompiler.flash.exporters.FrameExporter.lambda$exportFrames$0(FrameExporter.java:216)
at com.jpexs.decompiler.flash.RetryTask.run(RetryTask.java:41)
at com.jpexs.decompiler.flash.exporters.FrameExporter.exportFrames(FrameExporter.java:220)
at com.jpexs.decompiler.flash.console.CommandLineArgumentParser.parseExport(CommandLineArgumentParser.java:2298)
at com.jpexs.decompiler.flash.console.CommandLineArgumentParser.parseArguments(CommandLineArgumentParser.java:891)
at com.jpexs.decompiler.flash.gui.Main.main(Main.java:1972)
After digging deeper into the subprocess documentation, I found a parameter called timeout:
subprocess.check_call('...', stdout=subprocess.DEVNULL, timeout=20)
That does the job for me.
Documentation for timeout
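As a sketch of how the timeout could be combined with the existing loop (reusing rows and out_dir from the question; the 20-second value is the one mentioned above), the TimeoutExpired exception raised when the limit elapses can be caught so the loop simply skips that file. Note that TimeoutExpired must be caught before SubprocessError, since it is a subclass of it:

import subprocess

for row in rows:
    try:
        subprocess.check_call(
            f'java -jar ffdec/ffdec.jar -export png "{out_dir}/" "{row[0]}.swf"',
            stdout=subprocess.DEVNULL,
            timeout=20,  # kill the export and raise TimeoutExpired after 20 seconds
        )
    except subprocess.TimeoutExpired:
        print(f"Timeout on {row[0]}")
    except (OSError, subprocess.SubprocessError):
        # SubprocessError also covers CalledProcessError
        print(f"Error on {row[0]}")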

Airflow Docker Operator returning success when the Python code inside fails

I'm a pretty new dev in the Docker world and at this point I really need help.
I have a Python script inside a Docker container. When Python exits with sys.exit(-1), the Docker container still exits with success, and consequently the task is marked as successful in Airflow too, hiding the real errors and tracebacks.
How can I make the container exit with an error?
I know this question is old, but I just figured out the answer myself.
First, remove any old images that Airflow might be using. List them with docker images, then remove them with docker rmi <image_id>.
Now, in your code, add a sys.exit(1) in a try/except around the main part of the code that is called by the DAG.
First, set remove=True on your Docker container so it's automatically removed when it finishes running. Second, use Python's try/except both in your Python code and in your DAG to catch the exceptions. And last, raise AirflowException in the DAG so that Airflow can detect the issue and mark the task as failed.
In your DAG:
import docker
from airflow.exceptions import AirflowException

try:
    client = docker.from_env()
    response = client.containers.run(image, command, remove=True, ...)
    ...
except Exception as e:
    raise AirflowException(e)
In your Python code:
import logging
import sys

try:
    ...
except Exception as e:
    logging.exception(e)
    sys.exit(1)  # non-zero exit code so the container (and Airflow) see the failure

Batch file not receiving the ERRORLEVEL from the Python script it launches

I have a batch file running a Python script.
This Python script always ends by calling sys.exit(code), no matter what, with code being an error code.
The batch file doesn't get my error code and instead always reads 0 with the following:
..\..\STS\Python\python.exe Version_code_ID.py %*
echo Error level = %ERRORLEVEL%
This installation of Python is version 3.7.1.
I know for sure that my Python code exited with codes other than 0, thanks to my logger (and to me deliberately causing errors for testing purposes).
For example, I had it exit with 1 and 12, both attempts resulting in the batch file reading 0.
Just in case, here is also the Python function I use to exit:
def exit(code):
    if code > 0:
        log.error("The computing module will shutdown with the following error code : " + str(code))
    elif code == 0:
        log.info("Execution sucessfull")
    log.info("STOP of VERSION_CODE_ID Computing Module")
    log.removeHandler(stream_handler)
    log.removeHandler(file_handler)
    stream_handler.close()
    file_handler.close()
    sys.exit(code)
log is just the name of my logging.handlers logger.
Any idea what might be causing this problem?
It turns out the problem was that the bulk of my Python script was inside this try clause:
try:
    ...  # All of my code
except SystemExit:
    ()
except Exception as ex:
    fun.log.error('ERROR: Unknown exception: ' + repr(ex))
I originally added the
except SystemExit:
    ()
because I thought SystemExit showing up as an "exception" in the console was a problem, but it's not.
In short, the solution was to remove that except clause.
The reason seems to be that catching the SystemExit, as the name implies, prevents it from propagating out to the batch file, which then believes no error code was set.
Surprisingly to me, except Exception as ex: doesn't catch the SystemExit raised by sys.exit(). That's great in my case, since I still need it to log unknown exceptions.
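For context, here is a small sketch of why that works: sys.exit() raises SystemExit, which inherits from BaseException rather than Exception, so an except Exception clause lets it propagate while still catching ordinary errors:

import sys

try:
    sys.exit(12)
except Exception as ex:
    # Never reached: SystemExit derives from BaseException, not Exception
    print("caught:", repr(ex))
# The SystemExit propagates, the interpreter exits with code 12,
# and the calling batch file sees ERRORLEVEL 12.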

How to catch an assertion (raised in C++) at runtime in Python

I want to embed C++ in a Python application. I don't want to use the Boost library.
If the C++ function hits an assertion, I want to catch it and print the error in my Python application, or get some detailed information such as the line number in the Python script that caused the error. The main thing is: "I want to proceed further in the Python execution flow".
How can I do this? I can't find any functions to get detailed assertion information in the Python API or in C++.
C++ Code
#include <cassert>

void sum(int iA, int iB)
{
    assert(iA + iB > 10);
}
Python Code
from ctypes import *

mydll = WinDLL("C:\\Users\\cppwrapper.dll")
try:
    mydll.sum(10, 3)
except:
    print "exception occurred"
    # Control should go to the user when an exception occurs: if they answer yes,
    # continue with the code below, otherwise abort execution.
    # I need help with this part as well.

import re
for test_string in ['555-1212', 'ILL-EGAL']:
    if re.match(r'^\d{3}-\d{4}$', test_string):
        print test_string, 'is a valid US local phone number'
    else:
        print test_string, 'rejected'
Thanks in advance.
This can't really be done in exactly the way you describe (as was also pointed out in the comments).
Once the assertion fails and SIGABRT is sent to the process, what happens next is in the operating system's hands, and generally the process will be killed.
The simplest way to recover from a process being killed is to have it launched by an external process, such as a secondary Python script or a shell script. It's easy in bash scripting, for instance, to launch another process, check whether it terminates normally or is aborted, log it, and continue.
For instance, here's some bash code that executes the command line $command, logs the standard error channel to a log file, checks the return code (which will be 128 plus the signal number, i.e. 134 for a SIGABRT) and does something in the various cases:
$command 2> error.log
error_code="$?"
if check_errs $error_code; then
    # Do something...
    return 0
else
    # Do something else...
    return 1
fi
Where check_errs is some other subroutine that you would write.
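If you would rather keep the supervisor in Python, here is a rough equivalent as a sketch: it assumes the ctypes call is moved into a small, hypothetical helper script call_sum.py, which is launched with subprocess. On POSIX a child killed by a signal reports a negative return code; on Windows the convention differs (MSVC's abort() typically exits with code 3), so the check would need adjusting there.

import signal
import subprocess

# call_sum.py is a hypothetical helper that loads the library and calls sum();
# running it as a separate process means an abort only kills that process.
result = subprocess.run(["python", "call_sum.py"], stderr=subprocess.PIPE)

if result.returncode == 0:
    print("sum() completed normally")
elif result.returncode == -signal.SIGABRT:
    # On POSIX, a negative return code means the child was killed by that signal
    print("sum() aborted on a failed assertion:", result.stderr.decode())
else:
    print("sum() exited with code", result.returncode)

# Execution continues here either way, so the rest of the Python flow can run.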

IOError Input/Output Error When Printing

I have inherited some code which is periodically (randomly) failing due to an Input/Output error being raised during a call to print. I am trying to determine the cause of the exception being raised (or at least, better understand it) and how to handle it correctly.
When executing the following line of Python (in a 2.6.6 interpreter, running on CentOS 5.5):
print >> sys.stderr, 'Unable to do something: %s' % command
The exception is raised (traceback omitted):
IOError: [Errno 5] Input/output error
For context, this is generally what the larger function is trying to do at the time:
from subprocess import Popen, PIPE
import sys

def run_commands(commands):
    for command in commands:
        try:
            out, err = Popen(command, shell=True, stdout=PIPE, stderr=PIPE).communicate()
            print >> sys.stdout, out
            if err:
                raise Exception('ERROR -- an error occurred when executing this command: %s --- err: %s' % (command, err))
        except:
            print >> sys.stderr, 'Unable to do something: %s' % command

run_commands(["ls", "echo foo"])
The >> syntax is not particularly familiar to me; it's not something I use often, and I understand it is perhaps the least preferred way of writing to stderr. However, I don't believe the alternatives would fix the underlying problem.
From the documentation I have read, IOError 5 is often misused and somewhat loosely defined, with different operating systems using it to cover different problems. The best I can see in my case is that the Python process is no longer attached to the terminal/pty.
As best I can tell, nothing is disconnecting the process from the stdout/stderr streams; the terminal is still open, for example, and everything 'appears' to be fine. Could it be caused by the child process terminating in an unclean fashion? What else might be a cause of this problem, or what other steps could I introduce to debug it further?
In terms of handling the exception, I can obviously catch it, but I'm assuming this means I won't be able to print to stdout/stderr for the remainder of execution? Can I reattach to these streams somehow, perhaps by resetting sys.stdout to sys.__stdout__ etc.? In this case not being able to write to stdout/stderr is not considered fatal, but if it is an indication of something starting to go wrong I'd rather bail early.
I guess ultimately I'm at a bit of a loss as to where to start debugging this one...
I think it has to do with the terminal the process is attached to. I got this error when I ran a Python process in the background and closed the terminal in which I had started it:
$ myprogram.py
Ctrl-Z
$ bg
$ exit
The problem was that I had started a non-daemonized process on a remote server and then logged out (closing the terminal session). A solution was to start a screen/tmux session on the remote server and start the process within that session. Then detaching the session and logging out keeps a terminal associated with the process. This works at least in the *nix world.
I had a very similar problem. I had a program that launched several other programs using the subprocess module, and those subprocesses would then print output to the terminal. What I found was that when I closed the main program, it did not terminate the subprocesses automatically (as I had assumed); rather, they kept running. So if I terminated the main program and then the terminal it had been launched from*, the subprocesses no longer had a terminal attached to their stdout and would throw an IOError. Hope this helps you.
*NB: it must be done in this order. If you just kill the terminal, (for some reason) that kills both the main program and the subprocesses.
I just got this error because the disk I was writing files to had run out of space. Not sure if this is at all applicable to your situation.
I'm new here, so please forgive me if I slip up a bit when it comes to the code details.
Recently I was able to figure out what causes the I/O error in the print statement when the terminal associated with the running Python script is closed.
It is because the string to be printed to stdout/stderr is too long. In this case, the "out" string is the culprit.
To fix this problem (without having to keep the terminal open while running the Python script), simply read the "out" string line by line and print it line by line, until the end of the "out" string is reached. Something like:
for ln in out.splitlines():
    print ln  # print one line at a time instead of the whole string at once
The same problem occurs if you print an entire list of strings to the screen; simply print the list one item at a time.
Hope that helps!
The problem is that you've closed the stdout pipe that Python is attempting to write to when print() is called.
This can be caused by running a script in the background using & and then closing the terminal session (i.e. closing stdout):
$ python myscript.py &
$ exit
One solution is to redirect stdout to a file when running in the background.
Example
$ python myscript.py > /var/log/myscript.log 2>&1 &
$ exit
No errors on print()
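On the Python side, a possible complement (just a sketch; the fallback log path is a placeholder) is to catch the IOError once and rebind sys.stdout/sys.stderr to a file, so later writes keep working even after the terminal has gone away:

import sys

def safe_write(msg):
    # Try the normal stderr first; if the terminal/pty has gone away,
    # rebind both streams to a log file (placeholder path) and retry.
    try:
        sys.stderr.write(msg + "\n")
    except IOError:
        log_file = open("/tmp/myscript.log", "a")
        sys.stdout = sys.stderr = log_file
        sys.stderr.write(msg + "\n")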
It can also happen when your shell crashes while print is trying to write data to it.
In my case the issue was the same OSError Input/output error, with Odoo. I just restarted the service and the issue disappeared; I don't know why.
