Is there a way to have Python print a statement when a script finishes successfully?
Example code would be something like:
if code_variable == 0:
    print("Script ran successfully")
else:
    print("There was an error")
How could I pass the value of the exit code into a variable (e.g. code_variable)?
I feel like this would be a nice thing to include in a script for other users.
Thanks.
You can do this from the shell -- e.g. in Bash:
python python_code.py && echo "script exited successfully" || echo "there was an error."
You can't have a program write something like this for itself, because it doesn't know its exit code until it has exited -- at which point it isn't running any longer to report the error :-).
There are other things you can do to proxy this behavior from within the process itself:
try:
    main()
except SystemExit as ext:
    if ext.code:
        print("Error")
    else:
        print("Success")
    raise SystemExit(ext.code)
else:
    print("Success")
However, this doesn't help if somebody uses os._exit -- and we're only catching SystemExit (what sys.exit raises) here, not any other exception that could be causing a non-zero exit status.
If your script simply executes straight from top to bottom, just put a print at the end: if there's an error, Python stops the script and your print won't be executed. The different case is when you use try to manage exceptions yourself.
Or make yourself a script that runs python script.py inside a try, so its except clause can log the exception to a file or wherever you'd like it stored or shown, as in the sketch below.
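A minimal sketch of that wrapper idea, assuming the target script is called script.py and using subprocess.check_call (both are illustrative choices, not from the original answer):
import subprocess

try:
    subprocess.check_call(["python", "script.py"])
    print("Script ran successfully")
except subprocess.CalledProcessError as err:
    # Log the failure wherever you like; here it goes to a file.
    with open("script_errors.log", "a") as f:
        f.write("script.py failed with exit code %d\n" % err.returncode)
    print("There was an error")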
Related
I have a parent shell script that calls a Python script. To get notified in case the Python script fails, I have added a TRAP in the shell script. But somehow the Python script is getting killed/stopped for some reason without going through the TRAP function.
Please help with the scenarios in which a script can behave in such a manner.
Shell Script (Parent Process):
parent.sh
set -e
on_exit(){
    if [ "$?" -eq 0 ]
    then
        echo "Success"
    else
        echo "Failure"
    fi
}
trap on_exit EXIT
py_script=$(python child.py)
Python Script (Child Process): child.py
import sys
import time

def func():
    isDone = "false"
    while isDone == "false":
        print("Waiting")
        try:
            pass  # GET request which sets isDone="true" on a specific value
        except Exception as e:
            print("Something went wrong")
            sys.exit(1)
        time.sleep(10)
    print("Completed")
The Python script never prints "Something went wrong".
Is it possible that Linux is killing the process in the background if it runs for around 12 hours?
EDIT:
I investigated further and found that the Python process was still running in the background without doing anything. When I killed it manually, it triggered the notification.
But the question remains: in which scenario can a process get into a state where it stops executing further lines without being killed? I am not aware of any such pending state.
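One plausible scenario (an assumption, not confirmed by the question) is that the GET request blocks forever because it was issued without a timeout; a sketch of guarding against that, assuming the requests library:
import sys
import requests

try:
    # Without timeout=, requests.get can block indefinitely on an unresponsive
    # server, leaving the process alive but never executing another line.
    response = requests.get("http://example.com/status", timeout=30)  # URL is hypothetical
    print(response.status_code)
except Exception:
    print("Something went wrong")
    sys.exit(1)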
I have a batch file running a python script.
This python script always ends using sys.exit(code) no matter what, with code being an error code.
The batch file doesn't get my error code and instead always reads 0 with the following:
..\..\STS\Python\python.exe Version_code_ID.py %*
echo Error level = %ERRORLEVEL%
This installation of Python is 3.7.1.
I know for sure my Python code exited with codes other than 0 thanks to my logger (and me voluntarily causing errors for testing purposes).
For example I had it exit with 1 and 12, both attempts resulting in the batch reading 0.
Just in case, here is also the python function I use to exit:
def exit(code):
    if code > 0:
        log.error("The computing module will shutdown with the following error code : " + str(code))
    elif code == 0:
        log.info("Execution successful")
    log.info("STOP of VERSION_CODE_ID Computing Module")
    log.removeHandler(stream_handler)
    log.removeHandler(file_handler)
    stream_handler.close()
    file_handler.close()
    sys.exit(code)
log is just the name of my logging.handlers logger.
Any idea what might be causing this problem?
Turns out the problem was that the bulk of my python script was in this try clause:
try:
    # All of my code
except SystemExit:
    ()
except Exception as ex:
    fun.log.error('ERROR: Unknown exception: ' + repr(ex))
I originally added the
except SystemExit:
()
because I thought SystemExit showing up as an "exception" in the console was a problem, but it's not.
In short the solution was to remove that except.
The reason seems to be that catching the SystemExit, as the name implies, keeps it from being passed on to the batch, which then believes no error code was sent.
Surprisingly for me, except Exception as ex: doesn't catch the SystemExit raised by sys.exit(). That's great in my case since I still need it to log unknown exceptions.
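That behaviour is because SystemExit derives from BaseException rather than Exception; a small sketch illustrating it (the exit code 3 is arbitrary):
import sys

try:
    sys.exit(3)
except Exception:
    # Never reached: SystemExit inherits from BaseException, not Exception.
    print("caught as a generic Exception")
except SystemExit as e:
    print("caught SystemExit with code", e.code)
    raise  # re-raise so the interpreter still exits with code 3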
I have a shell script calling Python inside it.
#! /bin/bash
shopt -s extglob
echo "====test===="
~/.conda/envs/my_env/bin/python <<'EOF'
import sys
import os

try:
    print("inside python")
    x = 2/0
except Exception as e:
    print("Exception: %s" % e)
    sys.exit(2)

print("at the end of python")
EOF
echo "end of script"
If I execute this, the line below still gets printed:
"end of script"
I want to exit the whole shell script from the exception block of the Python code, so that execution never continues past the EOF.
Is there a way to create and kill a subprocess in the except block above, that will kill the entire shell script?
Can I spawn a dummy subprocess and kill it inside the exception block there by killing the entire shell script?
Any examples would be helpful.
Thanks in advance.
The whole <<'EOF' ... EOF block gets executed within the Python runtime, so exiting from it doesn't affect the bash script. You'll need to collect the exit status and check it after the Python execution if you want to stop the bash script from going any further, i.e.:
#!/bin/bash
~/.conda/envs/my_env/bin/python <<'EOF'
import sys
sys.exit(0x01) # use any exit code from 0-0xFF range, comment out for a clean exit
print("End of the Python script that will not execute without commenting out the above.")
EOF
exit_status=$? # store the exit status for later use
# now lets check the exit status and see if python returned a non-zero exit status
if [ $exit_status -ne 0 ]; then
    echo "Python exited with a non-zero exit status, abort!"
    exit $exit_status # exit the bash script with the same status
fi
# continue as usual...
echo "All is good, end of script"
From the shell script you have 2 options:
set -e: all errors quit the script
check the python subcommand's return code, abort if non-zero
(maybe more details here: Aborting a shell script if any command returns a non-zero value?)
Now, if you don't want to change the handling from your shell script, you could get the parent process of the python script and kill it:
except Exception as e:
    import os, signal, sys
    print("Exception: %s" % e)
    os.kill(os.getppid(), signal.SIGTERM)
    sys.exit(2)
If you need this on Windows, this doesn't work as-is (os.kill is not available there in the same way), so you have to adapt it to invoke taskkill:
subprocess.call(["taskkill","/F","/PID",str(os.getppid())])
Now I would say that killing the parent process is bad practice. Unless you don't control the code of this parent process, you should try to handle the exit gracefully.
One way to kill the entire script could be to save the shell's PID and then have Python run a kill command on that PID when the exception happens. With os imported, it would be something along the lines of:
# In the shell script, before calling Python:
export PARENT_PID=$$
# ... then, inside the Python exception handler:
import os
os.system("kill -9 " + os.environ["PARENT_PID"])
I have a shell script TestNode.sh. This script has content like this:
port_up=$(python TestPorts.py)
python TestRPMs.py
Now, I want to capture the value returned by these scripts.
TestPorts.py
def CheckPorts():
    if PortWorking(8080):
        print "8080 working"
        return "8080"
    elif PortWorking(9090):
        print "9090 working"
        return "9090"
But the answers I checked are not working for me. The print output is going into the variable port_up, whereas I want the print to go to the console and port_up to get the value from the return statement. Is there a way to achieve this?
Note: I don't wish to use sys.exit(). Is it possible to achieve the same without it?
but I wanted that print should print on the console and the variable port_up should get the value from return statement.
Then don't capture the output. Instead do:
python TestPorts.py
port_up=$? # exit status of the last command
python TestRPMs.py
You could do:
import sys

def CheckPorts():
    if PortWorking(8080):
        sys.stderr.write("8080 working\n")
        print 8080
But then I am not very happy about printing that status message to stderr either.
Alternatively, you could skip printing that "8080 working" message in python script and print it from the shell script.
def CheckPorts():
    if PortWorking(8080):
        return "8080"
and:
port_up=$(python TestPorts.py)
echo "$port_up working"
python TestRPMs.py
To return an exit code from a Python script you can use sys.exit(); exit() may also work. In the Bash (and similar) shell, the exit code of the previous command can be found in $?.
However, the Linux shell exit codes are 8 bit unsigned integers, i.e. in the range 0-255, as mentioned in this answer. So your strategy isn't going to work.
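For illustration, a hedged sketch of why: the status the shell sees wraps modulo 256, so a large value like 8080 comes back mangled.
import sys

# Exit statuses visible to the shell are 8-bit, so this is reported as 8080 % 256 == 144.
sys.exit(8080)
# In bash afterwards:  echo $?   ->  144, not 8080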
Perhaps you can print "8080 working" to stderr or a logfile and print "8080" to stdout so you can capture it with $().
I'm writing an IRC bot in Python; due to its alpha nature, it will likely hit unexpected errors and exit.
What techniques can I use to make the program run again?
You can use sys.exit() to signal that the program exited abnormally (generally, 1 is returned in case of error).
Your Python script could look something like this:
import sys

def main():
    # ...

if __name__ == '__main__':
    try:
        main()
    except Exception as e:
        print >> sys.stderr, e
        sys.exit(1)
    else:
        sys.exit()
You could call main() again in case of error, but the program might not be in a state where it can work correctly again.
It may be safer to launch the program in a new process instead.
So you could write a script which invokes the Python script, gets its return value when it finishes, and relaunches it if the return value is different from 0 (which is what sys.exit() uses as return value by default).
This may look something like this:
import subprocess

command = 'thescript'
args = ['arg1', 'arg2']
while True:
    ret_code = subprocess.call([command] + args)
    if ret_code == 0:
        break
You can create a wrapper using subprocess (http://docs.python.org/library/subprocess.html) which will spawn your application as a child process and track its execution.
The easiest way is to catch errors, and close the old instance and open a new instance of the program when you catch them, as sketched below.
Note that it will not always work (in cases where the program stops working without throwing an error).
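A rough sketch of that idea (run_bot is a hypothetical stand-in for the bot's main loop; os.execv simply replaces the crashed process with a fresh copy of the same script):
import os
import sys

def run_bot():
    pass  # stand-in for the bot's real main loop

if __name__ == '__main__':
    try:
        run_bot()
    except Exception as e:
        print("Bot crashed: %s, restarting..." % e)
        # Replace the current process with a new instance of this same script.
        os.execv(sys.executable, [sys.executable] + sys.argv)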