I want to run a function after I press Ctrl+C in the terminal. I am scraping some data to a file using Python, and when I terminate the scraping process I want to immediately close the file and push it to Google Drive. I have written two separate files for scraping and pushing, but I would like to do this all in one file. How do I detect Ctrl+C in Python? Something like: if Ctrl+C, do this...
Any help would be super useful! Thanks!
Pressing Ctrl+C in the terminal sends SIGINT, which Python raises as a KeyboardInterrupt exception. (This is also Ctrl+C on macOS; Cmd+C is copy, not interrupt.)
You can catch this exception with a simple try/except.
For instance, the following code runs some dummy work that takes a while, and on Ctrl+C stops that work and prints its current state.
import time

try:
    for i in range(100):
        time.sleep(1)
except KeyboardInterrupt:
    print(i)
Note that, as specified in the doc, the KeyboardInterrupt exception inherits from BaseException but not from Exception.
As a consequence, an except Exception clause would not catch a KeyboardInterrupt, while an except BaseException clause would catch both KeyboardInterrupt and Exception.
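The hierarchy is easy to demonstrate with a small self-contained check (this helper is illustrative, not from the original answer):

```python
# KeyboardInterrupt inherits from BaseException, not Exception,
# so an "except Exception" clause does not catch it.
def classify(exc):
    try:
        raise exc
    except Exception:
        return "Exception"
    except BaseException:
        return "BaseException"

print(classify(ValueError()))         # Exception
print(classify(KeyboardInterrupt()))  # BaseException
```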
What you need is the atexit module: https://docs.python.org/2/library/atexit.html
Create a handler function to do the persisting part, and register it with atexit to run at exit.
import atexit

def goodbye(name, adjective):
    print('Goodbye, %s, it was %s to meet you.' % (name, adjective))

atexit.register(goodbye, 'Donny', 'nice')
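To see the handler fire at interpreter exit without terminating the current session, one can run a tiny child interpreter and inspect its output; a sketch (the child script body is illustrative):

```python
import subprocess
import sys
import textwrap

# Run a minimal script in a child interpreter; its atexit handler
# should print after the main body finishes.
child = textwrap.dedent("""
    import atexit
    atexit.register(lambda: print('cleanup ran'))
    print('main body done')
""")
out = subprocess.run([sys.executable, "-c", child],
                     capture_output=True, text=True).stdout
print(out.splitlines())  # ['main body done', 'cleanup ran']
```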
I have a batch file running a python script.
This python script always ends using sys.exit(code) no matter what, with code being an error code.
The batch file doesn't get my error code and instead always reads 0 with the following:
..\..\STS\Python\python.exe Version_code_ID.py %*
echo Error level = %ERRORLEVEL%
This installation of Python is 3.7.1.
I know for sure my python code exited with other codes than 0 thanks to my logger (and me voluntarily causing errors for testing purpose).
For example I had it exit with 1 and 12, both attempts resulting in getting 0 in the batch.
Just in case, here is also the python function I use to exit:
def exit(code):
    if code > 0:
        log.error("The computing module will shut down with the following error code: " + str(code))
    elif code == 0:
        log.info("Execution successful")
    log.info("STOP of VERSION_CODE_ID Computing Module")
    log.removeHandler(stream_handler)
    log.removeHandler(file_handler)
    stream_handler.close()
    file_handler.close()
    sys.exit(code)
log is just the name of my logging.handlers logger.
Any idea what might be causing this problem?
Turns out the problem was that the bulk of my python script was in this try clause:
try:
    # All of my code
except SystemExit:
    ()
except Exception as ex:
    fun.log.error('ERROR: Unknown exception: ' + repr(ex))
I originally added the
except SystemExit:
    ()
clause because I thought SystemExit showing up as an "exception" in the console was a problem, but it's not.
In short the solution was to remove that except.
The reason seems to be that catching the SystemExit, as the name implies, swallows the exit: the script then finishes normally, so the batch file sees a return code of 0 and believes no error code was sent.
Surprisingly for me, except Exception as ex: doesn't catch the SystemExit raised by sys.exit(), since SystemExit inherits from BaseException rather than Exception. That's great in my case since I still need it to log unknown exceptions.
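That behaviour can be verified in isolation; a minimal sketch (not from the original post):

```python
import sys

# sys.exit() raises SystemExit, which inherits from BaseException,
# so an "except Exception" clause does not intercept it.
try:
    sys.exit(12)
except Exception:
    result = "swallowed by except Exception"
except SystemExit as e:
    result = e.code

print(result)  # 12
```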
I'm working on a terminal that can call other programs like any other terminal. I'm using subprocess for it, on Windows.
I'm running into 2 issues.
First:
Currently, I'm using OSError for all errors raised when using subprocess.Popen.
The code for it is here:
try:
    subprocess.Popen([command])
except OSError:
    print("'" + command[0] + "' is not recognised as a command, program or bterm file.")
When I type python, it opens command-line python correctly.
When I type asdfa, it returns the error.
The problem is, when I type python non-existent-file.py I get the same error, when the child argument was the issue.
I want the terminal to return (null): can't open file 'test': [Errno 2] No such file or directory like when it's called from cmd or bash.
How can I distinguish between these 2 errors, while keeping my custom error message for when the file doesn't exist?
Second: Whenever I pass multi-word args into subprocess.Popen or subprocess.call I automatically get that error, which I don't get using os.system()
I don't want to use os.system because I can't raise custom errors with it.
What am I doing wrong?
Exceptions in subprocess calls:
Exceptions raised in the child process, before the new program has started to execute, will be re-raised in the parent.
Additionally, the exception object will have one extra attribute called child_traceback, which is a string containing traceback information from the child’s point of view.
The most common exception raised is OSError.
This occurs, for example, when trying to execute a non-existent file. Applications should prepare for OSError exceptions.
A ValueError will be raised if Popen is called with invalid arguments.
check_call() and check_output() will raise CalledProcessError if the called process returns a non-zero return code.
You can find more at:
https://docs.python.org/2/library/subprocess.html#exceptions
You can also find the exception hierarchy at:
https://docs.python.org/2/library/exceptions.html#exception-hierarchy
try:
    output = subprocess.check_output("\\test.exe")
except subprocess.CalledProcessError as e:
    print("Something Fishy... returncode: " + str(e.returncode) + ", output:\n" + e.output)
else:
    print("Working Fine:\n" + output)
You could test for the existence of the executable first with the help of shutil.which.
import shutil

if shutil.which(commands[0]):
    try:
        subprocess.Popen(commands)
    except OSError as err:
        print(err)
else:
    print("'{}' is not recognised as a command, program or bterm file.".format(commands[0]))
The documentation has a great deal of info however: https://docs.python.org/dev/library/subprocess.html which may be helpful.
Edit: showed how to capture output, thanks to Auxilor
I have a python script running in the build phase of Jenkins, in the execute shell area.
The problem is that if the script fails, I still see the build as successful. I checked, and Python uses something similar to the return code of a shell command (the one you read with $?), but I can't figure out where to put the call so that any failure in the Python script triggers that return code.
import sys
..... code here
....functions
sys.exit(-1)
I need to return sys.exit(-1), but where do I put it in the Python code? So far I can only handle it using try blocks, putting sys.exit(-1) in the except part, but this adds a lot of code, since I have a lot of functions in the script.
Is there a global location that I can use, to trigger the failure, so the Jenkins job fail?
def main():
    try:
        do_the_work()
        sys.exit(0)  # success
    except:
        # insert code to log and debug the problem
        sys.exit(-1)
In words: if do_the_work returns, sys.exit(0). If do_the_work raises any exception it does not itself handle, sys.exit(-1). Inside do_the_work, do not return except on success; raise an exception on any unrecoverable error state, for example:
class DoSomethingError(Exception):
    pass

...
ok = do_something()
if not ok:
    print("do_something error return")
    raise DoSomethingError
Logging and debugging: search stack overflow for python answers concerning how to obtain and log an error traceback from a caught exception.
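As a sketch of that logging step (do_the_work and the error are placeholders), traceback.format_exc() turns the caught exception into a string suitable for logging:

```python
import traceback

def do_the_work():
    raise RuntimeError("unrecoverable error state")  # placeholder failure

def main():
    try:
        do_the_work()
        return 0, ""  # success, nothing to log
    except Exception:
        return 1, traceback.format_exc()  # failure plus the full traceback

code, tb = main()
print(code)                  # 1
print("RuntimeError" in tb)  # True
```

In a real script the returned code would be passed to sys.exit() so Jenkins sees the failure.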
This question is a duplicate; please see the similar questions below, which are already answered.
How/When does Execute Shell mark a build as failure in Jenkins?
Jenkins Build Script exits after Google Test execution
I am working with the cwiid library, which is a library written in C, but used in python. The library allows me to use a Wiimote to control some motors on a robot. The code is running as a daemon on an embedded device without a monitor, keyboard, or mouse.
When I try to initialize the object:
import cwiid
while True:
try:
wm = cwiid.Wiimote()
except RuntimeError:
# RuntimeError exception thrown if no Wiimote is trying to connect
# Wait a second
time.sleep(1)
# Try again
continue
99% of the time, everything works, but once in a while, the library gets into some sort of weird state where the call to cwiid.Wiimote() results in the library writing "Socket connect error (control channel)" to stderr, and python throwing an exception. When this happens, every subsequent call to cwiid.Wiimote() results in the same thing being written to stderr, and the same exception being thrown until I reboot the device.
What I want to do is detect this problem, and have python reboot the device automatically.
The type of exception the cwiid library throws when it's in this weird state is also RuntimeError, which is indistinguishable from a connection timeout exception (which is very common), so I can't differentiate them that way. What I want to do is read stderr right after running cwiid.Wiimote() to see if the message "Socket connect error (control channel)" appears, and if so, reboot.
So far, I can redirect stderr to prevent the message from showing up by using some os.dup() and os.dup2() methods, but that doesn't appear to help me read stderr.
Most of the examples online deal with reading stderr if you're running something with subprocess, which doesn't apply in this case.
How could I go about reading stderr to detect the message being written to it?
I think what I'm looking for is something like:
while True:
    try:
        r, w = os.pipe()
        os.dup2(sys.stderr.fileno(), r)
        wm = cwiid.Wiimote()
    except RuntimeError:
        # RuntimeError exception thrown if no Wiimote is trying to connect
        if 'Socket connect error (control channel)' in os.read(r, 100):
            pass  # Reboot
        # Wait a second
        time.sleep(1)
        # Try again
        continue
This doesn't seem to work the way I think it should though.
As an alternative to fighting with stderr, how about the following which retries several times in quick succession (which should handle connection errors) before giving up:
while True:
    for i in range(50):  # try 50 times
        try:
            wm = cwiid.Wiimote()
            break  # break out of "for" and re-loop in "while"
        except RuntimeError:
            time.sleep(1)
    else:
        raise RuntimeError("permanent Wiimote failure... reboot!")
Under the hood, subprocess uses anonymous pipes in addition to dups to redirect subprocess output. To get a process to read its own stderr, you need to do this manually. It involves getting an anonymous pipe, redirecting the standard error to the pipe's input, running the stderr-writing action in question, reading the output from the other end of the pipe, and cleaning everything back up. It's all pretty fiddly, but I think I got it right in the code below.
The following wrapper for your cwiid.Wiimote call will return a tuple consisting of the result returned by the function call (None in case of RuntimeError) and stderr output generated, if any. See the tests function for example of how it's supposed to work under various conditions. I took a stab at adapting your example loop but don't quite understand what's supposed to happen when the cwiid.Wiimote call succeeds. In your example code, you just immediately re-loop.
Edit: Oops! Fixed a bug in example_loop() where Wiimote was called instead of passed as an argument.
import time
import os
import fcntl
def capture_runtime_stderr(action):
    """Handle runtime errors and capture stderr"""
    (r, w) = os.pipe()
    fcntl.fcntl(w, fcntl.F_SETFL, os.O_NONBLOCK)
    saved_stderr = os.dup(2)
    os.dup2(w, 2)
    try:
        result = action()
    except RuntimeError:
        result = None
    finally:
        os.close(w)
        os.dup2(saved_stderr, 2)
    with os.fdopen(r) as o:
        output = o.read()
    return (result, output)
## some tests
def return_value():
    return 5

def return_value_with_stderr():
    os.system("echo >&2 some output")
    return 10

def runtime_error():
    os.system("echo >&2 runtime error occurred")
    raise RuntimeError()

def tests():
    print(capture_runtime_stderr(return_value))
    print(capture_runtime_stderr(return_value_with_stderr))
    print(capture_runtime_stderr(runtime_error))
    os.system("echo >&2 never fear, stderr is back to normal")
## possible code for your loop
def example_loop():
    while True:
        (wm, output) = capture_runtime_stderr(cwiid.Wiimote)
        if wm is None:
            if "Socket connect error" in output:
                raise RuntimeError("library borked, time to reboot")
            time.sleep(1)
            continue
        ## do something with wm??
I am doing scientific programming in python, which requires me to run the same script in parallel with minor parameter tweaks. Also I frequently exit the program with either a keyboard interrupt or an exception being raised.
I'd like to use locks to prevent writing into a directory I may already be working in with another instance of my script. I tried lockfile, and in cases of interrupts/exceptions the lock remains on the directory. Is there any way I could release locks when my program is exiting, both "legally" and due to exceptions/interrupts. I am thinking can I somehow work with the garbage collection routines and add in the provision to unlock the directory?
To unlock the file in case of an exception you can use a finally clause, which is executed in both cases: when the try statement succeeds and also when an exception is raised.
More info: https://docs.python.org/2/tutorial/errors.html#defining-clean-up-actions To unlock when you hit a keyboard interrupt you will have to implement custom handling of that signal. In the handler you release the lock and then exit the program. Writing a custom signal handler is well described here: How do I capture SIGINT in Python?
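One way to guarantee the release is to wrap the lock in a context manager, so the finally clause travels with the lock; a minimal sketch (dir_lock and the mkdir-based locking are illustrative assumptions, not part of the question's lockfile library):

```python
import os
from contextlib import contextmanager

@contextmanager
def dir_lock(path):
    # mkdir is atomic on most filesystems and fails if another
    # instance already holds the lock (hypothetical scheme).
    os.mkdir(path)
    try:
        yield
    finally:
        # Runs on normal exit, on exceptions, and on KeyboardInterrupt.
        os.rmdir(path)
```

With this in place, every instance of the script wraps its work in `with dir_lock(...)`, and the lock directory is removed no matter how the block exits.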
I tried this with Python 3.3
# Print something when program exits
class FinalExit:
    def __del__(self):
        print('FinalExit')

class AnException(Exception):
    pass

def throwException():
    raise AnException()

if __name__ == '__main__':
    f = FinalExit()
    throwException()
    print('Not Printed')
The output is:
Traceback (most recent call last):
File "GC.py", line 20, in <module>
throwException()
File "GC.py", line 13, in throwException
raise AnException()
__main__.AnException
FinalExit
Note the text FinalExit occurs after the detail about the exception, but the text Not Printed is indeed not printed.
I'm sure you can use this principle to create and destroy a lock file.
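Applied to a lock file, that principle might look like this sketch (the LockFile class is illustrative; it relies on CPython's reference counting to run __del__ promptly, which is not guaranteed on other interpreters):

```python
import os

class LockFile:
    def __init__(self, path):
        self.path = path
        # Mode 'x' fails with FileExistsError if another instance holds the lock.
        with open(path, 'x') as f:
            f.write(str(os.getpid()))

    def __del__(self):
        try:
            os.remove(self.path)
        except OSError:
            pass  # already gone; nothing to clean up
```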
You can use lockfile-create if you are on linux:
import os
import sys
from time import sleep
from subprocess import check_call, CalledProcessError
try:
    check_call(["lockfile-create", "-q", "-p", "-r", "0", "-l", "my.lock"])
except CalledProcessError as e:
    print("{} is already running".format(sys.argv[0]))
    print(e.returncode)
    exit(1)

# main body
for i in range(10):
    sleep(2)
    print(1)

check_call(["rm", "-f", "my.lock"])
Using the -p flag means the pid of the process will be written to the lock file, the pid is used to see if the process is still active or if the lock file is stale.
See this answer for a full explanation.