One of the modules in an app I'm working on is intended to be used as a long-running process on Linux, and I would like it to gracefully handle SIGTERM, SIGHUP and possibly other signals. The core part of the program is in fact a loop which periodically runs a function (which in turn wakes up another thread, but this is less important). It looks more or less like this:
while True:
    try:
        do_something()
        sleep(60)
    except KeyboardInterrupt:
        break

cleanup_and_exit()
What I'd like to add now is to catch SIGTERM and exit the loop, the same way a KeyboardInterrupt exception would.
One thought I have is to add a flag which will be set to True by the signal handler function, and replace the sleep(60) with sleep(0.1) or whatever, with a counter that counts seconds:
_exit_flag = False

while not _exit_flag:
    try:
        for _ in xrange(600):
            if _exit_flag:
                break
            do_something()
            sleep(0.1)
    except KeyboardInterrupt:
        break

cleanup_and_exit()
and somewhere else:
def signal_handler(sig, frame):
    global _exit_flag
    _exit_flag = True
But I'm not sure this is the best / most efficient way to do it.
Rather than using a sentinel in the main loop and thus having to wake more frequently than you'd really like to check for it, why not push the cleanup into the handler? Something like:
import signal
import time

class BlockingAction(object):
    def __new__(cls, action):
        if isinstance(action, BlockingAction):
            return action
        else:
            new_action = super(BlockingAction, cls).__new__(cls)
            new_action.action = action
            new_action.active = False
            return new_action

    def __call__(self, *args, **kwargs):
        self.active = True
        result = self.action(*args, **kwargs)
        self.active = False
        return result

class SignalHandler(object):
    def __new__(cls, sig, action):
        if isinstance(action, SignalHandler):
            handler = action
        else:
            handler = super(SignalHandler, cls).__new__(cls)
            handler.action = action
            handler.blocking_actions = []
        signal.signal(sig, handler)
        return handler

    def __call__(self, signum, frame):
        while any(a.active for a in self.blocking_actions):
            time.sleep(.01)
        return self.action()

    def blocks_on(self, action):
        blocking_action = BlockingAction(action)
        self.blocking_actions.append(blocking_action)
        return blocking_action

def handles(signal):
    def get_handler(action):
        return SignalHandler(signal, action)
    return get_handler

@handles(signal.SIGTERM)
@handles(signal.SIGHUP)
@handles(signal.SIGINT)
def cleanup_and_exit():
    # Note that this assumes that this method actually exits the program explicitly
    # If it does not, you'll need some form of sentinel for the while loop
    pass

@cleanup_and_exit.blocks_on
def do_something():
    pass

while True:
    do_something()
    time.sleep(60)
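For comparison, here is a lighter-weight sketch that is not from the question or the answer above (it assumes a Unix platform and illustrative names): instead of a sentinel polled every 0.1 seconds, the loop waits on a threading.Event that the signal handler sets. Event.wait(timeout) returns as soon as the flag is set, so the loop wakes immediately when the signal arrives.

```python
import signal
import threading

# Event set by the signal handler; the main loop waits on it instead of
# sleeping blindly, so it wakes as soon as SIGTERM is delivered.
shutdown = threading.Event()

def request_shutdown(signum, frame):
    shutdown.set()

signal.signal(signal.SIGTERM, request_shutdown)

def do_something():
    pass  # placeholder for the real periodic work

def main_loop():
    while not shutdown.is_set():
        do_something()
        shutdown.wait(60)  # sleeps up to 60 s, but returns early once set
    # cleanup_and_exit() would run here
```

Note that signal.signal() may only be called from the main thread, so the handler must be registered before any worker threads take over.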
When I hit Ctrl-C, the prompt pops up on the screen and I input "no" because I want the program to keep running, but it exits anyway. Where am I going wrong? Is this functionality even possible?
from functools import wraps
import sys

class CleanExit(object):
    def __init__(self, *args, **kw):
        # You can supply an optional function
        # to be called when the KeyboardInterrupt
        # takes place.
        # If the function doesn't require any arguments:
        #     @CleanExit(handler=func)
        # If the function requires arguments:
        #     @CleanExit('foo', handler=handle)
        self.kw = kw
        self.args = args
        self.handler = kw.get('handler')

    def __call__(self, original_func):
        decorator_self = self
        @wraps(original_func)
        def wrappee(*args, **kwargs):
            try:
                original_func(*args, **kwargs)
            except KeyboardInterrupt:
                if self.handler:
                    self.handler(*self.args)
                else:
                    sys.exit(0)
        return wrappee

def handle():
    answer = raw_input("Are you sure you want to exit?").lower().strip()
    if 'y' in answer:
        sys.exit(0)

@CleanExit(handler=handle)
def f():
    while 1:
        pass

f()
Your problem is that you're not doing anything to continue the function after handling it - so your code handles the interrupt, and then exits anyway. You can recursively re-enter wrappee if the handler exits successfully like this:
def __call__(self, original_func):
    decorator_self = self
    @wraps(original_func)
    def wrappee(*args, **kwargs):
        try:
            original_func(*args, **kwargs)
        except KeyboardInterrupt:
            if self.handler:
                self.handler(*self.args)
                wrappee(*args, **kwargs)
            else:
                sys.exit(0)
    return wrappee
Now this should work. Note that this is a little bit naughty, as Python can't optimise tail calls, so if you KeyboardInterrupt more often than sys.getrecursionlimit(), Python will run out of stack frames and crash.
EDIT: That was silly - having thought about it, this function is so trivial to derecurse by hand it probably doesn't even count.
def __call__(self, original_func):
    decorator_self = self
    @wraps(original_func)
    def wrappee(*args, **kwargs):
        while True:
            try:
                original_func(*args, **kwargs)
                return
            except KeyboardInterrupt:
                if self.handler:
                    self.handler(*self.args)
                else:
                    sys.exit(0)
    return wrappee
should also work just fine.
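For what it's worth, the loop version can be exercised without sending a real Ctrl-C by parameterizing the exception type. The following is a hedged sketch of the same loop-instead-of-recursion pattern; clean_exit and flaky are illustrative names, not from the post, and the handler here returns True to mean "keep running" and False to mean "give up".

```python
from functools import wraps

# Same derecursed pattern: keep retrying the wrapped function in a loop
# until it returns normally or the handler declines to continue.
def clean_exit(handler, exc_type=KeyboardInterrupt):
    def decorator(func):
        @wraps(func)
        def wrappee(*args, **kwargs):
            while True:
                try:
                    return func(*args, **kwargs)
                except exc_type:
                    if not handler():
                        raise SystemExit(0)
        return wrappee
    return decorator

calls = []

@clean_exit(lambda: True, exc_type=RuntimeError)
def flaky():
    # Raises twice to simulate two interrupts, then finishes normally.
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError('interrupted')
    return 'done'
```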
I'm trying to light a 5mm LED while a function is running. When this function (more details about this below) is finished and has returned a value I would like to break the while loop.
Current code for while loop:
pins = [3, 5, 8, 15, 16]

def piBoard():
    finished = 0
    while finished != 10:
        for pin in pins:
            GPIO.output(pin, GPIO.HIGH)
            time.sleep(0.1)
            GPIO.output(pin, GPIO.LOW)
        finished += 1
Now in the above example I just run the while loop until the count equals 10, which is not best practice. I would like the while loop to break once my next function has returned a value.
The function that should break my while loop when it returns its value:
def myFunction():
    Thread(target=piBoard).start()
    # ... trying to recognize the song ...
    return song  # the song which is recognized
Thanks, - K.
It sounds to me like you want to write a class that extends Thread and implements __enter__ and __exit__ methods to make it work in the with statement. Simple to implement, simple syntax, works pretty well. The class will look like this:
import threading
import time

class Blinky(threading.Thread):
    def __init__(self):
        super().__init__()
        self.daemon = True
        self._finished = False

    def __enter__(self):
        self.start()

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.stop()

    def run(self):
        # turn light on
        while not self._finished:
            time.sleep(.5)
        # turn light off

    def stop(self):
        self._finished = True
Then, to run your function, you simply put:
with Blinky():
    my_function()
The light should turn on once the with statement is reached and turn off up to a half second after the context of the with is exited.
Put True in the while condition, and inside the loop add an if statement that checks whether your function has returned a value; if it has, break.
You need some kind of inter-thread communication. threading.Event is about as simple as you can get.
import threading

song_recognized_event = threading.Event()
In your song recognizer, call set() once the song is recognized. In your LED loop, check isSet() occasionally while toggling the LEDs:
while not song_recognized_event.isSet():
    # toggle LEDs
Run clear() to reset it.
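Put together as a runnable sketch (recognize_song and the blink counter are stand-ins for the real recognizer and the GPIO toggling, and the short sleeps are arbitrary):

```python
import threading
import time

song_recognized_event = threading.Event()
blink_count = 0

def recognize_song():
    time.sleep(0.2)          # stand-in for the real recognition work
    song_recognized_event.set()

def blink_leds():
    global blink_count
    while not song_recognized_event.is_set():
        blink_count += 1     # stand-in for toggling the GPIO pins
        time.sleep(0.05)

recognizer = threading.Thread(target=recognize_song)
led_loop = threading.Thread(target=blink_leds)
recognizer.start()
led_loop.start()
recognizer.join()
led_loop.join()
```

The LED loop exits within one blink period of the event being set, which is usually an acceptable latency for a status light.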
If you are open to using threads, you can achieve this with a thread pool. Here's the example code:
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

WORK_FINISHED = False

def piBoard():
    while not WORK_FINISHED:
        # Do some stuff
        # Drink some coffee
        time.sleep(0.1)

def myFunction():
    time.sleep(5)
    global WORK_FINISHED
    WORK_FINISHED = True  # update global status flag
    return something

if __name__ == '__main__':
    futures = []
    MAX_WORKERS = 5  # max number of threads you want to create
    with ThreadPoolExecutor(MAX_WORKERS) as executor:
        executor.submit(piBoard)
        # submit your function to worker thread
        futures.append(executor.submit(myFunction))
        # if you need to get return value from `myFunction`
    for fut in as_completed(futures):
        res = fut.result()
Hope this helps.
Using a decorator and asyncio, inspired by @Eric Ed Lohmar:
import asyncio
from functools import wraps

def Blink():
    async def _blink():
        while True:
            print("OFF")
            await asyncio.sleep(.5)
            print("ON")
            await asyncio.sleep(.5)

    def Blink_decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            asyncio.ensure_future(_blink())
            await func(*args, **kwargs)
        return wrapper
    return Blink_decorator

@Blink()
async def longTask():
    print("Mission Start")
    await asyncio.sleep(3)
    print("Mission End")

def main():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(longTask())

if __name__ == '__main__':
    main()
I'm trying to find a way to start a new Process and get its output if it takes less than X seconds. If the process takes longer, I would like to ignore its result, kill the Process, and carry on.
I basically need to add the timer to the code below. I'm not sure if there's a better way to do it; I'm open to a different and better solution.
from multiprocessing import Process, Queue

def f(q):
    # Ugly work
    q.put(['hello', 'world'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()
    p.join()
Thanks!
You may find the following module useful in your case:
Module
#! /usr/bin/env python3
"""Allow functions to be wrapped in a timeout API.

Since code can take a long time to run and may need to terminate before
finishing, this module provides a set_timeout decorator to wrap functions."""

__author__ = 'Stephen "Zero" Chappell ' \
             '<stephen.paul.chappell@atlantis-zero.net>'
__date__ = '18 December 2017'
__version__ = 1, 0, 1
__all__ = [
    'set_timeout',
    'run_with_timeout'
]

import multiprocessing
import sys
import time

DEFAULT_TIMEOUT = 60


def set_timeout(limit=None):
    """Return a wrapper that provides a timeout API for callers."""
    if limit is None:
        limit = DEFAULT_TIMEOUT
    _Timeout.validate_limit(limit)

    def wrapper(entry_point):
        return _Timeout(entry_point, limit)

    return wrapper


def run_with_timeout(limit, polling_interval, entry_point, *args, **kwargs):
    """Execute a callable object and automatically poll for results."""
    engine = set_timeout(limit)(entry_point)
    engine(*args, **kwargs)
    while engine.ready is False:
        time.sleep(polling_interval)
    return engine.value


def _target(queue, entry_point, *args, **kwargs):
    """Help with multiprocessing calls by being a top-level module function."""
    # noinspection PyPep8,PyBroadException
    try:
        queue.put((True, entry_point(*args, **kwargs)))
    except:
        queue.put((False, sys.exc_info()[1]))


class _Timeout:
    """_Timeout(entry_point, limit) -> _Timeout instance"""

    def __init__(self, entry_point, limit):
        """Initialize the _Timeout instance with all needed attributes."""
        self.__entry_point = entry_point
        self.__limit = limit
        self.__queue = multiprocessing.Queue()
        self.__process = multiprocessing.Process()
        self.__timeout = time.monotonic()

    def __call__(self, *args, **kwargs):
        """Begin execution of the entry point in a separate process."""
        self.cancel()
        self.__queue = multiprocessing.Queue(1)
        self.__process = multiprocessing.Process(
            target=_target,
            args=(self.__queue, self.__entry_point) + args,
            kwargs=kwargs
        )
        self.__process.daemon = True
        self.__process.start()
        self.__timeout = time.monotonic() + self.__limit

    def cancel(self):
        """Terminate execution if possible."""
        if self.__process.is_alive():
            self.__process.terminate()

    @property
    def ready(self):
        """Property letting callers know if a returned value is available."""
        if self.__queue.full():
            return True
        elif not self.__queue.empty():
            return True
        elif self.__timeout < time.monotonic():
            self.cancel()
        else:
            return False

    @property
    def value(self):
        """Property that retrieves a returned value if available."""
        if self.ready is True:
            valid, value = self.__queue.get()
            if valid:
                return value
            raise value
        raise TimeoutError('execution timed out before terminating')

    @property
    def limit(self):
        """Property controlling what the timeout period is in seconds."""
        return self.__limit

    @limit.setter
    def limit(self, value):
        self.validate_limit(value)
        self.__limit = value

    @staticmethod
    def validate_limit(value):
        """Verify that the limit's value is not too low."""
        if value <= 0:
            raise ValueError('limit must be greater than zero')
The following example demonstrates its usage:
Example
from time import sleep


def main():
    timeout_after_four_seconds = set_timeout(4)
    # create copies of a function that have a timeout
    a = timeout_after_four_seconds(do_something)
    b = timeout_after_four_seconds(do_something)
    c = timeout_after_four_seconds(do_something)
    # execute the functions in separate processes
    a('Hello', 1)
    b('World', 5)
    c('Jacob', 3)
    # poll the functions to find out what they returned
    results = [a, b, c]
    polling = set(results)
    while polling:
        for process, name in zip(results, 'abc'):
            if process in polling:
                ready = process.ready
                if ready is True:  # if the function returned
                    print(name, 'returned', process.value)
                    polling.remove(process)
                elif ready is None:  # if the function took too long
                    print(name, 'reached timeout')
                    polling.remove(process)
                else:  # if the function is running
                    assert ready is False, 'ready must be True, False, or None'
        sleep(0.1)
    print('Done.')


def do_something(data, work):
    sleep(work)
    print(data)
    return work


if __name__ == '__main__':
    main()
Does the process you are running involve a loop?
If so, you can record a timestamp prior to starting the loop and include an if statement within the loop that calls sys.exit() to terminate the script once the current timestamp differs from the recorded start timestamp by more than x seconds.
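A minimal sketch of that idea (the function name and the 0.1-second work step are illustrative); returning instead of calling sys.exit() keeps the sketch self-contained:

```python
import time

def run_with_deadline(limit_seconds):
    # Record the start timestamp before entering the loop, then bail out
    # once the elapsed time exceeds the limit.  A real script might call
    # sys.exit() here instead of returning.
    start = time.monotonic()
    iterations = 0
    while True:
        iterations += 1          # the real work would go here
        if time.monotonic() - start > limit_seconds:
            return iterations
        time.sleep(0.1)
```

Note this only checks the deadline once per iteration, so a single slow iteration can overshoot the limit.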
All you need to adapt the queue example from the docs to your case is to pass the timeout to the q.get() call and terminate the process on timeout:
from Queue import Empty
...
try:
    print q.get(timeout=timeout)
except Empty:  # no value, timeout occurred
    p.terminate()
    q = None  # the queue might be corrupted after the `terminate()` call
p.join()
Using a Pipe might be more lightweight; otherwise the code is the same (you could use .poll(timeout) to find out whether there is data to receive).
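Assembled into a complete Python 3 sketch (the worker f and the two-second default are stand-ins for the question's code, and the function name is illustrative):

```python
import multiprocessing
from queue import Empty  # the Empty exception lives in `queue` on Python 3

def f(q):
    # stand-in for the question's "ugly work"
    q.put(['hello', 'world'])

def get_result_or_none(timeout=2.0):
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=f, args=(q,))
    p.start()
    try:
        result = q.get(timeout=timeout)  # wait at most `timeout` seconds
    except Empty:
        p.terminate()                    # took too long: kill and move on
        result = None
    p.join()
    return result
```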
I am trying to combine the answers I got from two different python questions.
Here is the first question and its answer. Basically I just wanted to spawn two threads, one to powerDown() and the other to powerUp(), where powerUp() pends on powerDown():
How to spawn a thread inside another thread in the same object in python?
import threading

class Server(threading.Thread):
    # some code

    def run(self):
        self.reboot()

    # This is the top level function called by other objects
    def reboot(self):
        # perhaps add a lock
        if not hasattr(self, "_down"):
            self._down = threading.Thread(target=self.__powerDown)
            self._down.start()
            up = threading.Thread(target=self.__powerUp)
            up.start()

    def __powerDown(self):
        # do something
        pass

    def __powerUp(self):
        if not hasattr(self, "_down"):
            return
        self._down.join()
        # do something
        del self._down
Here is the second question and its answer. Basically I wanted to start a thread and then call a function of the object:
How to call a function on a running Python thread
import queue
import threading

class SomeClass(threading.Thread):
    def __init__(self, q, loop_time=1.0/60):
        self.q = q
        self.timeout = loop_time
        super(SomeClass, self).__init__()

    def onThread(self, function, *args, **kwargs):
        self.q.put((function, args, kwargs))

    def run(self):
        while True:
            try:
                function, args, kwargs = self.q.get(timeout=self.timeout)
                function(*args, **kwargs)
            except queue.Empty:
                self.idle()

    def idle(self):
        # put the code you would have put in the `run` loop here
        pass

    def doSomething(self):
        pass

    def doSomethingElse(self):
        pass
Here is the combined idea code. Basically I wanted to spawn a thread, then queue up functions to execute, which in this case is reboot(). reboot() in turn creates two threads, the powerDown() and powerUp() threads, where powerUp() pends on powerDown():
import threading
import Queue

class Server(threading.Thread):
    def __init__(self, q, loop_time=1.0/60):
        self.q = q
        self.timeout = loop_time
        super(Server, self).__init__()

    def run(self):
        while True:
            try:
                function, args, kwargs = self.q.get(timeout=self.timeout)
                function(*args, **kwargs)
            except queue.Empty:
                self.idle()

    def idle(self):
        # put the code you would have put in the `run` loop here
        pass

    # This is the top level function called by other objects
    def reboot(self):
        self.__onthread(self.__reboot)

    def __reboot(self):
        if not hasattr(self, "_down"):
            self._down = threading.Thread(target=self.__powerDown)
            self._down.start()
            up = threading.Thread(target=self.__powerUp)
            up.start()

    def __onThread(self, function, *args, **kwargs):
        self.q.put((function, args, kwargs))

    def __powerDown(self):
        # do something
        pass

    def __powerUp(self):
        if not hasattr(self, "_down"):
            return
        self._down.join()
        # do something
        del self._down
Everything works, except when I create two Server subclasses:
class ServerA(Server):
    pass

class ServerB(Server):
    pass
Here is the code that instantiates both subclasses and calls the start() and reboot() functions:
serverA = ServerA(None)
serverB = ServerB(None)
serverA.start()
serverB.start()
serverA.reboot()
serverB.reboot()
I expect serverA.reboot() and serverB.reboot() to happen concurrently, which is what I want, but they DO NOT! serverB.reboot() gets executed after serverA.reboot() is done. That is, if I put print statements, I get
serverA started
serverB started
serverA.reboot() called
serverA.__powerDown called
serverA.__powerUp called
serverB.reboot() called
serverB.__powerDown called
serverB.__powerUp called
I know for a fact that it takes longer for ServerA to reboot, so I expect something like this
serverA started
serverB started
serverA.reboot() called
serverB.reboot() called
serverA.__powerDown called
serverB.__powerDown called
serverB.__powerUp called
serverA.__powerUp called
I hope that makes sense. If it does, why aren't my reboot() functions happening simultaneously?
Why are you passing None when a queue object is expected in the first place? This causes an exception complaining that a NoneType object doesn't have a get method. Besides that, the exception you want to handle in the run method is Queue.Empty, not queue.Empty.
Here is the revised code and its output on my machine:
import threading
import Queue

class Server(threading.Thread):
    def __init__(self, title, q, loop_time=1.0/60):
        self.title = title
        self.q = q
        self.timeout = loop_time
        super(Server, self).__init__()

    def run(self):
        print "%s started" % self.title
        while True:
            try:
                function, args, kwargs = self.q.get(timeout=self.timeout)
                function(*args, **kwargs)
            except Queue.Empty:
                # print "empty"
                self.idle()

    def idle(self):
        # put the code you would have put in the `run` loop here
        pass

    # This is the top level function called by other objects
    def reboot(self):
        self.__onThread(self.__reboot)

    def __reboot(self):
        if not hasattr(self, "_down"):
            self._down = threading.Thread(target=self.__powerDown)
            self._down.start()
            up = threading.Thread(target=self.__powerUp)
            up.start()

    def __onThread(self, function, *args, **kwargs):
        self.q.put((function, args, kwargs))

    def __powerDown(self):
        # do something
        print "%s power down" % self.title

    def __powerUp(self):
        print "%s power up" % self.title
        if not hasattr(self, "_down"):
            return
        self._down.join()
        # do something
        del self._down

class ServerA(Server):
    pass

class ServerB(Server):
    pass

def main():
    serverA = ServerA("A", Queue.Queue())
    serverB = ServerB("B", Queue.Queue())

    serverA.start()
    serverB.start()

    serverA.reboot()
    serverB.reboot()

if __name__ == '__main__':
    main()
Output:
A started
B started
B power down
A power down
B power up
A power up
OK, I've written this class based on a bunch of other Spinner classes I found through Google Code Search.
It's working as intended, but I'm looking for a better way to handle the KeyboardInterrupt and SystemExit exceptions. Are there better approaches?
Here's my code:
#!/usr/bin/env python

import itertools
import sys
import threading
import time

class Spinner(threading.Thread):
    '''Represent a random work indicator, handled in a separate thread'''

    # Spinner glyphs
    glyphs = ('|', '/', '-', '\\', '|', '/', '-')
    # Output string format
    output_format = '%-78s%-2s'
    # Message to output while spinning
    spin_message = ''
    # Message to output when done
    done_message = ''
    # Time between spins
    spin_delay = 0.1

    def __init__(self, *args, **kwargs):
        '''Spinner constructor'''
        threading.Thread.__init__(self, *args, **kwargs)
        self.daemon = True
        self.__started = False
        self.__stopped = False
        self.__glyphs = itertools.cycle(iter(self.glyphs))

    def __call__(self, func, *args, **kwargs):
        '''Convenient way to run a routine with a spinner'''
        self.init()
        skipped = False
        try:
            return func(*args, **kwargs)
        except (KeyboardInterrupt, SystemExit):
            skipped = True
        finally:
            self.stop(skipped)

    def init(self):
        '''Shows a spinner'''
        self.__started = True
        self.start()

    def run(self):
        '''Spins the spinner while doing some task'''
        while not self.__stopped:
            self.spin()

    def spin(self):
        '''Spins the spinner'''
        if not self.__started:
            raise NotStarted('You must call init() first before using spin()')
        if sys.stdin.isatty():
            sys.stdout.write('\r')
            sys.stdout.write(self.output_format % (self.spin_message,
                                                   self.__glyphs.next()))
            sys.stdout.flush()
        time.sleep(self.spin_delay)

    def stop(self, skipped=None):
        '''Stops the spinner'''
        if not self.__started:
            raise NotStarted('You must call init() first before using stop()')
        self.__stopped = True
        self.__started = False
        if sys.stdin.isatty() and not skipped:
            sys.stdout.write('\b%s%s\n' % ('\b' * len(self.done_message),
                                           self.done_message))
            sys.stdout.flush()

class NotStarted(Exception):
    '''Spinner not started exception'''
    pass

if __name__ == '__main__':
    # Normal example
    spinner1 = Spinner()
    spinner1.spin_message = 'Scanning...'
    spinner1.done_message = 'DONE'
    spinner1.init()
    skipped = False
    try:
        time.sleep(5)
    except (KeyboardInterrupt, SystemExit):
        skipped = True
    finally:
        spinner1.stop(skipped)

    # Callable example
    spinner2 = Spinner()
    spinner2.spin_message = 'Scanning...'
    spinner2.done_message = 'DONE'
    spinner2(time.sleep, 5)
Thank you in advance.
You probably don't need to worry about catching SystemExit as it is raised by sys.exit(). You might want to catch it to clean up some resources just before your program exits.
The other way to catch KeyboardInterrupt is to register a signal handler to catch SIGINT. However for your example using try..except makes more sense, so you're on the right track.
A few minor suggestions:
Perhaps rename the __call__ method to start, to make it clearer that you're starting a job.
You might also want to make the Spinner class reusable by attaching a new thread within the start method, rather than in the constructor.
Also consider what happens when the user hits CTRL-C for the current spinner job -- can the next job be started, or should the app just exit?
You could also make the spin_message the first argument to start to associate it with the task about to be run.
For example, here is how someone might use Spinner:
dbproc = MyDatabaseProc()
spinner = Spinner()
spinner.done_message = 'OK'
try:
    spinner.start("Dropping the database", dbproc.drop, "mydb")
    spinner.start("Re-creating the database", dbproc.create, "mydb")
    spinner.start("Inserting data into tables", dbproc.populate)
    ...
except (KeyboardInterrupt, SystemExit):
    # stop the currently executing job
    spinner.stop()
    # do some cleanup if needed..
    dbproc.cleanup()
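A hedged sketch of what the reusable-start suggestion might look like: the worker thread is created inside start(), so one spinner object can run several jobs in a row. ReusableSpinner and its counter are illustrative names (not the original class), and the counter stands in for drawing glyphs on the terminal.

```python
import threading

class ReusableSpinner(object):
    def __init__(self):
        self.done_message = ''
        self.spins = 0
        self._stopped = threading.Event()
        self._thread = None

    def _spin(self):
        # Event.wait doubles as the spin delay and the stop check.
        while not self._stopped.wait(0.05):
            self.spins += 1          # stand-in for writing the next glyph

    def start(self, message, func, *args, **kwargs):
        # A fresh thread per job makes the spinner reusable, unlike a
        # threading.Thread subclass, which can only be started once.
        self._stopped.clear()
        self._thread = threading.Thread(target=self._spin)
        self._thread.daemon = True
        self._thread.start()
        try:
            return func(*args, **kwargs)
        finally:
            self.stop()

    def stop(self):
        self._stopped.set()
        if self._thread is not None:
            self._thread.join()
```

The try/finally ensures the spinner thread is always stopped, even if the job raises KeyboardInterrupt or SystemExit, which addresses the cleanup concern from the question.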