gevent: debug spinning thread? - python

In my gevent-based program, I've got a thread somewhere which is stuck in a loop, something like:
while True:
    gevent.sleep(0)
How can I figure out which thread this is? Is it possible to list (and get stack traces for) the running threads?

Method 1: Timeout
I use this in my code to keep track of greenlets that potentially block. A NodeTaskTimeout is raised when this happens. Just wrap your jobs in a Timeout, or provide them with a Timeout object.
with Timeout(90, False):
    task_jobs.join()

if task_jobs:
    print 'task jobs killed', task_jobs
    task_jobs.kill()
    if settings.DEBUG:
        raise NodeTaskTimeout
This prints out the task if it hangs/blocks/takes too long.
Especially nasty are jobs that depend on each other and cause a deadlock:
job1/thread1 -> job2/thread2 -> job3/thread3, where job3/thread3 only finishes when job1 is done, which will never happen because job2 is not done, and job2 is not done because job3 is not done... you get the idea ;)
Method 2: settrace
http://www.rfk.id.au/blog/entry/detect-gevent-blocking-with-greenlet-settrace/
but you also need to put the code that you suspect is "spinning" in a with block.
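Roughly, the idea behind that post (a simplified sketch, not the blog's exact code, and Unix-only because it relies on SIGALRM) is to install a greenlet trace function that re-arms an alarm on every switch; if no switch happens within the threshold, the alarm fires while the offending code is still on the stack:
import signal
import traceback

import greenlet

MAX_BLOCKING_TIME = 1  # seconds; threshold chosen for illustration

def _alarm_handler(signum, frame):
    # no greenlet switch happened within MAX_BLOCKING_TIME: dump the stack
    # of whatever is currently running in the main thread
    print("greenlet appears to be blocking:")
    traceback.print_stack(frame)
    signal.alarm(MAX_BLOCKING_TIME)

def _switch_trace(event, args):
    # re-arm the alarm on every greenlet switch or throw
    if event in ("switch", "throw"):
        signal.alarm(MAX_BLOCKING_TIME)

greenlet.settrace(_switch_trace)
signal.signal(signal.SIGALRM, _alarm_handler)
signal.alarm(MAX_BLOCKING_TIME)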

There is an official API for detecting blocking; it will pinpoint the exact line that caused the block. See the code example below:
# gevent 1.3.7
# greenlet 0.4.15
# zope.event 4.4
import logging
import time
from pprint import pformat

import gevent
from gevent import config, get_hub
from gevent.events import IEventLoopBlocked
import zope.event

# initial logging
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

# setup gevent config:
# enable the monitor thread and set the blocking threshold
config.monitor_thread = True
config.max_blocking_time = 4

# start the monitor thread
hub = get_hub()
monitor = hub.start_periodic_monitoring_thread()

# register a subscriber with zope.event so blocking events get logged
def g(event):
    if IEventLoopBlocked.providedBy(event):
        log.error('Greenlet: {}, exceeded the max blocking time: {}'.format(event.greenlet, event.blocking_time))
        log.error(pformat(event.info))

zope.event.subscribers.append(g)

# you can also create your own monitoring function
# def check(hub):
#     print('< periodic check in monitoring thread >')
#
# monitor.add_monitoring_function(check, 1)

def gl1():
    # use time.sleep to trigger a block below the threshold
    log.info('block at gl1 for 2 seconds')
    time.sleep(2)
    log.info('leave gl1 now')

def gl2():
    # use time.sleep to trigger a block above the threshold
    log.info('block at gl2 for 6 seconds should be detected')
    time.sleep(6)
    log.info('leave gl2 now')

def gl3():
    # gevent.sleep will not block
    log.info('gl3 will not block since it uses gevent.sleep')
    gevent.sleep(8)
    log.info('leave gl3 now')

gevent.joinall([
    gevent.spawn(gl3),
    gevent.spawn(gl1),
    gevent.spawn(gl2),
])
I hope it helps!
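For the original question of listing the running greenlets and getting their stack traces, a common sketch (not from the answers above) walks the garbage collector looking for greenlet objects and prints each one's frame:
import gc
import traceback

import greenlet

def dump_greenlet_stacks():
    # find every live greenlet and print a stack trace for the ones
    # that currently have a frame (i.e. have started and not finished)
    for obj in gc.get_objects():
        if isinstance(obj, greenlet.greenlet) and obj.gr_frame is not None:
            print('--- {!r} ---'.format(obj))
            traceback.print_stack(obj.gr_frame)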

Related

Is there any graceful way to interrupt a python concurrent future result() call?

The only mechanism I can find for handling a keyboard interrupt is to poll. Without the while loop below, the signal processing never happens and the process hangs forever.
Is there any graceful mechanism for allowing a keyboard interrupt to function when given a concurrent future object?
Putting polling loops all over my code base seems to defeat the purpose of using futures at all.
More info:
Waiting on the future in the main thread on Windows blocks all signal handling, even if it's fully cancellable and even if it has not "started" yet. The word "exiting" doesn't even print. So 'cancellability' is only part (the easy part) of the issue.
In my real code, I obtain futures via executors (run_coroutine_threadsafe, in this case); this was just a simplified example.
import concurrent.futures
import signal
import time
import sys

fut = concurrent.futures.Future()

def handler(signum, frame):
    print("exiting")
    fut.cancel()
    signal.signal(signal.SIGINT, orig)
    sys.exit()

orig = signal.signal(signal.SIGINT, handler)

# a time.sleep is fully interruptible with a signal... but a future isn't
# time.sleep(100)

while True:
    try:
        fut.result(.03)
    except concurrent.futures.TimeoutError:
        pass
Defeating the purpose or not, it is how futures currently work in Python.
First of all, directly instantiating a Future() should only be done for testing purposes; normally, you would obtain an instance by submitting work to an executor.
Furthermore, you cannot really cancel() a future cleanly that is executing in a thread; attempting to do so will make cancel() return False.
Indeed, in the following test I get could cancel: False printed out:
import concurrent.futures
import signal
import time
import sys

def task(delay):
    time.sleep(delay)
    return delay

def handler(signum, frame):
    print("exiting")
    print("could cancel:", fut.cancel())
    raise RuntimeError("if in doubt, use brute force")

signal.signal(signal.SIGINT, handler)

with concurrent.futures.ThreadPoolExecutor() as executor:
    fut = executor.submit(task, 240)
    try:
        print(fut.result())
    except Exception as ex:
        print(f"fut.result() ==> {type(ex).__name__}: {ex}")
If I also raise an exception in my signal handler, that exception is caught when trying to fetch the result, and I'm also seeing fut.result() ==> RuntimeError: if in doubt, use brute force printed out. However, that does not exit the executor loop immediately either, because the task is still running there.
Interestingly, pressing Ctrl-C a couple more times would eventually break even the cleanup loop, and the program would exit, but it's probably not what you're after. You might also be able to kill off futures more freely by employing a ProcessPoolExecutor, but .cancel() would still return False for running futures.
In that light, I think your approach to poll result() is not an unreasonable one. If possible, you could also move your program to asyncio where you would be able to cancel tasks at the yield points or I/O, or somehow make your task itself react to user input by exiting earlier, potentially based on information from a signal.
For instance, here I'm setting a global variable from my interrupt handler, which is then polled from my task:
import concurrent.futures
import signal
import time
import sys

interrupted = False

def task(delay):
    slept = 0
    for _ in range(int(delay)):
        time.sleep(1)
        slept += 1
        if interrupted:
            print("interrupted, wrapping up work prematurely")
            break
    return slept

def handler(signum, frame):
    global interrupted
    print("exiting")
    print("could cancel:", fut.cancel())
    interrupted = True

signal.signal(signal.SIGINT, handler)

with concurrent.futures.ThreadPoolExecutor() as executor:
    fut = executor.submit(task, 40)
    try:
        print(fut.result())
    except Exception as ex:
        print(f"fut.result() ==> {type(ex).__name__}: {ex}")
Now I'm able to interrupt my work in a more fine grained fashion:
^Cexiting
could cancel: False
interrupted, wrapping up work prematurely
5
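The asyncio route mentioned earlier could look roughly like this; a minimal sketch of task cancellation (not the poster's code), assuming Python 3.7+ on a POSIX platform where loop.add_signal_handler is available:
import asyncio
import signal

async def task(delay):
    try:
        await asyncio.sleep(delay)   # cancellation is delivered at await points
        return delay
    except asyncio.CancelledError:
        print("task cancelled at an await point")
        raise

async def main():
    t = asyncio.create_task(task(240))
    # cancel the task when Ctrl-C arrives instead of killing the program
    asyncio.get_running_loop().add_signal_handler(signal.SIGINT, t.cancel)
    try:
        print(await t)
    except asyncio.CancelledError:
        print("main: task was cancelled")

asyncio.run(main())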
In addition, you might also be able to split your work into many smaller tasks; then you could cancel any futures that aren't running yet (see the sketch below), also improving responsiveness to SIGINT or other types of user input.
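A hedged sketch of that last idea, assuming a POSIX system (where, unlike on Windows, waiting in result() does not block signal delivery): submit many small tasks and cancel the ones that have not started yet when SIGINT arrives.
import concurrent.futures
import signal
import time

def task(n):
    time.sleep(1)
    return n

with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(task, i) for i in range(20)]

    def handler(signum, frame):
        # running futures refuse to cancel; pending ones are dropped
        cancelled = sum(f.cancel() for f in futures)
        print("cancelled {} pending futures".format(cancelled))

    signal.signal(signal.SIGINT, handler)

    for f in futures:
        try:
            print(f.result())
        except concurrent.futures.CancelledError:
            pass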
OK, I wrote a solution to this based on digging in the CPython source and some bug reports - but it's not pretty.
If you want to be able to interrupt a future, especially on Windows, the following seems to work:
import contextlib
import ctypes
import signal
import sys

@contextlib.contextmanager
def interrupt_futures(futures):  # pragma: no cover
    """Allows a list of futures to be interrupted.

    If an interrupt happens, they will all have their exceptions set to KeyboardInterrupt.
    """
    # this has to be manually tested for now, because the tests interfere with the test runner
    def do_interr(*_):
        for ent in futures:
            try:
                ent.set_exception(KeyboardInterrupt)
            except:
                # if the future is already resolved or cancelled, ignore it
                pass
        return 1

    if sys.platform == "win32":
        from ctypes import wintypes  # pylint: disable=import-outside-toplevel

        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

        CTRL_C_EVENT = 0
        CTRL_BREAK_EVENT = 1

        HANDLER_ROUTINE = ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.DWORD)

        @HANDLER_ROUTINE
        def handler(ctrl):
            if ctrl == CTRL_C_EVENT:
                handled = do_interr()
            elif ctrl == CTRL_BREAK_EVENT:
                handled = do_interr()
            else:
                handled = False
            # If not handled, call the next handler.
            return handled

        if not kernel32.SetConsoleCtrlHandler(handler, True):
            raise ctypes.WinError(ctypes.get_last_error())

        was = signal.signal(signal.SIGINT, do_interr)
        yield
        signal.signal(signal.SIGINT, was)

        # restore default handler
        kernel32.SetConsoleCtrlHandler(handler, False)
    else:
        was = signal.signal(signal.SIGINT, do_interr)
        yield
        signal.signal(signal.SIGINT, was)
This allows you to do this:
with interrupt_futures([fut]):
fut.result()
For the duration of that call, interrupt signals will be intercepted and will result in the future raising a KeyboardInterrupt to the caller requesting the result - instead of simply ignoring all interrupts.

How to use Queue with threading properly

I am new to queues & threads; kindly help with the below code. Here I am trying to execute the function hd. I need to run the function multiple times, but only after a single run has been completed.
import queue
import threading
import time

fifo_queue = queue.Queue()

def hd():
    print("hi")
    time.sleep(1)
    print("done")

for i in range(3):
    cc = threading.Thread(target=hd)
    fifo_queue.put(cc)
    cc.start()
Current Output
hi
hi
hi
donedonedone
Expected Output
hi
done
hi
done
hi
done
You can use a Semaphore for your purposes
A semaphore manages an internal counter which is decremented by each acquire() call and incremented by each release() call. The counter can never go below zero; when acquire() finds that it is zero, it blocks, waiting until some other thread calls release().
The default value of Semaphore is 1,
class threading.Semaphore(value=1)
so only one thread would be active at once:
import queue
import threading
import time

fifo_queue = queue.Queue()
semaphore = threading.Semaphore()

def hd():
    with semaphore:
        print("hi")
        time.sleep(1)
        print("done")

for i in range(3):
    cc = threading.Thread(target=hd)
    fifo_queue.put(cc)
    cc.start()
hi
done
hi
done
hi
done
As @user2357112supportsMonica mentioned in the comments, an RLock would be a safer option:
class threading.RLock
This class implements reentrant lock objects. A reentrant lock must be released by the thread that acquired it. Once a thread has acquired a reentrant lock, the same thread may acquire it again without blocking; the thread must release it once for each time it has acquired it.
import queue
import threading
import time

fifo_queue = queue.Queue()
lock = threading.RLock()

def hd():
    with lock:
        print("hi")
        time.sleep(1)
        print("done")

for i in range(3):
    cc = threading.Thread(target=hd)
    fifo_queue.put(cc)
    cc.start()
Please put the print("done") before the sleep; it will work fine.
Reason: your program currently does this in each thread:
print
sleep
print
but while one thread is sleeping, the other threads keep working and print their first line.
With the print moved, each thread writes the first line, writes the second line, and only then goes to sleep, letting the other threads take their turn.
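Since the question asks how to use a Queue with threading properly, here is a minimal sketch (not from any of the answers above) in which the queue actually does the serialising: a single worker thread drains it, so the jobs run strictly one at a time.
import queue
import threading
import time

fifo_queue = queue.Queue()

def hd():
    print("hi")
    time.sleep(1)
    print("done")

def worker():
    while True:
        job = fifo_queue.get()
        if job is None:          # sentinel value: no more work
            break
        job()
        fifo_queue.task_done()

t = threading.Thread(target=worker)
t.start()

for i in range(3):
    fifo_queue.put(hd)           # enqueue the callable instead of a Thread

fifo_queue.put(None)             # tell the worker to exit
t.join()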

Waking up a specific thread from sleep from another thread in python

I've been chasing my tail on this one for days now, and I'm going crazy. I'm a total amateur and totally new to python, so please excuse my stupidity.
My main thread repeatedly checks a database for an entry and then starts threads for every new entry that it finds in the database.
The threads that it starts up basically poll the database for a value and if it doesn't find that value, it does some things and then it sleeps for 60 seconds and starts over.
Simplified pseudo-code for the started-up thread:
while True:
    stop = _Get_a_Value_From_Database_for_Exit()  # ..... a call to the DBMS
    if stop == 0:
        Do_stuff()
        time.sleep(60)
    else:
        break
There could be many of these threads running at any given time. What I'd like to do is have the main thread check another spot in the database for a specific value, and then interrupt the sleep in the example above in a specific thread that was started. The goal is to exit a specific thread like the one listed above without having to wait out the remainder of the sleep duration. All of these threads can be referenced by the database id that they share. I've seen references to event.wait() and event.set(), and I've been trying to figure out how to replace the time.sleep() with them, but I have no idea how I could use them to wake up a specific thread instead of all of them.
This is where my ignorance shows through: is there a way I could do something based on the database id for the event.wait() (like 12345.wait(60) in the started-up thread and 12345.set() in the main thread, all dynamic based on the ever-changing database id)?
Thanks for your help!!
The project is a little complex, and here's my version of it.
- scan the database file /tmp/db.dat, which is prefilled with two words
- manager: create a thread for each word; the defaults are a "whiskey" thread and a "syrup" thread
- if a word ends in _stop, like syrup_stop, tell that thread to die by setting its stop event
- each thread scans the database file and exits if it sees the word stop; it will also exit if its stop event is set
- note that if the manager thread sets a worker's stop_event, the worker will exit immediately; each thread does a little bit of work, but spends most of its time in the stop_ev.wait() call, so when the event does get set it doesn't have to sit around and can exit immediately
The server is fun to play around with! Start it, then send commands to it by adding lines to the database. Try each one of the following:
$ echo pie >> /tmp/db.dat # start new thread
$ echo pie_stop >> /tmp/db.dat # stop thread by event
$ echo whiskey_stop >> /tmp/db.dat # stop another thread "
$ echo stop >> /tmp/db.dat # stop all threads
source
import logging, sys, threading, time

STOP_VALUE = 'stop'

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)-4s %(threadName)s %(levelname)s %(message)s",
    datefmt="%H:%M:%S",
    stream=sys.stderr,
)

class Database(list):
    PATH = '/tmp/db.dat'

    def __init__(self):
        super(Database, self).__init__()
        self._update_lock = threading.Lock()

    def update(self):
        with self._update_lock:
            self[:] = [line.strip() for line in open(self.PATH)]

db = Database()

def spawn(events, key):
    events[key] = threading.Event()
    th = threading.Thread(
        target=search_worker,
        kwargs=dict(stop_ev=events[key]),
        name='thread-{}'.format(key),
    )
    th.daemon = True
    th.start()

def search_worker(stop_ev):
    """
    scan database until "stop" found, or our event is set
    """
    logging.info('start')
    while True:
        logging.debug('scan')
        db.update()
        if STOP_VALUE in db:
            logging.info('stopvalue: done')
            return
        if stop_ev.wait(timeout=10):
            logging.info('event: done')
            return

def manager():
    """
    scan database
    - word: spawn thread if none already
    - word_stop: tell thread to die by setting its stop event
    """
    logging.info('start')
    events = dict()
    while True:
        db.update()
        for key in db:
            if key == STOP_VALUE:
                continue
            if key in events:
                continue
            if key.endswith('_stop'):
                key = key.split('_')[0]
                if key not in events:
                    logging.error('stop: missing key=%s!', key)
                else:
                    # signal thread to stop
                    logging.info('stop: key=%s', key)
                    events[key].set()
                    del events[key]
            else:
                spawn(events, key)
                logging.info('spawn: key=%s', key)
        time.sleep(2)

if __name__ == '__main__':
    with open(Database.PATH, 'w') as dbf:
        dbf.write(
            'whiskey\nsyrup\n'
        )
    db.update()
    logging.info('start: db=%s -- %s', db.PATH, db)

    manager_t = threading.Thread(
        target=manager,
        name='manager',
    )
    manager_t.start()
    manager_t.join()
Rather than repetitively re-chasing a dbEngine in an infinite loop with repeated dbSeek-s for a value, re-testing it for an inequality and then trying to "kill a sleep", change your design architecture and go for distributed process-to-process message passing.
Both ZeroMQ and nanomsg are smart, broker-less messaging layers that are very good in this sense.
A desire to cross-breed fire and water IMHO does not bring anything good for a real-world system.
A smart, scalable, distributed process-to-process design does.
( Fig. on a simple distributed process-to-process messaging/coordination, courtesy imatix/ZeroMQ )
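As a hedged illustration of that design (not code from the answer; the endpoint, topic names and do_stuff() are placeholders), a worker could block on a ZeroMQ SUB socket with a timeout instead of time.sleep(60), so the main process can wake a specific worker immediately by publishing a message addressed to its database id:
import zmq

def worker(worker_id, endpoint="tcp://127.0.0.1:5556"):
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect(endpoint)
    sub.setsockopt_string(zmq.SUBSCRIBE, worker_id)   # only messages for this id
    poller = zmq.Poller()
    poller.register(sub, zmq.POLLIN)
    while True:
        do_stuff()                                    # placeholder for the periodic work
        # wait up to 60 s, but wake immediately if a message for us arrives
        if dict(poller.poll(timeout=60000)).get(sub) == zmq.POLLIN:
            if sub.recv_string().endswith("stop"):
                break

# the main thread side: publish "<id> stop" to wake and stop that one worker
# pub = zmq.Context.instance().socket(zmq.PUB)
# pub.bind("tcp://127.0.0.1:5556")
# pub.send_string("12345 stop")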

Allow Greenlet to finish work at end of main module execution

I'm making a library that uses gevent to do some work asynchronously. I'd like to guarantee that the work is completed, even if the main module finishes execution.
class separate_library(object):
    def __init__(self):
        import gevent.monkey; gevent.monkey.patch_all()

    def do_work(self):
        from gevent import spawn
        spawn(self._do)

    def _do(self):
        from gevent import sleep
        sleep(1)
        print 'Done!'

if __name__ == '__main__':
    lib = separate_library()
    lib.do_work()
If you run this, you'll notice the program ends immediately, and Done! doesn't get printed.
Now, the main module doesn't know, or care, how separate_library actually accomplishes the work (or even that gevent is being used), so it's unreasonable to require joining there.
Is there any way separate_library can detect certain types of program exits, and stall until the work is done? Keyboard interrupts, SIGINTs, and sys.exit() should end the program immediately, as that is probably the expected behaviour.
Thanks!
Try spawning your gevent greenlets from a new thread that is not a daemon thread. Your program will not exit while this non-daemon thread is still running.
import gevent
import threading

class separate_library(object):
    def __init__(self):
        import gevent.monkey; gevent.monkey.patch_all()

    def do_work(self):
        t = threading.Thread(target=self.spawn_gthreads)
        t.setDaemon(False)
        t.start()

    def spawn_gthreads(self):
        from gevent import spawn
        gthreads = [spawn(self._do, x) for x in range(10)]
        gevent.joinall(gthreads)

    def _do(self, sec):
        from gevent import sleep
        sleep(sec)
        print 'Done!'

if __name__ == '__main__':
    lib = separate_library()
    lib.do_work()
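Another possibility (a sketch of an alternative, assuming gevent's hub is still usable when atexit handlers run, which is normally the case) is for the library to keep track of the greenlets it spawns and join them from an atexit hook:
import atexit

import gevent

class separate_library(object):
    def __init__(self):
        import gevent.monkey; gevent.monkey.patch_all()
        self._greenlets = []
        atexit.register(self._drain)

    def do_work(self):
        self._greenlets.append(gevent.spawn(self._do))

    def _do(self):
        gevent.sleep(1)
        print('Done!')

    def _drain(self):
        # hold up interpreter shutdown until the outstanding work has finished
        gevent.joinall(self._greenlets)

if __name__ == '__main__':
    lib = separate_library()
    lib.do_work()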

Adding objects to queue without interruption

I would like to put two objects into two queues, but I have to be sure the objects are in both queues at the same time; therefore the operation should not be interrupted in between - something like an atomic block. Does someone have a solution? Many thanks...
queue_01.put(car)
queue_02.put(bike)
You could use a Condition object. You can tell the threads to wait with cond.wait(), and signal when the queues are ready with cond.notify_all(). See, for example, Doug Hellmann's wonderful Python Module of the Week blog. His code uses multiprocessing; here I've adapted it for threading:
import threading
import Queue
import time

def stage_1(cond, q1, q2):
    """perform first stage of work, then notify stage_2 to continue"""
    with cond:
        q1.put('car')
        q2.put('bike')
        print 'stage_1 done and ready for stage 2'
        cond.notify_all()

def stage_2(cond, q):
    """wait for the condition telling us stage_1 is done"""
    name = threading.current_thread().name
    print 'Starting', name
    with cond:
        cond.wait()
        print '%s running' % name

def run():
    # http://www.doughellmann.com/PyMOTW/multiprocessing/communication.html#synchronizing-threads-with-a-condition-object
    condition = threading.Condition()
    queue_01 = Queue.Queue()
    queue_02 = Queue.Queue()

    s1 = threading.Thread(name='s1', target=stage_1, args=(condition, queue_01, queue_02))
    s2_clients = [
        threading.Thread(name='stage_2[1]', target=stage_2, args=(condition, queue_01)),
        threading.Thread(name='stage_2[2]', target=stage_2, args=(condition, queue_02)),
    ]

    # Notice stage_2 threads are started before the stage_1 thread, and yet they wait
    # until stage_1 finishes
    for c in s2_clients:
        c.start()
    time.sleep(1)
    s1.start()

    s1.join()
    for c in s2_clients:
        c.join()

run()
Running the script yields
Starting stage_2[1]
Starting stage_2[2]
stage_1 done and ready for stage 2 <-- Notice that stage2 is prevented from running until the queues have been packed.
stage_2[2] running
stage_2[1] running
To atomically add to two different queues, acquire the locks for both queues first. That's easiest to do by making a subclass of Queue that uses recursive locks.
import Queue  # Note: module renamed to "queue" in Python 3
import threading

class MyQueue(Queue.Queue):
    "Make a queue that uses a recursive lock instead of a regular lock"
    def __init__(self):
        Queue.Queue.__init__(self)
        self.mutex = threading.RLock()

queue_01 = MyQueue()
queue_02 = MyQueue()

with queue_01.mutex:
    with queue_02.mutex:
        queue_01.put(1)
        queue_02.put(2)
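As a small hedged variant of the same idea, a helper can take both mutexes in a consistent order, so that two threads doing paired puts concurrently cannot deadlock each other:
from contextlib import contextmanager

@contextmanager
def both_locked(qa, qb):
    # always acquire in the same order, regardless of argument order
    first, second = sorted((qa, qb), key=id)
    with first.mutex:
        with second.mutex:
            yield

with both_locked(queue_01, queue_02):
    queue_01.put('car')
    queue_02.put('bike')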
