Can't catch SIGINT in multithreaded program - python

I've seen many topics about this particular problem, but I still can't figure out why I'm not catching a SIGINT in my main thread.
Here is my code:
def connect(self, retry=100):
    tries = retry
    logging.info('connecting to %s' % self.path)
    while True:
        try:
            self.sp = serial.Serial(self.path, 115200)
            self.pileMessage = pilemessage.Pilemessage()
            self.pileData = pilemessage.Pilemessage()
            self.reception = reception.Reception(self.sp, self.pileMessage, self.pileData)
            self.reception.start()
            self.collisionlistener = collisionListener.CollisionListener(self)
            self.message = messageThread.Message(self.pileMessage, self.collisionlistener)
            self.datastreaminglistener = dataStreamingListener.DataStreamingListener(self)
            self.datastreaming = dataStreaming.Data(self.pileData, self.datastreaminglistener)
            return
        except serial.serialutil.SerialException:
            logging.info('retrying')
            if not retry:
                raise SpheroError('failed to connect after %d tries' % (tries-retry))
            retry -= 1

def disconnect(self):
    self.reception.stop()
    self.message.stop()
    self.datastreaming.stop()
    while not self.pileData.isEmpty():
        self.pileData.pop()
    self.datastreaminglistener.remove()
    while not self.pileMessage.isEmpty():
        self.pileMessage.pop()
    self.collisionlistener.remove()
    self.sp.close()

if __name__ == '__main__':
    import time
    try:
        logging.getLogger().setLevel(logging.DEBUG)
        s = Sphero("/dev/rfcomm0")
        s.connect()
        s.set_motion_timeout(65525)
        s.set_rgb(0,255,0)
        s.set_back_led_output(255)
        s.configure_locator(0,0)
    except KeyboardInterrupt:
        s.disconnect()
In the main function I call connect(), which launches threads over which I don't have direct control.
When I launch this script I would like to be able to stop it by hitting Ctrl+C, which should call the disconnect() function that stops all the other threads.
In the code I provided it doesn't work because there is no thread running in the main function. I already tried putting all the instructions from the main block in a thread with a while loop, without success.
Is there a simple way to solve my problem?
Thanks

Your indentation is messed up, but there's enough to go on.
Your main thread isn't catching SIGINT because it's not alive. There is nothing that stops your main thread from continuing past the try block, seeing no more code, and closing up shop.
I am not familiar with Sphero. I just attempted to google its docs and was linked to a bunch of 404 pages, so I'll tell you what you would normally do in a threaded environment: join your threads to the main thread so that the main thread can't finish execution before the worker threads do.
for t in my_thread_list:
    t.join()  # main thread can't get past here until all the threads finish
If your Sphero object doesn't provide join-like functionality, you could hack something in that blocks, i.e.
raw_input('Press Enter to disconnect')
s.disconnect()
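For the Ctrl+C case specifically, note that a bare join() can leave the main thread stuck inside the call, so a common pattern is to join in short slices and catch KeyboardInterrupt around the loop. A minimal sketch, assuming you keep references to the worker threads and that cleanup is whatever callable you want run on Ctrl+C (worker_threads and cleanup are my names, not part of the Sphero API):

import threading

def wait_for_threads(threads, cleanup):
    # Join in short slices so KeyboardInterrupt (Ctrl+C) can actually be
    # delivered to the main thread instead of it sitting in a blocking join().
    try:
        while any(t.is_alive() for t in threads):
            for t in threads:
                t.join(timeout=0.5)
    except KeyboardInterrupt:
        cleanup()   # e.g. s.disconnect() from the question

# Usage sketch:
# wait_for_threads(worker_threads, cleanup=s.disconnect)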

Related

Non-blocking multiprocessing.connection.Listener?

I use multiprocessing.connection.Listener for communication between processes, and it works like a charm for me. Now I would really love my main loop to do something else between commands from the client. Unfortunately listener.accept() blocks execution until a connection from the client process is established.
Is there a simple way of managing a non-blocking check for multiprocessing.connection? A timeout? Or shall I use a dedicated thread?
# Simplified code:
from multiprocessing.connection import Listener

def mainloop():
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    while True:
        conn = listener.accept()   # <--- This blocks!
        msg = conn.recv()
        print('got message: %r' % msg)
        conn.close()
One solution that I found (although it might not be the most "elegant") is using conn.poll (documentation). poll() returns True if the Listener has new data, and (most importantly) is non-blocking if no argument is passed to it. I'm not 100% sure that this is the best way to do this, but I've had success with only running listener.accept() once, and then using the following syntax to repeatedly get input (if there is any available):
from multiprocessing.connection import Listener

def mainloop():
    running = True
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    conn = listener.accept()
    msg = ""
    while running:
        while conn.poll():
            msg = conn.recv()
            print(f"got message: {msg}")
            if msg == "EXIT":
                running = False
        # Other code can go here
        print(f"I can run too! Last msg received was {msg}")
    conn.close()
The 'while' in the inner loop can be replaced with 'if' if you only want to get at most one message at a time. Use with caution, as it seems sort of 'hacky', and I haven't found references to using conn.poll for this purpose elsewhere.
You can run the blocking function in a thread:
conn = await loop.run_in_executor(None, listener.accept)
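For context, a minimal sketch of how that line could fit into an asyncio-based main loop; the surrounding coroutine and the address/authkey values are assumptions for the example, not part of the original answer:

import asyncio
from multiprocessing.connection import Listener

async def mainloop():
    loop = asyncio.get_running_loop()
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    while True:
        # accept() and recv() block, so run them in the default thread pool
        # executor; the coroutine yields control while it waits.
        conn = await loop.run_in_executor(None, listener.accept)
        msg = await loop.run_in_executor(None, conn.recv)
        print('got message: %r' % msg)
        conn.close()

# asyncio.run(mainloop())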
I've not used the Listener object myself; for this task I normally use multiprocessing.Queue, docs at the following link:
https://docs.python.org/2/library/queue.html#Queue.Queue
That object can be used to send and receive any pickle-able object between Python processes with a nice API; I think you'll be most interested in:

In process A:

    .put('some message')

In process B:

    .get_nowait()  # will raise Queue.Empty if nothing is available - handle that to move on with your execution
The only limitation with this is you'll need to have control of both Process objects at some point in order to be able to allocate the queue to them- something like this:
import time
from Queue import Empty
from multiprocessing import Queue, Process

def receiver(q):
    while 1:
        try:
            message = q.get_nowait()
            print 'receiver got', message
        except Empty:
            print 'nothing to receive, sleeping'
            time.sleep(1)

def sender(q):
    while 1:
        message = 'some message'
        q.put('some message')
        print 'sender sent', message
        time.sleep(1)

some_queue = Queue()

process_a = Process(
    target=receiver,
    args=(some_queue,)
)
process_b = Process(
    target=sender,
    args=(some_queue,)
)

process_a.start()
process_b.start()

print 'ctrl + c to exit'
try:
    while 1:
        time.sleep(1)
except KeyboardInterrupt:
    pass

process_a.terminate()
process_b.terminate()

process_a.join()
process_b.join()
Queues are nice because you can actually have as many consumers and as many producers for that exact same Queue object as you like (handy for distributing tasks).
I should point out that just calling .terminate() on a Process is bad form- you should use your shiny new messaging system to pass a shutdown message or something of that nature.
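A minimal sketch of that idea, reusing the queue setup from the example above but in Python 3 syntax; the 'STOP' sentinel is my own convention, not anything from the multiprocessing API:

from multiprocessing import Queue, Process

STOP = 'STOP'   # sentinel value both sides agree on

def receiver(q):
    while True:
        message = q.get()       # blocks until something arrives
        if message == STOP:     # shutdown request from the sender
            break
        print('receiver got', message)

def sender(q):
    for i in range(5):
        q.put('some message %d' % i)
    q.put(STOP)                 # ask the receiver to exit cleanly instead of terminating it

if __name__ == '__main__':
    some_queue = Queue()
    process_a = Process(target=receiver, args=(some_queue,))
    process_a.start()
    sender(some_queue)
    process_a.join()            # no terminate() needed; the receiver exits on STOP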
The multiprocessing module comes with a nice feature called Pipe(). It is a nice way to share resources between two processes (I have never tried more than two). With Python 3.8 came the shared memory functionality in the multiprocessing module, but I have not really tested that, so I cannot vouch for it.
You would use the Pipe function something like this:
from multiprocessing import Pipe, Process
.....

def sending(conn):
    message = 'some message'
    # perform some code
    conn.send(message)
    conn.close()

receiver, sender = Pipe()
p = Process(target=sending, args=(sender,))
p.start()
print receiver.recv()   # prints "some message"
p.join()
With this you should be able to have separate processes running independently until you get to the point where you need the result from one of them. If the other process has not delivered its data yet, you can sleep, halt, or use a while loop to keep checking until the other process finishes that task and sends it over:
while not parent_conn.recv():
    time.sleep(5)
This should keep it in a loop until the other process is done running and sends the result. This is also about 2-3 times faster than Queue. Queue is also a good option, but personally I do not use it.
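Note that Connection.recv() itself blocks, so if the goal is to check periodically without hanging, Connection.poll(timeout) is the more direct tool. A small sketch using the same parent_conn name as above (the name is just illustrative):

# poll() returns True as soon as data is available, or False once the
# timeout expires, so the loop can do other things between checks.
while not parent_conn.poll(5):
    print('still waiting for the other process...')
result = parent_conn.recv()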

Threads and twisted cannot be stopped by Ctrl+C

I installed the signal handlers in the main method, but when I press Ctrl+C while it is running, the process isn't stopped:

exceptions.SystemExit: 0
^CKilled by user
Unhandled Error

EventTrigger and MemoryInfo are classes that inherit from threading, and HttpStreamClient is a class that inherits from twisted.reactor.
How can I kill my process with Ctrl+C? Thanks.
Code
def signal_handler(*args):
    print("Killed by user")
    # teardown()
    sys.exit(0)

def install_signal():
    for sig in (SIGABRT, SIGILL, SIGINT, SIGSEGV, SIGTERM):
        signal(sig, signal_handler)

def main():
    try:
        global cgi, config
        install_signal()
        config = Config().read_file(sys.argv[1])[0]
        init_export_folder()
        setup_logging()
        threads = [
            EventTrigger(config),
            MemoryInfo(config),
        ]
        for thr in threads:
            thr.setDaemon(True)
            thr.start()
        HttpStreamClient(config).run()
        for thr in threads:
            thr.join()
    except BaseException as e:
        traceback.print_exc(file=sys.stdout)
        raise e
I think your problem might be the forceful way you are terminating the process.
While using twisted you should call reactor.stop() to get the initial run call to stop blocking.
So change your signal_handler to shut down the reactor:
def signal_handler(*args):
    print("Killed by user")
    reactor.stop()
Your threads could still keep the process alive. Thread.join doesn't forcefully stop a thread, and forcefully stopping a thread is in general never really a good idea. If EventTrigger or MemoryInfo are still running, thr.join will block. You will need a mechanism to ask the threads to stop. Maybe take a look here.
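One common mechanism, for reference, is to give each thread a threading.Event that it checks in its work loop, and to set that event from the signal handler before joining. A minimal sketch, not tied to the EventTrigger/MemoryInfo classes from the question:

import threading
import time

class StoppableWorker(threading.Thread):
    def __init__(self):
        super(StoppableWorker, self).__init__()
        self._stop_event = threading.Event()

    def run(self):
        # Do the work in small chunks and check the flag between them.
        while not self._stop_event.is_set():
            time.sleep(0.1)

    def stop(self):
        self._stop_event.set()

# In signal_handler you would call worker.stop() on each thread
# (plus reactor.stop() as shown above), and then join them.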
sys.exit() raises a Python exception; I'm pretty sure raising an exception in a signal handler does not do much. Either call reactor.stop() as Alex says or use os._exit(0). Be aware that using os._exit(0) will terminate the process without further ado.

How to create a global error handler in a multi-threaded Python application

I am developing a multi-threaded application in Python. I have the following scenario:
There are 2-3 producer threads which communicate with the DB, get data in large chunks, and fill up a queue.
There is an intermediate worker which breaks the large chunks fetched by the producer threads into smaller ones and fills up another queue.
There are 5 consumer threads which consume the queue created by the intermediate worker thread.
Objects of the data sources are accessed by the producer threads through their API. These data sources are completely separate, so the producers understand only the presence or absence of data which is supposed to be given out by the data source object.
I create threads of these three types and make the main thread wait for the completion of these threads by calling join() on them.
Now for such a setup I want a common error handler which senses the failure of any thread or any exception and decides what to do. For example, if I press Ctrl+C after I start my application, the main thread dies but the producer and consumer threads continue to run. I would like the entire application to shut down once Ctrl+C is pressed. Similarly, if some DB error occurs in the data source module, the producer thread should get notified of it.
This is what I have done so far:
I have created a class ThreadManager; its object is passed to all threads. I have written an error handler method and assigned it to sys.excepthook. This handler should catch exceptions and errors and then call methods of the ThreadManager class to control the running threads. Here is a snippet:
class Producer(threading.Thread):
    ....
    def produce():
        data = dataSource.getData()

class DataSource:
    ....
    def getData():
        raise Exception("critical")

def customHandler(exceptionType, value, stackTrace):
    print "In custom handler"

sys.excepthook = customHandler
Now when a thread of the Producer class calls getData() of the DataSource class, the exception is thrown. But this exception is never caught by my customHandler method.
What am I missing? Also, in such a scenario, what other strategy can I apply? Please help. Thank you for having enough patience to read all this :)
What you need is a decorator. In essence you are modifying your original function and putting it inside a try-except:
import os

def exception_decorator(func):
    def _function(*args):
        try:
            result = func(*args)
        except:
            print('*** ESC default handler ***')
            os._exit(1)
        return result
    return _function
If your thread function is called myfunc, then you add the decorator above your function definition:

@exception_decorator
def myfunc():
    pass
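For illustration, a small sketch of the decorator applied to a thread target, so that an unhandled exception in the worker takes the whole process down; the worker body here is invented for the example:

import threading
import time

@exception_decorator
def myfunc():
    # Simulated worker that eventually hits an error.
    time.sleep(1)
    raise Exception("critical")

t = threading.Thread(target=myfunc)
t.start()
t.join()   # the decorator's except branch calls os._exit(1) when the exception fires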
Can't you just catch "KeyboardInterrupt" when pressing Ctrl+C and do:
for thread in threading.enumerate():
    thread._Thread__stop()
    thread._Thread__delete()
while len(threading.enumerate()) > 1:
    time.sleep(1)
os._exit(0)
and have a flag in each threaded class, self.alive?
You could then set thread.alive = False and have it stop gracefully:
for thread in threading.enumerate():
    thread.alive = False
    time.sleep(5)  # Grace period
    thread._Thread__stop()
    thread._Thread__delete()
while len(threading.enumerate()) > 1:
    time.sleep(1)
os._exit(0)
example:
import os
from threading import *
from time import sleep

class worker(Thread):
    def __init__(self):
        self.alive = True
        Thread.__init__(self)
        self.start()

    def run(self):
        while self.alive:
            sleep(0.1)

runner = worker()

try:
    raw_input('Press ctrl+c!')
except:
    pass

for thread in enumerate():
    thread.alive = False
    sleep(1)
    try:
        thread._Thread__stop()
        thread._Thread__delete()
    except:
        pass

# There will always be 1 thread alive and that's the __main__ thread.
while len(enumerate()) > 1:
    sleep(1)
os._exit(0)
Try going about it by changing the internal system exception handler?
import sys

origExcepthook = sys.excepthook

def uberexcept(exctype, value, traceback):
    if exctype == KeyboardInterrupt:
        print "Gracefully shutting down all the threads"
        # enumerate() thingie here.
    else:
        origExcepthook(exctype, value, traceback)

sys.excepthook = uberexcept

Dbus/GLib Main Loop, Background Thread

I'm starting out with DBus and event-driven programming in general. The service that I'm trying to create consists of three parts, but two of them are really "server" things.
1) The actual DBus server talks to a remote website over HTTPS, manages sessions, and conveys info to the clients.
2) The other part of the service calls a keep-alive page every 2 minutes to keep the session active on the external website.
3) The clients make calls to the service to retrieve info from the service.
I found some simple example programs. I'm trying to adapt them to prototype #1 and #2. Rather than building separate programs for both, I thought that I could run them in a single, two-threaded process.
The problem that I'm seeing is that I call time.sleep(X) in my keep-alive thread. The thread goes to sleep, but won't ever wake up. I think that the GIL isn't released by the GLib main loop.
Here's my thread code:
class Keepalive(threading.Thread):
    def __init__(self, interval=60):
        super(Keepalive, self).__init__()
        self.interval = interval
        bus = dbus.SessionBus()
        self.remote = bus.get_object("com.example.SampleService", "/SomeObject")

    def run(self):
        while True:
            print('sleep %i' % self.interval)
            time.sleep(self.interval)
            print('sleep done')
            reply_status = self.remote.keepalive()
            if reply_status:
                print('Keepalive: Success')
            else:
                print('Keepalive: Failure')
From the print statements, I know that the sleep starts, but I never see "sleep done."
Here is the main code:
if __name__ == '__main__':
    try:
        dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
        session_bus = dbus.SessionBus()
        name = dbus.service.BusName("com.example.SampleService", session_bus)
        object = SomeObject(session_bus, '/SomeObject')
        mainloop = gobject.MainLoop()
        ka = Keepalive(15)
        ka.start()
        print('Begin main loop')
        mainloop.run()
    except Exception as e:
        print(e)
    finally:
        ka.join()
Some other observations:
I see the "begin main loop" message, so I know it's getting control. Then, I see "sleep %i," and after that, nothing.
If I ^C, then I see "sleep done." After ~20 seconds, I get an exception from self.run() that the remote application didn't respond:
DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
What's the best way to run my keep alive code within the server?
Thanks,
You have to explicitly enable multithreading when using gobject by calling gobject.threads_init(). See the PyGTK FAQ for background info.
Next to that, for the purpose you're describing, timeouts seem to be a better fit. Use as follows:
# Enable timer
self.timer = gobject.timeout_add(time_in_ms, self.remote.keepalive)

# Disable timer
gobject.source_remove(self.timer)
This calls the keepalive function every time_in_ms (milli)seconds. Further details, again, can be found at the PyGTK reference.
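To tie the two suggestions together, a rough sketch of how the main block from the question might look with gobject.threads_init() and a timeout instead of the Keepalive thread; the 2-minute interval and the proxy object lookup are taken from the question, the rest is assumption:

import dbus
import dbus.mainloop.glib
import gobject

if __name__ == '__main__':
    gobject.threads_init()                  # enable multithreading in gobject
    dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)

    session_bus = dbus.SessionBus()
    remote = session_bus.get_object("com.example.SampleService", "/SomeObject")

    # Call keepalive every 2 minutes; a timeout callback keeps firing
    # as long as it returns a true value.
    timer = gobject.timeout_add(2 * 60 * 1000, remote.keepalive)

    mainloop = gobject.MainLoop()
    mainloop.run()                          # the keepalive now runs inside the main loop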

Need some assistance with Python threading/queue

import threading
import Queue
import urllib2
import time

class ThreadURL(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            host = self.queue.get()
            sock = urllib2.urlopen(host)
            data = sock.read()
            self.queue.task_done()

hosts = ['http://www.google.com', 'http://www.yahoo.com', 'http://www.facebook.com', 'http://stackoverflow.com']
start = time.time()

def main():
    queue = Queue.Queue()
    for i in range(len(hosts)):
        t = ThreadURL(queue)
        t.start()
    for host in hosts:
        queue.put(host)
    queue.join()

if __name__ == '__main__':
    main()
    print 'Elapsed time: {0}'.format(time.time() - start)
I've been trying to get my head around how to perform Threading and after a few tutorials, I've come up with the above.
What it's supposed to do is:
Initialise the queue
Create my Thread pool and then queue up the list of hosts
My ThreadURL class should then begin work once a host is in the queue and read the website data
The program should finish
What I want to know first off is, am I doing this correctly? Is this the best way to handle threads?
Secondly, my program fails to exit. It prints out the Elapsed time line and then hangs there. I have to kill my terminal for it to go away. I'm assuming this is due to my incorrect use of queue.join()?
Your code looks fine and is quite clean.
The reason your application still "hangs" is that the worker threads are still running, waiting for the main application to put something in the queue, even though your main thread is finished.
The simplest way to fix this is to mark the threads as daemons, by doing t.daemon = True before your call to start. This way, the threads will not block the program stopping.
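A minimal sketch of that change in the main() function from the question; only the daemon flag is new, everything else stays as you had it:

def main():
    queue = Queue.Queue()
    for i in range(len(hosts)):
        t = ThreadURL(queue)
        t.daemon = True   # daemon threads won't keep the process alive once main() returns
        t.start()
    for host in hosts:
        queue.put(host)
    queue.join()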
Looks fine. yann is right about the daemon suggestion; that will fix your hang. My only question is: why use the queue at all? You're not doing any cross-thread communication, so it seems like you could just send the host info as an arg to ThreadURL's __init__() and drop the queue.
Nothing wrong with it, just wondering.
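For what it's worth, a sketch of that queue-less variant, with one thread per host and the host passed to the constructor; joining the threads replaces queue.join():

class ThreadURL(threading.Thread):
    def __init__(self, host):
        threading.Thread.__init__(self)
        self.host = host

    def run(self):
        sock = urllib2.urlopen(self.host)
        data = sock.read()

def main():
    threads = [ThreadURL(host) for host in hosts]
    for t in threads:
        t.start()
    for t in threads:
        t.join()   # wait for every download to finish; no queue needed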
One thing: in the thread's run function, inside the while True loop, if some exception happens then task_done() may not be called even though get() has already been called, and queue.join() may then never return.
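A minimal way to guard against that, assuming the ThreadURL class from the question: wrap the work in try/finally so task_done() is always paired with get():

def run(self):
    while True:
        host = self.queue.get()
        try:
            sock = urllib2.urlopen(host)
            data = sock.read()
        except Exception:
            pass   # or log the failure; the important part is reaching finally
        finally:
            self.queue.task_done()   # always called, so queue.join() can return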
