Event handling in Autobahn Python

I want my WebSocket client to close the connection as soon as my_flag is set to True.
Here's my socket class:
class BridgeSocket(WebSocketClientProtocol):
    def __init__(self, factory, my_flag):
        self.my_flag = my_flag
Now, my_flag is set to True after some time, somewhere else in the program (inside a different thread). Instead of waiting in a
while True:
    sleep(1)
kind of loop, is there an event I can define and attach to my WebSocket class, i.e. a function which gets fired when my_flag is set to True?

Use threading.Event.
# initialization code
self.my_event = threading.Event()

# sct.my_event.set() is called from somewhere else in the program

# waiting code, replacing the while True / sleep loop
sct.my_event.wait()
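Put together, a minimal runnable sketch of this suggestion (the producer thread and the two-second delay are illustrative, not from the question):
import threading
import time

class Bridge:
    def __init__(self):
        self.my_event = threading.Event()

sct = Bridge()

def producer():
    time.sleep(2)        # something happens elsewhere in the program...
    sct.my_event.set()   # ...and the flag is flipped

threading.Thread(target=producer, daemon=True).start()

sct.my_event.wait()      # blocks here instead of polling in a while/sleep loop
print("my_event was set; close the connection now")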

You should spawn your background task using Twisted's deferToThread and then notify the WebSocket code running on the main (reactor) thread using callFromThread.
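A minimal sketch of that pattern (my_background_task and the sleep are hypothetical stand-ins; sendClose() is the Autobahn protocol method that starts the closing handshake):
import time
from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def my_background_task(protocol):
    # hypothetical long-running work, executed off the reactor thread
    time.sleep(2)
    # hop back to the reactor (main) thread to touch the WebSocket safely
    reactor.callFromThread(protocol.sendClose)

# e.g. from the protocol's onOpen():
#     deferToThread(my_background_task, self)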

Related

Equivalent of thread.interrupt_main() in Python 3

In Python 2 there is a function thread.interrupt_main(), which raises a KeyboardInterrupt exception in the main thread when called from a subthread.
This is also available through _thread.interrupt_main() in Python 3, but it's a low-level "support module", mostly for use within other standard modules.
What is the modern way of doing this in Python 3, presumably through the threading module, if there is one?
Well, raising an exception manually is kinda low-level, so if you think you have to do that, just use _thread.interrupt_main(), since that's the equivalent you asked for (the threading module itself doesn't provide it).
It could be that there is a more elegant way to achieve your ultimate goal, though. Maybe setting and checking a flag would already be enough, or using a threading.Event as @RFmyD already suggested, or message passing over a queue.Queue. It depends on your specific setup.
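For completeness, a minimal sketch of the _thread.interrupt_main() route (the delay and loop body are illustrative):
import _thread
import threading
import time

def delayed_interrupt():
    time.sleep(2)
    _thread.interrupt_main()   # raises KeyboardInterrupt in the main thread

threading.Thread(target=delayed_interrupt, daemon=True).start()
try:
    while True:
        time.sleep(0.1)        # the main thread must be running Python code
except KeyboardInterrupt:
    print("Interrupted from the sub thread")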
If you need a way for a thread to stop execution of the whole program, this is how I did it with a threading.Event:
import logging
import sys
import threading
from time import sleep

def start():
    """
    This runs in the main thread and starts a sub thread.
    """
    stop_event = threading.Event()
    check_stop_thread = threading.Thread(
        target=check_stop_signal, args=(stop_event,), daemon=True  # note the comma: args must be a tuple
    )
    check_stop_thread.start()
    # If check_stop_thread sets the stop event, sys.exit() is executed here in the main thread.
    # Since the sub thread is a daemon, it will be terminated as well.
    stop_event.wait()
    logging.debug("Threading stop event set, calling sys.exit()...")
    sys.exit()

def check_stop_signal(stop_event):
    """
    Checks continuously (every 0.1 s) whether a "stop" flag has been set in the database.
    Needs to run in its own thread.
    """
    while True:
        if io.check_stop():  # io is application-specific code querying the database
            logging.info("Program was aborted by user.")
            logging.debug("Setting threading stop event...")
            stop_event.set()
            break
        sleep(0.1)
You might want to look into the threading.Event class.

Gracefully stopping python daemon with child processes

I'm trying to implement a Python daemon in the traditional start/stop/restart style to control a consumer for a messaging queue. I've successfully used python-daemons to create a single consumer, but I need more than one listener for the volume of messages. This led me to use the multiprocessing library in my run function, along with an os.kill call in the stop function:
def run(self):
    for num in range(self.num_instances):
        p = multiprocessing.Process(target=self.start_listening)
        p.start()

def start_listening(self):
    with open('/tmp/pids/listener_{}.pid'.format(os.getpid()), 'w') as f:
        f.write("{}".format(os.getpid()))
    while True:
        pass  # implement message queue listener

def stop(self):
    for pid_file in os.listdir('/tmp/pids/'):
        pid = int(pid_file.replace('listener_', '').replace('.pid', ''))  # recover the pid from the file name
        os.kill(pid, signal.SIGTERM)
    shutil.rmtree('/tmp/pids/')
    super().stop()
This is almost OK, but I'd really like a graceful shutdown of the child processes, with some cleanup that includes logging. I read about signal handlers, so I switched signal.SIGTERM to signal.SIGINT and added a handler to the daemon class:
def __init__(self):
    ....
    signal.signal(signal.SIGINT, self.graceful_stop)

def stop(self):
    for pid_file in os.listdir('/tmp/pids/'):
        pid = int(pid_file.replace('listener_', '').replace('.pid', ''))
        os.kill(pid, signal.SIGINT)
    super().stop()

def graceful_stop(self, signum, frame):
    self.log.debug("Gracefully stopping the child {}".format(os.getpid()))
    os.remove('/tmp/pids/listener_{}.pid'.format(os.getpid()))
    ...
However, when tested, the child processes get killed but it doesn't seem like the graceful_stop function ever gets called (the files remain, nothing gets logged, etc.). Am I implementing the handler wrong for the child processes? Is there a better way of having multiple listeners with a single control point?
I figured it out. The signal.signal registration has to be made explicitly in each subprocess's start_listening function.
def start_listening(self):
    signal.signal(signal.SIGINT, self.graceful_stop)
    with open('/tmp/pids/listener_{}.pid'.format(os.getpid()), 'w') as f:
        f.write("{}".format(os.getpid()))
    while True:
        pass  # implement message queue listener
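A standalone sketch of the same fix, for anyone who wants to reproduce it (all names here are illustrative; Unix only):
import multiprocessing
import os
import signal
import time

def graceful_stop(signum, frame):
    print("child {} cleaning up".format(os.getpid()))
    raise SystemExit(0)

def start_listening():
    # register the handler inside the child process, after the fork
    signal.signal(signal.SIGINT, graceful_stop)
    while True:
        time.sleep(0.5)

if __name__ == '__main__':
    p = multiprocessing.Process(target=start_listening)
    p.start()
    time.sleep(1)
    os.kill(p.pid, signal.SIGINT)
    p.join()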

Why monitoring a keyboard interrupt in python thread doesn't work

I have a very simple python code:
import sys
import threading

def monitor_keyboard_interrupt():
    is_done = False
    while True:
        if is_done:
            break
        try:
            print(sys._getframe().f_code.co_name)
        except KeyboardInterrupt:
            is_done = True

def test():
    monitor_keyboard_thread = threading.Thread(target=monitor_keyboard_interrupt)
    monitor_keyboard_thread.start()
    monitor_keyboard_thread.join()

def main():
    test()

if '__main__' == __name__:
    main()
However, when I press Ctrl-C the thread isn't stopped. Can someone explain what I'm doing wrong? Any help is appreciated.
Simple reason:
Because only the <_MainThread(MainThread, started 139712048375552)> can set signal handlers and receive signals.
This includes KeyboardInterrupt, which is raised on SIGINT.
This comes straight from the signal docs:
Some care must be taken if both signals and threads are used in the
same program. The fundamental thing to remember in using signals and
threads simultaneously is: always perform signal() operations in the
main thread of execution. Any thread can perform an alarm(),
getsignal(), pause(), setitimer() or getitimer(); only the main thread
can set a new signal handler, and the main thread will be the only one
to receive signals (this is enforced by the Python signal module, even
if the underlying thread implementation supports sending signals to
individual threads). This means that signals can’t be used as a means
of inter-thread communication. Use locks instead.
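The practical consequence: catch KeyboardInterrupt in the main thread and relay it to workers, for example with a threading.Event. A minimal sketch (the worker body is illustrative):
import threading
import time

stop_event = threading.Event()

def worker():
    while not stop_event.is_set():
        time.sleep(0.1)   # stand-in for real work

t = threading.Thread(target=worker)
t.start()
try:
    while t.is_alive():
        t.join(timeout=0.5)   # a finite timeout keeps the main thread able to see Ctrl-C
except KeyboardInterrupt:
    stop_event.set()
    t.join()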

How to interrupt python multithreaded app?

I'm trying to run the following code (it is simplified a bit):
def RunTests(self):
    from threading import Thread
    import signal
    global keep_testing
    keep_testing = True
    signal.signal(signal.SIGINT, stop_running)
    for i in range(0, NumThreads):
        thread = Thread(target=foo)
        self._threads.append(thread)
        thread.start()
    # wait for all threads to finish
    for t in self._threads:
        t.join()

def stop_running(signl, frme):
    global keep_testing
    keep_testing = False
    print "Interrupted by the Master. Good bye!"
    return 0

def foo(self):
    global keep_testing
    while keep_testing:
        DO_SOME_WORK()  # placeholder for the real work
I expect that when the user presses Ctrl+C, the program will print the goodbye message and exit. However, it doesn't work. Where is the problem?
Thanks
Unlike regular processes, Python doesn't appear to handle signals in a truly asynchronous manner. The 'join()' call is somehow blocking the main thread in a manner that prevents it from responding to the signal. I'm a bit surprised by this since I don't see anything in the documentation indicating that this can/should happen. The solution, however, is simple. In your main thread, add the following loop prior to calling 'join()' on the threads:
while keep_testing:
    signal.pause()
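Putting the suggestion together, a minimal runnable sketch in Python 3 syntax (names follow the question; note that signal.pause() is not available on Windows):
import signal
import threading
import time

keep_testing = True

def stop_running(signum, frame):
    global keep_testing
    keep_testing = False
    print("Interrupted by the Master. Good bye!")

def foo():
    while keep_testing:
        time.sleep(0.1)   # stand-in for DO_SOME_WORK()

signal.signal(signal.SIGINT, stop_running)
threads = [threading.Thread(target=foo) for _ in range(4)]
for t in threads:
    t.start()
while keep_testing:
    signal.pause()        # sleeps until a signal arrives, so Ctrl+C is handled promptly
for t in threads:
    t.join()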
From the threading docs:
A thread can be flagged as a “daemon thread”. The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. The flag can be set through the daemon property.
You could try setting thread.daemon = True before calling start() and see if that solves your problem.
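A tiny sketch of that daemon-flag suggestion (the worker body is illustrative):
import threading
import time

def foo():
    while True:
        time.sleep(0.1)   # stand-in for DO_SOME_WORK()

thread = threading.Thread(target=foo)
thread.daemon = True      # the process may now exit without joining this thread
thread.start()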

Proper way of cancelling accept and closing a Python processing/multiprocessing Listener connection

(I'm using the pyprocessing module in this example, but replacing processing with multiprocessing should probably work if you run python 2.6 or use the multiprocessing backport)
I currently have a program that listens to a Unix socket (using a processing.connection.Listener), accepts connections, and spawns a thread to handle each request. At a certain point I want to quit the process gracefully, but the accept() call is blocking and I see no way of cancelling it nicely. I have one way that works here (OS X) at least: setting a signal handler and signalling the process from another thread, like so:
import processing
from processing.connection import Listener
import threading
import time
import os
import signal
import socket
import errno

# This is actually called by the connection handler.
def closeme():
    time.sleep(1)
    print 'Closing socket...'
    listener.close()
    os.kill(processing.currentProcess().getPid(), signal.SIGPIPE)

oldsig = signal.signal(signal.SIGPIPE, lambda s, f: None)

listener = Listener('/tmp/asdf', 'AF_UNIX')
# This is a thread that handles one already accepted connection, left out for brevity
threading.Thread(target=closeme).start()

print 'Accepting...'
try:
    listener.accept()
except socket.error, e:
    if e.args[0] != errno.EINTR:
        raise

# Cleanup here...
print 'Done...'
The only other way I've thought about is reaching deep into the connection (listener._listener._socket) and setting the non-blocking option...but that probably has some side effects and is generally really scary.
Does anyone have a more elegant (and perhaps even correct!) way of accomplishing this? It needs to be portable to OS X, Linux and BSD, but Windows portability etc is not necessary.
Clarification:
Thanks all! As usual, ambiguities in my original question are revealed :)
I need to perform cleanup after I have cancelled the listening, and I don't always want to actually exit that process.
I need to be able to access this process from other processes not spawned from the same parent, which makes Queues unwieldy
The reasons for threads are that:
They access a shared state. Actually more or less a common in-memory database, so I suppose it could be done differently.
I must be able to have several connections accepted at the same time, but the actual threads are blocking for something most of the time. Each accepted connection spawns a new thread; this in order to not block all clients on I/O ops.
Regarding threads vs. processes, I use threads for making my blocking ops non-blocking and processes to enable multiprocessing.
Isn't that what select is for?
Only call accept() on the socket if select indicates it will not block. select takes a timeout, so you can break out occasionally to check if it's time to shut down.
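A runnable sketch of the select-with-timeout idea using a plain Unix socket (the path and stop flag are illustrative):
import os
import select
import socket
import threading

stop = threading.Event()

if os.path.exists('/tmp/select_demo.sock'):
    os.unlink('/tmp/select_demo.sock')
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind('/tmp/select_demo.sock')
server.listen(1)

while not stop.is_set():
    readable, _, _ = select.select([server], [], [], 1.0)  # wake up at least once a second
    if readable:
        conn, _ = server.accept()   # select says this will not block
        conn.close()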
I thought I could avoid it, but it seems I have to do something like this:
from processing import connection
connection.Listener.fileno = lambda self: self._listener._socket.fileno()

import select

l = connection.Listener('/tmp/x', 'AF_UNIX')
r, w, e = select.select((l, ), (), ())
if l in r:
    print "Accepting..."
    c = l.accept()
    # ...
I am aware that this breaks the Law of Demeter and introduces some evil monkey-patching, but it seems this would be the most easy-to-port way of accomplishing this. If anyone has a more elegant solution I would be happy to hear it :)
I'm new to the multiprocessing module, but it seems to me that mixing the processing module and the threading module is counter-intuitive; aren't they targeted at solving the same problem?
Anyway, how about wrapping your listen function in a process itself? I'm not clear how this affects the rest of your code, but it may be a cleaner alternative.
from multiprocessing import Process
from multiprocessing.connection import Listener

class ListenForConn(Process):
    def run(self):
        listener = Listener('/tmp/asdf', 'AF_UNIX')
        listener.accept()
        # do your other handling here

listen_process = ListenForConn()
listen_process.start()
print listen_process.is_alive()

listen_process.terminate()
listen_process.join()
print listen_process.is_alive()
print 'No more listen process.'
Probably not ideal, but you can release the block by sending the socket some data from the signal handler or the thread that is terminating the process.
EDIT: Another way to implement this might be to use the Connection Queues, since they seem to support timeouts (apologies, I misread your code in my first read).
I ran into the same issue. I solved it by sending a "stop" command to the listener. In the listener's main thread (the one that processes the incoming messages), every time a new message is received, I just check to see if it's a "stop" command and exit out of the main thread.
Here's the code I'm using:
import threading
from multiprocessing.connection import Client, Listener

# The wrapper class and the constant values are editorial assumptions;
# the original answer showed only the three methods.
class ListenerService:
    COMMAND_RUN = 'run'
    COMMAND_STOP = 'stop'

    def __init__(self, address, authkey):
        self.address = address
        self.authkey = authkey
        self.command = None

    def start(self):
        """
        Start listening.
        """
        # set the command being executed
        self.command = self.COMMAND_RUN
        # start the 'listener_main' method as a daemon thread
        self.listener = Listener(address=self.address, authkey=self.authkey)
        self._thread = threading.Thread(target=self.listener_main, daemon=True)
        self._thread.start()

    def listener_main(self):
        """
        The main application loop.
        """
        while self.command == self.COMMAND_RUN:
            # block until a client connection is received
            with self.listener.accept() as conn:
                # receive the subscription request from the client
                message = conn.recv()
                # if it's a shutdown command, return to stop this thread
                if isinstance(message, str) and message == self.COMMAND_STOP:
                    return
                # process the message

    def stop(self):
        """
        Stops the listening thread.
        """
        self.command = self.COMMAND_STOP
        client = Client(self.address, authkey=self.authkey)
        client.send(self.COMMAND_STOP)
        client.close()
        self._thread.join()
I'm using an authentication key to prevent would-be hackers from shutting down my service by sending a stop command from an arbitrary client.
Mine isn't a perfect solution. It seems a better solution might be to revise the code in multiprocessing.connection.Listener and add a stop() method. But that would require sending it through the process for approval by the Python team.
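For context, hypothetical usage of the class above (the address and key are illustrative):
service = ListenerService(address=('localhost', 6000), authkey=b'secret')
service.start()
# ... serve clients ...
service.stop()   # connects to itself, sends the stop command, and joins the thread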
