I want to emit a delayed message to a socket client. For example, when a new client connects, a "checking is started" message should be emitted to the client, and after a certain number of seconds another message from a thread should be emitted.
@socket.on('doSomething', namespace='/test')
def onDoSomething(data):
    t = threading.Timer(4, checkSomeResources)
    t.start()
    emit('doingSomething', 'checking is started')

def checkSomeResources():
    # ...
    # some work which takes several seconds comes here
    # ...
    emit('doingSomething', 'checking is done')
But the code does not work because of a context issue. I get:
RuntimeError('working outside of request context')
Is it possible to make emitting from a thread?
The problem is that the thread does not have the context to know which user to address the message to.
You can pass request.namespace to the thread as an argument and then send the message with it. Example:
@socket.on('doSomething', namespace='/test')
def onDoSomething(data):
    t = threading.Timer(4, checkSomeResources, [request.namespace])
    t.start()
    emit('doingSomething', 'checking is started')

def checkSomeResources(namespace):
    # ...
    # some work which takes several seconds comes here
    # ...
    namespace.emit('doingSomething', 'checking is done')
I have attempted to follow the guidance given here: Handling long running tasks in pika / RabbitMQ and here: https://github.com/pika/pika/issues/753#issuecomment-318124510 on how to run long tasks in a separate thread to avoid interrupting the connection heartbeat. I'm a beginner at threading and still struggling to understand this solution.
For my final use case, I need to make function calls that are several minutes long, represented in the example code below by the long_function(). I've found that if the sleep call in long_function() exceeds the length of the heartbeat timeout, I lose connection (presumably because this function is blocking thread #2 from receiving/acknowledging the heartbeat messages from thread #1) and I get this message in the logs: ERROR: Unexpected connection close detected: StreamLostError: ("Stream connection lost: RxEndOfFile(-1, 'End of input stream (EOF)')",). A sleep call of the same length in the target function of thread #2 does not lead to a StreamLostError.
What's the proper solution for overcoming the StreamLostError here? Do I launch all subsequent function calls in their own threads to avoid blocking thread #2? Do I increase the heartbeat to be longer than long_function()? If this is the solution, what was the point of running my long task in a separate thread? Why not just make the heartbeat timeout in the main thread long enough to accommodate the whole message being processed? Thanks!
import functools
import logging
import pika
import threading
import time
import os
import ssl
# from common_utils.rabbitmq_utils import send_message_to_queue, initialize_rabbitmq_channel  # defined inline below
import json
import traceback

logging.basicConfig(format='%(asctime)s %(levelname)s: %(message)s',
                    level=logging.INFO,
                    datefmt='%Y-%m-%d %H:%M:%S')

def send_message_to_queue(channel, queue_name, body):
    channel.basic_publish(exchange='',
                          routing_key=queue_name,
                          body=json.dumps(body),
                          properties=pika.BasicProperties(delivery_mode=2)
                          )
    logging.info("RabbitMQ publish to queue {} confirmed".format(queue_name))

def initialize_rabbitmq_channel(timeout=5*60):
    credentials = pika.PlainCredentials(os.environ.get("RABBITMQ_USER"), os.environ.get("RABBITMQ_PASSWORD"))
    context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    params = pika.ConnectionParameters(port=5671, host=os.environ.get("RABBITMQ_HOST"), credentials=credentials,
                                       ssl_options=pika.SSLOptions(context), virtual_host="/", heartbeat=timeout)
    connection = pika.BlockingConnection(params)
    return connection.channel(), connection

def long_function():
    logging.info("Long function starting...")
    time.sleep(5)
    logging.info("Long function finished.")

def ack_message(channel, delivery_tag):
    """
    Note that `channel` must be the same pika channel instance via which
    the message being ACKed was retrieved (AMQP protocol constraint).
    """
    if channel.is_open:
        channel.basic_ack(delivery_tag)
        logging.info("Message {} acknowledged".format(delivery_tag))
    else:
        logging.error("Channel is closed and message acknowledgement will fail")

def do_work(connection, channel, delivery_tag, body):
    thread_id = threading.get_ident()
    fmt1 = 'Thread id: {} Delivery tag: {} Message body: {}'
    logging.info(fmt1.format(thread_id, delivery_tag, body))
    # Simulating work including a call to another function that exceeds heartbeat timeout
    time.sleep(5)
    long_function()
    send_message_to_queue(channel, "test_inactive", json.loads(body))
    cb = functools.partial(ack_message, channel, delivery_tag)
    connection.add_callback_threadsafe(cb)

def on_message(connection, channel, method, property, body):
    t = threading.Thread(target=do_work, args=(connection, channel, method.delivery_tag, body))
    t.start()
    t.join()

if __name__ == "__main__":
    channel, connection = initialize_rabbitmq_channel(timeout=3)
    channel.basic_qos(prefetch_count=1)
    channel.basic_consume(queue="test_queue",
                          auto_ack=False,
                          on_message_callback=lambda channel, method, property, body: on_message(connection, channel, method, property, body)
                          )
    channel.start_consuming()
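For reference, the pika guidance linked above keeps the I/O loop free by not joining worker threads inside the message callback, since a blocked callback stops the BlockingConnection from servicing heartbeats. The sketch below is one reading of that pattern, adapted from the code above and not a definitive fix: threads are collected in a list and only joined on shutdown.

# Sketch (assumption about intended usage): avoid t.join() inside on_message so
# the consumer loop keeps processing heartbeats while do_work runs.
threads = []

def on_message(connection, channel, method, properties, body):
    t = threading.Thread(target=do_work,
                         args=(connection, channel, method.delivery_tag, body))
    t.start()
    threads.append(t)  # no join() here; heartbeats keep flowing

if __name__ == "__main__":
    channel, connection = initialize_rabbitmq_channel(timeout=3)
    channel.basic_qos(prefetch_count=1)
    on_message_callback = functools.partial(on_message, connection)
    channel.basic_consume(queue="test_queue", auto_ack=False,
                          on_message_callback=on_message_callback)
    try:
        channel.start_consuming()
    except KeyboardInterrupt:
        channel.stop_consuming()
    for t in threads:  # wait for in-flight work only on shutdown
        t.join()
    connection.close()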
I have multithreaded server and client programs for a simple game. Whenever a client force-exits the game, I try to catch the exception with a try/except BrokenPipeError block and inform the other players.
I also want to end the exited client's thread; however, I accept connections like this:
while True:
    client = serverSocket.accept()
    t = ServerThread(client)
    t.start()
I tried to use a threading event with a stop() function; however, I believe I cannot use a .join() call to exit the thread because of the way I accept connections. How should I end the thread of a client that force-exited? I know that the multiprocessing library has a terminate function, but I am required to use the threading library. I tried os._exit(1), but I believe that command kills the entire process. What is the standard exit process for programs such as this?
First of all, join() does nothing except wait for the thread to stop.
A thread stops when it reaches the end of its threaded routine. For example:
class ServerThread(threading.Thread):
    def __init__(self, client, name):
        super().__init__()
        self.client = client
        self.name = name

    def inform(self, msg):
        print("{}: got message {}".format(self.name, msg))
        self.client[0].send(msg)

    def run(self):
        while True:
            try:
                self.client[0].recv(1024)
            except BrokenPipeError:  # client exits
                # do stuff
                break  # -> ends loop
        return  # -> thread exits, join() returns
If you want to inform other clients that someone has left, I would make another monitoring thread:
class Monitoring(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)  # daemon means the thread stops when the main thread does
        self.clients = []

    def add_client(self, client):
        self.clients.append(client)

    def inform_client_leaves(self, client_leaved):
        for client in self.clients:
            if client.is_alive():
                client.inform("Client '{}' leaves".format(client_leaved.name))

    def run(self):
        while True:
            for client in list(self.clients):
                if not client.is_alive():  # client exited
                    self.clients.remove(client)
                    self.inform_client_leaves(client)
            time.sleep(1)
So the initial code would look like:
mon = Monitoring()
mon.start()

while True:
    client = serverSocket.accept()
    t = ServerThread(client, "Client1")
    t.start()
    mon.add_client(t)
TL;DR: Calling future.set_result doesn't immediately resolve loop.run_until_complete. Instead it blocks for an additional 5 seconds.
Full context:
In my project, I'm using autobahn and asyncio to send and receive messages with a websocket server. For my use case, I need a 2nd thread for websocket communication, since I have arbitrary blocking code that will be running in the main thread. The main thread also needs to be able to schedule messages for the communication thread to send back and forth with the server. My current goal is to send a message originating from the main thread and block until the response comes back, using the communication thread for all message passing.
Here is a snippet of my code:
import asyncio
import threading
from autobahn.asyncio.websocket import WebSocketClientFactory, WebSocketClientProtocol

CLIENT = None

class MyWebSocketClientProtocol(WebSocketClientProtocol):
    # -------------- Boilerplate --------------
    is_connected = False
    msg_queue = []
    msg_listeners = []

    def onOpen(self):
        self.is_connected = True
        for msg in self.msg_queue[::]:
            self.publish(msg)

    def onClose(self, wasClean, code, reason):
        self.is_connected = False

    def onMessage(self, payload, isBinary):
        for listener in self.msg_listeners:
            listener(payload)

    def publish(self, msg):
        if not self.is_connected:
            self.msg_queue.append(msg)
        else:
            self.sendMessage(msg.encode('utf-8'))
    # /----------------------------------------

    def send_and_wait(self):
        future = asyncio.get_event_loop().create_future()

        def listener(msg):
            print('set result')
            future.set_result(123)

        self.msg_listeners.append(listener)
        self.publish('hello')
        return future

def worker(loop, ready):
    asyncio.set_event_loop(loop)
    factory = WebSocketClientFactory('ws://127.0.0.1:9000')
    factory.protocol = MyWebSocketClientProtocol
    transport, protocol = loop.run_until_complete(loop.create_connection(factory, '127.0.0.1', 9000))
    global CLIENT
    CLIENT = protocol
    ready.set()
    loop.run_forever()

if __name__ == '__main__':
    # Set up communication thread to talk to the server
    threaded_loop = asyncio.new_event_loop()
    thread_is_ready = threading.Event()
    thread = threading.Thread(target=worker, args=(threaded_loop, thread_is_ready))
    thread.start()
    thread_is_ready.wait()

    # Send a message and wait for response
    print('starting')
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(CLIENT.send_and_wait())
    print('done')  # this line gets called 5 seconds after it should
I'm using the autobahn echo server example to respond to my messages.
Problem: The WebSocketClientProtocol receives the response to its outgoing message and calls set_result on its pending future, but loop.run_until_complete blocks for an additional ~4.9 seconds before eventually resolving.
I understand that run_until_complete also processes other pending events on the event loop. Is it possible that the main thread has somehow queued up a bunch of events that now have to get processed once I start the loop? Also, if I move run_until_complete into the communication thread, or move the create_connection into the main thread, the event loop doesn't block me.
Lastly, I tried to recreate this problem without using autobahn, but I couldn't reproduce the extra delay. I'm curious whether this is an issue with the timing of autobahn's callbacks (onMessage, for example).
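One observation worth noting (this is a hedged sketch, not a confirmed diagnosis of the 5-second figure): the future created in send_and_wait() belongs to the main thread's loop, while the listener fires on the communication thread. asyncio futures are not thread-safe, and resolving one from another thread does not wake the loop that owns it; the usual pattern is to hand the call back to the owning loop with call_soon_threadsafe, roughly like this:

# Sketch (same class as above, illustrative only): resolve the future on the
# loop that owns it, so the waiting run_until_complete wakes up immediately.
def send_and_wait(self):
    loop = asyncio.get_event_loop()  # the main thread's loop in this setup
    future = loop.create_future()

    def listener(msg):
        # schedule set_result onto the owning loop in a thread-safe way
        loop.call_soon_threadsafe(future.set_result, 123)

    self.msg_listeners.append(listener)
    self.publish('hello')
    return future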
I use multiprocessing.connection.Listener for communication between processes, and it works like a charm for me. Now I would really love my main loop to do something else between commands from the client. Unfortunately, listener.accept() blocks execution until a connection from the client process is established.
Is there a simple way to do a non-blocking check on multiprocessing.connection? A timeout? Or should I use a dedicated thread?
# Simplified code:
from multiprocessing.connection import Listener

def mainloop():
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    while True:
        conn = listener.accept()  # <--- This blocks!
        msg = conn.recv()
        print('got message: %r' % msg)
        conn.close()
One solution that I found (although it might not be the most "elegant") is to use conn.poll (documentation). poll() returns True if the Listener has new data and, most importantly, is non-blocking if no argument is passed to it. I'm not 100% sure that this is the best way to do it, but I've had success with running listener.accept() only once and then using the following pattern to repeatedly get input (if any is available):
from multiprocessing.connection import Listener

def mainloop():
    running = True
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    conn = listener.accept()
    msg = ""
    while running:
        while conn.poll():
            msg = conn.recv()
            print(f"got message: {msg}")
            if msg == "EXIT":
                running = False
        # Other code can go here
        print(f"I can run too! Last msg received was {msg}")
    conn.close()
The inner while can be replaced with if, if you only want to read at most one message at a time. Use this with caution, as it seems somewhat "hacky", and I haven't found references to using conn.poll for this purpose elsewhere.
You can run the blocking function in a thread:
conn = await loop.run_in_executor(None, listener.accept)
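Note that run_in_executor only works inside a coroutine on a running event loop. A minimal sketch of how the original mainloop() might look under that approach (names and structure here are illustrative assumptions, not the asker's code):

import asyncio
from multiprocessing.connection import Listener

async def mainloop():
    loop = asyncio.get_running_loop()
    listener = Listener(address=('localhost', 6000), authkey=b'secret')
    # accept() still blocks, but only a worker thread, not the event loop
    conn = await loop.run_in_executor(None, listener.accept)
    while True:
        msg = await loop.run_in_executor(None, conn.recv)
        print('got message: %r' % msg)
        # other coroutines can keep running while recv() waits in the executor

asyncio.run(mainloop())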
I've not used the Listener object myself; for this task I normally use multiprocessing.Queue; documentation at the following link:
https://docs.python.org/2/library/queue.html#Queue.Queue
That object can be used to send and receive any pickle-able object between Python processes with a nice API; I think you'll be most interested in:
in process A:
.put('some message')
in process B:
.get_nowait()  # will raise queue.Empty if nothing is available; handle that to move on with your execution
The only limitation with this is that you'll need to have control of both Process objects at some point in order to allocate the queue to them, something like this:
import time
from queue import Empty
from multiprocessing import Queue, Process

def receiver(q):
    while 1:
        try:
            message = q.get_nowait()
            print('receiver got', message)
        except Empty:
            print('nothing to receive, sleeping')
            time.sleep(1)

def sender(q):
    while 1:
        message = 'some message'
        q.put('some message')
        print('sender sent', message)
        time.sleep(1)

some_queue = Queue()

process_a = Process(
    target=receiver,
    args=(some_queue,)
)
process_b = Process(
    target=sender,
    args=(some_queue,)
)

process_a.start()
process_b.start()

print('ctrl + c to exit')
try:
    while 1:
        time.sleep(1)
except KeyboardInterrupt:
    pass

process_a.terminate()
process_b.terminate()

process_a.join()
process_b.join()
Queues are nice because you can actually have as many consumers and as many producers for that exact same Queue object as you like (handy for distributing tasks).
I should point out that just calling .terminate() on a Process is bad form; you should use your shiny new messaging system to pass a shutdown message or something of that nature.
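As a minimal sketch of that shutdown-message idea (the SHUTDOWN sentinel and structure below are illustrative, not from the original code), the receiver breaks out of its loop when it sees the sentinel instead of being terminate()d:

import time
from queue import Empty
from multiprocessing import Queue, Process

SHUTDOWN = 'shutdown'  # illustrative sentinel value

def receiver(q):
    while True:
        try:
            message = q.get_nowait()
        except Empty:
            time.sleep(1)
            continue
        if message == SHUTDOWN:
            break  # exit the loop cleanly instead of being terminated
        print('receiver got', message)

if __name__ == '__main__':
    q = Queue()
    p = Process(target=receiver, args=(q,))
    p.start()
    q.put('some message')
    q.put(SHUTDOWN)  # ask the receiver to finish
    p.join()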
The multiprocessing module comes with a nice feature called Pipe(). It is a nice way to share resources between two processes (I've never tried more than two). With the arrival of Python 3.8 came the shared memory feature in the multiprocessing module, but I have not really tested that, so I cannot vouch for it.
You would use the Pipe function something like this:
from multiprocessing import Pipe, Process
.....

def sending(conn):
    message = 'some message'
    # perform some code
    conn.send(message)
    conn.close()

receiver, sender = Pipe()
p = Process(target=sending, args=(sender,))
p.start()
print(receiver.recv())  # prints "some message"
p.join()
With this you should be able to have separate processes running independently until you get to the point where you need the input from one process. If there is an error because the other process's data has not arrived yet, you can put the waiting process to sleep or halt it, or use a while loop to keep checking until the other process finishes its task and sends the result over:
while not parent_conn.recv():
    time.sleep(5)
This should keep it in a loop until the other process is done running and sends the result. Pipe is also about 2-3 times faster than Queue. Queue is also a good option, but personally I do not use it.
I am trying to create a 'keepalive' websocket thread that sends an emit every 10 seconds to the browser once someone connects to the page, but I'm getting an error and am not sure how to get around it.
Any ideas on how to make this work?
And how would I kill this thread once a 'disconnect' is sent?
Thanks!
thread = None

@socketio.on('connect', namespace='/endpoint')
def test_connect():
    emit('my response', {'data': '<br>Client thinks i\'m connected'})

    def background_thread():
        """Example of how to send server generated events to clients."""
        count = 0
        while True:
            time.sleep(10)
            count += 1
            emit('my response', {'data': 'websocket is keeping alive'}, namespace='/endpoint')

    global thread
    if thread is None:
        thread = Thread(target=background_thread)
        thread.start()
You wrote your background thread in a way that requires it to know who the client is, since you are sending a direct message to it. For that reason the background thread needs to have access to the request context. In Flask you can install a copy of the current request context in a thread using the copy_current_request_context decorator:
@copy_current_request_context
def background_thread():
    """Example of how to send server generated events to clients."""
    count = 0
    while True:
        time.sleep(10)
        count += 1
        emit('my response', {'data': 'websocket is keeping alive'}, namespace='/endpoint')
Couple of notes:
It is not necessary to set the namespace when you are sending back to the client; by default the emit call will go to the same namespace used by the client. The namespace needs to be specified when you broadcast or send messages outside of a request context.
Keep in mind your design will require a separate thread for each client that connects. It would be more efficient to have a single background thread that broadcasts to all clients. See the example application that I have on the Github repository for an example: https://github.com/miguelgrinberg/Flask-SocketIO/tree/master/example
To stop the thread when the client disconnects, you can use any multi-threading mechanism to let the thread know it needs to exit. This can be, for example, a global variable that you set on the disconnect event. A not-so-great alternative that is easy to implement is to wait for the emit to raise an exception once the client has gone away and use that to exit the thread.
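As a rough sketch of the "global variable set on the disconnect event" idea (the names below are illustrative, and it assumes the same socketio/emit setup as the snippets above), a threading.Event lets the keepalive loop exit cleanly:

import threading

stop_keepalive = threading.Event()  # hypothetical global stop flag

def background_thread():
    """Same keepalive loop as above, but it exits once the flag is set."""
    # Event.wait(timeout=10) returns False every 10s until the flag is set
    while not stop_keepalive.wait(timeout=10):
        emit('my response', {'data': 'websocket is keeping alive'}, namespace='/endpoint')

@socketio.on('disconnect', namespace='/endpoint')
def test_disconnect():
    stop_keepalive.set()  # lets the background thread finish its loop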