RuntimeError: working outside of request context - python

I am trying to create a 'keepalive' websocket thread to send an emit every 10 seconds to the browser once someone connects to the page, but I'm getting an error and am not sure how to get around it.
Any ideas on how to make this work?
And how would I kill this thread once a 'disconnect' is sent?
Thanks!
@socketio.on('connect', namespace='/endpoint')
def test_connect():
    emit('my response', {'data': '<br>Client thinks i\'m connected'})

    def background_thread():
        """Example of how to send server generated events to clients."""
        count = 0
        while True:
            time.sleep(10)
            count += 1
            emit('my response', {'data': 'websocket is keeping alive'}, namespace='/endpoint')

    global thread
    if thread is None:
        thread = Thread(target=background_thread)
        thread.start()

You wrote your background thread in a way that requires it to know who the client is, since you are sending a direct message to it. For that reason the background thread needs access to the request context. In Flask you can install a copy of the current request context in a thread using the copy_current_request_context decorator:
@copy_current_request_context
def background_thread():
    """Example of how to send server generated events to clients."""
    count = 0
    while True:
        time.sleep(10)
        count += 1
        emit('my response', {'data': 'websocket is keeping alive'}, namespace='/endpoint')
A couple of notes:
It is not necessary to set the namespace when you are sending back to the client; by default the emit call goes out on the same namespace the client used. The namespace only needs to be specified when you broadcast or send messages outside of a request context.
Keep in mind your design will require a separate thread for each client that connects. It would be more efficient to have a single background thread that broadcasts to all clients. See the example application in my GitHub repository: https://github.com/miguelgrinberg/Flask-SocketIO/tree/master/example
To stop the thread when the client disconnects you can use any multi-threading mechanism to let the thread know it needs to exit. This can be, for example, a global flag that you set in the disconnect handler (see the sketch below). A less elegant alternative that is easy to implement is to wait for the emit to raise an exception once the client has gone away and use that to exit the thread.
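Putting those two notes together, here is a minimal sketch of the single-broadcast-thread approach with a stop flag. It is not taken from the original answer; it assumes a reasonably recent Flask-SocketIO and keeps the hypothetical '/endpoint' namespace from the question:

from threading import Event

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

thread = None
stop_event = Event()  # set in the disconnect handler to tell the thread to exit

def background_thread():
    """A single thread that broadcasts to every client in the namespace."""
    count = 0
    while not stop_event.is_set():
        socketio.sleep(10)
        count += 1
        # Outside a request context the namespace must be given explicitly;
        # with no recipient specified, socketio.emit broadcasts to all clients.
        socketio.emit('my response',
                      {'data': 'websocket is keeping alive', 'count': count},
                      namespace='/endpoint')

@socketio.on('connect', namespace='/endpoint')
def on_connect():
    global thread
    if thread is None:
        thread = socketio.start_background_task(background_thread)

@socketio.on('disconnect', namespace='/endpoint')
def on_disconnect():
    stop_event.set()  # the loop above exits on its next pass

Because there is only one thread, setting the flag on any disconnect stops the broadcast for everyone; a per-client design would instead key the flags by request.sid.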

Related

asyncio run_until_complete blocks after future has set result (autobahn websockets & threading)

TL;DR: Calling future.set_result doesn't immediately resolve loop.run_until_complete. Instead it blocks for an additional 5 seconds.
Full context:
In my project, I'm using autobahn and asyncio to send and receive messages with a websocket server. For my use case, I need a 2nd thread for websocket communication, since I have arbitrary blocking code that will be running in the main thread. The main thread also needs to be able to schedule messages for the communication thread to send back and forth with the server. My current goal is to send a message originating from the main thread and block until the response comes back, using the communication thread for all message passing.
Here is a snippet of my code:
import asyncio
import threading

from autobahn.asyncio.websocket import WebSocketClientFactory, WebSocketClientProtocol

CLIENT = None

class MyWebSocketClientProtocol(WebSocketClientProtocol):
    # -------------- Boilerplate --------------
    is_connected = False
    msg_queue = []
    msg_listeners = []

    def onOpen(self):
        self.is_connected = True
        for msg in self.msg_queue[::]:
            self.publish(msg)

    def onClose(self, wasClean, code, reason):
        self.is_connected = False

    def onMessage(self, payload, isBinary):
        for listener in self.msg_listeners:
            listener(payload)

    def publish(self, msg):
        if not self.is_connected:
            self.msg_queue.append(msg)
        else:
            self.sendMessage(msg.encode('utf-8'))
    # /----------------------------------------

    def send_and_wait(self):
        future = asyncio.get_event_loop().create_future()

        def listener(msg):
            print('set result')
            future.set_result(123)

        self.msg_listeners.append(listener)
        self.publish('hello')
        return future

def worker(loop, ready):
    asyncio.set_event_loop(loop)
    factory = WebSocketClientFactory('ws://127.0.0.1:9000')
    factory.protocol = MyWebSocketClientProtocol
    transport, protocol = loop.run_until_complete(loop.create_connection(factory, '127.0.0.1', 9000))
    global CLIENT
    CLIENT = protocol
    ready.set()
    loop.run_forever()

if __name__ == '__main__':
    # Set up communication thread to talk to the server
    threaded_loop = asyncio.new_event_loop()
    thread_is_ready = threading.Event()
    thread = threading.Thread(target=worker, args=(threaded_loop, thread_is_ready))
    thread.start()
    thread_is_ready.wait()

    # Send a message and wait for response
    print('starting')
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(CLIENT.send_and_wait())
    print('done')  # this line gets called 5 seconds after it should
I'm using the autobahn echo server example to respond to my messages.
Problem: The WebSocketClientProtocol receives the response to its outgoing message and calls set_result on its pending future, but loop.run_until_complete blocks for roughly another 4.9 seconds before it finally returns.
I understand that run_until_complete also processes other pending events on the event loop. Is it possible that the main thread has somehow queued up a bunch of events that now have to be processed once I start the loop? Also, if I move run_until_complete into the communication thread, or move create_connection into the main thread, the event loop does not block.
Lastly, I tried to recreate this problem without autobahn, but I couldn't reproduce the extra delay. I'm curious whether this is an issue with the timing of autobahn's callbacks (onMessage, for example).
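One thing worth illustrating here, separate from the original post: asyncio futures and call_soon are not thread-safe, so resolving a future from a thread other than the one running the waiting loop may not wake that loop right away. A minimal sketch (all names illustrative) of handing a result across threads with call_soon_threadsafe:

import asyncio
import threading
import time

def resolve_from_other_thread(owner_loop, future, value):
    # future.set_result is not thread-safe; call_soon_threadsafe schedules it
    # on the loop that owns the future and wakes that loop up immediately.
    owner_loop.call_soon_threadsafe(future.set_result, value)

if __name__ == '__main__':
    main_loop = asyncio.new_event_loop()
    future = main_loop.create_future()

    def fake_reply():
        time.sleep(1)  # simulate the server taking a moment to respond
        resolve_from_other_thread(main_loop, future, 123)

    threading.Thread(target=fake_reply).start()
    print(main_loop.run_until_complete(future))  # prints 123 with no extra delay
    main_loop.close()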

How do I send something to connected websocket clients from another thread?

I am writing a Python 3.5 program which handles some signals and serves this data to a small number of websocket clients.
I want the websocket server and the signal handling to happen in the same program, therefore I am using threading.
The problem is I don't know how to send data from the worker thread to the client.
The Websocket server is implemented with a simple library called "websockets". The server is set up and clients can connect and talk to the server within the "new websocket client has connected" handler.
The server is set up with the help of an event loop:
start_server = websockets.serve(newWsHandler, host, port)
loop = asyncio.get_event_loop()
loop.run_until_complete(start_server)
loop.run_forever()
Because I want my program to do signal handling too, and loop.run_forever() is a blocking call, I create an endless worker thread before I start my server. This works as expected.
When the worker thread detects a signal change, it has to alert the connected websocket clients. But a simple client.send() does not work. Putting await in front of it does not work either (since that only works within coroutines, I think). I tried making a separate "async def" function and adding it to the event loop, but it gets a bit complicated because it's not on the same thread.
So the main question is: what is the best way to send something to a websocket client from a worker thread? I don't receive anything in response.
EDIT:
It will probably help if I add some mock code.
def signalHandler():
    # check signals
    ...
    if alert:
        connections[0].send("Alert")  # NEED HELP HERE

async def newWsHandler(websocket, path):
    connections.append(websocket)
    while True:
        # keep the connection open until the client disconnects
        msg = await websocket.recv()

# top level
connections = []
...
start_server = websockets.serve(newWsHandler, host, port)

signalThread = Thread(target=signalHandler)
signalThread.setDaemon(True)
signalThread.start()

loop = asyncio.get_event_loop()
loop.run_until_complete(start_server)
loop.run_forever()
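For context, one common pattern (not part of the original question) is to hand the server's event loop to the worker thread and schedule the coroutine send() with asyncio.run_coroutine_threadsafe. A rough, hedged adaptation of the mock code above, assuming the older websockets handler signature used in the question, with placeholder host/port and signal logic:

import asyncio
import threading
import time

import websockets

connections = []
host, port = 'localhost', 8765  # placeholders

def check_signals():
    return False  # placeholder for the real signal logic

def signalHandler(loop):
    while True:
        time.sleep(1)
        if check_signals() and connections:
            # websocket.send() is a coroutine; schedule it on the server's
            # event loop from this worker thread and wait for it to finish.
            future = asyncio.run_coroutine_threadsafe(
                connections[0].send("Alert"), loop)
            future.result()  # raises here if the send failed

async def newWsHandler(websocket, path):
    connections.append(websocket)
    try:
        while True:
            await websocket.recv()  # keep the connection open
    finally:
        connections.remove(websocket)

loop = asyncio.get_event_loop()
start_server = websockets.serve(newWsHandler, host, port)

signalThread = threading.Thread(target=signalHandler, args=(loop,), daemon=True)
signalThread.start()

loop.run_until_complete(start_server)
loop.run_forever()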

emitting from thread using flask's socketio extension

I want to emit a delayed message to a socket client. For example, when a new client connects, a "checking is started" message should be emitted to the client, and a few seconds later another message, sent from a thread, should follow.
@socket.on('doSomething', namespace='/test')
def onDoSomething(data):
    t = threading.Timer(4, checkSomeResources)
    t.start()
    emit('doingSomething', 'checking is started')

def checkSomeResources():
    # ...
    # some work which takes several seconds comes here
    # ...
    emit('doingSomething', 'checking is done')
But the code does not work because of a context issue; I get
RuntimeError('working outside of request context')
Is it possible to emit from a thread?
The problem is that the thread does not have the context to know what user to address the message to.
You can pass request.namespace to the thread as an argument, and then send the message with it. Example:
@socket.on('doSomething', namespace='/test')
def onDoSomething(data):
    t = threading.Timer(4, checkSomeResources, args=[request.namespace])
    t.start()
    emit('doingSomething', 'checking is started')

def checkSomeResources(namespace):
    # ...
    # some work which takes several seconds comes here
    # ...
    namespace.emit('doingSomething', 'checking is done')
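As a hedged aside, not part of the original answer: in newer Flask-SocketIO releases request.namespace is no longer an object you can call emit() on, so a more portable variant of the same idea is to capture request.sid in the handler and address the client explicitly through socketio.emit(), which works outside the request context as long as the recipient and namespace are given:

import threading

from flask import request
from flask_socketio import emit

# 'socketio' is assumed to be the SocketIO instance created with the app.

@socketio.on('doSomething', namespace='/test')
def onDoSomething(data):
    sid = request.sid  # identifies this particular client connection
    t = threading.Timer(4, checkSomeResources, args=[sid])
    t.start()
    emit('doingSomething', 'checking is started')

def checkSomeResources(sid):
    # ... some work which takes several seconds ...
    socketio.emit('doingSomething', 'checking is done',
                  room=sid, namespace='/test')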

Responding to client disconnects using bottle and gevent.wsgi?

I have a small asynchronous server implemented using bottle and gevent.wsgi. There is a routine used to implement long poll that looks pretty much like the "Event Callbacks" example in the bottle documentation:
def worker(body):
    msg = msgbus.recv()
    body.put(msg)
    body.put(StopIteration)

@route('/poll')
def poll():
    body = gevent.queue.Queue()
    worker = gevent.spawn(worker, body)
    return body
Here, msgbus is a ZMQ sub socket.
This all works fine, but if a client breaks the connection while worker is blocked on msgbus.recv(), that greenlet task will hang around "forever" (well, until a message is received), and will only find out about the disconnected client when it attempts to send a response.
I can use msgbus.poll(timeout=something) if I don't want to block forever waiting for IPC messages, but I still can't detect a client disconnect.
What I want to do is get something like a reference to the client socket so that I can use it in some kind of select or poll loop, or get some sort of asynchronous notification inside my greenlet, but I'm not sure how to accomplish either of these things with these frameworks (bottle and gevent).
Is there a way to get notified of client disconnects?
Aha! The wsgi.input variable, at least under gevent.wsgi, has an rfile member that is a file-like object. This doesn't appear to be required by the WSGI spec, so it might not work with other servers.
With this I was able to modify my code to look something like:
def worker(body, rfile):
    poll = zmq.Poller()
    poll.register(msgbus)
    poll.register(rfile, zmq.POLLIN)

    while True:
        events = dict(poll.poll())
        if rfile.fileno() in events:
            # client disconnect!
            break
        if msgbus in events:
            msg = msgbus.recv()
            body.put(msg)
            break

    body.put(StopIteration)

@route('/poll')
def poll():
    rfile = bottle.request.environ['wsgi.input'].rfile
    body = gevent.queue.Queue()
    worker = gevent.spawn(worker, body, rfile)
    return body
And this works great... except on OpenShift, where you will have to use the alternate frontend on port 8000 with websockets support.

Dbus/GLib Main Loop, Background Thread

I'm starting out with DBus and event-driven programming in general. The service that I'm trying to create consists of three parts, two of which are really "server" pieces:
1) The actual DBus server talks to a remote website over HTTPS, manages sessions, and conveys info the clients.
2) The other part of the service calls a keep alive page every 2 minutes to keep the session active on the external website
3) The clients make calls to the service to retrieve info from the service.
I found some simple example programs and I'm trying to adapt them to prototype #1 and #2. Rather than building separate programs for both, I thought I could run them in a single, two-threaded process.
The problem that I'm seeing is that I call time.sleep(X) in my keep alive thread. The thread goes to sleep, but won't ever wake up. I think that the GIL isn't released by the GLib main loop.
Here's my thread code:
class Keepalive(threading.Thread):
    def __init__(self, interval=60):
        super(Keepalive, self).__init__()
        self.interval = interval
        bus = dbus.SessionBus()
        self.remote = bus.get_object("com.example.SampleService", "/SomeObject")

    def run(self):
        while True:
            print('sleep %i' % self.interval)
            time.sleep(self.interval)
            print('sleep done')
            reply_status = self.remote.keepalive()
            if reply_status:
                print('Keepalive: Success')
            else:
                print('Keepalive: Failure')
From the print statements, I know that the sleep starts, but I never see "sleep done."
Here is the main code:
if __name__ == '__main__':
    try:
        dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)

        session_bus = dbus.SessionBus()
        name = dbus.service.BusName("com.example.SampleService", session_bus)
        object = SomeObject(session_bus, '/SomeObject')

        mainloop = gobject.MainLoop()

        ka = Keepalive(15)
        ka.start()

        print('Begin main loop')
        mainloop.run()
    except Exception as e:
        print(e)
    finally:
        ka.join()
Some other observations:
I see the "begin main loop" message, so I know it's getting control. Then, I see "sleep %i," and after that, nothing.
If I ^C, then I see "sleep done." After ~20 seconds, I get an exception from self.run() that the remote application didn't respond:
DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
What's the best way to run my keep-alive code within the server?
Thanks!
You have to explicitly enable multithreading when using gobject by calling gobject.threads_init(). See the PyGTK FAQ for background info.
Besides that, for the purpose you're describing, a timeout seems to be a better fit. Use it as follows:
# Enable timer
self.timer = gobject.timeout_add(time_in_ms, self.remote.keepalive)
# Disable timer
gobject.source_remove(self.timer)
This calls the keepalive function every time_in_ms milliseconds. Further details, again, can be found in the PyGTK reference.
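One detail worth adding (not in the original answer): a GLib timeout keeps repeating only while its callback returns True, so it is safer to wrap the keep-alive call than to pass self.remote.keepalive directly. A sketch under that assumption, using the legacy gobject and dbus modules from the question:

import dbus
import dbus.mainloop.glib
import gobject

gobject.threads_init()  # needed if other Python threads run alongside the main loop
dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)

bus = dbus.SessionBus()
remote = bus.get_object("com.example.SampleService", "/SomeObject")

def keepalive_cb():
    status = remote.keepalive()
    print('Keepalive:', 'Success' if status else 'Failure')
    return True  # returning a falsy value would cancel the timeout

timer_id = gobject.timeout_add(2 * 60 * 1000, keepalive_cb)  # every 2 minutes

mainloop = gobject.MainLoop()
mainloop.run()
# To cancel the timer later: gobject.source_remove(timer_id)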
