I have a small asynchronous server implemented using bottle and gevent.wsgi. There is a routine used to implement long poll that looks pretty much like the "Event Callbacks" example in the bottle documentation:
def worker(body):
    msg = msgbus.recv()
    body.put(msg)
    body.put(StopIteration)
@route('/poll')
def poll():
    body = gevent.queue.Queue()
    worker_greenlet = gevent.spawn(worker, body)  # keep a reference without shadowing worker()
    return body
Here, msgbus is a ZMQ sub socket.
This all works fine, but if a client breaks the connection while
worker is blocked on msgbus.recv(), that greenlet task will hang
around "forever" (well, until a message is received), and will only
find out about the disconnected client when it attempts to send a
response.
I can use msgbus.poll(timeout=something) if I don't want to block
forever waiting for ipc messages, but I still can't detect a client
disconnect.
What I want to do is get something like a reference to the client
socket so that I can use it in some kind of select or poll loop,
or get some sort of asynchronous notification inside my greenlet, but
I'm not sure how to accomplish either of these things with these
frameworks (bottle and gevent).
Is there a way to get notified of client disconnects?
Aha! The wsgi.input variable, at least under gevent.wsgi, has an rfile member that is a file-like object. This doesn't appear to be required by the WSGI spec, so it might not work with other servers.
With this I was able to modify my code to look something like:
def worker(body, rfile):
    poll = zmq.Poller()
    poll.register(msgbus)
    poll.register(rfile, zmq.POLLIN)

    while True:
        events = dict(poll.poll())

        if rfile.fileno() in events:
            # client disconnect!
            break

        if msgbus in events:
            msg = msgbus.recv()
            body.put(msg)
            break

    body.put(StopIteration)
@route('/poll')
def poll():
    rfile = bottle.request.environ['wsgi.input'].rfile
    body = gevent.queue.Queue()
    worker_greenlet = gevent.spawn(worker, body, rfile)  # keep a reference without shadowing worker()
    return body
And this works great...
...except on OpenShift, where you will have to use the
alternate frontend on port 8000 with websockets support.
I am trying to implement a WebSocket connection to a server (Python app <=> Django app).
The whole system runs in a big asyncio loop with many tasks; the code snippet below is just a very small testing part.
I am able to send any data to the server at any moment, and many of those sends will request something and wait for a response. But I would also like to have an "always running" handler for all incoming messages. (When something in the Django database changes, I want to send the changes to the Python app.)
How can I include an always-running receiver, or add a callback to the websocket? I am not able to find any solution for this.
My code snippet:
import asyncio, json, websockets, logging

class UpdateConnection:
    async def connect(self, botName):
        self.sock = await websockets.connect('ws://localhost:8000/updates/bot/' + botName)

    async def send(self, data):
        try:
            await self.sock.send(json.dumps(data))
        except:
            logging.info("Websocket connection lost!")
            # Find a way to reconnect... or make the socket reconnect automatically

if __name__ == '__main__':
    async def DebugLoop(socketCon):
        await socketCon.connect("dev")
        print("Running..")
        while True:
            data = {"type": "debug"}
            await socketCon.send(data)
            await asyncio.sleep(1)

    uSocket = UpdateConnection()
    loop = asyncio.get_event_loop()
    loop.create_task(DebugLoop(uSocket))
    loop.run_forever()
After connecting, my debug server will start sending random messages to the client at random intervals, and I would like to handle them somehow in an async way.
Thanks for any help :)
You don't have to make it so complicated. First of all, I suggest you use the patterns offered by the websockets module.
From the documentation:
connect() can be used as an infinite asynchronous iterator to reconnect automatically on errors:
async for websocket in websockets.connect(...):
    try:
        ...
    except websockets.ConnectionClosed:
        continue
Additionally, you simply keep the connection alive by awaiting incoming messages:
my_websocket = None

async for websocket in websockets.connect('ws://localhost:8000/updates/bot/' + botName):
    try:
        my_websocket = websocket
        async for message in websocket:
            pass  # here you could also process incoming messages
    except websockets.ConnectionClosed:
        my_websocket = None
        continue
As you can see we have a nested loop here:
The outer loop constantly reconnects to the server
The inner loop processes one incoming message at a time
If you are connected, and no messages are coming in from the server, this will just sleep.
The other thing that happens here is that my_websocket is set to the active connection, and unset again when the connection is lost.
In other parts of your script you can use my_websocket to send data. Note that you will need to check if it is currently set wherever you use it:
async def send(data):
    if my_websocket:
        await my_websocket.send(json.dumps(data))
This is just an illustration, you can also keep the websocket object as an object member, or pass it to another component through a setter function, etc.
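For instance, a rough sketch of the object-member variant, reusing the class name from the question (illustrative only, not tested):
import json
import websockets

class UpdateConnection:
    def __init__(self, url):
        self.url = url
        self.sock = None  # holds the live connection, or None while reconnecting

    async def run(self):
        # reconnect forever; keep self.sock pointing at the current connection
        async for websocket in websockets.connect(self.url):
            try:
                self.sock = websocket
                async for message in websocket:
                    pass  # process incoming messages here
            except websockets.ConnectionClosed:
                continue
            finally:
                self.sock = None

    async def send(self, data):
        if self.sock:
            await self.sock.send(json.dumps(data))
Here run() would be scheduled as a task on the event loop alongside the rest of the application.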
I am writing a Python 3.5 program which handles some signals and serves this data to a small number of websocket clients.
I want the websocket server and the signal handling to happen in the same program, therefore I am using threading.
The problem is I don't know how to send data from the worker thread to the client.
The Websocket server is implemented with a simple library called "websockets". The server is set up and clients can connect and talk to the server within the "new websocket client has connected" handler.
The server is set up with the help of an event loop:
start_server = websockets.serve(newWsHandler, host, port)
loop = asyncio.get_event_loop()
loop.run_until_complete(start_server)
loop.run_forever()
Because I want my program to do signal handling too, and loop.run_forever() is a blocking call, I create an endless worker thread before I start my server. This works as expected.
When the worker thread detects a signal change, it has to alert the connected websocket clients. But a simple client.send() does not work. Putting await in front of it does not work either (since that only works within coroutines, I think). I tried making a separate "async def" function and adding it to the event loop, but it gets a bit complicated because it's not on the same thread.
So the main question is: what is the best way to send something to a websocket client from a worker thread? I don't receive anything in response.
EDIT:
It will probably help if I add some mock code.
def signalHandler():
    # check signals
    ...
    if alert:
        connections[0].send("Alert")  # NEED HELP HERE

async def newWsHandler(websocket, path):
    connections.append(websocket)
    while True:
        # keep the connection open until the client disconnects
        msg = await websocket.recv()

# top level
connections = []
...
start_server = websockets.serve(newWsHandler, host, port)

signalThread = Thread(target=signalHandler)
signalThread.setDaemon(True)
signalThread.start()

loop = asyncio.get_event_loop()
loop.run_until_complete(start_server)
loop.run_forever()
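For what it's worth, the "separate async def added to the event loop" idea mentioned above would look roughly like the sketch below, which uses asyncio.run_coroutine_threadsafe to hand a coroutine to the loop from the worker thread; the names reuse the mock code, and the loop object would have to be created before the thread and passed to it (e.g. Thread(target=signalHandler, args=(loop,))):
async def notify_all(message):
    for ws in connections:
        await ws.send(message)

def signalHandler(loop):
    while True:
        # ...check signals...
        if alert:
            # schedule the coroutine onto the event loop from this worker thread
            future = asyncio.run_coroutine_threadsafe(notify_all("Alert"), loop)
            future.result()  # optional: block until the sends have completed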
I am trying to create a 'keepalive' websocket thread to send an emit every 10 seconds to the browser once someone connects to the page, but I'm getting an error and am not sure how to get around it.
Any ideas on how to make this work?
And how would I kill this thread once a 'disconnect' is sent?
Thanks!
@socketio.on('connect', namespace='/endpoint')
def test_connect():
    emit('my response', {'data': '<br>Client thinks i\'m connected'})

    def background_thread():
        """Example of how to send server generated events to clients."""
        count = 0
        while True:
            time.sleep(10)
            count += 1
            emit('my response', {'data': 'websocket is keeping alive'}, namespace='/endpoint')

    global thread
    if thread is None:
        thread = Thread(target=background_thread)
        thread.start()
You wrote your background thread in a way that requires it to know who the client is, since you are sending a direct message to it. For that reason the background thread needs to have access to the request context. In Flask you can install a copy of the current request context in a thread using the copy_current_request_context decorator:
@copy_current_request_context
def background_thread():
    """Example of how to send server generated events to clients."""
    count = 0
    while True:
        time.sleep(10)
        count += 1
        emit('my response', {'data': 'websocket is keeping alive'}, namespace='/endpoint')
Couple of notes:
It is not necessary to set the namespace when you are sending back to the client, by default the emit call will be on the same namespace used by the client. The namespace needs to be specified when you broadcast or send messages outside of a request context.
Keep in mind your design will require a separate thread for each client that connects. It would be more efficient to have a single background thread that broadcasts to all clients. See the example application that I have on the Github repository for an example: https://github.com/miguelgrinberg/Flask-SocketIO/tree/master/example
To stop the thread when the client disconnects you can use any multi-threading mechanism to let the thread know it needs to exit. This can be, for example, a global flag that you set in the disconnect event handler, as in the sketch below. A less elegant alternative that is easy to implement is to wait for the emit to raise an exception once the client has gone away and use that to exit the thread.
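For illustration, here is a rough sketch of that first approach, using a threading.Event instead of a plain global; the single shared flag assumes one client, and none of these names come from the original question:
from threading import Event

stop_signal = Event()  # one client assumed; use a per-client flag otherwise

@socketio.on('disconnect', namespace='/endpoint')
def test_disconnect():
    stop_signal.set()  # ask the background thread to exit

@copy_current_request_context
def background_thread():
    while not stop_signal.is_set():
        time.sleep(10)
        emit('my response', {'data': 'websocket is keeping alive'})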
I just got started with ZMQ. I am designing an app whose workflow is:
one of many clients (who have random PULL addresses) PUSH a request to a server at 5555
the server is forever waiting for client PUSHes. When one comes, a worker process is spawned for that particular request. Yes, worker processes can exist concurrently.
When that process completes its task, it PUSHes the result to the client.
I assume that the PUSH/PULL architecture is suited for this. Please correct me on this.
But how do I handle these scenarios?
the client_receiver.recv() will wait for an infinite time when the server fails to respond.
the client may send a request but fail immediately afterwards; in that case a worker process will remain stuck at server_sender.send() forever.
So how do I setup something like a timeout in the PUSH/PULL model?
EDIT: Thanks to user938949's suggestions, I got a working answer, and I am sharing it for posterity.
If you are using zeromq >= 3.0, then you can set the RCVTIMEO socket option:
client_receiver.RCVTIMEO = 1000 # in milliseconds
But in general, you can use pollers:
poller = zmq.Poller()
poller.register(client_receiver, zmq.POLLIN) # POLLIN for recv, POLLOUT for send
And poller.poll() takes a timeout:
evts = poller.poll(1000) # wait *up to* one second for a message to arrive.
evts will be an empty list if there is nothing to receive.
You can poll with zmq.POLLOUT, to check if a send will succeed.
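A short sketch of that POLLOUT check (the worker socket name matches the send example that follows; the 1000 ms wait is arbitrary):
out_poller = zmq.Poller()
out_poller.register(worker, zmq.POLLOUT)
if out_poller.poll(1000):  # a non-empty result means a send would not block
    worker.send(msg)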
Or, to handle the case of a peer that might have failed, a:
worker.send(msg, zmq.NOBLOCK)
might suffice, which will always return immediately - raising a ZMQError(zmq.EAGAIN) if the send could not complete.
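A sketch of handling that case explicitly (the errno check mirrors the description above; what you do on EAGAIN is up to you):
try:
    worker.send(msg, zmq.NOBLOCK)
except zmq.ZMQError as e:
    if e.errno == zmq.EAGAIN:
        pass  # no peer ready to take the message; drop it or retry later
    else:
        raise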
This was a quick hack I made after I referred to user938949's answer and http://taotetek.wordpress.com/2011/02/02/python-multiprocessing-with-zeromq/ . If you can do better, please post your answer and I will recommend it.
For those wanting lasting solutions on reliability, refer to http://zguide.zeromq.org/page:all#toc64
Version 3.0 of zeromq (beta ATM) supports timeout in ZMQ_RCVTIMEO and ZMQ_SNDTIMEO. http://api.zeromq.org/3-0:zmq-setsockopt
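If you are on that version, a sketch of setting both options via setsockopt (1000 ms is only an example; the code below sticks to the poller approach, which also works on older versions):
sock.setsockopt(zmq.RCVTIMEO, 1000)  # recv() raises ZMQError(EAGAIN) after 1 s
sock.setsockopt(zmq.SNDTIMEO, 1000)  # send() raises ZMQError(EAGAIN) after 1 s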
Server
The zmq.NOBLOCK ensures that when a client does not exist, the send() does not block.
import time
import zmq

context = zmq.Context()
ventilator_send = context.socket(zmq.PUSH)
ventilator_send.bind("tcp://127.0.0.1:5557")

i = 0
while True:
    i = i + 1
    time.sleep(0.5)
    print ">>sending message ", i
    try:
        ventilator_send.send(repr(i), zmq.NOBLOCK)
        print " succeed"
    except:
        print " failed"
Client
The poller object can listen in on many receiving sockets (see the "Python Multiprocessing with ZeroMQ" post linked above); I registered it only on work_receiver here. In the infinite loop, the client polls with an interval of 1000 ms. The socks object comes back empty if no message has been received in that time.
import time
import zmq

context = zmq.Context()
work_receiver = context.socket(zmq.PULL)
work_receiver.connect("tcp://127.0.0.1:5557")

poller = zmq.Poller()
poller.register(work_receiver, zmq.POLLIN)

# Loop and accept messages from both channels, acting accordingly
while True:
    socks = dict(poller.poll(1000))
    if socks:
        if socks.get(work_receiver) == zmq.POLLIN:
            print "got message ", work_receiver.recv(zmq.NOBLOCK)
    else:
        print "error: message timeout"
The send won't block if you use ZMQ_NOBLOCK, but if you then try to close the socket and context, that step will block the program from exiting.
The reason is that the socket waits for a peer so that the outgoing messages are guaranteed to be queued. To close the socket immediately and flush the outgoing messages from the buffer, set ZMQ_LINGER to 0.
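A sketch of that shutdown, reusing the socket and context names from the server example above:
ventilator_send.setsockopt(zmq.LINGER, 0)  # discard unsent messages on close
ventilator_send.close()
context.term()  # returns immediately instead of waiting for a peer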
If you're only waiting for one socket, rather than create a Poller, you can do this:
if work_receiver.poll(1000, zmq.POLLIN):
    print "got message ", work_receiver.recv(zmq.NOBLOCK)
else:
    print "error: message timeout"
You can use this if your timeout changes depending on the situation, instead of setting work_receiver.RCVTIMEO.
I'm using python-xmpp to send jabber messages. Everything works fine except that every time I want to send messages (every 15 minutes) I need to reconnect to the jabber server, and in the meantime the sending client is offline and cannot receive messages.
So I want to write a really simple, indefinitely running xmpp client, that is online the whole time and can send (and receive) messages when required.
My trivial (non-working) approach:
import time
import xmpp

class Jabber(object):
    def __init__(self):
        server = 'example.com'
        username = 'bot'
        passwd = 'password'
        self.client = xmpp.Client(server)
        self.client.connect(server=(server, 5222))
        self.client.auth(username, passwd, 'bot')
        self.client.sendInitPresence()
        self.sleep()

    def sleep(self):
        self.awake = False
        delay = 1
        while not self.awake:
            time.sleep(delay)

    def wake(self):
        self.awake = True

    def auth(self, jid):
        self.client.getRoster().Authorize(jid)
        self.sleep()

    def send(self, jid, msg):
        message = xmpp.Message(jid, msg)
        message.setAttr('type', 'chat')
        self.client.send(message)
        self.sleep()

if __name__ == '__main__':
    j = Jabber()
    time.sleep(3)
    j.wake()
    j.send('receiver@example.org', 'hello world')
    time.sleep(30)
The problem here seems to be that I cannot wake it up. My best guess is that I need some kind of concurrency. Is that true, and if so how would I best go about that?
EDIT: After looking into all the options concerning concurrency, I decided to go with twisted and wokkel. If I could, I would delete this post.
There is a good example on the homepage of xmpppy itself (which is another name for python-xmpp), which does almost what you want: xtalk.py
It is basically a console jabber client, but it shouldn't be hard to rewrite it into the bot you want.
It's always online and can send and receive messages. I don't see a need for the multiprocessing (or any other concurrency) module here, unless you need to receive and send messages at exactly the same time.
A loop over the Process(timeout) method is a good way to wait and process any new incoming stanzas while keeping the connection up.
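For example, a rough sketch of such a loop with xmpppy, assuming a message handler callback and the connection details from the question (not tested):
import xmpp

def on_message(dispatcher, stanza):
    pass  # react to incoming messages here

client = xmpp.Client('example.com', debug=[])
client.connect(server=('example.com', 5222))
client.auth('bot', 'password', 'bot')
client.RegisterHandler('message', on_message)
client.sendInitPresence()

# Process(1) waits up to one second for incoming stanzas and dispatches them
# to the registered handlers, so this loop keeps the connection alive
while client.Process(1):
    pass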