I have a Python program which, on a certain event (for example, on a curl request), calculates a function value. What I need is that the moment the function executes, some data gets posted to a Tornado websocket. I have looked around the internet and found examples of how to create a websocket, but all these examples cover scenarios where the data is produced inside the websocket handler.
Referring to this code for example:
https://github.com/benjaminmbrown/real-time-data-viz-d3-crossfilter-websocket-tutorial/blob/master/rt-data-viz/websocket_server.py
Can someone guide me on how I can post a message to a websocket? Basically I have a Tornado API where, if a user does a curl request, I would like to log that message to the websocket.
You can do it by creating a registry of all active websockets and using it to send messages when an event occurs.
class WebsocketRegistry:
    def __init__(self):
        self._active_websockets = []

    def add_listener(self, listener):
        self._active_websockets.append(listener)

    def remove_listener(self, listener):
        self._active_websockets.remove(listener)

    def send_messages(self, msg_txt):
        # broadcast to every currently open websocket
        for ws in self._active_websockets:
            ws.write_message(msg_txt)

registry = WebsocketRegistry()

class WSHandler(tornado.websocket.WebSocketHandler):
    def open(self, *args, **kwargs):
        super(WSHandler, self).open(*args, **kwargs)
        registry.add_listener(self)

    def on_close(self):
        super(WSHandler, self).on_close()
        registry.remove_listener(self)
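For example, the HTTP endpoint that the curl request hits can then call the registry directly. This is a minimal sketch: the handler name, URLs, and calculate_value() are placeholders for your own code, and it assumes everything runs on a single IOLoop:

class CalculateHandler(tornado.web.RequestHandler):
    def get(self):
        result = calculate_value()  # your existing function
        # push a log message to every connected websocket client
        registry.send_messages("calculated: {}".format(result))
        self.write("ok")

app = tornado.web.Application([
    (r'/calculate', CalculateHandler),
    (r'/ws', WSHandler),
])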
P.S. Take note that if you plan to scale your app to 2+ instances, this won't work, and you will have to use, for example, a message queue (RabbitMQ is good) to deliver events to all the open websockets. But the overall approach would be the same: the MQ acts as the registry, and websockets subscribe to messages on connection (and unsubscribe on closing).
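A rough sketch of that approach using the pika 1.x API (the exchange name and host are assumptions, and start_consuming() is expected to run in a background thread):

import pika
import tornado.ioloop

def consume_events(registry, loop):
    # `loop` is the main thread's IOLoop; run this function in a
    # background thread, e.g. threading.Thread(target=consume_events,
    # args=(registry, tornado.ioloop.IOLoop.current())).start()
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    # every app instance binds its own exclusive queue to a fanout exchange
    channel.exchange_declare(exchange='ws_events', exchange_type='fanout')
    queue = channel.queue_declare(queue='', exclusive=True).method.queue
    channel.queue_bind(exchange='ws_events', queue=queue)

    def on_message(ch, method, properties, body):
        # hop back onto the IOLoop thread before touching the websockets
        loop.add_callback(registry.send_messages, body.decode())

    channel.basic_consume(queue=queue, on_message_callback=on_message, auto_ack=True)
    channel.start_consuming()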
I have a Flask server that accepts HTTP requests from a client. This HTTP server needs to delegate work to a third-party server using a websocket connection (for performance reasons).
I find it hard to wrap my head around how to create a permanent websocket connection that can stay open across HTTP requests. Sending requests to the websocket server in a run-once script works fine and looks like this:
import asyncio
import json

import websockets

async def send(websocket, payload):
    await websocket.send(json.dumps(payload).encode("utf-8"))

async def recv(websocket):
    data = await websocket.recv()
    return json.loads(data)

async def main(payload):
    uri = "wss://the-third-party-server.com/xyz"
    async with websockets.connect(uri) as websocket:
        future = send(websocket, payload)
        future_r = recv(websocket)
        _, output = await asyncio.gather(future, future_r)
        return output

asyncio.get_event_loop().run_until_complete(main({...}))
Here, main() establishes a WSS connection and closes it when done, but how can I keep that connection open for incoming HTTP requests, so that I can call main() for each of those without re-establishing the WSS connection?
The main problem is that when you code a web app responding to HTTP(S), your code has a "life cycle" that is very peculiar: usually you have a "view" function that gets the request data, performs all actions needed to gather the response data, and returns it.
This "view" function in most web frameworks has to be independent from the rest of the system - it should be able to perform its duty relying on no other data or objects than what it gets when called - which are the request data, and system configurations - that gives the application server (the framework parts designed to actually connect your program to the internet) can choose a variety of ways to serve your program: they may run your view function in several parallel threads, or in several parallel processes, or even in different processes in various containers or physical servers: you application would not need to care about that.
If you want a resource that is available across calls to your view functions, you need to break out of this paradigm. For example, frameworks typically create a pool of database connections, so that views in the same process can re-use those connections. These database connections are usually supplied by the framework itself, which implements a mechanism for allowing them to be reused and to be available, transparently, as needed. You have to recreate a mechanism of the same sort if you want to keep a websocket connection alive.
In a certain way, you need a Python object that can mediate your websocket data, behaving like a "server" for your web view functions.
That is simpler to do than it sounds - a special Python class designed to have a single instance per process, which keeps the connection and is able to send and receive data from parallel calls without mangling it, is enough. A callable that ensures this instance exists in the current process will work under any strategy configured to serve your app to the web.
If you are using Flask, which does not use asyncio, you get a further complication - you lose the async ability inside your views: they will have to wait for the websocket request to complete. It is then the job of your application server to run your views in different threads or processes to ensure availability. And it is your job to run the asyncio loop for your websocket in a separate thread, so that it can make the requests it needs.
Here is some example code. Please note that apart from using a single websocket per process, this has no provisions in case of failure of any kind. Most important: it does nothing in parallel - all pairs of send-recv are blocking, as you give no clue of a mechanism that would allow one to pair each outgoing message with its response.
import asyncio
import json
import threading
from queue import Queue

import websockets

class AWebSocket:
    instance = None

    def __new__(cls, *args, **kw):
        # per-process singleton: create the instance once and reuse it
        if cls.instance:
            return cls.instance
        cls.instance = super().__new__(cls)
        return cls.instance

    def __init__(self, *args, **kw):
        if getattr(self, "_initialized", False):
            # __init__ is called even when __new__ returns the
            # existing instance, so we have to check again
            return
        self._initialized = True
        self.outgoing = Queue()
        self.responses = Queue()
        self.socket_thread = threading.Thread(target=self.start_socket)
        self.socket_thread.start()

    def start_socket(self):
        # starts an async loop in a separate thread, and keeps
        # the websocket running in this separate thread
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        loop.run_until_complete(self.core())

    async def _send(self, websocket, payload):
        await websocket.send(json.dumps(payload).encode("utf-8"))

    async def _recv(self, websocket):
        data = await websocket.recv()
        return json.loads(data)

    async def core(self):
        uri = "wss://the-third-party-server.com/xyz"
        async with websockets.connect(uri) as websocket:
            self.websocket = websocket
            while True:
                # This code is as you wrote it:
                # it essentially blocks until a message is sent
                # and the answer is received back.
                # You have to have a mechanism in your websocket
                # messages allowing you to identify the corresponding
                # answer to each request. On doing so, this is trivially
                # parallelizable simply by calling asyncio.create_task
                # instead of awaiting on asyncio.gather.
                payload = self.outgoing.get()
                future = self._send(websocket, payload)
                future_r = self._recv(websocket)
                _, response = await asyncio.gather(future, future_r)
                self.responses.put(response)

    def send(self, payload):
        # This is the method you call from your views.
        # Simply do:
        # `output = AWebSocket().send(payload)`
        self.outgoing.put(payload)
        return self.responses.get()
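For completeness, a view using this would look roughly like the following sketch (the route is a placeholder; the first request pays the cost of spinning up the singleton and its thread):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/work", methods=["POST"])
def work():
    # AWebSocket() always returns the per-process singleton;
    # send() blocks this view until the response arrives
    output = AWebSocket().send(request.get_json())
    return jsonify(output)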
Using Python/Tornado I wanted to set up a little "trampoline" server that allows two devices to communicate with each other in a RESTish manner. There are probably vastly superior/simpler "off the shelf" ways to do this, and I'd welcome those suggestions, but I still feel it would be educational to figure out how to do my own using Tornado.
Basically, the idea was that the device in the role of server does a long-poll with a GET. The client device POSTs to the server, at which point the POST body is transferred as the response of the blocked GET. The POST then blocks until the server side does a PUT with the response, which is transferred to the blocked POST and returned to the device. I thought maybe I could do this with tornado.queues, but that appears not to have worked out. My code:
import tornado
import tornado.web
import tornado.httpserver
import tornado.ioloop
import tornado.queues

ToServerQueue = tornado.queues.Queue()
ToClientQueue = tornado.queues.Queue()

class Query(tornado.web.RequestHandler):
    def get(self):
        toServer = ToServerQueue.get()
        self.write(toServer)

    def post(self):
        toServer = self.request.body
        ToServerQueue.put(toServer)
        toClient = ToClientQueue.get()
        self.write(toClient)

    def put(self):
        ToClientQueue.put(self.request.body)
        self.write(bytes())

services = tornado.web.Application([(r'/query', Query)], debug=True)
services.listen(49009)
tornado.ioloop.IOLoop.instance().start()
Unfortunately, ToServerQueue.get() does not actually block until the queue has an item, but rather returns a tornado.concurrent.Future, which is not a legal value to pass to the self.write() call.
I guess my general question is twofold:
1) How can one HTTP verb invocation (e.g. get, put, post, etc) block and then be signaled by another HTTP verb invocation.
2) How can I share data from one invocation to another?
I've only really scratched the surface of the simple/straightforward use cases of making little REST servers with Tornado. I wonder if the coroutine stuff is what I need, but I haven't found a good tutorial/example to help me see the light, if that's indeed the way to go.
1) How can one HTTP verb invocation (e.g. get, put, post, etc) block and then be signaled by another HTTP verb invocation.
2) How can I share data from one invocation to another?
A new RequestHandler object is created for every request, so you need some coordinator, e.g. queues or locks with a state object (in your case that would be re-implementing a queue).
tornado.queues are queues for coroutines. Queue.get, Queue.put and Queue.join return Future objects that need to be "resolved" - the scheduled task completes either with success or an exception. To wait until a future is resolved, you should yield it (just like in the doc examples of tornado.queues). The verb methods also need to be decorated with tornado.gen.coroutine.
import tornado.gen

class Query(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        toServer = yield ToServerQueue.get()
        self.write(toServer)

    @tornado.gen.coroutine
    def post(self):
        toServer = self.request.body
        yield ToServerQueue.put(toServer)
        toClient = yield ToClientQueue.get()
        self.write(toClient)

    @tornado.gen.coroutine
    def put(self):
        yield ToClientQueue.put(self.request.body)
        self.write(bytes())
The GET request will last (waiting in a non-blocking manner) until something is available on the queue (or a timeout occurs, which can be set with a Queue.get argument).
tornado.queues.Queue also provides get_nowait (there is put_nowait as well), which does not have to be yielded - it either immediately returns an item from the queue or raises an exception.
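For instance, a sketch of the timeout variant of the GET handler (the 30-second value is arbitrary, and this assumes Tornado 4.x, where the queue raises tornado.gen.TimeoutError):

from datetime import timedelta

import tornado.gen

class Query(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        try:
            # give up on the long-poll after 30 seconds
            toServer = yield ToServerQueue.get(timeout=timedelta(seconds=30))
            self.write(toServer)
        except tornado.gen.TimeoutError:
            self.set_status(504)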
I have the following scenario I would like to implement:
User surfs to our website
User enters a bitcoin address.
A websocket is created to the server, passing the address.
The server registers a callback with Blocktrail
When the callback is triggered (a payment was seen by Blocktrail) we send a message back to the browser.
The page the user is browsing is updated to show the message received
I'm using webhooks from the Blocktrail API to "listen" for an event: the reception of coins at an address.
Now, when the event happens, the API does a POST to my URL. This should send a message to the browser that is connected to my server with socket.io (such as 'payment seen on blockchain').
So the question is: how can I send a message from a route to a socket using flask-socketio?
Pseudo code:
@app.route('/callback/<address>')
def callback(address):
    socketio.send('payment seen on blockchain')

@socketio.on('address')
def socketlisten(address):
    registerCallback(address)
I'm going to describe how to solve this using Flask-SocketIO beta version 1.0b1. You can also do this with the 0.6 release, but it is a bit more complicated; the 1.0 release makes addressing individual clients easier.
Each client of a socket connection gets assigned a session id that uniquely identifies it, the so-called sid. Within a socket function handler, you can access it as request.sid. Also, upon connection, each client is assigned to a private room, named with the session id.
I assume the metadata that you receive with the callback allows you to identify the user. What you need is to obtain the sid of that user. Once you have it, you can send your alert to the corresponding room.
Example (with some hand-waving regarding how you attach a sid to an address):
@app.route('/callback/<address>')
def callback(address):
    sid = get_sid_from_address(address)
    socketio.send('payment seen on blockchain', room=sid)

@socketio.on('address')
def socketlisten(address):
    associate_address_with_sid(address, request.sid)
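The two helper functions are hand-waved above; a minimal way to implement them, assuming a single process and one address per client, is a module-level dictionary:

# address -> sid mapping for connected clients (assumed single-process)
clients = {}

def associate_address_with_sid(address, sid):
    clients[address] = sid

def get_sid_from_address(address):
    return clients.get(address)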
I'm facing a problem emitting messages from RabbitMQ to the user via SocketIO.
I have Flask application with SocketIO integration.
The problem is I'm not able to set up a RabbitMQ listener which forwards messages to the browser via SocketIO. Every time I get a different error - mostly that the connection is closed, or that I'm working outside of the application context.
I tried many approaches; here is my last one.
# callback
def mq_listen(uid):
    rabbit = RabbitMQ()

    def cb(ch, method, properties, body, mq=rabbit):
        to_return = [0]  # mutable
        message = Message.load(body)
        to_return[0] = message.get_message()
        emit('report_part', {"data": to_return[0]})

    rabbit.listen('results', callback=cb, id=uid)

# this is the page, which the user reaches
@blueprint.route('/report_result/<uid>', methods=['GET'])
def report_result(uid):
    thread = threading.Thread(target=mq_listen, args=(uid,))
    thread.start()
    return render_template("property/report_result.html", socket_id=uid)
where the rabbit.listen method is an abstraction like:
def listen(self, queue_name, callback=None, id=None):
    if callback is not None:
        callback_function = callback
    else:
        callback_function = self.__callback

    if id is None:
        self.channel.queue_declare(queue=queue_name, durable=True)
        self.channel.basic_qos(prefetch_count=1)
        self.consumer_tag = self.channel.basic_consume(callback_function, queue=queue_name)
        self.channel.start_consuming()
    else:
        self.channel.exchange_declare(exchange=queue_name, type='direct')
        result = self.channel.queue_declare(exclusive=True)
        exchange_name = result.method.queue
        self.channel.queue_bind(exchange=queue_name, queue=exchange_name, routing_key=id)
        self.channel.basic_consume(callback_function, queue=exchange_name, no_ack=True)
        self.channel.start_consuming()
which resulted in
RuntimeError: working outside of request context
I would be happy for any tip or usage example.
Thanks a lot
I had a similar issue; at the end of the day, it's because when you make a request, Flask passes the request context along with it. But the solution is NOT to add with app.app_context(). That is hacky and will definitely have errors, as you're not natively sending the request context.
My solution was to create a redirect so that the request context is maintained, like:
import requests

def sendToRedisFeed(eventPerson, type):
    eventPerson['type'] = type
    requests.get('http://localhost:5012/zmq-redirect', json=eventPerson)
This is my redirect function; whenever there is an event I'd like to push to my PubSub, it goes through this function, which then pushes to that localhost endpoint.
from flask import Flask, Response, request
from flask_sse import sse

app = Flask(__name__)
app.config["REDIS_URL"] = "redis://localhost"  # flask-sse needs a Redis connection
app.register_blueprint(sse, url_prefix='/stream')

@app.route('/zmq-redirect', methods=['GET'])
def send_message():
    try:
        sse.publish(request.get_json(), type='greeting')
        return Response('Sent!', mimetype="text/event-stream")
    except Exception as e:
        print(e)
Now, whenever an event is pushed to my /zmq-redirect endpoint, it is redirected and published via SSE.
And now finally, just to wrap everything up, the client:
var source = new EventSource("/stream");
source.addEventListener(
    "greeting",
    function(event) {
        console.log(event);
    }
);
The error message suggests that it's a Flask issue. While handling requests, Flask sets a context, but because you're using threads this context is lost. By the time it's needed, it is no longer available, so Flask gives the "working outside of request context" error.
A common way to resolve this is to provide the context manually. There is a section about this in the documentation: http://flask.pocoo.org/docs/1.0/appcontext/#manually-push-a-context
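Applied to the code above, that would mean pushing the context inside the worker thread - a minimal sketch, assuming app is your Flask application object and that an application context is what emit needs here:

def mq_listen(app, uid):
    with app.app_context():  # manually push the context for this thread
        rabbit = RabbitMQ()

        def cb(ch, method, properties, body):
            message = Message.load(body)
            emit('report_part', {"data": message.get_message()})

        rabbit.listen('results', callback=cb, id=uid)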
Your code doesn't show the socketio part. But I wonder if using something like flask-socketio could simplify some stuff... (https://flask-socketio.readthedocs.io/en/latest/). I would open up the RabbitMQ connection in the background (preferably once) and use the emit function to send any updates to connected SocketIO clients.
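A rough sketch of that idea, reusing the RabbitMQ wrapper from the question (socketio.emit, unlike the bare emit used above, does not need a request context; some_uid is a placeholder):

import threading

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

def mq_worker(uid):
    rabbit = RabbitMQ()

    def cb(ch, method, properties, body):
        message = Message.load(body)
        # broadcast to connected SocketIO clients
        socketio.emit('report_part', {"data": message.get_message()})

    rabbit.listen('results', callback=cb, id=uid)

threading.Thread(target=mq_worker, args=(some_uid,), daemon=True).start()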
I want to emit a message from the server to the client.
I have looked at this, but cannot use it because I cannot create a namespace instance:
How to emit SocketIO event on the serverside
My use case is:
I have a database of product prices. A lot of users are currently surfing my website; some of them are viewing product X.
On the server side, the admin can edit the price of a product. If he edits the price of X, all the clients must see a notification that X's price changed (e.g. a simple JS alert).
My client javascript now:
var socket = io.connect('/product');

// notify server that this client is viewing product X
socket.emit("join", current_product.id);

// upon receiving a msg from the server
socket.on('notification', function (data) {
    alert("Price change");
});
My server code (socket.py):
@namespace('/products')
class ProductsNamespace(BaseNamespace, ProductSubscriberMixin):
    def initialize(self, *args, **kwargs):
        _connections[id(self)] = self
        super(ProductsNamespace, self).initialize(*args, **kwargs)

    def disconnect(self, *args, **kwargs):
        del _connections[id(self)]
        super(ProductsNamespace, self).disconnect(*args, **kwargs)

    def on_join(self, *args):
        print "joining"

    def emit_to_subscribers(self): pass
I use the runserver_socketio.py as in this link.
(Thanks to Calvin Cheng for this excellent up-to-date example.)
I don't know how to call emit_to_subscribers, since I have no instance of the namespace.
As I read from this doc ,
Namespaces are created only when some packets arrive that ask for the namespace.
But how can I send the packet to that namespace from the code? If I can only create the instance when a client emits a message to the server, then when no one is surfing the site right after the admin finishes editing the price, the system will fail.
I am really confused about namespaces and their instances. If you know of any clearer docs, please point me to them.
Thanks a lot!
This is my current state of understanding; hopefully it will be helpful to someone. Building up further from How to emit SocketIO event on the serverside, you now have a dictionary with ProductsNamespace objects as values. You can iterate through this dictionary to find the desired socket object. For example, if you set a socket identifier upon connection, as described in the Django and Flask example apps (by using the on_nickname method), then you can retrieve the socket like so:
for key in _connections:
    socket = _connections[key]
    if 'nickname' in socket.session and socket.session['nickname'] == unicode('uniqueName'):
        socket.emit('eventTag', 'message from server')
Similarly, socket.session['rooms'] can be used to emit to all members of a room, and if there are multiple SocketIO namespaces, socket.ns_name can be used to select among them.
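For example, the room case can follow the same pattern - a sketch, where 'room_name' is a placeholder and the exact strings stored in socket.session['rooms'] depend on your RoomsMixin setup:

for key in _connections:
    socket = _connections[key]
    if 'rooms' in socket.session and 'room_name' in socket.session['rooms']:
        socket.emit('eventTag', 'message for everyone in the room')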