I'm using tornado websockets and it works fine.
However, I'd like to listen for changes to a MongoDB Collection and send new changes to the websocket client.
I cannot get it running with threads, and I saw that using threads with tornado is discouraged.
I'm really stuck right now. How can I proceed?
My current (blocking) code:
def open(self):
    print("Opening Connection")
    with self.collection.watch() as stream:
        for change in stream:
            doc = change["fullDocument"]
            self.write_message(json.dumps(doc))
It looks like Motor can handle MongoDB change streams; its change-stream API works with both asyncio and Tornado: https://motor.readthedocs.io/en/stable/api-asyncio/asyncio_motor_change_stream.html
Personally, I find RethinkDB or Firebase to be better alternatives for realtime features like this, but without knowing your requirements I cannot say whether either is a good fit for you.
Related
I am using the Python library websocket-client to connect to a server and receive data:
import json
from websocket import create_connection

api_data = {...}
connection = create_connection(DOMAIN)
connection.send(json.dumps(api_data))  # send() takes str/bytes, so serialise the dict
while True:
    data = connection.recv()
    print(data)
This works and all, but in order to write a more sophisticated, fail-safe application I need to understand how a WebSocket receive actually works. If my program is busy doing other things (for example, if I add a time.sleep(10) call in my while loop), will the updates "queue" so that when I finally call connection.recv() I obtain them all, or will messages from the server be lost? Any links that explain practical things like this are very welcome, because I feel like I lack some fundamental knowledge; I don't see questions like this posed anywhere.
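For what it's worth, the queuing behaviour is easy to observe with plain stdlib sockets: bytes you haven't read yet sit in the OS receive buffer (until it fills, at which point TCP flow control pushes back on the sender) rather than being lost. A minimal sketch, with a socketpair standing in for the server connection:

```python
import socket
import time

# A connected socket pair stands in for the client/server TCP connection.
a, b = socket.socketpair()
a.sendall(b"msg1\n")
a.sendall(b"msg2\n")
time.sleep(0.5)          # the receiver is "busy doing other things"
data = b.recv(4096)      # nothing was lost: both messages were buffered
print(data)              # b'msg1\nmsg2\n'
```

With websocket-client on top of that, each connection.recv() call returns one complete message from the buffered stream at a time.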
First important point for me: I want to implement websockets. I do not need the fallback options of socketIO.
I would like "my clients" to be able to implement whatever they want, as long as they stick to the WebSocket protocol, namely something like: var ws = new WebSocket.
So.. if the server is Flask-SocketIO, will a simple js WebSocket work?
ADDITIONAL NOTES:
Python first!
I am trying to set up a server which will respond only (actually, only send) to WebSockets, with no web page associated. (Yes, I am fine with WS and I do not need WSS, in case you ask ;) ).
I had a try on the server side with flask-sockets (https://github.com/kennethreitz/flask-sockets), but it is giving me some problems, like closing the connection immediately, and despite many similar reports on the web I could not find a solution. It is hard to debug, too. So before I start developing a new server...
Sadly no, you cannot use a Socket.IO server with plain WebSocket clients. Sorry, that's not what Flask-SocketIO was made for.
(in case this isn't clear, this is the author of Flask-SocketIO speaking)
I am developing a Python service for XBMC and I am hopelessly stuck. XBMC has a TCP API that communicates via JSON-RPC. The TCP server socket is mainly designed to receive commands and respond, but if something happens in the system it also sends a "Notification" over the same TCP connection. The problem is that I need to create a TCP client that behaves like a server, so that it is able to receive these notifications. Wherever I call socket.recv(4096), it blocks waiting for data and stalls my code, because I need my main loop to keep running. The structure of the code is basically like this:
import xbmc, xbmcgui, xbmcaddon

class XPlayer(xbmc.Player):
    def __init__(self):
        xbmc.Player.__init__(self)

    def onPlayBackStarted(self):
        if xbmc.Player().isPlayingVideo():
            doPlayBackStartedStuff()

player = XPlayer()
doStartupStuff()

while (not xbmc.abortRequested):
    if xbmc.Player().isPlayingVideo():
        doPlayingVideoStuff()
    else:
        doPlayingEverythingElseStuff()
    xbmc.sleep(500)
    # This loop is the most essential part of the code

if (xbmc.abortRequested):
    closeEverything()
    xbmc.log('Aborting...')
I tried everything: threading, multiprocessing, blocking, non-blocking, and nothing helped.
Thanks,
You likely want select(), poll() or epoll():
http://docs.python.org/library/select.html
This Python pipe-progress-meter application uses select, as an example:
http://stromberg.dnsalias.org/~strombrg/reblock.html
If you know what sort of delimiters are separating the various portions of the protocol, you may also find this useful, without a need for select or similar:
http://stromberg.dnsalias.org/~strombrg/bufsock.html
It deals pretty gracefully with "read to the next null byte", "read a maximum of 100 bytes", etc.
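To make the select() approach concrete, here is a small stdlib sketch: poll the socket with a zero (or short) timeout on each pass of the main loop, and only call recv() when data is actually waiting. The socketpair is a stand-in for the XBMC TCP connection:

```python
import select
import socket

def recv_if_ready(sock, timeout=0.0, bufsize=4096):
    """Return pending bytes, or None if nothing arrives within timeout."""
    ready, _, _ = select.select([sock], [], [], timeout)
    return sock.recv(bufsize) if ready else None

# Demo: a socketpair stands in for the XBMC TCP connection.
xbmc_side, client_side = socket.socketpair()
print(recv_if_ready(client_side))               # None: no notification yet
xbmc_side.sendall(b'{"method": "Notification"}')
print(recv_if_ready(client_side, timeout=1.0))  # the notification bytes
```

Calling recv_if_ready() once per iteration of the xbmc.sleep(500) loop keeps the loop responsive while still picking up notifications as they arrive.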
The end result I am trying to achieve is to allow a server to assign specific tasks to a client when it makes its connection. A simplified version would be like this:
Client connects to Server
Server tells Client to run some network task
Client receives task and fires up another process to complete task
Client tells Server it has started
Server tells Client it has another task to do (and so on...)
A couple of notes
There would be a cap on how many tasks a client can do
The client would need to be able to monitor the task/process (running? died?)
It would be nice if the client could receive data back from the process to send to the server if needed
At first, I was going to try threading, but I have heard python doesn't do threading correctly (is that right/wrong?)
Then I thought about firing off a system call from Python and recording the PID, then sending it certain signals for status and stop (SIGUSR1, SIGUSR2, SIGINT). But I am not sure that will work, because I don't know whether I can capture data from another process. If you can, I don't have a clue how that would be accomplished (stdout, or a socket file?).
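For reference, capturing a child process's stdout is straightforward with the stdlib subprocess module; a minimal sketch, where the `python -c` command line is just a placeholder for a real task:

```python
import subprocess
import sys

# Placeholder task: any command whose output you want to capture.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('task finished')"],
    stdout=subprocess.PIPE,
    text=True,
)
print("started pid:", proc.pid)  # PID you could monitor or signal later
out, _ = proc.communicate()      # blocks until the child exits
print("captured:", out.strip())
print("exit code:", proc.returncode)
```

proc.poll() gives a non-blocking "running or died?" check, and os.kill(proc.pid, signal.SIGUSR1) covers the signalling part of the question.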
What would you guys suggest as far as the best way to handle this?
Use spawnProcess to spawn a subprocess. If you're using Twisted already, then this should integrate pretty seamlessly into your existing protocol logic.
Use Celery, a Python distributed task queue. It probably does everything you want or can be made to do everything you want, and it will also handle a ton of edge cases you might not have considered yet (what happens to existing jobs if the server crashes, etc.)
You can communicate with Celery from your other software using a messaging queue like RabbitMQ; see the Celery tutorials for details on this.
It will probably be most convenient to use a database such as MySQL or PostgreSQL to store information about tasks and their results, but you may be able to engineer a solution that doesn't use a database if you prefer.
I use Tornado as the web server, and I have written some Python daemons which run on the server hardware. Sometimes the web server needs to send some data to a daemon and receive computed results back. There are two working modes:
1. Asynchronous mode: the server sends some data to the daemons and doesn't need the results soon. Can I use a message queue to do this?
2. Synchronous mode: the server sends data to the daemons and waits until it gets the results. Should I use sockets?
So what's the best way of communication between Tornado and a Python-based daemon?
ZeroMQ can be used for this purpose. It has various sockets for different purposes and it's fast enough to never be your bottleneck. For asynchronous you can use DEALER/ROUTER sockets and for strict synchronous mode you can use REQ/REP sockets.
You can use the Python binding for this: http://www.zeromq.org/bindings:python
For the async mode you can try something like the router-to-dealer async routing example in chapter 3 of the zguide.
In that example's diagram, the "client" will be your web server and your daemon will be the "worker".
For synchronous mode you can try a simple request-reply broker or some variant that suits your needs.
The zguide's diagram for that pattern shows a strictly synchronous cycle of send/recv at the REQ/REP sockets. Read through the zguide to understand how it works; they also have a Python code snippet on the page.
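For the strict synchronous mode, a minimal REQ/REP sketch looks like this, assuming pyzmq is installed; the daemon thread, the doubling "computation", and the inproc endpoint are all placeholders (between real processes you would use a tcp:// endpoint instead):

```python
import threading

import zmq  # pip install pyzmq

ctx = zmq.Context.instance()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://compute")  # placeholder endpoint; use tcp://... across processes

def daemon():
    data = rep.recv_json()                    # blocks until a request arrives
    rep.send_json({"result": data["x"] * 2})  # stand-in for real computation

worker = threading.Thread(target=daemon)
worker.start()

req = ctx.socket(zmq.REQ)
req.connect("inproc://compute")
req.send_json({"x": 21})
reply = req.recv_json()                       # strict send/recv lockstep
worker.join()
print(reply)  # {'result': 42}
```

The REQ socket enforces the send-then-recv cycle, which is exactly the synchronous behaviour asked about.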
Depending on the scale, the simple thing is to just use HTTP and the AsyncHTTPClient in Tornado. For the request<->response case, our application handles around 300 connections/second with this approach.
For the first case (fire and forget), you could also use AsyncHTTPClient and just have the server close out the connection and continue working...