Mixing SSEs into Tornado - python

I am trying to incorporate a legacy SSE server + SSE client with Tornado (a server that collects SSEs from processes and distributes them to clients through a UDP socket). The first SSE GET request that we make works perfectly. The only issue is that Tornado gets locked up when the user navigates away from the web app and back: the web application will never load a second time.
I have a RequestHandler that is NOT asynchronous; it uses the client to wait in a while True loop, reading from a non-blocking Python UDP socket. These messages are then written and flushed to the browser. The browser successfully receives the SSEs.
In my RequestHandler, on_connection_close and on_finish are never called. These are supposed to stop the client and break out of the while True loop. Is this because my GET request is NOT a coroutine?
What is the correct way to do this in Tornado? I can show a code snippet if it's really needed, but the question should be self-explanatory.

I was able to figure it out myself, after some experimentation.
on_finish() was never called because I needed to call finish() explicitly, and on_connection_close() was never called because my GET handler was not a coroutine. I was able to resolve the issue by making the handler a coroutine with the yield keyword.
More information can be found here: http://www.tornadoweb.org/en/stable/guide/coroutines.html
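For anyone hitting the same problem, the fix looks roughly like the sketch below. This is a minimal sketch assuming Tornado 4.x; read_from_udp is a hypothetical stand-in for the legacy UDP client and is not part of the original handler.

import tornado.web
from tornado import gen
from tornado.iostream import StreamClosedError

class SSEHandler(tornado.web.RequestHandler):

    @gen.coroutine
    def get(self):
        self.set_header("Content-Type", "text/event-stream")
        self.set_header("Cache-Control", "no-cache")
        self._alive = True
        try:
            while self._alive:
                msg = self.read_from_udp()   # hypothetical non-blocking UDP read
                if msg is not None:
                    self.write("data: %s\n\n" % msg)
                    yield self.flush()       # yields control back to the IOLoop
                yield gen.sleep(0.1)         # don't busy-wait between polls
            self.finish()                    # explicit finish() so on_finish() runs
        except StreamClosedError:
            pass                             # browser navigated away mid-write

    def on_connection_close(self):
        # Now gets called when the user navigates away, because get()
        # regularly yields control back to the IOLoop.
        self._alive = False

    def read_from_udp(self):
        # Stand-in for the legacy UDP client; returns a message or None.
        return None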

Related

How can a Python socket detect that the server is closed while it keeps sending data to it?

I use a Python socket to send data to the server in a loop, with a SEND_INTERVAL pause between sends and a try/except around the send.
When I close the server, the client still sends the data twice and only then hits the except branch. If I set SEND_INTERVAL too long, that delay is a disaster. So, how can I get the error immediately when the server is closed or goes down?
Nothing happens immediately over the network. That's one thing.
Secondly, the underlying OS will detect broken connections (and Python gets that info in the form of an exception), but this usually takes time. That's why you keep sending messages even though the connection is already dead. And since the OS operates at the network layer (as opposed to the application layer), there's a further issue: the connection may be dead, yet the OS may never detect it. This will happen, for example, when the server is dead but sits behind a proxy that is still alive.
Thirdly, the most reliable way to know that a server is alive is when it sends something back to the client. So you should always .recv() (with a timeout) after each .sendall() call, and the server should always .sendall() after each .recv() (the request-response pattern, even when the response is a simple "I received the message"). If you can't modify the server side and (in the worst case) the server doesn't send anything back to the client, then there is no reliable way to know.
That's why you need some form of framing/acknowledgement protocol. A plain .sendall() won't do.
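A minimal sketch of that request-response pattern on the client side. HOST, PORT, the one-byte acknowledgement, and send_with_ack are assumptions for illustration, not part of the original code:

import socket

HOST, PORT = "127.0.0.1", 9000

def send_with_ack(sock, payload, timeout=2.0):
    sock.sendall(payload)
    sock.settimeout(timeout)
    try:
        ack = sock.recv(1)            # the server is expected to answer every message
    except socket.timeout:
        raise RuntimeError("no acknowledgement; server may be down")
    if not ack:                       # an empty read means the peer closed the socket
        raise RuntimeError("server closed the connection")
    return ack

sock = socket.create_connection((HOST, PORT))
send_with_ack(sock, b"hello\n")       # fails within the timeout if the server is gone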

Send websocket messages on demand, outside of callbacks?

I have the backend to a toy video game engine written in Python. It's running on my server in its continuous game loop. I want it to be able to send messages to web browser clients over websockets.
However, it looks to me like websockets are universally limited to sending information from callbacks alone. I tried the Autobahn websockets library for Python, but when the server is run, it runs in a blocking loop, so you can't even interact with it; you can only define its behavior ahead of time in callbacks.
I just want to be able to instantiate some kind of MyWebsocketNetwork class that runs its server in the background, and then call myWebsocketNetwork.sendToAll("my message") anywhere in my code to send messages on demand. NOT in callbacks, but on demand. Again, I can't find a way to do this with Autobahn (or any other library), since they all run in blocking loops.
Is this in general not possible due to the nature of websockets? Or is there some way I can send websocket messages to my clients on demand in Python (on demand = dynamically and conditionally, based upon what happens in my game's loop)?
Not sure I understand what you mean; callbacks are on-demand. It sounds like you need to run your WebSocket server in a separate thread.
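To make the comment concrete, here is a rough sketch of the separate-thread approach, using Tornado's WebSocket support rather than Autobahn and assuming Tornado 4.x threading semantics. GameSocket and send_to_all are invented names:

import threading
import tornado.ioloop
import tornado.web
import tornado.websocket

clients = set()

class GameSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        clients.add(self)

    def on_close(self):
        clients.discard(self)

loop = tornado.ioloop.IOLoop()

def run_server():
    loop.make_current()                       # this thread owns the IOLoop
    tornado.web.Application([(r"/ws", GameSocket)]).listen(8888)
    loop.start()

def send_to_all(message):
    # add_callback is the IOLoop method that is safe to call from other
    # threads, so the actual writes happen on the server thread.
    loop.add_callback(lambda: [c.write_message(message) for c in clients])

threading.Thread(target=run_server).start()

# anywhere in the game loop:
send_to_all("my message")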

How to store real-time chat messages in database?

I am currently using MySQLdb for my database, and I need to integrate a real-time messaging feature. The chat demo that Tornado provides does not use a database (whereas the blog demo does).
This messaging service will also double as email in the future (like Facebook's messaging, where the chat platform is also email). Regardless, I would like to make sure that my current, first chat version can be expanded to function as email, and overall I need to store messages in a database.
Is something like this as simple as: for every chat message sent, query the database and display the message on the users' screens? Or is this method prone to high server load and poor optimization? How exactly should I structure the "infrastructure" to make this work?
(I apologize for some of the inherent subjectivity in this question; however, I prefer to "measure twice, code once.")
Input, examples, and resources appreciated.
Regards.
Tornado is a single-threaded, non-blocking server.
What this means is that if you make any blocking calls on the main thread, you will eventually kill performance. You might not notice this at first because each database call might only block for 20 ms, but once you are making more than 200 database calls per second, your application will effectively be locked up.
However, that's quite a few DB calls. In your case, that would be 200 people hitting send on their chat message in the same second.
What you probably want to do is use a queue with a non-blocking API. Tornado receives a chat message; you put it on the queue to be saved to the database by another process, and then you send the chat message back out to the other chat members.
When someone connects to a chat session, you also need to send a request to the queue for all the previous messages; when the queue responds, you send those to the newly connected user.
That's how I would approach the problem anyway.
Also see this question and answer: Any suggestion for using non-blocking MySQL api on Tornado in Python3?
Just remember, Tornado is single-threaded. It's amazing and can handle thousands of simultaneous connections, but if the code for one of those connections blocks for 1 second, then NOTHING else will be done for any other connection during that second.
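A rough sketch of that hand-off. The handler never touches MySQL on the IOLoop thread; it drops the message on a queue and a worker does the blocking INSERT (a worker thread here, though the answer suggests a separate process). save_message, waiters, and the handler name are illustrative, not taken from the Tornado chat demo:

import threading
import Queue                              # "queue" on Python 3
import tornado.web

db_queue = Queue.Queue()
waiters = []                              # connected listeners (e.g. websockets)

def save_message(message):
    pass                                  # the real (blocking) MySQLdb INSERT goes here

def db_writer():
    while True:
        save_message(db_queue.get())      # blocks only this worker thread

worker = threading.Thread(target=db_writer)
worker.daemon = True
worker.start()

class NewMessageHandler(tornado.web.RequestHandler):
    def post(self):
        message = {"from": self.get_argument("from"),
                   "body": self.get_argument("body")}
        db_queue.put_nowait(message)      # never blocks the IOLoop
        for waiter in waiters:            # fan the message out right away
            waiter.write_message(message)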

Cancel xmlrpc client request?

Is it possible to somehow cancel an xmlrpc client request?
Let's say that in one thread I have code like:
svr = xmlrpclib.ServerProxy('http://localhost:9092')
svr.DoSomethingWhichNeedTime()
I don't mean some kind of timeout... Sometimes another thread can receive an event telling it to cancel my work, and then I need to cancel this request.
I know that I can do it with Twisted, but is it possible with the standard xmlrpclib?
First of all, this must be implemented on the server side, not in the client (xmlrpclib). If you simply interrupt your HTTP request to the XML-RPC server, there is no guarantee that the long-running process on the server will be interrupted at all. So xmlrpclib simply can't have this functionality.
If you want to implement this behaviour, you need two types of request. A request of the first type tells your server to start some long process; the process must be executed in the background (in another thread or process), and your XML-RPC server must send the response ("Process started!") to the client immediately. When you want to stop the process, the client sends another request that tells your server to stop executing it.
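A minimal sketch of that two-request pattern on the server side (Python 2 module names, to match xmlrpclib); start_work, stop_work, and the sleep-based job are made-up placeholders:

import threading
import time
from SimpleXMLRPCServer import SimpleXMLRPCServer   # xmlrpc.server on Python 3

stop_event = threading.Event()

def long_job():
    while not stop_event.is_set():
        time.sleep(1)                 # stand-in for one chunk of the real work

def start_work():
    stop_event.clear()
    threading.Thread(target=long_job).start()
    return "Process started!"         # respond to the client immediately

def stop_work():
    stop_event.set()                  # the worker notices at its next check
    return "Stopping"

server = SimpleXMLRPCServer(("localhost", 9092))
server.register_function(start_work)
server.register_function(stop_work)
server.serve_forever()

The client then just calls svr.start_work(), and the thread that receives your cancel event calls svr.stop_work().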
Yes, if you want to do really dirty hacks....
Basically, the ServerProxy object keeps a handle to the underlying socket/HTTP connection. If you reach into those internals and simply close() the socket, your client code will blow up with an exception. If you handle that exception properly, that's your cancel.
You can do it a little more sanely if you register your own transport class for the ServerProxy via the transport parameter and give it a cancel method that does what you want.
That won't stop the server from processing things, unless it reacts to the channel being closed.
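If you do go the dirty-hack route, it looks roughly like this. cancel() is an invented method name, not an xmlrpclib API, and the sketch assumes the Python 2 xmlrpclib module:

import xmlrpclib

class CancellableTransport(xmlrpclib.Transport):
    def __init__(self):
        xmlrpclib.Transport.__init__(self)
        self._active = None

    def make_connection(self, host):
        # remember the httplib connection so another thread can kill it
        self._active = xmlrpclib.Transport.make_connection(self, host)
        return self._active

    def cancel(self):
        # closing the connection makes the in-flight call raise an exception
        # in the calling thread; handling that exception is your "cancel"
        if self._active is not None:
            self._active.close()

transport = CancellableTransport()
svr = xmlrpclib.ServerProxy('http://localhost:9092', transport=transport)
# worker thread:   svr.DoSomethingWhichNeedTime()
# other thread:    transport.cancel()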

Design question on Python network programming

I'm currently writing a project in Python which has a client and a server part. I'm having trouble with the network communication, so I need to explain some things...
The client mainly performs operations the server tells it to and sends the results of those operations back to the server. I need a way to communicate bidirectionally over a TCP socket.
Current Situation
I currently use a LineReceiver from the Twisted framework on the server side, and a plain Python socket (with ssl) on the client side (because I was unable to correctly implement a Twisted PushProducer). There is a Queue on the client side that gets filled with data to be sent to the server; a subprocess continuously pulls data from the queue and sends it to the server (see the code below).
This scenario works well as long as only the client pushes its results to the manager. There is no possibility for the server to send data to the client; more accurately, there is no way for the client to receive data the server has sent.
The Problem
I need a way to send commands from the server to the client.
I thought about listening for incoming data in the client loop I use to send data from the queue:
def run(self):
    while True:
        data = self.queue.get()
        logger.debug("Sending: %s", repr(data))
        data = cPickle.dumps(data)
        self.socket.write(data + "\r\n")
        # Here would be a good place to listen on the socket
But there are several problems with this solution:
the SSLSocket.read() method is a blocking one
if there is no data in the queue, the client will never receive any data
Yes, I could use Queue.get_nowait() instead of Queue.get(), but all in all it's not a good solution, I think.
The Question
Is there a good way to achieve these requirements with Twisted? I really don't have enough Twisted skills to find my way around in there. I don't even know if using the LineReceiver is a good idea for this kind of problem, because it cannot send any data if it does not receive data from the client; there is only a lineReceived event.
Is Twisted (or, more generally, any event-driven framework) able to solve this problem? I don't even have a real event on the communication side. If the server decides to send data, it should be able to send it; there should be no need to wait for any event on the communication side.
"I don't even know if using the LineReceiver is a good idea for this kind of problem, because it cannot send any data, if it does not receive data from the client. There is only a lineReceived event."
You can send data using protocol.transport.write from anywhere, not just in lineReceived.
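For example (a sketch, with made-up names like CommandFactory and push_command): the factory keeps track of connected protocols, and any other code, here a timer, pushes a line to them without waiting for lineReceived:

from twisted.internet import reactor, task
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class CommandProtocol(LineReceiver):
    def connectionMade(self):
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def lineReceived(self, line):
        pass                              # results coming back from the client

class CommandFactory(Factory):
    protocol = CommandProtocol

    def __init__(self):
        self.clients = []

    def push_command(self, command):
        for client in self.clients:
            client.sendLine(command)      # server-initiated send, no incoming event needed

factory = CommandFactory()
task.LoopingCall(factory.push_command, "do-something").start(5.0)
reactor.listenTCP(4321, factory)
reactor.run()

The same push_command call could just as well be made from wherever the server decides it has a command to send.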
"I need a way to send commands from the server to the client."
Don't do this. It inverts the usual meaning of "client" and "server". Clients take the active role and send stuff to, or request stuff from, the server.
Is Twisted (or, more generally, any event-driven framework) able to solve this problem?
It shouldn't. You're inverting the role of client and server.
If the server decides to send data, it should be able to send it;
False, actually.
The server is constrained to wait for clients to request data. That's generally the accepted meaning of "client" and "server".
"One to send commands to the client and one to transmit the results to the server. Does this solution sound more like a standard client-server communication for you?"
No.
If a client sent messages to a server and received responses from the server, it would meet more usual definitions.
Sometimes, this sort of thing is described as having "Agents", which are each a kind of server, and a "Controller", which is a single client of all these servers.
The controller dispatches work to the agents. The agents are servers: they listen on a port, accept work from the controller, and do the work. Each Agent must do two concurrent things (usually via the select API):
Monitor a well-known socket on which it will receive work from the one-and-only client.
Do the work (in the background).
This is what Client-Server usually means.
If each Agent is a Server, you'll find lots of libraries will support this. This is the way everyone does it.
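A bare-bones sketch of one Agent in that layout: a select() loop watches the agent's well-known port for work from the Controller while a worker thread does the job in the background. The port number and do_work are invented for illustration:

import select
import socket
import threading

def do_work(job):
    pass                                  # the actual background job

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 5555))          # the agent's well-known port
listener.listen(5)

sockets = [listener]
while True:
    readable, _, _ = select.select(sockets, [], [])
    for sock in readable:
        if sock is listener:
            conn, _ = sock.accept()       # the controller connected
            sockets.append(conn)
        else:
            job = sock.recv(4096)
            if not job:                   # controller hung up
                sockets.remove(sock)
                sock.close()
            else:
                # hand the job to a background thread so the select() loop
                # stays responsive to new work
                threading.Thread(target=do_work, args=(job,)).start()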
