Send data to a WebSocket handler on closing connection - Python

I have a Tornado WebSocket handler and I am sending messages from my browser (I have overridden on_message, on_close, and open).
In JavaScript, on close, I want to send some data to the handler (to clean up some storage; I am sending some numbers as JSON, like {'storage': 22, 'time': 96}).
How can the WebSocket handler in Tornado receive that closing message?
I looked at close and on_close, but there is no option to receive data.

If I understand what you're asking for correctly, it's impossible.
You want to make sure that when the connection is closed, and the browser calls the on_close function in your client-side JavaScript code, it can send some final data to the Tornado server.
But when the connection is closed, there's no way to send any more data. That's what it means to be closed.
What you need to do is create a "quit" or similar message, at the application level. When Tornado sends a "quit" message to the JS code, then it can send its final message; when Tornado receives that message, it can close the socket. (Of course this means you need to write your code to handle the case where that "graceful shutdown" never happens because, e.g., the client machine has been vaporized by a nuclear bomb.)
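A minimal sketch of that application-level handshake on the Tornado side, assuming the client sends a final JSON message such as {'type': 'quit', 'storage': 22, 'time': 96} before the socket goes away; the 'type' field and the cleanup_storage() helper are invented for illustration:

# Sketch only: the "quit" message shape and cleanup_storage() are hypothetical.
import json
import tornado.websocket

class MyHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print("connection opened")

    def on_message(self, message):
        data = json.loads(message)
        if data.get("type") == "quit":
            # The client is saying goodbye: clean up while the socket is
            # still open, then close it from the server side.
            self.cleanup_storage(data.get("storage"), data.get("time"))
            self.close()
        else:
            pass  # handle normal messages here

    def on_close(self):
        # No data arrives here; the connection is already gone.
        print("connection closed")

    def cleanup_storage(self, storage, time):
        print("cleaning storage %s at %s" % (storage, time))

The key point is that the cleanup happens in on_message, while the connection is still open; by the time on_close runs there is nothing left to read.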

Related

python-socketio asyncio client: Is it possible to know when emit has completely sent data over the wire to the server?

Do python-socketio or the underlying python-engineio have any kind of confirmation that a specific message was completely delivered to the other side, similar to what TCP does to ensure all data was successfully transferred?
I have a kind of pub/sub service built on a python-socketio server, which sends back an ok/error status when a request has been processed. But in my python-socketio client I sometimes just need to fire and forget a message to the pub/sub, yet I have to wait until it has been completely delivered before I terminate my application.
So, my naive code:
await sio.emit("publish", {my message})
It seems the await above just schedules the send over the wire with asyncio, but does not wait for the send to complete. I suppose that's by design. I just need to know whether it is possible to tell when the send is complete.
Socket.IO has ACK packets that can be used for the receiving side to acknowledge receipt of an event.
When using the Python client and server, you can replace the emit() with call() to wait for the ack to be received. The return value of call() is whatever data the other side returned in the acknowledgement.
Note that for this to work, the other side also needs to be extended to send these ACK packets. If your other side is also Python, an event handler can issue an ACK simply by returning something from the handler function. The data that you return is included in the ACK packet. If the other side is JavaScript, you get a callback function passed as the last argument to your handler. The handler needs to call this function, passing any data that it wants to send back to the other side as the response.
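A rough sketch of the difference, assuming Python on both sides; the event name "publish", the URL, and the payload are placeholders:

# Sketch: call() blocks until the server's handler acknowledges the event.
import socketio

# --- client side ---
async def publish_and_wait():
    sio = socketio.AsyncClient()
    await sio.connect("http://localhost:5000")
    # call() returns whatever the server's handler returned in the ACK packet.
    status = await sio.call("publish", {"topic": "news", "body": "hello"})
    print("server acknowledged with:", status)
    await sio.disconnect()

# --- server side (Python) ---
sio_server = socketio.AsyncServer(async_mode="asgi")

@sio_server.event
async def publish(sid, data):
    # Returning a value from the handler sends it back in the ACK packet.
    return {"ok": True}

# run the client with: asyncio.run(publish_and_wait())

With call() the coroutine does not return until the acknowledgement arrives (or a timeout expires), so it doubles as the delivery confirmation the question asks for.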

Server-Sent Events: detect client disconnect?

I have a lighttpd server with Python installed; it's using CGI. I'm able to set up a connection with Server-Sent Events, but I'm unsure how to detect if the client disconnects. I read somewhere that it's impossible to tell whether a client has disconnected unless you send a message to detect it. I'm unsure how to detect the client disconnect after sending a message. Whenever I send a message I just do...
print(message)
sys.stdout.flush()
Do I have to read stdin to check if the client disconnected or not?
The next version of lighttpd (1.4.40) detects if a client disconnects and sends a TERM signal to the CGI process if it is still running.
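Under that assumption (lighttpd 1.4.40 or newer), a CGI script could treat the TERM signal as its disconnect notification. This is only a sketch, with the cleanup reduced to exiting:

#!/usr/bin/env python
# Sketch only: relies on lighttpd >= 1.4.40 sending SIGTERM to the CGI when
# the SSE client disconnects. The handler name is illustrative.
import signal
import sys
import time

def on_client_disconnect(signum, frame):
    # lighttpd ended the connection; do any cleanup and exit.
    sys.exit(0)

signal.signal(signal.SIGTERM, on_client_disconnect)

sys.stdout.write("Content-Type: text/event-stream\r\n\r\n")
sys.stdout.flush()

while True:
    sys.stdout.write("data: ping\n\n")
    sys.stdout.flush()
    time.sleep(5)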

Handler that needs to push a message to ZeroMQ hangs whenever it is executed

I have nginx in front of 8 instances of Tornado, and for some requests (a handler for comments), I need Tornado to push messages on ZeroMQ. I am doing this at the end of the handler (just before I send the response to the client):
# here is body of handler for comments
context = zmq.Context()
port = "5252"
socket = context.socket(zmq.PUSH)
socket.bind("tcp://*:%s" % port)
print "Running server on port: ", port
socket.send("Commented")
# here I flush response to client
But this hangs. Is this the right way to push to ZeroMQ whenever the handler is executed?
Is this the right way to push to ZeroMQ whenever the handler is executed?
No. Your code calls zmq.Context() every time the request handler is invoked. This is bad. It should be called exactly once - usually at the very beginning of your process, perhaps in some kind of init handler. You can safely share the context instance among any number of threads.
Same thing with socket creation and binding - this should be done once at startup. You must be more careful with the socket: if all your handlers (application, request, etc.) execute in the same thread each time a handler is called, then you can safely use the same socket.
Another problem is the way you are sending to a PUSH socket. As described in http://api.zeromq.org/3-2:zmq-socket, a send on a PUSH socket may very well block in certain situations, and you probably want to avoid that. Use a zmq.Poller with the POLLOUT flag (and a 0 timeout) to determine whether the send would block. If not, send right away. If so, you have to decide whether to drop the message or store it in your application to try again later.
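A sketch of what that looks like in practice, reusing the port and message from the question; the function name push_comment_event() is made up for the example:

# Sketch: one Context and one PUSH socket per process, plus a non-blocking
# writability check with zmq.Poller before each send.
import zmq

# Do this once, at process startup -- not inside the request handler.
context = zmq.Context()
push_socket = context.socket(zmq.PUSH)
push_socket.bind("tcp://*:5252")

poller = zmq.Poller()
poller.register(push_socket, zmq.POLLOUT)

def push_comment_event(message=b"Commented"):
    # Poll with a 0 timeout: if the socket is not writable, sending would block.
    events = dict(poller.poll(0))
    if events.get(push_socket) == zmq.POLLOUT:
        push_socket.send(message)
        return True
    # Decide here whether to drop the message or queue it for a retry.
    return False

The context, socket, and poller live at module level so they are created once per process; the handler only calls push_comment_event().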

Cancel xmlrpc client request?

Is it possible to somehow cancel xmlrpc client request?
Let's say that in one thread I have code like:
svr = xmlrpclib.ServerProxy('http://localhost:9092')
svr.DoSomethingWhichNeedTime()
I don't mean some kind of timeout... Sometimes, from another thread, I can get an event to cancel my work, and then I need to cancel this request.
I know that I can do it with Twisted, but is it possible to do it with the standard xmlrpclib?
First of all, it must be implemented on the server side, not in the client (xmlrpclib). If you simply interrupt your HTTP request to the XML-RPC server, it's not guaranteed that the long-running process on the server will be interrupted at all. So xmlrpclib simply can't have this functionality.
If you want to implement this behaviour, you need to create two types of requests. A request of the first type tells your server to start some long process; it must be executed in the background (in another thread or process), and your XML-RPC server must send the response ("Process started!") to the client immediately. When you want to stop the process, the client sends another request that tells your server to stop executing the process.
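A bare-bones sketch of that two-request design, using Python 3's xmlrpc.server and a threading.Event for cooperative cancellation; the method names start_work/stop_work are purely illustrative:

# Sketch: "start" returns immediately, the work runs in a thread, and a second
# RPC sets an event the worker checks to stop itself.
import threading
import time
from xmlrpc.server import SimpleXMLRPCServer

cancel_event = threading.Event()

def long_running_job():
    for _ in range(1000):
        if cancel_event.is_set():
            return            # cooperatively stop when asked to
        time.sleep(1)         # stand-in for real work

def start_work():
    cancel_event.clear()
    threading.Thread(target=long_running_job, daemon=True).start()
    return "Process started!"  # respond to the client immediately

def stop_work():
    cancel_event.set()
    return "Stop requested"

server = SimpleXMLRPCServer(("localhost", 9092), allow_none=True)
server.register_function(start_work)
server.register_function(stop_work)
# server.serve_forever()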
Yes, if you want to do really dirty hacks....
Basically the ServerProxy object keeps a handle to the underlying socket/HTTP connection. If you reach into those internals and simply close() the socket, your client code will blow up with an exception. If you handle that exception properly, that's your cancel.
You can do it a little more sanely if you register your own transport class for the ServerProxy via the transport parameter and give it some cancel method that does what you want.
That won't stop the server from processing things, unless it reacts to the channel being closed directly.
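Roughly what that hack could look like with Python 3's xmlrpc.client (the old xmlrpclib exposes the same transport hook); the cancel() method is an invented helper, and closing the connection from another thread is exactly the kind of dirty trick described above, so treat this as a sketch rather than a reliable API:

# Sketch only: cancel() is not part of the standard library.
import xmlrpc.client

class CancellableTransport(xmlrpc.client.Transport):
    def cancel(self):
        # Closing the cached HTTP connection makes the in-flight call
        # fail with an exception in the calling thread.
        self.close()

transport = CancellableTransport()
svr = xmlrpc.client.ServerProxy("http://localhost:9092", transport=transport)

# In the worker thread:
#     try:
#         svr.DoSomethingWhichNeedTime()
#     except (OSError, xmlrpc.client.ProtocolError):
#         pass  # treat this as "cancelled"
#
# In another thread, when the cancel event arrives:
#     transport.cancel()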

Design question on Python network programming

I'm currently writing a project in Python which has a client part and a server part. I'm having trouble with the network communication, so I need to explain some things...
The client mainly performs operations the server tells it to and sends the results back to the server. I need a way to communicate bidirectionally over a TCP socket.
Current Situation
I currently use a LineReceiver from the Twisted framework on the server side, and a plain Python socket (with ssl) on the client side (because I was unable to correctly implement a Twisted PushProducer). There is a Queue on the client side which gets filled with data that should be sent to the server; a subprocess continuously pulls data from the queue and sends it to the server (see code below).
This scenario works well as long as the client only pushes its results to the manager. There is no possibility for the server to send data to the client. More accurately, there is no way for the client to receive data the server has sent.
The Problem
I need a way to send commands from the server to the client.
I thought about listening for incoming data in the client loop I use to send data from the queue:
def run(self):
    while True:
        data = self.queue.get()
        logger.debug("Sending: %s", repr(data))
        data = cPickle.dumps(data)
        self.socket.write(data + "\r\n")
        # Here would be a good place to listen on the socket
But there are several problems with this solution:
the SSLSocket.read() method is a blocking one
if there is no data in the queue, the client will never receive any data
Yes, I could use Queue.get_nowait() instead of Queue.get(), but all in all it's not a good solution, I think.
The Question
Is there a good way to achieve these requirements with Twisted? I really don't have enough Twisted experience to find my way around in there. I don't even know if using the LineReceiver is a good idea for this kind of problem, because it cannot send any data if it does not receive data from the client. There is only a lineReceived event.
Is Twisted (or, more generally, any event-driven framework) able to solve this problem? I don't even have a real event on the communication side. If the server decides to send data, it should be able to send it; there should be no need to wait for any event on the communication side, if possible.
"I don't even know if using the LineReceiver is a good idea for this kind of problem, because it cannot send any data, if it does not receive data from the client. There is only a lineReceived event."
You can send data using protocol.transport.write from anywhere, not just in lineReceived.
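For example, a minimal sketch (protocol and message names invented) where the server pushes a line to the client on a timer, entirely outside lineReceived:

# Sketch: the transport can be written to from anywhere once the connection
# is up -- here from a LoopingCall rather than from lineReceived.
from twisted.internet import reactor, task
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class CommandProtocol(LineReceiver):
    def connectionMade(self):
        # Push a line to the client every few seconds, unprompted.
        self.ticker = task.LoopingCall(self.sendLine, b"do-some-work")
        self.ticker.start(5.0)

    def lineReceived(self, line):
        print("client answered:", line)

    def connectionLost(self, reason):
        if self.ticker.running:
            self.ticker.stop()

factory = Factory.forProtocol(CommandProtocol)
reactor.listenTCP(8000, factory)
# reactor.run()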
"I need a way to send commands from the server to the client."
Don't do this. It inverts the usual meaning of "client" and "server". Clients take the active role and send stuff or request stuff from the server.
"Is Twisted (or, more generally, any event-driven framework) able to solve this problem?"
It shouldn't. You're inverting the role of client and server.
"If the server decides to send data, it should be able to send it;"
False, actually.
The server is constrained to wait for clients to request data. That's generally the accepted meaning of "client" and "server".
"One to send commands to the client and one to transmit the results to the server. Does this solution sound more like a standard client-server communication for you?"
No.
If a client sent messages to a server and received responses from the server, it would meet more usual definitions.
Sometimes, this sort of thing is described as having "Agents", each of which is a kind of server, and a "Controller", which is a single client of all these servers.
The controller dispatches work to the agents. The agents are servers -- they listen on a port, accept work from the controller, and do work. Each Agent must do two concurrent things (usually via the select API):
Monitor a well-known socket on which it will receive work from the one-and-only client.
Do the work (in the background).
This is what Client-Server usually means.
If each Agent is a Server, you'll find lots of libraries will support this. This is the way everyone does it.
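A compact sketch of such an Agent using Python's standard selectors module; the port, the work queue, and the worker thread are illustrative choices:

# Sketch: monitor a well-known socket for work from the Controller while a
# background thread does the actual work.
import queue
import selectors
import socket
import threading

work_queue = queue.Queue()

def worker():
    while True:
        job = work_queue.get()      # do the work in the background
        print("working on", job)

threading.Thread(target=worker, daemon=True).start()

sel = selectors.DefaultSelector()
listener = socket.socket()
listener.bind(("0.0.0.0", 7000))    # the Agent's well-known port
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

while True:
    for key, _ in sel.select():
        if key.fileobj is listener:
            conn, _addr = listener.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(4096)
            if data:
                work_queue.put(data)      # work sent by the Controller
            else:
                sel.unregister(key.fileobj)
                key.fileobj.close()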
