I have a client and a server communicating as follows:
# server
running = 1
while running:
    data = self.client.recv(self.size)
    self.client.send(str(self.vel))
# client
def runit(self):
    self.s.send('test')
    data = float(self.s.recv(self.size))
    self.master_.after(5, self.runit)
So both are in infinite loops. Although this does transfer data, it is inefficient for my application. I am making a game, and I want the server to send data to the client at specific moments, and the client to receive that data at those moments. Something similar to a callback would work. Unfortunately, I wasn't able to find anything suitable for my needs.
First, I don't see anything "inefficient" here: a while loop is very common in such cases, and as the comment says, the loop just blocks in recv until the client sends data.
So your question is how to send data from the server to the client? If a connection is established, and assuming you're using TCP, just call the send() method on that socket. You may also want to set the SO_KEEPALIVE socket option; see How to change tcp keepalive timer using python script?.
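As a minimal sketch (assuming a connected TCP socket named client_sock, a hypothetical name, and Python 3 where send() wants bytes), pushing an update at a specific moment and enabling keepalive could look like this:
import socket

# hypothetical helper: push one value to an already-connected client socket
def push_update(client_sock, vel):
    client_sock.send(str(vel).encode())

# hypothetical helper: ask the OS to probe idle connections (SO_KEEPALIVE)
def enable_keepalive(client_sock):
    client_sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)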
I tried to use Python's zmq lib, and now I have two questions:
Is there a way to check the socket connection state?
I'd like to know whether the connection is established after calling connect.
I want a one-to-one communication model.
I tried to use the PAIR zmq socket type.
In that case, if one client is already connected, the server will not receive any messages from a second connected client.
But I'd like the second client to get the information that there is another client and the server is busy.
You'd get an error if connect fails.
But I guess the real question is how often you want to check this: once at startup, before each message, or periodically, using some heartbeat?
That does not make sense, as you cannot send info without connecting first.
However, some socket types might give some more info.
But the best way would be to use multiple sockets: one for such status information, and another one for sending data.
ZMQ is made to use multiple sockets.
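As a rough sketch of that multi-socket layout (hypothetical ports 5555/5556, and assuming the server runs a matching REP socket that answers "free" or "busy"), the client could ask for status first and only then open the data channel:
import zmq

ctx = zmq.Context()

# status channel: ask the server whether it is free before sending data
status = ctx.socket(zmq.REQ)
status.connect("tcp://localhost:5555")
status.send_string("status?")
reply = status.recv_string()  # e.g. "free" or "busy"

if reply == "free":
    # data channel: one-to-one PAIR socket for the actual payload
    data = ctx.socket(zmq.PAIR)
    data.connect("tcp://localhost:5556")
    data.send_string("hello from the client")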
I am implementing this example for using select in a simple echo server. Everything works fine: the client sends the message, receives the echo, and disconnects.
This is the code I used for the client:
import socket
ECHOER_PORT = 10000
if __name__ == '__main__':
    sockfd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sockfd.connect(('localhost', ECHOER_PORT))
    msg = input('Message: ').encode()
    sockfd.send(msg)
    response = sockfd.recv(len(msg))
    print('Response: {}'.format(response))
    sockfd.close()
The issue is with the server (full gist code is here): after sending the echo, select is called one more time and the current client (which received the echo and disconnected) is returned from select as both readable and writable.
I understand why it's returned as readable, since according to the article:
A readable socket without data available is from a client that has
disconnected, and the stream is ready to be closed.
But my question is: why does it also return as writable?
"But my question is: why does it also return as writable?"
The main thing you want to have select() do when a client has disconnected is return immediately, so that your code can detect the disconnection-event and handle it ASAP (by closing the socket).
The common way to do this is to have the server select-for-read on every socket: when a client disconnects, select() returns ready-for-read on that socket, the program calls recv() on the socket to find out what the data is, recv() returns EOF (zero bytes), and the server closes the socket. That is all well and good.
Now imagine the less-common (but not unheard of) case where the server writes to its client sockets, but doesn't want to read from them. In this case, the server has no need (or desire) to select for ready-to-read on its sockets; it only needs to select for ready-to-write, to know when there is some outgoing-data-buffer-space available to send more data to a client. That server still needs to know when a client has disconnected, though -- which is why the disconnected socket selects as ready-for-write as well, so that a select() that is only watching for ready-for-write can also detect and react to a disconnected socket immediately.
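A minimal sketch of that write-only case (hypothetical names), where the only way the server learns about a disconnect is that the dead socket comes back as ready-to-write and the next send fails:
import select

def pump_outgoing(clients, pending):
    # clients: connected sockets; pending: dict mapping socket -> bytes to send
    _, writable, _ = select.select([], clients, [], 1.0)
    for sock in writable:
        try:
            if pending.get(sock):
                sent = sock.send(pending[sock])
                pending[sock] = pending[sock][sent:]
        except (BrokenPipeError, ConnectionResetError):
            # a disconnected peer also selects as ready-to-write;
            # the failed send is how this server finally learns it is gone
            clients.remove(sock)
            pending.pop(sock, None)
            sock.close()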
I am creating a file server in python using ftp sockets where multiple clients send images to a server. This is the method I am using to receive the image on the server side:
while True:
    data = client.recv(512)
    if not data:
        break
    file.write(data)
And this is the method I am using to send the image to the server:
while True:
    data = file.read(512)
    if not data:
        break
    server.send(data)
The problem I am running into is that on the server side the while loop is never exited, which means the code is either stuck in the recv call or the if statement is never true. On the client side there are no problems; the loop is exited properly. I've heard that the client side will send something to the server to tell it to stop, but the if statement doesn't seem to pick it up. How can I get the server to stop trying to receive without closing the connection?
https://docs.python.org/2/howto/sockets.html#disconnecting
On the client, close the socket to the server. On the server, check whether the recv returned 0 bytes.
Also from the documentation for using a socket:
When a recv returns 0 bytes, it means the other side has closed (or is
in the process of closing) the connection. You will not receive any
more data on this connection. Ever. You may be able to send data
successfully; I’ll talk more about this later.
Data will never be empty unless the client closes the connection.
server.close()
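Put together with the question's own variable names (a sketch, not the asker's exact code), the close on the client is what makes the empty recv appear on the server:
# client side: send the whole file, then close so the server sees EOF
while True:
    data = file.read(512)
    if not data:
        break
    server.send(data)
server.close()

# server side: recv() returning b'' is the end-of-transfer signal
while True:
    data = client.recv(512)
    if not data:  # b'' here means the client closed its end
        break
    file.write(data)
file.close()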
I want to create a simple video streaming (actually, image streaming) server that can manage different protocols (TCP Push/Pull, UDP Push/Pull/Multicast).
I managed to get TCP Push/Pull working with the SocketServer.TCPServer class and ThreadingMixIn for processing each connected client in a different thread.
But now that I'm working on the UDP protocol, I just realized that ThreadingMixIn creates a thread per call of handle(), i.e. one per client query (as there's no such thing as a "connection" in UDP).
The problem is I need to process a sequence of queries by the same client, for all the clients. How could I manage that ?
The only way I see to handle that is to have a list of (client address, processing thread) pairs and send each query to the matching thread (or create a new one if the client hasn't sent anything yet). Is there an easier way to do that?
Thanks!
P.S.: I can't use any external or too "high-level" library for this, as it's a school assignment meant to teach how sockets work.
Take a look at Twisted. This will remove the need to do any thread dispatch from your application. You still have to match up packets to a particular session in order to handle them, but this isn't difficult (use a port per client and dispatch based on the port, or require packets in a session to always come from the same address and use the peer address, or use one of the existing protocols that solves this problem such as SIP).
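A minimal sketch of the peer-address dispatch idea with Twisted (illustrative names and port), where each datagram is routed to a per-client session keyed on the (host, port) it arrived from:
from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

class StreamServer(DatagramProtocol):
    def __init__(self):
        self.sessions = {}  # (host, port) -> per-client state

    def datagramReceived(self, datagram, addr):
        # route each query to the session of the client it came from,
        # creating the session on the first packet from that address
        session = self.sessions.setdefault(addr, {"queries": 0})
        session["queries"] += 1
        self.transport.write(b"ack %d" % session["queries"], addr)

reactor.listenUDP(10000, StreamServer())
reactor.run()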
I'm currently writing a project in Python which has a client and a server part. I have troubles with the network communication, so I need to explain some things...
The client mainly does operations the server tells it to do and sends the results of those operations back to the server. I need a way to communicate bidirectionally over a TCP socket.
Current Situation
I currently use a LineReceiver from the Twisted framework on the server side, and a plain Python socket (and ssl) on the client side (because I was unable to correctly implement a Twisted PushProducer). There is a Queue on the client side which gets filled with data that should be sent to the server; a subprocess continuously pulls data from the queue and sends it to the server (see code below).
This scenario works well as long as only the client pushes its results to the manager. There is no possibility for the server to send data to the client. More accurately, there is no way for the client to receive data the server has sent.
The Problem
I need a way to send commands from the server to the client.
I thought about listening for incoming data in the client loop I use to send data from the queue:
def run(self):
    while True:
        data = self.queue.get()
        logger.debug("Sending: %s", repr(data))
        data = cPickle.dumps(data)
        self.socket.write(data + "\r\n")
        # Here would be a good place to listen on the socket
But there are several problems with this solution:
the SSLSocket.read() method is a blocking one
if there is no data in the queue, the client will never receive any data
Yes, I could use Queue.get_nowait() instead of Queue.get(), but all in all it's not a good solution, I think.
The Question
Is there a good way to achieve these requirements with Twisted? I really do not have enough skill with Twisted to find my way around in there. I don't even know if using the LineReceiver is a good idea for this kind of problem, because it cannot send any data if it does not receive data from the client. There is only a lineReceived event.
Is Twisted (or, more generally, any event-driven framework) able to solve this problem? I don't even have a real event on the communication side. If the server decides to send data, it should be able to send it; there should not be a need to wait for any event on the communication side, if possible.
"I don't even know if using the LineReceiver is a good idea for this kind of problem, because it cannot send any data, if it does not receive data from the client. There is only a lineReceived event."
You can send data using protocol.transport.write from anywhere, not just in lineReceived.
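A minimal sketch (hypothetical names) of a LineReceiver server that tracks its connected clients and pushes commands to them whenever it decides to, with lineReceived only used for the results coming back:
from twisted.internet import reactor, task
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class CommandProtocol(LineReceiver):
    def connectionMade(self):
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def lineReceived(self, line):
        # results coming back from the client land here
        print("result:", line)

class CommandFactory(Factory):
    protocol = CommandProtocol

    def __init__(self):
        self.clients = []

    def broadcast(self, command):
        # the server pushes data whenever it wants, no incoming line needed
        for client in self.clients:
            client.sendLine(command)

factory = CommandFactory()
# example: push a command to every connected client once a second
task.LoopingCall(factory.broadcast, b"do_work").start(1.0)
reactor.listenTCP(4321, factory)
reactor.run()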
"I need a way to send commands from the server to the client."
Don't do this. It inverts the usual meaning of "client" and "server". Clients take the active role and send stuff or request stuff from the server.
Is Twisted (or more general any event driven framework) able to solve this problem?
It shouldn't. You're inverting the role of client and server.
If the server decides to send data, it should be able to send it;
False, actually.
The server is constrained to wait for clients to request data. That's generally the accepted meaning of "client" and "server".
"One to send commands to the client and one to transmit the results to the server. Does this solution sound more like a standard client-server communication for you?"
No.
If a client sent messages to a server and received responses from the server, it would meet more usual definitions.
Sometimes this sort of thing is described as having "Agents", each of which is a kind of server, and a "Controller", which is a single client of all these servers.
The controller dispatches work to the agents. The agents are servers -- they listen on a port, accept work from the controller, and do work. Each Agent must do two concurrent things (usually via the select API; see the sketch at the end of this answer):
Monitor a well-known socket on which it will receive work from the one-and-only client.
Do the work (in the background).
This is what Client-Server usually means.
If each Agent is a Server, you'll find lots of libraries will support this. This is the way everyone does it.
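A minimal sketch of such an Agent, with hypothetical names, using select to watch the well-known socket for work from the controller while keeping the loop free to do background work:
import select
import socket

def run_agent(port=9000):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", port))
    listener.listen(1)

    sockets = [listener]
    while True:
        # wait up to 0.1 s for work from the controller, then go do
        # background work; select keeps the agent responsive to both
        readable, _, _ = select.select(sockets, [], [], 0.1)
        for sock in readable:
            if sock is listener:
                conn, _ = sock.accept()
                sockets.append(conn)
            else:
                work = sock.recv(4096)
                if not work:
                    sockets.remove(sock)
                    sock.close()
                else:
                    sock.send(b"result for: " + work)
        do_background_work()  # hypothetical: the agent's own processing

def do_background_work():
    pass  # placeholder for the agent's real work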