I'm working on a really basic "image streaming" server as a school assignment, and I've done most of the work, but I'm still stuck on the separation between the data and control sockets:
My structure is: a TCPServer (my server, used as the control socket) contains a dataSocket (used only to send images, initialized within my TCPServer object when I receive a certain query).
When I'm sending data (images) through my dataSocket, I still need to check whether the client has sent a PAUSE or STOP request, but if I use Python's self.request.recv(1024) the server blocks waiting for a response instead of continuing to send data (which is quite logical).
What should I do to prevent this behavior? Should I run my recv(1024) on a separate thread and call it at each loop iteration (and check whether any relevant data arrived between two iterations)?
Twisted should do the trick! It handles asynchronous sockets in Python
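If pulling in an external framework isn't an option (the question mentions this is a school assignment), the blocking can also be avoided without a thread by polling the control socket with the standard library's select between frames. A rough sketch, where control_sock, data_sock, frames and the PAUSE/STOP command strings are assumptions rather than names from your code:

import select

def stream_images(control_sock, data_sock, frames):
    # Send each frame on the data socket, but between frames poll the
    # control socket with a zero timeout so the loop never blocks.
    for frame in frames:
        readable, _, _ = select.select([control_sock], [], [], 0)
        if readable:
            command = control_sock.recv(1024).decode().strip()
            if command == "STOP":
                return
            if command == "PAUSE":
                # While paused, it is fine to block until the next command.
                command = control_sock.recv(1024).decode().strip()
                if command == "STOP":
                    return
        data_sock.sendall(frame)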
I'm using a Python socket to send data to a server, and the code looks like this:
When I shut the server down, the client still sends the data twice before it falls into the "except" branch. If I set SEND_INTERVAL too long, it becomes a disaster. So how can I get the error immediately when the server is closed or goes down?
Nothing happens immediately over the network. That's one thing.
Secondly, the underlying OS will detect broken connections (and Python gets that info in the form of an exception), but this usually takes time. That's why you keep sending messages even though the connection is already dead. And since the OS operates at the network layer (as opposed to the application layer), there's a further issue: the connection may be dead and yet the OS may never detect it. For example, this will happen when the server is dead but sits behind a proxy that is still alive.
Thirdly, the most reliable way to know that a server is alive is when it sends something back to the client. So you should always .recv() (with a timeout) after a .sendall() call, and the server should always .sendall() after .recv() (the request-response pattern, even when the response is a simple "I received the message"). If you can't modify the server side and (in the worst case) the server doesn't send anything back to the client, then there's no reliable way to know.
That's why you need some form of framing/correctness protocol. Simple .sendall() won't do.
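A rough sketch of that request-response pattern on the client side, assuming the server can be made to answer each message with a short acknowledgement (the host, port and b"OK" reply are placeholders):

import socket

HOST, PORT = "server.example", 9000

sock = socket.create_connection((HOST, PORT))
sock.settimeout(5.0)  # don't wait forever for the acknowledgement

def send_with_ack(payload):
    # Send one message, then wait for a short acknowledgement from the server.
    sock.sendall(payload)
    try:
        ack = sock.recv(16)   # server is expected to answer, e.g. b"OK"
    except socket.timeout:
        raise ConnectionError("no acknowledgement within 5 s; assuming the server is down")
    if not ack:               # empty bytes: the peer closed the connection
        raise ConnectionError("server closed the connection")
    return ack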
I'm trying to make a TCP communication where the server sends a message every x seconds through a socket, and should stop sending those messages when the client hasn't sent any message for 5 seconds.
To be more detailed: the client also constantly sends messages on the same socket as above, all of which are ignored by the server, and it can stop sending them at any unknown time. For simplicity, the messages are used as keep-alive messages to inform the server that the communication is still relevant.
The problem is that if I want to send repeated messages from the server, I cannot allow it to "get busy" and start receiving messages instead, so I cannot detect when a new message arrives from the other side and act accordingly.
The problem is independent of the programming language, but to be more specific, I'm using Python and cannot access the client's code.
Is there any way to receive and send messages on a single socket simultaneously?
Thanks!
Option 1
Use two threads: one will write to the socket and the other will read from it.
This works since sockets are full-duplex (allow bi-directional simultaneous access).
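A rough sketch of that two-thread layout applied to the situation in the question; conn is assumed to be an already-accepted, connected socket, and the message contents, send interval and 5-second timeout are illustrative:

import threading
import time

def reader(sock, state):
    # Continuously receive the client's keep-alive messages and record
    # when the last one arrived.
    while True:
        data = sock.recv(1024)
        if not data:            # empty bytes: the client closed the connection
            break
        state["last_seen"] = time.monotonic()

def writer(sock, state, interval=2.0, client_timeout=5.0):
    # Send the periodic server message, and stop once the client has been
    # silent for longer than client_timeout seconds.
    while time.monotonic() - state["last_seen"] < client_timeout:
        sock.sendall(b"server message\n")
        time.sleep(interval)

def handle(conn):
    state = {"last_seen": time.monotonic()}
    threading.Thread(target=reader, args=(conn, state), daemon=True).start()
    writer(conn, state)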
Option 2
Use a single thread that manages all keep-alives using select.epoll. This way one thread can handle multiple clients. Remember though: if this isn't the only thread that uses the sockets, you might need to handle thread safety on your own.
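A rough sketch of that single-threaded approach, assuming client_socks is a list of already-connected sockets; note that select.epoll is Linux-only, and the 5-second timeout mirrors the question:

import select
import time

def monitor_keepalives(client_socks, client_timeout=5.0):
    # Single thread, many clients: epoll tells us which sockets have data.
    epoll = select.epoll()
    socks = {}        # fileno -> socket
    last_seen = {}    # fileno -> time of the last keep-alive
    for sock in client_socks:
        fd = sock.fileno()
        epoll.register(fd, select.EPOLLIN)
        socks[fd] = sock
        last_seen[fd] = time.monotonic()
    while socks:
        for fd, _ in epoll.poll(timeout=1.0):
            data = socks[fd].recv(1024)
            if data:
                last_seen[fd] = time.monotonic()
            else:
                last_seen[fd] = 0.0   # peer closed; treat as expired below
        now = time.monotonic()
        for fd in [f for f, t in last_seen.items() if now - t > client_timeout]:
            epoll.unregister(fd)
            socks.pop(fd).close()
            del last_seen[fd]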
As discussed in another answer, threads are one common approach. The other approach is to use an event loop and non-blocking I/O. Recent versions of Python (starting with 3.4) include a package called asyncio that supports this.
You can call the create_connection method on an event_loop to create an asyncio connection. See this example for a simple server that reads and writes over TCP.
In many cases an event loop can permit higher performance than threads, but it has the disadvantage of requiring most or all of your code to be aware of the event model.
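To give an idea of what that looks like, here is a rough sketch of a small TCP server built on asyncio's stream API (asyncio.start_server, available in more recent Python versions); the address, port and echo-style response are placeholders:

import asyncio

async def handle_client(reader, writer):
    # Read lines from the client and write a response for each one.
    while True:
        line = await reader.readline()
        if not line:              # empty bytes: the client closed the connection
            break
        writer.write(b"received: " + line)
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())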
I am currently implementing a socket server using Python's socketserver module. I am struggling to understand how a client "signals" the server to perform certain tasks.
As you can tell, I am a beginner in this area. I have looked at many tutorials; however, these only show how to perform a single task on the server, e.g. modifying a message from the client and sending it back.
Ideally, what I want to know is whether there is a way for the client to communicate with the server to perform different kinds of tasks.
Is there a standard approach to this issue?
Am I even using the correct type of server?
I was thinking of implementing some form of message passing from the client that tells the server which task it should perform.
"I was thinking of implementing some form of message passing from the client that tells the server which task it should perform."
That's exactly what you need: an application protocol.
A socket (assuming a streaming Internet socket, or TCP) is a stream of bytes, nothing more. To give those bytes any meaning, you need a protocol that determines which byte (or sequence thereof) means what.
The main problem to tackle is that the stream such a socket provides has no notion of "messages". So when one party sends "HELLO", and "BYE" after that, it all gets concatenated into the stream: "HELLOBYE". Or even worse, your server first receives "HELL", followed by "OBYE".
So you need message framing: rules for determining where messages start and end.
You generally don't want to invent your own application protocol. Usually HTTP or other existing protocols are leveraged to pass messages around.
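If you do end up rolling something small yourself, the most common framing scheme is a length prefix. A rough sketch, where the JSON body and the "task" field are illustrative assumptions:

import json
import struct

def send_message(sock, obj):
    # Frame a message as a 4-byte big-endian length followed by a JSON body.
    body = json.dumps(obj).encode()
    sock.sendall(struct.pack(">I", len(body)) + body)

def recv_message(sock):
    # Read exactly one framed message, however the bytes happen to arrive.
    (length,) = struct.unpack(">I", recv_exactly(sock, 4))
    return json.loads(recv_exactly(sock, length))

def recv_exactly(sock, n):
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed mid-message")
        data += chunk
    return data

# Client side, telling the server which task to perform:
# send_message(sock, {"task": "reverse", "payload": "hello"})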
I want to create a simple video streaming (actually, image streaming) server that can manage different protocols (TCP Push/Pull, UDP Push/Pull/Multicast).
I managed to get TCP Push/Pull working with the SocketServer.TCPServer class and ThreadingMixIn for processing each connected client in a different thread.
But now that I'm working on the UDP protocol, I just realized that ThreadingMixIn creates a thread per call to handle(), i.e. per client query (as there's no such thing as a "connection" in UDP).
The problem is that I need to process a sequence of queries from the same client, for all the clients. How could I manage that?
The only way I can see to handle that is to keep a list of (client address, processing thread) pairs and send each query to the matching thread (or create a new one if the client doesn't have a thread yet). Is there an easier way to do that?
Thanks!
P.S.: I can't use any external or overly "high-level" libraries for this, as it's a school assignment meant to teach how sockets work.
Take a look at Twisted. This will remove the need to do any thread dispatch from your application. You still have to match up packets to a particular session in order to handle them, but this isn't difficult (use a port per client and dispatch based on the port, or require packets in a session to always come from the same address and use the peer address, or use one of the existing protocols that solves this problem such as SIP).
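The dispatch-by-peer-address idea also works with nothing but the standard library, which matters given the no-external-libraries constraint: keep one UDP socket, map each client address to a queue, and give each client its own worker thread. A rough sketch with placeholder names (Python 3 module names):

import queue
import socket
import threading

def client_worker(addr, inbox):
    # Per-client worker: processes that client's queries in order.
    while True:
        data = inbox.get()
        print("query from", addr, ":", data)   # stand-in for real processing

def serve(host="0.0.0.0", port=9999):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    clients = {}                                # peer address -> its queue
    while True:
        data, addr = sock.recvfrom(4096)
        if addr not in clients:
            # First datagram from this client: start a dedicated worker.
            clients[addr] = queue.Queue()
            threading.Thread(target=client_worker,
                             args=(addr, clients[addr]),
                             daemon=True).start()
        clients[addr].put(data)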
I'm currently writing a project in Python which has a client part and a server part. I'm having trouble with the network communication, so I need to explain some things...
The client mainly performs operations the server tells it to and sends the results of those operations back to the server. I need a way to communicate bidirectionally on a TCP socket.
Current Situation
I currently use a LineReceiver from the Twisted framework on the server side, and a plain Python socket (with ssl) on the client side (because I was unable to correctly implement a Twisted PushProducer). There is a Queue on the client side which gets filled with data that should be sent to the server; a subprocess continuously pulls data from the queue and sends it to the server (see the code below).
This scenario works well as long as the client only pushes its results to the manager. There is no way for the server to send data to the client. More accurately, there is no way for the client to receive data the server has sent.
The Problem
I need a way to send commands from the server to the client.
I thought about listening for incoming data in the client loop I use to send data from the queue:
def run(self):
    while True:
        data = self.queue.get()
        logger.debug("Sending: %s", repr(data))
        data = cPickle.dumps(data)
        self.socket.write(data + "\r\n")
        # Here would be a good place to listen on the socket
But there are several problems with this solution:
the SSLSocket.read() method is a blocking one
if there is no data in the queue, the client will never receive any data
Yes, I could use Queue.get_nowait() instead of Queue.get(), but all in all it's not a good solution, I think.
The Question
Is there a good way to meet these requirements with Twisted? I really don't have enough Twisted skills to find my way around in it. I don't even know if using the LineReceiver is a good idea for this kind of problem, because it cannot send any data if it does not receive data from the client. There is only a lineReceived event.
Is Twisted (or, more generally, any event-driven framework) able to solve this problem? I don't even have a real event on the communication side. If the server decides to send data, it should be able to send it; there should not be a need to wait for any event on the communication side, if possible.
"I don't even know if using the LineReceiver is a good idea for this kind of problem, because it cannot send any data, if it does not receive data from the client. There is only a lineReceived event."
You can send data using protocol.transport.write from anywhere, not just in lineReceived.
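A rough sketch illustrating that point: the protocol below pushes lines to the client from connectionMade and from a timer, not from lineReceived (the port, line contents and 5-second delay are placeholders):

from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class CommandProtocol(LineReceiver):
    def connectionMade(self):
        # Push a command right away, without waiting for the client to speak...
        self.sendLine(b"do-something")
        # ...and another one later, triggered by a timer instead of lineReceived.
        reactor.callLater(5, self.sendLine, b"do-something-else")

    def lineReceived(self, line):
        # Results coming back from the client arrive here.
        print("client answered:", line)

factory = Factory()
factory.protocol = CommandProtocol
reactor.listenTCP(8000, factory)
reactor.run()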
"I need a way to send commands from the server to the client."
Don't do this. It inverts the usual meaning of "client" and "server". Clients take the active role and send stuff or request stuff from the server.
"Is Twisted (or, more generally, any event-driven framework) able to solve this problem?"
It shouldn't. You're inverting the role of client and server.
"If the server decides to send data, it should be able to send it;"
False, actually.
The server is constrained to wait for clients to request data. That's generally the accepted meaning of "client" and "server".
"One to send commands to the client and one to transmit the results to the server. Does this solution sound more like a standard client-server communication for you?"
No.
If a client sent messages to a server and received responses from the server, it would meet more usual definitions.
Sometimes this sort of thing is described as having "Agents", each of which is a kind of server, and a "Controller", which is a single client of all these servers.
The controller dispatches work to the agents. The agents are servers -- they listen on a port, accept work from the controller, and do work. Each Agent must do two concurrent things (usually via the select API):
Monitor a well-known socket on which it will receive work from the one-and-only client.
Do the work (in the background).
This is what Client-Server usually means.
If each Agent is a Server, you'll find lots of libraries will support this. This is the way everyone does it.
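A rough sketch of one such Agent, using select to keep monitoring its well-known port while handing each piece of work to a background thread (the port number and the "work" itself are placeholders):

import select
import socket
import threading

def do_work(task, conn):
    # Background worker: performs the task and sends the result back.
    conn.sendall(task.upper())        # stand-in for the real work

def run_agent(port=7000):
    # The Agent listens on a well-known port and accepts work from the controller.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", port))
    listener.listen(5)
    connections = []
    while True:
        # Wait until the listener or any controller connection is readable.
        readable, _, _ = select.select([listener] + connections, [], [])
        for sock in readable:
            if sock is listener:
                conn, _addr = listener.accept()
                connections.append(conn)
            else:
                task = sock.recv(4096)
                if not task:          # the controller went away
                    connections.remove(sock)
                    sock.close()
                    continue
                # Hand the work to a background thread so the agent keeps listening.
                threading.Thread(target=do_work, args=(task, sock), daemon=True).start()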