Alternative to timeout socket - python

Is there an alternative to using a timeout to detect that the server has finished sending all the messages for a request?
Let me explain: I want to communicate over an SSL socket with an XMPP server that is unknown to me. The communication happens through XML messages.
For each request I can receive one or more response messages of different types, or a single message sent in multiple chunks, so I loop to receive all the messages related to a request. I do not know the number or size of the messages for a request a priori. Once the messages are finished, the client keeps listening until no more responses arrive from the server, i.e. it waits for the timeout. This results in a long wait: each request costs 2 s of idle time, which can add up to a minute over a whole session.
Is there a faster way to stop listening once we have received all the messages for a request?
I attach here my code:
import socket
import ssl

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock = ssl.wrap_socket(s, ssl_version=ssl.PROTOCOL_TLSv1_2)
sock.connect((hostname, port))
sock.settimeout(2)

# example for a generic single-message request
sock.sendall(message.encode())
data = ""
while True:
    try:
        response = sock.recv(1024)
        if not response:
            break
        data += response.decode()
    except socket.timeout:
        break

You know how much data to expect because of the protocol.
The XMPP specification describes what the start of a stream looks like (section 4.2) and the stream-negotiation process, which is what the <stream:features> tag is for (section 4.3). If those sections don't make sense yet, section 4.1 explains what a stream is and how it works.
In one comment, you complained about receiving a response in multiple chunks. This is absolutely normal in TCP because TCP does not care about chunks. TCP can split up a 1kB message into 1000 1-byte chunks if it feels like it (usually it doesn't feel like it). So you must keep receiving chunks until you see the end of the message.
Actually, XMPP is not even a request-response protocol. The server can send you messages at any time, which are not responses to something you sent, for example it could tell you that someone said "hello" to you. So you might send an IQ request, but the next message you receive might not be an IQ response, it might be a message saying that your friend is now online. You know which message is the response because: the message type is IQ; the 'type' is 'result' or 'error'; and the 'id' is the same as in the request. Section 8.2.3.
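One way to avoid the timeout entirely is to parse the stream incrementally and stop as soon as the stanza that answers your request is complete. The sketch below is only illustrative: it assumes the stream header has already been negotiated and consumed, and that you track the id of the IQ request you sent (request_id is a hypothetical variable, as is the sock from the question). Each received chunk is fed to an incremental XML parser, and the loop returns the moment the matching IQ stanza closes, with no idle wait.

```python
import xml.etree.ElementTree as ET

def wait_for_iq_response(sock, request_id):
    """Receive chunks until the IQ stanza whose 'id' matches arrives."""
    parser = ET.XMLPullParser(events=("end",))
    # The real XMPP stream root (<stream:stream>) stays open for the whole
    # session; feed a dummy root so the stanzas parse as its children.
    parser.feed(b"<root>")
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            return None  # server closed the stream
        parser.feed(chunk)
        for _, elem in parser.read_events():
            tag = elem.tag.rsplit("}", 1)[-1]  # strip any XML namespace
            if tag == "iq" and elem.get("id") == request_id:
                return elem
```

Because the function returns as soon as the closing </iq> tag has been parsed, it does not matter how TCP split the stanza into chunks, and there is no 2-second wait at the end of each request.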

Related

Python Socket only sends after programm ends [duplicate]

I'm trying to write a Perl TCP server / Python TCP client, and I have the following code now:
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ("127.0.0.1", 9000)
sock.connect(server_address)
try:
    message = unicode('Test')
    sock.sendall(message)
    data = sock.recv(1024)
    print data
finally:
    sock.close()
And I have noticed that my TCP server (written in Perl) gets the message not after sendall(message), but after close(). The server works like an echo server and sends data back to the client after getting a message. That causes a deadlock: the server never gets the message, and the client never gets a response. What could the problem be? What happens during close() that makes the message reach the server?
I'm going to hazard a guess that this is due to the server's implementation. There are many ways of writing an echo server:
receive bytes in a loop (or async callback) until EOF; as the bytes are received (each loop iteration), echo them without any processing or buffering; when an EOF is found (the inbound stream is closed), close the outbound stream
read lines at a time (assume it is a text protocol), i.e. looking for CR / LF / EOF; when a line is found, return the line - when an EOF is found (the inbound stream is closed), close the outbound stream
read to an EOF; then return everything and close the outbound stream
If the echo server uses the first approach, it will work as expected already - so we can discount that.
For the second approach, you are sending text but no CR / LF, and you haven't closed the stream from client to server (EOF), so the server will never reply to this request. So yes, it will deadlock.
If it is the third approach, then again - unless you close the outbound stream, it will deadlock.
From your answer, it looks like adding a \n "fixes" it. From that, I conclude that your echo-server is line-based. So two solutions, and a third that would work in any scenario:
make the echo-server respond to raw data, rather than lines
add an end-of-line marker
close the outbound stream at the client, i.e. the client-to-server stream (many network APIs allow you to close the outbound and inbound streams separately)
Additionally: ensure the Nagle algorithm is disabled (via the TCP_NODELAY socket option) - this will prevent the bytes sitting at the client for a while, waiting to be composed into a decent-sized packet (this applies to 1 & 2, but not 3; having Nagle enabled would add a delay, but will not usually cause a deadlock).
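The fixes above can be combined into one client sketch (the host/port and the send_and_receive name are illustrative): add a newline terminator for a line-based server, half-close the outbound stream with shutdown() for a read-to-EOF server, and disable Nagle so small writes are not delayed.

```python
import socket

def send_and_receive(host, port, payload=b"Test\n"):
    """Send payload, signal end-of-request, and read the echo back."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
    sock.connect((host, port))
    try:
        sock.sendall(payload)          # trailing \n for a line-based server
        sock.shutdown(socket.SHUT_WR)  # EOF for a read-to-EOF server; reads still work
        chunks = []
        while True:
            chunk = sock.recv(1024)
            if not chunk:              # server closed its side: reply complete
                break
            chunks.append(chunk)
        return b"".join(chunks)
    finally:
        sock.close()
```

Note that this is Python 3 (bytes payloads); the original Python 2 snippet would also need the unicode message encoded before sending.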

Multiprocessing using Pipe or Queue

I have a requirement where I receive asynchronous data (JSON) from a client over HTTP, which I have to process as it arrives and then send to a device over a TCP connection, getting the responses back (1 or 2 responses depending on the request). I then have to send this response back to the client.
My question is: how can I do that? Should I run a while True: loop and wait for data to be put on the Queue, repeatedly checking whether it is non-empty, and once data is received, collect it from the Queue and send it over TCP? But how should the second loop (receiving data from the TCP connection) run? Another while True: loop waiting for the TCP response? And once the response is received, how do I send it back to the HTTP client?
If so, how will it work? Can someone provide an example? I thought of running two processes, one for write_to_queue and one for read_from_queue, but I still can't work out how to implement this or how it would fit together.
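A minimal sketch of the producer/consumer shape described in the question, using threads and queue.Queue. A blocking get() removes the need to poll "is the queue empty?" in a busy loop. The device_worker name is illustrative, and the actual TCP send/recv to the device is replaced here by a stand-in line.

```python
import queue
import threading

requests = queue.Queue()   # HTTP handler (producer) puts JSON payloads here
responses = queue.Queue()  # worker (consumer) puts device replies here

def device_worker():
    while True:
        item = requests.get()      # blocks until a request arrives
        if item is None:           # sentinel value: shut the worker down
            break
        # stand-in for: send item over the device TCP socket, recv 1-2 replies
        responses.put(("reply", item))
        requests.task_done()

worker = threading.Thread(target=device_worker, daemon=True)
worker.start()

# HTTP handler side: enqueue the payload, then block until the reply is back
requests.put({"cmd": "status"})
print(responses.get())  # -> ('reply', {'cmd': 'status'})
```

In a real server the HTTP handler would pair each request with an id (or a per-request reply queue) so the right response reaches the right client; this sketch shows only the queue wiring.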

Python - Read remaining data from socket after TCP RST

I'm implementing a file transfer protocol with the following use case:
The server sends the file chunk by chunk inside several frames.
The client might cancel the transfer: for this, it sends a message and disconnects at TCP level.
What happens in that case on the server side (Python running on Windows) is that I catch a ConnectionResetError (this is normal, the client has disconnected the socket) while sending the data to the client. I would like to read the last data sent by the client (the message used to abort the transfer), but calling mysocket.recv() still raises a ConnectionResetError.
With a Wireshark capture, I can clearly see that the message was properly sent by the client prior to the TCP disconnection.
Any ideas, folks? Thanks!
VR
In order to understand what to do about this situation, you need to understand how a TCP connection is closed (see, e.g. this) and how the socket API relates to a clean shutdown (without fail, see this).
Your client is most likely calling close to terminate the connection. The problem with this is that there may be unread data in the socket receive queue or data arriving shortly from the other end that you will no longer be able to read, which is basically an error condition. To signal to the other end that data sent cannot be delivered to the receiving application, a reset is sent (well, technically, "SHOULD be sent" as per the RFC) and the TCP connection is abnormally terminated.
You might think that enabling SO_LINGER will help (many, many bits have been spilt over this so I won't elaborate further), but it won't solve the problem of unread data by the client causing the reset.
The client needs to instead call shutdown(SHUT_WR) to indicate that it is done sending, and then continue to call recv() until it reads 0 bytes indicating the other side is done sending. You may then call close().
Note that the Python 2 socket documentation states that
Depending on the platform, shutting down one half of the connection can also close the opposite half (e.g. on Mac OS X, shutdown(SHUT_WR) does not allow further reads on the other end of the connection).
This sounds like a bug to me. To get around this, you would have to send your cancel message, then keep reading until you get 0 bytes so that you know the server received the cancel message. You may then close the socket.
The Python 3.8 docs make no such disclaimer.
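The client-side pattern the answer recommends can be sketched as follows (cancel_transfer and the CANCEL message format are illustrative, not part of any real protocol): send the cancel message, half-close with shutdown(SHUT_WR) instead of close(), then drain until the server finishes, so no unread data is left to trigger an RST.

```python
import socket

def cancel_transfer(sock, cancel_msg=b"CANCEL\n"):
    """Abort cleanly: send the cancel message, half-close, drain, close."""
    sock.sendall(cancel_msg)
    sock.shutdown(socket.SHUT_WR)   # done sending; the read side stays open
    drained = []
    while True:
        chunk = sock.recv(4096)     # consume whatever the server still sends
        if not chunk:               # 0 bytes: server closed its side cleanly
            break
        drained.append(chunk)
    sock.close()                    # both sides done: normal FIN, no RST
    return b"".join(drained)
```

With this sequence the server sees the cancel message and a normal EOF rather than a reset, and can read the message with an ordinary recv() loop.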

How to implement non-blocking tcp server with ack mechanism?

I am new to multithreaded server programming.
Now I am writing a server program that:
Receive messages (in self-defined data format) from tcp socket
Process these messages (which takes time)
Send corresponding responses to the socket
Provide ACK mechanism for receiving messages and sending responses, that is every message contains a unique seq number and I should include the ack (same as seq) in the corresponding response. The other side also implements this mechanism. If I did not receive ACK from the other side for 5 min, I should re-send the message that I expected to receive corresponding ACK from.
My thought was to use a while loop to receive messages from the socket, then process the messages and send responses.
The problem is, processing messages takes time and I may receive multiple messages in a short period. So if I call the process_message() function in this while loop and wait for its finish, it will be blocking and I will definitely waste time. So I need non-blocking way.
I have done some research. I supposed I may use two common techs: thread pool and message queue.
For thread pool, my idea goes like the following pseudo code:
def process_message():
    process_message  # takes time
    send_response(socket)

while True:
    message = recv(socket)
    thread = thread_pool.get_one()
    thread.start(target=process_message)
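The thread-pool pseudocode above can be expressed concisely with the standard library's concurrent.futures; process_message here is a stand-in for the question's slow processing step, not a real function from the question's codebase.

```python
from concurrent.futures import ThreadPoolExecutor

def process_message(message):
    # stand-in for the slow processing step; a real handler would also
    # build and send the response on the socket
    return message.upper()

with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() returns immediately, so the receive loop is never blocked
    # by a slow message; each future completes when its worker finishes
    futures = [pool.submit(process_message, m) for m in ["a", "b", "c"]]
    results = [f.result() for f in futures]
```

In the real server the receive loop would call pool.submit(process_message, message) for each message read from the socket instead of iterating over a fixed list.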
For message queue, I am not sure, but my idea would be having producer thread and consumer thread:
def consumer():
    # only one consumer thread?
    message = queue.poll()
    consumer_thread.process_message(message)
    send_response(socket)

while True:
    # only one producer thread?
    message = recv(socket)
    producer_thread.put_message_to_queue()
Hope my idea is clear. Can anyone provide some typical solution?
Then, the trickier part: any thoughts on how to implement the ACK mechanism?
Thank you!
This is rather broad because there is still too much to implement.
The general idea is indeed to implement:
a TCP server, that will receive incoming messages and write them (including the socket from which they were received) in a queue
a pool of worker threads that will get a message from the queue, process the message, and pass the response to an object in charge of sending the message and wait for the acknowledgement
an object that will send the responses and store the sequence number, the socket, and the message until the response has been acknowledged. A thread would be handy to process the list of messages waiting for acknowledgement and send them again when the timeout expires.
But each part requires a considerable amount of work, and can be implemented in different ways (select, TCPServer or threads processing accepted sockets for the first; which pool implementation for the second; which data structure to store the messages waiting for acknowledgement for the third). I have done some tests and realized that a complete answer would be far beyond what is expected on this site. IMHO, you'd better break the question into smaller answerable pieces, keeping this one as the general context.
You should also say whether the incoming messages should be immediately acknowledged when received or will be implicitly acknowledged by the response.
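As one small answerable piece, the third component (tracking unacknowledged messages and re-sending on timeout) could be sketched like this. PendingTracker and its method names are invented for illustration; the 5-minute timeout from the question is a constructor parameter, and resend is a callback supplied by the sending code.

```python
import threading
import time

class PendingTracker:
    """Track sent messages by seq number; re-send any not ACKed in time."""

    def __init__(self, resend, timeout=300.0, interval=1.0):
        self.pending = {}           # seq -> (message, deadline)
        self.lock = threading.Lock()
        self.resend = resend        # callback: resend(seq, message)
        self.timeout = timeout
        watcher = threading.Thread(target=self._watch, args=(interval,),
                                   daemon=True)
        watcher.start()

    def sent(self, seq, message):
        with self.lock:
            self.pending[seq] = (message, time.monotonic() + self.timeout)

    def acked(self, seq):
        with self.lock:
            self.pending.pop(seq, None)   # ignore duplicate/unknown ACKs

    def _watch(self, interval):
        while True:
            time.sleep(interval)
            now = time.monotonic()
            with self.lock:
                expired = [s for s, (_, d) in self.pending.items() if d <= now]
                for seq in expired:
                    msg, _ = self.pending[seq]
                    # reset the deadline, then re-send
                    self.pending[seq] = (msg, now + self.timeout)
                    self.resend(seq, msg)
```

The sending side would call sent() after each transmission and acked() when a response carrying the matching ack arrives; the watcher thread handles re-sends in the background.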

ZMQ in Python: New socket object for each incoming connection

Sockets in ZMQ are simply bound to an interface and are then able to receive messages right away, like this:
socket.bind("tcp://*:5555")
message = socket.recv()
Since multiple connections can send data to that socket simultaneously, how to distinguish the different senders?
On the other hand, with regular sockets, incoming connections are first accepted, which spawns a new socket, like this:
serversocket.bind((socket.gethostname(), 5555))
serversocket.listen()
(clientsocket, address) = serversocket.accept()
Here, the different senders can be easily distinguished since each is received through a different socket.
What is the best way to benefit from the convenience message-based and queue-buffered communication of ZMQ but still create an arbitrary number of distinguishable one-on-one connections as soon as they are requested?
How to distinguish the different clients depends on which socket type you're using as your 'server'; the explanations below will hopefully answer the second question too.
REP - Will reply to the client that sent the request. A recv call on a REP socket must be followed by a send, so you can't service the next request until you have processed the first. However, multiple requests from different clients will be queued.
ROUTER - Will prepend a frame to the message you recv that contains the client id of the sender. When sending a message, the first frame will be removed and used to identify which connected client to reply to. You should store all frames up to and including the empty delimiter frame and prepend them to your reply message when you send the reply. Unlike REP, there is no need to send any message before another call to recv. The client id will be generated by ZeroMQ if not specified, but if you want 'persistence' you can set the id via setsockopt with the zmq.IDENTITY flag.
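The ROUTER frame handling above can be sketched as a small handler (this assumes pyzmq is installed; serve_one is an illustrative name, and a real server would call it in a loop):

```python
import zmq

def serve_one(server):
    """Handle one request on a ROUTER socket, replying to its sender."""
    # A REQ client's request arrives as [identity, empty delimiter, payload];
    # the identity frame, prepended by ZeroMQ, is what distinguishes senders.
    identity, empty, payload = server.recv_multipart()
    # Send the same identity and delimiter frames back so ZeroMQ routes
    # the reply to that particular client.
    server.send_multipart([identity, empty, b"echo: " + payload])
```

This gives you the per-client addressing of accept()-style sockets while keeping ZeroMQ's message framing and queuing: the single ROUTER socket multiplexes any number of clients, each identified by its identity frame.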
