Python TCP Duplicate Message - python

I'm working on an in-house TCP server and TCP client. When there is 0% packet loss, the server and client work fine. However, when I have 20% or more packet loss, I am seeing duplicate TCP messages. I am receiving something like this:
Client <-- MessageA -- Server
Client -- MessageB --> Server
Client <-- MessageCMessageA -- Server
Is it possible that MessageA is not completely making it to the Client, it times out, TCP resends it, and then the original message finally makes it through and is received later by the Client?
My question is whether TCP works like that, and whether that's a possible scenario on a network with 20% packet loss or more.
Bare bones of how the client and server are sending/receiving data:
data = sock.recv(1024)    # read up to 1024 bytes
sock.send(payload)        # send() takes a bytes payload, not a byte count

No, it is not possible. TCP guarantees that it will either deliver data exactly once and in the order in which it was sent, or signal an error to the application. Hence, there is probably a bug in your code. The most likely culprit is that your code fails to deal with partial reads.
When you perform a write or send on a TCP socket, the TCP module will segment your data into as many packets as are needed. In the presence of packet loss, it is possible that some packets have arrived successfully but others must be resent. In that case, the corresponding read or recv will only receive part of the data — the remaining data will arrive in a subsequent read or recv. In other words, TCP does not preserve message boundaries.
Your code is probably interpreting such a split message as multiple messages. Make sure that you accumulate a full message in your buffer before attempting to parse it.
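One common fix, shown here as a minimal sketch (the framing format and helper names are illustrative, not from the question's code), is to length-prefix each message so the receiver knows exactly how many bytes make up one message:
import struct

def recv_exactly(sock, n):
    # Accumulate exactly n bytes, looping over partial reads.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def send_message(sock, payload):
    # Prefix each message with its length so the receiver can find the boundary.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_message(sock):
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)
With this framing, a message split across several TCP segments is reassembled before parsing, and two messages arriving back to back are no longer misread as one.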

Related

Python - Read remaining data from socket after TCP RST

I'm implementing a file transfer protocol with the following use case:
The server sends the file chunk by chunk inside several frames.
The client might cancel the transfer: for this, it sends a message and disconnects at TCP level.
What happens in that case on the server side (Python running on Windows) is that I catch a ConnectionResetError (this is normal, the client has disconnected the socket) while sending the data to the client. I want to read the last data sent by the client (the message used to abort the call), but calling mysocket.recv() still raises a ConnectionResetError.
With a Wireshark capture, I can clearly see that the message was properly sent by the client prior to the TCP disconnection.
Any ideas, folks? Thanks!
VR
In order to understand what to do about this situation, you need to understand how a TCP connection is closed (see, e.g. this) and how the socket API relates to a clean shutdown (without fail, see this).
Your client is most likely calling close to terminate the connection. The problem with this is that there may be unread data in the socket receive queue or data arriving shortly from the other end that you will no longer be able to read, which is basically an error condition. To signal to the other end that data sent cannot be delivered to the receiving application, a reset is sent (well, technically, "SHOULD be sent" as per the RFC) and the TCP connection is abnormally terminated.
You might think that enabling SO_LINGER will help (many, many bits have been spilt over this so I won't elaborate further), but it won't solve the problem of unread data by the client causing the reset.
The client needs to instead call shutdown(SHUT_WR) to indicate that it is done sending, and then continue to call recv() until it reads 0 bytes indicating the other side is done sending. You may then call close().
Note that the Python 2 socket documentation states that
Depending on the platform, shutting down one half of the connection can also close the opposite half (e.g. on Mac OS X, shutdown(SHUT_WR) does not allow further reads on the other end of the connection).
This sounds like a bug to me. To get around this, you would have to send your cancel message, then keep reading until you get 0 bytes so that you know the server received the cancel message. You may then close the socket.
The Python 3.8 docs make no such disclaimer.
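A minimal sketch of that client-side sequence, assuming an already-connected socket sock and a hypothetical CANCEL message:
import socket

def cancel_transfer(sock):
    sock.sendall(b"CANCEL")          # hypothetical application-level abort message
    sock.shutdown(socket.SHUT_WR)    # half-close: we are done sending
    # Drain until the server closes its side; recv() returning b"" means the
    # server has consumed our data (including the cancel message) and sent its FIN.
    while sock.recv(4096):
        pass
    sock.close()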

Reliable udp using tcp retransmit [closed]

I am the client receiver of UDP multicast data sent by a sender server (stock exchange data). I am continuously receiving a UDP multicast packet flow, sequentially numbered 1 to approximately 35,000,000, sent uniformly over a period of 6 hours. I need to ensure all packets up to, say, N are received before the set of N packets is processed, which happens periodically, roughly every 256 packets. In other words, I need reliable UDP.
Reliable UDP is mimicked using TCP retransmission. If any UDP packets are lost or not received, they are requested over TCP by specifying the missing packet range (starting number, ending number).
The sender keeps a record of all the packets (stock exchange data) it has sent via UDP multicast so far, and will resend over TCP only those packet numbers the receiver specifically requests. This is how the receiver achieves UDP reliability. The UDP drop ratio is very small (less than 0.001%), except when starting the UDP multicast in the middle of the day, in which case all previously sent UDP packets from 1 to some N need to be resent over TCP while live UDP multicast packets numbered N+1 onward are still being received. I can't ask the sender (the stock exchange) to change its protocol; it is fixed.
What is an efficient algorithm to implement this in terms of CPU usage?
The issue is speed (big-O complexity). I can write a naive algorithm using several nested loops, but it is not necessarily the best.
I am thinking of maintaining a number N which confirms I have received UDP packets 1 through N. Any packet number M that is not the next expected packet N+1 will be buffered, for say 256 packets, and then TCP will be used to request the missing numbers. Normal UDP reception then resumes from the last confirmed received number once the TCP request is filled.
Example:
Suppose the UDP packets received are in the sequence {1, 2, 3, 6, 7, 8, 9, 10, ...}.
After packet No. 3, the next packet received is No. 6, so packets 4 and 5 are missing.
The missing packets {4, 5} are requested using a TCP request({4 through 5}), and {6, 7, 8, 9, 10} are buffered. There is enough space on the 10GBaseT LAN card for buffering 35,000,000 packets.
So: receive UDP {1, 2, 3}, refill via TCP request {4, 5}, continue receiving UDP {6, 7, 8, 9, 10, ...}.
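A minimal sketch of this receiver-side bookkeeping in Python (parse_packet, request_retransmit, and process_batch are hypothetical helpers standing in for the wire format, the TCP request path, and the periodic processing):
def run_receiver(sock, request_retransmit, process_batch, batch_size=256):
    next_expected = 1      # all packets below next_expected are confirmed received
    pending = {}           # out-of-order packets keyed by sequence number
    batch = []
    while True:
        seq, payload = parse_packet(sock.recv(65535))
        if seq < next_expected or seq in pending:
            continue                           # duplicate; drop it
        pending[seq] = payload
        if len(pending) >= batch_size and next_expected not in pending:
            # A gap has persisted for a full batch: fetch the missing range over TCP.
            first_buffered = min(pending)
            for s, p in request_retransmit(next_expected, first_buffered - 1):
                pending[s] = p
        # Drain the contiguous prefix into the current batch.
        while next_expected in pending:
            batch.append(pending.pop(next_expected))
            next_expected += 1
            if len(batch) >= batch_size:
                process_batch(batch)
                batch = []
Tracking only next_expected plus a dict of pending packets keeps every step O(1) per packet (amortized), avoiding nested scans over the whole buffer.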
I assume since you are using multicast that there are going to be multiple receivers of this data? (Because if not, you'd probably be using unicast instead)
Therefore, if the receivers are going to have the option of requesting TCP retransmission of packets they didn't get, that means that the transmitting program will need to keep a copy of recently-sent UDP packets in memory, so that when it receives a retransmit-request, it will have the requested data available to retransmit. Assuming you're stamping each packet with a unique ID, it can store this data in a std::map or std::unordered_map or similar for quick lookup.
The real question becomes: how much of this old-packet data should the transmitter retain? Ideally it would retain all of it, because you never know how much a given receiver might have missed and might want to request; but that would require infinite memory, so it's not a realistic option. Probably the best you can do is decide how much RAM you're willing to tie up for this purpose, keep a count of the total number of bytes in your table, and when that count reaches the limit, start dropping the oldest packets from the table to keep its size under the limit.
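Here is a minimal Python sketch of such a size-capped cache (the answer mentions std::map in C++; an OrderedDict plays the same role here, and the 512 MB budget is an arbitrary assumption):
from collections import OrderedDict

class PacketCache:
    # Retains recently sent packets, evicting the oldest when over a byte budget.
    def __init__(self, max_bytes=512 * 1024 * 1024):
        self.packets = OrderedDict()   # seq -> payload, in send order
        self.total = 0
        self.max_bytes = max_bytes

    def store(self, seq, payload):
        self.packets[seq] = payload
        self.total += len(payload)
        while self.total > self.max_bytes:
            _, old = self.packets.popitem(last=False)  # drop the oldest packet
            self.total -= len(old)

    def get_range(self, first, last):
        # Returns whatever is still cached in [first, last]; entries already
        # evicted can no longer be retransmitted.
        return [(s, self.packets[s]) for s in range(first, last + 1) if s in self.packets]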
I wrote an open-source library that uses essentially the technique you describe (multicast UDP + TCP-retransmit-to-recover-from-packet-loss) to synchronize databases across multiple hosts as quickly as possible; some things I learned while implementing it include:
If/when you can, pack your data-messages together into larger packets, up to the MTU of the network you are transmitting over (e.g. 1388 bytes for IPv4/Ethernet). Very small packet-sizes (like 48-bytes/packet) are inefficient, since the fixed-sized packet-headers make up a greater percentage of the total data sent/received.
Only try to send when your sending-socket indicates it is ready-for-write. (i.e. don't assume that you will never fill up the socket's outgoing-data-buffer; if your traffic is "bursty", you probably will at some point)
Minimize UDP packet loss by making your UDP sockets' send and receive buffers as large as you can get away with
Further minimize UDP packet loss by doing all of the UDP receiving in a dedicated, high-priority thread, which can then route the received UDP data to a normal-priority thread for further processing. The main thing is to avoid letting the receiving UDP socket's incoming-data buffer overflow, if possible (see the sketch after this list).
For the TCP retransmission part, keep in mind that TCP streams can potentially slow down to nearly zero bytes-per-second in the worst case scenario, which makes it important to ensure that poor TCP performance to client A doesn't block the TCP communications to/from clients B, C, D, etc. This can be accomplished either via non-blocking I/O and select() (or poll() or similar), or asynchronous networking, or via multiple threads; avoid blocking I/O unless you are implementing a thread-per-socket model (and probably avoid that model as well, since a thread that is indefinitely-blocked-inside-recv() is difficult to shut down cleanly)
Think about under what circumstances (if any) it is acceptable for a client to never receive a particular packet at all; are there situations where that is okay? Or must the entire system grind to a halt until every receiver has received every packet in the group, regardless of how long that might take?
If you want to get really fancy, you can look into Forward Error Correction algorithms that encode data across packets, such that the receiver can still decode all of the data even if it never receives (up to a certain percentage of) the packets. This makes the need for a re-transmit request less likely, at the cost of making all of the packets slightly larger.
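As a sketch of the buffer-sizing and dedicated-receive-thread points above, in Python: the 8 MB buffer request and the queue-based hand-off are assumptions, and the OS may grant less than the requested SO_RCVBUF.
import socket
import threading
import queue

def start_udp_receiver(bind_addr=("", 12345), rcvbuf=8 * 1024 * 1024):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Ask the kernel for a large receive buffer; it may grant less than requested.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf)
    sock.bind(bind_addr)
    q = queue.Queue()

    def recv_loop():
        # Tight loop whose only job is to drain the socket so the kernel buffer
        # never overflows; processing happens on the consumer thread.
        # (True high-priority scheduling is platform-specific and not shown here.)
        while True:
            q.put(sock.recvfrom(65535))

    threading.Thread(target=recv_loop, daemon=True).start()
    return q   # the consumer thread calls q.get() to process each datagram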

Is it possible to force a TCP socket to send 0 bytes in case of packet losses - python

I am writing a simple client-server program to transfer a file over TCP. I want to implement it using Python socket programming, following this example. I am wondering if it is possible to control packet loss and retransmission. For instance, can I send zero bytes in place of the lost data when packets are dropped, instead of retransmitting the actual lost segments?
No. A recv of 0 bytes is the indicator that the peer has closed the socket cleanly, and that signal is part of the TCP protocol, so you cannot repurpose it. What you're trying to do is alter the TCP protocol, which already handles packet loss for you. You can certainly send data that indicates packet loss, but it would only be received after the retransmission attempts for the previously lost data had completed, so it would not help you. It sounds like what you're trying to do would be better suited to UDP with your own logic around packet loss, but then you have to handle out-of-order data as well.

Receiving multicast data with socket recvfrom in Python

I have a multicast server sending data that must be captured by a Python client. The problem is that recvfrom does not receive any data, or at best receives the first packet and seems to cache it. If I use recvfrom in a loop, then my data is received correctly.
My question is: why do I have to use recvfrom in a loop to get the expected behavior?
from socket import *

s = socket(AF_INET, SOCK_DGRAM)
s.bind(('172.30.102.141', 12345))
m = s.recvfrom(1024)
print(m[0])
# sleep for x seconds here
m = s.recvfrom(1024)
print(m[0])
# prints the exact same thing as previously...
One thing is for sure: multicast is basically sending UDP packets, and you have to keep listening for new packets. The same is true even for TCP-based communication.
When you use a low-level interface for network communication, like socket, it's up to both sides to define an application-level protocol.
That means you define how the receiving party concludes that a message is complete. A message can get split into multiple parts/packets on its way through the network, so the receiving side has to assemble them properly and then check whether the message is whole. After that, you push it up through your message-processing pipeline, or whatever you do on the receiving side.
When using UDP, the receiving side doesn't know whether any packet is on its way, so a single recvfrom of 1024 bytes simply returns one datagram and finishes. It doesn't know, and should not care, whether more data is on its way; it's up to you to take care of that.
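For completeness, here is a minimal receive loop that also joins the multicast group, which the snippet in the question does not do; the group address 224.1.1.1 and the port are placeholders:
import socket
import struct

GROUP, PORT = '224.1.1.1', 12345   # placeholder multicast group and port

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('', PORT))
# Join the multicast group on all interfaces.
mreq = struct.pack('4s4s', socket.inet_aton(GROUP), socket.inet_aton('0.0.0.0'))
s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:                        # each recvfrom returns exactly one datagram
    data, addr = s.recvfrom(1024)
    print(data)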

How to know the status of tcp connect in python?

In Python, a TCP connect returns success even though the connection request may still be sitting in the queue at the server end. Is there any way for the client to know whether the server has actually called accept, or the connection is still queued?
The problem is not related to Python but is caused by the underlying socket machinery, which does its best to hide low-level network events from the program. The best I can imagine would be to try a higher-level protocol handshake (send a hello string and set a timeout for receiving the answer), but it could not distinguish between the following problems:
connection is queued on the peer and still not accepted
connection has been accepted, but for some other reason the server could not process it in the allocated time
(only if the timeout is very short) congestion on the machines (including the sender) and network added a delay greater than the timeout
My advice is simply that you do not even want to worry about such low-level details. As problems can arise server-side after the connection has been accepted, you will have to deal with possible higher-level protocol errors, timeouts, or connection loss anyway. Just treat a timeout after the connection has been accepted and a timeout waiting for the connection to be accepted as the same thing.
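A sketch of the hello-string probe suggested above, with a hypothetical HELLO/OK exchange; as noted, a timeout here cannot tell you which of the three cases occurred:
import socket

def probe_server(host, port, timeout=2.0):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        s.sendall(b"HELLO\n")             # hypothetical application-level greeting
        try:
            return s.recv(16) == b"OK\n"  # hypothetical expected reply
        except socket.timeout:
            return False                  # queued, overloaded, or slow: can't tell which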
If connect returns and there is no error, the TCP 3-Way Handshake has taken place successfully.
Client: connect sends a SYN (and blocks)
Server: (blocking on accept) sends a SYN,ACK
Client: connect sends an ACK
After step 3, connect gives control back to you on the client side, and accept also gives control back to the caller on the server side.
Of course, if the server is fully loaded, there is no guarantee that the wake-up of accept means actual processing of the request, but the fact that connect has woken up and returned with no error is a guarantee of having successfully set-up the TCP connection.
Packets can be sent.
For a good explanation, see for example:
https://www.ibm.com/developerworks/aix/library/au-tcpsystemcalls/index.html
and head to the "The 3-way TCP handshake" section.
