Python Sockets - Data is being sent way too fast - python

I'm using Python sockets to send data across; however, whenever I send data to the client it seems to miss some of it, unless I'm debugging (which lets me pause execution when needed).
Server snippet:
def send_file(client_socket: socket):
    with open('client.py', 'rb') as file:
        while True:
            read_data = file.read()
            client_socket.sendall(read_data)
            if not read_data:
                client_socket.sendall('End'.encode())
                break
    print('Finished')
The server reports that it has finished and sent the 'End' message, but my client seems to hang listening for more data, even though I thought adding an end message would help.
Client Snippet:
with open('test.txt', 'wb') as file:
    while True:
        received_bytes = sock.recv(BUFFER_SIZE)
        if received_bytes == b'End':
            break
        file.write(received_bytes)
# TODO: Restart client program
What am I doing wrong here?

received_bytes = sock.recv(BUFFER_SIZE)
reads BUFFER_SIZE bytes or fewer. Depending on BUFFER_SIZE and the message you send, End might arrive as part of a larger received_bytes rather than as the whole of it, or even be split across subsequent reads. Consider the following example: say your message is HELLOWORLD, so you send HELLOWORLDEnd, and BUFFER_SIZE equals 3. The successive values of received_bytes are then
HEL
LOW
ORL
DEn
d
The last chunk is clearly not End. You do not need a special way of signalling the end; see this part of the socket example from the docs:
while True:
    data = conn.recv(1024)
    if not data: break
    conn.sendall(data)
In your case this means not sending End and replacing
if received_bytes == b'End':
with
if not received_bytes:
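What makes this work is that the server closing the connection is what causes recv() on the client to return an empty bytes object. A minimal sketch of the fixed pair under that assumption (sock, BUFFER_SIZE and the file names are taken from the question; the 4096-byte chunked read is my own choice):

# Server: send the file in chunks, then close the socket to signal end of data.
def send_file(client_socket):
    with open('client.py', 'rb') as file:
        while True:
            read_data = file.read(4096)
            if not read_data:
                break
            client_socket.sendall(read_data)
    client_socket.close()              # recv() on the client now returns b''
    print('Finished')

# Client: keep reading until recv() returns an empty bytes object.
with open('test.txt', 'wb') as file:
    while True:
        received_bytes = sock.recv(BUFFER_SIZE)
        if not received_bytes:         # the server has closed the connection
            break
        file.write(received_bytes)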
EDIT after the comment "the server cannot send an empty message, and so the client is still listening for data even though it has all already been sent":
If you must use an end-of-message marker AT ANY PRICE, then consider using a single byte for that purpose, one which never appears in your message. For example, if you pick \x00 for this purpose, you might then do
received_bytes = sock.recv(BUFFER_SIZE)
file.write(received_bytes.rstrip(b'\x00'))
if received_bytes.endswith(b'\x00'):
    break
The .endswith and .rstrip methods work for bytes the same way as for strs. So this code writes received_bytes without its trailing \x00 (note that .rstrip does not modify received_bytes), and then, if received_bytes ends with that byte, breaks.
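Because the marker is a single byte, it can never be split across two recv() calls, which is exactly what made the multi-byte 'End' string unreliable. On the sending side the marker simply replaces the 'End' string, e.g. (a sketch; SENTINEL is a name I introduce):

SENTINEL = b'\x00'
# ...after the whole file has been sent:
client_socket.sendall(SENTINEL)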

Related

Python file transfer (tcp socket), problem with slow network

I set up a secure socket using Tor and SOCKS, but I'm facing a problem when sending large amounts of data.
Sender:
socket.send(message.encode())
Receiver:
chunks = []
while 1:
    part = connection.recv(4096)
    chunks.append(part.decode())
    if len(part) < 4096:
        break
response = "".join(chunks)
Since the network speed is not consistent, in a loop I don't always fill the 4096-byte buffer, so the loop breaks and I don't receive the full data.
Lowering the buffer size doesn't seem to be an option because the "packet" size can sometimes be as low as 20 bytes.
TCP can split your data into any number of pieces it wants, so you should never rely on the size of the chunks the other end of a socket receives. You have to use another mechanism for detecting end of message/end of file.
If you are going to send only one blob and then close the socket, then on the receiving side you just read until recv() returns an empty (falsy) value:
while True:
    data = sock.recv(1024)
    if data:
        print(data)
        # continue
    else:
        sock.close()
        break
If you are going to send multiple messages, you have to decide what the separator between them will be. For text protocols it is a good idea to use a line ending; you can then enjoy the power of the Twisted LineReceiver protocol and others.
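For illustration, a minimal sketch of newline-delimited framing done by hand (this is only the idea behind LineReceiver, not Twisted's actual API; sock is assumed to be a connected socket):

def recv_line(sock, buffer=b''):
    # Read until a b'\n' is seen; return (line, leftover) so bytes that
    # belong to the next message are not thrown away.
    while b'\n' not in buffer:
        data = sock.recv(4096)
        if not data:
            raise EOFError('socket closed before a full line arrived')
        buffer += data
    line, _, rest = buffer.partition(b'\n')
    return line, rest

The leftover returned by one call is passed back in on the next call, so a read that straddles two messages loses nothing.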
If you are doing a binary protocol, it's common practice to prefix each message with its size as a byte/word/dword.
Try using struct to pass the length of the incoming data to the receiver first ("import struct"). That way the receiving end knows exactly how much data to receive. In this example bytes are being sent over the socket; I've borrowed the examples here from my github upload github.com/nsk89/netcrypt for reference and cut the encryption steps out of the send function, as well as its sending of a serialised dictionary.
Edit: I should also clarify that when you send data over the socket, especially if you're sending multiple messages, they all sit in the stream as one long message. Not every message is 4096 bytes in length. If one is 2048 bytes long and the next is 4096, and you receive 4096 into your buffer, you'll receive the first message plus half of the next message, or hang completely waiting for more data that doesn't exist.
import struct

data_to_send = struct.pack('>I', len(data_to_send)) + data_to_send  # pack the length of the data into the first four bytes of the stream; '>I' is network byte order
socket_object.sendall(data_to_send)  # transport data

def recv_message(socket_object):
    raw_msg_length = recv_all(socket_object, 4)  # receive the first 4 bytes of the stream
    if not raw_msg_length:
        return None
    # unpack the first 4 bytes using network byte order to retrieve the incoming message length
    msg_length = struct.unpack('>I', raw_msg_length)[0]
    return recv_all(socket_object, msg_length)  # recv the rest of the stream up to the message length

def recv_all(socket_object, num_bytes):
    data = b''
    while len(data) < num_bytes:  # while the amount of data received is less than the message length
        packet = socket_object.recv(num_bytes - len(data))  # recv the remaining bytes of the message
        if not packet:
            return None
        data += packet
    return data
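For symmetry, the two send lines above could be wrapped in a helper of the same shape (the helper name is my own, not taken from the linked repository):

def send_message(socket_object, payload):
    # prefix the payload with its length as a 4-byte network-order integer
    socket_object.sendall(struct.pack('>I', len(payload)) + payload)

# one peer sends...
send_message(socket_object, b'hello over the wire')
# ...and the other gets exactly that payload back, however TCP chunked it
message = recv_message(socket_object)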
By the way, there is no need to decode every part before combining them into a chunk; combine all the parts first and then decode the whole thing.
For your situation, the better way is to use 2 steps.
Step 1: the sender sends the size of the message; the receiver takes this size and gets ready to receive the message.
Step 2: the sender sends the message; the receiver combines the data if necessary.
Sender
# Step 1
socket.send(str(len(message.encode())).encode())
# Step 2
socket.send(message.encode("utf-8"))
Receiver
# Step 1
message_size = connection.recv(1024)
print("Will receive message size:", message_size.decode())
# Step 2
received_size = 0
received_data = b''
while received_size < int(message_size.decode()):
    part = connection.recv(1024)
    received_size += len(part)
    received_data += part
else:
    print(received_data.decode("utf-8", "ignore"))
    print("message receive done ....", received_size)

TCP socket reads out of turn

I am using TCP with Python sockets, transferring data from one computer to another. However, the recv command reads more than it should on the server side, and I could not find the issue.
client.py
while rval:
    image_string = frame.tostring()
    sock.sendall(image_string)
    rval, frame = vc.read()
server.py
while True:
    image_string = ""
    while len(image_string) < message_size:
        data = conn.recv(message_size)
        image_string += data
The length of each message is 921600 (message_size), so it is sent with sendall; however, when I print the lengths of the arrived messages on the receiving side, they are sometimes wrong and sometimes correct.
921600
921600
921923 # wrong
922601 # wrong
921682 # wrong
921600
921600
921780 # wrong
As you see, the wrong arrivals have no pattern. As I use TCP, I expected more consistency; however, it seems the buffers get mixed up and part of the next message is somehow received as well, producing a longer message. What is the issue here?
I tried to add just the relevant part of the code, I can add more if you wish, but the code performs well on localhost but fails on two computers, so there should be no errors besides the transmitting part.
Edit1: I inspected this question a bit; it mentions that all the send commands in the client may not be received by a single recv in the server, but I could not understand how to apply this in practice.
TCP is a stream protocol. There is ABSOLUTELY NO CONNECTION between the sizes of the chunks of data you send, and the chunks of data you receive. If you want to receive data of a known size, it's entirely up to you to only request that much data: you're currently requesting the total length of the data each time, which is going to try to read too much except in the unlikely event of the entire data being retrieved by the first .recv() call. Basically, you need to do something like data = conn.recv(message_size - len(image_string)) to reflect the fact that the amount of remaining data is decreasing.
Think of TCP as a raw stream of bytes. It is your responsibility to track where you are in the stream and interpret it correctly. Buffer what you read and only extract what you currently need.
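A minimal sketch of the receiving loop with that count-down applied (message_size and conn are taken from the question; the accumulator is bytes rather than str):

while True:
    image_string = b''
    while len(image_string) < message_size:
        # only ask for however much of the current frame is still missing
        data = conn.recv(message_size - len(image_string))
        if not data:
            break                       # the sender closed the connection
        image_string += data
    if len(image_string) < message_size:
        break                           # incomplete frame at end of stream
    # image_string now holds exactly message_size bytes of one frame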
Here's an (untested) class to illustrate:
class Buffer:
    def __init__(self, socket):
        self.socket = socket
        self.buffer = b''

    def recv_exactly(self, count):
        # Could return less if socket closes early...
        while len(self.buffer) < count:
            data = self.socket.recv(4096)
            if not data: break
            self.buffer += data
        ret, self.buffer = self.buffer[:count], self.buffer[count:]
        return ret
The recv always requests the same amount of data and queues it in a buffer. recv_exactly only returns the number of bytes requested and leaves any extra in the buffer.
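Applied to the question's fixed-size frames, usage might look like this (conn and message_size as in the question):

buf = Buffer(conn)
while True:
    image_string = buf.recv_exactly(message_size)
    if len(image_string) < message_size:
        break       # the socket closed before a full frame arrived
    # process exactly one message_size-byte frame here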

Reading bytes from network socket in Python

I want to write a Python program that gets data from a network socket and then scans the data looking for particular sequences of data.
The 'getting from the network' bit works fine, and I can dump the retrieved data to a file with no problem, but trying to get Python to actually scan the data one byte at a time is just not working.
Whenever I put code into the 'for byte' loop to try to work with the data, not much of anything happens.
When I run the program below, the size of byte.out is usually twice the size of buf.out, which I think is a major symptom pointing to what has gone wrong. If the inner loop were really dealing with the data byte by byte, I would expect both output files to be the same size.
My feeling is that there is something wrong with "for byte in chr(buf):" but I really don't know what to put here.
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
fh1 = open("buf.out", 'wb')
fh2 = open("byte.out", 'wb')
s.connect(("obscured.url", 9999))
s.send('GET /xx HTTP/1.1\nHost obscured.url:9999\n\n')
for i in range(10):
    buf = s.recv(1024)
    for byte in chr(buf):
        print >>fh2, byte
    print >>fh1, buf
s.close
chr(buf) should give a TypeError. Use
for byte in buf:
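In Python 2 (which this is, given the print >> syntax), iterating over the str returned by recv() yields one-character strings. Note also that print appends a newline after each value, which would explain byte.out coming out roughly twice the size of buf.out. A sketch of the loop writing the bytes verbatim instead:

for i in range(10):
    buf = s.recv(1024)
    for byte in buf:        # each `byte` is a one-character str in Python 2
        fh2.write(byte)     # write the byte itself, with no added newline
    fh1.write(buf)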

How to read Python socket recv

I'm attempting to send an HTTP Request to a website and read the data it returns. The first website I tried worked successfully. It returned about 4 packets of data and then returned a 0 packet which the script caught and terminated.
However, attempting to load http://www.google.com/ does not work this way. Instead, it returns about 10 packets of the same length, a final smaller packet, and then proceeds to time out. Is it normal for this to happen? Does it all just depend on what server the host is using?
If anyone could recommend an alternative way of reading with socket.recv() that takes into account that a final null packet is not always sent, it would be greatly appreciated. Thanks.
try:
    data = s.recv(4096)
    while True:
        more = s.recv(4096)
        print len(more)
        if not more:
            break
        else:
            data += more
except socket.timeout:
    errMsg = "Connection timed-out while connecting to %s. Request headers were as follows: %s", (parsedUrl.netloc, rHeader.headerContent)
    self.logger.exception(errMsg)
    raise Exception
For HTTP, use requests rather than writing your own.
> ipython
In [1]: import requests
In [2]: r = requests.get('http://www.google.com')
In [3]: r.status_code
Out[3]: 200
In [4]: r.text[:80]
Out[4]: u'<!doctype html><html itemscope="itemscope" itemtype="http://schema.org/WebPage">'
In [5]: len(r.text)
Out[5]: 10969
TCP does not give you "packets", but a sequential stream of bytes sent from the other side. It is a stream. recv() gives you whatever chunks of that stream are currently available. You stitch them back together and parse the stream content.
HTTP is a rather involved protocol to work out by hand, so you probably want to start with an existing library like httplib instead.
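For example, a minimal httplib sketch (Python 2 standard library; the same class lives in http.client on Python 3):

import httplib

conn = httplib.HTTPConnection('www.google.com')
conn.request('GET', '/')
response = conn.getresponse()       # httplib deals with Content-Length and chunking
body = response.read()
print response.status, len(body)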
It could be that Google uses Keep-Alive to keep the socket open in order to serve a further request. This would require parsing the headers and reading the exact number of bytes.
Depending on which version of HTTP you use, you have to add Connection: Keep-Alive to your headers or not. (This might be the simplest solution: just use HTTP/1.0 instead of 1.1.)
If you use that feature nevertheless, you would have to receive your first chunk of data and
check whether there is a '\r\nContent-Length: ' inside; if so, take the bytes between that and the next '\r\n' and convert them to a number. That is your size.
Then look for a '\r\n\r\n' in your data. If present, that is the end of your header. From there, you must read the exact number of bytes mentioned above.
Example:
import socket
s = socket.create_connection(('www.google.com', 80))
s.send("GET / HTTP/1.1\r\n\r\n")
x = s.recv(10000)
poscl = x.lower().find('\r\ncontent-length: ')
poseoh = x.find('\r\n\r\n')
if poscl < poseoh and poscl >= 0 and poseoh >= 0:
    # found CL header
    poseocl = x.find('\r\n', poscl+17)
    cl = int(x[poscl+17:poseocl])
    realdata = x[poseoh+4:]
Now, you have the content length in cl and the (start of the) payload data in realdata. The number of bytes missing from this request is missing = cl - len(realdata). If it is 0, you've got everything; if not, do s.recv(missing) and recalculate missing until it is 0.
The code above is a simple start of the job to be done; there are some places where you might need to recv() further before you can proceed.
This is quite complicated. By far easier ways would be
to use HTTP 1.1's Connection: close header in the request,
to use HTTP 1.0,
or to use one of the libraries crafted for this task rather than reinventing the wheel.
A sketch of the first two options follows below.
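A minimal sketch of that simpler path, assuming the server honours Connection: close (bytes literals so the same code runs on Python 2 and 3):

import socket

s = socket.create_connection(('www.google.com', 80))
# HTTP/1.0, plus an explicit Connection: close, makes the server close the
# connection once the full response has been sent, so recv() finally returns b''.
s.sendall(b'GET / HTTP/1.0\r\nHost: www.google.com\r\nConnection: close\r\n\r\n')
chunks = []
while True:
    chunk = s.recv(4096)
    if not chunk:
        break
    chunks.append(chunk)
s.close()
response = b''.join(chunks)     # status line + headers + body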

python socket: sending and receiving 16 bytes

See edits below.
I have two programs that communicate through sockets. I'm trying to send a block of data from one to the other. This has been working with some test data, but is failing with others.
s.sendall('%16d' % len(data))
s.sendall(data)
print(len(data))
sends to
size = int(s.recv(16))
recvd = ''
while size > len(recvd):
    data = s.recv(1024)
    if not data:
        break
    recvd += data
print(size, len(recvd))
At one end:
s = socket.socket()
s.connect((server_ip, port))
and the other:
c = socket.socket()
c.bind(('', port))
c.listen(1)
s,a = c.accept()
In my latest test, I sent a 7973903 byte block and the receiver reports size as 7973930.
Why is the data block received off by 27 bytes?
Any other issues?
Python 2.7 or 2.5.4 if that matters.
EDIT: Aha - I'm probably reading past the end of the send buffer. If remaining bytes is less than 1024, I should only read the number of remaining bytes. Is there a standard technique for this sort of data transfer? I have the feeling I'm reinventing the wheel.
EDIT2: I'm screwing up by reading the next file in the series. I'm sending file1 and the last block is 997 bytes. Then I send file2, so the recv(1024) at the end of file1 reads the first 27 bytes of file2.
I'll start another question on how to do this better.
Thanks everyone. Asking and reading comments helped me focus.
First, the line
size = int(s.recv(16))
might read less than 16 bytes — it is unlikely, I will grant, but possible depending on how the network buffers align. The recv() call argument is a maximum value, a limit on how much data you are willing to receive. But you might only receive one byte. The operating system will generally give you control back once at least one byte has arrived, maybe (depending on the OS and on how busy the CPU is) after waiting another few milliseconds in case a second packet arrives with some further data, so that it only has to wake you up once instead of twice.
So you would want to say instead (to do the simplest possible loop; other variants are possible):
data = ''
while len(data) < 16:
    more = s.recv(16 - len(data))
    if not more:
        raise EOFError()
    data += more
This is indeed a wheel nearly everyone re-invents because it is so often needed. And your own code needs it a second time: your while loop needs its recv() to count down, asking for smaller and smaller limits until finally it has received exactly the number of bytes that were promised, and no more.
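A minimal sketch of how both reads could share one helper (the helper name is mine, not from the question; s is the connected socket):

def recv_exactly(sock, count):
    # Keep calling recv() until exactly `count` bytes have arrived.
    data = ''
    while len(data) < count:
        more = sock.recv(count - len(data))
        if not more:
            raise EOFError('socket closed with %d bytes still missing' % (count - len(data)))
        data += more
    return data

size = int(recv_exactly(s, 16))     # the fixed-width 16-byte length prefix
payload = recv_exactly(s, size)     # then exactly that many payload bytes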
