Sending images over socket in Python 3

I am trying to send an image (screenshot) over a socket from the client to the server. In Python 2 I was able to use the read() and write() functions to read and write binary data, as well as StringIO. But all of them disappeared in Python 3. I was playing around with PIL, but I can't get the test program running.
CLIENT
image = ImageGrab.grab()
s.send(image.tobytes())
I create a screenshot using ImageGrab and save it as image. After that I send the image as binary data over the socket to the server.
SERVER
data = conn.recv(4194304)
img = Image.frombytes('RGB', (1366, 768), data)
img.save('screenshot.jpg')
However, if I run the script I get an error message:
ValueError: not enough image data
I think I'm missing something decisive, but I can't figure it out. Thank you, chrizator.

It's likely that the call to .recv() is returning before all the data has been retrieved; the parameter is a maximum size, not an exact size. You'll need to call .recv() in a loop and append the data until the entire image has been received. This implies that you'll need some way to know WHEN all the data has arrived - common strategies for this are:
Keep reading until you see some particular terminating character or character sequence. Not directly applicable in this case, since the raw image data could accidentally contain any particular sequence of bytes whatsoever.
Send the length (perhaps as a decimal number with a terminator, or a fixed-size binary value) ahead of the data; keep reading until you've received that many bytes (see the sketch after this list).
Close the socket after sending the data; keep reading until you get a zero-byte result.
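A minimal sketch of the length-prefix approach, assuming a 4-byte big-endian length header; the helper names (send_image, recv_exact, recv_image) are illustrative, not from the original code:

import struct

def send_image(sock, payload):
    # Prefix the payload with its length as a 4-byte big-endian integer.
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exact(sock, n):
    # Keep calling recv() until exactly n bytes have been collected.
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError('socket closed before all data arrived')
        chunks.append(chunk)
        n -= len(chunk)
    return b''.join(chunks)

def recv_image(sock):
    # Read the 4-byte header first, then exactly that many payload bytes.
    (length,) = struct.unpack('>I', recv_exact(sock, 4))
    return recv_exact(sock, length)

The client would call send_image(s, image.tobytes()) and the server data = recv_image(conn) before Image.frombytes('RGB', (1366, 768), data).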

Related

(Python) How does one implement a buffer to hold bytes from a socket?

I'm new to using buffers and I've just set up a server and client. When I send data over a socket, my understanding is that it is sent as bytes. I'd like to send some arbitrary list of values between 1 and 4096 using, e.g., _conn.sendall(str(list).encode()). However, I don't know how to handle the data on the client side to put it into a buffer. My first attempt was using the io module and creating a buffer like buffer = io.BytesIO() and then writing to it from the socket.
data = socket.recv(buffer_size)
buffer.write(bytes(data))
However, I'm not sure if this is correct or if the buffer is written to at all. I've tried printing out the buffer by using print(data.getbuffer()), but nothing prints on my terminal, not even if I set up an exception handler.
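For reference, a minimal sketch of accumulating socket data into an io.BytesIO until the peer closes the connection (the function name and parameters are illustrative, not from the post):

import io

def fill_buffer(conn, buffer_size=4096):
    # Accumulate everything the peer sends until it closes the connection.
    buffer = io.BytesIO()
    while True:
        data = conn.recv(buffer_size)
        if not data:       # an empty result means the peer closed the socket
            break
        buffer.write(data)
    buffer.seek(0)         # rewind so the contents can be read from the start
    return buffer

Note that getbuffer() is a method of the BytesIO object, so it would be buffer.getbuffer() rather than data.getbuffer(); the bytes object returned by recv() has no such method and would raise an AttributeError.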

PySerial seems to have a read() limit

I'm trying to send a large stream of data via serial, and using the PySerial library for my purposes. However, the program freezes whenever I need to read more than somewhere between 10,000 and 15,000 bytes.
s = ser.read(10000) works, however s = ser.read(15000) does not.
I've tried flushing the buffer, I've tried reading one byte at a time in a loop, and I've even tried opening and closing the port after calls for fewer bytes to try to receive all the data I'm sending - but no luck.
Any idea how to receive more data without the program freezing?
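If the freeze comes from a blocking read waiting for bytes that never arrive, one common workaround is to set a timeout on the port and accumulate smaller reads until the expected count is reached. A minimal sketch, assuming a hypothetical port name and baud rate:

import serial  # pyserial

def read_exact(ser, n):
    # Accumulate reads until n bytes arrive; with a timeout set, ser.read()
    # returns whatever was received up to the requested size.
    buf = bytearray()
    while len(buf) < n:
        chunk = ser.read(n - len(buf))
        if not chunk:  # timeout with no data - stop instead of blocking forever
            break
        buf.extend(chunk)
    return bytes(buf)

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)  # placeholders, adjust to your setup
data = read_exact(ser, 15000)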

Socket loses part of message in UDP

I am trying to send an image frame through a UDP socket with Python 2.7. The current frame I am trying to send is 921600 bytes (640 x 480), and the buffer limit for UDP messages is 65507 bytes, so I need to split the message. Here is how I am doing it.
From client.py:
image_string = frame.tostring()  # frame is a multi-d numpy array
message_size = len(image_string)
sock.sendto(str(message_size), (HOST, PORT))  # First send the size
for i in xrange(0, message_size, 65507):  # Split it and send
    sock.sendto(image_string[i:i + 65507], (HOST, PORT))
sock.sendto("\n", (HOST, PORT))  # Mark the end to avoid hanging.
Here is how I am receiving it in server.py; I inserted some prints for debugging.
image_string = ""
data, addr = sock.recvfrom(1024) # recieve image size
message_size = int(data)
print "Incoming image with size: " + data
for i in xrange(0, message_size, 65507):
data, addr = sock.recvfrom(65507)
image_string += data.strip()
print "received part, image is now:", len(image_string)
print "End of image"
So I am reading the message the same way I send it. It checks out in theory, but not in practice: possibly because of some packet loss, after the client is done sending, the server is still stuck trying to read (blocked).
I know that UDP is unreliable and hard to work with, however I read that UDP is used in many video streaming applications, so I believe there should exist a solution to this problem, but I can not find it.
All help is appreciated, thanks.
Edit1: The reason I suspect packet loss is the problem is that every time I run the test, I end up with a different amount of the image having arrived before the server hangs.
Edit2: I forgot to mention that I tried different chunk sizes while partitioning; 1024 and 500 bytes revealed no difference (5-20 bytes lost out of 921600). But I should mention that I am sending and receiving from localhost, which should already keep errors to a minimum.
I know that UDP is unreliable and hard to work with, however I read that UDP is used in many video streaming applications, so I believe there should exist a solution to this problem, but I can not find it.
Those guys can. They design their protocols knowing that data may be lost (or, on the contrary, arrive multiple times) and may arrive out of order, and their protocols/applications expect that.
You can not simply cut your data into pieces and send them with UDP. You have to form each individual message in a way that it has a meaning on its own. If it is a "stream", it has to contain where that particular piece of data is located in the stream, so that when your application receives it, it knows whether the given piece of data can be handled, whether it is obsolete (arrived too late, or has arrived already), whether it should be put aside in the hope that some preceding parts will still arrive, or whether it is perhaps so unusable on its own that the application should send a direct request to the sender in order to get things synchronized again.
In the case of transferring an image - or a series of images - you could send the offset of the data, and simply overwrite the given offset in a fixed-size buffer (one that can hold an entire image) whenever something is received, and render the result. Then the buffer would always contain some image, or at least a mixture of several images - or, in extremely lucky cases, a single, "real" image.
EDIT: an example of 'evaluating' what to do with a packet: besides the offset, the number ('timestamp') of the image could be there too, and then the application could avoid overwriting a newer part of the image with something old, should some packet from the past (re)appear for any reason.
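A minimal sketch of that idea, assuming each datagram starts with a frame number and a byte offset; the chunk size, header layout and names are assumptions for illustration:

import struct

CHUNK = 60000                    # payload bytes per datagram, under the UDP limit
HEADER = struct.Struct('>II')    # (frame_number, byte_offset), big-endian

def send_frame(sock, addr, frame_no, frame_bytes):
    # Every datagram carries enough context to be placed on its own.
    for offset in range(0, len(frame_bytes), CHUNK):
        header = HEADER.pack(frame_no, offset)
        sock.sendto(header + frame_bytes[offset:offset + CHUNK], addr)

def receive_chunk(sock, buffer, newest_frame):
    # Write whatever arrives into a fixed-size buffer at the offset it names,
    # ignoring datagrams that belong to an older frame than the newest seen.
    data, _ = sock.recvfrom(CHUNK + HEADER.size)
    frame_no, offset = HEADER.unpack_from(data)
    if frame_no >= newest_frame:
        payload = data[HEADER.size:]
        buffer[offset:offset + len(payload)] = payload
        newest_frame = frame_no
    return newest_frame

Here buffer would be, for example, a bytearray(921600) that is rendered after each update; lost packets then only leave stale regions instead of blocking the receiver.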
Why do you use the maximum buffer limit? The largest payload you can reliably send in a single UDP datagram without risking IP fragmentation is about 508 bytes (the 576-byte minimum reassembly size minus the maximum IP and UDP headers). Sending more than that may cause fragmentation. If you are concerned about data loss, use TCP.

PyVISA read closes before transfer has finished

I am writing code in Python to communicate with scopes through PyVISA.
Sometimes it happens that during the transfer of data from the scope to the PC over an Ethernet connection, not all the data is transferred.
I open the connection with the scope as a SOCKET connection, as indicated in the manual:
inst = visa.ResourceManager().open_resource("TCPIP0::<ip_address>::<port>::SOCKET")
Everything runs properly except for data transfer.
I ask for data via the command inst.write('channel1:data?') as reported in the manual and then read the data with inst.read(). But if I compare the number of points indicated in the data header with the length of the data array I obtain from the read() method, I get a different result: not all the data is transferred. I tried enabling termination characters for the read operations and they work, but when I read data I get a warning from VISA saying that the string does not end with any termination character.
Is there a way to tell PyVISA when to stop reading? Is there a way to make the read timeout longer?
Thanks
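One common approach (hedged, since the right commands depend on the instrument) is to raise the VISA timeout and let PyVISA parse the IEEE definite-length block header itself via query_binary_values, which keeps reading until the advertised number of bytes has arrived; the timeout value and datatype below are assumptions:

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("TCPIP0::<ip_address>::<port>::SOCKET")
inst.timeout = 30000             # milliseconds; give slow transfers time to finish
inst.read_termination = '\n'
inst.write_termination = '\n'

# For #<n><length><data> blocks, query_binary_values reads the header and then
# keeps reading until the advertised byte count has been received.
data = inst.query_binary_values('channel1:data?', datatype='B', container=bytearray)
print(len(data))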

"Message Must be multiple of 16" Encrypted Audio over the Network with Python

I am sending encrypted audio over the network in Python. The app works momentarily, then breaks, saying the message must be a multiple of 16.
Not sure what I am doing wrong, or where to look in the code to solve this.
I would appreciate any help you have to offer
EDIT: I believe I have it working now; if anyone is interested in taking a look, I made a Google Code project:
http://code.google.com/p/mii-chat/
msg = conn.recv(2024)
if msg:
    cmd, msg = ord(msg[0]), msg[1:]
    if cmd == CMD_MSG:
        listb1.insert(END, decrypt_my_message(msg.strip()) + "\n")
The snippet above from your code reads 2024 bytes of data (which isn't a multiple of 16) and then (if the "if" statements are True) calls decrypt_my_message with msg.strip() as the argument. decrypt_my_message then complains that it was given a string whose length wasn't a multiple of 16. (I'm guessing that this is the problem; have a look in the traceback to see if this is the line that causes the exception.)
You need to call decrypt_my_message with a string of length n*16.
You might need to rethink your logic for reading the stream - or have something in the middle to buffer the calls to decrypt_my_message into chunks of n*16.
I did a quick scan of the code. All messages are sent after being encrypted, so the total data you send is a multiple of 16, plus 1 for the command. So far, so good.
On the decrypting side, you strip off the command, which leaves you with a message that is a multiple of 16 again. However, you are calling msg.strip() before you call decrypt_my_message. It is possible that the call to strip corrupts your encrypted data by removing bytes from the beginning or the end.
I will examine the code further, and edit this answer if I find anything else.
EDIT:
You are using space character for padding, and I suppose you meant to remove the padding using the strip call. You should change decrypt_my_message(msg.strip()) to decrypt_my_message(msg).strip().
You are using TCP to send the data, so your protocol is bound to give you headaches in the long term. I always send the length of the payload in my messages with this sort of custom protocol, so the receiving end can determine if it received the message block correctly. For example, you could use: CMD|LEN(2)|PAYLOAD(LEN) as your data frame. It means, one byte for command, two more bytes to tell the server how many bytes to expect, and LEN bytes of actual message. This way, your recv call can loop until it reads the correct amount. And more importantly, it will not attempt to read the next packet when/if they are sent back-to-back.
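A minimal sketch of that CMD|LEN(2)|PAYLOAD framing, assuming a 1-byte command and a 2-byte big-endian length; the helper names are illustrative:

import struct

def send_message(sock, cmd, payload):
    # CMD (1 byte) | LEN (2 bytes, big-endian) | PAYLOAD (LEN bytes)
    sock.sendall(struct.pack('>BH', cmd, len(payload)) + payload)

def recv_exact(sock, n):
    # Loop until exactly n bytes have been read from the stream.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed the connection mid-frame')
        buf += chunk
    return buf

def recv_message(sock):
    cmd, length = struct.unpack('>BH', recv_exact(sock, 3))
    return cmd, recv_exact(sock, length)

The receiver then always hands decrypt_my_message a complete encrypted payload (a multiple of 16 bytes, since the sender encrypted it) no matter how TCP splits or merges the segments.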
Alternatively, if your packets are small enough, you could go for UDP. It opens up another can of worms, but you know that the recv will only receive a single packet with UDP.
