I receive data from a device via the socket module, but after some time the device stops sending packets, and then I want to break out of the for loop. A while True loop doesn't work, because it receives more than 100 packets. How can I stop this process? In the code below, s stands for the socket.
...
for i in range(packages100):
    data = s.recv(4)
    f.write(data)
...
Edit:
I think socket.settimeout() is part of the solution. See also:
How to set timeout on python's socket recv method?
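A minimal sketch of that approach, reusing s, f, and packages100 from the snippet above (the 2-second timeout is an arbitrary choice):

import socket

s.settimeout(2.0)  # give up if nothing arrives within 2 seconds
try:
    for i in range(packages100):
        data = s.recv(4)
        if not data:  # the device closed the connection
            break
        f.write(data)
except socket.timeout:
    pass  # the device went quiet; leave the loop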
If your peer really just stops sending data, as opposed to closing the connection, this is tricky, and you'll be forced to resort to reading from the socket asynchronously.
Put the socket in non-blocking mode (the docs and Google are your friends) and try to read it each time, instead of doing a blocking read. You can then just stop "trying" anytime you wish. Note that by the nature of async I/O your code will be a bit different: you will no longer be able to assume that once recv returns, it has actually read some data.
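A minimal sketch of that approach, assuming the conn and f names used below (in Python 3, a read on a non-blocking socket with nothing pending raises BlockingIOError):

conn.setblocking(False)  # switch the socket to non-blocking mode
while True:
    try:
        data = conn.recv(4)
    except BlockingIOError:
        # no data available right now; this is where you decide
        # whether to keep trying or to give up (sleep or select
        # here in real code instead of spinning)
        continue
    if not data:  # the peer closed the connection
        break
    f.write(data)

If the peer does close the connection instead of just going quiet, a plain blocking loop is enough, because recv() then returns an empty bytes object: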
while 1:
    data = conn.recv(4)
    if not data:
        break
    f.write(data)
See also the example in the Python docs.
I am using pySerial to communicate with a microcontroller over USB. Most of the communication is initiated by the desktop Python script, which sends a command packet and waits for a reply.
But there is also an alert packet that may be sent by the microcontroller without a command from the Python script. In this case I need to monitor the read stream for any alerts.
For handling alerts, I dedicate a separate process that calls readline() in a loop, like so:
def serialMonitor(self):
    while not self.stopMonitor:
        self.lock.acquire()
        message = self.stream.readline()
        self.lock.release()
        self.callback(message)
inside a class. The function is then started in a separate process by
self.monitor = multiprocessing.Process(target=SerialManager.serialMonitor, args=[self])
Whenever a command packet is sent, the command function needs to take back control of the stream, and for that it must interrupt the readline() call, which is blocking. How do I interrupt the readline() call? Is there any way to terminate a process safely?
You can terminate a multiprocessing process with .terminate(). Is this safe? It's probably all right for a readline case.
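As a minimal sketch, that would mean stopping the monitor from the question like this (join() afterwards so the dead process is cleaned up):

self.monitor.terminate()  # forcibly stop the monitor process
self.monitor.join()       # wait until it has actually exited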
However, this is not how I would handle things here. As I read your scenario, there are two possibilities:
MCU initiates alert package
Computer sends data to MCU (and MCU perhaps responds)
I assume the MCU will not send an alert package whilst an exchange initiated by the computer is going on.
So I would just create the serial object with a small timeout, and leave it in a loop when I'm not using it. My overall flow would go like this:
from serial import Serial

ser = Serial(PORT, timeout=1)  # PORT is your serial port name

response = None
command_to_send = None
running = True

while running:  # event loop
    line = b''
    # wait for an alert line until there is a command to send;
    # readline() returns b'' once the 1 s timeout expires
    while running and not command_to_send and not line:
        line = ser.readline()
    if not command_to_send:
        if line:
            process_mcu_alert(line)
    else:
        send_command(command_to_send)
        command_to_send = None
        response = ser.readline()
This is only a sketch, as it would need to run in a thread or subprocess, since readline() is indeed blocking. You therefore need some thread-safe way of setting command_to_send and running (used to exit gracefully) and of getting response, and you likely want to wrap all this state up in a class. The precise implementation depends upon what you are doing, but the principle is the same: have one loop which handles reading and writing to the serial port, have it time out so it stays responsive (you can set a smaller timeout if you need to), and have it expose some interface you can handle.
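Purely as an illustration, such a wrapper might look like the sketch below, using queue.Queue and threading.Event for the thread-safe hand-off; SerialWorker is an invented name, and process_mcu_alert is the assumed alert handler from the sketch above:

import threading
import queue

class SerialWorker:
    def __init__(self, ser):
        self.ser = ser                  # a Serial opened with a small timeout
        self.commands = queue.Queue()   # thread-safe command hand-off
        self.responses = queue.Queue()  # thread-safe response hand-off
        self.running = threading.Event()
        self.running.set()

    def send(self, command):            # called from other threads
        self.commands.put(command)

    def stop(self):                     # called from other threads
        self.running.clear()

    def loop(self):                     # the event loop; run in a Thread
        while self.running.is_set():
            line = self.ser.readline()  # b'' once the timeout expires
            if line:
                process_mcu_alert(line)
            try:
                command = self.commands.get_nowait()
            except queue.Empty:
                continue
            self.ser.write(command)
            self.responses.put(self.ser.readline())

It would be started with threading.Thread(target=worker.loop, daemon=True).start(), and callers would pick replies off worker.responses.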
Sadly, to my knowledge Python has no asyncio-compatible serial library, otherwise that approach would seem neater.
I am designing a turn-based game that uses connection.recv() to read from the socket and store the data of a 'move' in a buffer (which the server reads from). The problem is that the player can send data outside of their turn to be queued in the socket buffer, which means the server potentially reads moves they made outside of their turn, instead of blocking until they make one. Is there any way to flush the data stored in the socket, and if not, is there any other workaround to this problem?
Following Steffen's suggestion, I am using recv calls to clear the buffer. Currently, I set the socket to non-blocking and call recv until a BlockingIOError is raised. It would be much appreciated if anyone could point out a more graceful solution (one that doesn't use exceptions).
connection.setblocking(False)
while True:
    try:
        chunk = connection.recv(4096)
    except BlockingIOError:
        break  # nothing left to read
    if not chunk:
        break  # peer closed the connection
connection.setblocking(True)
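One exception-free possibility (a sketch only, untested) is to ask select whether the socket is readable, using a zero timeout so it never blocks:

import select

def drain(connection):
    while True:
        readable, _, _ = select.select([connection], [], [], 0)
        if not readable:       # socket buffer is empty
            break
        chunk = connection.recv(4096)
        if not chunk:          # peer closed the connection
            break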
I have a problem with receiving data from a server on the client. The following client-side function attempts to receive data from the server. The data sent by the server using socket.sendall(data) is larger than buff_size, so I need a loop to read all the data.
def receiveAll(sock):
    data = ""
    buff_size = 4096
    while True:
        part = sock.recv(buff_size)
        data += part
        if part <buff_size:
            break;
    return data
The problem is that after the first iteration (which reads the first 4096 bytes), the program blocks in the second iteration, waiting for further data in part = sock.recv(buff_size). What do I have to do so that recv() can continue reading the missing data? Thank you.
Your interpretation is wrong. Your code reads all the data that it gets from the server. It just doesn't know that it should stop listening for incoming data. It doesn't know that the server has sent everything it had.
First of all note that these lines
if part <buff_size:
    break;
are very wrong. First of all, you are comparing a string to an int (in Python 3.x that would throw an exception). But even if you meant if len(part) < buff_size:, this is still wrong, because there might be a lag in the middle of streaming and you will read a piece smaller than buff_size; your code would stop there.
Also, if your server sends content whose size is a multiple of buff_size, the if condition will never be satisfied and the code will hang on .recv() forever.
Side note: don't use semicolons. It's Python.
There are several solutions to your problem, but none of them can be applied correctly without modifying the server side.
As a client you have to know when to stop reading, and the only way to know is if the server does something special that you will understand. This is called a communication protocol: you have to add meaning to the data you send and receive.
For example, if you use HTTP, the server sends the header Content-Length: 12345 before the body, so as a client you know that you only need to read 12345 bytes (your buffer doesn't have to be that big, but with that information you know how many times you have to loop before you have read it all).
Some binary protocols send the size of the content in the first 2 or 4 bytes, for example. This can easily be interpreted on the client side as well.
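A sketch of that idea, assuming the server prefixes every message with a 4-byte big-endian length (recv_exact and recv_message are illustrative names, not a standard API):

import struct

def recv_exact(sock, n):
    # read exactly n bytes, looping because recv may return less
    chunks = []
    while n > 0:
        part = sock.recv(min(n, 4096))
        if not part:
            raise ConnectionError("peer closed mid-message")
        chunks.append(part)
        n -= len(part)
    return b"".join(chunks)

def recv_message(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))  # 4-byte size header
    return recv_exact(sock, length)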
An easier solution is this: simply make the server close the connection after it has sent all the data. Then you only need to add a check, if not part: break, to your code.
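With that server-side change, the loop from the question reduces to this sketch (note bytes rather than str, and no size comparison):

def receiveAll(sock):
    data = b""
    while True:
        part = sock.recv(4096)
        if not part:  # server closed the connection: transfer complete
            break
        data += part
    return data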
I am communicating with an instrument via TCP/IP using the Python socket package.
The program sends a command to the instrument to perform an action, and then repeatedly sends another "check" command until it receives a "done" reply. However, after many loops, the program hangs while waiting for a "done" reply.
I have circumvented this problem by using the recv_timeout() function below, which returns no data if the socket is hanging; then I close the connection with sock.close() and reconnect.
Is there a more elegant solution without having to reboot anything? There must be a way to carry on communicating with the instrument without having to restart everything.
import socket
import time

def recv_timeout(sock, timeout=0.5):
    '''
    code from http://code.activestate.com/recipes/408859/
    '''
    sock.setblocking(0)
    total_data = []
    begin = time.time()
    while 1:
        # if you got some data, then break after the wait
        if total_data and time.time() - begin > timeout:
            break
        # if you got no data at all, wait a little longer
        elif time.time() - begin > timeout * 2:
            break
        try:
            data = sock.recv(8192)
            if data:
                total_data.append(data)
                begin = time.time()
            else:
                time.sleep(0.1)
        except BlockingIOError:
            pass  # nothing to read right now
    return b''.join(total_data)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('555.555.55.555', 23))
for action_num in range(0, 1000):
    sock.sendall(('performaction %s \r' % action_num).encode())
    while True:
        time.sleep(0.2)
        sock.sendall('checkdone \r'.encode())
        done = recv_timeout(sock)
        if not done:
            print('communication broken...what should I do?')
            sock.close()
            time.sleep(60)
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.connect(('555.555.55.555', 23))
        elif done == b'1':
            print('done performing action')
            break
sock.close()
I have circumvented this problem by using the recv_timeout() function
below, which returns no data if the socket is hanging
Are you certain that the socket will hang forever? What about the possibility that the instrument just sometimes takes more than half a second to respond? Note that even if the instrument's software is good at responding in a timely manner, that is no guarantee that the response data will actually get to your Python program in a timely manner. For example, if the TCP packets containing the response get dropped by the network and have to be resent, that could cause them to take more than 0.5 seconds to reach your program. You can force that scenario to occur by pulling the Ethernet cable out of your PC for a second or two and then plugging it back in: you'll see that the response bytes still make it through, just a second or two later (after the dropped packets get resent), provided your Python program hasn't given up on them and closed the socket already.
Is there a more elegant solution without having to reboot anything?
The elegant solution is to figure out what is happening to the reply bytes in the fault scenario and fix the underlying bug so that the reply bytes no longer get lost. Wireshark can be very helpful in diagnosing where the fault is: for example, if Wireshark shows that the response bytes did enter your computer's Ethernet port, that is a pretty good clue that the bug is in your Python program's handling of the incoming bytes (*). On the other hand, if the response bytes never show up in Wireshark, then there might be a bug in the instrument itself that causes it to fail to respond sometimes. Wireshark would also show you if the problem is that your Python script failed to send out the "check" command for some reason.
That said, if you really can't fix the underlying bug (e.g. because it's a bug in the instrument and you don't have the ability to upgrade the software running on the instrument), then the only thing you can do is what you are doing: close the socket connection and reconnect. If the instrument doesn't want to respond for some reason, you can't force it to respond.
(*) One thing to do is print out the contents of the string returned by recv_timeout(). You may find that you did get a reply, but it just wasn't the '1' string you were expecting.
I've been scouring the Internet looking for a solution to my problem with Python. I'm trying to use a urllib2 connection to read a potentially endless stream of data from an HTTP server. It's part of some interactive communication, so it's important that I can get the data that's available, even if it doesn't fill a whole buffer. There seems to be no way to have read/readline return the available data: they block forever, waiting for the entire (endless) stream, before they return.
Even if I set the underlying file descriptor to non-blocking using fcntl, the urllib2 file object still blocks! In general there seems to be no way to make Python file objects, upon read, return all available data if there is some, and block otherwise.
I've seen a few posts from people seeking help with this, but I have seen no solutions. What gives? Am I missing something? This seems like such a normal use case to completely ruin! I'm hoping to utilize urllib2's ability to detect configured proxies and use chunked encoding, but I can't if it won't cooperate.
Edit: Upon request, here is some example code
Client:
connection = urllib2.urlopen(commandpath)
id = connection.readline()
Now suppose the server is using chunked transfer encoding, writes one chunk down the stream, the chunk contains the line, and then the server waits. The connection is still open, but the client has data waiting in a buffer.
I cannot get read or readline to return the data I know is waiting for them, because they try to read until the end of the connection. In this case the connection may never close, so they will wait either forever or until an inactivity timeout severs the connection. Once the connection is severed they do return, but that's obviously not the behavior I want.
urllib2 operates at the HTTP level, which works with complete documents. I don't think there's a way around that without hacking into the urllib2 source code.
What you can do is use plain sockets (you'll have to speak HTTP yourself in this case) and call sock.recv(maxbytes), which reads only available data.
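A rough sketch of the plain-socket route, with example.com, /stream, and handle() as illustrative stand-ins; note that with this approach you receive the raw response, so the headers and the chunked framing itself arrive in the data and are yours to parse:

import socket

sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET /stream HTTP/1.1\r\nHost: example.com\r\n\r\n")
sock.settimeout(0.5)             # never wait longer than half a second
while True:
    try:
        chunk = sock.recv(4096)  # returns whatever has arrived so far
    except socket.timeout:
        continue                 # nothing yet; poll again or do other work
    if not chunk:                # server closed the connection
        break
    handle(chunk)                # hypothetical callback for incoming bytes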
Update: you may want to try to call conn.fp._sock.recv(maxbytes), instead of conn.read(bytes) on an urllib2 connection.