How to clear data stored in a socket in Python

I am designing a turn-based game that uses connection.recv() to read from the socket and store the data of a 'move' in a buffer (which the server reads from). The problem is that a player can send data outside of their turn, which gets queued in the socket buffer, so the server may read moves made outside of the player's turn instead of blocking until they take their turn. Is there any way to flush the data stored in the socket, and if not, is there any other workaround for this problem?

Following Steffen's suggestion, I am using recv() calls to clear the buffer. Currently, I set the socket to non-blocking and call recv() until a BlockingIOError is raised. I would appreciate it if anyone could point out a more graceful solution (one that doesn't rely on exceptions).
# drain anything already queued on the socket
connection.setblocking(False)
while True:
    try:
        chunk = connection.recv(4096)  # discard queued data
    except BlockingIOError:
        break  # nothing left to read
connection.setblocking(True)
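A sketch of one exception-free alternative (the helper name drain_socket is mine, not part of the question's code): select.select() with a zero timeout reports whether data is already buffered, so the socket can be drained without toggling blocking mode or catching BlockingIOError.

import select

def drain_socket(connection, chunk_size=4096):
    # Hypothetical helper: read and discard everything currently queued
    # on `connection`, assumed to be a connected, blocking TCP socket.
    while True:
        # A zero timeout makes select() return immediately, so it only
        # reports data that has already arrived.
        readable, _, _ = select.select([connection], [], [], 0)
        if not readable:
            break
        data = connection.recv(chunk_size)
        if not data:
            break  # empty bytes means the peer closed the connection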

Related

Avoiding TCP/IP connection hanging

I am communicating with an instrument via TCP/IP using the Python socket package.
The program sends a command to the instrument to perform an action, and then repeatedly sends a "check" command until it receives a "done" reply. However, after many loops, the program hangs while waiting for the "done" reply.
I have circumvented this problem by using the recv_timeout() function below, which returns no data if the socket is hanging, then I close the connection with socket.close() and reconnect.
Is there a more elegant solution without having to reboot anything? There must be a way to carry on communicating with the instrument without restarting everything.
import socket
import time

def recv_timeout(sock, timeout=0.5):
    '''
    Adapted from http://code.activestate.com/recipes/408859/
    '''
    sock.setblocking(0)
    total_data = []
    begin = time.time()
    while 1:
        # if some data has arrived, break after `timeout` seconds of quiet
        if total_data and time.time() - begin > timeout:
            break
        # if no data has arrived at all, wait a little longer
        elif time.time() - begin > timeout * 2:
            break
        try:
            data = sock.recv(8192)
            if data:
                total_data.append(data)
                begin = time.time()
            else:
                time.sleep(0.1)
        except BlockingIOError:
            pass
    return b''.join(total_data).decode()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('555.555.55.555', 23))
for action_num in range(0, 1000):
    sock.sendall(('performaction %s \r' % action_num).encode())
    while True:
        time.sleep(0.2)
        sock.sendall('checkdone \r'.encode())
        done = recv_timeout(sock)
        if not done:
            print('communication broken...what should I do?')
            sock.close()
            time.sleep(60)
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.connect(('555.555.55.555', 23))
        elif done == '1':
            print('done performing action')
            break
sock.close()
I have circumvented this problem by using the recv_timeout() function
below, which returns no data if the socket is hanging
Are you certain that the socket will hang forever? What about the possibility that the instrument just sometimes takes more than half a second to respond? Note that even if the instrument's software is good at responding in a timely manner, that is no guarantee that the response data will actually reach your Python program in a timely manner. For example, if the TCP packets containing the response get dropped by the network and have to be resent, they could take more than 0.5 seconds to arrive. You can force that scenario by pulling the Ethernet cable out of your PC for a second or two and then plugging it back in: you'll see that the response bytes still make it through, just a second or two later (after the dropped packets are resent), provided your Python program hasn't already given up on them and closed the socket.
Is there a more elegant solution without having to reboot anything?
The elegant solution is to figure out what is happening to the reply bytes in the fault scenario and fix the underlying bug so that the reply bytes no longer get lost. Wireshark can be very helpful in diagnosing where the fault lies; for example, if Wireshark shows that the response bytes did enter your computer's Ethernet port, that is a pretty good clue that the bug is in your Python program's handling of the incoming bytes (*). On the other hand, if the response bytes never show up in Wireshark, there might be a bug in the instrument itself that causes it to fail to respond sometimes. Wireshark would also show you whether the problem is that your Python script failed to send out the "check" command for some reason.
That said, if you really can't fix the underlying bug (e.g. because it's a bug in the instrument and you don't have the ability to upgrade the source code of the software running on the instrument) then the only thing you can do is what you are doing -- close the socket connection and reconnect. If the instrument doesn't want to respond for some reason, you can't force it to respond.
(*) One thing to do is print out the contents of the string returned by recv_timeout(). You may find that you did get a reply, but it just wasn't the '1' string you were expecting.
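If the manual non-blocking loop is the part you want to simplify, one possible sketch (my assumption, not the answer's code) is to rely on socket.settimeout(), which raises socket.timeout when no reply arrives in time, while still letting the program decide how long to wait:

import socket

def recv_reply(sock, timeout=5.0, chunk_size=8192):
    # Hypothetical helper: wait up to `timeout` seconds for a reply and
    # return whatever bytes arrived (empty bytes on timeout).
    sock.settimeout(timeout)
    try:
        return sock.recv(chunk_size)
    except socket.timeout:
        return b''
    finally:
        # Restore blocking behaviour for the rest of the program.
        sock.settimeout(None)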

python socket and recv() returning empty data

I have a C program that connects to a Python server, sends a short string (less than about 100 chars), and then closes the socket. It does this at a periodic rate.
The python server accepts connection, spawns a thread, and in that thread calls:
data = sock.recv(4096)
data often turns out to be empty.
After reading through the Python man pages and some of the Stack Overflow posts (thanks guys!), I realize that the problem is that the C program opens, writes to, and closes the socket so quickly that by the time the Python server accepts and spawns a thread, recv() returns no data, as documented.
The problem is, I don't know a workaround for this. I have very little control over the C program. Is there a way to tell Python to buffer the message for recv() even if the other side closes the connection?
(Caveat: I haven't verified my hunch in Wireshark yet, but the logs in both programs strongly indicate that the C program closes before recv() is even called most of the time.)
thanks.
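One thing worth checking on the Python side (a sketch, assuming the server thread owns the accepted socket sock): keep calling recv() until it returns an empty bytes object, which is the end-of-stream marker TCP delivers after the C program closes its end; any bytes the peer sent before closing are still delivered and not lost.

def recv_all(sock, chunk_size=4096):
    # Hypothetical helper: collect everything the peer sent before
    # closing its end of the connection.
    chunks = []
    while True:
        data = sock.recv(chunk_size)
        if not data:
            # b'' means the peer has closed; everything sent earlier has
            # already been returned by previous recv() calls.
            break
        chunks.append(data)
    return b''.join(chunks)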

pyserial - possible to write to serial port from thread a, do blocking reads from thread b?

I tried googling this and searching here, but couldn't find an answer. Has anyone looked into whether it's thread-safe to write to a Serial() object (pyserial) from thread A and do blocking reads from thread B?
I know how to use thread synchronization primitives and thread-safe data structures, and in fact my current form of this program has a thread dedicated to reading/writing on the serial port and I use thread-safe data structures to coordinate activities in the app.
My app would benefit greatly if I could write to the serial port from the main thread (and never read from it), and read from the serial port using blocking reads in the second thread (and never write to it). If someone really wants me to go into why this would benefit the app I can add my reasons. In my mind there would be just one instance of Serial() and even while thread B sits in a blocking read on the Serial object, thread A would be safe to use write methods on the Serial object.
Anyone know whether the Serial class can be used this way?
EDIT: It occurs to me that the answer may be platform-dependent. If you have any experience with a platform like this, it'd be good to know which platform you were working on.
EDIT: There's only been one response but if anyone else has tried this, please leave a response with your experience.
I have done this with pyserial. Reading from one thread and writing from another should not cause problems in general, since there isn't really any kind of resource arbitration problem. Serial ports are full duplex, so reading and writing can happen completely independently and at the same time.
I've used pyserial in this way on Linux (and Windows), no problems!
I would recommend modifying Thread B from a "blocking read" to a "non-blocking read/write" loop. Thread B would become your serial-port "daemon".
Thread A could run at full speed for a responsive user interface or perform any real-time operation.
Thread A would write a message to Thread B instead of trying to write directly to the serial port. If the size/frequency of the messages is low, a simple shared buffer for the message itself plus a flag indicating that a new message is present would work. If you need higher throughput, use a simple ring buffer (a FIFO queue): an array large enough to accumulate many messages to be sent, plus two pointers. The write pointer is updated only by Thread A, and the read pointer only by Thread B.
Thread B would grab the message and send it to the serial port. Open the serial port with a read timeout so that the blocking read periodically releases the CPU, allowing you to poll the shared buffer and, if a new message is present, send it to the serial port. I would add a short sleep at that point to limit the CPU time used by Thread B, then loop back to the serial-port read. If the serial-port timeout does not behave correctly, for example when a USB-RS232 cable gets unplugged, that sleep makes the difference between good Python code and the not-so-good kind.
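A sketch of that daemon pattern, using a queue.Queue instead of a hand-rolled pointer buffer (the port name, baud rate, and one-second timeout are assumptions, not part of the answer):

import queue
import threading
import serial  # pyserial

outgoing = queue.Queue()

def serial_daemon(port='/dev/ttyUSB0', baudrate=9600):
    # Thread B: owns the serial port and alternates short reads and writes.
    ser = serial.Serial(port, baudrate, timeout=1)  # read timeout releases the CPU
    while True:
        data = ser.read(64)          # returns b'' once the timeout expires
        if data:
            print('received:', data)
        try:
            message = outgoing.get_nowait()
        except queue.Empty:
            continue
        ser.write(message)

threading.Thread(target=serial_daemon, daemon=True).start()
# Thread A just enqueues messages instead of touching the port directly.
outgoing.put(b'hello\r\n')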

Python: Stop the socket-receiving-process

I receive data from some device via the socket module.
But after some time the device stops sending packets.
Then I want to interrupt the for loop.
A while True loop doesn't work, because it receives more than 100 packets.
How can I stop this process?
(s stands for the socket.)
...
...
for i in range(packages100):
    data = s.recv(4)
    f.write(data)
...
Edit:
I think socket.settimeout() is part of the solution. See also:
How to set timeout on python's socket recv method?
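A sketch of that settimeout() idea (the five-second value is an assumption; s and f are the socket and file from the snippet above): when the device goes quiet, recv() raises socket.timeout and the loop can be left cleanly.

import socket

s.settimeout(5.0)  # give up if no packet arrives within 5 seconds
while True:
    try:
        data = s.recv(4)
    except socket.timeout:
        break  # the device stopped sending
    if not data:
        break  # the device closed the connection
    f.write(data)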
If your peer really just stops sending data, as opposed to closing the connection, this is tricky and you'll be forced to resort to asynchronous reading from this socket.
Put it in asynchronous mode (the docs and Google are your friends), and try to read it each time, instead of the blocking read. You can then just stop "trying" anytime you wish. Note that by nature of async IO your code will be a bit different - you will no longer be able to assume that once recv returns, it actually read some data.
while 1:
    data = conn.recv(4)
    if not data: break
    f.write(data)
Also, see the example in the Python docs.
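For the non-blocking route described above, a possible sketch (assuming the same conn and f, with an assumed five-second idle limit): setblocking(False) makes recv() raise BlockingIOError when nothing has arrived yet, so the loop can count idle time and stop once the device has been quiet for a while.

import time

conn.setblocking(False)
idle = 0.0
while idle < 5.0:              # stop after ~5 quiet seconds (assumed value)
    try:
        data = conn.recv(4)
    except BlockingIOError:
        time.sleep(0.1)        # nothing queued yet; wait a bit
        idle += 0.1
        continue
    if not data:
        break                  # connection closed by the device
    f.write(data)
    idle = 0.0                 # reset the idle timer on every packet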

Python doesn't detect a closed socket until the second send

When I close the socket on one end of a connection, the other end gets an error the second time it sends data, but not the first time:
import socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 12345))
server.listen(1)
client = socket.create_connection(("localhost",12345))
sock, addr = server.accept()
sock.close()
client.sendall(b"Hello World!") # no error
client.sendall(b"Goodbye World!") # error happens here
I've tried setting TCP_NODELAY, using send instead of sendall, and checking fileno(), but I can't find any way to get the first send to throw an error, or even to detect afterwards that it failed. EDIT: calling sock.shutdown before sock.close doesn't help. EDIT #2: even adding a time.sleep after closing and before writing doesn't matter. EDIT #3: checking the byte count returned by send doesn't help, since it always returns the number of bytes in the message.
So the only solution I can come up with if I want to detect errors is to follow each sendall with a client.sendall("") which will raise an error. But this seems hackish. I'm on a Linux 2.6.x so even if a solution only worked for that OS I'd be happy.
This is expected behaviour, and it is how the TCP/IP APIs are implemented (so it's similar in pretty much all languages and on all operating systems).
The short story is: you cannot guarantee that a send() call returns an error directly if that send() somehow cannot deliver data to the other end. send/write calls just hand the data to the TCP stack, and it's up to the TCP stack to deliver it when it can.
TCP is also just a transport protocol; if you need to know whether your application "messages" have reached the other end, you need to implement that yourself (some form of ACK) as part of your application protocol - there's no free lunch here.
However, if you read() from a socket, you can get notified immediately when an error occurs or when the other end closes the socket. You usually need to do this in some form of multiplexing event loop (that is, using select/poll or some other I/O multiplexing facility).
Just note that you cannot read() from a socket to learn whether the most recent send/write succeeded. Here are a few reasons why (and it's the cases one doesn't think about that always get you):
Several write() calls got buffered up due to network congestion or because the TCP window was closed (perhaps a slow reader), and then the other end closes the socket or a hard network error occurs; you can't tell whether it was the last write that didn't get through or a write you did 30 seconds ago.
A network error or a firewall silently drops your packets (no ICMP replies are generated). You will have to wait until TCP times out the connection to get an error, which can take many seconds, usually several minutes.
TCP is busy doing retransmissions as you call send, and maybe those retransmissions generate an error (really the same as the first case).
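As an illustration of the read-side detection described above (a sketch, reusing the client socket from the question's snippet): select reports the socket as readable as soon as the peer's close arrives, and recv() then returns an empty bytes object.

import select

readable, _, _ = select.select([client], [], [], 1.0)
if readable:
    if client.recv(4096) == b'':
        # An empty result means the peer closed the connection; stop
        # sending instead of waiting for the second sendall() to fail.
        print('peer closed the connection')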
As per the docs, try calling sock.shutdown() before the call to sock.close().
