In Python 3.4/Asyncio I'm using StreamReader/Writer.
To detect a client disconnect, the common method seems to be to read from the client; if there's nothing there, the client has disconnected.
while True:
    data = (yield from asyncio.wait_for(client_reader.readline(),
                                        timeout=1.0))
    if not data:  # client disconnected
        break
However, you quickly run out of lines to read from the client's header (the reader moves on to the next line on each loop), and if the client sends no additional lines (in my case the client is only listening, not sending), you hit the timeout.
What I would like to do is read just the first line of the header over and over, or possibly even just the first character of the first line, or, if that's not possible, loop back around to the first line once it reaches the last.
What's the best/most elegant way to accomplish this task of detecting client disconnects with Python 3.4/asyncio/StreamReader/StreamWriter?
I had a similar problem. The way that worked for me was to check for EOF first and then raise a ConnectionError exception if so. So for your code I would add the following:
while True:
    try:
        if client_reader.at_eof():
            raise ConnectionError
        data = (yield from asyncio.wait_for(client_reader.readline(),
                                            timeout=1.0))
        if not data:  # client disconnected
            break
    except ConnectionError:
        break
    except:
        break  # This is here to catch things like the asyncio futures timeout exception
Hope that helps. If anyone has a better way I'd be interested.
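For what it's worth, here is a minimal sketch of the same loop that catches asyncio.TimeoutError explicitly instead of relying on a bare except. It assumes the same client_reader as above and the 3.4-style yield from syntax inside a coroutine:

while True:
    if client_reader.at_eof():
        break  # the client closed its side of the connection
    try:
        data = (yield from asyncio.wait_for(client_reader.readline(),
                                            timeout=1.0))
    except asyncio.TimeoutError:
        continue  # nothing arrived within a second; loop and check EOF again
    except ConnectionError:
        break     # transport-level disconnect
    if not data:
        break     # readline() returned b'' at EOF: client disconnected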
Related
I have created a client/server setup for transferring PGP signature information from the client to the server. The code below shows part of the server code, where it adds signatures received from the client to an output text file.
However, my code isn't able to move on past the inner while loop after breaking.
It receives 2 signatures from the client, successfully prints the "test" string twice, and adds both received strings to the output file, but the program will not continue after breaking and doesn't print the "test2" and "test3" strings.
while True:
    # Accepts incoming connection, creating socket used for data transfer to client
    conn, addr = server.accept()
    print("Connected successfully")
    directory = "(Hidden for question)"
    outputFile = open((directory + "\\signatures.txt"), "w")
    while True:
        data = conn.recv(2048)
        if not data:
            break
        print("test")
        outputFile.write(data.decode() + "\n")
        outputFile.flush()
    print("test2")
    conn.close()
    print("test3")
I feel like I am missing something very obvious but cannot figure out what the issue is.
Your loop will never break because recv on a socket is a blocking call.
This means the function will not return until it receives some data, so as long as the client keeps the connection open without sending anything, not data will always be false.
Try sending more information (after the first 2 signatures) into the socket and see that your script will continue to write it into the file.
If you want to receive a specific amount of data (or a specific number of messages), track it with a variable and use that to break out of the loop, as sketched below.
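For illustration, here is a rough sketch of that counter idea, reusing conn and outputFile from the question and keeping its one-recv-per-signature assumption. The expected_count value is hypothetical and would have to be agreed with the client:

expected_count = 2   # hypothetical: how many signatures the client will send
received = 0
while received < expected_count:
    data = conn.recv(2048)
    if not data:                 # peer closed the connection early
        break
    outputFile.write(data.decode() + "\n")
    outputFile.flush()
    received += 1                # count each signature and stop at the limit
print("test2")
conn.close()
print("test3")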
Alternatively to @Nadav's answer, remove the inner while loop. Since recv() is synchronous, you don't need to loop.
I've written a simple multi-threaded game server in python that creates a new thread for each client connection. I'm finding that every now and then, the server will crash because of a broken-pipe/SIGPIPE error. I'm pretty sure it is happening when the program tries to send a response back to a client that is no longer present.
What is a good way to deal with this? My preferred resolution would simply close the server-side connection to the client and move on, rather than exit the entire program.
PS: This question/answer deals with the problem in a generic way; how specifically should I solve it?
Assuming that you are using the standard socket module, you should be catching the socket.error: (32, 'Broken pipe') exception (not IOError as others have suggested). This will be raised in the case that you've described, i.e. sending/writing to a socket for which the remote side has disconnected.
import socket, errno, time
# setup socket to listen for incoming connections
s = socket.socket()
s.bind(('localhost', 1234))
s.listen(1)
remote, address = s.accept()
print "Got connection from: ", address
while 1:
    try:
        remote.send("message to peer\n")
        time.sleep(1)
    except socket.error, e:
        if isinstance(e.args, tuple):
            print "errno is %d" % e[0]
            if e[0] == errno.EPIPE:
                # remote peer disconnected
                print "Detected remote disconnect"
            else:
                # determine and handle different error
                pass
        else:
            print "socket error ", e
        remote.close()
        break
    except IOError, e:
        # Hmmm, Can IOError actually be raised by the socket module?
        print "Got IOError: ", e
        break
Note that this exception will not always be raised on the first write to a closed socket - more usually the second write (unless the number of bytes written in the first write is larger than the socket's buffer size). You need to keep this in mind in case your application thinks that the remote end received the data from the first write when it may have already disconnected.
You can reduce the incidence (but not entirely eliminate) of this by using select.select() (or poll). Check for data ready to read from the peer before attempting a write. If select reports that there is data available to read from the peer socket, read it using socket.recv(). If this returns an empty string, the remote peer has closed the connection. Because there is still a race condition here, you'll still need to catch and handle the exception.
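Here is a rough Python 3 sketch of that select-based check (note it consumes any pending data from the peer, which is fine for a peer that only listens); remote is assumed to be the connected socket from the example above:

import select

def peer_has_disconnected(sock):
    # Non-blocking check: is there anything to read on the socket?
    readable, _, _ = select.select([sock], [], [], 0)
    if sock in readable:
        try:
            data = sock.recv(4096)
        except OSError:          # e.g. connection reset by peer
            return True
        if not data:             # empty read means the peer closed the connection
            return True
    return False                 # nothing readable; as far as we know, still connected

# usage sketch:
# if peer_has_disconnected(remote):
#     remote.close()
# else:
#     remote.send(b"message to peer\n")   # may still raise, so keep the except handler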
Twisted is great for this sort of thing, however, it sounds like you've already written a fair bit of code.
Read up on the try: statement.
try:
    # do something
    pass
except socket.error, e:
    # A socket error
    pass
except IOError, e:
    if e.errno == errno.EPIPE:
        # EPIPE error
        pass
    else:
        # Other error
        pass
SIGPIPE (although I think maybe you mean EPIPE?) occurs on sockets when you shut down a socket and then send data to it. The simple solution is not to shut the socket down before trying to send it data. This can also happen on pipes, but it doesn't sound like that's what you're experiencing, since it's a network server.
You can also just apply the band-aid of catching the exception in some top-level handler in each thread.
Of course, if you used Twisted rather than spawning a new thread for each client connection, you probably wouldn't have this problem. It's really hard (maybe impossible, depending on your application) to get the ordering of close and write operations correct if multiple threads are dealing with the same I/O channel.
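As a rough illustration of that per-thread band-aid (Python 3 here; handle_client is a hypothetical stand-in for your own per-client logic):

import threading

def client_thread(conn, addr, handle_client):
    # Top-level handler for one client: if the peer goes away mid-send,
    # close this client's socket and let the thread exit instead of
    # letting the exception take down the whole server.
    try:
        handle_client(conn, addr)
    except OSError:          # covers EPIPE / ECONNRESET in Python 3
        pass
    finally:
        conn.close()

# usage sketch:
# threading.Thread(target=client_thread,
#                  args=(conn, addr, handle_client)).start()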
I faced the same issue, but when I submitted the same code the next time, it just worked.
The first time it broke:
$ packet_write_wait: Connection to 10.. port 22: Broken pipe
The second time, it worked:
[1] Done nohup python -u add_asc_dec.py > add2.log 2>&1
I guess the reason may have been the state of the server environment at the time.
My answer is very close to S.Lott's, except I'd be even more particular:
try:
    # do something
    pass
except IOError, e:
    # ooops, check the attributes of e to see precisely what happened.
    if e.errno != errno.EPIPE:
        # I don't know how to handle this
        raise
where errno.EPIPE (error number 32) is what you get for a broken pipe. This way you won't attempt to handle a permissions error or anything else you're not equipped for.
I have created a chat bot for Twitch IRC. I can connect and create commands etc., however I cannot use a keyboard interrupt in the command prompt. I suspect it is because it's stuck in this infinite loop, and I don't know how to fix this. I am new to programming, btw!
Here is the code I have in my Run.py. openSocket() is defined in another file; it basically creates the connection to the server (s = socket.socket).
The first part of the while loop basically just reads the server messages; I think it's pretty straightforward for you guys!
s = openSocket()
joinRoom(s)
readbuffer = ""
while True:
    readbuffer = readbuffer + s.recv(1024).decode("utf-8")
    temp = str.split(readbuffer, "\n")
    readbuffer = temp.pop()
    for line in temp:
        if "PING" in line:
            s.send("PONG :tmi.twitch.tv\r\n".encode("utf-8"))
            print("---SENT PONG---")
        printMessage(getUser, getMessage, line)
        message = getMessage(line)
        for key in commands:
            command = key
            if command in message:
                sendMessage(s, commands[command])
(Edit: I also have a problem where the connection to the server seems to time out for whatever reason. I managed to keep the connection alive with ping/pong for about 40-45 minutes, but then it disconnected again.)
EDIT:
Sorry the original post was super messy. I have created this pastebin with the least amount of code I could use to recreate the problem.
If the IRC chat is inactive it will disconnect. I can't get it to send 2 pings in a row without any messages in between; I'm not sure if that's because it disconnects before the 2nd ping or because of the 2nd ping itself.
On at least one occasion it has disconnected even before I got the first ping from the server.
Pastebin: pastebin.com/sXUW50sS
The part of the code that you posted doesn't have much to do with the problem you described.
This is a guess (although an educated one): in your socket connection code you are probably using try/except and taking the Pokémon approach (gotta catch 'em all).
The thing to do here would be to find a line where you are doing something like this:
except:
    pass
and change it to:
except (KeyboardInterrupt, SystemExit):
    raise
except:
    pass
Obviously I'm not trying to say that your program should catch all exceptions and just pass as if nothing happened. The main point is that you are probably already doing that (for i-have-no-idea-why reasons) and you should give special treatment to system-level exceptions such as KeyboardInterrupt, as sketched below.
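Applied to the loop from the question, a minimal sketch might look like this (s and readbuffer are the socket and buffer from the question's code; only the blocking recv is wrapped):

import socket

while True:
    try:
        readbuffer = readbuffer + s.recv(1024).decode("utf-8")
    except (KeyboardInterrupt, SystemExit):
        raise                 # let Ctrl+C actually stop the program
    except socket.error:
        break                 # connection problem: leave the loop instead of hiding it
    # ... rest of the loop (line splitting, PING/PONG, commands) unchanged ...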
I have a Python script which parses an XML file for serial numbers and writes them to a text file. The problem with the code below is that it goes into an infinite loop. If I add a break statement somewhere after logging to the file, it writes only one serial number. How do I keep count, so that the program exits after writing all the serial numbers?
try:
    while True:
        data, addr = s.recvfrom(65507)
        mylist = data.split('\r')
        url = re.findall('http?://(?:[a-zA-Z]|[0-9]|[$-_#.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', data)
        print url[0]
        response = urllib2.urlopen(url[0])
        the_page = response.read()
        tree = ET.XML(the_page)
        with open("temp.xml", "w") as f:
            f.write(ET.tostring(tree))
        document = parse('temp.xml')
        actors = document.getElementsByTagName("ns0:serialNumber")
        for act in actors:
            for node in act.childNodes:
                if node.nodeType == node.TEXT_NODE:
                    r = "{}".format(node.data)
                    print r
                    logToFile(str(r))
        time.sleep(10)
        s.sendto(msg, ('239.255.255.250', 1900))
except socket.timeout:
    pass
I would normally create a flag so that the while would be
while working == True:
Then reset the flag at the appropriate time.
This allows you to use the else clause on the while statement to close the text file and output the final results after the loop completes (see "Else clause on Python while statement").
Note that it is always better to explicitly close open files when finished rather than relying on garbage collection. You should also close the file and output a timeout message in the except logic.
For debugging, you can output a statement at each write to the text file.
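A minimal sketch of that flag/else pattern, in the same Python 2 style as the question and reusing its socket s (the output file name is hypothetical, and the serial-number parsing is elided):

working = True
outputFile = open("serials.txt", "w")      # hypothetical output file name
try:
    while working:
        data, addr = s.recvfrom(65507)
        if not data:
            working = False                # reset the flag when we're done
        else:
            # ... parse the XML and extract serial numbers here ...
            outputFile.write(data + "\n")
            print "wrote %d bytes" % len(data)   # debug output on each write
    else:
        print "loop finished without break"      # while/else: runs on normal exit
except socket.timeout:
    print "timed out waiting for more data"
finally:
    outputFile.close()                     # close explicitly, don't rely on GC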
If your s.recvfrom(65507) is working correctly, it should be an easy fix. Write this code just below your data, addr = s.recvfrom(65507) line:
if not data:
    break
You open a UDP socket and you use recvfrom to get data from the socket.
You set a high timeout, which makes this a blocking call: when you start listening on the socket, if no data has been sent by the sender, your program will block on that line until either the sender sends something or the timeout is reached. On a timeout with no data, the function raises an exception (socket.timeout).
I see two options:
Send something from the sender that indicates the end of stream (the serial numbers in your case).
Set a small timeout, then catch the exception and use it to break the loop (see the sketch below).
Also, take a look at this question: socket python : recvfrom
Hope it helps.
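A rough sketch of the second option, reusing the question's socket s (Python 2 style to match):

s.settimeout(3.0)            # small timeout: a quiet socket now ends the loop
responses = []
try:
    while True:
        data, addr = s.recvfrom(65507)
        # ... fetch and parse the XML, log the serial numbers here ...
        responses.append(data)
except socket.timeout:
    pass                     # no more responses: fall through and finish up
print "collected %d responses" % len(responses)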
I'm making a python URL grabber program. For my purposes, I want it to time out really really fast, so I'm doing
urllib2.urlopen("http://.../", timeout=2)
Of course it times out correctly as it should. However, it doesn't bother to close the connection to the server, so the server thinks the client is still connected. How can I ask urllib2 to just close the connection after it times out?
Running gc.collect() doesn't work and I'd like to not use httplib if I can't help it.
The closest I can get is: the first try will time out. The server reports that the connection closed just as the second try times out. Then, the server reports the connection closed just as the third try times out. Ad infinitum.
Many thanks.
I have a suspicion that the socket is still open in the stack frames. When Python raises an exception it stores the stack frames so debuggers and other tools can view the stack and introspect values.
For historical reasons, and now for backwards compatibility, the stack information is stored (on a per-thread basis) in sys (see sys.exc_info(), sys.exc_type and others). This is one of the things which has been removed in Python 3.0.
What that means for you is that the stack is still alive and referenced. The stack contains the local data for some function which has the open socket. That's why the socket isn't yet closed; it's only when the stack trace is removed that everything will be gc'ed.
To test if that's the case, insert something like
try:
    1/0
except ZeroDivisionError:
    pass
in your except clause. That's a quick way to replace the current exception with something else.
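For example, a sketch of that trick dropped into the timeout handler, matching the question's urllib2 call (Python 2 style):

import urllib2

try:
    urllib2.urlopen("http://.../", timeout=2)
except urllib2.URLError:
    # handle the timeout as before, then replace the stored exception so the
    # frames holding the open socket can be garbage collected
    try:
        1/0
    except ZeroDivisionError:
        pass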
This is SUCH a hack, but the following code works. If the request is in another function AND it does not raise an exception, then the socket is always closed.
def _fetch(self, url):
    try:
        return urllib2.urlopen(urllib2.Request(url), timeout=5).read()
    except urllib2.URLError, e:
        if isinstance(e.reason, socket.timeout):
            return None
        else:
            raise e

def fetch(self, url):
    x = None
    while x is None:
        x = self._fetch(url)
        print "Timeout"
    return x
Does ANYONE have a better way?