socket.recv(255) in Python hangs waiting for data

Newbie question here.
I am trying to hack an existing Python program that searches for a Bluetooth signal. It all works fine if there is a Bluetooth transmitter around, but the program simply sits there if there is no Bluetooth signal. I found out that it is getting hung on this line:
pkt = sock.recv(255)
I am naively guessing that it is simply sitting there waiting for data. I want it to give me an error or time out after, let's say, 10 seconds.
How do I do this? Is my thinking correct?
Thanks

Call settimeout before calling recv. recv will then raise socket.timeout (a subclass of socket.error) if no data arrives in time.
sock.settimeout(10)
try:
    pkt = sock.recv(255)
except socket.timeout:
    print "connection timed out!"
    return
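For reference, here is a minimal, self-contained Python 3 sketch of the same idea, using a socketpair so it runs without any Bluetooth hardware; the 10-second value from the question is shortened so the demo finishes quickly:

```python
import socket

# A connected pair of sockets; one side never sends anything,
# simulating the "no Bluetooth transmitter around" case.
a, b = socket.socketpair()

a.settimeout(0.5)  # give up after half a second instead of blocking forever
try:
    pkt = a.recv(255)
    print("got", pkt)
except socket.timeout:
    print("connection timed out!")   # prints this, since b never sends
finally:
    a.close()
    b.close()
```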

Related

How can I auto restart a python Socket after a crash without rebooting my server? [duplicate]

I've written a simple multi-threaded game server in python that creates a new thread for each client connection. I'm finding that every now and then, the server will crash because of a broken-pipe/SIGPIPE error. I'm pretty sure it is happening when the program tries to send a response back to a client that is no longer present.
What is a good way to deal with this? My preferred resolution would simply close the server-side connection to the client and move on, rather than exit the entire program.
PS: This question/answer deals with the problem in a generic way; how specifically should I solve it?
Assuming that you are using the standard socket module, you should be catching the socket.error: (32, 'Broken pipe') exception (not IOError as others have suggested). This will be raised in the case that you've described, i.e. sending/writing to a socket for which the remote side has disconnected.
import socket, errno, time

# set up a socket to listen for incoming connections
s = socket.socket()
s.bind(('localhost', 1234))
s.listen(1)
remote, address = s.accept()
print "Got connection from: ", address
while 1:
    try:
        remote.send("message to peer\n")
        time.sleep(1)
    except socket.error, e:
        if isinstance(e.args, tuple):
            print "errno is %d" % e[0]
            if e[0] == errno.EPIPE:
                # remote peer disconnected
                print "Detected remote disconnect"
            else:
                # determine and handle different error
                pass
        else:
            print "socket error ", e
        remote.close()
        break
    except IOError, e:
        # Hmmm, can IOError actually be raised by the socket module?
        print "Got IOError: ", e
        break
Note that this exception will not always be raised on the first write to a closed socket - more usually the second write (unless the number of bytes written in the first write is larger than the socket's buffer size). You need to keep this in mind in case your application thinks that the remote end received the data from the first write when it may have already disconnected.
You can reduce the incidence of this (but not entirely eliminate it) by using select.select() (or poll). Check for data ready to read from the peer before attempting a write. If select reports that there is data available to read from the peer socket, read it using socket.recv(). If this returns an empty string, the remote peer has closed the connection. Because there is still a race condition here, you'll still need to catch and handle the exception.
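A small Python 3 sketch of that select-before-write check, using a socketpair to stand in for the client connection (the helper name is illustrative):

```python
import select
import socket

def peer_closed(sock):
    """Return True if the remote end has closed the connection.

    If select reports the socket readable and recv() returns an empty
    bytes object, the peer performed an orderly shutdown.
    """
    readable, _, _ = select.select([sock], [], [], 0)
    if sock in readable:
        data = sock.recv(4096)
        if data == b"":
            return True
        # Real code would buffer `data` for later processing here.
    return False

server, client = socket.socketpair()
print(peer_closed(server))   # False: client end is still open
client.close()
print(peer_closed(server))   # True: recv() now returns b""
```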
Twisted is great for this sort of thing, however, it sounds like you've already written a fair bit of code.
Read up on the try: statement.
try:
    pass  # do something
except socket.error, e:
    pass  # A socket error
except IOError, e:
    if e.errno == errno.EPIPE:
        pass  # EPIPE error
    else:
        pass  # Other error
SIGPIPE (although I think maybe you mean EPIPE?) occurs on sockets when you shut down a socket and then send data to it. The simple solution is not to shut the socket down before trying to send it data. This can also happen on pipes, but it doesn't sound like that's what you're experiencing, since it's a network server.
You can also just apply the band-aid of catching the exception in some top-level handler in each thread.
Of course, if you used Twisted rather than spawning a new thread for each client connection, you probably wouldn't have this problem. It's really hard (maybe impossible, depending on your application) to get the ordering of close and write operations correct if multiple threads are dealing with the same I/O channel.
I faced the same problem. But when I submitted the same code the next time, it just worked.
The first time it broke:
$ packet_write_wait: Connection to 10.. port 22: Broken pipe
The second time it worked:
[1] Done nohup python -u add_asc_dec.py > add2.log 2>&1
I guess the reason may be the current server environment.
My answer is very close to S.Lott's, except I'd be even more particular:
try:
    pass  # do something
except IOError, e:
    # ooops, check the attributes of e to see precisely what happened.
    if e.errno != errno.EPIPE:
        # I don't know how to handle this
        raise
where errno.EPIPE (32 on most systems) is the error number for a broken pipe. This way you won't attempt to handle a permissions error or anything else you're not equipped for.
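A self-contained Python 3 sketch of the failure mode itself: writing to a socket whose peer has gone away raises OSError with errno.EPIPE (or, depending on timing and socket type, errno.ECONNRESET). CPython ignores SIGPIPE at startup, so you get the exception rather than a dead process:

```python
import errno
import socket

a, b = socket.socketpair()
b.close()  # simulate the remote peer disconnecting

caught = None
for _ in range(10):  # a TCP socket may let one send succeed first
    try:
        a.send(b"message to peer\n")
    except OSError as e:
        caught = e
        break

if caught is not None:
    print("send failed with errno", caught.errno,
          "(EPIPE)" if caught.errno == errno.EPIPE else "")
a.close()
```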

Ending infinite loop for a bot

I have created a chat bot for Twitch IRC. I can connect and create commands, etc., but I cannot use a keyboard interrupt in the command prompt. I suspect it is because the program is stuck in this infinite loop, and I don't know how to fix this. I am new to programming, btw!
Here is the code I have in my Run.py; openSocket() is defined in another file and basically connects to the server (s = socket.socket).
The first part of the while loop basically just reads the server messages; I think it's pretty straightforward for you guys!
s = openSocket()
joinRoom(s)
readbuffer = ""
while True:
    readbuffer = readbuffer + s.recv(1024).decode("utf-8")
    temp = str.split(readbuffer, "\n")
    readbuffer = temp.pop()
    for line in temp:
        if "PING" in line:
            s.send("PONG :tmi.twitch.tv\r\n".encode("utf-8"))
            print("---SENT PONG---")
        printMessage(getUser, getMessage, line)
        message = getMessage(line)
        for key in commands:
            command = key
            if command in message:
                sendMessage(s, commands[command])
(Edit: I also have a problem where the connection to the server seems to time out for whatever reason. I managed to keep the connection alive with ping/pong for about 40-45 minutes, but then it disconnected again.)
EDIT:
Sorry, the original post was super messy. I have created this pastebin with the least amount of code I could use to recreate the problem.
If the IRC chat is inactive it will disconnect, and I can't get it to send two pings in a row without any messages in between; I'm not sure if that's because it disconnects before the second ping or because of the second ping.
On at least one occasion it has disconnected even before I got the first ping from the server.
Pastebin: pastebin.com/sXUW50sS
The part of the code that you posted doesn't have much to do with the problem you described.
This is a guess (although an educated one): somewhere in your socket-handling code you are probably using try/except with the Pokémon approach (gotta catch 'em all).
Thing here would be to find a line where you are doing something like this:
except:
    pass
and change it to:
except (KeyboardInterrupt, SystemExit):
    raise
except:
    pass
Obviously I'm not saying here that your program should catch all exceptions and just pass as if nothing happened. The main point is that you are probably already doing that (for i-have-no-idea-why reasons), and you should give system-exiting exceptions special treatment.
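A small Python 3 sketch of why the ordering matters; the loop below stands in for the bot's read loop, with the interrupt injected by hand rather than by Ctrl-C:

```python
def read_loop(events, swallow_everything):
    """Process events; KeyboardInterrupt stands in for Ctrl-C arriving."""
    for event in events:
        try:
            if event == "interrupt":
                raise KeyboardInterrupt
            # ... normal message handling would go here ...
        except (KeyboardInterrupt, SystemExit):
            if not swallow_everything:
                raise        # let Ctrl-C actually stop the program
        except Exception:
            pass             # ignore ordinary errors, like a bare `except: pass`

# Bare `except: pass` behaviour: the interrupt is silently eaten
# and the loop keeps running.
read_loop(["msg", "interrupt", "msg"], swallow_everything=True)
print("loop survived the interrupt")

# Re-raising it lets Ctrl-C terminate the loop as expected.
try:
    read_loop(["msg", "interrupt", "msg"], swallow_everything=False)
except KeyboardInterrupt:
    print("KeyboardInterrupt propagated")
```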

Python error code on exit, and internet connectivity

I'm running a small Python script that checks every 10 seconds to see if my internet access is available (I've been having problems with my ISP). I've been running it for probably 2 months and it's worked perfectly, but now it randomly exits. Sometimes it exits within 20 seconds of me starting it, and sometimes it waits 5 minutes. The code is:
import time
import datetime
import urllib2

waitTime = 300
# Use a raw string so the backslashes in the Windows path are not
# treated as escape sequences.
outfile = r"C:\Users\simmons\Desktop\internetConnectivity.txt"

def internetOffline():
    with open(outfile, "a") as file:
        file.write("Internet Offline: %s\n" % time.ctime())
    print("Internet Went Down!")

def internetCheck():
    try:
        urllib2.urlopen('https://www.google.com', timeout=2)
    except urllib2.URLError:
        internetOffline()

while (1):
    internetCheck()
    time.sleep(10)
My question is not only how I would print out what is happening when it exits, but also: does anyone know of a more efficient way of doing this, so it causes less network traffic? It's not a problem now, but I was just wondering about more efficient methods.
This could be from going to Google too many times; I'm not too sure.
Run the program in your IDE and then read the error it throws on exit; this should tell you what or where the program is exiting.
Here is a good way to do this:
import urllib2
def internet_on():
    try:
        response = urllib2.urlopen('http://74.125.228.100', timeout=1)
        return True
    except urllib2.URLError as err:
        pass
    return False
74.125.228.100 is one of the IP addresses for google.com. Change http://74.125.228.100 to whatever site can be expected to respond quickly.
I got this solution from this question; take a look at it, it should help.
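On the "less network traffic" part of the question, one common approach is to open a bare TCP connection instead of fetching a page over HTTP, since a connect only exchanges a handshake. A Python 3 sketch (the host and port are illustrative defaults; 8.8.8.8:53 is Google's public DNS server):

```python
import socket

def internet_on(host="8.8.8.8", port=53, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Unlike urlopen, this sends no HTTP request at all; only the TCP handshake crosses the network.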

Pyro4 sometimes throwing "Connection reset by peer" (Erno 104) after exceeding 4 concurrent connections

I have searched and searched and can't find an answer. I am trying to open a Pyro connection between two Unix devices. I can connect 4 times to the device using a Pyro4 proxy with an identical URI string. On the fifth connection, the instance hangs on my get-data function call. It goes through the core.py Pyro package and ends up waiting for the data. Very occasionally, one of the connections created after the fourth will throw a ConnectionClosedError exception that looks like this:
ConnectionClosedError("receiving: connection lost: "+str(x))
ConnectionClosedError: receiving: connection lost: [Errno 104] Connection reset by peer
If I haven't been clear, the following is what causes this issue:
- Open 4 connections in different SSH sessions to the device and run repeated tests which set up a Pyro proxy. (These work just fine and complete without error.)
- Open more connections; these all hang on my call to get data. They hang for at least 5 minutes, and some will infrequently raise the above exception.
- Not all of them will do this. Once 1 of the 4 running tests finishes, the 5th test that was hanging will pick up and finish just fine. The others will follow, but never more than 4 at a time.
Lastly, the following code (in socketutil.py) is where the exception is actually happening:
def receiveData(sock, size):
    """Retrieve a given number of bytes from a socket.
    It is expected the socket is able to supply that number of bytes.
    If it isn't, an exception is raised (you will not get a zero length result
    or a result that is smaller than what you asked for). The partial data that
    has been received however is stored in the 'partialData' attribute of
    the exception object."""
    try:
        retrydelay=0.0
        msglen=0
        chunks=[]
        if hasattr(socket, "MSG_WAITALL"):
            # waitall is very convenient and if a socket error occurs,
            # we can assume the receive has failed. No need for a loop,
            # unless it is a retryable error.
            # Some systems have an erratic MSG_WAITALL and sometimes still return
            # less bytes than asked. In that case, we drop down into the normal
            # receive loop to finish the task.
            while True:
                try:
                    data=sock.recv(size, socket.MSG_WAITALL)
                    if len(data)==size:
                        return data
                    # less data than asked, drop down into normal receive loop to finish
                    msglen=len(data)
                    chunks=[data]
                    break
                except socket.timeout:
                    raise TimeoutError("receiving: timeout")
                except socket.error:
                    x=sys.exc_info()[1]
                    err=getattr(x, "errno", x.args[0])
                    if err not in ERRNO_RETRIES:
                        ################ HERE: ################
                        raise ConnectionClosedError("receiving: connection lost: "+str(x))
                    time.sleep(0.00001+retrydelay)  # a slight delay to wait before retrying
                    retrydelay=__nextRetrydelay(retrydelay)
Would really appreciate some direction here. Thanks in advance!
Turns out it was the minimum number of threads that the server was creating on boot. For some reason, it wouldn't add any more when it should have.
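The symptom maps directly onto a fixed-size worker pool. A stdlib Python 3 sketch (not Pyro4 itself) of the same behavior: with 4 workers, a 5th job just waits until one of the first 4 finishes:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

release = threading.Event()
started = []

def job(i):
    started.append(i)   # record that a worker picked us up
    release.wait()      # block, like a long-running Pyro call

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(job, i) for i in range(5)]
    time.sleep(0.3)                   # give the pool a moment to start what it can
    print("running:", len(started))   # 4 -- the 5th job is still queued
    release.set()                     # unblock the workers; the queued job then runs
print("finished:", len(started))      # 5
```

In Pyro4 specifically, the server's pool size is configurable (via settings such as THREADPOOL_SIZE or THREADPOOL_MINTHREADS, depending on the Pyro4 version), which is presumably the knob that was misbehaving here.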

Why do the old process's events stop running when a new process accepts, while sharing a listening socket between multiple processes?

The problem happened in my proxy program. With the C10K problem in mind, I use gevent in my program, and I use the low-level gevent.core to run all my functions.
Before I changed my program to multiple processes, everything was OK. But when I changed it, the problem appeared.
I found that when process No. 2 accepts the socket, the events of process No. 1 stop dispatching. Surprisingly, if I add a sleep(0.1) in my event handler it works, but when I lower the sleep time, the problem shows up again.
The problem has bothered me for weeks and I still can't do anything about it. Could someone help me?
I use the event like this:
core.init()
self.ent_s_send = core.event(core.EV_WRITE, self.conn.fileno(), \
                             self.ser_send, [self.conn, self.body])
self.ent_s_send.add()
core.dispatch()
I think the problem is in your code, because this code works fine for me with the same shared socket.
When you accept a socket with EV_READ, you must get the client socket and release control of the listening socket; you must not write to it. You should use code similar to the following:
try:
    client_socket, address = sock.accept()
except socket.error, err:
    if err[0] == errno.EAGAIN:
        sys.exc_clear()
        return
    raise
After this, set READ and WRITE events for the client socket, in one of these forms:
core.event(core.EV_READ, client_socket.fileno(), callback)
core.event(core.EV_WRITE, client_socket.fileno(), callback)
core.event(core.EV_READ | core.EV_WRITE, client_socket.fileno(), callback)
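The same accept-then-register pattern can be sketched with the stdlib selectors module (no gevent required): the listening socket only ever accepts, and each accepted client gets its own READ registration. All names here are illustrative:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def on_accept(listener):
    """Accept a pending client and register it for READ events."""
    try:
        client, address = listener.accept()
    except BlockingIOError:
        return                      # EAGAIN: nothing actually pending
    client.setblocking(False)
    sel.register(client, selectors.EVENT_READ, on_readable)

def on_readable(client):
    data = client.recv(4096)
    if not data:                    # peer closed the connection
        sel.unregister(client)
        client.close()
        return
    client.sendall(data)            # echo it back

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(16)
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, on_accept)

# One iteration of the dispatch loop; a real server loops forever.
for key, _ in sel.select(timeout=0):
    key.data(key.fileobj)
```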
