Skip Connection Interruptions (Site & BeautifulSoup) - python

I'm currently doing this with my script:
get the body (from the source code) and search it for a string; it keeps doing this until the string is found (i.e. when the site updates).
However, if the connection is lost, the script stops.
My 'connection' code looks something like this (it repeats in a while loop every 20 seconds):
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
url = ('url')
openUrl = opener.open(url).read()
soup = BeautifulSoup(openUrl)
I've used urllib2 & BeautifulSoup.
Can anyone tell me how I could make the script "freeze" if the connection is lost, check whether the internet connection is alive, and then continue based on the answer? (That is, check whether the script CAN connect, not whether the site is up. If it checks by simply trying the site, the script stops with a bunch of errors.)
Thank you!

Found the solution!
So, I need to check the connection on every pass of the loop, before actually doing anything.
So I created this function:
def check_internet(url="http://www.google.ro"):
    try:
        header = {"Pragma": "no-cache"}
        req = urllib2.Request(url, headers=header)
        response = urllib2.urlopen(req, timeout=2)
        return True
    except urllib2.URLError:
        return False
And it works, tested it with my connection down & up!
For the other newbies wondering:
while True:
    conn = check_internet()  # the site itself or just Google, just checking for a connection
    try:
        if conn is True:
            # code goes here
            pass
        else:
            # no connection: wait, then re-do the while loop
            time.sleep(30)
    except urllib2.URLError:
        # the request failed mid-run: wait before trying again
        time.sleep(20)
It works great: the script has been running for about 10 hours now and it handles errors perfectly! It also works with my connection off and shows the proper messages.
Open to suggestions for optimization!

Rather than "freeze" the script, I would have the script continue to run only if the connection is alive. If it's alive, run your code. If it's not alive, either attempt to reconnect, or halt execution.
while keepRunning:
    if connectionIsAlive():
        run_your_code()
    else:
        reconnect_maybe()
One way to check whether the connection is alive is described here: Checking if a website is up via Python.
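A possible implementation of connectionIsAlive(), in the spirit of that linked answer, is to issue a cheap HEAD request and treat any network error as "not alive" (the host below is a placeholder; any reliably available site works):
import httplib

def connectionIsAlive(host="www.example.com"):
    conn = httplib.HTTPConnection(host, timeout=5)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().status < 500
    except Exception:
        # DNS failure, refused connection, timeout, etc. all mean "not alive"
        return False
    finally:
        conn.close()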
If your program "stops with a bunch of errors", that is likely because you're not properly handling the situation where you're unable to connect to the site (for various reasons, such as you having no internet connection, their website being down, etc.).
You need to use a try/except block to make sure that you catch any errors that occur because you were unable to open a live connection.
try:
    openUrl = opener.open(url).read()
except urllib2.URLError:
    # something went wrong, how to respond?
    pass
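One way to respond, keeping with the question's polling setup, is simply to wait and retry so that a temporary outage doesn't kill the script (opener and url are from the question's code; the 20-second delay is just an example):
import time

while True:
    try:
        openUrl = opener.open(url).read()
        break                      # success: leave the retry loop
    except urllib2.URLError:
        time.sleep(20)             # connection problem: wait, then try again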

Related

Checking FTP connection is valid using NOOP command

I'm having trouble with one of my scripts seemingly disconnecting from my FTP during long batches of jobs. To counter this, I've attempted to make a module as shown below:
def connect_ftp(ftp):
    print "ftp1"
    starttime = time.time()
    retry = False
    try:
        ftp.voidcmd("NOOP")
        print "ftp2"
    except:
        retry = True
        print "ftp3"
    print "ftp4"
    while retry:
        try:
            print "ftp5"
            ftp.connect()
            ftp.login('LOGIN', 'CENSORED')
            print "ftp6"
            retry = False
            print "ftp7"
        except IOError as e:
            print "ftp8"
            retry = True
            sys.stdout.write("\rTime disconnected - " + str(time.time() - starttime))
            sys.stdout.flush()
    print "ftp9"
I call the function using only:
ftp = ftplib.FTP('CENSORED')
connect_ftp(ftp)
However, I've traced how the code runs using print lines, and on the first use of the module (before the FTP is even connected to) my script runs ftp.voidcmd("NOOP") without raising an exception, so no attempt is made to connect to the FTP initially.
The output is:
ftp1
ftp2
ftp4
ftp success  # this is printed after the function is called
I admit my code isn't the best or prettiest, and I haven't yet implemented anything to make sure I'm not reconnecting constantly if I keep failing to reconnect. But I can't work out for the life of me why this isn't working, so I don't see a point in expanding the module yet. Is this even the best approach for connecting/reconnecting to an FTP server?
Thank you in advance
This connects to the server:
ftp = ftplib.FTP('CENSORED')
So, naturally the NOOP command succeeds, as it does not need an authenticated connection.
Your connect_ftp is correct, except that you need to specify a hostname in your connect call.
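A minimal sketch of that fix, assuming the hostname is passed in (ftp.example.com is a placeholder):
import ftplib

def connect_ftp(ftp, host='ftp.example.com'):   # placeholder hostname
    try:
        ftp.voidcmd("NOOP")                     # cheap "are we still connected?" check
    except ftplib.all_errors:
        ftp.connect(host)                       # reconnecting needs the hostname again
        ftp.login('LOGIN', 'CENSORED')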

URLlib2 causing program to stop after 20 attempts

So I am trying to write a program which needs to check for a urllib2 111 error.
I do this by using:
def Refresher():
    req = urllib2.Request('http://example.com/myfile.txt')
    try:
        urlopen = urllib2.urlopen(req)
    except urllib2.HTTPError as e:
        if e.code == 404 or e.code == 111:
            error = True
At the end of Refresher I schedule it again (Refresher also updates a Tk window) using:
root.after(75, Refresher)
My problem is that when I reboot the server (and therefore cause a 111 error) this works fine for the first 20 times, but after the 20th time my function appears to stop running, with no error being thrown in the console. Then, when the server comes back up, my function starts running again.
How do I keep my program refreshing, since the function does other things as well as checking whether the server is down?
Thanks in advance.
Use requests instead of urllib2; it's safer to use and easier to understand. If the error persists, then the problem will be in another part of the server configuration.
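A minimal sketch of the same check with requests (the URL and the root.after scheduling come from the question; requests.exceptions.RequestException covers connection failures such as errno 111 as well as HTTP errors):
import requests

def Refresher():
    error = False
    try:
        resp = requests.get('http://example.com/myfile.txt', timeout=5)
        resp.raise_for_status()      # turn 4xx/5xx responses into exceptions
    except requests.exceptions.RequestException:
        # connection refused (errno 111), 404s, timeouts, etc. all land here
        error = True
    # ... update the Tk window based on error ...
    root.after(75, Refresher)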

mechanize keeps giving URLErrors

I'm automating some stuff with mechanize. I have a working program that logs into a site and goes to a page while logged in. However, sometimes I just get URLErrors stating the connection has timed out, whenever I do anything via mechanize:
URLError: <urlopen error [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
If I restart the program or re-try the attempts, it will just work. If I visit the same site with Chrome, it will never time out, no matter how often I attempt to log in.
What might be the cause of this? It sounds like mechanize is doing something that isn't ideal. I've gotten a similar pattern with different sites as well - URLErrors when there are in fact no connection issues.
EDIT: I also notice that if I retry immediately, it often works, but then it fails again on the next thing I do, and so on.
last_response = ...
for attempt in (1, 2):
    try:
        self.mech.select_form(nr=0)
        self.mech[self.LOGIN_FORM_DATA[1]] = self.user
        self.mech[self.LOGIN_FORM_DATA[2]] = self.password
        resp = self.mech.submit()
        html = resp.read()
        resp.close()
    except mechanize.URLError:
        self.error("URLError submitting form, trying again...")
        self.mech.set_response(last_response)  # reset the response
        continue
    break

Python - try statement breaking urllib2.urlopen

I'm writing a program in Python that has to make an HTTP request over a forced direct connection in order to avoid a proxy. Here is the code I use, which successfully manages this:
print "INFO: Testing API..."
proxy = urllib2.ProxyHandler({})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)
req = urllib2.urlopen('http://maps.googleapis.com/maps/api/geocode/json?address=blahblah&sensor=true')
returneddata = json.loads(req.read())
I then want to add a try statement around 'req', in order to handle a situation where the user is not connected to the internet, which I have tried like so:
try:
    req = urllib2.urlopen('http://maps.googleapis.com/maps/api/geocode/json?address=blahblah&sensor=true')
except urllib2.URLError:
    print "Unable to connect etc etc"
The trouble is that by doing that, it always throws the exception, even though the address is perfectly accessible & the code works without it.
Any ideas? Cheers.

urlopen error 10048, 'address already in use' while downloading in Python 2.5 on Windows

I'm writing code that will run on Linux, OS X, and Windows. It downloads a list of approximately 55,000 files from the server, then steps through the list of files, checking if the files are present locally. (With SHA hash verification and a few other goodies.) If the files aren't present locally or the hash doesn't match, it downloads them.
The server-side is plain-vanilla Apache 2 on Ubuntu over port 80.
The client side works perfectly on Mac and Linux, but gives me this error on Windows (XP and Vista) after downloading a number of files:
urllib2.URLError: <urlopen error <10048, 'Address already in use'>>
This link: http://bytes.com/topic/python/answers/530949-client-side-tcp-socket-receiving-address-already-use-upon-connect points me to TCP port exhaustion, but "netstat -n" never showed me more than six connections in "TIME_WAIT" status, even just before it errored out.
The code (called once for each of the 55,000 files it downloads) is this:
request = urllib2.Request(file_remote_path)
opener = urllib2.build_opener()
datastream = opener.open(request)
outfileobj = open(temp_file_path, 'wb')
try:
    while True:
        chunk = datastream.read(CHUNK_SIZE)
        if chunk == '':
            break
        else:
            outfileobj.write(chunk)
finally:
    outfileobj = outfileobj.close()
    datastream.close()
UPDATE: I find by grepping the log that it enters the download routine exactly 3998 times. I've run this multiple times and it fails at 3998 each time. Given that the linked article states that the available ports number 5000-1025=3975 (and some are probably expiring and being reused), it's starting to look a lot more like the linked article describes the real issue. However, I'm still not sure how to fix this. Making registry edits is not an option.
If it is really a resource problem (the OS not freeing socket resources fast enough), try this:
request = urllib2.Request(file_remote_path)
opener = urllib2.build_opener()
datastream = None
retry = 3  # 3 tries
while retry:
    try:
        datastream = opener.open(request)
    except urllib2.URLError as ue:
        if str(ue.reason).find('10048') > -1:
            retry -= 1
            if not retry:
                raise urllib2.URLError("Address already in use / retries exhausted")
        else:
            retry = 0
    if datastream:
        retry = 0
outfileobj = open(temp_file_path, 'wb')
try:
    while True:
        chunk = datastream.read(CHUNK_SIZE)
        if chunk == '':
            break
        else:
            outfileobj.write(chunk)
finally:
    outfileobj = outfileobj.close()
    datastream.close()
If you want, you can insert a sleep, or make it OS-dependent.
On my Win XP the problem doesn't show up (I reached 5000 downloads).
I watch my processes and network with Process Hacker.
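For instance, an OS-dependent sleep in the retry branch could look roughly like this (the delays are arbitrary examples, not measured values):
import sys
import time

# inside the except branch, before retrying opener.open(request):
if sys.platform.startswith('win'):
    time.sleep(5)    # give Windows extra time to release sockets stuck in TIME_WAIT
else:
    time.sleep(1)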
Thinking outside the box, the problem you seem to be trying to solve has already been solved by a program called rsync. You might look for a Windows implementation and see if it meets your needs.
You should seriously consider copying and modifying this pyCurl example for efficient downloading of a large collection of files.
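Not the linked example, but a rough sketch of the pyCurl approach: a single Curl handle reused across downloads keeps the connection alive, which avoids burning a new local port per file (files_to_fetch is a hypothetical list of (url, path) pairs):
import pycurl

curl = pycurl.Curl()
for file_remote_path, temp_file_path in files_to_fetch:   # hypothetical (url, path) pairs
    outfileobj = open(temp_file_path, 'wb')
    try:
        curl.setopt(pycurl.URL, file_remote_path)
        curl.setopt(pycurl.WRITEFUNCTION, outfileobj.write)   # stream the body to the file
        curl.perform()
    finally:
        outfileobj.close()
curl.close()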
Instead of opening a new TCP connection for each request, you should really use persistent HTTP connections; have a look at urlgrabber (or alternatively, just at keepalive.py for how to add keep-alive connection support to urllib2).
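A sketch of wiring keepalive.py into urllib2, assuming its HTTPHandler behaves as its documentation describes:
import urllib2
from keepalive import HTTPHandler   # keepalive.py from urlgrabber

opener = urllib2.build_opener(HTTPHandler())
urllib2.install_opener(opener)

# subsequent urlopen calls to the same host reuse one TCP connection
datastream = urllib2.urlopen(file_remote_path)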
All indications point to a lack of available sockets. Are you sure that only 6 are in TIME_WAIT status? If you're running so many download operations, it's very likely that netstat overruns your terminal buffer. I find that netstat overruns my terminal during normal usage periods.
The solution is to either modify the code to reuse sockets or introduce a timeout. It also wouldn't hurt to keep track of how many open sockets you have, to optimize the waiting. The default timeout on Windows XP is 120 seconds, so you want to sleep for at least that long if you run out of sockets. Unfortunately it doesn't look like there's an easy way to check from Python when a socket has closed and left the TIME_WAIT status.
Given the asynchronous nature of the requests and timeouts, the best way to do this might be in a thread. Make each thread sleep for 2 minutes before it finishes. You can either use a Semaphore or limit the number of active threads to ensure that you don't run out of sockets.
Here's how I'd handle it. You might want to add an exception clause to the inner try block of the fetch section, to warn you about failed fetches.
import time
import threading
import Queue
import urllib2

# assumes url_queue is a Queue object populated with tuples in the form of (url_to_fetch, temp_file)
# also assumes that TotalUrls is the size of the queue before any threads are started.

class UrlFetcher(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        try:  # needed to handle the Empty exception raised by an empty queue.
            file_remote_path, temp_file_path = self.queue.get(False)
        except Queue.Empty:
            return
        request = urllib2.Request(file_remote_path)
        opener = urllib2.build_opener()
        datastream = opener.open(request)
        outfileobj = open(temp_file_path, 'wb')
        try:
            while True:
                chunk = datastream.read(CHUNK_SIZE)
                if chunk == '':
                    break
                else:
                    outfileobj.write(chunk)
        finally:
            outfileobj = outfileobj.close()
            datastream.close()
            time.sleep(120)  # wait out the TIME_WAIT period before the thread finishes
            self.queue.task_done()

# elsewhere (main thread):
threads_started = 0
while threads_started < TotalUrls:
    if threading.activeCount() < 3975:  # hard limit of available ports
        t = UrlFetcher(url_queue)
        t.start()
        threads_started += 1
    else:
        time.sleep(2)
url_queue.join()
Sorry, my python is a little rusty, so I wouldn't be surprised if I missed something.
