I'm using the following code and I can't figure out why it's not raising an exception when urlopen() fails.
In my particular case I know why it fails: my URLs don't have http:// in front of them. But I want to catch those cases so the script can continue running rather than exiting.
req = urllib2.Request(link)
try:
    url = urllib2.urlopen(req)
except urllib2.URLError, e:
    print e.code
    print e.read()
    return False
and I'm getting:
Traceback (most recent call last):
  File "./getURLs.py", line 141, in <module>
    main()
  File "./getURLs.py", line 82, in main
    Process(args).get_children()
  File "./getURLs.py", line 65, in get_children
    self.get_links(link)
  File "./getURLs.py", line 46, in get_links
    data = urllib2.urlopen(req)
  File "/usr/local/lib/python2.7/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/local/lib/python2.7/urllib2.py", line 383, in open
    protocol = req.get_type()
  File "/usr/local/lib/python2.7/urllib2.py", line 244, in get_type
    raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: /
Solution
For anyone else interested, here's my particular solution: I'm using the following to catch both exceptions.
req = urllib2.Request(link)
try:
    url = urllib2.urlopen(req)
except (ValueError, urllib2.URLError) as e:
    print e
    return False
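Since the root cause here is the missing scheme, another option is to normalize each link before opening it. A minimal sketch in Python 3 syntax (`normalize_url` is an illustrative helper, not from the original code):

```python
from urllib.parse import urlparse

def normalize_url(link):
    """Prepend a default http:// scheme when the link has none."""
    if not urlparse(link).scheme:
        return "http://" + link
    return link

normalize_url("example.com/page")     # -> "http://example.com/page"
normalize_url("https://example.com")  # unchanged
```

This sidesteps the ValueError entirely, at the cost of guessing http:// for schemeless links.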
From what you've pasted, it looks like you're catching the wrong type of exception. The code should say
try:
    url = urllib2.urlopen(req)
except ValueError:
    # etc etc etc.
If it's critical that the entirety of your code run, you can also use a bare except: with no exception type, or add a finally clause for cleanup that runs either way. See: http://docs.python.org/tutorial/errors.html
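To illustrate the try/except/finally flow that the linked tutorial describes, here is a small self-contained sketch (all names are illustrative):

```python
events = []

def risky(fail):
    try:
        if fail:
            raise ValueError("bad url")
        events.append("ok")
    except ValueError:
        events.append("caught")
    finally:
        # finally runs on both the success and the failure path
        events.append("cleanup")

risky(False)
risky(True)
# events is now ["ok", "cleanup", "caught", "cleanup"]
```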
Related
I have a project which is structured something like this, with multiple pieces of functionality and often more possible sources of error. One piece of functionality may also call something else that raises an error.
def functionality_one(arguments) -> str:
    try:
        status_feedback = attempt_functionality_one(arguments)
        # this would usually be multiple lines
    except ValueError as e:
        return "known-failure-code"
    except ConnectionError as e:
        raise ConnectionError("Some user-friendly message for unexpected error") from e
    else:
        return status_feedback

def main():
    ## when the relevant CLI argument is passed:
    try:
        status = functionality_one(arguments)
    except Exception as e:
        send_notification_to_user(e.args[0])
    else:
        send_notification_to_user(USER_FRIENDLY_SUCCESS_MESSAGES.get(status, "Success!"))

if __name__ == "__main__":
    main()
Focus on this bit about re-raising errors:
except ConnectionError as e:
    raise ConnectionError("Some user-friendly message for unexpected error") from e
I do this to attach a user-friendly message to the error that I can later display to the user. Is there a better way to accomplish this?
In particular, tracebacks normally just show errors as they propagate. With this method, the log gets a message like "... was the direct cause of the following exception ..." and I don't know whether this is the norm in Python. Here's an example from the log file:
Traceback (most recent call last):
  File "D:\username\Documents\tech-projects\project-name\src\auth.py", line 157, in login
    login_request = post(
  File "D:\username\Documents\tech-projects\project-name\.venv\lib\site-packages\requests\api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
  File "D:\username\Documents\tech-projects\project-name\.venv\lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\username\Documents\tech-projects\project-name\.venv\lib\site-packages\requests\sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\username\Documents\tech-projects\project-name\.venv\lib\site-packages\requests\sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "D:\username\Documents\tech-projects\project-name\.venv\lib\site-packages\requests\adapters.py", line 565, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='wifi-login.university-website.domain', port=80): Max retries exceeded with url: /cgi-bin/authlogin (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001FF681B27D0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\username\Documents\tech-projects\project-name\login_cli.py", line 269, in main
    status_message: str = parsed_namespace.func(parsed_namespace)
  File "D:\username\Documents\tech-projects\project-name\login_cli.py", line 197, in connect
    return src.auth.login(credentials)
  File "D:\username\Documents\tech-projects\project-name\src\auth.py", line 164, in login
    raise ConnectionError(f"Server-side error. Contact IT support or wait until morning.") from e
requests.exceptions.ConnectionError: Server-side error. Contact IT support or wait until morning.
So what's the right way to do this? Feel free to suggest a change that completely changes the structure of the program too, if you feel that's necessary.
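One pattern that is sometimes suggested for this situation (a sketch, not from the thread; all names are illustrative) is a dedicated exception class that carries the user-facing message, still chained with `from e` so the log keeps the low-level cause:

```python
class UserFacingError(Exception):
    """Carries a message that is safe to show to end users."""

def login():
    # stand-in for the real network call
    raise ConnectionError("getaddrinfo failed")

try:
    try:
        login()
    except ConnectionError as e:
        # __cause__ preserves the original error for the log
        raise UserFacingError("Server-side error. Contact IT support.") from e
except UserFacingError as err:
    user_message = str(err)
    root_cause = err.__cause__
```

The "direct cause" line in the traceback is exactly what `from e` is documented to produce, so the logs in the question are normal; a distinct exception type just makes the user-facing layer easy to catch separately.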
My requirement is almost the same as Requests — how to tell if you're getting a success message?
But I need to print an error whenever I can't reach the URL. Here is my attempt:
# setting up the URL and checking the connection by printing the status
url = 'https://www.google.lk'
try:
    page = requests.get(url)
    print(page.status_code)
except requests.exceptions.HTTPError as err:
    print("Error")
The issue is that rather than printing just "Error", it prints the whole error message below:
Traceback (most recent call last):
  File "testrun.py", line 22, in <module>
    page = requests.get(url)
  File "/root/anaconda3/envs/py36/lib/python3.6/site-packages/requests/api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "/root/anaconda3/envs/py36/lib/python3.6/site-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/root/anaconda3/envs/py36/lib/python3.6/site-packages/requests/sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "/root/anaconda3/envs/py36/lib/python3.6/site-packages/requests/sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "/root/anaconda3/envs/py36/lib/python3.6/site-packages/requests/adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='learn.microsoft.com', port=443): Max retries exceeded with url: /en-us/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ff91a543198>: Failed to establish a new connection: [Errno 110] Connection timed out',))
Can someone show me how I should modify my code so it prints just "Error" when there is any issue? Then I can extend it to other requirements.
You're not catching the correct exception.
import requests

url = 'https://www.googlggggggge.lk'
try:
    page = requests.get(url)
    print(page.status_code)
except (requests.exceptions.HTTPError, requests.exceptions.ConnectionError):
    print("Error")
You can also do except Exception; note, however, that Exception is too broad and is not recommended in most cases, since it traps all errors.
You need to either use a general except or catch all the exceptions the requests module might raise, e.g. except (requests.exceptions.HTTPError, requests.exceptions.ConnectionError).
For full list see: Correct way to try/except using Python requests module?
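Since every exception requests raises inherits from requests.exceptions.RequestException, catching that base class is a compact alternative to listing individual types. A sketch (`fetch_status` is an illustrative name, not from the original code):

```python
import requests

def fetch_status(url):
    """Return the HTTP status code, or None when the request fails for any reason."""
    try:
        return requests.get(url, timeout=5).status_code
    except requests.exceptions.RequestException:
        return None
```

ConnectionError, HTTPError, Timeout, and even malformed-URL errors like MissingSchema all subclass RequestException, so this one clause covers them without resorting to a bare except.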
My program is written to scan through a large list of websites for SQLi vulnerabilities by adding a simple string query (') to the end of URLs and looking for errors in the page source.
My program keeps getting stuck on the same website. Here's the error I keep receiving:
[-] http://www.pluralsight.com/guides/microsoft-net/getting-started-with-asp-net-mvc-core-1-0-from-zero-to-hero?status=in-review'
[-] Page not found.
[-] http://lfg.go2dental.com/member/dental_search/searchprov.cgi?P=LFGDentalConnect&Network=L'
[-] http://www.parlimen.gov.my/index.php?lang=en'
[-] http://www.otakunews.com/category.php?CatID=23'
[-] http://plaine-d-aunis.bibli.fr/opac/index.php?lvl=cmspage&pageid=6&id_rubrique=100'
[-] Page not found.
[-] http://www.rvparkhunter.com/state.asp?state=britishcolumbia'
[-] http://ensec.org/index.php?option=com_content&view=article&id=547:lord-howell-british-fracking-policy--a-change-of-direction-needed&catid=143:issue-content&Itemid=433'
[-] URL Timed Out
[-] http://www.videohelp.com/tools.php?listall=1'
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 44, in mapstar
    return list(map(*args))
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 81, in mp_worker
    mainMethod(URLS)
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 77, in mainMethod
    tryMethod(req, URL)
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 48, in tryMethod
    checkforMySQLError(req, URL)
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 23, in checkforMySQLError
    response = urllib.request.urlopen(req, context=gcontext, timeout=2)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 532, in open
    response = meth(req, response)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 564, in error
    result = self._call_chain(*args)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 504, in _call_chain
    result = func(*args)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 753, in http_error_302
    fp.read()
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 462, in read
    s = self._safe_read(self.length)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 614, in _safe_read
    raise IncompleteRead(b''.join(s), amt)
http.client.IncompleteRead: IncompleteRead(4659 bytes read, 15043 more expected)
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "sitehunter.py", line 91, in <module>
    mp_handler(URLList)
  File "sitehunter.py", line 86, in mp_handler
    p.map(mp_worker, URLList)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 260, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 608, in get
    raise self._value
http.client.IncompleteRead: IncompleteRead(4659 bytes read, 15043 more expected)
C:\Users\Brice\Desktop\My Site Hunter>
Here's my full source code. I narrow it down for you in the next section.
# Start off with imports
import urllib.request
import urllib.error
import socket
import threading
import multiprocessing
import time
import ssl

# Fake a header to get fewer errors
headers = {'User-agent': 'Mozilla/5.0'}

# Make a class to pass to upon exception errors
class MyException(Exception):
    pass

# Checks for MySQL error responses after putting a string (') query on the end of a URL
def checkforMySQLError(req, URL):
    # gcontext is to bypass a no-SSL error from shutting down my program
    gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
    response = urllib.request.urlopen(req, context=gcontext, timeout=2)
    page_source = response.read()
    page_source_string = page_source.decode(encoding='cp866', errors='ignore')
    # The if statements behind the whole thing. Checks page source for these errors,
    # and reports any that come up positive.
    # I'd like to do my outputting here, if possible.
    if "You have an error in your SQL syntax" in page_source_string:
        print("\t [+] " + URL)
    elif "mysql_fetch" in page_source_string:
        print("\t [+] " + URL)
    elif "mysql_num_rows" in page_source_string:
        print("\t [+] " + URL)
    elif "MySQL Error" in page_source_string:
        print("\t [+] " + URL)
    elif "MySQL_connect()" in page_source_string:
        print("\t [+] " + URL)
    elif "UNION SELECT" in page_source_string:
        print("\t [+] " + URL)
    else:
        print("\t [-] " + URL)

# Attempts to connect to the URL, and passes an error on if it fails.
def tryMethod(req, URL):
    try:
        checkforMySQLError(req, URL)
    except urllib.error.HTTPError as e:
        if e.code == 404:
            print("\t [-] Page not found.")
        if e.code == 400:
            print("\t [+] " + URL)
    except urllib.error.URLError as e:
        print("\t [-] URL Timed Out")
    except socket.timeout as e:
        print("\t [-] URL Timed Out")
    except socket.error as e:
        print("\t [-] Error in URL")

# This is where the magic begins.
def mainMethod(URLList):
    ##### THIS IS THE WORK-AROUND I USED TO FIX THIS ERROR ####
    # URL = urllib.request.urlopen(URLList, timeout=2)
    # Replace any newlines or we get an invalid URL request.
    URL = URLList.replace("\n", "")
    # urllib doesn't like https, not sure why.
    URL = URL.replace("https://", "http://")
    # Python likes to truncate URLs after spaces, so I substitute a typical %20.
    URL = URL.replace(" ", "%20")
    # The blind SQL query that makes the errors occur.
    URL = URL + "'"
    # Requests to connect to the URL and sends it to tryMethod.
    req = urllib.request.Request(URL)
    tryMethod(req, URL)

# Multiprocessing worker
def mp_worker(URLS):
    mainMethod(URLS)

# Multiprocessing handler
def mp_handler(URLList):
    p = multiprocessing.Pool(25)
    p.map(mp_worker, URLList)

# The beginning of it all
if __name__ == '__main__':
    URLList = open('sites.txt', 'r')
    mp_handler(URLList)
Here's the important parts of the code, specifically the parts where I read from URLs using urllib:
def mainMethod(URLList):
    ##### THIS IS THE WORK-AROUND I USED TO FIX THIS ERROR ####
    # URL = urllib.request.urlopen(URLList, timeout=2)
    # Replace any newlines or we get an invalid URL request.
    URL = URLList.replace("\n", "")
    # urllib doesn't like https, not sure why.
    URL = URL.replace("https://", "http://")
    # Python likes to truncate URLs after spaces, so I substitute a typical %20.
    URL = URL.replace(" ", "%20")
    # The blind SQL query that makes the errors occur.
    URL = URL + "'"
    # Requests to connect to the URL and sends it to tryMethod.
    req = urllib.request.Request(URL)
    tryMethod(req, URL)

# Checks for MySQL error responses after putting a string (') query on the end of a URL
def checkforMySQLError(req, URL):
    # gcontext is to bypass a no-SSL error from shutting down my program
    gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
    response = urllib.request.urlopen(req, context=gcontext, timeout=2)
    page_source = response.read()
    page_source_string = page_source.decode(encoding='cp866', errors='ignore')
I got past this error by making a request to read from URLList before making any changes to it (the commented-out workaround line above). That workaround only led to another error that looks worse and harder to fix, which is why I included both errors.
Here's the new error when I uncomment that line of code:
[-] http://www.davis.k12.ut.us/site/Default.aspx?PageType=1&SiteID=6497&ChannelID=6507&DirectoryType=6'
[-] http://www.surreyschools.ca/NewsEvents/Posts/Lists/Posts/ViewPost.aspx?ID=507'
[-] http://plaine-d-aunis.bibli.fr/opac/index.php?lvl=cmspage&pageid=6&id_rubrique=100'
[-] http://www.parlimen.gov.my/index.php?lang=en'
[-] http://www.rvparkhunter.com/state.asp?state=britishcolumbia'
[-] URL Timed Out
[-] http://www.videohelp.com/tools.php?listall=1'
Traceback (most recent call last):
  File "sitehunter.py", line 91, in <module>
    mp_handler(URLList)
  File "sitehunter.py", line 86, in mp_handler
    p.map(mp_worker, URLList)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 260, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 608, in get
    raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '<multiprocessing.pool.ExceptionWithTraceback object at 0x0381C790>'. Reason: 'TypeError("cannot serialize '_io.BufferedReader' object",)'
C:\Users\Brice\Desktop\My Site Hunter>
The new error seems worse than the old one, to be honest. That's why I included both. Any information on how to fix this would be greatly appreciated, as I've been stuck trying to fix it for the past few hours.
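For the IncompleteRead itself, one hedged option (not from the thread) is to catch http.client.IncompleteRead inside the worker and keep the partial body, so the exception never has to cross the process boundary at all; the exception's `partial` attribute holds the bytes that did arrive (`safe_read` is an illustrative name):

```python
import http.client
import urllib.request

def safe_read(req, timeout=2):
    """Read a response body, tolerating servers that close the connection early."""
    try:
        return urllib.request.urlopen(req, timeout=timeout).read()
    except http.client.IncompleteRead as e:
        # e.partial holds whatever bytes arrived before the connection dropped
        return e.partial
```

Because the worker now returns plain bytes rather than raising, the pool no longer needs to pickle an exception traceback back to the parent process, which is what triggered the second (MaybeEncodingError) failure.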
I have been working on a chatbot interface to icinga2, and have not found a persistent way to survive the restart/reload of the icinga2 server. After a week of moving try/except blocks, using requests sessions, et al, it's time to reach out to the community.
Here is the current iteration of the request function:
def i2api_request(url, headers={}, data={}, stream=False, *, auth=api_auth, ca=api_ca):
    ''' Do not call this function directly; it's a helper for the i2* command functions '''
    # Adapted from http://docs.icinga.org/icinga2/latest/doc/module/icinga2/chapter/icinga2-api
    # Section 11.10.3.1
    try:
        r = requests.post(url,
                          headers=headers,
                          auth=auth,
                          data=json.dumps(data),
                          verify=ca,
                          stream=stream
                          )
    except (requests.exceptions.ChunkedEncodingError,
            requests.packages.urllib3.exceptions.ProtocolError,
            http.client.IncompleteRead,
            ValueError) as drop:
        return("No connection to Icinga API")
    if r.status_code == 200:
        for line in r.iter_lines():
            try:
                if stream == True:
                    yield(json.loads(line.decode('utf-8')))
                else:
                    return(json.loads(line.decode('utf-8')))
            except:
                debug("Could not produce JSON from "+line)
                continue
    else:
        #r.raise_for_status()
        debug('Received a bad response from Icinga API: '+str(r.status_code))
        print('Icinga2 API connection lost.')
(The debug function just flags and prints the indicated error to the console.)
This code works fine handling events from the API and sending them to the chatbot, but if the icinga server is reloaded, as would be needed after adding a new server definition in /etc/icinga2..., the listener crashes.
Here is the error response I get when the server is restarted:
Exception in thread Thread-11:
Traceback (most recent call last):
  File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 447, in _update_chunk_length
    self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 228, in _error_catcher
    yield
  File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 498, in read_chunked
    self._update_chunk_length()
  File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 451, in _update_chunk_length
    raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/errbot/err3/lib/python3.4/site-packages/requests/models.py", line 664, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 349, in stream
    for line in self.read_chunked(amt, decode_content=decode_content):
  File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 526, in read_chunked
    self._original_response.close()
  File "/usr/lib64/python3.4/contextlib.py", line 77, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/errbot/err3/lib/python3.4/site-packages/requests/packages/urllib3/response.py", line 246, in _error_catcher
    raise ProtocolError('Connection broken: %r' % e, e)
requests.packages.urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib64/python3.4/threading.py", line 920, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/home/errbot/plugins/icinga2bot.py", line 186, in report_events
    for line in queue:
  File "/home/errbot/plugins/icinga2bot.py", line 158, in i2events
    for line in queue:
  File "/home/errbot/plugins/icinga2bot.py", line 98, in i2api_request
    for line in r.iter_lines():
  File "/home/errbot/err3/lib/python3.4/site-packages/requests/models.py", line 706, in iter_lines
    for chunk in self.iter_content(chunk_size=chunk_size, decode_unicode=decode_unicode):
  File "/home/errbot/err3/lib/python3.4/site-packages/requests/models.py", line 667, in generate
    raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
With Icinga2.4, this crash happened every time the server was restarted. I thought the problem had gone away after we upgraded to 2.5, but it now appears to have turned into a heisenbug.
I wound up getting advice on IRC to reorder the try/except blocks and make sure they were in the right places. Here's the working result.
def i2api_request(url, headers={}, data={}, stream=False, *, auth=api_auth, ca=api_ca):
    ''' Do not call this function directly; it's a helper for the i2* command functions '''
    # Adapted from http://docs.icinga.org/icinga2/latest/doc/module/icinga2/chapter/icinga2-api
    # Section 11.10.3.1
    debug(url)
    debug(headers)
    debug(data)
    try:
        r = requests.post(url,
                          headers=headers,
                          auth=auth,
                          data=json.dumps(data),
                          verify=ca,
                          stream=stream
                          )
        debug("Connecting to Icinga server")
        debug(r)
        if r.status_code == 200:
            try:
                for line in r.iter_lines():
                    debug('in i2api_request: '+str(line))
                    try:
                        if stream == True:
                            yield(json.loads(line.decode('utf-8')))
                        else:
                            return(json.loads(line.decode('utf-8')))
                    except:
                        debug("Could not produce JSON from "+line)
                        return("Could not produce JSON from "+line)
            except (requests.exceptions.ChunkedEncodingError, ConnectionRefusedError):
                return("Connection to Icinga lost.")
        else:
            debug('Received a bad response from Icinga API: '+str(r.status_code))
            print('Icinga2 API connection lost.')
    except (requests.exceptions.ConnectionError,
            requests.packages.urllib3.exceptions.NewConnectionError) as drop:
        debug("No connection to Icinga API. Error received: "+str(drop))
        sleep(5)
        return("No connection to Icinga API.")
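The underlying lesson, shown as a minimal stdlib-only sketch (the generator names are illustrative): with a streamed response, the failure is raised while iterating the lines, not when the request call returns, so the except has to wrap the loop rather than the call:

```python
def consume(lines):
    """Collect lines from a generator, converting mid-stream failures to a sentinel."""
    out = []
    try:
        for line in lines:
            out.append(line)
    except ConnectionError:
        # the error surfaces here, during iteration, not at creation time
        out.append("connection lost")
    return out

def flaky_stream():
    yield "event 1"
    yield "event 2"
    raise ConnectionError("dropped mid-stream")
```

consume(flaky_stream()) yields the two events followed by the sentinel, which is exactly the behaviour the reordered try/except above achieves for r.iter_lines().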
What is the appropriate way for handling exceptions from libraries imported by other libraries in Python?
For example, I have a library called "pycontrol" that I import into my main program. "pycontrol" imports the "suds" library. The "suds" library, in turn, imports the "urllib2" library. I've noticed that when the "suds" library has trouble connecting to remote resources it is accessing through "urllib2," these exceptions trickle up to my main program.
My best guess at this point is to import urllib2 and suds into my global namespace and catch the typical exceptions they raise that aren't handled in "pycontrol".
Is there some other best practice as to how one might approach this?
A basic idea of what the snippet of code looks like (without importing suds or urllib2 into the global namespace):
import pycontrol.pycontrol as pc

print "Connecting to iControl API on LTM %s..." % ltm
try:
    b = pc.BIGIP(hostname=ltm, username=user, password=pw,
                 wsdls=wsdl_list, fromurl=True,
                 debug=soap_debug)
except (<whattocatch>), detail:
    print "Error: could not connect to iControl API on LTM %s... aborting!" % ltm
    print "Details: %s" % detail
    exitcode = 1
else:
    print "Connection successfully established."
Here's a sample traceback:
Connecting to iControl API on LTM s0-bigip1-lb2.lab.zynga.com...
Traceback (most recent call last):
  File "./register.py", line 507, in <module>
    main()
  File "./register.py", line 415, in main
    b = build_bigip_object(ltm, user, pw, WSDLS, soap_debug = False)
  File "./register.py", line 85, in build_bigip_object
    debug=soap_debug)
  File "build/bdist.macosx-10.6-universal/egg/pycontrol/pycontrol.py", line 81, in __init__
  File "build/bdist.macosx-10.6-universal/egg/pycontrol/pycontrol.py", line 103, in _get_clients
  File "build/bdist.macosx-10.6-universal/egg/pycontrol/pycontrol.py", line 149, in _get_suds_client
  File "/Library/Python/2.6/site-packages/suds/client.py", line 111, in __init__
    self.wsdl = reader.open(url)
  File "/Library/Python/2.6/site-packages/suds/reader.py", line 136, in open
    d = self.fn(url, self.options)
  File "/Library/Python/2.6/site-packages/suds/wsdl.py", line 136, in __init__
    d = reader.open(url)
  File "/Library/Python/2.6/site-packages/suds/reader.py", line 73, in open
    d = self.download(url)
  File "/Library/Python/2.6/site-packages/suds/reader.py", line 88, in download
    fp = self.options.transport.open(Request(url))
  File "/Library/Python/2.6/site-packages/suds/transport/https.py", line 60, in open
    return HttpTransport.open(self, request)
  File "/Library/Python/2.6/site-packages/suds/transport/http.py", line 62, in open
    return self.u2open(u2request)
  File "/Library/Python/2.6/site-packages/suds/transport/http.py", line 118, in u2open
    return url.open(u2request, timeout=tm)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 383, in open
    response = self._open(req, data)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 401, in _open
    '_open', req)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 361, in _call_chain
    result = func(*args)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1138, in https_open
    return self.do_open(httplib.HTTPSConnection, req)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1105, in do_open
    raise URLError(err)
urllib2.URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known>
I think you answered your question yourself. Import urllib2 and catch the exception in your module.
from urllib2 import URLError

try:
    # something
except URLError, e:
    # Do something in case of error.
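For reference, in Python 3 the same exception moved to urllib.error and subclasses OSError, so the equivalent catch looks like this (the raise is simulated here for illustration, since no network call is made):

```python
from urllib.error import URLError

# URLError subclasses OSError in Python 3, so catching OSError also covers it
assert issubclass(URLError, OSError)

try:
    raise URLError("simulated connection failure")
except URLError as e:
    reason = str(e.reason)
```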
Why do you need to catch specific exceptions at all? After all, any exception (not only URLError) raised from b = pc.BIGIP(...) means you cannot go on.
I suggest:
import traceback

try:
    b = pc.BIGIP(...)
except:
    traceback.print_exc()
    exitcode = 1
else:
    do_something_with_connection(b)
Another idea: Why bother with catching the exception at all? The Python interpreter will dump a stack trace to stderr and exit the program when it encounters an unhandled exception:
b = pc.BIGIP(...)
do_something_with_connection(b)
Or if you need to write to an error log:
import logging
import sys

def main():
    b = pc.BIGIP(...)
    do_something_with_connection(b)

if __name__ == "__main__":
    try:
        main()
    except:
        logging.exception("An unexpected error occurred")
        sys.exit(1)