Why does urllib2's .getcode() method crash on 404s? - python

In the beginner Python course I took on Lynda, it said to use .getcode() to get the HTTP code from a URL, and that this can be used as a test before reading the data:
webUrl = urllib2.urlopen('http://www.wired.com/tag/magazine-23-05/page/4')
print(str(webUrl.getcode()))
if (webUrl.getcode() == 200):
    data = webUrl.read()
else:
    print 'error'
However, when used with the 404 page above, it causes Python to quit with "Python function terminated unexpectedly: HTTP Error 404: Not Found", so it seems this lesson was completely wrong?
My question then is what exactly is .getcode() actually good for? You can't actually use it to test what the http code is unless you know what it is (or at least that it's not a 404). Was the course wrong or am I missing something?
My understanding is the proper way to do it is like this, which doesn't use .getcode() at all (though tell me if there is a better way):
try:
    url = urllib2.urlopen('http://www.wired.com/tag/magazine-23-05/page/4')
except urllib2.HTTPError, e:
    print e
Am I misunderstanding the point of .getcode(), or is it pretty much useless? It seems strange to me that a method for getting a page's status code, in a library dedicated to opening URLs, can't handle something as trivial as returning a 404.

A 404 code is considered an error status by urllib2 and thus an exception is raised. The exception object also supports the getcode() method:
>>> import urllib2
>>> try:
...     url = urllib2.urlopen('http://www.wired.com/tag/magazine-23-05/page/4')
... except urllib2.HTTPError, e:
...     print e
...     print e.getcode()
...
HTTP Error 404: Not Found
404
The fact that errors are raised is poorly documented. The library uses a stack of handlers to form a URL opener (created with urllib2.build_opener(), installed with urllib2.install_opener()), and the default stack includes the urllib2.HTTPErrorProcessor class.
It is that class that causes any response with a status code outside the 2xx range to be handled as an error. The 3xx status codes are then handled by the HTTPRedirectHandler object, and some of the 40x codes (related to authentication) are handled by specialised authentication handlers, but most codes are simply left to be raised as an exception.
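If you really want urlopen-style calls that hand back the response whatever the status code, you can replace that handler in the opener stack. A minimal sketch (the class name NoRaiseHTTPErrorProcessor is just an illustrative name):
import urllib2

class NoRaiseHTTPErrorProcessor(urllib2.HTTPErrorProcessor):
    # Return every response unchanged, so non-2xx status codes are
    # handed back to the caller instead of being raised as HTTPError.
    # Note this also stops redirects from being followed, since 3xx
    # responses normally travel through the same error machinery.
    def http_response(self, request, response):
        return response
    https_response = http_response

opener = urllib2.build_opener(NoRaiseHTTPErrorProcessor)
response = opener.open('http://www.wired.com/tag/magazine-23-05/page/4')
print response.getcode()  # prints 404 instead of raising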
If you are up to installing additional Python libraries, I recommend you install the requests library instead, where error handling is a lot saner. No exceptions are raised unless you explicitly request it:
import requests
response = requests.get(url)
response.raise_for_status() # raises an exception for 4xx or 5xx status codes.
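For the original example, that would look roughly like this (a sketch using the same Wired URL; status_code replaces getcode() and no exception is raised for the 404):
import requests

response = requests.get('http://www.wired.com/tag/magazine-23-05/page/4')
print(response.status_code)  # 404, without raising
if response.status_code == 200:
    data = response.text
else:
    print('error')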

Yes, you are understanding it right: it throws an exception for a non-"OK" HTTP status code. The lesson might have worked at the time it was written because the URL was valid then, but if you try that URL in a browser now, you will also get a 404 Not Found, because the URL is no longer valid.
In this case, urllib2.urlopen is arguably abusing exceptions to report HTTP error status codes (see the docs for urllib2.HTTPError).
As an aside, I would suggest trying the requests library, which is much nicer to work with if you are planning to do some actual scripting work in this space outside of tutorials.

Related

Catching SSLErrors within Tornado

I'm running Tornado 4.0.2 in a Python 2.7.5 virtualenv, using SSL with a self-signed certificate, and the following SSLError is showing up repeatedly:
SSLError: [Errno 1] _ssl.c:1419: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca
A few questions follow:
I'm assuming these exceptions are due to clients freaking out about my self-signed certificate. Is this correct?
Assuming this is the case - I don't care about this exception, and I don't want to see it in the log. (It's an internal webserver - we're never going to pay for a CA. All connections are just going to have to be untrusted.) In an attempt to catch the exceptions myself, I've tried subclassing IOLoop as follows:
class MyIOLoop(IOLoop):
    def handle_callback_exception(self, callback):
        print "Exception in callback", callback

if __name__ == "__main__":
    app = Application(urls, compress_response=True)
    ioloop = MyIOLoop.instance()
    http_server = httpserver.HTTPServer(app, ssl_options={"certfile": "cert.pem", "keyfile": "key.pem"}, io_loop=ioloop)
    http_server.listen(8888)
    ioloop.start()
But this hasn't helped - I still get the full stack trace.
What do I need to do to handle (i.e. ignore) such exceptions myself? I've experimented with setting "cert_reqs": ssl.CERT_NONE in the ssl_options but that also hasn't helped.
Is there anything else I need to do - such as close the connection myself - when I've caught such an exception? If so, what, and how?
I also asked this question on the Tornado mailing list, and got the following response:
This error is coming from HTTP1ServerConnection, not IOLoop (I think it's uncommon for errors to make it all the way up to the IOLoop these days). You're correct that this means that a client has refused to connect because it doesn't trust your certificate. It's arguably useful to log something in this case (you'd want to know if this started happening a lot), but it should be at most one line instead of a full stack trace. It might also be better to treat it as more like ECONNRESET and log nothing.

We don't currently expose any useful ways to customize this logging, but you have options in the logging module itself. You could attach a Filter to the logger and block entries where exc_info[0] is SSLError and exc_info[1] has the right error code, for example.
I ended up adding a filter to Tornado's logger as suggested. One slight snag was that record.exc_info was sometimes None, but in those situations I was able to get enough information out of record.args to decide whether I wanted to filter it.
Following on from helgridly's own answer: the error can't be caught, but you can filter the logs.
Create a function that checks for the presence of an SSL error in a log record and rejects certain such errors, then install it as a filter for the tornado.general logger.
For example:
import logging
from ssl import SSLError, SSLEOFError

def ssl_log_filter(record):
    if record.exc_info is not None:
        e = record.exc_info[1]
    elif len(record.args) >= 3 and isinstance(record.args[2], Exception):
        e = record.args[2]
    else:
        e = None
    if isinstance(e, SSLEOFError):
        return False
    if isinstance(e, SSLError):
        if e.reason in {'NO_SHARED_CIPHER'}:
            return False
    return True

logging.getLogger('tornado.general').addFilter(ssl_log_filter)
The code above will only work for Python 3.2+. For older versions, subclass Filter instead.
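A rough sketch of that older-Python variant, wrapping the same logic in a logging.Filter subclass (SSLLogFilter is just an illustrative name, and it reuses the ssl_log_filter function above):
import logging

class SSLLogFilter(logging.Filter):
    # Before Python 3.2, addFilter() expects an object with a filter()
    # method rather than a bare function, so wrap the same checks here.
    # (SSLEOFError only exists on Python 3.3+, so that particular check
    # would have to be dropped on older versions.)
    def filter(self, record):
        return ssl_log_filter(record)

logging.getLogger('tornado.general').addFilter(SSLLogFilter())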

urllib2.open giving 500 HTTPError exception even when call is successful

I am using urllib2 to access a URL and read the data. The urlopen call is in a try/except block like the one below. I have seen other questions on the site about encountering this 500 error, but I could not find a concrete answer as to why we get this 500 exception even when the call is otherwise successful. Can anyone elaborate on that, or point out ways to reproduce it?
try:
    data = urllib2.urlopen(url).read().split('\n')
except urllib2.HTTPError, e:
    print "Could not get data with url {0} due to error code {1}.".format(url, e.code)
except urllib2.URLError, e:
    print "Could not get data with url {0} due to reason {1}.".format(url, e.reason)
    sys.exit(1)
HTTP Error 500 is a server error (https://en.wikipedia.org/wiki/List_of_HTTP_status_codes). You should investigate the server-side logs.
You're getting a server-side error.
You need to inspect the error (e) to see if there is any feedback on what is causing it. It usually has some of the actual error data from the server in it. Not all servers return error data though; sometimes it's only in the server logs.
If this is running as a daemon, or sporadically, you could write something that logs the contents of e somewhere.
You could also use pdb.set_trace() to set a breakpoint and inspect the object yourself.
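For example, a minimal sketch of pulling the server's response body out of the HTTPError object (the URL here is just a placeholder for whatever you were fetching):
import urllib2

url = 'http://example.com/some/endpoint'  # placeholder for your URL
try:
    data = urllib2.urlopen(url).read().split('\n')
except urllib2.HTTPError, e:
    print "Server returned HTTP {0}".format(e.code)
    print e.read()  # the error response body, if the server sent one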
Also, while this line looks great:
data = urllib2.urlopen(url).read().split('\n')
It's a real pain during debugging and troubleshooting, which happens A LOT when using urllib.
I would suggest splitting it into a few lines like this:
url_obj = urllib2.urlopen(url)
data = url_obj.read()
data = data.split('\n')
If you enter a few breakpoints with pdb (pdb.set_trace()), you'll be able to inspect each variable.
Since you're not using a custom opener, I would also just use the requests library, which builds on urllib3 and makes it less horrible.

Any way to save a traceback object in Python

I was looking to save a traceback object and somehow pickle it to a file that I can access later. An example use case: if I submit some Python code to a farm computer to run and it fails, it would be nice to be able to open a session and access that traceback to debug the problem, rather than just seeing a log of the traceback. I do not know if there is any way to do this, but I thought it would be worth asking, and if it can't be done, why not.
You can use traceback.print_exception(type, value, traceback[, limit[, file]]) and save the output to a text or JSON file; see the traceback docs for the other formatting functions.
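For instance, a minimal sketch of writing the current exception's traceback out to a file with print_exception (the file name and the failing statement are just examples):
import sys
import traceback

try:
    1 / 0  # stand-in for the code that actually fails
except Exception:
    etype, value, tb = sys.exc_info()
    with open('error.txt', 'w') as f:
        traceback.print_exception(etype, value, tb, file=f)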
Depending on how you've written your code, the try statement is probably your best answer. Since any error is just a class that inherits from Python's built-in Exception, you can raise custom errors everywhere you need more information about a thrown error. You just need to name your errors appropriately or pass in a descriptive string as the first argument. If you then wrap your code in a try and use except CustomError as e, you can pull all the information you want out of e in the except block as a regular instance. Example:
Your code would be:
class Error1(Exception): pass
class Error2(Exception): pass

def script():
    try: codeblock        # placeholder for your first block of code
    except Exception as e: raise Error1('You hit %s error in the first block' % e)
    try: codeblock_2      # placeholder for your second block of code
    except Exception as e: raise Error2('You hit %s error in the second block' % e)

try:
    script()
except Exception as e:
    with open(r'path\to\file.txt', 'w') as outFile:
        outFile.write(str(e))
The last part is really nothing more than creating your own log file, but you have to write it down somewhere, right?
As for using the traceback module mentioned above, you can get error information out of that. Any of the functions here can get you a list of traceback entries:
http://docs.python.org/2/library/traceback.html
On the other hand, if you're trying to avoid looking at log files, the traceback module is only going to give you the same thing a log file would, in a different format. Adding your own error statements to your code gives you more information about what actually happened than a cryptic ValueError does. If you print the traceback alongside your special error, it might give you still more information on your issue.
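If you do want to persist something from the traceback module, here is a rough sketch: extract_tb gives you plain tuples, which pickle cleanly even though the traceback object itself does not (the file name and failing statement are just examples).
import pickle
import sys
import traceback

try:
    1 / 0  # stand-in for the failing farm job
except Exception:
    # extract_tb returns a list of (filename, lineno, funcname, text)
    # tuples, which can be pickled, unlike the raw traceback object.
    entries = traceback.extract_tb(sys.exc_info()[2])
    with open('traceback.pkl', 'wb') as f:
        pickle.dump(entries, f)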

How to make asynchronous HTTP GET requests in Python and pass response object to a function

Update: the problem was incomplete documentation; the event dispatcher passes kwargs to the hook function.
I have a list of about 30k URLs that I want to check for various strings. I have a working version of this script using Requests & BeautifulSoup, but it doesn't use threading or asynchronous requests so it's incredibly slow.
Ultimately what I would like to do is cache the html for each URL so I can run multiple checks without making redundant HTTP requests to each site. If I have a function that will store the html, what's the best way to asynchronously send the HTTP GET requests and then pass the response objects?
I've been trying to use Grequests (as described here) and the "hooks" parameter, but I'm getting errors and the documentation doesn't go very in-depth. So I'm hoping someone with more experience can shed some light.
Here's a simplified example of what I'm trying to accomplish:
import grequests

urls = ['http://www.google.com/finance', 'http://finance.yahoo.com/', 'http://www.bloomberg.com/']

def print_url(r):
    print r.url

def async(url_list):
    sites = []
    for u in url_list:
        rs = grequests.get(u, hooks=dict(response=print_url))
        sites.append(rs)
    return grequests.map(sites)

print async(urls)
And it produces the following TypeError:
TypeError: print_url() got an unexpected keyword argument 'verify'
<Greenlet at 0x32803d8L: <bound method AsyncRequest.send of <grequests.AsyncRequest object at 0x00000000028D2160>>
(stream=False)> failed with TypeError
Not sure why it's sending 'verify' as a keyword argument by default; it would be great to get something working though, so if anyone has any suggestions (using grequests or otherwise) please share :)
Thanks in advance.
I tried your code and could get it to work by adding an additional **kwargs parameter to your print_url function.
def print_url(r, **kwargs):
    print r.url
I figured out what was wrong from this other Stack Overflow question: Problems with hooks using Requests Python package.
It seems that when you use the response hook in grequests you need to accept **kwargs in your callback definition.
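Putting it together, the original example with only the hook signature changed (same URLs as above) should look roughly like this:
import grequests

urls = ['http://www.google.com/finance', 'http://finance.yahoo.com/', 'http://www.bloomberg.com/']

def print_url(r, **kwargs):
    # requests passes extra keyword arguments (verify, stream, timeout, ...)
    # to response hooks, so the callback has to accept them.
    print r.url

def async(url_list):
    sites = [grequests.get(u, hooks=dict(response=print_url)) for u in url_list]
    return grequests.map(sites)

print async(urls)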

Is there any way to simulate some GAE server error?

Are there ways to test my error_handlers setup in the app.yaml file, especially the error code over_quota?
Testing error_handlers
dev_appserver.py is the application that parses your app.yaml and serves these error files. This means that your best bet is probably a straight-up acceptance test where you bring up dev_appserver.py and hit it at localhost:8080 with GETs and PUTs that would trigger the various errors you're expecting.
So, if /foo returns a 404, you could do the following with Python requests:
>>> def test_foo():
...     response = requests.get('http://localhost:8080/foo')
...     assert response.status_code == 404
Testing Over Quota Error
In this specific case it sounds like you're trying to explicitly raise the over_quota error. This link mentions that the exception you're looking for is apiproxy_errors.OverQuotaError.
I'm not sure what your test code is, but have you tried explicitly raising this error, with straight up raise?
I was able to run the following code after bootstrapping my apiproxy_stub_map, setting up my path, etc.:
from google.appengine.runtime import apiproxy_errors

def test_foo():
    raise apiproxy_errors.OverQuotaError
