Using the HTTPretty library for Python, I can create mock HTTP responses of my choice and then pick them up, e.g. with the requests library, like so:
import httpretty
import requests

# set up a mock
httpretty.enable()
httpretty.register_uri(
    method=httpretty.GET,
    uri='http://www.fakeurl.com',
    status=200,
    body='My Response Body'
)
response = requests.get('http://www.fakeurl.com')

# clean up
httpretty.disable()
httpretty.reset()

print(response)
Out: <Response [200]>
Is there also a way to register a URI which cannot be reached (e.g. connection timed out, connection refused, ...) such that no response is received at all? (This is not the same as an established connection that returns an HTTP error code like 404.)
I want to use this behaviour in unit testing to ensure that my error handling works as expected (it does different things in the case of 'no connection established' and 'connection established, bad HTTP status code'). As a workaround, I could try to connect to an invalid server like http://192.0.2.0, which would time out in any case. However, I would prefer to do all my unit testing without using any real network connections.
Meanwhile I got it: using an HTTPretty callback body produces the desired behaviour. See inline comments below.
This is not exactly what I was looking for (it is not a server that cannot be reached, so that the request times out, but a server that raises a timeout exception once it is reached); however, the effect is the same for my use case.
Still, if anybody knows a different solution, I'm looking forward to it.
import httpretty
import requests

# enable HTTPretty
httpretty.enable()

# create a callback body that raises an exception when opened
def exceptionCallback(request, uri, headers):
    # raise your favourite exception here, e.g. requests.ConnectionError or requests.Timeout
    raise requests.Timeout('Connection timed out.')

# set up a mock and use the callback function as the response's body
httpretty.register_uri(
    method=httpretty.GET,
    uri='http://www.fakeurl.com',
    status=200,
    body=exceptionCallback
)

# try to get a response from the mock server and catch the exception
try:
    response = requests.get('http://www.fakeurl.com')
except requests.Timeout as e:
    print('requests.Timeout exception got caught...')
    print(e)
    # do whatever...

# clean up
httpretty.disable()
httpretty.reset()
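For the record, a different solution that also stays entirely off the network is to patch requests itself with unittest.mock. This is only a minimal sketch of that idea (not HTTPretty-based); it patches requests.get so the call raises a connection-level exception without ever opening a socket:

from unittest import mock
import requests

# patch requests.get so any call raises ConnectionError; no socket is opened
with mock.patch('requests.get', side_effect=requests.ConnectionError('No route to host.')):
    try:
        requests.get('http://www.fakeurl.com')
    except requests.ConnectionError as e:
        print('requests.ConnectionError exception got caught...')
        print(e)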
I have the following session-dependent code which must be run continuously.
Code
import requests

http = requests.Session()
while True:
    # if http is not good, then run http = requests.Session() again
    response = http.get(....)
    # process response
    # wait for 5 seconds
Note: I moved the line http = requests.Session() out of the loop.
Issue
How to check if the session is working?
An example of a non-working session may occur after the web server is restarted, or when a load balancer redirects to a different web server.
The requests.Session object is just a persistence and connection-pooling object that allows shared state between different HTTP requests on the client side.
If the server unexpectedly closes a session, so that it becomes invalid, the server would probably respond with some error-indicating HTTP status code.
Thus requests would raise an error. See Errors and Exceptions:
All exceptions that Requests explicitly raises inherit from requests.exceptions.RequestException.
See the extended classes of RequestException.
Approach 1: implement open/close using try/except
Your code can catch such exceptions within a try/except block.
It depends on the server's API interface specification how it signals an invalidated/closed session; this signal response should be evaluated in the except block.
Here we use a session_was_closed(exception) function to evaluate the exception/response and Session.close() to close the session correctly before opening a new one.
import requests

# initially open a session object
s = requests.Session()

# execute requests continuously
while True:
    try:
        response = s.get(....)
        # process response
    except requests.exceptions.RequestException as e:
        if session_was_closed(e):
            s.close()  # close the session
            s = requests.Session()  # open a new session
        else:
            pass  # process non-session-related errors
    # wait for 5 seconds
Depending on the server response of your case, implement the method session_was_closed(exception).
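For illustration, here is a minimal sketch of such a function. The checks are assumptions, not something requests or your server guarantees; adjust them to your server's API spec:

import requests

def session_was_closed(exception):
    # hypothetical check: the connection was dropped, e.g. after a server
    # restart or a load balancer switching backends
    if isinstance(exception, requests.exceptions.ConnectionError):
        return True
    # hypothetical check: the server signals an invalidated session with a
    # status code such as 401
    response = getattr(exception, 'response', None)
    if response is not None and response.status_code == 401:
        return True
    return False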
Approach 2: automatically open/close using with
From Advanced Usage, Session Objects:
Sessions can also be used as context managers:
with requests.Session() as s:
    s.get('https://httpbin.org/cookies/set/sessioncookie/123456789')
This will make sure the session is closed as soon as the with block is exited, even if unhandled exceptions occurred.
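Applied to the continuous loop from the question, that could look like the following sketch (the URL is a placeholder); it trades connection reuse for a guaranteed clean close on every iteration:

import time
import requests

while True:
    with requests.Session() as s:
        try:
            response = s.get('https://example.com/api')  # placeholder URL
            # process response
        except requests.exceptions.RequestException:
            pass  # log and carry on; the session is closed either way
    # wait for 5 seconds
    time.sleep(5)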
I would flip the logic and add a try-except.
import requests

http = requests.Session()
while True:
    try:
        response = http.get(....)
    except requests.ConnectionError:
        http = requests.Session()
        continue
    # process response
    # wait for 5 seconds
See this answer for more info. I didn't test whether the raised exception is that one, so please test it.
I used the following code snippet to unshorten URLs using the requests library. The snippet runs correctly for URL redirects to hostnames that are valid, running webpages. But this code, and every other variant of the URL-unshortening snippets, seems to fail when the final URL is an invalid website. I would still like to get the final web page URL, regardless of it being an invalid one.
The snippet is:
def unshorten_url(url):
    return requests.head(url, allow_redirects=True).url

print unshorten_url(<shortened URL>)
The shortened URL should redirect to this webpage, which has an invalid host:
http://trekingear.com/product/4-get-a-real-rocky-mountain-high/?utm_source=Content&utm_medium=Postings&utm_campaign=Guffey%20X%20Mass
But it gives me this error:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='trekingear.com', port=80): Max retries exceeded with url: /product/4-get-a-real-rocky-mountain-high/?utm_source=Content&utm_medium=Postings&utm_campaign=Guffey%20X%20Mass (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10556dc50>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
Here is the URL I am trying to unshorten:
How can I extract the final URL of this invalid host from this redirection chain?
You should not use requests.head like that, since by default it follows a 302 redirect up to three times.
You could disable redirection (with retries=False) and use urllib3's urlopen directly. Then the returned response always holds the 302 response itself, and you can read the redirect target from it:
urlopen(method, url, body=None, headers=None, retries=None,
redirect=True, assert_same_host=True, timeout=<object object>,
pool_timeout=None, release_conn=None, chunked=False, body_pos=None,
**response_kw)
Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you’ll need to specify all the raw details.
Parameters:
method – HTTP request method (such as GET, POST, PUT, etc.)
body – Data to send in the request body (useful for creating POST requests, see HTTPConnectionPool.post_url for more convenience).
headers – Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers.
retries (Retry, False, or an int.) –
Configure the number of retries to allow before raising a MaxRetryError exception.
Pass None to retry until you receive a response. Pass a Retry object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry.
And this is the relevant note:
If False, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned.
Example
(I actually ran a different test on my local web server, but couldn't find a public one supplying wrong 302 responses.)
from urllib3 import PoolManager

manager = PoolManager(10)
req = manager.urlopen("GET", "http://en.wikipedia.org/wiki/Claude_E._Shannon", retries=False)
print(req.get_redirect_location())
The above requests an HTTP page from Wikipedia, thus generating the redirect to HTTPS:
https://en.wikipedia.org/wiki/Claude_E._Shannon
Redirects plus no retries
Your case is a bit different: you do want redirects, since the original URL will not yield the real destination on the first try, but you also want to capture the failed redirect.
The problem here is that redirects are handled by the same code as error retries, so you can't disable only the latter; it's both or neither.
You then have to enable both and do it the long way, intercepting the error. You might need to increase retries, which will slow things down when errors occur.
from urllib3.exceptions import MaxRetryError

try:
    # Did not know you can't post a URL shortener in a SO answer. Live and learn.
    req = manager.urlopen("GET", "http(COLON)(SLASH)(SLASH)t(DOT)co(SLASH)eWWk8s8Hzj")
    loc = req.get_redirect_location()
except MaxRetryError as fail:
    # build "loc" from scheme, host and url
    loc = "%s://%s%s" % (fail.pool.scheme, fail.pool.host, fail.url)
print(loc)
Your specific case
Since you're using a urllib3 wrapper, you can just unwrap the exception:
try:
    # This is your existing code
    return requests.head(url, allow_redirects=True).url
except requests.ConnectionError as fail:
    return "%s://%s%s" % (fail.args[0].pool.scheme, fail.args[0].pool.host, fail.args[0].url)
You ought to provide for other possible errors, though.
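As a sketch of that, assuming you also want to catch timeouts and redirect loops (the extra exception handling and the timeout value are illustrative, not part of the original answer):

import requests

def unshorten_url(url):
    try:
        return requests.head(url, allow_redirects=True, timeout=10).url
    except requests.ConnectionError as fail:
        # final host unresolvable: rebuild the URL from the wrapped
        # urllib3 MaxRetryError, as above
        err = fail.args[0]
        return "%s://%s%s" % (err.pool.scheme, err.pool.host, err.url)
    except (requests.Timeout, requests.TooManyRedirects):
        # illustrative fallback: give up and return the input unchanged
        return url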
Is there a way to receive the status code of a request if the request throws an exception? I am trying to send a request, but there is a ConnectTimeout. I've tried printing the error, but there's no useful status info in the error message. The object that requests.request() would return doesn't seem to get created when the exception is thrown, at least in my case. Could this be due to internal server configurations or something?
What I have attempted:
try:
    response = requests.request(
        method=some_http_method,
        url=some_url,
        auth=some_auth,
        data=some_data,
        timeout=30,
        verify=some_certificate)
except requests.exceptions.ConnectTimeout as e:
    response = e.response
    print(e)

print(response.status_code)  # should print a status code; instead response is still None
The exception ConnectTimeout means that the request timed out while trying to connect to the remote server. Thus there is no response to get a status code from; it is normal that the response variable is None.
It could be a network issue; make sure the URL you're trying to query is reachable (use curl to verify).
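To make the distinction concrete, here is a minimal sketch (the URL is a placeholder): an HTTP-level error carries a response with a status code, while a connect timeout never produces a response at all:

import requests

try:
    response = requests.get('https://example.com/api', timeout=30)
    response.raise_for_status()  # turn 4xx/5xx into an HTTPError
    print(response.status_code)
except requests.exceptions.HTTPError as e:
    # the server answered, so a status code exists
    print('HTTP error:', e.response.status_code)
except requests.exceptions.ConnectTimeout:
    # no connection was established: there is no response object
    print('Connect timeout: no status code to read.')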
I am using urllib.request.urlopen() to GET from a web service I'm trying to test.
This returns an HTTPResponse object, which I then read() to get the response body.
But I always see a ResourceWarning about an unclosed socket from socket.py
Here's the relevant function:
import json
from urllib.request import Request, urlopen

def get_from_webservice(url):
    """ GET from the webservice """
    req = Request(url, method="GET", headers=HEADERS)  # HEADERS is defined elsewhere
    with urlopen(req) as rsp:
        body = rsp.read().decode('utf-8')
        return json.loads(body)
Here's the warning as it appears in the program's output:
$ ./test/test_webservices.py
/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/socket.py:359: ResourceWarning: unclosed <socket.socket object, fd=5, family=30, type=1, proto=6>
self._sock = None
.s
----------------------------------------------------------------------
Ran 2 tests in 0.010s
OK (skipped=1)
If there's anything I can do to the HTTPResponse (or the Request?) to make it close its socket cleanly, I would really like to know, because this code is for my unit tests; I don't like ignoring warnings anywhere, but especially not there.
I don't know if this is the answer, but it is part of the way to an answer.
If I add the header "Connection: close" to the response from my web services, the HTTPResponse object seems to clean itself up properly, without a warning.
And in fact, the HTTP Spec (http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html) says:
HTTP/1.1 applications that do not support persistent connections MUST include the "close" connection option in every message.
So the problem was on the server end (i.e. my fault!). In the event that you don't have control over the headers coming from the server, I don't know what you can do.
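One client-side thing you could try, though, is to send the close option yourself, since the header works in either direction. This is only a sketch; I haven't verified that it silences the warning in every case, and urllib.request may already be sending this header on its own:

import json
from urllib.request import Request, urlopen

def get_from_webservice(url):
    """GET from the webservice, asking for a non-persistent connection."""
    # 'Connection: close' asks the peer to close the socket after the response
    req = Request(url, method="GET", headers={"Connection": "close"})
    with urlopen(req) as rsp:
        body = rsp.read().decode('utf-8')
    return json.loads(body)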
I had the same problem with urllib3, and I just added a context manager to close the connection automatically:
import urllib3

def get(addr, headers):
    """ this function will close the connection after a http request """
    with urllib3.PoolManager() as conn:
        res = conn.request('GET', addr, headers=headers)
        if res.status == 200:
            return res.data
        else:
            raise ConnectionError(res.reason)
Note that urllib3 is designed to hold a pool of connections and to keep connections alive for you. This can significantly speed up your application if it needs to make a series of requests, e.g. a few calls to the backend API.
Please read urllib3 documentation re connection pools here: https://urllib3.readthedocs.io/en/1.5/pools.html
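To actually benefit from that pooling, the usual pattern is a single long-lived PoolManager that is reused across calls; here is a sketch (the function mirrors the snippet above, with illustrative names):

import urllib3

# one shared pool for the whole module: connections are kept alive and reused
http = urllib3.PoolManager()

def get(addr, headers=None):
    res = http.request('GET', addr, headers=headers)
    if res.status == 200:
        return res.data
    raise ConnectionError(res.reason)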
P.S. You could also use the requests lib, which is not part of the Python standard library (as of 2019) but is very powerful and simple to use: http://docs.python-requests.org/en/master/
I'm having a little trouble creating a script that works with URLs. I'm using urllib.urlopen() to get the content of the desired URL, but some of these URLs require authentication, and urlopen prompts me to type in my username and then my password.
What I need is to ignore every URL that requires authentication and just skip it and continue; is there a way to do this?
I was wondering about catching the HTTPError exception, but in fact the exception is handled by the urlopen() method, so it's not working.
Thanks for every reply.
You are right about the urllib2.HTTPError exception:
exception urllib2.HTTPError
Though being an exception (a subclass of URLError), an HTTPError can also function as a non-exceptional file-like return value (the same thing that urlopen() returns). This is useful when handling exotic HTTP errors, such as requests for authentication.
code
An HTTP status code as defined in RFC 2616. This numeric value corresponds to a value found in the dictionary of codes as found in BaseHTTPServer.BaseHTTPRequestHandler.responses.
The code attribute of the exception can be used to verify that authentication is required - code 401.
>>> import urllib2
>>> try:
...     conn = urllib2.urlopen('http://www.example.com/admin')
...     # read conn and process data
... except urllib2.HTTPError, x:
...     print 'Ignoring', x.code
...
Ignoring 401
>>>
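Putting that together, here is a sketch of the skip-and-continue loop (Python 2, to match the answer above; the URL list is hypothetical):

import urllib2

urls = ['http://www.example.com/', 'http://www.example.com/admin']
for url in urls:
    try:
        conn = urllib2.urlopen(url)
        # read conn and process data
    except urllib2.HTTPError, x:
        if x.code == 401:
            print 'Ignoring', url
            continue
        raise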