Is there a way to receive the status code of a request if the request throws an exception? I am trying to send a request, but there is a ConnectTimeout. I've tried printing the error, but there's no useful status info in the error message. The actual object returned by requests.request() doesn't seem to get created when the exception is thrown, at least in my case. Could this be due to internal server configurations or something?
What I have attempted:
try:
    response = requests.request(
        method=some_http_method,
        url=some_url,
        auth=some_auth,
        data=some_data,
        timeout=30,
        verify=some_certificate)
except requests.exceptions.ConnectTimeout as e:
    response = e.response
    print(e)
    print(response.status_code)  # should print a status code; instead response is still None
The exception ConnectTimeout means that the request timed out while trying to connect to the remote server, so there is no response to get a status code from; it is normal that the response variable is None.
It could be a network issue: make sure the URL you're trying to query is reachable (use curl to verify).
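For illustration, a minimal sketch (with an illustrative URL, not the one from the question): e.response is only populated for errors raised after a response was actually received, such as the HTTPError from raise_for_status(); for a connect timeout there is simply nothing to inspect.
import requests

try:
    response = requests.get("https://example.com", timeout=5)
    response.raise_for_status()  # turns 4xx/5xx into an HTTPError that carries the response
except requests.exceptions.ConnectTimeout:
    # the connection was never established, so there is no response object
    print("Connect timeout: no status code available")
except requests.exceptions.HTTPError as err:
    # here a response exists, so its status code can be read
    print("Server answered with status", err.response.status_code)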
Related
I am getting the above error despite setting the timeout to None for the httpx call. I am not sure what I am doing wrong.
from httpx import stream
with stream("GET", url, params=url_parameters, headers=headers, timeout=None) as streamed_response:
I got the same error when using httpx via an openapi-client autogenerated client:
response = httpx.request(verify=client.verify_ssl, **kwargs)
It turned out that the headers passed in kwargs contained a wrong authorization token.
So in my case that error effectively meant "authorization error".
You might want to check whether what you're sending in your parameters is correct, too.
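A small debugging sketch along those lines (url, url_parameters and headers are the placeholders from the question above): make the same call without streaming first and let raise_for_status() surface the real HTTP status instead of guessing.
import httpx

response = httpx.get(url, params=url_parameters, headers=headers, timeout=30)
try:
    response.raise_for_status()
except httpx.HTTPStatusError as exc:
    # a 401/403 here points to a bad authorization token rather than a network problem
    print("Server answered with", exc.response.status_code)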
It returns 200, 301 and other responses as expected. But when I try to get the response of a non-existent website, instead of returning a code it throws an exception. Below is the code I used to get the response code for "www.googl.com"; I'm expecting a response code for this scenario. I can handle it with try/except, but I actually need a response code.
Code:
import requests
print (requests.head("https://www.googl.com"))
Since nothing is being returned, there is no response code, because there was no response.
Your best bet is just doing this:
import requests

try:
    response = requests.head("https://www.googl.com")
except requests.exceptions.RequestException:
    response = 404  # or whatever you want
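A slightly more explicit variant (just a sketch of the same idea): separate "no response at all" from a response that really carries a status code, so the fallback value is only used when requests could not reach the host.
import requests

try:
    response = requests.head("https://www.googl.com", timeout=10)
    status = response.status_code
except requests.exceptions.RequestException:
    status = None  # DNS failure, refused connection, timeout: no real code exists

print(status)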
In this line of code:
request = requests.get(url, verify=False, headers=headers, proxies=proxy, timeout=15)
How do I know that timeout=15 has been triggered so I can send a message that url did not send any data in 15 seconds?
If a response is not received from the server at all within the given time, a requests.exceptions.Timeout exception is raised, as per Exa's link from the other answer.
To test if this occurred we can use a try, except block to detect it and act accordingly, rather than just letting our program crash.
Expanding on the demonstration used in the docs:
import requests

try:
    r = requests.get('https://github.com/', timeout=0.001)
except requests.exceptions.Timeout as e:
    # code to run if we didn't get a reply
    print("Request timed out!\nDetails:", e)
else:
    # code to run if we did get a response, and only if we did
    print(r.headers)
Just substitute your url and timeout where appropriate.
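If the two timeout situations need to be told apart, requests also exposes more specific subclasses of Timeout; a sketch reusing the placeholders from the question (url, headers, proxy):
import requests

try:
    r = requests.get(url, verify=False, headers=headers, proxies=proxy, timeout=15)
except requests.exceptions.ConnectTimeout:
    print("Could not even connect to", url, "within 15 seconds")
except requests.exceptions.ReadTimeout:
    print(url, "accepted the connection but did not send any data in 15 seconds")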
An exception will be thrown. See this for more info.
I'm using Python 3.7 with urllib.
All works fine, but it doesn't seem to automatically follow the redirect when it gets an HTTP redirect response (307).
This is the error I get:
ERROR 2020-06-15 10:25:06,968 HTTP Error 307: Temporary Redirect
I have to handle it with a try/except and manually send another request to the new Location: it works fine, but I don't like it.
This is the piece of code I use to perform the request:
req = urllib.request.Request(url)
req.add_header('Authorization', auth)
req.add_header('Content-Type','application/json; charset=utf-8')
req.data=jdati
self.logger.debug(req.headers)
self.logger.info(req.data)
resp = urllib.request.urlopen(req)
url is an https resource and I set a header with some Authorization info and the content type.
req.data is a JSON payload.
From the urllib documentation I understood that redirects are automatically performed by the library itself, but it doesn't work for me. It always raises an HTTP 307 error and doesn't follow the redirect URL.
I've also tried to use an opener specifying the default redirect handler, but with the same result:
opener = urllib.request.build_opener(urllib.request.HTTPRedirectHandler)
req = urllib.request.Request(url)
req.add_header('Authorization', auth)
req.add_header('Content-Type','application/json; charset=utf-8')
req.data=jdati
resp = opener.open(req)
What could be the problem?
The reason why the redirect isn't done automatically has been correctly identified by yours truly in the discussion in the comments section. Specifically, RFC 2616, Section 10.3.8 states that:
If the 307 status code is received in response to a request other
than GET or HEAD, the user agent MUST NOT automatically redirect the
request unless it can be confirmed by the user, since this might
change the conditions under which the request was issued.
Back to the question - given that data has been assigned, this automatically results in get_method returning POST (as per how this method was implemented), and since the request method is POST and the response code is 307, an HTTPError is raised instead, as per the above specification. In the context of Python's urllib, this specific section of the urllib.request module raises the exception.
For an experiment, try the following code:
import urllib.request
import urllib.parse

url = 'http://httpbin.org/status/307'
req = urllib.request.Request(url)
req.data = b'hello'  # comment out to not trigger manual redirect handling

try:
    resp = urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    if e.code != 307:
        raise  # not a status code that can be handled here
    redirected_url = urllib.parse.urljoin(url, e.headers['Location'])
    resp = urllib.request.urlopen(redirected_url)
    print('Redirected -> %s' % redirected_url)  # the original redirected url

print('Response URL -> %s' % resp.url)  # the final url
Running the code as is may produce the following output:
Redirected -> http://httpbin.org/redirect/1
Response URL -> http://httpbin.org/get
Note that the subsequent redirect to get was done automatically, as the subsequent request was a GET request. Commenting out the req.data assignment line will result in the lack of the "Redirected" output line.
Other things to note in the exception handling block: e.read() may be called to retrieve the response body produced by the server as part of the HTTP 307 response (since data was posted, there might be a short entity in the response that can be processed), and urljoin is needed because the Location header may be a relative URL (or simply have the host missing) for the subsequent resource.
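If you really want the body to be re-sent automatically, one option (a sketch, not the default behaviour of urllib, and only sensible if the redirect target is trusted) is to subclass HTTPRedirectHandler so that 307 responses are retried with the original method and data:
import urllib.request

class Follow307Handler(urllib.request.HTTPRedirectHandler):
    # allow automatic redirects for 307 even when the request carries a body
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        if code == 307:
            return urllib.request.Request(
                newurl,
                data=req.data,
                headers=req.headers,
                method=req.get_method(),
            )
        return super().redirect_request(req, fp, code, msg, headers, newurl)

opener = urllib.request.build_opener(Follow307Handler)
resp = opener.open(req)  # req built exactly as in the question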
Also, as a matter of interest (and for linkage purposes), this specific question has been asked multiple times before and I am rather surprised that they never got any answers; here they are:
How to handle 307 redirection using urllib2 from http to https
HTTP Error 307: Temporary Redirect in Python3 - INTRANET
HTTP Error 307 - Temporary redirect in python script
Using the HTTPretty library for Python, I can create mock HTTP responses of choice and then pick them up i.e. with the requests library like so:
import httpretty
import requests

# set up a mock
httpretty.enable()
httpretty.register_uri(
    method=httpretty.GET,
    uri='http://www.fakeurl.com',
    status=200,
    body='My Response Body'
)
response = requests.get('http://www.fakeurl.com')

# clean up
httpretty.disable()
httpretty.reset()

print(response)
Out: <Response [200]>
Is there also a way to register a URI which cannot be reached (e.g. connection timed out, connection refused, ...) such that no response is received at all (which is not the same as an established connection that returns an HTTP error code like 404)?
I want to use this behaviour in unit testing to ensure that my error handling works as expected (which does different things in the case of 'no connection established' and 'connection established, bad HTTP status code'). As a workaround, I could try to connect to an invalid server like http://192.0.2.0, which would time out in any case. However, I would prefer to do all my unit testing without using any real network connections.
Meanwhile I got it: using an HTTPretty callback body seems to produce the desired behaviour. See the inline comments below.
This is actually not exactly what I was looking for (it is not a server that cannot be reached, so that the request times out, but a server that throws a timeout exception once it is reached); however, the effect is the same for my use case.
Still, if anybody knows a different solution, I'm looking forward to it.
import httpretty
import requests

# enable HTTPretty
httpretty.enable()

# create a callback body that raises an exception when opened
def exceptionCallback(request, uri, headers):
    # raise your favourite exception here, e.g. requests.ConnectionError or requests.Timeout
    raise requests.Timeout('Connection timed out.')

# set up a mock and use the callback function as the response's body
httpretty.register_uri(
    method=httpretty.GET,
    uri='http://www.fakeurl.com',
    status=200,
    body=exceptionCallback
)

# try to get a response from the mock server and catch the exception
try:
    response = requests.get('http://www.fakeurl.com')
except requests.Timeout as e:
    print('requests.Timeout exception got caught...')
    print(e)
    # do whatever...

# clean up
httpretty.disable()
httpretty.reset()