I need to send a PUT request to a web service and extract some data from the error response's headers, which is where the service returns the expected result of the request. The code goes like this:
Request = urllib2.Request(destination_url, headers=headers)
Request.get_method = lambda: 'PUT'
try:
    Response = urllib2.urlopen(Request)
except urllib2.HTTPError, e:
    print 'Error code: ', e.code
    print e.read()
I get error 308, but the response is empty and I'm not getting any data out of HTTPError. Is there a way to read the HTTP headers when an HTTP error is raised?
e has (largely undocumented) headers and hdrs attributes that contain the HTTP headers sent by the server.
By the way, 308 was not a registered HTTP status code when this was written; it has since been standardized as 308 Permanent Redirect (RFC 7538).
Related
We are trying to upload screenshots to a server from a Mac machine running Big Sur (11.4, kernel 20.5) using Python. The response always shows 400 Bad Request, but the same request works fine from Postman. Any help is appreciated.
token = 'Bearer {}'.format(auth_token)
url = "{}/screenshot".format(base_url)
payload = {'date': date}
try:
    files = {'imagefile': ('imagefile', open(
        image_path, 'rb'), 'image/jpeg')}
    log.debug("file : {0}".format(files))
except Exception as e:
    log.error("file ERROR: {0}".format(e))
headers = {'Authorization': token}
try:
    response = requests.post(
        url, headers=headers, data=payload, files=files, timeout=30)
except Exception as e:
    log.error("Response ERROR: {0}".format(e))
The HyperText Transfer Protocol (HTTP) 400 Bad Request response status code indicates that the server cannot or will not process the request due to something that is perceived to be a client error (e.g., malformed request syntax, invalid request message framing, or deceptive request routing).
This means your headers or request body are malformed. Compare the exact request your script sends with the one Postman sends; a common culprit is setting a Content-Type header by hand, which breaks the multipart boundary that requests generates for you.
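One way to track down a mismatch like this is to prepare the request without sending it, then compare its headers and body against what Postman sends. A sketch (the URL, token, and file bytes below are placeholders, not the poster's real values):

```python
import requests

# Build the same POST without sending it, so the exact wire-level
# headers and multipart body can be inspected and compared to Postman.
req = requests.Request(
    'POST', 'https://example.com/screenshot',
    headers={'Authorization': 'Bearer TOKEN'},
    data={'date': '2021-06-01'},
    files={'imagefile': ('imagefile', b'fake-jpeg-bytes', 'image/jpeg')},
)
prep = req.prepare()

print(prep.headers['Content-Type'])  # multipart/form-data; boundary=...
print(prep.body[:200])               # form fields and file part
```

If the Content-Type, boundary, or field names differ from Postman's request, that difference is usually the cause of the 400.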
When I use the requests library with Django and get a 500 error back, response.json() gives me this error:
response = requests.post(....)
print("-----------------------block found!!!-----------------------")
print(response.status_code)
print(response.json())
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Is there a way to represent a django 500 response with the requests library in a readable manner?
Assuming you get an HTTP 500 response, the body is often empty or not valid JSON, which is what makes the .json() call fail.
Why not wrap the call in an exception handler, like below:
from requests.exceptions import HTTPError

try:
    response = requests.post(....)
    response.raise_for_status()  # raises HTTPError for 4xx/5xx responses
    print("-----------------------block found!!!-----------------------")
    print(response.status_code)
    print(response.json())
except HTTPError as e:
    print('Error has occurred: ', e.response.status_code)
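If you would rather not rely on exceptions at all, a small stdlib-only helper can guard the parse (the function name and fallback dict shape here are my own invention, not part of requests):

```python
import json

def safe_json(status_code, text):
    # Parse the body as JSON only when it plausibly is JSON;
    # otherwise return the status and raw body for logging.
    if text.strip():
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            pass
    return {'error': status_code, 'body': text}

print(safe_json(200, '{"ok": true}'))
print(safe_json(500, '<h1>Server Error</h1>'))
```

Called as safe_json(response.status_code, response.text), it gives you something readable even for a Django 500 page.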
I can't connect to the page. Here is my code and the error I get:
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
import urllib

someurl = "https://www.genecards.org/cgi-bin/carddisp.pl?gene=MET"
req = Request(someurl)
try:
    response = urllib.request.urlopen(req)
except HTTPError as e:
    print('The server couldn\'t fulfill the request.')
    print('Error code: ', e.code)
except URLError as e:
    print('We failed to reach a server.')
    print('Reason: ', e.reason)
else:
    print("Everything is fine")
Error code: 403
Some websites require a browser-like "User-Agent" header; others require specific cookies. In this case, I found out by trial and error that both are required. What you need to do is:
1. Send an initial request with a browser-like user agent. This will fail with 403, but you will also obtain a valid cookie in the response.
2. Send a second request with the same user agent and the cookie that you got before.
In code:
import urllib.request
from urllib.error import URLError

# This handler will store and send cookies for us.
handler = urllib.request.HTTPCookieProcessor()
opener = urllib.request.build_opener(handler)

# Browser-like user agent to make the website happy.
headers = {'User-Agent': 'Mozilla/5.0'}
url = 'https://www.genecards.org/cgi-bin/carddisp.pl?gene=MET'
request = urllib.request.Request(url, headers=headers)

for i in range(2):
    try:
        response = opener.open(request)
    except URLError as exc:
        print(exc)
    else:
        print(response)

# Output:
# HTTP Error 403: Forbidden (expected, first request always fails)
# <http.client.HTTPResponse object at 0x...> (correct 200 response)
Or, if you prefer, using requests:
import requests

session = requests.Session()
jar = requests.cookies.RequestsCookieJar()
headers = {'User-Agent': 'Mozilla/5.0'}
url = 'https://www.genecards.org/cgi-bin/carddisp.pl?gene=MET'

for i in range(2):
    response = session.get(url, cookies=jar, headers=headers)
    print(response)

# Output:
# <Response [403]>
# <Response [200]>
You can use http.client. First, open a connection with the server; then make a GET request. Note that this site is served over HTTPS, and http.client does not raise urllib's HTTPError/URLError; you inspect the response's status code instead:

import http.client

conn = http.client.HTTPSConnection("www.genecards.org")
conn.request("GET", "/cgi-bin/carddisp.pl?gene=MET")
response = conn.getresponse()
if response.status == 200:
    body = response.read().decode("UTF-8")
    print("Everything is fine")
else:
    print('The server couldn\'t fulfill the request.')
    print('Error code: ', response.status)
I am calling an API with urllib. When something is not as expected, the API throws an error at the user (e.g. HTTP Error 415: Unsupported Media Type). But alongside that, the API returns a JSON body with more information. I would like to read that JSON in the exception handler and parse it there, so I can give the user information about the error.
Is that possible? And if so, how is it done?
Extra info:
Error: HTTPError
--EDIT--
On request, here is some code (I want to read resp in the exception):
def _sendpost(url, data=None, filetype=None):
    try:
        global _auth
        req = urllib.request.Request(url, data)
        req.add_header('User-Agent', _useragent)
        req.add_header('Authorization', 'Bearer ' + _auth['access_token'])
        if filetype is not None:
            req.add_header('Content-Type', filetype)
        resp = urllib.request.urlopen(req, data)
        data = json.loads(resp.read().decode('utf-8'), object_pairs_hook=OrderedDict)
    except urllib.error.HTTPError as e:
        print(e)
    return data
--EDIT 2--
I do not want to use extra libraries/modules, as I do not control the target machines.
Code
import urllib.request
import urllib.error
try:
    response = urllib.request.urlopen('https://api.gutefrage.net')
except urllib.error.HTTPError as e:
    error_message = e.read()
    print(error_message)
Output
b'{"error":{"message":"X-Api-Key header is missing or invalid","type":"API_REQUEST_FORBIDDEN"}}'
Not asked, but with the json module you could convert it to a dict via:
import json
json.loads(error_message.decode("utf-8"))
This gives you a dict out of the byte string.
If you're stuck with using urllib, then you can use the error to read the text of the response, and load that into JSON.
from urllib import request, error
from collections import OrderedDict
import json

try:
    req = request.Request(url, data)
    req.add_header('User-Agent', _useragent)
    req.add_header('Authorization', 'Bearer ' + _auth['access_token'])
    if filetype is not None:
        req.add_header('Content-Type', filetype)
    resp = request.urlopen(req, data)
    data = json.loads(resp.read().decode('utf-8'), object_pairs_hook=OrderedDict)
except error.HTTPError as e:
    json_response = json.loads(e.read().decode('utf-8'))
If you're not stuck to urllib, I would highly recommend you use the requests module instead of urllib. With that, you can have something like this instead:
response = requests.get("http://www.example.com/api/action")
if response.status_code == 415:
    response_json = response.json()
requests doesn't throw an exception when it encounters a non-2xx response code; instead it returns the response anyway, with the status code set on it.
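If you do want an exception for error statuses, requests lets you opt in with raise_for_status(). A sketch (the Response object is built by hand here only to avoid a network call; normally it comes back from requests.get/post):

```python
import requests

# Normally this object is returned by requests.get/post;
# it is constructed manually here purely for illustration.
resp = requests.Response()
resp.status_code = 415

try:
    resp.raise_for_status()  # raises HTTPError for 4xx/5xx codes
except requests.exceptions.HTTPError as e:
    print('Error has occurred:', e.response.status_code)
```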
You can also add headers and parameters to these requests:
headers = {
    'User-Agent': _useragent,
    'Authorization': 'Bearer ' + _auth['access_token']
}
response = requests.get("http://www.example.com/api/action", headers=headers)
I receive an 'HTTP Error 500: Internal Server Error' response, but I still want to read the data inside the error HTML.
With Python 2.6, I normally fetch a page using:
import urllib2
url = "http://google.com"
data = urllib2.urlopen(url)
data = data.read()
When attempting to use this on the failing URL, I get the exception urllib2.HTTPError:
urllib2.HTTPError: HTTP Error 500: Internal Server Error
How can I fetch such error pages (with or without urllib2), all while they are returning Internal Server Errors?
Note that with Python 3, the corresponding exception is urllib.error.HTTPError.
The HTTPError is a file-like object. You can catch it and then read its contents.
try:
    resp = urllib2.urlopen(url)
    contents = resp.read()
except urllib2.HTTPError, error:
    contents = error.read()
If you mean you want to read the body of the 500:
request = urllib2.Request(url, data, headers)
try:
    resp = urllib2.urlopen(request)
    print resp.read()
except urllib2.HTTPError, error:
    print "ERROR: ", error.read()
In your case, you don't need to build up the request. Just do
try:
    resp = urllib2.urlopen(url)
    print resp.read()
except urllib2.HTTPError, error:
    print "ERROR: ", error.read()
So you don't override urllib2.HTTPError; you just handle the exception.
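In Python 3 the same pattern can be wrapped in a small helper (the function name fetch is my own) that returns the status and body whether or not the server answered with an error status:

```python
import urllib.error
import urllib.request

def fetch(url):
    # Return (status, body) for both success and HTTP error responses:
    # urlopen raises HTTPError for 4xx/5xx, but the error object still
    # carries the status code and a readable body.
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status, resp.read()
    except urllib.error.HTTPError as e:
        return e.code, e.read()
```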
alist = ['http://someurl.com']

def testUrl():
    errList = []
    for URL in alist:
        try:
            urllib2.urlopen(URL)
        except urllib2.URLError, err:
            errList.append(URL + " " + str(err.reason))
            return URL + " " + str(err.reason)
    return "".join(errList)

testUrl()