I am calling an API with urllib. When something is not as expected, the API throws an error at the user (e.g. HTTP Error 415: Unsupported Media Type). But alongside that, the API returns JSON with more information. I would like to pass that JSON to the exception and parse it there, so I can give the user information about the error.
Is that possible? And if so, how is it done?
Extra info:
Error: HTTPError
--EDIT--
On request, here is some code (I want to read resp in the exception):
def _sendpost(url, data=None, filetype=None):
    try:
        global _auth
        req = urllib.request.Request(url, data)
        req.add_header('User-Agent', _useragent)
        req.add_header('Authorization', 'Bearer ' + _auth['access_token'])
        if filetype is not None:
            req.add_header('Content-Type', filetype)
        resp = urllib.request.urlopen(req, data)
        data = json.loads(resp.read().decode('utf-8'), object_pairs_hook=OrderedDict)
    except urllib.error.HTTPError as e:
        print(e)
    return data
--EDIT 2--
I do not want to use extra libraries/modules, as I do not control the target machines.
Code
import urllib.request
import urllib.error
try:
    response = urllib.request.urlopen('https://api.gutefrage.net')
except urllib.error.HTTPError as e:
    error_message = e.read()
    print(error_message)
Output
b'{"error":{"message":"X-Api-Key header is missing or invalid","type":"API_REQUEST_FORBIDDEN"}}'
Not asked, but with the json module you can convert it to a dict:
import json
json.loads(error_message.decode("utf-8"))
This gives you a dict from the byte string.
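For example, feeding the byte string from the output above through json.loads:

```python
import json

# The raw byte string returned by e.read() in the example above
error_message = b'{"error":{"message":"X-Api-Key header is missing or invalid","type":"API_REQUEST_FORBIDDEN"}}'

details = json.loads(error_message.decode("utf-8"))
print(details["error"]["type"])  # API_REQUEST_FORBIDDEN
```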
If you're stuck with using urllib, then you can use the error to read the text of the response, and load that into JSON.
from urllib import request, error
import json
try:
    req = request.Request(url, data)
    req.add_header('User-Agent', _useragent)
    req.add_header('Authorization', 'Bearer ' + _auth['access_token'])
    if filetype is not None:
        req.add_header('Content-Type', filetype)
    resp = request.urlopen(req, data)
    data = json.loads(resp.read().decode('utf-8'), object_pairs_hook=OrderedDict)
except error.HTTPError as e:
    json_response = json.loads(e.read().decode('utf-8'))
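To see that e.read() really does expose the response body, here is a small offline sketch with a hand-built HTTPError (the URL and JSON body are made up for illustration):

```python
import io
import json
from urllib.error import HTTPError

# A hand-built HTTPError carrying a JSON body, as a server might return it
body = b'{"error": {"message": "Unsupported Media Type", "code": 415}}'
err = HTTPError(url='http://example.com/api', code=415,
                msg='Unsupported Media Type', hdrs=None, fp=io.BytesIO(body))

# The exception object is also file-like, so it can be read like a response
details = json.loads(err.read().decode('utf-8'))
print(details['error']['message'])  # Unsupported Media Type
```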
If you're not stuck to urllib, I would highly recommend you use the requests module instead of urllib. With that, you can have something like this instead:
response = requests.get("http://www.example.com/api/action")
if response.status_code == 415:
    response_json = response.json()
requests doesn't throw an exception when it encounters a non-2xx series response code; instead it returns the response anyway with the status code added.
You can also add headers and parameters to these requests:
headers = {
    'User-Agent': _useragent,
    'Authorization': 'Bearer ' + _auth['access_token']
}
response = requests.get("http://www.example.com/api/action", headers=headers)
Below is my code, for your review:
import warnings
import contextlib
import json
import requests
from urllib3.exceptions import InsecureRequestWarning
old_merge_environment_settings = requests.Session.merge_environment_settings

@contextlib.contextmanager
def no_ssl_verification():
    opened_adapters = set()

    def merge_environment_settings(self, url, proxies, stream, verify, cert):
        # Verification happens only once per connection so we need to close
        # all the opened adapters once we're done. Otherwise, the effects of
        # verify=False persist beyond the end of this context manager.
        opened_adapters.add(self.get_adapter(url))
        settings = old_merge_environment_settings(self, url, proxies, stream, verify, cert)
        settings['verify'] = False
        return settings

    requests.Session.merge_environment_settings = merge_environment_settings
    try:
        with warnings.catch_warnings():
            warnings.simplefilter('ignore', InsecureRequestWarning)
            yield
    finally:
        requests.Session.merge_environment_settings = old_merge_environment_settings
        for adapter in opened_adapters:
            try:
                adapter.close()
            except:
                pass

with no_ssl_verification():
    # 350014, 166545
    payload = {'key1': '350014', 'key2': '166545'}
    resp = requests.get('https://rhconnect.marcopolo.com.br/api/workers/data_employee/company/1/badge/params', params=payload, verify=False, headers={'Authorization': 'Token +++++private++++', 'content-type': 'application/json'})
    print(resp.status_code)
    j = resp.json()
    print(j)
How can I do a while or a for loop to send a list of personal ID numbers and get a JSON result for each one?
I tried passing some parameters, but it does not work and produces some errors.
I got the following error:
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
If I put:
resp = requests.get('https://rhconnect.marcopolo.com.br/api/workers/data_employee/company/1/badge/350014',
with a single number, it works.
Here is the resulting JSON:
200
[
    {
        "DT_INI_VIG_invalidez": null,
        "DT_fim_VIG_invalidez": null,
        "MODULO": "APOIO",
        "chapa": 350014
    }
]
You have to add the number to the URL manually:
"https://rhconnect.marcopolo.com.br/api/workers/data_employee/company/1/badge/" + str(params)
or
"https://rhconnect.marcopolo.com.br/api/workers/data_employee/company/1/badge/{}".format(params)
or using an f-string in Python 3.6+
f"https://rhconnect.marcopolo.com.br/api/workers/data_employee/company/1/badge/{params}"
Using params=params will not add the number to the URL this way, but as ?key1=350014&key2=166545.
You can see url used by request using
print(resp.request.url)
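As a quick offline sketch of the difference (using urllib.parse.urlencode to mimic what params= does to the URL):

```python
from urllib.parse import urlencode

base = 'https://rhconnect.marcopolo.com.br/api/workers/data_employee/company/1/badge'
payload = {'key1': '350014', 'key2': '166545'}

# What params=payload produces: a query string appended to the URL
with_query = base + '/params?' + urlencode(payload)
print(with_query)  # ...badge/params?key1=350014&key2=166545

# What this API expects: the badge number as part of the path
number = 350014
with_path = '{}/{}'.format(base, number)
print(with_path)  # ...badge/350014
```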
Now you can run in loop
all_results = []
for number in [350014, 166545]:
url = 'https://rhconnect.marcopolo.com.br/api/workers/data_employee/company/1/badge/{}'.format(number)
resp = requests.get(url, verify=False, headers={'Authorization': 'Token +++++private++++', 'content-type': 'application/json'})
#print(resp.request.url)
print(resp.status_code)
print(resp.json())
# keep result on list
all_results.append(resp.json())
BTW: if you get an error, then you should check what you actually received:
print(resp.text)
Maybe you got HTML with an information or warning message.
I am trying to pass a list of dictionaries (as JSON strings) to a PUT request. I am getting this error:
TypeError: POST data should be bytes, an iterable of bytes, or a file object. It cannot be of type str.
Is this the right way to make a PUT request with a list of dictionaries (strings) in Python?
list looks like the following:
list1 = ['{"id" : "","email" : "John#fullcontact.com","fullName": "John Lorang"}', '{"id" : "","email" : "Lola#fullcontact.com","fullName": "Lola Dsilva"}']
myData = json.dumps(list1)
myRestRequestObj = urllib.request.Request(url, myData)
myRestRequestObj.add_header('Content-Type', 'application/json')
myRestRequestObj.add_header('Authorization', 'Basic %s')
myRestRequestObj.get_method = lambda: 'PUT'
try:
    myRestRequestResponse = urllib.request.urlopen(myRestRequestObj)
except urllib.error.URLError as e:
    print(e.reason)
As you said in a comment, you cannot use requests (that's pretty sad to hear!), so I did another snippet using urllib (the short answer: you must .encode('utf-8') the json.dumps output and .decode('utf-8') the response.read() output):
import urllib.request
import urllib.error
import json

url = 'http://httpbin.org/put'
token = 'jwtToken'
list1 = ['{"id" : "","email" : "John#fullcontact.com","fullName": "John Lorang"}', '{"id" : "","email" : "Lola#fullcontact.com","fullName": "Lola Dsilva"}']

# Request needs bytes, so we have to encode it
params = json.dumps(list1).encode('utf-8')

headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Basic {token}'.format(token=token)
}

# Let's try to create our request with data, headers and method
try:
    request = urllib.request.Request(url, data=params, headers=headers, method='PUT')
except urllib.error.URLError as e:
    # Unable to create our request, here is the reason
    print("Unable to create your request: {error}".format(error=str(e)))
else:
    # We did create our request, let's try to use it
    try:
        response = urllib.request.urlopen(request)
    except urllib.error.HTTPError as e:
        # An HTTP error occurred, here is the reason
        print("HTTP Error: {error}".format(error=str(e)))
    except Exception as e:
        # Something else went wrong, here is the reason
        print("An error occurred while trying to put {url}: {error}".format(
            url=url,
            error=str(e)
        ))
    else:
        # We print the result
        # We must decode it because response.read() returns a bytes string
        print(response.read().decode('utf-8'))
I did try to add some comments. I hope this solution helps you!
To learn a better way to write Python, you should read the Style Guide for Python Code (PEP 8).
I will suppose you can use the requests module (pip install requests).
requests is a simple yet powerful HTTP library for Python.
import json
import requests
my_data = json.dumps(list1)

headers = {
    'Authorization': 'Basic {token}'.format(token=your_token)
}

response = requests.put(url, headers=headers, data=my_data)

print("Status code: {status}".format(status=response.status_code))
print("raw response: {raw_response}".format(
    raw_response=response.text
))
print("json response: {json_response}".format(
    json_response=response.json()
))
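One thing to watch out for: my_data is already a JSON string, so it should be sent with data= rather than json=; passing an already-encoded string to json= encodes it a second time. A quick sketch of what double encoding does:

```python
import json

list1 = ['{"id": "", "email": "John#fullcontact.com"}']

once = json.dumps(list1)    # a JSON array of strings
twice = json.dumps(once)    # encoding the already-encoded string again

print(json.loads(once) == list1)   # True: decodes back to the list
print(json.loads(twice) == once)   # True: decodes to a plain string, not a list
```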
Using urllib I am checking a list of URLs, where my machine sits behind a Squid web proxy. Somehow I can't manage the proxy settings correctly in the requests: I get 404 instead of 200 when calling the function in a for loop or via a map function.
However, single requests work fine!
from multiprocessing import Pool
import urllib.error
import urllib.request
proxy_host = "192.168.1.1:3128"
urls = ['https://www.youtube.com/watch?v=XqZsoesa55w',
'https://www.youtube.com/watch?v=GR2o6k8aPlI',
'https://stackoverflow.com/']
single request example (works fine):
req = urllib.request.Request(
    url=urls[0],
    data=None,
    headers={
        'User-Agent': 'Mozilla/5.0'
    })
req.set_proxy(proxy_host, 'http')
conn = urllib.request.urlopen(req)
conn.getcode()  # --> returns 200
This returns the correct HTTP code for a single URL check.
batch request example (returns wrong http status code):
Function:
def check_url(url):
    req = urllib.request.Request(
        url=url,
        data=None,
        headers={
            'User-Agent': 'Mozilla/5.0'
        })
    req.set_proxy(proxy_host, 'http')
    try:
        conn = urllib.request.urlopen(req)
    except urllib.error.HTTPError as e:
        return [str(e), url]
    except urllib.error.URLError as e:
        return [str(e), url]
    except ValueError as e:
        return [str(e), url]
    else:
        if conn:
            return conn.getcode()
        else:
            return 'Unknown Status!'

for url in urls:
    check_url(url)
# returns:
# 404
# 404
# 404

p = Pool(processes=20)
p.map(check_url, urls)
# returns:
# [404, 404, 404]
I can't connect with the page. Here is my code and the error which I get:
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
import urllib
someurl = "https://www.genecards.org/cgi-bin/carddisp.pl?gene=MET"
req = Request(someurl)
try:
    response = urllib.request.urlopen(req)
except HTTPError as e:
    print('The server couldn\'t fulfill the request.')
    print('Error code: ', e.code)
except URLError as e:
    print('We failed to reach a server.')
    print('Reason: ', e.reason)
else:
    print("Everything is fine")
Error code: 403
Some websites require a browser-like "User-Agent" header; others require specific cookies. In this case, I found out by trial and error that both are required. What you need to do is:
Send an initial request with a browser-like user-agent. This will fail with 403, but you will also obtain a valid cookie in the response.
Send a second request with the same user-agent and the cookie that you got before.
In code:
import urllib.request
from urllib.error import URLError
# This handler will store and send cookies for us.
handler = urllib.request.HTTPCookieProcessor()
opener = urllib.request.build_opener(handler)

# Browser-like user agent to make the website happy.
headers = {'User-Agent': 'Mozilla/5.0'}
url = 'https://www.genecards.org/cgi-bin/carddisp.pl?gene=MET'
request = urllib.request.Request(url, headers=headers)

for i in range(2):
    try:
        response = opener.open(request)
    except URLError as exc:
        print(exc)
print(response)
# Output:
# HTTP Error 403: Forbidden (expected, first request always fails)
# <http.client.HTTPResponse object at 0x...> (correct 200 response)
Or, if you prefer, using requests:
import requests
session = requests.Session()
jar = requests.cookies.RequestsCookieJar()
headers = {'User-Agent': 'Mozilla/5.0'}
url = 'https://www.genecards.org/cgi-bin/carddisp.pl?gene=MET'

for i in range(2):
    response = session.get(url, cookies=jar, headers=headers)
    print(response)
# Output:
# <Response [403]>
# <Response [200]>
You can use http.client. First, you need to open a connection with the server, and then make a GET request. Like this:
import http.client

conn = http.client.HTTPSConnection("www.genecards.org")
try:
    conn.request("GET", "/cgi-bin/carddisp.pl?gene=MET")
    response = conn.getresponse().read().decode("UTF-8")
except http.client.HTTPException as e:
    print('The server couldn\'t fulfill the request.')
    print('Error: ', e)
except OSError as e:
    print('We failed to reach a server.')
    print('Reason: ', e)
else:
    print("Everything is fine")
I'm trying to use the Microsoft Cognitive Verify API with python 2.7: https://dev.projectoxford.ai/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
The code is:
import httplib, urllib, base64
headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': 'my key',
}

params = '{\'faceId1\': \'URL.jpg\',\'faceId2\': \'URL.jpg.jpg\'}'

try:
    conn = httplib.HTTPSConnection('api.projectoxford.ai')
    conn.request("POST", "/face/v1.0/verify?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
I also tried the conn.request line like this:
conn.request("POST", "/face/v1.0/verify?%s" % params, "", headers)
The error is:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request</h2>
<hr><p>HTTP Error 400. The request is badly formed.</p>
</BODY></HTML>
I already tried to follow and get the following code samples working:
https://github.com/Microsoft/Cognitive-Emotion-Python/blob/master/Jupyter%20Notebook/Emotion%20Analysis%20Example.ipynb
Using Project Oxford's Emotion API
However I just can't make this one work. I guess there is something wrong with the params or body argument.
Any help is very appreciated.
You can refer to this question.
Obviously you did not understand the code: "{body}" means you should replace it with your request body, which contains your image URL, just like the site says:
So you can use this api this way:
body = {
    "url": "http://example.com/1.jpg"
}
…………
conn = httplib.HTTPSConnection('api.projectoxford.ai')
conn.request("POST", "/face/v1.0/detect?%s" % params, str(body), headers)
Dawid's comment looks like it should fix it (double quoting), try this for python 2.7:
import requests

url = "https://api.projectoxford.ai/face/v1.0/verify"
payload = "{\n \"faceId1\":\"A Face ID\",\n \"faceId2\":\"A Face ID\"\n}"
headers = {
    'ocp-apim-subscription-key': "KEY_HERE",
    'content-type': "application/json"
}

response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
for python 3:
import http.client

conn = http.client.HTTPSConnection("api.projectoxford.ai")
payload = "{\n\"faceId1\": \"A Face ID\",\n\"faceId2\": \"Another Face ID\"\n}"
headers = {
    'ocp-apim-subscription-key': "keyHere",
    'content-type': "application/json"
}

conn.request("POST", "/face/v1.0/verify", payload, headers)
res = conn.getresponse()
data = res.read()
There are a couple of issues with your script:
You must pass face IDs, not URLs or file objects, to the REST API.
You must correctly formulate the HTTP request.
However, you may find it easier to use the Python API rather than the REST API. For example, once you have the face IDs, you can just run result = CF.face.verify(faceid1, another_face_id=faceid2) instead of worrying about setting up the correct POST request.
You will probably need to install cognitive_face with pip. I use that API to get the face IDs as a bonus demonstration.
To make this simpler, let's assume you have img1.jpg and img2.jpg on disk.
Here is an example using the REST API:
import cognitive_face as CF
from io import BytesIO
import json
import http.client

# Setup
KEY = "your subscription key"

# Get face ids
def get_face_id(img_file):
    f = open(img_file, 'rb')
    data = f.read()
    f.close()
    faces = CF.face.detect(BytesIO(data))
    if len(faces) != 1:
        raise RuntimeError('Too many faces!')
    face_id = faces[0]['faceId']
    return face_id

# Initialize API
CF.Key.set(KEY)
faceId1 = get_face_id('img1.jpg')
faceId2 = get_face_id('img2.jpg')

# Now that we have face ids, we can set up our request
headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': KEY
}
params = {
    'faceId1': faceId1,
    'faceId2': faceId2
}

# The Content-Type in the header specifies that the body will
# be json, so convert params to json
json_body = json.dumps(params)

try:
    # HTTPSConnection takes a hostname, not a full URL
    conn = http.client.HTTPSConnection('eastus.api.cognitive.microsoft.com')
    conn.request("POST", "/face/v1.0/verify", body=json_body, headers=headers)
    response = conn.getresponse()
    data = json.loads(response.read())
    print(data)
    conn.close()
except Exception as e:
    print("Error: {0}".format(e))