I am using Python 2.7.5 and urllib2.urlopen() to make a client request to a local server. When I run this piece of code separately on my machine, it works smoothly:
import json
import urllib2
urls = [ "https://localhost/rest/service1",
"https://localhost/rest/service2" ]
for url in urls:
url_encoded = urllib2.quote(url, safe="-:?=/")
token = 'f42fa4d43db94c2782feabf84fa2cc90'
headers = { "Content-Type": "application/json",
"X-Auth-Token": token }
try:
request = urllib2.Request( url_encoded, headers=headers )
response = urllib2.urlopen( request )
reply = response.read()
data = json.loads(reply)
print data
except Exception as ex:
msg = ex.msg
print msg
The same piece of code, invoked in my project using a WSGI REST framework, gives this error:
-> response = urllib2.urlopen( request )
(Pdb)
URLError: URLError...:579)'),)
Please guide me on this. Why is this happening?
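For what it's worth, a minimal sketch for capturing the full failure instead of the truncated pdb output (ex.code/ex.msg and ex.reason are part of the standard urllib2 exception API; request is the same object as above):

import urllib2

try:
    response = urllib2.urlopen(request)
except urllib2.HTTPError as ex:
    # The server answered, but with an error status
    print 'HTTP %s: %s' % (ex.code, ex.msg)
except urllib2.URLError as ex:
    # Transport-level failure (DNS, connection, SSL); .reason holds the detail
    print repr(ex.reason)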
I am trying to send a SOAP request using the Python requests library. I am behind a corporate proxy to which I have to authenticate using NTLM authentication. However, when I try to send the request, I always get status code 407 from the proxy server and the following error:
The NTLM challenge header was not found - HTTP 407: failed to establish NTLM proxy tunnel
Code snippet below:
import wincertstore
import os
from requests_ntlm2 import HttpNtlmAuth, NtlmCompatibility, HttpNtlmAdapter
import requests_pkcs12
import requests
import traceback


def create_pem_of_system_certs():
    certfile = wincertstore.CertFile()
    certfile.addstore("CA")
    certfile.addstore("ROOT")
    return certfile.name


def get_headers():
    return {
        "content-type": "text/xml",
        "SOAPAction": "",
        "Proxy-Connection": "Keep-Alive"
    }


os.environ["REQUESTS_CA_BUNDLE"] = create_pem_of_system_certs()

local_user = os.environ["userdomain"] + "\\" + os.getlogin()  # domain\\username
local_user_password = "local_user_password"
pkcs12_pass = "p12_pass"
pkcs12_file = "file.p12"

content = '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"> ' \
          '<soapenv:Header/>' \
          '<soapenv:Body>' \
          '...' \
          '</soapenv:Body>' \
          '</soapenv:Envelope>'

ntlm_proxy = "proxy_ip:proxy_port"
http_dict = "http://{}".format(ntlm_proxy)
https_dict = http_dict
proxies = {
    "http": http_dict,
    "https": https_dict
}

ntlm_compatibility = NtlmCompatibility.NTLMv2_DEFAULT

url = "URL of WSDL"

session = requests.Session()
session.mount(
    url,
    requests_pkcs12.Pkcs12Adapter(
        pkcs12_filename=pkcs12_file,
        pkcs12_password=pkcs12_pass
    )
)
session.mount(
    "https://",
    HttpNtlmAdapter(
        local_user,
        local_user_password,
        ntlm_compatibility=ntlm_compatibility
    )
)
session.mount(
    "http://",
    HttpNtlmAdapter(
        local_user,
        local_user_password,
        ntlm_compatibility=ntlm_compatibility
    )
)
session.auth = HttpNtlmAuth(
    local_user,
    local_user_password,
    ntlm_compatibility=ntlm_compatibility
)
session.proxies = proxies

try:
    response = session.post(url=url,
                            data=content,
                            headers=get_headers())
    print("Status code: {}".format(response.status_code))
except:
    print("{}".format(traceback.format_exc()))
I tried with different levels of NTLM, according to:
https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-2000-server/cc960646%28v=technet.10%29
I was searching for different examples of how to use NTLM authentication with the Python requests library, but I couldn't find any that include headers passed as a parameter to the POST method, as in my snippet above. Sending the request without the headers parameter, I get a "no SOAPAction header" error from the server. I suppose I should add some additional authorization headers inside the headers dict, or should the libraries do that?
Can anyone please help me with this?
Thank you very much!
Best regards.
First, thank you for your time. I'm trying to do an insert using a REST API POST, working in Python. My messages contain special characters that I want to keep at the destination, but the destination returns an error for them, since by default the messages are sent in UTF-8 and I want them in ISO-8859-1.
For this I added the line headers["Charset"] = "ISO-8859-1". Python does not give me an error, but I still have the same problem.
The error is:
400 Client Error: Bad Request for url: https://api.example.com/
Here is my code:
import requests
from requests.structures import CaseInsensitiveDict
url = 'https://api.example.com/'
headers = CaseInsensitiveDict()
headers["Accept"] = "application/json"
headers["Authorization"] = "Bearer "
headers["Content-Type"] = "application/json"
headers["Charset"] = "ISO-8859-1"
collet_x = df_spark.collect()
for row in collet_x:
    # insert
    resp = requests.post(url, headers=headers, data=row['JSON'])
    v_respuesta = resp.text
    print(resp.status_code)
    print(v_respuesta)
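In case it helps show what I mean, this is roughly how I imagined forcing the encoding: declare the charset on the Content-Type header and send explicitly encoded bytes. This is only a sketch; the charset parameter and the .encode() call are my assumptions, not something I have verified against the API:

import requests

url = 'https://api.example.com/'

headers = {
    "Accept": "application/json",
    "Authorization": "Bearer ",
    # assumption: declare the charset here instead of a separate "Charset" header
    "Content-Type": "application/json; charset=ISO-8859-1",
}

for row in df_spark.collect():
    body = row['JSON'].encode('ISO-8859-1')  # send bytes in the declared encoding
    resp = requests.post(url, headers=headers, data=body)
    print(resp.status_code, resp.text)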
How else can I change the encoding?
Thank you very much in advance.
Regards
I have gone through a number of similar posts related to firing GET requests with Basic Auth (e.g. Python, HTTPS GET with basic authentication), but I still can't figure out the problem. I keep getting the error requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url.
With the same credentials and headers, I tried the same request in Postman and it works as expected. I verified that the base64-encoded value for api_key:password is exactly the same as the value used in Postman, so I don't think it's an encoding or resource-access-permission problem.
python -V
Python 3.6.4 :: Anaconda, Inc.
Approach 1
import json
import requests
from base64 import b64encode

api_key = 'some_api_key'
password = 'some_password'
headers = {'accept': 'application/json'}
url = 'https://test.access.com/this/url'

api_key_password = "%s:%s" % (api_key, password)
b64_encoded = b64encode(bytes(api_key_password, 'utf-8')).decode("ascii")
headers['authorization'] = 'Basic %s' % b64_encoded

response = requests.get(url, headers=headers)

if response.ok:
    json_data = json.loads(response.content)
    print(json_data)
else:
    print(response)
    response.raise_for_status()
Approach 2
import json
import requests

api_key = 'some_api_key'
password = 'some_password'
url = 'https://test.access.com/this/url'
headers = {
    'accept': 'application/json',
}

response = requests.get(url, headers=headers, auth=(api_key, password))
print(response.ok)

if response.ok:
    json_data = json.loads(response.content)
    print(json_data)
else:
    print(response)
    response.raise_for_status()
Can you please provide some pointers?
I had a similar issue (although in .NET Framework).
In my case the reason was that I was using the URL without a trailing slash, and the API apparently does not support that.
So https://test.access.com/this/url
throws 401 Unauthorized,
but
https://test.access.com/this/url/
returns 200 OK.
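In the Python requests terms of the question above, the only change is the trailing slash; a minimal sketch reusing the question's placeholder URL and credentials:

import requests

api_key = 'some_api_key'
password = 'some_password'

# Same call as Approach 2, but with the trailing slash the API expects
response = requests.get('https://test.access.com/this/url/',
                        headers={'accept': 'application/json'},
                        auth=(api_key, password))
print(response.status_code)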
This is an older post, but I had a similar issue. Postman will cache your JSESSIONID. Be sure you are clearing out that cookie while testing. If you are hitting an API that requires a login call to establish a session before you can make subsequent API calls, this Postman behavior can produce a false sense of security.
In this situation with Python requests, it can be handled with code similar to what I've provided below:
import requests
import urllib3

loginAPI = "https://myapi.myco.com/someuri/someuri/users/login"
someHTTPGetAPI = "https://myapi.myco.com/someuri/someuri/someservice"

username = "myuser"
password = "mypass"

headers = {
    "Content-Type": "application/json",
    "login": username,
    "password": password
}

# Skip certificate verification for this internal API and silence the warning
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
verify = False

session = requests.Session()

# Log in first so the session cookie (e.g. JSESSIONID) is stored on the session
sessionResponse = session.get(url=loginAPI, headers=headers, verify=verify)

if sessionResponse.status_code == 200:
    # Subsequent calls on the same session reuse the stored cookie
    getResponse = session.get(url=someHTTPGetAPI)
    if getResponse.status_code == 200:
        responseJSON = getResponse.json()
I am getting the error message "Bad Gateway: The proxy server received an invalid response from an upstream server" with the following code:
import requests

url = "https://apis.company.com/v3/media"
attachments = {'media': ('x.mp3', open('x.mp3', 'r'))}
headers = {'content-type': "multipart/form-data",
           'cache-control': "no-cache",
           'Authorization': "Bearer zzz"}

response = requests.post(url, files=attachments, headers=headers)
print response.text
I'm following the example in the requests Quickstart documentation, where it says "You can also pass a list of tuples to the data argument": http://docs.python-requests.org/en/master/user/quickstart/#post-a-multipart-encoded-file
What is causing this error and how can I fix it?
The main problem was that I set the Content-Type in the header myself. When you pass files=..., requests has to generate its own multipart/form-data Content-Type containing the boundary, so you should not override it. This code works:
import requests
url = 'https://apis.company.com/v3/media'
token = 'token-goes-here'
headers = { 'Authorization' : 'Bearer ' + token }
filename = 'x.mp3'
with open(filename, 'rb') as media_file:
    attachments = {
        'media': (filename, media_file, 'application/octet-stream')
    }
    response = requests.post(url, files=attachments, headers=headers)
    print response.text
I'm trying to use the Microsoft Cognitive Verify API with Python 2.7: https://dev.projectoxford.ai/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a
The code is:
import httplib, urllib, base64
headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': 'my key',
}

params = '{\'faceId1\': \'URL.jpg\',\'faceId2\': \'URL.jpg.jpg\'}'

try:
    conn = httplib.HTTPSConnection('api.projectoxford.ai')
    conn.request("POST", "/face/v1.0/verify?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))
I also tried changing the conn.request line to this:
conn.request("POST", "/face/v1.0/verify?%s" % params, "", headers)
The error is:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request</h2>
<hr><p>HTTP Error 400. The request is badly formed.</p>
</BODY></HTML>
I have already tried to follow, and got working, the following examples:
https://github.com/Microsoft/Cognitive-Emotion-Python/blob/master/Jupyter%20Notebook/Emotion%20Analysis%20Example.ipynb
Using Project Oxford's Emotion API
However, I just can't make this one work. I guess there is something wrong with the params or body argument.
Any help is very appreciated.
You can refer to this question.
It looks like you did not quite understand the sample code: "{body}" means you should replace it with your own body, which contains your request URL, just as the site says:
So you can use this API this way:
body = {
    "url": "http://example.com/1.jpg"
}
…………
conn = httplib.HTTPSConnection('api.projectoxford.ai')
conn.request("POST", "/face/v1.0/detect?%s" % params, str(body), headers)
Dawid's comment looks like it should fix it (double quoting); try this for Python 2.7:
import requests
url = "https://api.projectoxford.ai/face/v1.0/verify"
payload = "{\n \"faceId1\":\"A Face ID\",\n \"faceId2\":\"A Face ID\"\n}"
headers = {
    'ocp-apim-subscription-key': "KEY_HERE",
    'content-type': "application/json"
}
response = requests.request("POST", url, data=payload, headers=headers)
print(response.text)
For Python 3:
import http.client
conn = http.client.HTTPSConnection("api.projectoxford.ai")
payload = "{\n\"faceId1\": \"A Face ID\",\n\"faceId2\": \"Another Face ID\"\n}"
headers = {
    'ocp-apim-subscription-key': "keyHere",
    'content-type': "application/json"
}
conn.request("POST", "/face/v1.0/verify", payload, headers)
res = conn.getresponse()
data = res.read()
There are a couple of issues with your script:
You must pass face Ids and not URLs or file objects to the REST API.
You must correctly formulate the HTTP request.
However, you may find it easier to use the Python API and not the REST API. For example, once you have the face ids, you can just run result = CF.face.verify(faceid1, another_face_id=faceid2) instead of worrying about setting up the correct POST request.
You will probably need to install cognitive_face with pip. I use this package to get the face IDs as a bonus illustration.
To make this simpler, let's assume you have img1.jpg and img2.jpg on disk.
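If all you need is verification, the pure Python-API route mentioned above is shorter. A minimal sketch, assuming CF.face.detect() accepts a local file path (the printed result shape is illustrative only):

import cognitive_face as CF

CF.Key.set("your subscription key")

# detect() returns a list of faces; take the single face's id from each image
face_id_1 = CF.face.detect('img1.jpg')[0]['faceId']
face_id_2 = CF.face.detect('img2.jpg')[0]['faceId']

result = CF.face.verify(face_id_1, another_face_id=face_id_2)
print(result)  # e.g. {'isIdentical': True, 'confidence': 0.9}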
Here is an example using the REST API:
import cognitive_face as CF
from io import BytesIO
import json
import http.client

# Setup
KEY = "your subscription key"


# Get Face Ids
def get_face_id(img_file):
    f = open(img_file, 'rb')
    data = f.read()
    f.close()
    faces = CF.face.detect(BytesIO(data))
    if len(faces) != 1:
        raise RuntimeError('Too many faces!')
    face_id = faces[0]['faceId']
    return face_id


# Initialize API
CF.Key.set(KEY)

faceId1 = get_face_id('img1.jpg')
faceId2 = get_face_id('img2.jpg')

# Now that we have face ids, we can set up our request
headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': KEY
}

params = {
    'faceId1': faceId1,
    'faceId2': faceId2
}

# The Content-Type in the header specifies that the body will
# be json, so convert params to json
json_body = json.dumps(params)

try:
    # Pass the host name only (no scheme) to HTTPSConnection
    conn = http.client.HTTPSConnection('eastus.api.cognitive.microsoft.com')
    conn.request("POST", "/face/v1.0/verify", body=json_body, headers=headers)
    response = conn.getresponse()
    data = response.read()
    data = json.loads(data)
    print(data)
    conn.close()
except Exception as e:
    print("[Errno {0}] {1}".format(e.errno, e.strerror))