Error: Unauthorized - Python script to IBM Watson Visual Recognition

So I'm trying to get output from the IBM Visual Recognition Service, but I always get the same error: {"code":401, "error": "Unauthorized"}
It works if I try it with cURL:
$ curl -X POST -u "apikey:------------" -F "images_file=@bobross.jpg" "https://gateway.watsonplatform.net/visual-recognition/api/v3/detect_faces?version=2018-03-19"
{ facerecognition data }
My python code so far:
import json
import sys
import requests
header = {'apikey': '---------', 'Content-Type': 'FaceCharacteristics'}
url = "https://gateway.watsonplatform.net/visual-recognition/api/v3/detect_faces?version=2018-03-19"
file = {'image': open('bobross.jpg', 'rb')}
r = requests.post(url, headers=header, files=file)
print(r.text)
I tried other variants of my code, but it always led to "Unauthorized".
Btw, I have very little experience with Python; I'm still trying to learn.

In your curl example you are passing authentication with the -u flag, while in your Python code you are passing it in the header as-is. The server is ignoring the authentication in the header, so you get a 401 back, as expected.
To make life easier we can pass our auth details into the request itself with
auth=('apikey', '[An API Key]') as a named parameter.
It would also be worth removing the Content-Type: FaceCharacteristics entry from the header - it isn't a valid content type, and it's not clear where it was picked up.
import requests
url = 'https://gateway.watsonplatform.net/visual-recognition/api/v3/classify?version=2018-03-19'
files = {'images_file': open('fruitbowl.jpg','rb')}
resp = requests.post(url, auth=('apikey', '[An API Key]'), files=files)
print(resp.content)
Finally add the file and you should be all set.
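(The same auth pattern should work for the detect_faces endpoint from your question - only the URL path changes.)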
More info is available in the requests documentation.
However, if you are doing anything more than this, you probably want to have a look at the Python SDK that IBM provides.
It has more documentation and sample code that you can use.
For example, this is provided.
import json
from watson_developer_cloud import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version='{version}',
    api_key='{api_key}'
)

with open('./fruitbowl.jpg', 'rb') as images_file:
    classes = visual_recognition.classify(
        images_file,
        threshold='0.6',
        classifier_ids='dogsx2018x03x17_1725181949,Connectors_424118776')
    print(json.dumps(classes, indent=2))
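Since your original call was to detect_faces rather than classify, here is a minimal sketch of the face-detection equivalent, under the assumption that this SDK version exposes a detect_faces method with the same calling style (check the SDK docs for the exact signature):

# Assumes the SDK's detect_faces method mirrors classify; the
# visual_recognition client is the one constructed above.
with open('./bobross.jpg', 'rb') as images_file:
    faces = visual_recognition.detect_faces(images_file)
    print(json.dumps(faces, indent=2))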

Related

CURL Python Issue When Hitting an API

The API I want to access is having me use cURL, something I'm unfamiliar with. I'm on Windows using the PyCharm/Spyder IDEs (I can use both).
I've only used the requests library, so I'm not sure how to proceed. I tried the approach below using requests but got the error "TypeError: 'str' object is not callable." I researched answers here but couldn't find a resolution.
What the API documentation says:
$ curl -H "X-ABCD-Key: api_key" \
https://api.abcd.com/searches.json
My approach:
import requests
url = "https://api.abcd.com/searches.json"
auth = ("api_key")
r = requests.get(url, auth=auth)
print(r.content)
Try passing the key as a header instead. (The TypeError comes from auth=("api_key") - the parentheses don't create a tuple, so requests receives a plain string and tries to call it as a custom auth callable.)
import requests
headers = {'X-ABCD-Key': 'api_key'}
response = requests.get('https://api.abcd.com/searches.json', headers=headers)
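To check the result:

print(response.status_code)  # expect 200 once the key is accepted
print(response.json())       # assuming the API returns a JSON body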

Trying to write output from shell command to JSON file using python

I am trying to write the output from an API request (made via a shell command) to a JSON file using Python.
import os
assignments = os.system("curl https://apitest.com/api/ -u domain:jsdjbfkjsbdfden")
I get the response back as a string. How can I save this response to a JSON file?
I also tried the requests library in Python with the same domain name and api_key, but I'm not sure why I am getting a 404 error: {"error":"Invalid api id. Verify your subdomain parameter"}
import requests
from requests.auth import HTTPBasicAuth
url = "https://apitest.com/api/"
headers = {"SUBDOMAIN":"domain","api_key": "jsdjbfkjsbdfden"}
authParams = HTTPBasicAuth('username@gmail.com', 'password#')
response = requests.get(url,headers=headers,auth = authParams)
Any help would be appreciated.
You should be using the requests library instead of system calls.
import requests
r = requests.get('https://postman-echo.com/get?foo1=bar1&foo2=bar2')
print(r.content)
Writing to a file is covered in many tutorials across the internet, such as w3schools, and has been covered extensively on Stack Overflow already.
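For completeness, a minimal sketch using the echo endpoint above (the output filename response.json is just an example):

import json
import requests

r = requests.get('https://postman-echo.com/get?foo1=bar1&foo2=bar2')

# Parse the JSON body and write it to a file with readable indentation.
with open('response.json', 'w') as f:
    json.dump(r.json(), f, indent=2)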
Isn't it easier to use the requests library to make queries?
import requests
link = "" #your link
myobj = {'somekey': 'somevalue'}
r = requests.post(link, data=myobj)
print(r.status_code)
If you have to use a command:
import os
assignments = os.system("curl https://apitest.com/api/ -u domain:jsdjbfkjsbdfden > somefile")
There's no real reason to use Python's requests module per se, except for purity; however, keeping it pure Python helps portability.

Getting 401 authentication error with Python Requests, wget functions correctly

I'm trying to download a zip file from a provider, using wget works fine:
wget -c --http-user=MY_UN --http-password=MY_PW "https://datapool.asf.alaska.edu/GRD_MD/SA/S1A_EW_GRDM_1SDH_20151003T040339_20151003T040351_007983_00B2A6_7377.zip"
However, using the Python requests library I get 401 errors with the same credentials. Does anybody know why that might be, or where to look to begin understanding the problem?
url = "https://datapool.asf.alaska.edu/GRD_MD/SA/S1A_EW_GRDM_1SDH_20151003T040339_20151003T040351_007983_00B2A6_7377.zip"
r = requests.get(url, auth=("MY_UN", "MY_PW"), stream = True)
I should mention that I have quadruple-checked the details, and they are correct in both. Is there an alternative method in Python?
In the meantime I have had to spawn wget using the os package:
os.system("wget -c --http-user=MY_UN --http-password=MY_PW 'https://datapool.asf.alaska.edu/GRD_MD/SA/S1A_EW_GRDM_1SDH_20151003T040339_20151003T040351_007983_00B2A6_7377.zip'")
I had a similar issue and solved it by using HTTPDigestAuth instead of HTTPBasicAuth:
import requests
from requests.auth import HTTPDigestAuth

requests.get(url, auth=HTTPDigestAuth('mylogin', 'mypassword'))
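If that works for you, a minimal sketch combining it with the streaming download from the question (the output filename is an assumption):

import requests
from requests.auth import HTTPDigestAuth

url = ("https://datapool.asf.alaska.edu/GRD_MD/SA/"
       "S1A_EW_GRDM_1SDH_20151003T040339_20151003T040351_007983_00B2A6_7377.zip")

r = requests.get(url, auth=HTTPDigestAuth('MY_UN', 'MY_PW'), stream=True)
r.raise_for_status()  # surface a 401/403 immediately instead of writing an error page

# Stream the zip to disk in chunks so the whole file never sits in memory.
with open('granule.zip', 'wb') as f:
    for chunk in r.iter_content(chunk_size=8192):
        f.write(chunk)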
I would suggest trying the following:
session = requests.Session()
session.trust_env = False # to bypass a proxy
r = session.get(url, verify=False) # or if there is a certificate do verify='file.cer'

API response with a proxy is working with the curl, but nothing is returned with python

I am trying to access data from an API that sits behind a proxy server. If I use a curl command like the one below, it works:
curl --proxy http://MY_PROXY_SERVER:PORT --header "Accept: application/csv" http://WEB_SERVER_ADDRESS/data/CHANNEL?start=1470011400
I get the data which is expected.
When I try to access the same URL with Python, using either requests or urllib2, I am not able to get the data back. This is the code:
from __future__ import print_function
import requests
s = requests.Session()
s.proxies = {"http": "http://MY_PROXY_SERVER:PORT"}
headers = {'Accept': 'application/csv'}
url = "http://WEB_SERVER_ADDRESS/data/CHANNEL?start=1470011400"
r = s.get(url, headers=headers)
print(r.text)
I don't get any error, and the request goes through successfully, but the output is an empty list. I also tried other media types supported by the API, like 'json'; the issue persists.
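One thing worth checking (an assumption on my part, since no answer is recorded here): requests selects the proxy by URL scheme, so a session with only an "http" entry will bypass the proxy for any https URL or redirect. Listing both schemes mirrors curl's --proxy, which applies to every request:

s.proxies = {
    "http": "http://MY_PROXY_SERVER:PORT",
    "https": "http://MY_PROXY_SERVER:PORT",  # curl --proxy covers both schemes
}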

How to POST a local file using urllib2 in Python?

I am a complete Python noob and am trying to run cURL equivalents using urllib2. What I want is a Python script that, when run, will do the exact same thing as the following cURL command in Terminal:
curl -k -F docfile=@myLocalFile.csv http://myWebsite.com/extension/extension/extension
I found the following template on a tutorial page:
import urllib
import urllib2
url = "https://uploadWebsiteHere.com"
data = "{From: 'sender#email.com', To: 'recipient#email.com', Subject: 'Postmark test', HtmlBody: 'Hello dear Postmark user.'}"
headers = { "Accept" : "application/json",
"Conthent-Type": "application/json",
"X-Postmark-Server-Token": "abcdef-1234-46cc-b2ab-38e3a208ab2b"}
req = urllib2.Request(url, data, headers)
response = urllib2.urlopen(req)
the_page = response.read()
but I am completely lost on the 'data' and 'headers' vars. The urllib2 documentation (https://docs.python.org/2/library/urllib2.html) defines the 'data' input as "a string specifying additional data to send to the server" and the 'headers' input as "a dictionary". I am totally out of my depth in trying to follow this documentation and do not see why a dictionary is necessary when I could accomplish this same task in terminal by only specifying the file and URL. Thoughts, please?
The data you are posting doesn't appear to be valid JSON. Assuming the server is expecting valid JSON, you should change that.
Your curl invocation does not pass any optional headers, so you shouldn't need to provide much in the request. If you want to verify the exact headers, you could add -vi to the curl invocation and match them directly in the Python code. Alternatively, this works for me:
import urllib2

url = "http://localhost:8888/"
data = '{"From": "sender@email.com", "To": "recipient@email.com", "Subject": "Postmark test", "HtmlBody": "Hello dear Postmark user."}'
headers = {
    "Content-Type": "application/json"
}

req = urllib2.Request(url, data, headers)
response = urllib2.urlopen(req)
the_page = response.read()
It probably is in your best interest to switch over to using requests, but for something this simple the standard library urllib2 can be made to work.
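For comparison, a sketch of the same POST with requests - the json= parameter serializes the body and sets the Content-Type header (the URL is the same local test server as above):

import requests

resp = requests.post("http://localhost:8888/",
                     json={"From": "sender@email.com",
                           "To": "recipient@email.com",
                           "Subject": "Postmark test",
                           "HtmlBody": "Hello dear Postmark user."})
print(resp.status_code)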
What I want is a Python script that, when run, will do the exact same thing as the following cURL command in Terminal:
$ curl -k -F docfile=@myLocalFile.csv https://myWebsite.com/extension...
curl -F sends the file using the multipart/form-data content type. You can reproduce it easily using the requests library:
import requests  # $ pip install requests

with open('myLocalFile.csv', 'rb') as input_file:
    r = requests.post('https://myWebsite.com/extension/...',
                      files={'docfile': input_file}, verify=False)
verify=False is to emulate curl -k.
