I am trying to write the output of an API request (made through a shell command) to a JSON file using Python.
import os
assignments = os.system("curl https://apitest.com/api/ -u domain:jsdjbfkjsbdfden")
I am getting the response back as a string. How can I save this response to a JSON file?
I also tried the requests library in Python with the same domain name and api_key, but I'm not sure why I am getting a 404 error: {"error":"Invalid api id. Verify your subdomain parameter"}
import requests
from requests.auth import HTTPBasicAuth
url = "https://apitest.com/api/"
headers = {"SUBDOMAIN":"domain","api_key": "jsdjbfkjsbdfden"}
authParams = HTTPBasicAuth('username#gmail.com', 'password#')
response = requests.get(url, headers=headers, auth=authParams)
Any help would be appreciated.
You should be using the requests library instead of system calls.
import requests
r = requests.get('https://postman-echo.com/get?foo1=bar1&foo2=bar2')
print(r.content)
Writing to a file is covered in many tutorials across the internet such as w3schools and has been covered extensively on StackOverflow already.
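That said, since the original question is specifically about saving the response to a JSON file, here is a minimal sketch tying the two together (assuming the endpoint returns JSON; the URL and credentials are the question's placeholders):
import json
import requests

# mirror the curl call: basic auth with the question's placeholder credentials
response = requests.get("https://apitest.com/api/", auth=("domain", "jsdjbfkjsbdfden"))
response.raise_for_status()  # fail loudly on 4xx/5xx instead of writing an error body

# parse the body as JSON and write it to a file with indentation
with open("assignments.json", "w") as f:
    json.dump(response.json(), f, indent=2)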
Isn't it easier to use the requests library to make queries?
import requests
link = "" #your link
myobj = {'somekey': 'somevalue'}
r = requests.post(link, data=myobj)
r.status_code
If you have to use a command:
import os
assignments = os.system("curl https://apitest.com/api/ -u domain:jsdjbfkjsbdfden > somefile")
There's no real reason to use Python's requests module per se, except for purity; however, keeping it pure Python helps portability.
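If you do shell out to curl, a sketch using subprocess instead of os.system captures the response as a string in Python rather than only returning curl's exit status (Python 3.7+ for capture_output; the URL and credentials are again the question's placeholders):
import subprocess

# run curl and capture stdout; check=True raises if curl exits non-zero
result = subprocess.run(
    ["curl", "-s", "-u", "domain:jsdjbfkjsbdfden", "https://apitest.com/api/"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # the raw response body as a string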
Problem Description
I am using the following Python code to retrieve data from a given website through an API. The problem is that I am not receiving anything back. When I print(str(response.status)+" "+response.reason), I get 302 FOUND, and nothing else is printed. From what I saw, the HTTP response status code 302 Found is a common way of performing URL redirection.
Question
I saw that there is a way to set allow_redirects to False in order to solve that problem, but that is a requests feature. I am forced to use Python 2.7; I can't use Python 3. Is there a way to add something like allow_redirects to the request in Python 2.7? I also can't use the requests library, so import requests is not an option.
#!/usr/bin/env python
import sys
import json
import httplib

# Retrieve list of errors from Error Viewer
def retrieve_errors_from_error_viewer(errors):
    headers = {"Content-Type": "application/json", "Accept": "text/html"}
    data = {"dba": "XXX", "phase": "PROD"}
    conn = httplib.HTTPConnection('errorviewer.toys.net')
    conn.request('POST', '/api/errors', json.dumps(data), headers)
    response = conn.getresponse()
    print(str(response.status) + " " + response.reason)
    print(response.read())

if __name__ == "__main__":
    # Retrieve Errors From ErrorViewer
    errors = []
    retrieve_errors_from_error_viewer(errors)
If you use httplib2 instead of httplib, you have the follow_all_redirects option that should solve your problem.
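A minimal sketch of that approach, reusing the question's host and payload (httplib2 is a third-party package, so it has to be installed first):
import json
import httplib2

h = httplib2.Http()
h.follow_all_redirects = True  # follow redirects even on POST requests

headers = {"Content-Type": "application/json", "Accept": "text/html"}
data = {"dba": "XXX", "phase": "PROD"}

# httplib2 transparently follows the 302 and returns the final response
response, content = h.request(
    'http://errorviewer.toys.net/api/errors',
    method='POST',
    body=json.dumps(data),
    headers=headers,
)
print(response.status, content)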
The API I want to access requires me to use cURL, which I'm unfamiliar with. I'm on Windows using the PyCharm/Spyder IDEs (I can use both).
I've only used the requests library, so I'm not sure how to proceed. I tried the approach below using requests but got the error "TypeError: 'str' object is not callable." I researched answers here but couldn't find a resolution.
What the API documentation says:
$ curl -H "X-ABCD-Key: api_key" \
https://api.abcd.com/searches.json
My approach:
import requests
url = "https://api.abcd.com/searches.json"
auth = ("api_key")
r = requests.get(url, auth=auth)
print(r.content)
Try passing the key in a header, which is what curl's -H flag does:
import requests
headers = {'X-ABCD-Key': 'api_key'}
response = requests.get('https://api.abcd.com/searches.json', headers=headers)
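For context on the original error: auth=("api_key") does not create a tuple, so requests receives a plain string and tries to call it as a custom auth object, hence "TypeError: 'str' object is not callable". A quick check, assuming 'api_key' is swapped for a real key:
import requests

headers = {'X-ABCD-Key': 'api_key'}  # placeholder from the API docs
response = requests.get('https://api.abcd.com/searches.json', headers=headers)

print(response.status_code)  # expect 200 once the key is accepted
print(response.json())       # assuming the endpoint returns JSON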
So I'm trying to get the output of the IBM Visual Recognition service, but I always get the same error: {"code":401, "error": "Unauthorized"}
It works if I try it with cURL:
$ curl -X POST -u "apikey:------------" -F "images_file=@bobross.jpg" "https://gateway.watsonplatform.net/visual-recognition/api/v3/detect_faces?version=2018-03-19"
{ facerecognition data }
My python code so far:
import json
import sys
import requests
header = {'apikey': '---------', 'Content-Type': 'FaceCharacteristics'}
url = "https://gateway.watsonplatform.net/visual-recognition/api/v3/detect_faces?version=2018-03-19"
file = {'image': open('bobross.jpg', 'rb')}
r = requests.post(url, headers=header, files=file)
print(r.text)
I tried variants of my code, but it always led to "Unauthorized".
By the way, I have very little experience with Python; I'm still learning.
In your curl example you are passing authentication with the -u flag, while in Python you are passing it in the header as-is. The server is ignoring the authentication in the header, and you are being returned a 401, as we'd expect.
To make life easier we can pass our auth details into the request itself with
auth=('apikey', '[An API Key]') as a named parameter.
It would also be worth removing the Content-Type: FaceCharacteristics from the header - not really sure where this was picked up.
import requests
url = 'https://gateway.watsonplatform.net/visual-recognition/api/v3/classify?version=2018-03-19'
files = {'images_file': open('fruitbowl.jpg','rb')}
resp = requests.post(url, auth=('apikey', '[An API Key]'), files=files)
print(resp.content)
Finally, add the file and you should be all set.
More info is available in the requests documentation.
However, if you are doing anything more than this, you probably want to have a look at the Python SDK that IBM provides.
It has more documentation and sample code that you can use.
For example, this is provided.
import json
from watson_developer_cloud import VisualRecognitionV3

visual_recognition = VisualRecognitionV3(
    version='{version}',
    api_key='{api_key}'
)

with open('./fruitbowl.jpg', 'rb') as images_file:
    classes = visual_recognition.classify(
        images_file,
        threshold='0.6',
        classifier_ids='dogsx2018x03x17_1725181949,Connectors_424118776')
    print(json.dumps(classes, indent=2))
I'm trying to download a zip file from a provider. Using wget works fine:
wget -c --http-user=MY_UN --http-password=MY_PW "https://datapool.asf.alaska.edu/GRD_MD/SA/S1A_EW_GRDM_1SDH_20151003T040339_20151003T040351_007983_00B2A6_7377.zip"
However, using the Python requests library I get 401 errors with the same credentials. Does anybody know why that might be, or where to look to begin understanding the problem?
url = "https://datapool.asf.alaska.edu/GRD_MD/SA/S1A_EW_GRDM_1SDH_20151003T040339_20151003T040351_007983_00B2A6_7377.zip"
r = requests.get(url, auth=("MY_UN", "MY_PW"), stream=True)
I should mention that I have quadruple checked the details, and they are correct on both. Is there an alternative method in Python?
In the meantime I have had to spawn wget using the os package:
os.system("wget -c --http-user=MY_UN --http-password=MY_PW 'https://datapool.asf.alaska.edu/GRD_MD/SA/S1A_EW_GRDM_1SDH_20151003T040339_20151003T040351_007983_00B2A6_7377.zip'")
I had a similar issue and solved it by using HTTPDigestAuth instead of HTTPBasicAuth.
import requests
from requests.auth import HTTPDigestAuth

requests.get(url, auth=HTTPDigestAuth('mylogin', 'mypassword'))
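A quick way to tell which scheme a server expects is to look at the WWW-Authenticate header on the 401 response (a sketch, assuming the server sends that header on unauthenticated requests):
import requests

url = "https://datapool.asf.alaska.edu/GRD_MD/SA/S1A_EW_GRDM_1SDH_20151003T040339_20151003T040351_007983_00B2A6_7377.zip"

# an unauthenticated probe; the 401 response advertises the expected scheme
r = requests.get(url)
print(r.status_code, r.headers.get("WWW-Authenticate"))  # e.g. Basic realm=... or Digest realm=...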
I would suggest trying the following:
session = requests.Session()
session.trust_env = False  # to bypass a proxy
r = session.get(url, verify=False)  # or verify='file.cer' if there is a certificate
Basically I need a program that, given a URL, downloads a file and saves it. I know this should be easy, but there are a couple of drawbacks here...
First, it is part of a tool I'm building at work. I have everything else besides that; the URL is HTTPS, and it's one of those URLs you would paste into your browser and get a pop-up asking whether you want to open or save the file (.txt).
Second, I'm a beginner at this, so if there's info I'm not providing, please ask me. :)
I'm using Python 3.3 by the way.
I tried this:
import urllib.request
response = urllib.request.urlopen('https://websitewithfile.com')
txt = response.read()
print(txt)
And I get:
urllib.error.HTTPError: HTTP Error 401: Authorization Required
Any ideas? Thanks!!
You can do this easily with the requests library.
import requests
response = requests.get('https://websitewithfile.com/text.txt',verify=False, auth=('user', 'pass'))
print(response.text)
To save the file, you would write:
with open('filename.txt', 'w') as fout:
    fout.write(response.text)
(I would suggest you always set verify=True in the requests.get() call.)
Here is the documentation: https://requests.readthedocs.io/
Doesn't the browser also ask you to sign in? Then you need to repeat the request with the added authentication like this:
Python urllib2, basic HTTP authentication, and tr.im
Equally good: Python, HTTPS GET with basic authentication
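Since the question uses Python 3.3's urllib.request, here is a sketch of the same idea there (the URL and credentials are placeholders):
import urllib.request

# register credentials for the site; the handler answers the 401 challenge
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'https://websitewithfile.com/', 'user', 'pass')

opener = urllib.request.build_opener(
    urllib.request.HTTPBasicAuthHandler(password_mgr)
)

with opener.open('https://websitewithfile.com/text.txt') as response:
    print(response.read().decode())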
If you don't have the Requests module, then the code below works for Python 2.6 or later (not sure about 3.x):
import urllib
testfile = urllib.URLopener()
testfile.retrieve("https://randomsite.com/file.gz", "/local/path/to/download/file")
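For Python 3, the closest equivalent is urllib.request.urlretrieve; note that, like URLopener, it will not answer the 401 challenge on its own:
import urllib.request

# fetch the remote file and write it to the given local path
urllib.request.urlretrieve(
    "https://randomsite.com/file.gz",
    "/local/path/to/download/file",
)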
You can try this solution: https://github.qualcomm.com/graphics-infra/urllib-siteminder
import siteminder
import getpass

url = 'https://XYZ.dns.com'
r = siteminder.urlopen(url, getpass.getuser(), getpass.getpass(), "dns.com")
# getpass prompts on the terminal: Password: <Enter Your Password>

data = r.read()
# or, to parse HTML tables from the response (requires: import pandas as pd):
# df = pd.read_html(r.read())