I have taken a look at other questions related to multipart/form POST requests in Python but unfortunately, they don't seem to address my exact question. Basically, I normally use CURL in order to hit an API service that allows me to upload zip files in order to create HTML5 assets. The CURL command I use looks like this:
curl -X POST -H "Authorization: api: 222111" --form "type=html" --form "file=Folder1/Folder2/example.zip" "https://example.api.com/upload?ins_id=123"
I am trying to use a python script to iterate through a folder of zip files in order to upload all of these files and receive a "media ID" back. This is what my script looks like:
import os
import requests
import json
ins_id = raw_input("Please enter your member ID: ")
auth = raw_input("Please enter your API authorization token: ")
for filename in os.listdir("zips"):
    if filename.endswith(".zip"):
        file_path = os.path.abspath(filename)
        url = "https://example.api.com/upload?ins_id=" + str(ins_id)
        header = {"Authorization": auth}
        response = requests.post(url, headers=header,
                                 files={"form_type": (None, "html"),
                                        "form_file_upload": (None, str(file_path))})
        api_response = response.json()
        print api_response
This API service requires the file path to be included when submitting the POST. However, when I use this script, the response indicates that "file not provided". Am I including this information correctly in my script?
Thanks.
Update:
I think I am heading in the right direction now (thanks to the answer provided) but now, I receive an error message stating that there is "no such file or directory". My thinking is that I am not using os.path correctly but even if I change my code to use "relpath" I still get the same message. My script is in a folder and I have a completely different folder called "zips" (in the same directory) which is where all of my zip files are stored.
To upload files with the requests library, you can pass an open file handle in the files argument, as described in the documentation. This is the corresponding example taken from there:
url = 'http://httpbin.org/post'
files = {'file': open('path_to_your_file', 'rb')}
r = requests.post(url, files=files)
If we integrate this in your script, it would look as follows (I also made it slightly more pythonic):
import os
import requests

folder = 'zips'
ins_id = raw_input("Please enter your member ID: ")
auth = raw_input("Please enter your API authorization token: ")
url = "https://example.api.com/upload?ins_id=" + str(ins_id)
header = {"Authorization": auth}

for filename in os.listdir(folder):
    if not filename.endswith(".zip"):
        continue
    file_path = os.path.abspath(os.path.join(folder, filename))
    response = requests.post(
        url, headers=header,
        files={"form_type": (None, "html"),
               "form_file_upload": open(file_path, 'rb')}
    )
    api_response = response.json()
    print api_response
As I don't have the API end point, I can't actually test this code block - but it should be something along these lines.
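One more wrinkle that sometimes matters with uploads like this: requests also accepts a (filename, fileobj, content_type) tuple per files field, which controls the filename and MIME type the server sees. A sketch using a prepared request, so nothing is actually sent — the field names mirror the question, the bytes are made up:

```python
import io
import requests

# Each value in files may be a (filename, fileobj, content_type) tuple
payload = io.BytesIO(b"fake zip bytes")  # stands in for open(file_path, 'rb')
prepared = requests.Request(
    "POST", "https://example.api.com/upload",
    files={"form_type": (None, "html"),
           "form_file_upload": ("example.zip", payload, "application/zip")},
).prepare()
print(prepared.headers["Content-Type"])  # multipart/form-data; boundary=...
```

Inspecting prepared.body shows each part's Content-Disposition header, which is a handy way to check what the server will receive before sending anything.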
Related
I wrote the following short code snippet to play around with requests and Artifactory. I'm trying to upload a simple text file.
import requests
url = "https://myurl.jfrog.io/artifactory"
auth = ("myusername", "mypassword")
file_name = "test.txt"
response = requests.put(url + "/data/" + file_name, auth=auth, data=open(file_name, "rb"))
print(response.status_code)
I'm getting error code 405, what am I doing wrong? There are hardly any examples of using requests to work with Artifactory.
The specified request seems to be redirected to https://myurl.jfrog.io/artifactory/data/<file_name> which is not an actual repository within an Artifactory instance.
The 405 response code ("Method Not Allowed") gives a good hint.
Try creating a repository within Artifactory and appending its key after /artifactory, so the URL becomes:
https://myurl.jfrog.io/artifactory/<repository_key>/data/<file_name>
Please find additional information in the REST API documentation:
https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-DeployArtifact
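A minimal sketch of the corrected call, assuming a repository whose key is generic-local (that key is an assumption — substitute one that exists on your instance):

```python
import requests

def artifact_url(base, repo_key, path):
    """Build the deploy URL: <base>/<repository_key>/<path>."""
    return "{0}/{1}/{2}".format(base, repo_key, path)

def deploy_file(base, repo_key, path, local_file, auth):
    """PUT the raw file bytes to the repository path (Deploy Artifact)."""
    with open(local_file, "rb") as f:
        return requests.put(artifact_url(base, repo_key, path), auth=auth, data=f)

print(artifact_url("https://myurl.jfrog.io/artifactory", "generic-local", "data/test.txt"))
# https://myurl.jfrog.io/artifactory/generic-local/data/test.txt
```

Calling deploy_file(...) with auth=("myusername", "mypassword") should then return 201 Created on success instead of 405.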
I'm new to using the Google Cloud suite and I'm trying to build a simple python script that calls the Vision API for document text extraction on a set of files. To do so, I have replicated the instructions found here:
https://cloud.google.com/vision/docs/ocr#vision_text_detection-drest
Currently my python script looks something like this:
key = <my_api_key>
url = 'https://vision.googleapis.com/v1/images:annotate?key=' + key
access_token = <my_access_token>
headers = {'Authorization': 'Bearer ' + access_token,
           'Content-Type': 'application/json; charset=utf-8'}
where access_token is determined by
$ gcloud auth application-default print-access-token
(Normally, when running with curl in bash, I would replace access_token with $(gcloud auth ...).) Next,
import base64
import json
import requests
with open(file, 'rb') as f:
    encoding = base64.b64encode(f.read()).decode('ascii')

request = {'requests': [{'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}],
                         'image': {'content': encoding},
                         'imageContext': {'languageHints': ['en']}}]}

with open('request.json', 'w') as r:
    r.write(json.dumps(request))

with open('request.json') as d:
    response = requests.post(url=url, data=d, headers=headers)
i.e. I convert 'file' to base64, create the request.json file, then POST it.
I'm not very familiar with authentication, so here is my question: at the moment the only authentication I have is, from what I can tell, an API key and a service account. I used the service account json file to set GOOGLE_APPLICATION_CREDENTIALS, which allows me to call
$ gcloud auth application-default print-access-token
The only issue I'm facing is that the token seems to expire, so I have to go back to bash, set GOOGLE_APPLICATION_CREDENTIALS, call the above command again, then copy and paste the token into my code. Is there an out-of-the-box solution that gives me a static token, or a static way to run my script?
Thanks to those who commented - it seems the easiest way to authenticate is using a service account and the associated .json file directly, rather than setting the GOOGLE_APPLICATION_CREDENTIALS environment variable.
from google.cloud import vision
service_account_json = <my_service_account_json>
client = vision.ImageAnnotatorClient.from_service_account_json(service_account_json)
Parameters can still be passed into the request by sending it as a dict (equivalent to the usual JSON).
def annotate(filename):
    with open(filename, 'rb') as f:
        encoding = f.read()
    request = {'image': {'content': encoding},
               'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}],
               'image_context': {'language_hints': ['en']}}
    response = client.annotate_image(request=request)
    return response
I'm a Splunk developer and I need to create a Python script to get data from a website using an API call. I have no idea how to write a Python script.
I have one refresh token, which is exchanged for another token (the access token):
curl -X POST https://xxxx.com/api/auth/refreshToken -d <refresh token>
The above command returns only the access token, in text format.
curl -X GET https://xxxx.com/api/reporting/v0.1.0/training -g --header "Authorization:Bearer <access token>"| json_pp
Running the above command returns the data in JSON format.
I need to create a Python script for this kind of API call.
Thanks in advance.
Say you have a file called rest.py, then:
import requests
from requests.auth import HTTPDigestAuth
import json

# Replace with the correct URL
url = "http://api_url"

# It is good practice not to hardcode credentials, so ask the user for them at runtime
myResponse = requests.get(url,
                          auth=HTTPDigestAuth(raw_input("username: "),
                                              raw_input("Password: ")),
                          verify=True)
# print(myResponse.status_code)

# For a successful API call, the response code will be 200 (OK)
if myResponse.ok:
    # json.loads takes a string or bytes, so use .content to fetch the raw body
    # and convert it into a Python data structure (dict or list, depending on the JSON)
    jData = json.loads(myResponse.content)

    print("The response contains {0} properties".format(len(jData)))
    print("\n")
    for key in jData:
        print key + " : " + jData[key]
else:
    # If the response code is not 200 (OK), raise the resulting HTTP error with description
    myResponse.raise_for_status()
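The snippet above shows a generic GET with digest auth; the question's two curl calls (a refresh-token POST, then a bearer-token GET) would translate roughly as below. The host and paths come from the question; the helper names and the plain-text token response are assumptions based on its description:

```python
import requests

def bearer_headers(access_token):
    """Build the Authorization header used by the reporting call."""
    return {"Authorization": "Bearer " + access_token}

def fetch_training(base_url, refresh_token):
    # POST the refresh token; per the question, the API answers with the
    # access token as plain text
    resp = requests.post(base_url + "/api/auth/refreshToken", data=refresh_token)
    resp.raise_for_status()
    access_token = resp.text.strip()
    # GET the report with the bearer token and parse the JSON body
    report = requests.get(base_url + "/api/reporting/v0.1.0/training",
                          headers=bearer_headers(access_token))
    report.raise_for_status()
    return report.json()

print(bearer_headers("abc123"))  # {'Authorization': 'Bearer abc123'}
```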
I am uploading a file to a server using the requests lib in Python. I read its documentation and some Stack Overflow questions and implemented the following code:
url = "http://example.com/file.csv"
id = "user-id"
password = "password"
headers = {'content-type': 'application/x-www-form-urlencoded'}
with open(file_path, 'rb') as f:
    response = requests.post(url=url, files={'file': f},
                             auth=HTTPBasicAuth(username=id, password=password),
                             headers=headers)
But this code is not working: response.status_code returns 405 and response.reason returns Method Not Allowed. When I upload the file using a curl command in the terminal, it works fine and the file gets uploaded:
curl -u user-id:password -T file/path/on/local/machine/file.csv "http://example.com/file.csv"
Can someone please help here?
Related question here. In reality, curl --upload-file performs a PUT, not a POST. If you want to mimic what curl does, you might want to try:
with open(file_path, 'rb') as f:
    response = requests.put(url=url, files={'file': f},
                            auth=HTTPBasicAuth(username=id, password=password),
                            headers=headers)
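Note also that curl -T sends the raw file bytes as the request body, not a multipart form, so an even closer equivalent passes data= instead of files=. A sketch using a prepared request to show the difference without a network round trip — the sample bytes are made up:

```python
import io
import requests
from requests.auth import HTTPBasicAuth

def raw_put(url, fileobj, username, password):
    """Prepare a PUT whose body is the raw file bytes, like curl -T."""
    req = requests.Request("PUT", url, data=fileobj.read(),
                           auth=HTTPBasicAuth(username, password))
    return req.prepare()

prepared = raw_put("http://example.com/file.csv",
                   io.BytesIO(b"a,b\n1,2\n"), "user-id", "password")
print(prepared.method)  # PUT
print(prepared.body)    # the raw CSV bytes, with no multipart boundary
```

requests.Session().send(prepared) would send it; in one step that is simply requests.put(url, data=f, auth=HTTPBasicAuth(...)).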
I have tried to upload a PDF by sending a POST request to an API in R and in Python, but I am not having a lot of success.
Here is my code in R
library(httr)
url <- "https://envoc-apply-api.azurewebsites.net/api/apply"
POST(url, body = upload_file("filename.pdf"))
The status I received is 500, when I want a status of 202.
I have also tried with the exact path instead of just the filename, but that comes up with a "file does not exist" error.
My code in Python
import requests
url ='https://envoc-apply-api.azurewebsites.net/api/apply'
files = {'file': open('filename.pdf', 'rb')}
r = requests.post(url, files=files)
Error I received
FileNotFoundError: [Errno 2] No such file or directory: 'filename.pdf'
I have been trying to use these to guides as examples.
R https://cran.r-project.org/web/packages/httr/vignettes/quickstart.html
Python http://requests.readthedocs.io/en/latest/user/quickstart/
Please let me know if you need any more info.
Any help will be appreciated.
You need to specify a full path to the file:
import requests
url ='https://envoc-apply-api.azurewebsites.net/api/apply'
files = {'file': open(r'C:\Users\me\filename.pdf', 'rb')}  # raw string, so the backslashes are not treated as escape sequences
r = requests.post(url, files=files)
Otherwise it never finds filename.pdf when it tries to open it.
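If hardcoding an absolute path is undesirable, the path can instead be derived from __file__, so the script works from any working directory. A sketch — the helper names are my own:

```python
import os
import requests

def relative_to(base_file, name):
    """Resolve name relative to the directory containing base_file."""
    return os.path.join(os.path.dirname(os.path.abspath(base_file)), name)

def upload(url, pdf_path):
    """POST the PDF as a multipart upload, as in the answer above."""
    with open(pdf_path, 'rb') as f:
        return requests.post(url, files={'file': f})

# Usage (not run here):
#   r = upload('https://envoc-apply-api.azurewebsites.net/api/apply',
#              relative_to(__file__, 'filename.pdf'))
```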