I am trying to upload a .jpg file to a server using HTTP POST with the requests library in Python 3.7.
Target URL has some PHP code that handles the upload, taking 'fileToUpload' as the multipart variable.
I have tried wrapping the request in a with-statement, changing data=files to files=files (as recommended by some example code), and setting the Content-Type header to multipart/form-data explicitly (which should not be necessary with this library).
import requests
url = 'http://someurl.com/upload/dir/post.php'
files = {'fileToUpload' : open('image.jpg', 'rb')}
r = requests.post(url, data=files)
If I run the script, I trigger every single error message in the post.php file, while the upload works just fine from Insomnia or Postman, so the server side seems to be working.
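For comparison, a multipart upload with this library is normally built with files= rather than data= — a minimal sketch using the question's placeholder URL and field name. The request is only prepared here, not sent, and an in-memory stand-in replaces image.jpg:

```python
import io

import requests

url = 'http://someurl.com/upload/dir/post.php'  # placeholder URL from the question

# Stand-in for open('image.jpg', 'rb'); not a real image.
fake_jpg = io.BytesIO(b'\xff\xd8\xff\xe0 fake jpeg bytes')

# files= (not data=) tells requests to build a multipart/form-data body
# and to set the Content-Type boundary header automatically.
req = requests.Request(
    'POST',
    url,
    files={'fileToUpload': ('image.jpg', fake_jpg, 'image/jpeg')},
)
prepared = req.prepare()

print(prepared.headers['Content-Type'])  # multipart/form-data; boundary=...
# requests.Session().send(prepared)      # would actually send it
```

With data= instead, requests form-encodes the dict's values as text, which is why the PHP side sees no uploaded file at all.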
I want to send a request from Python, get the response as JSON, and pass it to a KNIME workflow to work with it there. This is my script; the problem is that it raises json.decoder.JSONDecodeError:
import requests
url ='https://api.edination.com/v2/edifact/read'
headers = {'Ocp-Apim-Subscription-Key': '3ecf6b1c5cf34bd797a5f4c57951a1cf'}
files = {'file':open('C:\\Users\\hcharafeddine\\Desktop\\EDI\\Interchange_1654767219416.edi','rb')}
r = requests.post(url,files=files)
r.json()
We'll need more info to help further. I understand if you don't want to share the content of the EDI message, so here are a few things to try:
The EDINation website allows you to paste an EDI message in and it'll show you the JSON output that the API will return
It also has a sample EDIFACT document you can select and then save locally to run through your python script - you can then share the results here
You can also use HTTP Toolkit to inspect the API request and response to troubleshoot further.
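One more defensive pattern that helps with this kind of troubleshooting: inspect the status code and raw body before trusting .json(). A small helper sketch (safe_json is a hypothetical name; the endpoint usage is commented out since the key and file above are the question's own):

```python
def safe_json(response):
    """Return parsed JSON from a requests response, or None with a
    diagnostic print when the body is not valid JSON (the usual cause
    of json.decoder.JSONDecodeError)."""
    try:
        return response.json()
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        print('HTTP', response.status_code, '- body was not JSON:')
        print(response.text[:500])  # first 500 chars of the raw body
        return None

# Usage sketch with the question's endpoint:
# r = requests.post('https://api.edination.com/v2/edifact/read',
#                   headers=headers, files=files)
# data = safe_json(r)
```

An error response (wrong key, malformed EDI) often comes back as plain text or HTML, and printing it usually names the actual problem.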
I'm attempting to pull JSON from my Tumblr page so that I can display an image within a Jupyter notebook globally, rather than just pasting locally saved images. Ideally, I'd GET the JSON, extract the PNG URL, and then use BytesIO and PIL to display the image.
However, when I send a get request to the server:
import json
import requests
url = 'https://www.tumblr.com/blog/ims4jupyter'
r = requests.get(url, headers={'accept': 'application/json'})
print(r.json())
I get a JSONDecodeError. Typing
r.content
into the terminal returns an HTML-formatted webpage. I think this means that Tumblr refuses to return JSON, but other websites (YouTube, for example) won't return JSON either.
The solution to this question was that the website I was trying to request JSON from did not accept those requests. If you copy my code, but instead use https://api.my-ip.io/ip.json (as an example), the original code will run.
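A cheap guard that makes this failure obvious: check the Content-Type the server declares before calling .json(). A sketch (looks_like_json is a hypothetical helper name):

```python
def looks_like_json(response):
    """True if the server says the body is JSON. Calling .json() on the
    HTML page Tumblr returns here is exactly what raises JSONDecodeError."""
    return 'application/json' in response.headers.get('Content-Type', '')

# Usage sketch with the working endpoint mentioned above:
# r = requests.get('https://api.my-ip.io/ip.json',
#                  headers={'accept': 'application/json'})
# print(r.json() if looks_like_json(r) else r.text[:200])
```

Note that the accept header is only a request; a server that doesn't offer JSON for that URL is free to ignore it and send HTML anyway, as Tumblr does here.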
There's a website that has a button which downloads an Excel file. After I click, it takes around 20 seconds for the server API to generate the file and send it back to my browser for download.
If I monitor the communication after I click the button, I can see how the browser sends a POST request to a server with a series of headers and form values.
Is there a way that I can simulate a similar POST request programmatically using Python, and retrieve the Excel file after the server sends it over?
Thank you in advance
The requests module can send all the common HTTP request types.
requests.post sends the POST request synchronously.
The payload data can be set using data=.
The response body can be accessed using .content.
Be sure to check the .status_code and only save the file on a successful response code.
Also note the use of "wb" inside open(), because we want to save the file as binary instead of text.
Example:
import requests

payload = {"dao": "SampleDAO",
           "configId": 1,
           ...}

r = requests.post("http://url.com/api", data=payload)
if r.status_code == 200:
    with open("file.save", "wb") as f:
        f.write(r.content)
Requests Documentation
I guess you could similarly do this:
file_info = requests.get(url)
with open('file_name.extension', 'wb') as file:
    file.write(file_info.content)
I honestly don't know how to explain this in depth, though, since I have little understanding of how it works.
I am trying to download a csv file from an authorized website.
I am able to get a response code of 200 with the URL https://workspace.xxx.com/abc/ (the web page you click in to download the csv), but a response code of 401 at url = 'https://workspace.xxx.com/abc/abc.csv'.
This is my code:
import requests
r = requests.get(url, auth=('myusername', 'mybasicpass'))
I tried adding headers and using a session, but I still get a response code of 401.
First of all, you have to investigate how the website accepts the password.
They might be using HTTP Basic authentication or a custom Authorization header in the request.
You can log in through their website and then download the file, studying how they pass the authorization.
They are probably not sending the plain password in the Authorization header; it may be base64-encoded (as in Basic auth) or use some other scheme.
My advice is to open the developer console and study their requests in the Network tab. If you post more information, people will be able to help you more.
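On the base64 point: HTTP Basic authentication is exactly base64("user:password") in the Authorization header, which is what auth=(user, pass) already produces. A quick offline way to inspect what requests would send (the URL and credentials are the question's placeholders; the request is prepared, not sent):

```python
import base64

import requests

# Prepare the request without sending it, to inspect the Authorization header.
req = requests.Request(
    'GET',
    'https://workspace.xxx.com/abc/abc.csv',
    auth=('myusername', 'mybasicpass'),
).prepare()

# requests' Basic auth is simply base64 of "user:password".
expected = 'Basic ' + base64.b64encode(b'myusername:mybasicpass').decode()
print(req.headers['Authorization'])
print(req.headers['Authorization'] == expected)
```

If this header matches what the browser sends yet the server still returns 401, the site is likely using something else entirely (a session cookie from a login form, a token header, etc.), which the Network tab will reveal.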
I am using Python's Requests library to POST a PDF to a document store; the uploaded PDF is thereafter used in a signature process. However, when uploading the PDF using Python (instead of curl), the signing environment doesn't work. On comparing the files, I found out that Requests adds some data to the PDF:
--ca9a0d04edf64b3395e62c72c7c143a5
Content-Disposition: form-data; name="LoI.pdf"; filename="LoI.pdf"
%%Original PDF goes here%%
--ca9a0d04edf64b3395e62c72c7c143a5--
This data is accepted perfectly fine by different PDF readers, but not by the Signature API. Is there a way to prevent Requests from adding this data to the PDF? I used the following code:
myfile = request.FILES['myfile']
url = %%documentstoreURL%%
resp = requests.request('post', url, files={myfile.name:myfile}, headers={'Content-Type':'application/pdf'}, auth=(%%auth details%%))
Thanks!
You're sending the file as binary data with curl, but attaching it in requests.
I read over the source code, and I believe resp = requests.request('post', url, data={myfile.name:myfile}, headers={'Content-Type':'application/pdf'}, auth=(%%auth details%%)) (data instead of files) will avoid the multipart encoding.
At the very least, it should be differently broken.
Being guided in the right direction, I found a working solution based on Python requests - POST data from a file
In the end I did it as follows:
myfile = request.FILES['myfile']
payload = request.FILES['myfile'].read()
headers = {'content-type': 'application/pdf'}
url = "%%DocumentServiceURL%%"
r = requests.post(url, auth=(%%auth_details%%), data=payload, verify=False, headers=headers)