Using Python and Requests, I am trying to upload a file to an API with a multipart/form-data POST request, using a custom authentication mechanism (it just adds an 'Authentication-Token' header).
Here is the code I use:
import requests
from requests.auth import AuthBase

class MyAuth(AuthBase):
    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        r.headers['Authentication-Token'] = self.token
        return r

token = "175a5607d2e79109539b490f0f8ffe60"
url = "https://my-ap.com/files"
r = requests.post(url,
                  files={'file': open('qsmflkj.jpg', 'rb')},
                  data={'id': 151},
                  auth=MyAuth(token))
Unfortunately, this request causes the following exception on the server (the backend uses the Spring framework):
org.apache.tomcat.util.http.fileupload.FileUploadException: the request was rejected because no multipart boundary was found
I have tried a lot of things, like adding the header myself, but this seems to be the "right" way to do it, and it fails. I know that the API is used from various mobile clients, and hence that it is possible to upload pictures there. What could I do differently to be able to use this API in Python?
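For anyone hitting the same error: the Tomcat message means the request reached the server with a multipart/form-data Content-Type that lacked a boundary parameter. That usually happens when the Content-Type header is set by hand somewhere (for example on a Session or inside an auth hook), which stops Requests from generating it together with the boundary. A minimal sketch of the plain Requests call with no Content-Type set manually (URL, token and field names taken from the snippet above):

import requests

token = "175a5607d2e79109539b490f0f8ffe60"
url = "https://my-ap.com/files"

with open('qsmflkj.jpg', 'rb') as f:
    r = requests.post(
        url,
        files={'file': f},
        data={'id': 151},
        # Let Requests compute the multipart Content-Type and boundary;
        # do not set 'Content-Type' in headers or in the auth hook.
        headers={'Authentication-Token': token},
    )
print(r.status_code)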
In the end, I never managed to do this with Requests, but I succeeded with urllib2 and the poster library:
from poster.encode import multipart_encode
from poster.streaminghttp import register_openers
import urllib2

register_openers()
token = "175a5607d2e79109539b490f0f8ffe60"

with open('qsmflkj.jpg', 'rb') as f:
    datagen, headers = multipart_encode({"file": f})
    headers['Authentication-Token'] = token
    request = urllib2.Request("https://myserver.com/files",
                              datagen, headers)
    response = urllib2.urlopen(request)
Basically, I have to send a request to an API.
The URL is this one: https://testws.punto-web.com/wcfadproc/srvproceso.svc?wsdl
The information for the request is formatted in XML, as shown in the body below.
The code I've tried is the following, but I get a 415 error and the response is empty, so I think I got the request wrong. Also, while researching this problem I noticed that there is a module named 'zeep' that can be used for SOAP and WSDL, but I'm not sure whether it's necessary or whether my problem can be solved with the 'requests' module alone.
import requests
url = 'https://testws.punto-web.com/wcfadproc/srvproceso.svc?wsdl'
headers = {'content-type': 'text/xml'}
body = """
<Anulacion>
<Comercio>1234567</Comercio>
<IdTxn>12345</IdTxn>
<CodAutorizacion>123456</CodAutorizacion>
<NumPedido>123456</NumPedido>
<Moneda>PEN</Moneda>
<Monto>100.00</Monto>
<FechaTxn>20211206</FechaTxn>
<HoraTxn>121212</HoraTxn>
</Anulacion>
"""
anulacion = requests.request('POST',
                             url,
                             data=body,
                             headers=headers)
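Since the endpoint is a WSDL-described SOAP service, a 415 usually means the raw XML was not wrapped in the SOAP envelope and content type the service expects; this is what zeep takes care of. A sketch with zeep, where the operation name 'Anulacion' and its parameter names are assumptions derived from the XML above and must be checked against what the WSDL actually exposes:

from zeep import Client

wsdl = 'https://testws.punto-web.com/wcfadproc/srvproceso.svc?wsdl'
client = Client(wsdl)

# Print the services, ports and operations the WSDL defines, so the real
# operation and argument names can be used below (the zeep CLI,
# python -m zeep <wsdl-url>, prints the same information).
client.wsdl.dump()

# Hypothetical call -- the operation and keyword arguments are placeholders
# taken from the XML body in the question.
result = client.service.Anulacion(
    Comercio='1234567',
    IdTxn='12345',
    CodAutorizacion='123456',
    NumPedido='123456',
    Moneda='PEN',
    Monto='100.00',
    FechaTxn='20211206',
    HoraTxn='121212',
)
print(result)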
Hi guys, I'm developing a Python 3 Quart asyncio application and I'm trying to set up a test framework around my HTTP API.
Quart has methods to build JSON, form, and raw requests, but no files request. I believe I need to build the request packet myself and post a "raw" request.
Using Postman I can see that the request needs to look like this:
----------------------------298121837148774387758621\r\n
Content-Disposition: form-data; name="firmware"; filename="image.bin"\r\n
Content-Type: application/octet-stream\r\n
\r\n
\x00#\x00\x10\x91\xa0\t\x08+\xaa\t\x08/\xaa\t\x083\xaa\t\x087\xaa\t\x08;\xaa\t\x08\x00\x00\x00\
....
\xff\xff\xff\xff\xff\xff\xff\xa5\t\tZ\x0c\x00Rotea MLU Main V0.12\x00\x00k%\xea\x06\r\n
----------------------------298121837148774387758621--\r\n
I'd prefer not to encode this myself if there is a method that exists.
Is there a module in Python where I can build the raw packet data and send it with the Quart API?
I have tried using the requests library together with Quart's test client:
import requests
from .web_server import app as quart_app

test_client = quart_app.test_client()
firmware_image = 'test.bin'

with open(firmware_image, 'rb') as f:
    data = f.read()
    files = {'firmware': (firmware_image, data, 'application/octet-stream')}
    firmware_req = requests.Request('POST', 'http://localhost:5000/firmware_update',
                                    files=files).prepare()
    response = await test_client.post('/firmware_update',
                                      data=firmware_req.body,
                                      headers={'Content-type': 'multipart/form-data'})
Any suggestions would be greatly appreciated.
Cheers. Mitch.
Python's requests module provides a prepare() method on Request objects that you can use to get the raw data it would send for the request.
import requests
url = 'http://localhost:8080/'
files = {'file' : open('z', 'rb'),
         'file2': open('zz', 'rb')}
req = requests.Request('POST', url, files=files)
r = req.prepare()
print(r.headers)
print(r.body)
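To feed that prepared body into Quart's test client (as in the question above), the prepared Content-Type header also has to be forwarded, because it carries the multipart boundary; a bare 'multipart/form-data' header is exactly what produces "no multipart boundary was found"-style failures. A sketch, assuming the same test_client and endpoint as in the question:

import requests

files = {'firmware': ('test.bin', open('test.bin', 'rb'), 'application/octet-stream')}
prepared = requests.Request('POST', 'http://localhost:5000/firmware_update',
                            files=files).prepare()

# Forward both the raw multipart body and the generated Content-Type
# header (which includes the boundary) to the Quart test client.
response = await test_client.post(
    '/firmware_update',
    data=prepared.body,
    headers={'Content-Type': prepared.headers['Content-Type']},
)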
I need to implement "put" and "get" REST API requests for sending a JSON file.
The problem is that it has to be done using the urllib or urllib2 module (i.e. no requests module).
Is there any brief tutorial on how to do it?
Thanks!
I assume that you're using Python 3. Here's how you can make a simple PUT request with the standard library:
from urllib.request import Request, urlopen
import json

url, data = 'https://example.com', {'key': 'value'}
data_bytes = bytes(json.dumps(data), encoding='utf8')
request = Request(url, method='PUT', data=data_bytes,
                  headers={'Content-Type': 'application/json'})
with urlopen(request) as response:
    print(response.read())
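And a matching GET sketch with the same standard-library tools (the URL is a placeholder and the response body is assumed to be JSON):

from urllib.request import Request, urlopen
import json

request = Request('https://example.com/resource', method='GET',
                  headers={'Accept': 'application/json'})
with urlopen(request) as response:
    # Decode the raw bytes and parse the JSON payload.
    payload = json.loads(response.read().decode('utf8'))
    print(payload)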
I am trying to connect to a webpage using urllib3. The code is provided below.
import urllib3

http = urllib3.PoolManager()
fields = {'username': 'abc', 'password': 'xyz'}
r = http.request('GET', url, fields)
If we assume that url is some webpage which needs to be authenticated using a username and password, am I using the right code to authenticate?
I have done this using urllib2 very comfortably, but I was not able to do the same thing using urllib3.
Many Thanks
Assuming you're trying to do Basic Authentication, then you need to put the username and password encoded in an Authorization header. Here's one way to do that using the urllib3.make_headers helper:
import urllib3
http = urllib3.PoolManager()
url = '...'
headers = urllib3.make_headers(basic_auth='abc:xyz')
r = http.request('GET', url, headers=headers)
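If the page instead expects a login form rather than HTTP Basic authentication, the credentials would go into the request body; a sketch of that alternative (the URL is a placeholder):

import urllib3

http = urllib3.PoolManager()
url = '...'
# For POST requests, urllib3 encodes fields as multipart/form-data by default.
r = http.request('POST', url, fields={'username': 'abc', 'password': 'xyz'})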
Below is a working example of authenticating to an API using the requests library (Ubuntu with Python 3.6). Hope it helps!
import json
import requests
from requests.auth import HTTPBasicAuth

class ApiClient:
    def __init__(self, username, password):
        self.username = username
        self.password = password
        self.header = {"Content-Type": "application/json"}
        self.access_url = "https://your.login.url/context_path"
        self.data = {
            "key_to_send": "value_to_send"
        }

    def get_token(self):
        # Serialize the payload once; encoding it twice would send a
        # JSON-encoded string instead of a JSON object.
        encoded_data = json.dumps(self.data)
        self.response = requests.post(self.access_url,
                                      auth=HTTPBasicAuth(self.username, self.password),
                                      headers=self.header,
                                      data=encoded_data)
        # Show me what you found
        print(self.response.text)
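A short usage sketch for the class above; the credentials are placeholders:

client = ApiClient("my_user", "my_password")
client.get_token()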
I need to have a proxy that acts as an intermediary to fetch images. An example would be: my server requests domain1.com/?url=domain2.com/image.png, and the domain1.com server responds with the data at domain2.com/image.png, fetched through domain1.com.
Essentially I want to pass to the proxy the URL I want fetched, and have the proxy server respond with that resource.
Any suggestions on where to start on this?
I need something very easy to use or implement as I'm very much a beginner at all of this.
Most solutions I have found in Python and/or Django have the proxy act as a "translator", i.e. domain1.com/image.png translates to domain2.com/image.png, which is obviously not the same.
I currently have the following code, but fetching images results in garbled data:
import httplib2
from django.conf.urls.defaults import *
from django.http import HttpResponse

def proxy(request, url):
    conn = httplib2.Http()
    if request.method == "GET":
        url = request.GET['url']
        resp, content = conn.request(url, request.method)
        return HttpResponse(content)
Old question but for future googlers, I think this is what you want:
import urllib2
from django.http import HttpResponse

# proxies the google logo
def test(request):
    url = "http://www.google.com/logos/classicplus.png"
    req = urllib2.Request(url)
    response = urllib2.urlopen(req)
    # note: mimetype was renamed to content_type in later Django versions
    return HttpResponse(response.read(), mimetype="image/png")
A very simple Django proxy view with requests and StreamingHttpResponse:
import requests
from django.http import StreamingHttpResponse

def my_proxy_view(request):
    url = request.GET['url']
    response = requests.get(url, stream=True)
    return StreamingHttpResponse(
        response.raw,
        content_type=response.headers.get('content-type'),
        status=response.status_code,
        reason=response.reason)
The advantage of this approach is that you don't need to load the complete file into memory before streaming the content to the client.
As you can see, it forwards some response headers. Depending on your needs, you may want to forward the request headers as well; for example:
response = requests.get(url, stream=True,
                        headers={'user-agent': request.headers.get('user-agent')})
If you need something more complete than my previous answer, you can use this class:
import requests
from django.http import StreamingHttpResponse

class ProxyHttpResponse(StreamingHttpResponse):
    def __init__(self, url, headers=None, **kwargs):
        upstream = requests.get(url, stream=True, headers=headers)

        kwargs.setdefault('content_type', upstream.headers.get('content-type'))
        kwargs.setdefault('status', upstream.status_code)
        kwargs.setdefault('reason', upstream.reason)

        super().__init__(upstream.raw, **kwargs)

        for name, value in upstream.headers.items():
            self[name] = value
You can use this class like so:
def my_proxy_view(request):
    url = request.GET['url']
    return ProxyHttpResponse(url, headers=request.headers)
The advantage of this version is that you can reuse it in multiple views. Also, it forwards all headers, and you can easily extend it to add or exclude some other headers.
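As a sketch of one such extension (reusing the ProxyHttpResponse class above), hop-by-hop headers could be stripped after copying; the particular list of header names here is an assumption for illustration:

class FilteredProxyHttpResponse(ProxyHttpResponse):
    # Headers that should not be copied verbatim from the upstream response.
    EXCLUDED_HEADERS = ('Connection', 'Keep-Alive', 'Transfer-Encoding', 'Content-Encoding')

    def __init__(self, url, headers=None, **kwargs):
        super().__init__(url, headers=headers, **kwargs)
        for name in self.EXCLUDED_HEADERS:
            if self.has_header(name):
                del self[name]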
If the file you're fetching and returning is an image, you'll need to change the mimetype (content_type in current Django) of your HttpResponse object.
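For example, a hedged sketch of that (the upstream URL reuses the Google logo from the earlier answer, and requests is used to fetch it):

import requests
from django.http import HttpResponse

def image_view(request):
    # Fetch the image bytes upstream and return them with an explicit
    # image content type instead of Django's default text/html.
    content = requests.get('http://www.google.com/logos/classicplus.png').content
    return HttpResponse(content, content_type='image/png')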
Use mechanize; it allows you to choose a proxy and act like a browser, making it easy to change the user agent, go back and forth in the history, and handle authentication or cookies.
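A minimal sketch of that idea; the proxy address and user agent string are placeholders:

import mechanize

br = mechanize.Browser()
# Route requests through an HTTP proxy (placeholder address).
br.set_proxies({"http": "proxy.example.com:8080"})
# Present a regular browser user agent.
br.addheaders = [('User-agent', 'Mozilla/5.0')]
response = br.open("http://www.google.com/logos/classicplus.png")
data = response.read()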