I would like to send a POST request with multiple parameters using the Twisted Web Client:
image: an image
metadata: a JSON document with metadata
I need to use pure Twisted, without external libraries like treq or requests.
At the moment I can only send one parameter, and I have tried a few ways without success.
Does someone know how to change the body to achieve this?
from __future__ import print_function
from twisted.internet import reactor
from twisted.web.client import Agent
from twisted.web.http_headers import Headers
from bytesprod import BytesProducer
agent = Agent(reactor)
body = BytesProducer(b"hello, world")
d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    Headers({'User-Agent': ['Twisted Web Client Example'],
             'Content-Type': ['text/x-greeting']}),
    body)

def cbResponse(ignored):
    print('Response received')
d.addCallback(cbResponse)

def cbShutdown(ignored):
    reactor.stop()
d.addBoth(cbShutdown)

reactor.run()
You need to specify how you would like the parameters encoded. If you want to submit them like a browser form, you need to encode the data as application/x-www-form-urlencoded or multipart/form-data. The former is generally for short data, and since one of your parameters is an image, it probably isn't short. So you should encode the data as multipart/form-data.
Once you have done that, you just declare the encoding in the request headers and include the encoded data in the body.
For example,
body = multipart_form_encoded_body_producer(your_form_fields)
d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    Headers({'User-Agent': ['Twisted Web Client Example'],
             'Content-Type': ['multipart/form-data']}),
    body)
Conveniently, treq provides a multipart/form-data encoder
So multipart_form_encoded_body_producer(...) probably looks something like:
MultiPartProducer([
    ("image", image_data),
    ("metadata", some_metadata),
    ...
])
You mentioned that you can't use Treq. You didn't mention why. I recommend using Treq or at least finding another library that can do the encoding for you. If you can't do that for some unreasonable reason, you'll have to implement multipart/form-data encoding yourself. It is reasonably well documented and of course there are multiple implementations you can also use as references and interoperability testing tools.
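For what it's worth, here is a rough sketch of what hand-rolled multipart/form-data encoding could look like with the Agent from the question. The field names, file name, and content types are just assumptions, the BytesProducer helper is the one already imported in the question, and note that the boundary has to be declared in the Content-Type header:
import json
import uuid

from twisted.internet import reactor
from twisted.web.client import Agent
from twisted.web.http_headers import Headers
from bytesprod import BytesProducer  # same helper as in the question

boundary = uuid.uuid4().hex.encode('ascii')

with open('picture.png', 'rb') as f:  # hypothetical image file
    image_bytes = f.read()
metadata_bytes = json.dumps({"title": "example"}).encode('utf-8')

body = b''.join([
    b'--' + boundary + b'\r\n',
    b'Content-Disposition: form-data; name="metadata"\r\n',
    b'Content-Type: application/json\r\n\r\n',
    metadata_bytes + b'\r\n',
    b'--' + boundary + b'\r\n',
    b'Content-Disposition: form-data; name="image"; filename="picture.png"\r\n',
    b'Content-Type: application/octet-stream\r\n\r\n',
    image_bytes + b'\r\n',
    b'--' + boundary + b'--\r\n',
])

agent = Agent(reactor)
d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    # the boundary must be included in the Content-Type header
    Headers({b'Content-Type': [b'multipart/form-data; boundary=' + boundary]}),
    BytesProducer(body))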
I have several Python scripts that are used from the CLI. Now I have been asked if I could provide a web API to perform some of the work normally done from the CLI. I would like to respond with JSON formatted data. I know little about JSON or API's.
How do I retrieve the query data from the http request?
How do I create the HTTP headers for the response from my script?
Given the following URL, how can I respond with a JSON reply of the name "Steve"?
http://myserver.com/api.py?query=who
I already have a functioning Apache web server running, and can serve up HTML and CGI scripts. It's simply coding the Python script that I need some help with. I would prefer not to use a framework, like Flask.
A simple code example would be appreciated.
The following is the Python code that I've come up with, but I know it's not the correct way to do this, and it doesn't use JSON:
#!/usr/local/bin/python3.7
import cgi # CGI module
# Set page headers and start page
print("Content-type: text/plain")
print("X-Content-Type-Options: nosniff\x0d\x0a\x0d\x0a")
# Form defaults
query_arg = None
# Get form values, if any
form = cgi.FieldStorage()
# Get the rest of the values if the form was submitted
if "query" in form.keys():
query_arg = form["query"].value
if query_arg == "who":
print("Steve", end="", flush=True)
You are trying to build a request handler with core Python code that handles HTTP requests itself. In my opinion that is not a good idea: there are several security scenarios attached to it, and for cross-server requests it is quite difficult to handle all the request scenarios yourself. I suggest using Flask, which is very lightweight and gives you a ready-made routing mechanism to handle all kinds of requests with very little code. Below is a snippet that generates an HTTP JSON response; hope it helps.
import sys
import flask
import random, string
from flask import jsonify
class Utils:
    def make_response(self):
        response = jsonify({
            'message': 'success',
        })
        response.status_code = 200
        return response
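For reference, a minimal runnable sketch of the routing this suggests, mapped onto the question's query=who example (the route path, port, and response shape are my assumptions, not part of the answer above):
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api')
def api():
    # e.g. http://myserver.com/api?query=who  ->  {"name": "Steve"}
    if request.args.get('query') == 'who':
        return jsonify({'name': 'Steve'})
    return jsonify({'error': 'unknown query'}), 404

if __name__ == '__main__':
    app.run(port=8000)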
Is there a way to send CoAP requests, like HTTP requests, using Python? I tried the one below, but I got many errors.
rec = reuest.get(coap://localhost:5683/other/block)
You can use a library such as CoAPthon to act as a CoAP client:
from coapthon.client.helperclient import HelperClient
client = HelperClient(server=('127.0.0.1', 5683))
response = client.get('other/block')
client.stop()
The response is of type Response. The methods available on the response are listed in the documentation, which you must build yourself.
Since you haven't listed what you want to do with the response, you can use the documentation to find out the methods available and get the values you want.
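For example, the payload and response code can be inspected roughly like this (a sketch; the attribute names come from CoAPthon's Response class and may differ between versions):
from coapthon.client.helperclient import HelperClient

client = HelperClient(server=('127.0.0.1', 5683))
response = client.get('other/block')
print(response.code)     # CoAP response code
print(response.payload)  # payload returned by the server
client.stop()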
I'm building controller between source control system and Odoo in a way that specific integrated code source control system (like bitbucket, github) would be able to payload data using json. Reading of actual payloaded data is working, but what I'm struggling, is reading headers data inside controller.
I need headers data so I could identify from which system this payload is received (for example data structure might be different in bitbucket and github). Now if I would read that header, I would know which system payloads the data and how to parse it properly.
So my controller looks like this:
from odoo import http
from odoo.http import request
class GitData(http.Controller):
    """Controller responsible for receiving git data."""

    @http.route(['/web/git.data'], type='json', auth="public")
    def get_git_data(self, **kwargs):
        """Get git data."""
        # How to read headers inside here??
        data = request.jsonrequest
        # do something with data
        return '{"response": "OK"}'
Now for example I can call this route with:
import requests
import json
url = 'http://some_url/web/git.data'
headers = {
    'Accept': 'text/plain',
    'Content-Type': 'application/json',
    'type': 'bitbucket'}
data = {'some': 'thing'}
r = requests.post(url, data=json.dumps(data), headers=headers)
Now it looks like the controller reads the headers automatically, because it understands that the request is JSON. But what if I need to manually check specific header data, like headers['type'] (in my example it was bitbucket)?
I tried looking into dir(self) and dir(request), but did not see anything related to headers. Also **kwargs is empty, so no headers there.
Note.: request object is actually:
# Thread local global request object
_request_stack = werkzeug.local.LocalStack()
request = _request_stack()
"""
A global proxy that always redirect to the current request object.
"""
# (This is taken from Odoo 10 source)
So basically it is part of werkzeug.
Maybe someone has more experience with werkzeug or controllers in general, so could point me in the right direction?
P.S. Also, in Odoo itself I did not find any example that reads headers the way I want. It looks like the only place headers are used (actually setting them rather than reading them) is after the fact, when building a response back.
from openerp.http import request
Within the controller handling your specific path, you can access the request headers using the code below. (Confirmed on Odoo 8 and Odoo 10; it probably works for Odoo 9 as well.)
headers = request.httprequest.headers
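Since request.httprequest is the underlying werkzeug request, its headers object behaves like a case-insensitive dict, so the custom header from the question can be read like this (a sketch using the question's 'type' header):
headers = request.httprequest.headers
source = headers.get('type')  # 'bitbucket' in the question's example
if source == 'bitbucket':
    pass  # parse the Bitbucket payload structure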
I am speaking to the gmail api and would like to batch the requests. They have a friendly guide for this here, https://developers.google.com/gmail/api/guides/batch, which suggests that I should be able to use multipart/mixed and include different urls.
I am using Python and the Requests library, but am unsure how to issue different URLs. Answers like "How to send a multipart/form-data with requests in python?" don't mention an option for changing that part.
How do I do this?
Unfortunately, requests does not support multipart/mixed in their API. This has been suggested in several GitHub issues (#935 and #1081), but there are no updates on this for now. This also becomes quite clear if you search for "mixed" in the requests sources and get zero results :(
Now you have several options, depending on how much you want to make use of Python and 3rd-party libraries.
Google API Client
Now, the most obvious answer to this problem is to use the official Python API client that Google is providing here. It comes with a BatchHttpRequest class that can handle the batch requests that you need. This is documented in detail in this guide.
Essentially, you create an HttpBatchRequest object and add all your requests to it. The library will then put everything together (taken from the guide above):
batch = BatchHttpRequest()
batch.add(service.animals().list(), callback=list_animals)
batch.add(service.farmers().list(), callback=list_farmers)
batch.execute(http=http)
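If it helps, a rough sketch of the surrounding setup looks like this; the import path is the one used by google-api-python-client, the callback signature is the one the library passes to each callback, and service/http come from the usual build()/authorization steps, which are omitted here:
from googleapiclient.http import BatchHttpRequest

def list_animals(request_id, response, exception):
    # each callback receives the request id, the parsed response, and any error
    if exception is not None:
        print('Request %s failed: %s' % (request_id, exception))
    else:
        print(response)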
Now, if for whatever reason you cannot or will not use the official Google libraries you will have to build parts of the request body yourself.
requests + email.mime
As I already mentioned, requests does not officially support multipart/mixed. But that does not mean that we cannot "force" it. When creating a Request object, we can use the files parameter to provide multipart data.
files is a dictionary that accepts 4-tuple values of this format: (filename, file_object, content_type, headers). The filename can be empty. Now we need to convert a Request object into a file(-like) object. I wrote a small method that covers the basic examples from the Google example. It is partly inspired by the internal methods that Google uses in their Python library:
import requests
from email.mime.multipart import MIMEMultipart
from email.mime.nonmultipart import MIMENonMultipart
BASE_URL = 'http://www.googleapis.com/batch'
def serialize_request(request):
    '''Returns the string representation of the request'''
    mime_body = ''
    prepared = request.prepare()
    # write first line (method + uri)
    if request.url.startswith(BASE_URL):
        mime_body = '%s %s\r\n' % (request.method, request.url[len(BASE_URL):])
    else:
        mime_body = '%s %s\r\n' % (request.method, request.url)
    # write headers (if possible)
    for key, value in prepared.headers.iteritems():
        mime_body += '%s: %s\r\n' % (key, value)
    # write the body (if any), separated from the headers by a blank line
    if getattr(prepared, 'body', None) is not None:
        mime_body += '\r\n' + prepared.body + '\r\n'
    return mime_body.encode('utf-8').lstrip()
This method will transform a requests.Request object into a UTF-8 encoded string that can later be used as a payload for a MIMENonMultipart object, i.e. one of the different multiparts.
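For illustration, calling it on a request like the guide's pony lookup would produce something along these lines (the If-None-Match header here is hypothetical; the exact output depends on which headers requests adds when preparing):
req = requests.Request('GET', BASE_URL + '/farm/v1/animals/pony',
                       headers={'If-None-Match': '"etag/pony"'})
print(serialize_request(req))
# GET /farm/v1/animals/pony
# If-None-Match: "etag/pony"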
Now in order to generate the actual batch request, we first need to squeeze a list of (Google API) requests into a files dictionary for the requests lib. The following method will take a list of requests.Request objects, transform each into a MIMENonMultipart and then return a dictionary that complies to the structure of the files dictionary:
import uuid
def prepare_requests(request_list):
    message = MIMEMultipart('mixed')
    output = {}
    # thanks, Google. (Prevents the writing of MIME headers we don't need)
    setattr(message, '_write_headers', lambda self: None)
    for request in request_list:
        message_id = new_id()
        sub_message = MIMENonMultipart('application', 'http')
        sub_message['Content-ID'] = message_id
        del sub_message['MIME-Version']
        sub_message.set_payload(serialize_request(request))
        # remove first line (from ...)
        sub_message = str(sub_message)
        sub_message = sub_message[sub_message.find('\n'):]
        output[message_id] = ('', str(sub_message), 'application/http', {})
    return output

def new_id():
    # I am not sure how these work exactly, so you will have to adapt this code
    return '<item%s:12930812#barnyard.example.com>' % str(uuid.uuid4())[-4:]
Finally, we need to change the Content-Type from multipart/form-data to multipart/mixed and also remove the Content-Disposition and Content-Type headers from each request part. These were generated by requests and cannot be overridden via the files dictionary.
import re
def finalize_request(prepared):
    # change to multipart/mixed
    old = prepared.headers['Content-Type']
    prepared.headers['Content-Type'] = old.replace('multipart/form-data', 'multipart/mixed')
    # remove headers at the start of each boundary
    prepared.body = re.sub(r'\r\nContent-Disposition: form-data; name=.+\r\nContent-Type: application/http\r\n', '', prepared.body)
I have tried my best to test this with the Google Example from the Batching guide:
sheep = {
    "animalName": "sheep",
    "animalAge": "5",
    "peltColor": "green"
}
commands = []
commands.append(requests.Request('GET', 'http://www.googleapis.com/batch/farm/v1/animals/pony'))
commands.append(requests.Request('PUT', 'http://www.googleapis.com/batch/farm/v1/animals/sheep', json=sheep, headers={'If-Match': '"etag/sheep"'}))
commands.append(requests.Request('GET', 'http://www.googleapis.com/batch/farm/v1/animals', headers={'If-None-Match': '"etag/animals"'}))
files = prepare_requests(commands)
r = requests.Request('POST', 'http://www.googleapis.com/batch', files=files)
prepared = r.prepare()
finalize_request(prepared)
s = requests.Session()
s.send(prepared)
And the resulting request should be close enough to what Google is providing in their Batching guide:
POST http://www.googleapis.com/batch
Content-Length: 1006
Content-Type: multipart/mixed; boundary=a21beebd15b74be89539b137bbbc7293

--a21beebd15b74be89539b137bbbc7293
Content-Type: application/http
Content-ID: <item8065:12930812#barnyard.example.com>

GET /farm/v1/animals
If-None-Match: "etag/animals"

--a21beebd15b74be89539b137bbbc7293
Content-Type: application/http
Content-ID: <item5158:12930812#barnyard.example.com>

GET /farm/v1/animals/pony

--a21beebd15b74be89539b137bbbc7293
Content-Type: application/http
Content-ID: <item0ec9:12930812#barnyard.example.com>

PUT /farm/v1/animals/sheep
Content-Length: 63
Content-Type: application/json
If-Match: "etag/sheep"

{"animalAge": "5", "animalName": "sheep", "peltColor": "green"}
--a21beebd15b74be89539b137bbbc7293--
In the end, I highly recommend the official Google library but if you cannot use it, you will have to improvise a bit :)
Disclaimer: I haven't actually tried to send this request to the Google API endpoints because the authentication procedure is too much of a hassle. I was just trying to get as close as possible to the HTTP request that is described in the Batching guide. There might be some problems with \r and \n line endings, depending on how strict the Google endpoints are.
Sources:
requests github (especially issues #935 and #1081)
requests API documentation
Google APIs for Python
I'm using BaseHTTPRequestHandler to implement my HTTP server. How do I read multiline POST data in my do_PUT/do_POST?
Edit: I'm trying to implement a standalone script that services some custom requests, something like a listener on a server that consolidates/archives/extracts from various log files. I don't want to implement something that requires a web server. I don't have much experience in Python, so I would be grateful if someone could point out a better solution.
Edit 2: I can't use any external libraries/modules; I have to make do with plain vanilla Python 2.4 / Java 1.5 / Perl 5.8.8. Restrictive policies, my hands are tied.
Getting the request body is as simple as reading from self.rfile, but you'll have to know how much to read if the client is using Connection: keep-alive. Something like this will work if the client specifies the Content-Length header...
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
class RequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_length = int(self.headers['Content-Length'])
        post_data = self.rfile.read(content_length)
        print post_data

server = HTTPServer(('', 8000), RequestHandler)
server.serve_forever()
...although it gets more complicated if the client sends data using chunked transfer encoding.
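If you do need to handle chunked bodies, a rough sketch of the parsing (purely illustrative; trailer headers are not handled) could look like this:
def read_chunked(rfile):
    # chunk sizes arrive as hex numbers on their own line, optionally
    # followed by ';extensions'; a size of 0 marks the end of the body
    body = ''
    while True:
        size_line = rfile.readline().strip()
        chunk_size = int(size_line.split(';')[0], 16)
        if chunk_size == 0:
            rfile.readline()  # consume the final CRLF (trailers, if any, are not handled)
            return body
        body += rfile.read(chunk_size)
        rfile.readline()      # consume the CRLF that terminates each chunk
Inside do_POST you would call it when there is no Content-Length and the Transfer-Encoding header says chunked, e.g. self.headers.getheader('Transfer-Encoding') == 'chunked' on Python 2.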