I'm using BaseHTTPRequestHandler to implement my HTTP server. How do I read multiline POST data in my do_PUT/do_POST?
Edit: I'm trying to implement a standalone script which services some custom requests, something like a listener on a server, which consolidates/archives/extracts from various log files. I don't want to implement something which requires a web server. I don't have much experience in Python, so I would be grateful if someone could point out any better solution.
Edit2: I can't use any external libraries/modules; I have to make do with plain vanilla Python 2.4/Java 1.5/Perl 5.8.8. Restrictive policies, my hands are tied.
Getting the request body is as simple as reading from self.rfile, but you'll have to know how much to read if the client is using Connection: keep-alive. Something like this will work if the client specifies the Content-Length header...
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class RequestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read exactly Content-Length bytes of the request body.
        content_length = int(self.headers['Content-Length'])
        post_data = self.rfile.read(content_length)
        print post_data

server = HTTPServer(('', 8000), RequestHandler)
server.serve_forever()
...although it gets more complicated if the client sends data using chunked transfer encoding.
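For that chunked case, a rough sketch of the parsing loop might look like the hypothetical helper below (something you would add to your handler yourself, it is not part of BaseHTTPRequestHandler, and it ignores chunk extensions and trailer headers):

def read_chunked_body(self):
    # Chunked bodies arrive as "<hex size>\r\n<data>\r\n" blocks,
    # terminated by a zero-size chunk.
    body = []
    while True:
        size_line = self.rfile.readline().strip()
        chunk_size = int(size_line.split(';')[0], 16)  # drop any chunk extensions
        if chunk_size == 0:
            break
        body.append(self.rfile.read(chunk_size))
        self.rfile.read(2)  # consume the CRLF that follows each chunk
    return ''.join(body)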
I would like to send a POST request with multiple parameters using the Twisted Web Client:
image: the image data
metadata: a JSON document with metadata
I need to use pure Twisted, without external libraries like Treq or requests.
At the moment I can only send one parameter, and I have tried a few ways without success.
Does someone know how to change the body to achieve this?
from __future__ import print_function

from twisted.internet import reactor
from twisted.web.client import Agent
from twisted.web.http_headers import Headers

from bytesprod import BytesProducer

agent = Agent(reactor)
body = BytesProducer(b"hello, world")

d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    Headers({'User-Agent': ['Twisted Web Client Example'],
             'Content-Type': ['text/x-greeting']}),
    body)

def cbResponse(ignored):
    print('Response received')
d.addCallback(cbResponse)

def cbShutdown(ignored):
    reactor.stop()
d.addBoth(cbShutdown)

reactor.run()
You need to specify how you would like the parameters encoded. If you want to submit them like a browser form, you need to encode the data as application/x-www-form-urlencoded or multipart/form-data. The former is generally for short data, and since one of your parameters is an image it probably isn't short. So you should use multipart/form-data.
Once you have encoded the data, you just declare the encoding in the request headers and include the encoded data in the body.
For example,
body = multipart_form_encoded_body_producer(your_form_fields)
d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    Headers({'User-Agent': ['Twisted Web Client Example'],
             'Content-Type': ['multipart/form-data']}),
    body)
Conveniently, treq provides a multipart/form-data encoder.
So multipart_form_encoded_body_producer(...) probably looks something like:
MultiPartProducer([
("image", image_data),
("metadata", some_metadata),
...
])
You mentioned that you can't use Treq, but you didn't mention why. I recommend using Treq, or at least finding another library that can do the encoding for you. If you really can't do that, you'll have to implement multipart/form-data encoding yourself. It is reasonably well documented, and there are multiple implementations you can use as references and as interoperability testing tools.
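If you do end up rolling it yourself, a minimal sketch of the idea (everything here is illustrative: the boundary string is arbitrary, image_data/metadata_json stand in for your two values, and the whole body is built in memory rather than streamed) could use Twisted's own FileBodyProducer:

from io import BytesIO
from twisted.web.client import FileBodyProducer
from twisted.web.http_headers import Headers

BOUNDARY = b"formboundary123"

def encode_multipart(fields, boundary=BOUNDARY):
    # fields: list of (name, value) pairs, both bytes; no per-field
    # Content-Type or filename handling in this sketch.
    lines = []
    for name, value in fields:
        lines.append(b"--" + boundary)
        lines.append(b'Content-Disposition: form-data; name="' + name + b'"')
        lines.append(b"")
        lines.append(value)
    lines.append(b"--" + boundary + b"--")
    lines.append(b"")
    return b"\r\n".join(lines)

body_bytes = encode_multipart([(b"image", image_data),
                               (b"metadata", metadata_json)])
body = FileBodyProducer(BytesIO(body_bytes))
headers = Headers({b"Content-Type":
                   [b"multipart/form-data; boundary=" + BOUNDARY]})

You would then pass headers and body to agent.request() exactly as in the example above.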
I am working on a piece of software that has a TCP server replying to requests in a proprietary protocol.
The implementation relies on a socket listening on a fixed port and on analyzing and managing the raw request and response.
I need to add to this service the ability to handle HTTP requests.
I started using Flask with the idea of letting it manage template rendering and response creation, but I am a bit stuck on the second part.
Right now I managed to make this work with something like this:
with open(template_file) as f:
    template = f.read()

app = flask.Flask('my app')  # create a context to render the Response
with app.app_context():
    context = {'title': 'mytitle',
               'other_info': '.....'}
    rendered = flask.render_template_string(template, **context)
    response = flask.make_response(rendered)

answer = f'''HTTP/1.0 200 OK\nContent-Type: text/html\n\n {rendered} \n\n'''
sock.sendall(answer.encode())
sock.close()
In this case make_response creates a Response instance from which I can get the rendered HTML, but going from the Response to the raw HTTP is my problem.
To solve this I have manually added a header, but I think there is a better way to do this in Flask that I can't figure out.
To make the question more general: how can a Flask web application coexist with other services? At which point do I have to take control of the process?
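One option, sketched below rather than a definitive answer, is to let Flask/Werkzeug build the Response as you already do and then serialize that object yourself instead of hand-writing the status line and headers (this assumes HTTP/1.0 with the connection closed afterwards, as in your snippet):

def response_to_raw_http(response):
    # Build the raw HTTP message from a flask.Response:
    # status line, headers, blank line, then the body bytes.
    status_line = "HTTP/1.0 %s\r\n" % response.status
    header_block = "".join("%s: %s\r\n" % (k, v)
                           for k, v in response.headers.items())
    return status_line.encode() + header_block.encode() + b"\r\n" + response.get_data()

sock.sendall(response_to_raw_http(response))
sock.close()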
I need to make a simple API using Python.
There are many tutorials for making a REST API with Django REST framework, but I don't need a REST service, I just need to be able to process POST requests.
How can I do that? I'm new to Python.
Thank you!
You can use the HTTPServer module along with SimpleHTTPRequestHandler to create a simple web server that serves your GET and POST requests:
from http.server import HTTPServer, SimpleHTTPRequestHandler

class GetHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        SimpleHTTPRequestHandler.do_GET(self)

    def do_POST(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        # Read the request body so it is consumed.
        self.data_string = self.rfile.read(int(self.headers['Content-Length']))
        data = b'<html><body><h1>POST!</h1></body></html>'
        self.wfile.write(data)

Handler = GetHandler
httpd = HTTPServer(("localhost", 8080), Handler)
httpd.serve_forever()
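To try the POST handler from another Python shell, something like this should do (the URL simply mirrors the host/port used above):

from urllib.request import Request, urlopen

req = Request("http://localhost:8080", data=b"hello=world")  # sending data makes it a POST
with urlopen(req) as resp:
    print(resp.status, resp.read())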
Well, if you don't need the whole DRF stack, then just don't use it.
Django is built around views which take HTTP requests (whatever the verb - POST, GET, etc.) and return HTTP responses (which can be HTML, JSON, text, CSV, binary data, whatever), and which are mapped to URLs, so all you have to do is write your views and map them to URLs.
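A minimal sketch of that (the view name and URL path are placeholders, and csrf_exempt is only used to keep the example short; you may not want it in real code):

# views.py
import json
from django.http import HttpResponseNotAllowed, JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def api_view(request):
    if request.method != "POST":
        return HttpResponseNotAllowed(["POST"])
    payload = json.loads(request.body or "{}")  # plain POST body, no DRF needed
    return JsonResponse({"received": payload})

# urls.py (import api_view from the app's views module)
from django.urls import path

urlpatterns = [
    path("api/", api_view),
]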
I would like to know if it is possible to enable gzip compression for Server-Sent Events (SSE; Content-Type: text/event-stream).
It seems it is possible, according to this book: http://chimera.labs.oreilly.com/books/1230000000545/ch16.html
But I can't find any example of SSE with gzip compression. I tried to send gzipped messages with the response header field Content-Encoding set to "gzip", without success.
For experimenting with SSE, I am testing a small web application made in Python with the bottle framework + gevent; I am just running the bottle WSGI server:
@bottle.get('/data_stream')
def stream_data():
    bottle.response.content_type = "text/event-stream"
    bottle.response.add_header("Connection", "keep-alive")
    bottle.response.add_header("Cache-Control", "no-cache")
    bottle.response.add_header("Content-Encoding", "gzip")
    while True:
        # new_data is a gevent AsyncResult object;
        # .get() just returns a data string when new
        # data is available
        data = new_data.get()
        yield zlib.compress("data: %s\n\n" % data)
        #yield "data: %s\n\n" % data
The code without compression (last line, commented out) and without the gzip Content-Encoding header field works like a charm.
EDIT: thanks to the reply and to this other question: Python: Creating a streaming gzip'd file-like?, I managed to solve the problem:
#bottle.route("/stream")
def stream_data():
compressed_stream = zlib.compressobj()
bottle.response.content_type = "text/event-stream"
bottle.response.add_header("Connection", "keep-alive")
bottle.response.add_header("Cache-Control", "no-cache, must-revalidate")
bottle.response.add_header("Content-Encoding", "deflate")
bottle.response.add_header("Transfer-Encoding", "chunked")
while True:
data = new_data.get()
yield compressed_stream.compress("data: %s\n\n" % data)
yield compressed_stream.flush(zlib.Z_SYNC_FLUSH)
TL;DR: If the requests are not cached, you likely want to use zlib and declare the Content-Encoding to be 'deflate'. That change alone should make your code work.
If you declare the Content-Encoding to be gzip, you need to actually use gzip. They are based on the same compression algorithm, but gzip has some extra framing. This works, for example:
import gzip
import StringIO
from bottle import response, route

@route('/')
def get_data():
    response.add_header("Content-Encoding", "gzip")
    s = StringIO.StringIO()
    with gzip.GzipFile(fileobj=s, mode='w') as f:
        f.write('Hello World')
    return s.getvalue()
That only really makes sense if you use an actual file as a cache, though.
There's also middleware you can use so you don't need to worry about gzipping responses for each of your methods. Here's one I used recently.
https://code.google.com/p/ibkon-wsgi-gzip-middleware/
This is how I used it (I'm using bottle.py with the gevent server)
from gzip_middleware import Gzipper
import bottle
app = Gzipper(bottle.app())
run(app = app, host='0.0.0.0', port=8080, server='gevent')
For this particular library, you can set which types of responses you want to compress by modifying the DEFAULT_COMPRESSABLES variable, for example:
DEFAULT_COMPRESSABLES = set(['text/plain', 'text/html', 'text/css',
'application/json', 'application/x-javascript', 'text/xml',
'application/xml', 'application/xml+rss', 'text/javascript',
'image/gif'])
All responses go through the middleware and get gzipped without modifying your existing code. By default, it compresses responses whose content-type belongs to DEFAULT_COMPRESSABLES and whose content-length is greater than 200 characters.
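If pulling in that middleware is not an option, the underlying idea is small enough to sketch as plain WSGI. Note that this version buffers the whole response and skips Accept-Encoding negotiation and content-type filtering, so it is only an illustration of the mechanism, not a drop-in replacement:

import gzip
import io

class SimpleGzipper(object):
    """Buffer the wrapped app's response and re-emit it gzip-compressed."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        captured = {}
        written = []

        def capture_start_response(status, headers, exc_info=None):
            captured["status"] = status
            captured["headers"] = headers
            return written.append  # legacy write() callable

        app_iter = self.app(environ, capture_start_response)
        body = b"".join(written) + b"".join(app_iter)

        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode="wb") as f:
            f.write(body)
        compressed = buf.getvalue()

        # Replace any existing length/encoding headers with the gzipped values.
        headers = [(k, v) for k, v in captured["headers"]
                   if k.lower() not in ("content-length", "content-encoding")]
        headers.append(("Content-Encoding", "gzip"))
        headers.append(("Content-Length", str(len(compressed))))
        start_response(captured["status"], headers)
        return [compressed]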
I'm using SimpleHTTPServer to test some webpages I'm working on. It works great, but I need to do some cross-domain requests. That requires setting an Access-Control-Allow-Origin header with the domains the page is allowed to access.
Is there an easy way to set a header with SimpleHTTPServer and serve the original content? The header would be the same on each request.
This is a bit of a hack because it changes end_headers() behavior, but I think it's slightly better than copying and pasting the entire SimpleHTTPServer.py file.
My approach overrides end_headers() in a subclass and in it calls send_my_headers() followed by calling the superclass's end_headers().
It's not 1-2 lines either, but it is less than 20, mostly boilerplate.
#!/usr/bin/env python
try:
    from http import server  # Python 3
except ImportError:
    import SimpleHTTPServer as server  # Python 2

class MyHTTPRequestHandler(server.SimpleHTTPRequestHandler):
    def end_headers(self):
        self.send_my_headers()
        server.SimpleHTTPRequestHandler.end_headers(self)

    def send_my_headers(self):
        self.send_header("Access-Control-Allow-Origin", "*")

if __name__ == '__main__':
    server.test(HandlerClass=MyHTTPRequestHandler)
I'd say there's no simple way of doing it, where simple means "just add 1-2 lines that will write the additional header and keep the existing functionality". So, the best solution would be to subclass the SimpleHTTPRequestHandler class and re-implement the functionality, with the addition of the new header.
The problem behind why there is no simple way of doing this can be observed by looking at the implementation of the SimpleHTTPRequestHandler class in the Python library: http://hg.python.org/cpython/file/19c74cadea95/Lib/http/server.py#l654
Notice the send_head() method, particularly the lines at the end of the method which send the response headers. Notice the invocation of the end_headers() method. This method writes the headers to the output, together with a blank line which signals the end of all headers and the start of the response body: http://docs.python.org/py3k/library/http.server.html#http.server.BaseHTTPRequestHandler.end_headers
Therefore, it would not be possible to subclass the SimpleHTTPRequestHandler handler, invoke the super-class do_GET() method, and then just add another header -- because the sending of the headers has already finished when the call to the super-class do_GET() method returns. And it has to work like this because the do_GET() method has to send the body (the file that is requested), and to send the body - it has to finalize sending the headers.
So, again, I think you're stuck with sub-classing the SimpleHTTPRequestHandler class, implementing it exactly as the code in the library (just copy-paste it?), and adding another header before the call to the end_headers() method in send_head():
...
self.send_header("Last-Modified", self.date_time_string(fs.st_mtime))
# this below is the new header
self.send_header('Access-Control-Allow-Origin', '*')
self.end_headers()
return f
...
# coding: utf-8
import SimpleHTTPServer
import SocketServer

PORT = 9999

def do_GET(self):
    self.send_response(200)
    self.send_header('Access-Control-Allow-Origin', 'http://example.com')
    self.end_headers()

Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
Handler.do_GET = do_GET

httpd = SocketServer.TCPServer(("", PORT), Handler)
httpd.serve_forever()
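To check that the header actually comes back, a quick client-side probe (Python 2 urllib2, matching the server code above) might be:

import urllib2

resp = urllib2.urlopen("http://localhost:9999/")
# Should print: http://example.com
print resp.info().getheader("Access-Control-Allow-Origin")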
While this is an older answer, it's the first result in Google...
Basically what @iMon0 suggested seems correct. Example of do_POST:
def do_POST(self):
    self.send_response(200)
    self.send_header('Content-type', 'application/json')
    self.send_header('Access-Control-Allow-Origin', '*')
    self.end_headers()
    sTest = {}
    sTest['dummyitem'] = "Just an example of JSON"
    self.wfile.write(json.dumps(sTest))  # requires `import json` at module level
By doing this, the flow feels correct:
1: You get a request.
2: You apply the headers and response type you want.
3: You send back whatever data you want, in whatever form you want.
The above example is working fine for me and can be extended further; it's just a bare-bones JSON POST server. So I'll leave this here on SO in case someone needs it, or in case I come back for it myself in a few months.
This does produce a valid JSON document containing only the sTest object, the same as a PHP-generated page/file.
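For completeness, a matching client call could look like this (Python 2 urllib2; it assumes the handler above is wired into an HTTPServer listening on localhost:8080):

import json
import urllib2

req = urllib2.Request("http://localhost:8080/",
                      data=json.dumps({"ping": "pong"}),
                      headers={"Content-Type": "application/json"})
print urllib2.urlopen(req).read()  # prints the server's JSON body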