What is the correct way to stream a JSON response from Django? E.g. how would I use `StreamingHttpResponse` for the following:
import json
from django import http

def sample_json_view(request):
    data = { ... }
    resp = http.HttpResponse(content_type="application/json")
    json.dump(data, resp)
    return resp
My goal is to use StreamingHttpResponse to minimize latency in the view. Does StreamingHttpResponse offer any benefit over HttpResponse when we are writing to the response like this?
Or: does writing to the file-like HttpResponse object (e.g. with json.dump) make Django / mod_wsgi buffer the whole response on the server before it starts streaming it to the client? (That would increase latency, since the whole JSON response must be generated first.)
This depends on how your data is being generated, and on whether you need to start sending content before all of the data has been generated. The Django docs seem to discourage this, saying "StreamingHttpResponse should only be used in situations where it is absolutely required that the whole content isn’t iterated before transferring the data to the client."
For an example of how to correctly use StreamingHttpResponse, see Django 1.5 - using the new StreamingHttpResponse
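As a rough sketch of what that can look like (the Item model and its fields below are hypothetical, not from the question), the view hands StreamingHttpResponse a generator, and each chunk is sent as soon as it is yielded instead of being buffered into one HttpResponse:

import json
from django.http import StreamingHttpResponse

def stream_json_view(request):
    def json_chunks():
        # Emit the JSON piece by piece so nothing is buffered server-side.
        yield '{"items": ['
        first = True
        for item in Item.objects.iterator():  # Item is a hypothetical model
            if not first:
                yield ','
            first = False
            yield json.dumps({"id": item.pk, "name": item.name})
        yield ']}'

    return StreamingHttpResponse(json_chunks(), content_type="application/json")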
Related
I have a Python REST server that downloads and writes a temporary file using Python's tempfile module.
That is, at request time I have a file in the server's filesystem, but it is not permanent, so the client cannot access it statically (i.e. via http://myserver/path-to-my-file). I need a way to hit an endpoint and get a file returned based on the request (i.e. http://myserver/myendpoint/myrequestparameters).
How does that work over HTTP? Is it possible?
(For context: right now I am serving the file encoded as a string using base64 encoding and UTF-8 decoding, but my frontend application needs a direct download link.)
I believe there's a dedicated response type for exactly this in Django. Assuming send_file is your endpoint:
from django.http import FileResponse

def send_file(request):
    img = open('my_image_file.jpg', 'rb')
    response = FileResponse(img)
    return response
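To expose it at an endpoint like http://myserver/myendpoint/, wire the view into your URLconf; the route name below is just an example and assumes Django 2.0+ style routing:

# urls.py -- 'myendpoint/' is only an illustrative path
from django.urls import path
from .views import send_file

urlpatterns = [
    path('myendpoint/', send_file),
]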
I've solved my issue, but I'd like to know what was going wrong so I can address it in the future. I'm having issues decoding incoming JSON for use in my Flask application.
The code that sends it in Angular:
$http.post("/login", JSON.stringify($scope.loginForm))
.success(function(data, status, headers, config) {
console.log(data);
})
.error(function(data, status, headers, config) {
console.log("Submitting form failed!");
});
It's important to note that the request Content-Type is set to application/json earlier on, with
$http.defaults.headers.post["Content-Type"] = "application/json";
The code that receives it within Flask:
data = request.get_json()
email_address = data.get("email_address")
password = data.get("password")
Attempting to load it this way returns an error 400, but any other way leads to some very strange issues. For example:
return json.dumps(request.get_json())
Will log {"password": "password", "email_address": "email@email.com"} in the console, but attempting to do this:
data = request.get_json()
email_address = data.get("email_address")
password = data.get("password")
With no difference whatsoever between this and the first block of code (except that I'm not forcing it), I receive the exception "ValueError: need more than 1 value to unpack", which implies that there aren't two values to unpack.
HOWEVER, they both work individually. If I do the above request and omit either of the data.get() lines above, the other will work.
What about my setup causes my JSON object to disintegrate the first time it's accessed?
I got around this by using request.json instead of request.get_json() but as request.json is being deprecated it's fairly important I know how to solve this in the future. Any pointers would be appreciated!
You can omit JSON.stringify and pass the object directly to the $http.post() method, because Angular will serialize it to JSON automatically if the data is an object. So I assume that JSON.stringify forces Angular to send it as x-www-form-urlencoded instead of the application/json media type.
See the "Default Transformations" section of the Angular $http service documentation.
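If you also want to confirm on the Flask side what actually arrived, a small sketch like the following (the /login route just mirrors the question) uses get_json(silent=True) so a non-JSON body is reported instead of triggering the hard 400:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    # silent=True returns None instead of raising a 400 when the body is not
    # valid JSON or the Content-Type header is wrong, so we can report it.
    data = request.get_json(silent=True)
    if data is None:
        return jsonify(error="expected an application/json body"), 400
    return jsonify(email_address=data.get("email_address"))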
I would like to know if it is possible to enable gzip compression for Server-Sent Events (SSE; Content-Type: text/event-stream).
It seems it is possible, according to this book:
http://chimera.labs.oreilly.com/books/1230000000545/ch16.html
But I can't find any example of SSE with gzip compression. I tried to send gzipped messages with the response header field Content-Encoding set to "gzip", without success.
To experiment with SSE, I am testing a small web application written in Python with the bottle framework + gevent; I am just running the bottle WSGI server:
@bottle.get('/data_stream')
def stream_data():
    bottle.response.content_type = "text/event-stream"
    bottle.response.add_header("Connection", "keep-alive")
    bottle.response.add_header("Cache-Control", "no-cache")
    bottle.response.add_header("Content-Encoding", "gzip")
    while True:
        # new_data is a gevent AsyncResult object,
        # .get() just returns a data string when new
        # data is available
        data = new_data.get()
        yield zlib.compress("data: %s\n\n" % data)
        #yield "data: %s\n\n" % data
The code without compression (last line, commented out) and without the gzip Content-Encoding header field works like a charm.
EDIT: thanks to the reply and to this other question: Python: Creating a streaming gzip'd file-like?, I managed to solve the problem:
@bottle.route("/stream")
def stream_data():
    compressed_stream = zlib.compressobj()
    bottle.response.content_type = "text/event-stream"
    bottle.response.add_header("Connection", "keep-alive")
    bottle.response.add_header("Cache-Control", "no-cache, must-revalidate")
    bottle.response.add_header("Content-Encoding", "deflate")
    bottle.response.add_header("Transfer-Encoding", "chunked")
    while True:
        data = new_data.get()
        yield compressed_stream.compress("data: %s\n\n" % data)
        yield compressed_stream.flush(zlib.Z_SYNC_FLUSH)
TL;DR: If the requests are not cached, you likely want to use zlib and declare Content-Encoding to be 'deflate'. That change alone should make your code work.
If you declare Content-Encoding to be gzip, you need to actually use gzip. They are based on the same compression algorithm, but gzip has some extra framing. This works, for example:
import gzip
import StringIO
from bottle import response, route

@route('/')
def get_data():
    response.add_header("Content-Encoding", "gzip")
    s = StringIO.StringIO()
    with gzip.GzipFile(fileobj=s, mode='w') as f:
        f.write('Hello World')
    return s.getvalue()
That only really makes sense if you use an actual file as a cache, though.
There's also middleware you can use so you don't need to worry about gzipping responses for each of your methods. Here's one I used recently.
https://code.google.com/p/ibkon-wsgi-gzip-middleware/
This is how I used it (I'm using bottle.py with the gevent server):
from gzip_middleware import Gzipper
import bottle

app = Gzipper(bottle.app())
bottle.run(app=app, host='0.0.0.0', port=8080, server='gevent')
For this particular library, you can set which types of responses you want to compress by modifying the DEFAULT_COMPRESSABLES variable, for example:
DEFAULT_COMPRESSABLES = set(['text/plain', 'text/html', 'text/css',
'application/json', 'application/x-javascript', 'text/xml',
'application/xml', 'application/xml+rss', 'text/javascript',
'image/gif'])
All responses go through the middleware and get gzipped without modifying your existing code. By default, it compresses responses whose content-type belongs to DEFAULT_COMPRESSABLES and whose content-length is greater than 200 characters.
I am trying to write a file sharing application that exposes a REST interface.
The library I am using, Flask-RESTful, only supports returning JSON by default. Obviously attempting to serve binary data over JSON is not a good idea at all.
What is the most "RESTful" way of serving up binary data through a GET method? It appears possible to extend Flask-RESTful to support returning different data representations besides JSON but the documentation is scarce and I'm not sure if it's even the best approach.
The approach suggested in the Flask-RESTful documentation is to declare our supported representations on the Api object so that it can support other mediatypes. The mediatype we are looking for is application/octet-stream.
First, we need to write a representation function:
from flask import Flask, send_file, safe_join
from flask_restful import Api

app = Flask(__name__)
api = Api(app)

@api.representation('application/octet-stream')
def output_file(data, code, headers):
    filepath = safe_join(data["directory"], data["filename"])
    response = send_file(
        filename_or_fp=filepath,
        mimetype="application/octet-stream",
        as_attachment=True,
        attachment_filename=data["filename"]
    )
    return response
What this representation function does is convert the data, code, and headers our method returns into a Response object with mimetype application/octet-stream. Here we use the send_file function to construct that Response object.
Our GET method can be something like:
from flask_restful import Resource

class GetFile(Resource):
    def get(self, filename):
        return {
            "directory": <Our file directory>,
            "filename": filename
        }
And that's all the coding we need. When sending the GET request, we need to set the Accept mimetype to application/octet-stream so that our API will call the representation function. Otherwise it will return the JSON data, as it does by default.
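For example, with the requests library a client could ask for the binary representation like this (the URL and route are placeholders for however the GetFile resource is registered):

import requests

# The Accept header selects the application/octet-stream representation.
resp = requests.get(
    "http://localhost:5000/files/report.pdf",
    headers={"Accept": "application/octet-stream"},
)
with open("report.pdf", "wb") as f:
    f.write(resp.content)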
There's an XML example on GitHub.
I know this question was asked 7 years ago, so it probably doesn't matter any more to @Ayrx. Hope it helps whoever drops by.
As long as you're setting the Content-Type header accordingly and respecting the Accept header sent by the client, you're free to return any format you want. You can just have a view that returns your binary data with the application/octet-stream content type.
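A minimal sketch of that idea in plain Flask (the directory and route names are only examples, not part of the original answer):

from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route("/download/<name>")
def download(name):
    # send_from_directory guards against path traversal; forcing
    # application/octet-stream makes clients treat the body as a download.
    return send_from_directory("files", name,
                               mimetype="application/octet-stream",
                               as_attachment=True)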
After a lot of trial and error, including hours of browsing, this is how I made the Response class act as a single-responsibility downloader:
import json
from flask import make_response
from flask_restful import Resource

class DownloadResource(Resource):
    def get(self):
        item_list = dbmodel.query.all()  # dbmodel is your database model class
        item_list = [item.image for item in item_list]
        data = json.dumps({'items': item_list})
        response = make_response(data)
        response.headers['Content-Type'] = 'text/json'
        response.headers['Content-Disposition'] = 'attachment; filename=selected_items.json'
        return response
Change your filename and content type to support the format you want.
I want to download a file to my server and automatically send it to online storage (Minus or Dropbox) via the Minus or Dropbox API, without saving the downloaded file on my server. So it's like streaming, or piping, the HTTP connection. Right now I'm using the minus.com API, but it requires a file object or a local file as a parameter. I can't figure out how to convert an HTTP response to a file object.
Is it possible to do this? If so, how?
Concept:
FILE_ON_ANOTHER_SERVER ----(http)---> MY_SERVER ----(http)----> ONLINE_STORAGE
thanks
You can get the data from a response via the read() method:
import urllib2

response = urllib2.urlopen(request)  # 'request' is a URL string or a urllib2.Request
data = response.read()
The variable data has the binary data from the response.
Now you can create a StringIO object, which exposes the data as a file-like object.
import StringIO

datastream = StringIO.StringIO()
datastream.write(data)
datastream.seek(0)

# create your dropbox client here (omitted), then upload the file-like object
client.put_file('/test', datastream)
urllib2.urlopen(url) will return a file-like object. Can you pass that directly to your Minus API? See the urllib2 docs at
http://docs.python.org/library/urllib2
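If the storage client accepts any file-like object (the legacy Dropbox client's put_file does, for example), you may be able to skip StringIO entirely and pass the response straight through. A rough sketch, assuming client is an already authenticated client and the URL is a placeholder:

import urllib2

# Stream the remote file straight into the upload call; no temporary file
# is written on the server. 'client' is assumed to be an authenticated
# storage client whose put_file accepts a file-like object.
response = urllib2.urlopen("http://example.com/path-to-my-file")
client.put_file('/test', response)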