I have a Pandas dataframe in my Flask app that I want to return as a CSV file.
return Response(df.to_csv())
The problem is that the output appears in the browser instead of downloading as a separate file. How can I change that?
I tried the following as well but it just gave empty output.
response = make_response(df.to_csv())
response.headers['Content-Type'] = 'text/csv'
return Response(response)
Set the Content-Disposition header to tell the browser to download the file instead of displaying its content in the page.
resp = make_response(df.to_csv())
resp.headers["Content-Disposition"] = "attachment; filename=export.csv"
resp.headers["Content-Type"] = "text/csv"
return resp
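For reference, a complete route might look like the sketch below; the route name and DataFrame contents are placeholders for illustration, not part of the question.
from flask import Flask, make_response
import pandas as pd

app = Flask(__name__)

@app.route("/export")
def export():
    # placeholder data; build the DataFrame however your app does
    df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
    resp = make_response(df.to_csv(index=False))
    resp.headers["Content-Disposition"] = "attachment; filename=export.csv"
    resp.headers["Content-Type"] = "text/csv"
    return resp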
This is essentially the same solution, but you can pass the headers directly to Response:
return Response(
    df.to_csv(),
    mimetype="text/csv",
    headers={"Content-Disposition": "attachment; filename=filename.csv"})
Set the Content-Disposition header and use StringIO to write the dataframe into an in-memory buffer. Below is the code to achieve it:
import StringIO

excel_file = StringIO.StringIO()
filename = "%s.csv" % 'output'
df.to_csv(excel_file, encoding='utf-8')
csv_output = excel_file.getvalue()
excel_file.close()
resp = make_response(csv_output)
resp.headers["Content-Disposition"] = "attachment; filename=%s" % filename
resp.headers["Content-Type"] = "text/csv"
return resp
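Note that StringIO.StringIO is Python 2 only; on Python 3 the equivalent buffer comes from the io module. A minimal sketch under that assumption, reusing the same headers:
import io

excel_file = io.StringIO()
df.to_csv(excel_file)
resp = make_response(excel_file.getvalue())
resp.headers["Content-Disposition"] = "attachment; filename=output.csv"
resp.headers["Content-Type"] = "text/csv"
return resp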
I'm trying to download a file via a URL. The download itself works, but the problem is that when I try to open the file, it has 0 bytes.
Has anyone run into this problem, or have any idea where it could come from?
This is my code:
def download():
    file_name = "/opt/static/avatar/20/mouse.png"
    response = HttpResponse(content_type='application/force-download')
    response['Content-Disposition'] = 'attachment; filename=%s' % smart_str(file_name)
    response['X-Sendfile'] = smart_str(path)
    return response
Nginx does not support the X-Sendfile header. You must use X-Accel-Redirect instead:
response['X-Accel-Redirect'] = smart_str(path)
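X-Accel-Redirect only works together with a matching internal location block in the nginx configuration. A rough sketch of both sides; the /protected/ prefix and the alias path are assumptions, not something from the question:
# Assumed nginx configuration:
#   location /protected/ {
#       internal;
#       alias /opt/static/;
#   }
from django.http import HttpResponse
from django.utils.encoding import smart_str

def download(request):
    rel_path = "avatar/20/mouse.png"  # path relative to the aliased directory
    response = HttpResponse(content_type='application/force-download')
    response['Content-Disposition'] = 'attachment; filename=%s' % smart_str("mouse.png")
    # nginx strips the /protected/ prefix and serves /opt/static/avatar/20/mouse.png itself
    response['X-Accel-Redirect'] = smart_str("/protected/" + rel_path)
    return response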
I have always served the files I want to make downloadable in Django like this. In my project I used this code, for example:
def keyDownload(request, benutzername):
    benutzernameKey = benutzername + ".key"
    fsock = open('/var/www/openvpn/examples/easy-rsa/2.0/keys/' + benutzernameKey, 'r')
    response = HttpResponse(fsock, mimetype='application/pgp-keys')
    response['Content-Disposition'] = "attachment; filename=%s" % benutzernameKey
    return response
I have a PDF file which I fetch through urllib2:
url = "http://www.urltomypdf.com"
sock = urllib2.urlopen(url)
with open('report.pdf', 'wb') as f:
while True:
content = sock.read()
if not content: break
f.write(content)
At the moment I am saving the PDF to a file called report.pdf, but my aim is to render it directly in my template with a function in Django. Is that possible?
With the introduction of Django 1.5, the StreamingHttpResponse class has been made available to stream a response based on an iterator. Your view and iterator could look like this:
def stream_pdf(url, chunk_size=8192):
    sock = urllib2.urlopen(url)
    while True:
        content = sock.read(chunk_size)
        if not content: break
        yield content

def external_pdf_view(request, *args, **kwargs):
    url = <url>  # specify the url here
    response = StreamingHttpResponse(stream_pdf(url), content_type="application/pdf")
    response['Content-Disposition'] = "filename='%s'" % <filename>  # specify the filename here
    return response
Pre Django 1.5, it is still possible to stream a response by passing an iterator to HttpResponse, but there are several caveats. First, you need to use the @condition(etag_func=None) decorator on your view function. Secondly, some middleware can prevent a properly streamed response, so you'll need to bypass that middleware. And finally, a chunk of content only gets sent when it reaches a length of 1024 bytes, so chunk_size should be over 1024.
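A minimal sketch of that pre-1.5 approach, reusing the stream_pdf iterator above; the URL and filename are placeholders, and the middleware caveat still has to be handled separately:
from django.http import HttpResponse
from django.views.decorators.http import condition

@condition(etag_func=None)
def external_pdf_view_pre15(request, *args, **kwargs):
    url = "http://www.example.com/report.pdf"  # placeholder URL
    # chunk_size must stay above 1024 bytes, per the caveat above
    response = HttpResponse(stream_pdf(url, chunk_size=8192), content_type="application/pdf")
    response['Content-Disposition'] = "filename='report.pdf'"  # placeholder filename
    return response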
I need to associate a link (such as "Protocol") that leads to downloading a PDF file. The URL mapping gives a 404 error, and I would like some help with the view.
URL (Protocol):
urlpatterns += patterns('suap.views',
    (r'^manuais/$', 'manuais'),
    (r'^static/manuais/manual_protocolo.pdf$', 'manual_pdf'))
View (Protocol):
def manual_pdf(request):
    response = HttpResponse(extension='.pdf')
    response['Content-Disposition'] = 'attachment; filename="manual_protocolo.pdf"' % manual_protocolo
    return response
You have to use the mod_xsendfile module on your server. Also make a small modification to the code:
response = HttpResponse(mimetype='application/pdf')
response['Content-Disposition'] = 'attachment; filename=%s' % smart_str('manual_protocolo.pdf')
response['X-Sendfile'] = "/path/to/manual_protocolo.pdf"
return response
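If you cannot enable mod_xsendfile, a simpler (though less efficient) fallback for small files is to read the file into the response yourself. A minimal sketch, assuming the PDF lives at the path shown:
from django.http import HttpResponse

def manual_pdf(request):
    path = "/path/to/manual_protocolo.pdf"  # assumed location of the file
    with open(path, 'rb') as f:
        response = HttpResponse(f.read(), mimetype='application/pdf')
    response['Content-Disposition'] = 'attachment; filename="manual_protocolo.pdf"'
    return response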
Problem: When POSTing data with Python's urllib2, all data is URL encoded and sent as Content-Type: application/x-www-form-urlencoded. When uploading files, the Content-Type should instead be set to multipart/form-data and the contents be MIME-encoded.
To get around this limitation some sharp coders created a library called MultipartPostHandler which creates an OpenerDirector you can use with urllib2 to mostly automatically POST with multipart/form-data. A copy of this library is here: MultipartPostHandler doesn't work for Unicode files
I am new to Python and am unable to get this library to work. I wrote out essentially the following code. When I capture it in a local HTTP proxy, I can see that the data is still URL encoded and not multi-part MIME-encoded. Please help me figure out what I am doing wrong or a better way to get this done. Thanks :-)
FROM_ADDR = 'my#email.com'
try:
    data = open(file, 'rb').read()
except:
    print "Error: could not open file %s for reading" % file
    print "Check permissions on the file or folder it resides in"
    sys.exit(1)
# Build the POST request
url = "http://somedomain.com/?action=analyze"
post_data = {}
post_data['analysisType'] = 'file'
post_data['executable'] = data
post_data['notification'] = 'email'
post_data['email'] = FROM_ADDR
# MIME encode the POST payload
opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler)
urllib2.install_opener(opener)
request = urllib2.Request(url, post_data)
request.set_proxy('127.0.0.1:8080', 'http') # For testing with Burp Proxy
# Make the request and capture the response
try:
    response = urllib2.urlopen(request)
    print response.geturl()
except urllib2.URLError, e:
    print "File upload failed..."
EDIT1: Thanks for your response. I'm aware of the ActiveState httplib solution to this (I linked to it above). I'd rather abstract away the problem and use a minimal amount of code to continue using urllib2 how I have been. Any idea why the opener isn't being installed and used?
It seems that the easiest and most compatible way to get around this problem is to use the 'poster' module.
# test_client.py
from poster.encode import multipart_encode
from poster.streaminghttp import register_openers
import urllib2
# Register the streaming http handlers with urllib2
register_openers()
# Start the multipart/form-data encoding of the file "DSC0001.jpg"
# "image1" is the name of the parameter, which is normally set
# via the "name" parameter of the HTML <input> tag.
# headers contains the necessary Content-Type and Content-Length
# datagen is a generator object that yields the encoded parameters
datagen, headers = multipart_encode({"image1": open("DSC0001.jpg")})
# Create the Request object
request = urllib2.Request("http://localhost:5000/upload_image", datagen, headers)
# Actually do the request, and get the response
print urllib2.urlopen(request).read()
This worked perfectly and I didn't have to muck around with httplib. The module is available here:
http://atlee.ca/software/poster/index.html
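If you also need ordinary form fields next to the file, multipart_encode should take them in the same dict; the extra field name below is just illustrative:
datagen, headers = multipart_encode({
    "image1": open("DSC0001.jpg", "rb"),
    "description": "my photo",  # illustrative plain form field
})
request = urllib2.Request("http://localhost:5000/upload_image", datagen, headers)
print urllib2.urlopen(request).read()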
Found this recipe for posting multipart form data using httplib directly (no external libraries involved):
import httplib
import mimetypes
def post_multipart(host, selector, fields, files):
    content_type, body = encode_multipart_formdata(fields, files)
    h = httplib.HTTP(host)
    h.putrequest('POST', selector)
    h.putheader('content-type', content_type)
    h.putheader('content-length', str(len(body)))
    h.endheaders()
    h.send(body)
    errcode, errmsg, headers = h.getreply()
    return h.file.read()

def encode_multipart_formdata(fields, files):
    LIMIT = '----------lImIt_of_THE_fIle_eW_$'
    CRLF = '\r\n'
    L = []
    for (key, value) in fields:
        L.append('--' + LIMIT)
        L.append('Content-Disposition: form-data; name="%s"' % key)
        L.append('')
        L.append(value)
    for (key, filename, value) in files:
        L.append('--' + LIMIT)
        L.append('Content-Disposition: form-data; name="%s"; filename="%s"' % (key, filename))
        L.append('Content-Type: %s' % get_content_type(filename))
        L.append('')
        L.append(value)
    L.append('--' + LIMIT + '--')
    L.append('')
    body = CRLF.join(L)
    content_type = 'multipart/form-data; boundary=%s' % LIMIT
    return content_type, body

def get_content_type(filename):
    return mimetypes.guess_type(filename)[0] or 'application/octet-stream'
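Calling the recipe might look like this; the host, selector, field names and file are made up for illustration, mirroring the OP's form:
fields = [("analysisType", "file"), ("notification", "email"), ("email", "my#email.com")]
files = [("executable", "sample.exe", open("sample.exe", "rb").read())]
print post_multipart("somedomain.com", "/?action=analyze", fields, files)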
Just use python-requests; it will set the proper headers and do the upload for you:
import requests
files = {"form_input_field_name": open("filename", "rb")}
requests.post("http://httpbin.org/post", files=files)
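If you also need to send the OP's other form fields, requests takes them through the data argument; the field names are copied from the question and the URL is the question's placeholder:
import requests

files = {"executable": open("filename", "rb")}
data = {"analysisType": "file", "notification": "email", "email": "my#email.com"}
requests.post("http://somedomain.com/?action=analyze", data=data, files=files)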
I ran into the same problem and I needed to do a multipart form post without using external libraries. I wrote a whole blogpost about the issues I ran into.
I ended up using a modified version of http://code.activestate.com/recipes/146306/. The code at that URL actually just appends the content of the file as a string, which can cause problems with binary files. Here's my working code.
import mimetools
import mimetypes
import io
import os
import codecs
import socket
import urlparse
import httplib
import json
form = MultiPartForm()
form.add_field("form_field", "my awesome data")

# Add a fake file
form.add_file(key, os.path.basename(filepath),
              fileHandle=codecs.open("/path/to/my/file.zip", "rb"))

# Build the request
url = "http://www.example.com/endpoint"
schema, netloc, url, params, query, fragments = urlparse.urlparse(url)

try:
    form_buffer = form.get_binary().getvalue()
    http = httplib.HTTPConnection(netloc)
    http.connect()
    http.putrequest("POST", url)
    http.putheader('Content-type', form.get_content_type())
    http.putheader('Content-length', str(len(form_buffer)))
    http.endheaders()
    http.send(form_buffer)
except socket.error, e:
    raise SystemExit(1)

r = http.getresponse()
if r.status == 200:
    return json.loads(r.read())
else:
    print('Upload failed (%s): %s' % (r.status, r.reason))
class MultiPartForm(object):
    """Accumulate the data to be used when posting a form."""

    def __init__(self):
        self.form_fields = []
        self.files = []
        self.boundary = mimetools.choose_boundary()
        return

    def get_content_type(self):
        return 'multipart/form-data; boundary=%s' % self.boundary

    def add_field(self, name, value):
        """Add a simple field to the form data."""
        self.form_fields.append((name, value))
        return

    def add_file(self, fieldname, filename, fileHandle, mimetype=None):
        """Add a file to be uploaded."""
        body = fileHandle.read()
        if mimetype is None:
            mimetype = mimetypes.guess_type(filename)[0] or 'application/octet-stream'
        self.files.append((fieldname, filename, mimetype, body))
        return

    def get_binary(self):
        """Return a binary buffer containing the form data, including attached files."""
        part_boundary = '--' + self.boundary

        binary = io.BytesIO()
        needsCRLF = False

        # Add the form fields
        for name, value in self.form_fields:
            if needsCRLF:
                binary.write('\r\n')
            needsCRLF = True
            block = [part_boundary,
                     'Content-Disposition: form-data; name="%s"' % name,
                     '',
                     value
                     ]
            binary.write('\r\n'.join(block))

        # Add the files to upload
        for field_name, filename, content_type, body in self.files:
            if needsCRLF:
                binary.write('\r\n')
            needsCRLF = True
            block = [part_boundary,
                     str('Content-Disposition: file; name="%s"; filename="%s"' %
                         (field_name, filename)),
                     'Content-Type: %s' % content_type,
                     ''
                     ]
            binary.write('\r\n'.join(block))
            binary.write('\r\n')
            binary.write(body)

        # Add the closing boundary marker
        binary.write('\r\n--' + self.boundary + '--\r\n')
        return binary
What a coincidence: 2 years and 6 months ago I created the project
https://pypi.python.org/pypi/MultipartPostHandler2, which fixes MultipartPostHandler for utf-8 systems. I have also made some minor improvements; you are welcome to test it :)
To answer the OP's question of why the original code didn't work: the handler passed in wasn't an instance of the class. The line
# MIME encode the POST payload
opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler)
should read
opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler())