I want to render a matplotlib plot in one part of the code and then serve it to the user from a ViewSet without saving it to disk. I tried using the io library to keep the file in memory, but something always seems to go wrong.
My code where I save the plot to disk:
def some_func(self):
    # ... generating plot ...
    filename = self.generate_path()  # generate random name for file
    plt.savefig(filename, transparent=True)
    return filename
Code of ViewSet:
class SomeViewsSet(ViewSet):
    def create(self, request):
        # ... some code ...
        path = some_func()
        name = path.split('/')[-1]
        with open(path, 'rb') as file:
            response = HttpResponse(file, content_type=guess_type(path)[0])
            response['Content-Length'] = len(response.content)
            response['Content-Disposition'] = f'attachment; filename={name}'
            return response
Make sure that you pass matplotlib a BytesIO object, and not a StringIO. Then get the bytes using getvalue(), and pass them to HttpResponse. If that's what you've already tried, please post your code and the error message you're seeing.
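For instance, a minimal sketch of that approach (the helper name and the PNG format are assumptions, not taken from the question):

import io

import matplotlib.pyplot as plt
from django.http import HttpResponse

def render_plot_response():
    # ... generate the plot as before ...
    buffer = io.BytesIO()  # in-memory file instead of a path on disk
    plt.savefig(buffer, format='png', transparent=True)
    plt.close()  # free the figure once it has been rendered
    data = buffer.getvalue()  # the raw PNG bytes
    response = HttpResponse(data, content_type='image/png')
    response['Content-Length'] = len(data)
    response['Content-Disposition'] = 'attachment; filename=plot.png'
    return response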
Currently I am exporting an Excel file; however, before it gets exported, I create an xls file on the host machine. To create the Excel file, I use tablib. My export view looks like this:
@login_required
def export_xls(request):
    # some irrelevant code
    data = convert_json_to_dataset(json_data)
    table = data.export('xls')
    with open('/tmp/students.xls', 'wb') as f:
        f.write(table)
    response = FileResponse(open('/tmp/students.xls', 'rb'), as_attachment=True, filename="test.xls")
    return response
What I am trying to achieve is to avoid always writing to /tmp/students.xls. I tried using BytesIO; however, that did not work out.
@login_required
def export_xls(request):
    # some irrelevant code
    data = convert_json_to_dataset(json_data)
    table = data.export('xls')
    buffer = BytesIO()
    buffer.write(table)
    response = FileResponse(buffer.read(), as_attachment=True, filename="test.xls")
    return response
Currently I am always overwriting the file; however, I will change the naming of the file, and that will cause multiple files to be created, which I would like to avoid.
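A likely cause of the failure is that buffer.read() is called while the stream position is still at the end of the buffer (right after the write), so it yields empty bytes; FileResponse also expects a file-like object rather than a bytes value. A sketch of a possible fix, assuming the same view, is to rewind the buffer and hand the file-like object itself to FileResponse:

from io import BytesIO
from django.http import FileResponse

buffer = BytesIO()
buffer.write(table)
buffer.seek(0)  # rewind; otherwise the response body would be empty
response = FileResponse(buffer, as_attachment=True, filename="test.xls")
return response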
I am working on a legacy project, and we need to implement a Django admin action that lets users download a CSV report that was stored in a BinaryField.
The model is something like this:
class MyModel(models.Model):
    csv_report = models.BinaryField(blank=True, null=True)
Everything seems to be stored as expected, but I have no clue how to decode the field back into a CSV file for later use.
I am using something like this (as an admin action on the MyModelAdmin class):
class MyModelAdmin(admin.ModelAdmin):
    ...
    actions = ["download_file",]

    def download_file(self, request, queryset):
        # just getting one for testing
        contents = queryset[0].csv_report
        encoded_data = base64.b64encode(contents).decode()
        with open("report.csv", "wb") as binary_file:
            # Write bytes to file
            decoded_image_data = base64.decodebytes(encoded_data)
            binary_file.write(decoded_image_data)
        response = HttpResponse(encoded_data)
        response['Content-Disposition'] = 'attachment; filename=report.csv'
        return response

    download_file.short_description = "file"
But all I download is a scrambled CSV file. I can't tell whether it is a problem with the format I am using to decode (.decode('utf-8') does nothing either).
PS:
I know it is bad practice to use a BinaryField for this, but requirements are requirements; nothing to do about it.
EDIT:
As @TimRoberts pointed out, encoding and then decoding is REALLY silly :$. I've changed the method like so:
def download_file(self, request, queryset):
    # print(self, request)
    contents = queryset[0].csv_report
    # print(type(contents))
    encoded_data = base64.b64decode(contents)
    with open("my_file.csv", "wb") as binary_file:
        binary_file.write(encoded_data)
    response = HttpResponse(encoded_data)
    response['Content-Disposition'] = 'attachment; filename=blob.csv'
    return response

download_file.short_description = "file"
Still, I am getting a CSV file with scrambled content.
A big fat case of the old RTFM: I was getting carried away by all the base64 stuff. Obviously I didn't have any idea what I was doing.
After experimenting in the shell and reading the docs, I just changed my method to:
def download_file(self, request, queryset):
    contents = bytes(queryset[0].csv_report)
    response = HttpResponse(contents)
    response['Content-Disposition'] = 'attachment; filename=report.csv'
    return response
Note that I was scrambling the data myself with the encoded_data = base64.b64decode(contents) step. I just needed to apply bytes() to my BinaryField and voilà.
I'm trying to let the user download an XML file that I have generated.
This is my code:
tree.write('output.xml', encoding="utf-16")
# Pathout is the path to the output.xml
xmlFile = open(pathout, 'r')
myfile = FileWrapper(xmlFile.read())
response = HttpResponse(myfile, content_type='application/xml')
response['Content-Disposition'] = 'attachment; filename='+filename
return response
When I try to create my response I get this exception:
'str' object has no attribute 'read'
Can't figure out what I'm doing wrong. Any ideas?
Edit:
When I use this code I get no errors, but the downloaded file is empty:
tree.write('output.xml', encoding="utf-16")
xmlFile = open(pathout, 'r')
myfile = FileWrapper(xmlFile)
response = HttpResponse(myfile, content_type='application/xml')
response['Content-Disposition'] = 'attachment; filename='+filename
return response
You are calling xmlFile.read() - which yields a string - and passing the result to FileWrapper() which expects a readable file-like object. You should either just pass xmlFile to FileWrapper, or not use FileWrapper at all and pass the result of xmlFile.read() as your HttpResponse body.
Note that if you are creating the xml dynamically (which seems to be the case according to your snippet's first line), writing it to disk only to read it back a couple lines later is both a waste of time and resources and a potential cause of race conditions. You perhaps want to have a look at https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.tostring
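For example, a rough sketch that skips the temporary file entirely (assuming tree and filename from the question):

import xml.etree.ElementTree as ET
from django.http import HttpResponse

# Serialize the already-built tree straight to bytes; no temp file needed.
xml_bytes = ET.tostring(tree.getroot(), encoding="utf-16")
response = HttpResponse(xml_bytes, content_type='application/xml')
response['Content-Disposition'] = 'attachment; filename=' + filename
return response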
You're reading the file and passing the resulting string to FileWrapper, instead of passing the actual file object.
myfile = FileWrapper(xmlFile)
As an alternative to the other answers, I would recommend sidestepping the problem entirely by using the Django template system:
from django.http import HttpResponse
from django.template import Context, loader

def my_view(request):
    # View code here...
    t = loader.get_template('myapp/myfile.xml')
    c = Context({'foo': 'bar'})
    response = HttpResponse(t.render(c), content_type="application/xml")
    response['Content-Disposition'] = 'attachment; filename=...'
    return response
This way you create a myfile.xml template that renders the proper XML response without having to write any files to the filesystem. It is cleaner and faster, provided there is no other need to actually create the XML file and store it permanently.
I'm trying to create and serve Excel files using Django. I have a jar file which takes parameters and produces an Excel file accordingly, and it works with no problem. But when I try to get the produced file and serve it to the user for download, the file comes out broken: it has 0 KB size. This is the piece of code I'm using for Excel generation and serving.
def generateExcel(request, id):
    if os.path.exists('./%s_Report.xlsx' % id):
        excel = open("%s_Report.xlsx" % id, "r")
        output = StringIO.StringIO(excel.read())
        out_content = output.getvalue()
        output.close()
        response = HttpResponse(out_content, content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
        response['Content-Disposition'] = 'attachment; filename=%s_Report.xlsx' % id
        return response
    else:
        args = ['ServerExcel.jar', id]
        result = jarWrapper(*args)  # this creates the excel file with no problem
        if result:
            excel = open("%s_Report.xlsx" % id, "r")
            output = StringIO.StringIO(excel.read())
            out_content = output.getvalue()
            output.close()
            response = HttpResponse(out_content, content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
            response['Content-Disposition'] = 'attachment; filename=%s_Report.xlsx' % id
            return response
        else:
            return HttpResponse(json.dumps({"no": "excel", "no one": "cries"}))
I have searched for possible solutions and also tried to use FileWrapper, but the result did not change. I assume I have a problem with reading the xlsx file into a StringIO object, but I don't have any idea how to fix it.
Why on earth are you passing your file's content to a StringIO just to assign StringIO.getvalue() to a local variable? What's wrong with assigning file.read() to your variable directly?
def generateExcel(request, id):
    path = './%s_Report.xlsx' % id  # this should live elsewhere, definitely
    if os.path.exists(path):
        with open(path, "r") as excel:
            data = excel.read()
        response = HttpResponse(data, content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
        response['Content-Disposition'] = 'attachment; filename=%s_Report.xlsx' % id
        return response
    else:
        # quite some duplication to fix down there
Now you may want to check whether you actually have any content in your file - the fact that the file exists doesn't mean it has anything in it. Remember that you're in a concurrent context: one thread or process may be trying to read the file while another (=> another request) is trying to write it.
In addition to what Bruno says, you probably need to open the file in binary mode:
excel = open("%s_Report.xlsx" % id, "rb")
You can use this library to create excel sheets on the fly.
http://xlsxwriter.readthedocs.io/
For more information see this page. Thanks to @alexcxe:
XlsxWriter object save as http response to create download in Django
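For example, a minimal sketch of building the workbook entirely in memory with XlsxWriter (the view name and cell content are illustrative):

import io

import xlsxwriter
from django.http import HttpResponse

def report_view(request):
    buffer = io.BytesIO()
    workbook = xlsxwriter.Workbook(buffer, {'in_memory': True})
    worksheet = workbook.add_worksheet()
    worksheet.write(0, 0, 'Hello')  # fill in the report data here
    workbook.close()  # finalizes the xlsx content into the buffer
    response = HttpResponse(
        buffer.getvalue(),
        content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
    )
    response['Content-Disposition'] = 'attachment; filename=report.xlsx'
    return response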
My answer is:
def generateExcel(request, id):
    if os.path.exists('./%s_Report.xlsx' % id):
        with open('./%s_Report.xlsx' % id, "rb") as file:
            response = HttpResponse(file.read(), content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')
            response['Content-Disposition'] = 'attachment; filename=%s_Report.xlsx' % id
            return response
    else:
        # quite some duplication to fix down there
Why use "rb"? Because the HttpResponse constructor's signature is (self, content=b'', *args, **kwargs), so we should open with "rb" and use .read() to get the bytes.
I have this model...
class MyModel(models.Model):
    ...
    file = models.FileField(upload_to='files/', null=True, blank=True)
    ...
When I upload a file, for example docfile.doc, and later change or rewrite it and upload docfile.doc again, the new file becomes docfile_1.doc and the old docfile.doc still exists.
I am doing the uploading and saving of data in django-admin.
My question is: how can I remove the old docfile.doc when I upload a new docfile.doc with the same name?
Can anyone help me with this? Thanks in advance.
I tried this:
def content_file_name(instance, filename):
    print instance
    print filename
    file = os.path.exists(filename)
    print file
    if file:
        os.remove(filename)
    return "file/" + str(filename)

class MyModel(models.Model):
    ...
    file = models.FileField(upload_to=content_file_name, null=True, blank=True)
    ...
But nothing happened; when I upload docfile.doc again, it still becomes docfile_1.doc and the old docfile.doc still exists.
I got it... I used this:
def content_file_name(instance, filename):
    print instance
    print filename
    file = os.path.exists("media/file/" + str(filename))
    print file
    if file:
        os.remove("media/file/" + str(filename))
    return "file/" + str(filename)
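A slightly more robust variant of the same idea would use settings.MEDIA_ROOT instead of the hard-coded "media/file/" prefix. This is only a sketch, not code from the original project:

import os
from django.conf import settings

def content_file_name(instance, filename):
    # Absolute path of the would-be upload target under MEDIA_ROOT.
    absolute_path = os.path.join(settings.MEDIA_ROOT, "file", filename)
    if os.path.exists(absolute_path):
        # Remove the old file so Django keeps the original name
        # instead of appending a suffix like "_1".
        os.remove(absolute_path)
    return os.path.join("file", filename)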
I don't know exactly how to do it, but I think these links can help you:
Here you can find the two options that a FileField accepts. The one that I think will interest you the most is FileField.storage. You can pass a storage object in that parameter.
It says:
FileField.storage: Optional. A storage object, which handles the storage and retrieval of your files.
Then, if you read this, you will see that you can write your own storage object. Here is some explanation of how to do it. I think that you could just override the _save method in order to accomplish what you want to do (i.e., if the file already exists, remove it before saving the new copy).
But be careful! I don't know what the source of the files you are going to store is. Maybe your app is going to receive lots of files with the same name, even though they are all different. In that case, you would want to use a callable as the FileField.upload_to parameter, so that you can determine a unique filename for each file your site receives.
I hope this helps you!
You could also have a look here: ImageField overwrite image file with same name
Define your own storage and override its get_available_name method.
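A minimal sketch of that idea (the OverwriteStorage name is illustrative):

from django.core.files.storage import FileSystemStorage

class OverwriteStorage(FileSystemStorage):
    def get_available_name(self, name, max_length=None):
        # If a file with the requested name already exists, delete it so the
        # new upload keeps the original name instead of getting a "_1" suffix.
        if self.exists(name):
            self.delete(name)
        return name

It can then be attached to the field, e.g. FileField(upload_to='files/', storage=OverwriteStorage()).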
The code below solves your problem. You override the pre_save method, where the image is actually saved to storage. Please rename the functions for your project. Use the newly created image field ImageFieldWithPermanentName with your upload_to function (content_file_name).
If the code is too complicated, you can simplify it. I use this code to do more complex operations when uploading images (I create thumbnails on the fly in the custom _save_image function), so you can simplify it.
from PIL import Image
import StringIO
from django.db.models import ImageField
from django.db.models.fields.files import FileField
from dargent.settings import MEDIA_ROOT
import os

class ImageFieldWithPermanentName(ImageField):
    def pre_save(self, model_instance, add):
        file = super(FileField, self).pre_save(model_instance, add)
        if file and not file._committed:
            if callable(self.upload_to):
                path = self.upload_to(model_instance, "")
            else:
                path = self.upload_to
            file.name = path  # here we set the same name to a file
            path = os.path.join(MEDIA_ROOT, path)
            chunks = _get_chunks(file.chunks())
            _save_image(chunks, path)
        return file

def _get_chunks(chunks):
    chunks_ = ""
    for chunk in chunks:
        chunks_ += chunk
    return chunks_

def _get_image(chunks):
    chunks_ = ""
    for chunk in chunks:
        chunks_ += chunk
    virt_file = StringIO.StringIO(chunks_)
    image = Image.open(virt_file)
    return image

def _save_image(chunks, out_file_path):
    image = _get_image(chunks)
    image.save(out_file_path, "JPEG", quality=100)
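Usage would then look something like this (a sketch based on the model from the question; note that this field is an ImageField, so it is meant for image uploads):

class MyModel(models.Model):
    file = ImageFieldWithPermanentName(upload_to=content_file_name, null=True, blank=True)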