Python ZipFile Created In-Memory Not Compressing as Expected

I am trying to use Python to create a ZipFile object in memory, write a single file (also created in memory) into it, and then upload the archive to Google Cloud Storage.
My file is not actually getting compressed. Any idea what I might be doing wrong?
I realize there may be a fancier way of getting the row data into the file object, but apart from that, I'm really just trying to figure out why the resulting zip file is not compressed at all.
UPDATE: the code sample below excludes any interaction with Google Cloud Storage and instead just writes the files to disk.
It seems that when I write the file to disk first and then create the ZipFile, the result is compressed as expected, but when I add the StringIO contents directly from memory to the ZipFile object, the contents are not compressed.
import random, io, argparse, os, string
from zipfile import ZipFile, ZipInfo, ZIP_DEFLATED

parser = argparse.ArgumentParser()
parser.add_argument("--row_limit", default=1000)
parser.add_argument("--file_name", default='file.txt', type=str)
parser.add_argument("--archive_name", default='file.zip', type=str)
parser.add_argument("--snapshot_millis", default=0, type=int)
args = parser.parse_args()

# imagine this has lots and lots of data in it, coming from a database query result
rows = [{
    'seq_no': ''.join(random.choices(string.ascii_uppercase + string.digits, k=args.row_limit)),
    'csv': ''.join(random.choices(string.ascii_uppercase + string.digits, k=args.row_limit))
}] * args.row_limit

archive = io.BytesIO()

# create zip archive in memory
with ZipFile(archive, 'w', compression=ZIP_DEFLATED, compresslevel=9) as zip_archive:
    count = 0
    file_contents = io.StringIO()
    for row in rows:
        if count > args.row_limit:
            break
        count += 1
        file_contents.write(f"{row['seq_no']},{row['csv']}\n")

    # write file to zip archive in memory
    zip_file = ZipInfo(args.file_name)
    zip_archive.writestr(zip_file, file_contents.getvalue())

    # also write file to disk
    with open(args.file_name, mode='w') as f:
        print(file_contents.getvalue(), file=f)

print(f"StringIO Size: {file_contents.tell()}")
print(f"Text File Size On Disk: {os.path.getsize(args.file_name)}")

archive.seek(0)
with open(args.archive_name, 'wb') as outfile:
    outfile.write(archive.getbuffer())
print(f"Zip File Created from File In Memory: {os.path.getsize(args.archive_name)}")

ZipFile(args.archive_name, mode='w', compression=ZIP_DEFLATED, compresslevel=9).write(args.file_name)
print(f"Zip File Created from File On Disk: {os.path.getsize(args.archive_name)}")

The problem is here:
zip_file = ZipInfo(args.file_name)
zip_archive.writestr(zip_file, file_contents.getvalue())
From the ZipFile.writestr docs:
When passing a ZipInfo instance as the zinfo_or_arcname parameter, the
compression method used will be that specified in the compress_type
member of the given ZipInfo instance. By default, the ZipInfo
constructor sets this member to ZIP_STORED [i.e. uncompressed].
The easiest way to correct the issue is not to use a complete ZipInfo, but just the file name. This also sets the current date/time as the timestamp for the file inside the archive (a bare ZipInfo defaults to the year 1980):
# zip_file = ZipInfo(args.file_name)
zip_archive.writestr(args.file_name, file_contents.getvalue())
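If you do want to keep a full ZipInfo (for example, to control the timestamp stored in the archive), you can instead set the compression method on it explicitly. A minimal sketch, reusing the names from the question's code:

zip_file = ZipInfo(args.file_name)
zip_file.compress_type = ZIP_DEFLATED  # override the ZIP_STORED default
zip_archive.writestr(zip_file, file_contents.getvalue())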


Is there a Python library that allows manipulation of zip archives in memory, without having to use actual disk files?
The ZipFile library does not allow you to update the archive. The only way seems to be to extract it to a directory, make your changes, and create a new zip from that directory. I want to modify zip archives without disk access, because I'll be downloading them, making changes, and uploading them again, so I have no reason to store them.
Something similar to Java's ZipInputStream/ZipOutputStream would do the trick, although any interface at all that avoids disk access would be fine.
According to the Python docs:
class zipfile.ZipFile(file[, mode[, compression[, allowZip64]]])
Open a ZIP file, where file can be either a path to a file (a string) or a file-like object.
So, to open the file in memory, just create a file-like object (perhaps using BytesIO).
import io
import zipfile

file_like_object = io.BytesIO(my_zip_data)
zipfile_ob = zipfile.ZipFile(file_like_object)
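From there you can inspect and read members without touching the disk. A short sketch (my_zip_data is assumed to hold the bytes of a valid zip archive, e.g. from a download):

with zipfile.ZipFile(io.BytesIO(my_zip_data)) as zf:
    print(zf.namelist())              # names of the archive members
    data = zf.read(zf.namelist()[0])  # bytes of the first member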
PYTHON 3
import io
import zipfile
zip_buffer = io.BytesIO()

with zipfile.ZipFile(zip_buffer, "a",
                     zipfile.ZIP_DEFLATED, False) as zip_file:
    for file_name, data in [('1.txt', io.BytesIO(b'111')),
                            ('2.txt', io.BytesIO(b'222'))]:
        zip_file.writestr(file_name, data.getvalue())

with open('C:/1.zip', 'wb') as f:
    f.write(zip_buffer.getvalue())
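To verify the result, the same buffer can be reopened for reading; ZipFile seeks to the central directory itself, so no explicit rewind is needed first. A quick check:

with zipfile.ZipFile(zip_buffer) as zf:
    assert zf.read('1.txt') == b'111'
    assert zf.read('2.txt') == b'222'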
From the article In-Memory Zip in Python:
Below is a post of mine from May of 2008 on zipping in memory with Python, re-posted since Posterous is shutting down.
I recently noticed that there is a for-pay component available to zip files in-memory with Python. Considering this is something that should be free, I threw together the following code. It has only gone through very basic testing, so if anyone finds any errors, let me know and I’ll update this.
import zipfile
import StringIO

class InMemoryZip(object):
    def __init__(self):
        # Create the in-memory file-like object
        self.in_memory_zip = StringIO.StringIO()

    def append(self, filename_in_zip, file_contents):
        '''Appends a file with name filename_in_zip and contents of
        file_contents to the in-memory zip.'''
        # Get a handle to the in-memory zip in append mode
        zf = zipfile.ZipFile(self.in_memory_zip, "a", zipfile.ZIP_DEFLATED, False)

        # Write the file to the in-memory zip
        zf.writestr(filename_in_zip, file_contents)

        # Mark the files as having been created on Windows so that
        # Unix permissions are not inferred as 0000
        for zfile in zf.filelist:
            zfile.create_system = 0

        return self

    def read(self):
        '''Returns a string with the contents of the in-memory zip.'''
        self.in_memory_zip.seek(0)
        return self.in_memory_zip.read()

    def writetofile(self, filename):
        '''Writes the in-memory zip to a file.'''
        f = file(filename, "w")
        f.write(self.read())
        f.close()

if __name__ == "__main__":
    # Run a test
    imz = InMemoryZip()
    imz.append("test.txt", "Another test").append("test2.txt", "Still another")
    imz.writetofile("test.zip")
The example Ethier provided has several problems, some of them major:

- It doesn't work for real data on Windows. A ZIP file is binary, and its data should always be written to a file opened with 'wb'.
- The ZIP file is reopened in append mode for each file added, which is inefficient. It could be opened once and kept as an InMemoryZip attribute.
- The documentation states that ZIP files should be closed explicitly, and this is not done in the append function. (It probably works for the example because zf goes out of scope, which closes the ZIP file.)
- The create_system flag is set for all files in the zipfile every time a file is appended, instead of just once per file.
- On Python < 3, cStringIO is much more efficient than StringIO.
- It doesn't work on Python 3. (The original article predates the 3.0 release, but by the time the code was posted, 3.1 had been out for a long time.)
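To illustrate those points, a minimal Python 3 rewrite might look like the sketch below (an illustration only, not the ruamel.std.zipfile implementation):

import io
import zipfile

class InMemoryZip:
    def __init__(self):
        self.in_memory_zip = io.BytesIO()
        # Open once and keep as an attribute instead of reopening per append
        self.zf = zipfile.ZipFile(self.in_memory_zip, "w", zipfile.ZIP_DEFLATED, False)

    def append(self, filename_in_zip, file_contents):
        self.zf.writestr(filename_in_zip, file_contents)
        # Set create_system only on the entry just added
        self.zf.filelist[-1].create_system = 0
        return self

    def data(self):
        # Close explicitly so the central directory is written out
        self.zf.close()
        return self.in_memory_zip.getvalue()

    def writetofile(self, filename):
        # 'wb': a ZIP file is binary data
        with open(filename, "wb") as f:
            f.write(self.data())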
An updated version is available if you install ruamel.std.zipfile (of which I am the author). After
pip install ruamel.std.zipfile
or including the code for the class from here, you can do:
import ruamel.std.zipfile as zipfile

# Run a test
imz = zipfile.InMemoryZipFile()
imz.append("test.txt", "Another test").append("test2.txt", "Still another")
imz.writetofile("test.zip")
You can alternatively write the contents using imz.data to any place you need.
You can also use the with statement, and if you provide a filename, the contents of the ZIP will be written on leaving that context:
with zipfile.InMemoryZipFile('test.zip') as imz:
    imz.append("test.txt", "Another test").append("test2.txt", "Still another")
Because of the delayed writing to disk, you can actually read from an old test.zip within that context.
I am using Flask to create an in-memory zipfile and return it as a download. This builds on the example above from Vladimir. The seek(0) took a while to figure out.
import io
import zipfile

from flask import send_file

zip_buffer = io.BytesIO()

with zipfile.ZipFile(zip_buffer, "a", zipfile.ZIP_DEFLATED, False) as zip_file:
    for file_name, data in [('1.txt', io.BytesIO(b'111')), ('2.txt', io.BytesIO(b'222'))]:
        zip_file.writestr(file_name, data.getvalue())

zip_buffer.seek(0)
# Inside a Flask view; note that newer Flask versions renamed attachment_filename to download_name
return send_file(zip_buffer, attachment_filename='filename.zip', as_attachment=True)
I want to modify zip archives without disk access, because I'll be downloading them, making changes, and uploading them again, so I have no reason to store them
This is possible using the two libraries https://github.com/uktrade/stream-unzip and https://github.com/uktrade/stream-zip (full disclosure: written by me). And depending on the changes, you might not even have to store the entire zip in memory at once.
Say you just want to download, unzip, zip, and re-upload. Slightly pointless, but you could slot in some changes to the unzipped content:
from datetime import datetime

import httpx
from stream_unzip import stream_unzip
from stream_zip import stream_zip, ZIP_64

def get_source_bytes_iter(url):
    with httpx.stream('GET', url) as r:
        yield from r.iter_bytes()

def get_target_files(files):
    # stream-unzip doesn't expose perms or modified_at, but stream-zip requires them
    modified_at = datetime.now()
    perms = 0o600

    for name, _, chunks in files:
        # Could change name, manipulate chunks, skip a file, or yield a new file
        yield name.decode(), modified_at, perms, ZIP_64, chunks

source_url = 'https://source.test/file.zip'
target_url = 'https://target.test/file.zip'

source_bytes_iter = get_source_bytes_iter(source_url)
source_files = stream_unzip(source_bytes_iter)
target_files = get_target_files(source_files)
target_bytes_iter = stream_zip(target_files)

httpx.put(target_url, data=target_bytes_iter)
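The same member-tuple shape works without any network involved; for instance, a sketch that builds a small zip entirely in memory using only stream-zip:

from datetime import datetime
from stream_zip import stream_zip, ZIP_64

member_files = (
    # (name, modified_at, permissions, method, iterable of bytes chunks)
    ('my-file.txt', datetime.now(), 0o600, ZIP_64, (b'Some content',)),
)
zipped_bytes = b''.join(stream_zip(member_files))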
Helper to create an in-memory zip file containing multiple files, based on data like {'1.txt': 'string', '2.txt': b'bytes'}:
import io, zipfile

def prepare_zip_file_content(file_name_content: dict) -> bytes:
    """Returns zip bytes ready to be saved with, e.g.,
        with open('C:/1.zip', 'wb') as f: f.write(zip_bytes)

    file_name_content: dict like {'1.txt': 'string', '2.txt': b'bytes'}
    """
    zip_buffer = io.BytesIO()
    with zipfile.ZipFile(zip_buffer, "a", zipfile.ZIP_DEFLATED, False) as zip_file:
        for file_name, file_data in file_name_content.items():
            zip_file.writestr(file_name, file_data)
    zip_buffer.seek(0)
    return zip_buffer.getvalue()
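Usage, with the dict shape described above:

zip_bytes = prepare_zip_file_content({'1.txt': 'string', '2.txt': b'bytes'})
with open('1.zip', 'wb') as f:
    f.write(zip_bytes)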
You can use the library libarchive in Python through ctypes - it offers ways of manipulating ZIP data in memory, with a focus on streaming (at least historically).
Say we want to uncompress ZIP files on the fly while downloading them from an HTTP server. The code below
from contextlib import contextmanager
from ctypes import CFUNCTYPE, POINTER, create_string_buffer, cdll, byref, c_ssize_t, c_char_p, c_int, c_void_p, c_char
from ctypes.util import find_library

import httpx

def get_zipped_chunks(url, chunk_size=6553):
    with httpx.stream('GET', url) as r:
        yield from r.iter_bytes()

def stream_unzip(zipped_chunks, chunk_size=65536):
    # Library
    libarchive = cdll.LoadLibrary(find_library('archive'))

    # Callback types
    open_callback_type = CFUNCTYPE(c_int, c_void_p, c_void_p)
    read_callback_type = CFUNCTYPE(c_ssize_t, c_void_p, c_void_p, POINTER(POINTER(c_char)))
    close_callback_type = CFUNCTYPE(c_int, c_void_p, c_void_p)

    # Function types
    libarchive.archive_read_new.restype = c_void_p
    libarchive.archive_read_open.argtypes = [c_void_p, c_void_p, open_callback_type, read_callback_type, close_callback_type]
    libarchive.archive_read_finish.argtypes = [c_void_p]
    libarchive.archive_entry_new.restype = c_void_p
    libarchive.archive_read_next_header.argtypes = [c_void_p, c_void_p]
    libarchive.archive_read_support_compression_all.argtypes = [c_void_p]
    libarchive.archive_read_support_format_all.argtypes = [c_void_p]
    libarchive.archive_entry_pathname.argtypes = [c_void_p]
    libarchive.archive_entry_pathname.restype = c_char_p
    libarchive.archive_read_data.argtypes = [c_void_p, POINTER(c_char), c_ssize_t]
    libarchive.archive_read_data.restype = c_ssize_t
    libarchive.archive_error_string.argtypes = [c_void_p]
    libarchive.archive_error_string.restype = c_char_p

    ARCHIVE_EOF = 1
    ARCHIVE_OK = 0

    it = iter(zipped_chunks)
    compressed_bytes = None  # Make sure not garbage collected

    @contextmanager
    def get_archive():
        archive = libarchive.archive_read_new()
        if not archive:
            raise Exception('Unable to allocate archive')
        try:
            yield archive
        finally:
            libarchive.archive_read_finish(archive)

    def read_callback(archive, client_data, buffer):
        nonlocal compressed_bytes
        try:
            compressed_bytes = create_string_buffer(next(it))
        except StopIteration:
            return 0
        else:
            buffer[0] = compressed_bytes
            return len(compressed_bytes) - 1

    def uncompressed_chunks(archive):
        uncompressed_bytes = create_string_buffer(chunk_size)
        while (num := libarchive.archive_read_data(archive, uncompressed_bytes, len(uncompressed_bytes))) > 0:
            yield uncompressed_bytes.value[:num]
        if num < 0:
            raise Exception(libarchive.archive_error_string(archive))

    with get_archive() as archive:
        libarchive.archive_read_support_compression_all(archive)
        libarchive.archive_read_support_format_all(archive)
        libarchive.archive_read_open(
            archive, 0,
            open_callback_type(0), read_callback_type(read_callback), close_callback_type(0),
        )
        entry = c_void_p(libarchive.archive_entry_new())
        if not entry:
            raise Exception('Unable to allocate entry')

        while (status := libarchive.archive_read_next_header(archive, byref(entry))) == ARCHIVE_OK:
            yield (libarchive.archive_entry_pathname(entry), uncompressed_chunks(archive))

        if status != ARCHIVE_EOF:
            raise Exception(libarchive.archive_error_string(archive))
can be used as follows:
zipped_chunks = get_zipped_chunks('https://domain.test/file.zip')
files = stream_unzip(zipped_chunks)

for name, uncompressed_chunks in files:
    print(name)
    for uncompressed_chunk in uncompressed_chunks:
        print(uncompressed_chunk)
In fact, since libarchive supports multiple archive formats and nothing above is particularly ZIP-specific, it may well work with other formats too.

access logfile and return/open all files written in there

I have a Python class which is supposed to unpack an archive and recursively iterate over the directory structure, then return the files for further processing. In my case I want to hash those files. I'm struggling with returning the files. Here is my take.
I created an unzip function and a function which creates a log-file with the paths of all the files that were unpacked. Then I want to access this log-file and return ALL of the files, so I can use them in another Python class for further processing. This doesn't seem to work yet.
Structure of log-file:
/home/usr/Downloads/outdir/XXX.log
/home/usr/Downloads/outdir/Code/XXX.py
/home/usr/Downloads/outdir/Code/XXX.py
/home/usr/Downloads/outdir/Code/XXX.py
Code of interest:
@staticmethod
def read_received_files(from_log):
    with open(from_log, 'r') as data:
        data = data.readlines()
        for lines in data:
            # This does not seem to work yet
            read_files = open(lines.strip())
            return read_files
I believe that's what you're looking for:
@staticmethod
def read_received_files(from_log):
    files = []

    with open(from_log, 'r') as data:
        for line in data:
            files.append(open(line.strip()))

    return files
You returned while iterating, which prevented the remaining files from being opened.
Since you are primarily after the metadata and hashes of the files stored in the zip file, and not the files themselves, there is no need to extract them to the file system.
Instead, you can use the ZipFile.open() method to access the contents of each file through a file-like object. Metadata can be gathered from the ZipInfo object for each file. Here's an example which collects the file name and file size as metadata, along with the hash of each file.
import hashlib
import zipfile
from collections import namedtuple

def get_files(archive):
    FileInfo = namedtuple('FileInfo', ('filename', 'size', 'hash'))
    with zipfile.ZipFile(archive) as zf:
        for info in zf.infolist():
            if not info.filename.endswith('/'):  # exclude directories
                f = zf.open(info)
                hash_ = hashlib.md5(f.read()).hexdigest()
                yield FileInfo(info.filename, info.file_size, hash_)

for f in get_files('some_file.zip'):
    print('{}: {} {} bytes'.format(f.hash, f.filename, f.size))
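If the archived files are large, you can hash them incrementally rather than calling f.read() once, which keeps memory use bounded. A sketch of the same idea:

import hashlib

def member_md5(zf, info, chunk_size=65536):
    # Hash one archive member in fixed-size chunks
    md5 = hashlib.md5()
    with zf.open(info) as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            md5.update(chunk)
    return md5.hexdigest()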

Sending multiple .CSV files to .ZIP without storing to disk in Python

I'm working on a reporting application for my Django powered website. I want to run several reports and have each report generate a .csv file in memory that can be downloaded in batch as a .zip. I would like to do this without storing any files to disk. So far, to generate a single .csv file, I am following the common operation:
mem_file = StringIO.StringIO()
writer = csv.writer(mem_file)
writer.writerow(["My content", my_value])
mem_file.seek(0)
response = HttpResponse(mem_file, content_type='text/csv')
response['Content-Disposition'] = 'attachment; filename=my_file.csv'
This works fine, but only for a single, unzipped .csv. If I had, for example, a list of .csv files created with a StringIO stream:
firstFile = StringIO.StringIO()
# write some data to the file
secondFile = StringIO.StringIO()
# write some data to the file
thirdFile = StringIO.StringIO()
# write some data to the file
myFiles = [firstFile, secondFile, thirdFile]
How could I return a compressed file that contains all objects in myFiles and can be properly unzipped to reveal three .csv files?
zipfile is a standard library module that does exactly what you're looking for. For your use case, the meat and potatoes is a method called writestr, which takes the name of a file and the data you'd like zipped into it.
In the code below, I've used a sequential naming scheme for the files when they're unzipped, but this can be switched to whatever you'd like.
import zipfile
import StringIO

zipped_file = StringIO.StringIO()
with zipfile.ZipFile(zipped_file, 'w') as zip:
    for i, file in enumerate(files):
        file.seek(0)
        zip.writestr("{}.csv".format(i), file.read())

zipped_file.seek(0)
If you want to future-proof your code (hint hint Python 3 hint hint), you might want to switch over to using io.BytesIO instead of StringIO, since Python 3 is all about the bytes. Another bonus is that explicit seeks are not necessary with io.BytesIO before reads (I haven't tested this behavior with Django's HttpResponse, so I've left that final seek in there just in case).
import io
import zipfile

zipped_file = io.BytesIO()
with zipfile.ZipFile(zipped_file, 'w') as f:
    for i, file in enumerate(files):
        f.writestr("{}.csv".format(i), file.getvalue())

zipped_file.seek(0)
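Either way, returning the in-memory archive from the Django view follows the same pattern as the CSV response in the question; a sketch:

response = HttpResponse(zipped_file.getvalue(), content_type='application/zip')
response['Content-Disposition'] = 'attachment; filename=my_files.zip'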
The stdlib comes with the module zipfile, and the main class, ZipFile, accepts a file or file-like object:
from zipfile import ZipFile

temp_file = StringIO.StringIO()
zipped = ZipFile(temp_file, 'w')

# create temp csv_files = [(name1, data1), (name2, data2), ... ]
for name, data in csv_files:
    data.seek(0)
    zipped.writestr(name, data.read())

zipped.close()
temp_file.seek(0)
# etc. etc.
I'm not a user of StringIO so I may have the seek and read out of place, but hopefully you get the idea.
import zipfile
from StringIO import StringIO  # use io.BytesIO() on Python 3

def zipFiles(files):
    outfile = StringIO()
    with zipfile.ZipFile(outfile, 'w') as zf:
        for n, f in enumerate(files):
            zf.writestr("{}.csv".format(n), f.getvalue())
    return outfile.getvalue()

zipped_file = zipFiles(myfiles)
response = HttpResponse(zipped_file, content_type='application/octet-stream')
response['Content-Disposition'] = 'attachment; filename=my_file.zip'
StringIO has a getvalue() method which returns the entire contents. You can compress the zip file by passing zipfile.ZIP_DEFLATED: zipfile.ZipFile(outfile, 'w', zipfile.ZIP_DEFLATED). The default compression is ZIP_STORED, which creates the zip file without compressing it.

Python zipfile, bizarre limit to number of files: "folder is invalid"

The computer is toying with me, I know it!
I am creating a zip folder in Python. The individual files are generated in memory and then the whole thing is zipped and saved to a file. I am allowed to add 9 files to the zip. I am allowed to add 11 files to the zip. But 10, no, not 10 files. The zip file IS saved to my computer, but I'm not allowed to open it; Windows says that the compressed zipped folder is invalid.
I use the code below, which I got from another stackoverflow question. It appends 10 files and saves the zipped folder. When I click on the folder, I cannot extract it. BUT, remove one of the append() calls and it's fine. Or add another append() and it works!
What am I missing here? How can I make this work every time?
imz = InMemoryZip()
imz.append("1a.txt", "a").append("2a.txt", "a").append("3a.txt", "a").append("4a.txt", "a").append("5a.txt", "a").append("6a.txt", "a").append("7a.txt", "a").append("8a.txt", "a").append("9a.txt", "a").append("10a.txt", "a")
imz.writetofile("C:/path/test.zip")
import zipfile
import StringIO

class InMemoryZip(object):
    def __init__(self):
        # Create the in-memory file-like object
        self.in_memory_zip = StringIO.StringIO()

    def append(self, filename_in_zip, file_contents):
        '''Appends a file with name filename_in_zip and contents of
        file_contents to the in-memory zip.'''
        # Get a handle to the in-memory zip in append mode
        zf = zipfile.ZipFile(self.in_memory_zip, "a", zipfile.ZIP_DEFLATED, False)

        # Write the file to the in-memory zip
        zf.writestr(filename_in_zip, file_contents)

        # Mark the files as having been created on Windows so that
        # Unix permissions are not inferred as 0000
        for zfile in zf.filelist:
            zfile.create_system = 0

        return self

    def read(self):
        '''Returns a string with the contents of the in-memory zip.'''
        self.in_memory_zip.seek(0)
        return self.in_memory_zip.read()

    def writetofile(self, filename):
        '''Writes the in-memory zip to a file.'''
        f = file(filename, "w")
        f.write(self.read())
        f.close()
You should use the 'wb' mode when creating the file you are saving to the file system. This ensures that the file is written in binary.
Otherwise, whenever a newline (\n) character happens to occur in the zip data, Python will replace it to match the Windows line ending (\r\n). The reason 10 files is a problem is that 10 happens to be the ASCII code for \n.
So your write function should look like this:
def writetofile(self, filename):
    '''Writes the in-memory zip to a file.'''
    f = file(filename, 'wb')
    f.write(self.read())
    f.close()
This should fix your problem and work for the files in your example. In your case, though, you might find it easier to write the zip file directly to the file system, as in this code, which incorporates some of the comments from above:
import StringIO
import zipfile

class ZipCreator:
    buffer = None

    def __init__(self, fileName=None):
        if fileName:
            self.zipFile = zipfile.ZipFile(fileName, 'w', zipfile.ZIP_DEFLATED, False)
            return
        self.buffer = StringIO.StringIO()
        self.zipFile = zipfile.ZipFile(self.buffer, 'w', zipfile.ZIP_DEFLATED, False)

    def addToZipFromFileSystem(self, filePath, filenameInZip):
        self.zipFile.write(filePath, filenameInZip)

    def addToZipFromMemory(self, filenameInZip, fileContents):
        self.zipFile.writestr(filenameInZip, fileContents)
        for zipFile in self.zipFile.filelist:
            zipFile.create_system = 0

    def write(self, fileName):
        if not self.buffer:  # If the buffer was not initialized the file is written by the ZipFile
            self.zipFile.close()
            return
        f = file(fileName, 'wb')
        f.write(self.buffer.getvalue())
        f.close()

# Use File Handle
zipCreator = ZipCreator('C:/path/test.zip')

# Use Memory Buffer
# zipCreator = ZipCreator()

for i in range(1, 10):
    zipCreator.addToZipFromMemory('test/%sa.txt' % i, 'a')

zipCreator.write('C:/path/test.zip')
Ideally, you would probably use separate classes for an in-memory zip and a zip that is tied to the file system from the beginning. I have also seen some issues with the in-memory zip when folders are added, which are difficult to recreate and which I am still trying to track down.

Create a temporary compressed file

I need to create a temporary file to send it. I have tried:

# Create a temporary file --> I think this is ok (file not seen)
temporaryfile = NamedTemporaryFile(delete=False, dir=COMPRESSED_ROOT)

# The path to archive --> it's ok
root_dir = "something"

# Create a compressed file --> it breaks
data = open(f.write(make_archive(f.name, 'zip', root_dir))).read()

# Send the file --> it's ok
response = HttpResponse(data, mimetype='application/zip')
response['Content-Disposition'] = 'attachment; filename="%s"' % unicode(downloadedassignment.name + '.zip')
return response

I don't know at all if this is the right approach.
I actually just needed to do something similar and I wanted to avoid file I/O entirely, if possible. Here's what I came up with:
import tempfile
import zipfile

with tempfile.SpooledTemporaryFile() as tmp:
    with zipfile.ZipFile(tmp, 'w', zipfile.ZIP_DEFLATED) as archive:
        archive.writestr('something.txt', 'Some Content Here')

    # Reset file pointer
    tmp.seek(0)

    # Write file data to response
    return HttpResponse(tmp.read(), mimetype='application/x-zip-compressed')
It uses a SpooledTemporaryFile, so it remains in memory unless it exceeds the memory limit. Then I set this temporary file as the stream for ZipFile to use. The filename passed to writestr is just the name the file will have inside the archive; it has nothing to do with the server's filesystem. Then I just need to rewind the file pointer (seek(0)) after ZipFile has done its thing and dump it to the response.
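The in-memory threshold is configurable via the max_size argument; below it the data stays in memory, above it the file is transparently spooled to disk. A sketch with an arbitrary 10 MB limit:

import tempfile

# Stays in memory until it exceeds ~10 MB, then spills to a real temp file
tmp = tempfile.SpooledTemporaryFile(max_size=10 * 1024 * 1024)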
First of all, you don't need to create a NamedTemporaryFile to use make_archive; all you need is a unique filename for make_archive to create.
.write doesn't return a filename
To focus on that error: You are assuming that the return value of f.write is a filename you can open; just seek to the start of your file and read instead:
f.write(make_archive(f.name, 'zip', root_dir))
f.seek(0)
data = f.read()
Note that you'll also need to clean up the temporary file you created (you set delete=False):
import os
f.close()
os.unlink(f.name)
Alternatively, omit the delete keyword so it defaults to True again, and simply close your file afterwards; there's no need to unlink.
That just wrote the archive filename to a new file..
You are just writing the new archive name to your temporary file. You'd be better off just reading the archive directly:
data = open(make_archive(f.name, 'zip', root_dir), 'rb').read()
Note that now your temporary file isn't being written to at all.
Best way to do this
Avoid creating a NamedTemporaryFile altogether: Use tempfile.mkdtemp() instead, to generate a temporary directory in which to put your archive, then clean that up afterwards:
import os
import shutil
import tempfile
from shutil import make_archive

tmpdir = tempfile.mkdtemp()
try:
    tmparchive = os.path.join(tmpdir, 'archive')
    root_dir = "something"
    data = open(make_archive(tmparchive, 'zip', root_dir), 'rb').read()
finally:
    shutil.rmtree(tmpdir)
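On Python 3, tempfile.TemporaryDirectory wraps the same mkdtemp-plus-cleanup pattern in a context manager; a sketch under the same assumptions:

import os
import tempfile
from shutil import make_archive

root_dir = "something"
with tempfile.TemporaryDirectory() as tmpdir:
    tmparchive = os.path.join(tmpdir, 'archive')
    with open(make_archive(tmparchive, 'zip', root_dir), 'rb') as f:
        data = f.read()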
