For example, tarfile.extractall(path) extracts an archive's contents to the specified directory. Similarly, does gzip have any extract method to get the gzip member (which contains only one file, per the standard) into a specified directory, or is there a workaround?
Edit: I don't want to read the complete file into memory.
No. Since gzip is only a compression format and not an archive format, there is no extract method in the gzip module or in the gzip.GzipFile class. But you have no reason to load the complete file into memory; you can just copy it in chunks. The manual gives an example of how to compress a file, and it is easily adapted to decompress one:
import gzip
import shutil

with open('/home/joe/file.txt', 'wb') as f_out:
    with gzip.open('/home/joe/file.txt.gz', 'rb') as f_in:
        shutil.copyfileobj(f_in, f_out)
shutil.copyfileobj is designed to copy the data in chunks.
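If the default chunk size seems small for your files, copyfileobj also takes an optional buffer size; a minimal sketch reusing the paths from the example above (the 1 MiB value is only an illustration):

import gzip
import shutil

# Same decompression as above, but copying in explicit 1 MiB chunks.
with gzip.open('/home/joe/file.txt.gz', 'rb') as f_in:
    with open('/home/joe/file.txt', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out, 1024 * 1024)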
Related
I searched for how to compress a file in Python and found an answer that was basically as described below:
with open(input_file, 'rb') as f_in, gzip.open(output_file, 'wb') as f_out:
    f_out.write(f_in.read())
It works readily with a 1GB file. But I plan on compressing files up to 200 GB.
Are there any considerations I need to take into account? Is there a different way I should be doing it with large files like that?
The files are binary .img files (exports of a block device; usually with empty space at the end, thus the compression works wonderfully).
This will read the entire file into memory, which will cause problems if you don't have 200 GB of RAM available!
You may be able to simply pipe the file through gzip, avoiding Python entirely; gzip itself already does the work in chunks:
% gzip -c myfile.img > myfile.img.gz
Otherwise you should read the file in chunks (picking a large block size may provide some benefit)
BLOCK_SIZE = 8192

with open(myfile, "rb") as f_in, gzip.open(output_file, 'wb') as f_out:
    while True:
        content = f_in.read(BLOCK_SIZE)
        if not content:
            break
        f_out.write(content)
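If you prefer, the manual loop above can be collapsed into shutil.copyfileobj, which performs the same chunked copy internally; a sketch reusing the same names (the explicit block size is optional):

import gzip
import shutil

# Equivalent chunked copy: copyfileobj reads and writes fixed-size blocks,
# so memory use stays bounded regardless of the file size.
with open(myfile, "rb") as f_in, gzip.open(output_file, 'wb') as f_out:
    shutil.copyfileobj(f_in, f_out, BLOCK_SIZE)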
In Bash, when you gzip a file, the original is not retained, whereas in Python, you could use the gzip library like this (as shown here in the "Examples of Usage" section):
import gzip
import shutil

with open('/home/joe/file.txt', 'rb') as f_in:
    with gzip.open('/home/joe/file.txt.gz', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
By default, this retains the original file. I couldn't find a way to not retain it while gzipping. Do I have to wait till gzip is done to delete the file?
If you are on a Unix-like system, you can unlink the file after opening it, so that it is no longer found in the file system. But it will still take up disk space until you close the now-anonymous file.
import gzip
import shutil
import os

with open('deleteme', 'rb') as f_in:
    with gzip.open('deleteme.gz', 'wb') as f_out:
        os.unlink('deleteme')  # *after* we knew the gzip open worked!
        shutil.copyfileobj(f_in, f_out)
As far as I know, this doesn't work on Windows; there you need to do the removal after the zip process completes. You could change the file's name to something like "thefile.temporary" or even move it to a different directory (fast if the destination is on the same file system, but a copy if it's a different one).
Considering that when GZip runs (in Bash, or anywhere else for that matter):
GZip requires the original data to perform the zipping action
GZip is designed to handle data of basically arbitrary size
Therefore: GZip isn't likely to be keeping a temporary copy in memory; rather, it is almost certainly deleting the original after the compression is done anyway.
With those points in mind, an identical strategy for your code is to do the gzip, then delete the file.
Certainly deleting the file isn't onerous — there are several ways to do it — and of course you could package the whole thing in a procedure so as to never have to concern yourself with it again.
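For instance, a minimal sketch of such a procedure (the name gzip_and_remove is made up here, and the original is only removed once the compressed file has been written and closed):

import gzip
import os
import shutil

def gzip_and_remove(path):
    # Compress 'path' to 'path.gz' in chunks, then delete the original,
    # mimicking command-line gzip's default behaviour.
    gz_path = path + '.gz'
    with open(path, 'rb') as f_in, gzip.open(gz_path, 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
    # Remove the input only after the compressed copy is safely on disk.
    os.remove(path)
    return gz_path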
The code below (partially based on tdelaney's answer) will do the following:
read the file, compressing on the fly, and storing all the compressed data in memory
delete the input file
then write the compressed data
This is for the use case where you have a full filesystem, which prevents you from writing the compressed data at the same time that the uncompressed file exists on disk. To get around this problem, it is therefore necessary to store all the data in memory (unless you have access to external storage), but to minimise this memory cost as far as possible, only the compressed data is fully stored in memory, while the uncompressed data is read in chunks.
There is of course a risk of data loss if the program is interrupted between deleting the input file and completing writing the compressed data to disk.
There is also the possibility of failure if there is insufficient memory, but the input file would not be deleted in that case because the MemoryError would be raised before the os.unlink is reached.
It is worth noting that this does not specifically answer what the question asks for, namely deleting the input file while still reading from it. This is possible under unix-like OSes, but there is no practical advantage in doing this over the regular command-line gzip behaviour, because freeing the disk space still does not happen until the file is closed, so it sacrifices recoverability in the event of failure, without gaining any additional space to juggle data in exchange for that sacrifice. (There would still need to be disk space for the uncompressed and compressed data to coexist.)
import gzip
import shutil
import os
from io import BytesIO

filename = 'deleteme'
buf = BytesIO()

# compress into memory - don't store all the uncompressed data in memory,
# but do store all the compressed data in memory
with open(filename, 'rb') as fin:
    with gzip.open(buf, 'wb') as zbuf:
        shutil.copyfileobj(fin, zbuf)

# sanity check for already compressed data
length = buf.tell()
if length > os.path.getsize(filename):
    raise RuntimeError("data *grew* in size - refusing to delete input")

# delete input file and then write out the compressed data
buf.seek(0)
os.unlink(filename)
with open(filename + '.gz', 'wb') as fout:
    shutil.copyfileobj(buf, fout)
I've worked with decompressing and reading files on the fly in memory with the bz2 library. However, I've read through the documentation and can't seem to find a way to simply decompress the file into a brand-new file on the file system, without holding everything in memory. Sure, you could read line by line using BZ2Decompressor and then write that to a file, but that would be insanely slow (we're decompressing massive files, 50 GB+). Is there some method or library I have overlooked to achieve the same functionality as the terminal command bzip2 -d myfile.ext.bz2 in Python, without a hacky solution involving a subprocess to call that terminal command?
Example of why bz2 is so slow:
Decompressing that file via bzip2 -d: 104 seconds
Analytics on an already decompressed file (just reading it line by line): 183 seconds

with open(file_src) as x:
    for l in x:

Decompressing the file and running the analytics in one pass: over 600 seconds (this time should be at most 104 + 183)

if file_src.endswith(".bz2"):
    bz_file = bz2.BZ2File(file_src)
    for l in bz_file:
You could use the bz2.BZ2File object which provides a transparent file-like handle.
(edit: you seem to use that already, but don't use readlines() on a binary file, or read it line by line as text; in your case the block size isn't big enough, which explains why it's slow)
Then use shutil.copyfileobj to copy to the write handle of your output file (you can adjust block size if you can afford the memory)
import bz2, shutil

with bz2.BZ2File("file.bz2") as fr, open("output.bin", "wb") as fw:
    shutil.copyfileobj(fr, fw)
Even if the file is big, it doesn't take more memory than the block size. Adjust the block size like this:
shutil.copyfileobj(fr, fw, length=1000000)  # read in 1 MB chunks
For smaller files that you can store in memory before you save to a file, you can use bz2.open to decompress the file and save it as an uncompressed new file.
import bz2

# decompress the data
with bz2.open('compressed_file.bz2', 'rb') as f:
    uncompressed_content = f.read()

# store the decompressed data
with open('new_uncompressed_file.dat', 'wb') as f:
    f.write(uncompressed_content)
I want to compress files and compute the checksum of the compressed file using Python. My first naive attempt was to use two functions:
def compress_file(input_filename, output_filename):
    f_in = open(input_filename, 'rb')
    f_out = gzip.open(output_filename, 'wb')
    f_out.writelines(f_in)
    f_out.close()
    f_in.close()

def md5sum(filename):
    with open(filename) as f:
        md5 = hashlib.md5(f.read()).hexdigest()
    return md5
However, this leads to the compressed file being written and then re-read. With many files (> 10,000), each several MB when compressed, on an NFS-mounted drive, it is slow.
How can I compress the file in a buffer and then compute the checksum from this buffer before writing the output file?
The files are not that big, so I can afford to store everything in memory. However, a nice incremental version would be welcome too.
The last requirement is that it should work with multiprocessing (in order to compress several files in parallel).
I have tried to use zlib.compress, but the returned string is missing the header of a gzip file.
Edit: following #abarnert's suggestion, I used Python 3's gzip.compress:
def compress_md5(input_filename, output_filename):
    f_in = open(input_filename, 'rb')
    # Read in buffer
    buff = f_in.read()
    f_in.close()
    # Compress this buffer
    c_buff = gzip.compress(buff)
    # Compute MD5
    md5 = hashlib.md5(c_buff).hexdigest()
    # Write compressed buffer
    f_out = open(output_filename, 'wb')
    f_out.write(c_buff)
    f_out.close()
    return md5
This produces a correct gzip file, but the output is different on each run (the MD5 is different):
>>> compress_md5('4327_010.pdf', '4327_010.pdf.gz')
'0d0eb6a5f3fe2c1f3201bc3360201f71'
>>> compress_md5('4327_010.pdf', '4327_010.pdf.gz')
'8e4954ab5914a1dd0d8d0deb114640e5'
The gzip program doesn't have this problem:
$ gzip -c 4327_010.pdf | md5sum
8965184bc4dace5325c41cc75c5837f1 -
$ gzip -c 4327_010.pdf | md5sum
8965184bc4dace5325c41cc75c5837f1 -
I guess it's because the gzip module uses the current time by default when creating a file (the gzip program uses the modification time of the input file, I guess). There is no way to change that with gzip.compress.
I was thinking of creating a gzip.GzipFile in read/write mode and controlling the mtime, but there is no such mode for gzip.GzipFile.
Inspired by #zwol's suggestion, I wrote the following function, which correctly sets the filename and the OS (Unix) in the header:
def compress_md5(input_filename, output_filename):
    f_in = open(input_filename, 'rb')
    # Read data in buffer
    buff = f_in.read()
    # Create output buffer
    c_buff = cStringIO.StringIO()
    # Create gzip file
    input_file_stat = os.stat(input_filename)
    mtime = input_file_stat[8]
    gzip_obj = gzip.GzipFile(input_filename, mode="wb", fileobj=c_buff, mtime=mtime)
    # Compress data in memory
    gzip_obj.write(buff)
    # Close files
    f_in.close()
    gzip_obj.close()
    # Retrieve compressed data
    c_data = c_buff.getvalue()
    # Change OS value
    c_data = c_data[0:9] + '\003' + c_data[10:]
    # Really write compressed data
    f_out = open(output_filename, "wb")
    f_out.write(c_data)
    # Compute MD5
    md5 = hashlib.md5(c_data).hexdigest()
    return md5
The output is the same across runs. Moreover, the output of file is the same as for gzip:
$ gzip -9 -c 4327_010.pdf > ref_max/4327_010.pdf.gz
$ file ref_max/4327_010.pdf.gz
ref_max/4327_010.pdf.gz: gzip compressed data, was "4327_010.pdf", from Unix, last modified: Tue May 5 14:28:16 2015, max compression
$ file 4327_010.pdf.gz
4327_010.pdf.gz: gzip compressed data, was "4327_010.pdf", from Unix, last modified: Tue May 5 14:28:16 2015, max compression
However, md5 is different:
$ md5sum 4327_010.pdf.gz ref_max/4327_010.pdf.gz
39dc3e5a52c71a25c53fcbc02e2702d5 4327_010.pdf.gz
213a599a382cd887f3c4f963e1d3dec4 ref_max/4327_010.pdf.gz
gzip -l is also different:
$ gzip -l ref_max/4327_010.pdf.gz 4327_010.pdf.gz
compressed uncompressed ratio uncompressed_name
7286404 7600522 4.1% ref_max/4327_010.pdf
7297310 7600522 4.0% 4327_010.pdf
I guess it's because the gzip program and the Python gzip module (which is based on the C library zlib) use slightly different algorithms.
Wrap a gzip.GzipFile object around an io.BytesIO object. (In Python 2, use cStringIO.StringIO instead.) After you close the GzipFile, you can retrieve the compressed data from the BytesIO object (using getvalue), hash it, and write it out to a real file.
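A minimal Python 3 sketch of that approach follows; the helper name and the choice to reuse the input file's mtime are illustrative assumptions, not part of the answer itself:

import gzip
import hashlib
import io
import os

def gzip_to_memory_and_hash(input_filename, output_filename):
    # Hypothetical helper: compress into an in-memory buffer, hash the
    # compressed bytes, then write them to disk.
    buf = io.BytesIO()
    # Pin mtime so identical input produces identical compressed output.
    mtime = os.stat(input_filename).st_mtime
    with open(input_filename, 'rb') as f_in:
        with gzip.GzipFile(os.path.basename(input_filename), mode='wb',
                           fileobj=buf, mtime=mtime) as gz:
            gz.write(f_in.read())
    compressed = buf.getvalue()
    digest = hashlib.md5(compressed).hexdigest()
    with open(output_filename, 'wb') as f_out:
        f_out.write(compressed)
    return digest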
Incidentally, you really shouldn't be using MD5 at all anymore.
I have tried to use zlib.compress, but the returned string is missing the header of a gzip file.
Of course. That's the whole difference between the zlib module and the gzip module; zlib just deals with zlib-deflate compression without gzip headers, gzip deals with zlib-deflate data with gzip headers.
So, just call gzip.compress instead, and the code you wrote but didn't show us should just work.
As a side note:
with open(filename) as f:
    md5 = hashlib.md5(f.read()).hexdigest()
You almost certainly want to open the file in 'rb' mode here. You don't want to convert '\r\n' into '\n' (if on Windows), or decode the binary data as sys.getdefaultencoding() text (if on Python 3), so open it in binary mode.
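That is, a minimal corrected sketch of the same helper body, with only the mode changed:

with open(filename, 'rb') as f:
    md5 = hashlib.md5(f.read()).hexdigest()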
Another side note:
Don't use line-based APIs on binary files. Instead of this:
f_out.writelines(f_in)
… do this:
f_out.write(f_in.read())
Or, if the files are too large to read into memory all at once:
from functools import partial

for buf in iter(partial(f_in.read, 8192), b''):
    f_out.write(buf)
And one last point:
With many files (> 10 000), each several MB when compressed, in a NFS mounted drive, it is slow.
Does your system not have a tmp directory mounted on a faster drive?
In most cases, you don't need a real file. Either there's a string-based API (zlib.compress, gzip.compress, json.dumps, etc.), or the file-based API only requires a file-like object, like a BytesIO.
But when you do need a real temporary file, with a real file descriptor and everything, you almost always want to create it in the temporary directory.* In Python, you do this with the tempfile module.
For example:
import gzip
import hashlib
import tempfile

def compress_and_md5(filename):
    with tempfile.NamedTemporaryFile() as f_out:
        with open(filename, 'rb') as f_in:
            with gzip.open(f_out, 'wb') as g_out:
                g_out.write(f_in.read())
        f_out.seek(0)
        md5 = hashlib.md5(f_out.read()).hexdigest()
    return md5
If you need an actual filename, rather than a file object, you can use f_out.name.
* The one exception is when you only want the temporary file to eventually rename it to a permanent location. In that case, of course, you usually want the temporary file to be in the same directory as the permanent location. But you can do that with tempfile just as easily. Just remember to pass delete=False.
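For that rename-into-place case, a rough sketch (the helper name and paths are placeholders):

import os
import tempfile

def write_atomically(data, final_path):
    # Hypothetical helper: write to a temp file in the destination directory,
    # then rename it into place once it is complete.
    dirname = os.path.dirname(final_path) or '.'
    with tempfile.NamedTemporaryFile(dir=dirname, delete=False) as tmp:
        tmp.write(data)
        tmp_name = tmp.name
    os.replace(tmp_name, final_path)  # atomic when both are on the same filesystem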
I'm trying to use the Python gzip module to simply uncompress several .gz files in a directory. Note that I do not want to read the files, only uncompress them. After searching this site for a while, I have this code segment, but it does not work:
import gzip
import glob
import os

for file in glob.glob(PATH_TO_FILE + "/*.gz"):
    #print file
    if os.path.isdir(file) == False:
        shutil.copy(file, FILE_DIR)
        # uncompress the file
        inF = gzip.open(file, 'rb')
        s = inF.read()
        inF.close()
The .gz files are in the correct location, and I can print the full path plus filename with the print command, but the gzip module isn't getting executed properly. What am I missing?
If you get no error, the gzip module probably is being executed properly, and the file is already getting decompressed.
The precise definition of "decompressed" depends on context:
I do not want to read the files, only uncompress them
The gzip module doesn't work like a desktop archiving program such as 7-zip: you can't "uncompress" a file without "reading" it. Note that "reading" (in programming) usually just means "storing (temporarily) in the computer's RAM", not "opening the file in the GUI".
What you probably mean by "uncompress" (as in a desktop archiving program) is more precisely described (in programming) as "read an in-memory stream/buffer from a compressed file, and write it to a new file (and possibly delete the compressed file afterwards)".
inF = gzip.open(file, 'rb')
s = inF.read()
inF.close()
With these lines, you're just reading the stream. If you expect a new "uncompressed" file to be created, you just need to write the buffer to a new file:
with open(out_filename, 'wb') as out_file:
    out_file.write(s)
If you're dealing with very large files (larger than the amount of your RAM), you'll need to adopt a different approach. But that is the topic for another question.
You're decompressing the file into the s variable and then doing nothing with it. You should stop searching Stack Overflow and read at least the Python tutorial. Seriously.
Anyway, there are several things wrong with your code:
you need to STORE the unzipped data in s into some file.
there's no need to copy the actual *.gz files, because in your code you're unpacking the original gzip file and not the copy.
you're using file, which is a built-in name in Python 2, as a variable. This is not an error, just a very bad practice.
This should probably do what you wanted:
import gzip
import glob
import os
import os.path

for gzip_path in glob.glob(PATH_TO_FILE + "/*.gz"):
    if os.path.isdir(gzip_path) == False:
        inF = gzip.open(gzip_path, 'rb')
        # uncompress the gzip_path INTO THE 's' variable
        s = inF.read()
        inF.close()

        # get gzip filename (without directories)
        gzip_fname = os.path.basename(gzip_path)
        # get original filename (remove 3 characters from the end: ".gz")
        fname = gzip_fname[:-3]
        uncompressed_path = os.path.join(FILE_DIR, fname)

        # store uncompressed file data from 's' variable
        open(uncompressed_path, 'wb').write(s)
You should use with to open files and, of course, store the result of reading the compressed file. See the gzip documentation:
import gzip
import glob
import os
import os.path

for gzip_path in glob.glob("%s/*.gz" % PATH_TO_FILE):
    if not os.path.isdir(gzip_path):
        with gzip.open(gzip_path, 'rb') as in_file:
            s = in_file.read()

        # Now store the uncompressed data
        path_to_store = gzip_path[:-3]  # remove the '.gz' from the filename

        # store uncompressed file data from 's' variable
        with open(path_to_store, 'wb') as f:
            f.write(s)
Depending on what exactly you want to do, you might want to have a look at tarfile and its 'r:gz' option for opening files.
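For a gzip-compressed tar archive (not a bare .gz file), that might look like the following sketch; the archive name and output directory are placeholders:

import tarfile

# 'r:gz' opens a tar archive that is gzip-compressed (.tar.gz / .tgz).
with tarfile.open('archive.tar.gz', 'r:gz') as tar:
    tar.extractall(path='output_dir')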
I was able to resolve this issue by using the subprocess module:
for file in glob.glob(PATH_TO_FILE + "/*.gz"):
    if os.path.isdir(file) == False:
        shutil.copy(file, FILE_DIR)
        # uncompress the file
        subprocess.call(["gunzip", FILE_DIR + "/" + os.path.basename(file)])
Since my goal was simply to uncompress the archive, the above code accomplishes this. The archived files are located in a central location and are copied to a working area, uncompressed, and used in a test case. The gzip module was too complicated for what I was trying to accomplish.
Thanks for everyone's help. It is much appreciated!
I think there is a much simpler solution than the others presented, given that the OP only wanted to extract all the files in a directory:
import glob
from setuptools import archive_util

for fn in glob.glob('*.gz'):
    archive_util.unpack_archive(fn, '.')