Python - How to gzip a large text file without MemoryError?

I use the following simple Python script to compress a large text file (say, 10GB) on an EC2 m3.large instance. However, I always got a MemoryError:
import gzip

with open('test_large.csv', 'rb') as f_in:
    with gzip.open('test_out.csv.gz', 'wb') as f_out:
        f_out.writelines(f_in)
        # or the following:
        # for line in f_in:
        #     f_out.write(line)
The traceback I got is:
Traceback (most recent call last):
  File "test.py", line 8, in <module>
    f_out.writelines(f_in)
MemoryError
I have read some discussion about this issue, but I'm still not quite clear on how to handle it. Can someone give me a more understandable answer about how to deal with this problem?

The problem here has nothing to do with gzip, and everything to do with reading line by line from a 10GB file with no newlines in it:
As an additional note, the file I used to test the Python gzip functionality is generated by fallocate -l 10G bigfile_file.
That gives you a 10GB sparse file made entirely of 0 bytes. Meaning there are no newline bytes. Meaning the first line is 10GB long. Meaning it will take 10GB to read the first line. (Or possibly even 20 or 40GB, if you're using pre-3.3 Python and trying to read it as Unicode.)
If you want to copy binary data, don't copy line by line. Whether it's a normal file, a GzipFile that's decompressing for you on the fly, a socket.makefile(), or anything else, you will have the same problem.
The solution is to copy chunk by chunk. Or just use copyfileobj, which does that for you automatically.
import gzip
import shutil

with open('test_large.csv', 'rb') as f_in:
    with gzip.open('test_out.csv.gz', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
By default, copyfileobj uses a chunk size optimized to be often very good and never very bad. In this case, you might actually want a smaller size, or a larger one; it's hard to predict which a priori.* So, test it by using timeit with different bufsize arguments (say, powers of 4 from 1KB to 8MB) to copyfileobj. But the default 16KB will probably be good enough unless you're doing a lot of this.
* If the buffer size is too big, you may end up alternating long chunks of I/O and long chunks of processing. If it's too small, you may end up needing multiple reads to fill a single gzip block.
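As a rough illustration of that kind of measurement, here is a sketch using timeit and copyfileobj's optional length argument; the file names are the ones from the question and the buffer sizes are just examples:

import gzip
import shutil
import timeit

def compress(bufsize, iname='test_large.csv', oname='test_out.csv.gz'):
    # Compress iname into oname, copying in chunks of bufsize bytes.
    with open(iname, 'rb') as f_in, gzip.open(oname, 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out, bufsize)

# Buffer sizes to try: powers of 4 from 1 KB up to 4 MB, plus 8 MB.
for bufsize in (1 << 10, 1 << 12, 1 << 14, 1 << 16, 1 << 18, 1 << 20, 1 << 22, 1 << 23):
    t = timeit.timeit(lambda: compress(bufsize), number=1)
    print(f'bufsize={bufsize:>8}: {t:.1f} s')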

That's odd. I would expect this error if you tried to compress a large binary file that didn't contain many newlines, since such a file could contain a "line" that was too big for your RAM, but it shouldn't happen on a line-structured .csv file.
But anyway, it's not very efficient to compress files line by line. Even though the OS buffers disk I/O, it's generally much faster to read and write larger blocks of data, e.g. 64 kB.
I have 2GB of RAM on this machine, and I just successfully used the program below to compress a 2.8GB tar archive.
#! /usr/bin/env python

import gzip
import sys

blocksize = 1 << 16     # 64 kB

def gzipfile(iname, oname, level):
    with open(iname, 'rb') as f_in:
        f_out = gzip.open(oname, 'wb', level)
        while True:
            block = f_in.read(blocksize)
            if block == '':
                break
            f_out.write(block)
        f_out.close()
    return

def main():
    if len(sys.argv) < 3:
        print "gzip compress in_file to out_file"
        print "Usage:\n%s in_file out_file [compression_level]" % sys.argv[0]
        exit(1)
    iname = sys.argv[1]
    oname = sys.argv[2]
    level = int(sys.argv[3]) if len(sys.argv) > 3 else 6
    gzipfile(iname, oname, level)

if __name__ == '__main__':
    main()
I'm running Python 2.6.6 and gzip.open() doesn't support with.
As Andrew Bay notes in the comments, if block == '': won't work correctly in Python 3, since block contains bytes, not a string, and an empty bytes object doesn't compare as equal to an empty text string. We could check the block length, or compare to b'' (which will also work in Python 2.6+), but the simple way is if not block:.
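For reference, a minimal Python 3 version of the same read loop, using if not block: and putting the files in with blocks (the file names here are placeholders):

import gzip

blocksize = 1 << 16  # 64 kB

with open('in_file', 'rb') as f_in, gzip.open('in_file.gz', 'wb') as f_out:
    while True:
        block = f_in.read(blocksize)
        if not block:  # an empty bytes object means end of file
            break
        f_out.write(block)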

It is weird to get a memory error even when reading a file line by line. I suppose it is because you have very little available memory and very large lines. You should then use binary reads:
import gzip

# adapt the size value: small values will take more time, large values could cause memory errors
size = 8096
with open('test_large.csv', 'rb') as f_in:
    with gzip.open('test_out.csv.gz', 'wb') as f_out:
        while True:
            data = f_in.read(size)
            if not data:  # read() returns an empty bytes object at end of file
                break
            f_out.write(data)

Related

What is the best way to split a big file into small size files and send it to github using requests module POST/PUT method in python?

My requirement is to split a big file (e.g. 500 MB) into small files (50 MB each) in Python.
What modules can I use in Python to achieve this?
For example: I have a 500 MB file and I want to split it into ten 50 MB files and send them to an API.
Thanks in advance.
You don't need any external modules. (In fact, you don't need to even import anything.)
This would chop up large_file.dat into 50-megabyte pieces and write them to disk – but you could just as well replace the file writing with whatever API call you need.
filename = "large_file.dat"
chunk_size = 50_000_000  # bytes; must fit in memory
chunk_num = 1

with open(filename, "rb") as input_file:
    while True:
        chunk = input_file.read(chunk_size)
        if not chunk:  # Nothing more to read, we've reached file end
            break
        with open(f"{filename}.{chunk_num:04d}", "wb") as output_file:
            output_file.write(chunk)
        chunk_num += 1
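If the goal is to upload each piece rather than write it to disk, the file write can be replaced with an HTTP call. A sketch using requests, where the URL and form fields are entirely hypothetical and depend on the API you are actually targeting:

import requests

filename = "large_file.dat"
chunk_size = 50_000_000  # bytes; must fit in memory
upload_url = "https://example.com/upload"  # hypothetical endpoint

chunk_num = 1
with open(filename, "rb") as input_file:
    while True:
        chunk = input_file.read(chunk_size)
        if not chunk:  # Nothing more to read, we've reached file end
            break
        # One request per part; the part number lets the server reassemble the file.
        response = requests.post(
            upload_url,
            files={"file": (f"{filename}.{chunk_num:04d}", chunk)},
            data={"part": str(chunk_num)},
        )
        response.raise_for_status()
        chunk_num += 1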

gzip in bash vs python

In Bash, when you gzip a file, the original is not retained, whereas in Python, you could use the gzip library like this (as shown here in the "Examples of Usage" section):
import gzip
import shutil

with open('/home/joe/file.txt', 'rb') as f_in:
    with gzip.open('/home/joe/file.txt.gz', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
By default, this retains the original file. I couldn't find a way to not retain it while gzipping. Do I have to wait till gzip is done to delete the file?
If you are on a unix-like system, you can unlink the file after opening so that it is no longer found in the file system. But it will still take disk space until you close the now-anonymous file.
import gzip
import shutil
import os

with open('deleteme', 'rb') as f_in:
    with gzip.open('deleteme.gz', 'wb') as f_out:
        os.unlink('deleteme')  # *after* we knew the gzip open worked!
        shutil.copyfileobj(f_in, f_out)
As far as I know, this doesn't work on Windows; there you need to do the remove after the zip process completes. You could change the file's name to something like "thefile.temporary", or even move it to a different directory first (a rename is fast if the target directory is on the same file system, but the data gets copied if it's on a different one).
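A minimal sketch of that rename-first approach, which also works on Windows (the file names are just examples):

import gzip
import os
import shutil

os.rename('deleteme', 'deleteme.temporary')  # hide the original under a temporary name
with open('deleteme.temporary', 'rb') as f_in:
    with gzip.open('deleteme.gz', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
os.remove('deleteme.temporary')  # only removed once compression has finished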
Considering that when GZip runs (in Bash, or anywhere else for that matter):
GZip requires the original data to perform the zipping action
GZip is designed to handle data of basically arbitrary size
Therefore, GZip isn't likely to be creating a temporary copy of the file in memory; rather, it is almost certainly deleting the original only after the compression is done.
With those points in mind, an identical strategy for your code is to do the gzip, then delete the file.
Certainly deleting the file isn't onerous — there are several ways to do it — and of course you could package the whole thing in a procedure so as to never have to concern yourself with it again.
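For instance, packaged as a small helper, assuming the compressed file simply gets the original name plus ".gz" (a sketch; error handling kept minimal):

import gzip
import os
import shutil

def gzip_and_remove(path):
    # Compress path to path + '.gz', then delete the original once that has succeeded.
    with open(path, 'rb') as f_in:
        with gzip.open(path + '.gz', 'wb') as f_out:
            shutil.copyfileobj(f_in, f_out)
    os.remove(path)  # only reached if compression completed without raising

gzip_and_remove('/home/joe/file.txt')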
The code below (partially based on tdelaney's answer) will do the following:
read the file, compressing on the fly, and storing all the compressed data in memory
delete the input file
then write the compressed data
This is for the use case where you have a full filesystem, which prevents you from writing the compressed data at the same time that the uncompressed file exists on disk. To get around this problem, it is therefore necessary to store all the data in memory (unless you have access to external storage), but to minimise this memory cost as far as possible, only the compressed data is fully stored in memory, while the uncompressed data is read in chunks.
There is of course a risk of data loss if the program is interrupted between deleting the input file and completing writing the compressed data to disk.
There is also the possibility of failure if there is insufficient memory, but the input file would not be deleted in that case because the MemoryError would be raised before the os.unlink is reached.
It is worth noting that this does not specifically answer what the question asks for, namely deleting the input file while still reading from it. This is possible under unix-like OSes, but there is no practical advantage in doing this over the regular command-line gzip behaviour, because freeing the disk space still does not happen until the file is closed, so it sacrifices recoverability in the event of failure, without gaining any additional space to juggle data in exchange for that sacrifice. (There would still need to be disk space for the uncompressed and compressed data to coexist.)
import gzip
import shutil
import os
from io import BytesIO

filename = 'deleteme'

buf = BytesIO()

# compress into memory - don't store all the uncompressed data in memory,
# but do store all the compressed data in memory
with open(filename, 'rb') as fin:
    with gzip.open(buf, 'wb') as zbuf:
        shutil.copyfileobj(fin, zbuf)

# sanity check for already compressed data
length = buf.tell()
if length > os.path.getsize(filename):
    raise RuntimeError("data *grew* in size - refusing to delete input")

# delete input file and then write out the compressed data
buf.seek(0)
os.unlink(filename)
with open(filename + '.gz', 'wb') as fout:
    shutil.copyfileobj(buf, fout)

Split equivalent of gzip files in python

I'm trying to replicate this Bash command, which splits the gzipped file into parts of 50 MB each.
split -b 50m "file.dat.gz" "file.dat.gz.part-"
My attempt at the python equivalent
import gzip

infile_name = "file.dat.gz"
chunk = 50*1024*1024  # 50MB

with gzip.open(infile_name, 'rb') as infile:
    for n, raw_bytes in enumerate(iter(lambda: infile.read(chunk), b'')):
        print(n, chunk)
        with gzip.open('{}.part-{}'.format(infile_name[:-3], n), 'wb') as outfile:
            outfile.write(raw_bytes)
This produces gzipped parts of about 15 MB each; when I gunzip the files, they are 50 MB each.
How do I split the gzipped file in Python so that the split-up files are each 50 MB before gunzipping?
I don't believe that split works the way you think it does. It doesn't split the gzip file into smaller gzip files. I.e. you can't call gunzip on the individual files it creates. It literally breaks up the data into smaller chunks and if you want to gunzip it, you have to concatenate all the chunks back together first. So, to emulate the actual behavior with Python, we'd do something like:
infile_name = "file.dat.gz"
chunk = 50*1024*1024  # 50MB

with open(infile_name, 'rb') as infile:
    for n, raw_bytes in enumerate(iter(lambda: infile.read(chunk), b'')):
        print(n, chunk)
        with open('{}.part-{}'.format(infile_name[:-3], n), 'wb') as outfile:
            outfile.write(raw_bytes)
In reality we'd read multiple smaller input chunks to make one output chunk to use less memory.
We might be able to break the file into smaller files that we can individually gunzip, and still hit our target size. Using something like a BytesIO stream, we could gunzip the file and re-gzip it into that memory stream until it reaches the target size, then write it out and start a new BytesIO stream.
With compressed data, you have to measure the size of the output, not the size of the input as we can't predict how well the data will compress.
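A rough sketch of that idea: decompress the input in modest chunks, recompress into an in-memory stream, and start a new part whenever the compressed output crosses the target size. The chunk and target sizes are only illustrative, and because the check runs after each input chunk (and gzip buffers internally), the parts will come out near, not exactly at, 50 MB:

import gzip
import io

infile_name = "file.dat.gz"
target_size = 50 * 1024 * 1024  # compressed bytes per output part
read_size = 1024 * 1024         # uncompressed bytes read per iteration

part = 0
buf = io.BytesIO()
writer = gzip.GzipFile(fileobj=buf, mode='wb')
wrote_anything = False

with gzip.open(infile_name, 'rb') as infile:
    while True:
        data = infile.read(read_size)
        if not data:
            break
        writer.write(data)
        wrote_anything = True
        if buf.tell() >= target_size:  # measure the *compressed* output size
            writer.close()             # flush the gzip trailer into buf
            with open('{}.part-{}'.format(infile_name[:-3], part), 'wb') as outfile:
                outfile.write(buf.getvalue())
            part += 1
            buf = io.BytesIO()
            writer = gzip.GzipFile(fileobj=buf, mode='wb')
            wrote_anything = False

# write whatever is left in the final, partially filled buffer
writer.close()
if wrote_anything:
    with open('{}.part-{}'.format(infile_name[:-3], part), 'wb') as outfile:
        outfile.write(buf.getvalue())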
Here's a solution for emulating something like the split -l (split on lines) command option that will allow you to open each individual file with gunzip.
import io
import os
import shutil
from xopen import xopen

def split(infile_name, num_lines):
    infile_name_fp = infile_name.split('/')[-1].split('.')[0]  # get first part of file name
    cur_dir = '/'.join(infile_name.split('/')[0:-1])
    out_dir = f'{cur_dir}/{infile_name_fp}_split'
    if os.path.exists(out_dir):
        shutil.rmtree(out_dir)
    os.makedirs(out_dir)  # create in same folder as the original .csv.gz file

    m = 0
    part = 0
    buf = io.StringIO()  # initialize buffer
    with xopen(infile_name, 'rt') as infile:
        for line in infile:
            if m < num_lines:  # fill up buffer
                buf.write(line)
                m += 1
            else:  # write buffer to file
                with xopen(f'{out_dir}/{infile_name_fp}.part-{str(part).zfill(5)}.csv.gz', mode='wt', compresslevel=6) as outfile:
                    outfile.write(buf.getvalue())
                part += 1
                buf = io.StringIO()  # flush buffer -> faster than seek(0); truncate(0)
                buf.write(line)  # keep the line that triggered the flush, so it isn't dropped
                m = 1

    # write whatever is left in the buffer to a final file
    with xopen(f'{out_dir}/{infile_name_fp}.part-{str(part).zfill(5)}.csv.gz', mode='wt', compresslevel=6) as outfile:
        outfile.write(buf.getvalue())
    buf.close()
Usage:
split('path/to/myfile.csv.gz', num_lines=100000)
Outputs a folder with split files at path/to/myfile_split.
Discussion: I've used xopen here for additional speed, but you may choose to use gzip.open if you want to stay with Python native packages. Performance-wise, I've benchmarked this to take about twice as long as a solution combining pigz and split. It's not bad, but could be better. The bottleneck is the for loop and the buffer, so maybe rewriting this to work asynchronously would be more performant.
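For reference, the equivalent gzip.open calls (if you drop xopen) take the same text modes and compresslevel argument; a tiny self-contained sketch with a made-up file name:

import gzip

# write a gzip file in text mode with an explicit compression level
with gzip.open('example.csv.gz', mode='wt', compresslevel=6) as out:
    out.write('a,b,c\n1,2,3\n')

# read it back line by line, decompressed on the fly
with gzip.open('example.csv.gz', mode='rt') as infile:
    for line in infile:
        print(line, end='')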

Reduce a given larger file to specific file size in python

I am trying to reduce a larger file to a given file size for my testing purpose. The code is as follows:
f = open ('original_file', 'rb')
f.seek(1000000)
rest = f.read()
f.close()
f1 = open('new_file', 'w')
f1.write(rest)
f1.close()
I want to remove 1 MB from that file, irrespective of its content, but I am not able to get that reduction in the same file. Please point out where I am going wrong, or suggest another method to reduce the same file to a specified size in MB. Thanks.
To trim a file to a given size while keeping its beginning, you can use the os.truncate call.
You don't mention whether you want to shave the bytes at the beginning or at the end of the file, but from your code one deduces it is at the beginning.
In that case, since truncate can only clip a file at its end, what one has to do is write the data from the desired position onwards at the beginning of the file, then truncate. A compact way of doing that is simply opening the file twice (on some operating systems that might not work; in that case, read the data into a temporary object and open the file again for writing):
import os

def truncate_beginning(path, length):
    """Remove length bytes at the beginning of the given file."""
    original_length = os.stat(path).st_size
    with open(path, "r+b") as reading, open(path, "r+b") as writing:
        reading.seek(length)
        writing.write(reading.read())
    try:
        os.truncate(path, original_length - length)
    except OSError as error:
        print("Unable to truncate the file:", error)
Note that the truncate functionality is not available in all circumstances: it depends on the filesystem the file is on having this capability, and if it does not, the call will raise an error. (The docs say os.truncate is new in Python 3.3, and is available on Windows only from Python 3.5 onwards.)
For Python versions prior to 3.3, on Linux, one can use ctypes to call the system's truncate directly:
import ctypes
libc = ctypes.CDLL("libc.so.6")
libc.truncate(<path>, <length>)

Reading memory mapped bzip2 compressed file

So I'm playing with the Wikipedia dump file. It's an XML file that has been bzipped. I can write all the files to directories, but then when I want to do analysis, I have to reread all the files on the disk. This gives me random access, but it's slow. I have enough RAM to hold the entire bzipped file in memory.
I can load the dump file just fine and read all the lines, but I cannot seek in it as it's gigantic. From what it seems, the bz2 library has to read and capture the offset before it can bring me there (and decompress it all, as the offset is in decompressed bytes).
Anyway, I'm trying to mmap the dump file (~9.5 GB) and load it into bz2. I obviously want to test this on a small bzip2 file first.
I want to map the mmap file to a BZ2File so I can seek through it (to get to a specific, uncompressed byte offset), but from what it seems, this is impossible without decompressing the entire mmap file (this would be well over 30 gigabytes).
Do I have any options?
Here's some code I wrote to test.
import bz2
import mmap

lines = '''This is my first line
This is the second
And the third
'''

with open("bz2TestFile", "wb") as f:
    f.write(bz2.compress(lines))

with open("bz2TestFile", "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)

    print "Part of MMAPPED"
    # This does not work until I hit a minimum length
    # due to (I believe) the checksums in the bz2 algorithm
    for x in range(len(mapped)+2):
        line = mapped[0:x]
        try:
            print x
            print bz2.decompress(line)
        except:
            pass

    # I can decompress the entire mmapped file
    print ":entire mmap file:"
    print bz2.decompress(mapped)

# I can create a bz2File object from the file path
# Is there a way to map the mmap object to this function?
print ":BZ2 File readline:"
bzF = bz2.BZ2File("bz2TestFile")
# Seek to specific offset
bzF.seek(22)
# Read the data
print bzF.readline()
This all makes me wonder though, what is special about the bz2 file object that allows it to read a line after seeking? Does it have to read every line before it to get the checksums from the algorithm to work out correctly?
I found an answer! James Taylor wrote a couple of scripts for seeking in BZ2 files; they are part of his bx-python package.
https://bitbucket.org/james_taylor/bx-python/overview
These work pretty well, although they do not allow seeking to arbitrary byte offsets in the BZ2 file; instead, his scripts read out blocks of BZ2 data and allow seeking based on those blocks.
In particular, see bx-python / wiki / IO / SeekingInBzip2Files
