I have a giant file, let's call it one-csv-file.xz. It is an XZ-compressed CSV file.
How can I open and parse through the file without first decompressing it to disk? What if the file is, for example, 100 GB? Python cannot read all of that into memory at once, of course. Will it page or run out of memory?
You can iterate through an LZMAFile object:

import lzma  # Python 3; try the lzmaffi package on Python 2

with open('one-csv-file.xz', 'rb') as compressed:  # open in binary mode
    with lzma.LZMAFile(compressed) as uncompressed:
        for line in uncompressed:
            do_stuff_with(line)
You can also decompress incrementally; see "Compression using the LZMA Algorithm" in the lzma documentation. You create an LZMADecompressor object and then call its decompress method with successive chunks of the compressed data to get successive chunks of the uncompressed data.
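A minimal sketch of that incremental approach (handle_uncompressed_bytes is a hypothetical placeholder; in practice you would also need to reassemble lines that span chunk boundaries):

import lzma

decompressor = lzma.LZMADecompressor()
with open('one-csv-file.xz', 'rb') as compressed:
    while True:
        chunk = compressed.read(64 * 1024)       # read 64 KiB of compressed data at a time
        if not chunk:
            break
        data = decompressor.decompress(chunk)    # may be empty until enough input has arrived
        if data:
            handle_uncompressed_bytes(data)      # hypothetical handler for the uncompressed bytes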
Related
I have a small application that reads local files using:
open(diefile_path, 'r') as csv_file
open(diefile_path, 'r') as file
and also uses the linecache module.
I need to expand this to files that are sent from a remote server.
The content received from the server is of type bytes.
I couldn't find much information about handling the BytesIO type, and I was wondering whether there is a way to convert the bytes chunk into a file-like object.
My goal is to use the APIs specified above (open, linecache).
I was able to convert the bytes into a string using data.decode("utf-8"),
but I can't use the methods above (open and linecache)
A small example to illustrate:

data = b'First line\nSecond line\nThird line\n'
with open(data) as file:
    line = file.readline()
    print(line)
output:
First line
Second line
Third line
can it be done?
open is used to open actual files, returning a file-like object. Here, you already have the data in memory, not in a file, so you can instantiate the file-like object directly.
import io

data = b'First line\nSecond line\nThird line\n'
file = io.StringIO(data.decode())
for line in file:
    print(line.strip())
However, if what you are getting is really just a newline-separated string, you can simply split it into a list directly.
lines = data.decode().strip().split('\n')
The main difference is that the StringIO version is slightly lazier; it has a smaller memory footprint than the list, as it splits strings off as they are requested by the iterator.
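If you also need linecache-style access to a specific line by number, a minimal sketch under the same assumptions (the helper name is mine; line numbers are 1-based, as in linecache.getline):

lines = data.decode().splitlines()

def getline_from_bytes(lineno):
    # hypothetical helper mirroring linecache.getline(filename, lineno)
    return lines[lineno - 1] if 0 < lineno <= len(lines) else ''

print(getline_from_bytes(2))  # -> 'Second line'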
The answer above using StringIO needs to specify an encoding, which may cause a wrong conversion.
From the Python documentation, using BytesIO:
from io import BytesIO
f = BytesIO(b"some initial binary data: \x00\x01")
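A BytesIO object can also be iterated line by line directly; note that it yields bytes rather than str. A minimal sketch:

from io import BytesIO

data = b'First line\nSecond line\nThird line\n'
for line in BytesIO(data):
    print(line)  # each line is a bytes object ending in b'\n'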
In Bash, when you gzip a file, the original is not retained, whereas in Python, you could use the gzip library like this (as shown here in the "Examples of Usage" section):
import gzip
import shutil

with open('/home/joe/file.txt', 'rb') as f_in:
    with gzip.open('/home/joe/file.txt.gz', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
By default, this retains the original file. I couldn't find a way to not retain it while gzipping. Do I have to wait till gzip is done to delete the file?
If you are on a unix-like system, you can unlink the file after opening so that it is no longer found in the file system. But it will still take disk space until you close the now-anonymous file.
import gzip
import shutil
import os

with open('deleteme', 'rb') as f_in:
    with gzip.open('deleteme.gz', 'wb') as f_out:
        os.unlink('deleteme')  # *after* we knew the gzip open worked!
        shutil.copyfileobj(f_in, f_out)
As far as I know, this doesn't work on Windows. There you need to do the removal after the zip process completes. Alternatively, you could change the file's name to something like "thefile.temporary", or even move it to a different directory first (fast if the destination is on the same file system, but a copy if it's on a different one).
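A minimal sketch of that rename-first approach (the file names are placeholders; os.replace also works on Windows):

import gzip
import os
import shutil

src = 'thefile'              # hypothetical original file
tmp = src + '.temporary'     # the rename suggested above

os.replace(src, tmp)         # cheap rename on the same file system
with open(tmp, 'rb') as f_in:
    with gzip.open(src + '.gz', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
os.remove(tmp)               # delete the renamed original only after gzip finished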
Considering that when GZip runs (in Bash, or anywhere else for that matter):
GZip requires the original data to perform the zipping action
GZip is designed to handle data of basically arbitrary size
Therefore, GZip isn't likely to be creating a temporary copy in memory; rather, it is almost certainly deleting the original after the compression is done anyway.
With those points in mind, an identical strategy for your code is to do the gzip, then delete the file.
Certainly deleting the file isn't onerous — there are several ways to do it — and of course you could package the whole thing in a procedure so as to never have to concern yourself with it again.
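For instance, a minimal sketch of such a wrapper (the function name is mine, not part of any library):

import gzip
import os
import shutil

def gzip_and_delete(path):
    """Compress path to path + '.gz', then remove the original file."""
    with open(path, 'rb') as f_in:
        with gzip.open(path + '.gz', 'wb') as f_out:
            shutil.copyfileobj(f_in, f_out)
    os.remove(path)  # only reached if compression completed without an exception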
The code below (partially based on tdelaney's answer) will do the following:
read the file, compressing on the fly, and storing all the compressed data in memory
delete the input file
then write the compressed data
This is for the use case where you have a full filesystem, which prevents you from writing the compressed data at the same time that the uncompressed file exists on disk. To get around this problem, it is therefore necessary to store all the data in memory (unless you have access to external storage), but to minimise this memory cost as far as possible, only the compressed data is fully stored in memory, while the uncompressed data is read in chunks.
There is of course a risk of data loss if the program is interrupted between deleting the input file and completing writing the compressed data to disk.
There is also the possibility of failure if there is insufficient memory, but the input file would not be deleted in that case because the MemoryError would be raised before the os.unlink is reached.
It is worth noting that this does not specifically answer what the question asks for, namely deleting the input file while still reading from it. This is possible under unix-like OSes, but there is no practical advantage in doing this over the regular command-line gzip behaviour, because freeing the disk space still does not happen until the file is closed, so it sacrifices recoverability in the event of failure, without gaining any additional space to juggle data in exchange for that sacrifice. (There would still need to be disk space for the uncompressed and compressed data to coexist.)
import gzip
import shutil
import os
from io import BytesIO

filename = 'deleteme'
buf = BytesIO()

# compress into memory - don't store all the uncompressed data in memory,
# but do store all the compressed data in memory
with open(filename, 'rb') as fin:
    with gzip.open(buf, 'wb') as zbuf:
        shutil.copyfileobj(fin, zbuf)

# sanity check for already-compressed data
length = buf.tell()
if length > os.path.getsize(filename):
    raise RuntimeError("data *grew* in size - refusing to delete input")

# delete input file and then write out the compressed data
buf.seek(0)
os.unlink(filename)
with open(filename + '.gz', 'wb') as fout:
    shutil.copyfileobj(buf, fout)
I've worked with decompressing and reading files on the fly in memory with the bz2 library. However, I've read through the documentation and can't seem to simply decompress the file into a brand-new file on the file system, containing the decompressed data, without holding it all in memory. Sure, you could read line by line using BZ2Decompressor and then write that to a file, but that would be insanely slow (decompressing massive files, 50 GB+). Is there some method or library I have overlooked to achieve the same functionality as the terminal command bzip2 -d myfile.ext.bz2 in Python, without a hacky solution involving a subprocess to call that terminal command?
Example of why bz2 is so slow:

Decompressing that file via bzip2 -d: 104 seconds

Analytics on a decompressed file (just reading line by line): 183 seconds

with open(file_src) as x:
    for l in x:
        ...

Decompressing the file and running analytics in one pass: over 600 seconds (this should take at most 104 + 183)

if file_src.endswith(".bz2"):
    bz_file = bz2.BZ2File(file_src)
    for l in bz_file:
        ...
You could use the bz2.BZ2File object which provides a transparent file-like handle.
(Edit: you seem to use that already, but don't use readlines() on a binary file, or on a text file either, because in your case the block size isn't big enough, which explains why it's slow.)
Then use shutil.copyfileobj to copy to the write handle of your output file (you can adjust block size if you can afford the memory)
import bz2
import shutil

with bz2.BZ2File("file.bz2") as fr, open("output.bin", "wb") as fw:
    shutil.copyfileobj(fr, fw)
Even if the file is big, it doesn't take more memory than the block size. Adjust the block size like this:
shutil.copyfileobj(fr, fw, length=1000000)  # read in 1 MB chunks
For smaller files that you can store in memory before you save to a file, you can use bz2.open to decompress the file and save it as an uncompressed new file.
import bz2

# decompress data
with bz2.open('compressed_file.bz2', 'rb') as f:
    uncompressed_content = f.read()

# store decompressed file
with open('new_uncompressed_file.dat', 'wb') as f:
    f.write(uncompressed_content)
I have ~1GB *.tbz files. Inside each of those files there is a single ~9GB file. I just need to read the header of this file, the first 1024 bytes.
I want to do this as fast as possible, as I have hundreds of these 1 GB files to process. It takes about 1m30s to extract.
I tried using full extraction:
tar = tarfile.open(fn, mode='r|bz2')
for item in tar:
    tar.extract(item)
and tarfile.getmembers(), but with no speed improvement:
tar = tarfile.open(fn, mode='r|bz2')
for member in tar.getmembers():
    f = tar.extractfile(member)
    headerbytes = f.read(1024)
    headerdict = parseHeader(headerbytes)
The getmembers() method is what's taking all the time there.
Is there any way I can do this?
I think you should use the standard library bz2 interface. .tbz is the file extension for tar files that are compressed with the -j option to specify a bzip2 format.
As #bbayles pointed out in the comments, you can open your file as a bz2.BZ2File and use seek and read:
read([size])
    Read at most size uncompressed bytes, returned as a string. If the size argument is negative or omitted, read until EOF is reached.

seek(offset[, whence])
    Move to new file position. Argument offset is a byte count.
f = bz2.BZ2File(path)
f.seek(512)
headerbytes = f.read(1024)
You can then parse that with your functions.
headerdict = parseHeader(headerbytes)
If you're sure that every tar archive will contain only a single bz2 file, you can simply skip the first 512 bytes when first reading the tar file (NOT the bz2 file contained in it, of course), because the tar file format has a padded (fixed size) header, after which your "real" content is stored.
A simple
f.seek(512)
instead of looping over getmembers() should do the trick.
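Putting the two ideas together for many archives, a rough sketch (the glob pattern is a placeholder, and parseHeader is the function from the question):

import bz2
import glob

for fn in glob.glob('*.tbz'):              # hypothetical pattern matching the archives
    with bz2.BZ2File(fn) as f:
        f.seek(512)                        # skip the fixed-size tar header block
        headerbytes = f.read(1024)         # first 1024 bytes of the file inside the archive
    headerdict = parseHeader(headerbytes)  # parseHeader is defined in the question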
So I'm playing with the Wikipedia dump file. It's an XML file that has been bzipped. I can write all the files to directories, but then when I want to do analysis, I have to reread all the files from disk. This gives me random access, but it's slow. I have enough RAM to hold the entire bzipped file in memory.
I can load the dump file just fine and read all the lines, but I cannot seek in it as it's gigantic. From what it seems, the bz2 library has to read and capture the offset before it can bring me there (and decompress it all, as the offset is in decompressed bytes).
Anyway, I'm trying to mmap the dump file (~9.5 gigs) and load it into bzip. I obviously want to test this on a bzip file before.
I want to map the mmap file to a BZ2File so I can seek through it (to get to a specific, uncompressed byte offset), but from what it seems, this is impossible without decompressing the entire mmap file (this would be well over 30 gigabytes).
Do I have any options?
Here's some code I wrote to test.
import bz2
import mmap

lines = '''This is my first line
This is the second
And the third
'''

with open("bz2TestFile", "wb") as f:
    f.write(bz2.compress(lines))

with open("bz2TestFile", "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)

    print "Part of MMAPPED"
    # This does not work until I hit a minimum length
    # due to (I believe) the checksums in the bz2 algorithm
    for x in range(len(mapped)+2):
        line = mapped[0:x]
        try:
            print x
            print bz2.decompress(line)
        except:
            pass

    # I can decompress the entire mmapped file
    print ":entire mmap file:"
    print bz2.decompress(mapped)

# I can create a BZ2File object from the file path.
# Is there a way to map the mmap object to this function?
print ":BZ2 File readline:"
bzF = bz2.BZ2File("bz2TestFile")
# Seek to a specific offset
bzF.seek(22)
# Read the data
print bzF.readline()
This all makes me wonder though, what is special about the bz2 file object that allows it to read a line after seeking? Does it have to read every line before it to get the checksums from the algorithm to work out correctly?
I found an answer! James Taylor wrote a couple of scripts for seeking in BZ2 files, and they are part of the bx-python module.
https://bitbucket.org/james_taylor/bx-python/overview
These work pretty well, although they do not allow seeking to arbitrary byte offsets in the BZ2 file; instead, his scripts read out blocks of BZ2 data and allow seeking based on those blocks.
In particular, see bx-python / wiki / IO / SeekingInBzip2Files