I have ~1GB *.tbz files. Inside each of them there is a single ~9GB file. I only need to read the header of that file, the first 1024 bytes.
I want to do this as fast as possible, as I have hundreds of these 1GB files to process. A full extraction takes about 1m30s.
I tried using full extraction:
tar = tarfile.open(fn, mode='r|bz2')
for item in tar:
    tar.extract(item)
and tarfile.getmembers(), but with no speed improvement:
tar = tarfile.open(fn, mode='r|bz2')
for member in tar.getmembers():
    f = tar.extractfile(member)
    headerbytes = f.read(1024)
    headerdict = parseHeader(headerbytes)
The getmembers() method is what's taking all the time there.
Is there any way I can do this faster?
I think you should use the standard library's bz2 interface. .tbz is the file extension for tar archives compressed with bzip2 (tar's -j option).
As @bbayles pointed out in the comments, you can open your file as a bz2.BZ2File and use seek and read:
read([size])
Read at most size uncompressed bytes, returned as a
string. If the size argument is negative or omitted, read until EOF is
reached.
seek(offset[, whence])
Move to new file position. Argument offset is a
byte count.
import bz2

f = bz2.BZ2File(path)
f.seek(512)
headerbytes = f.read(1024)
You can then parse that with your functions.
headerdict = parseHeader(headerbytes)
If you're sure that every archive contains only a single file, you can simply skip the first 512 bytes of the decompressed tar stream (NOT of the .tbz file itself, of course), because the tar format stores a fixed-size 512-byte header block before each member's actual content.
A simple
f.seek(512)
instead of looping over getmembers() should do the trick.
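Putting the two together, a minimal sketch (the helper name is illustrative; fn and parseHeader come from the question, and it assumes a single member whose data starts right after the 512-byte tar header block):

import bz2

def read_member_header(path, header_size=1024):
    # seek() on a BZ2File decompresses and discards bytes up to the offset,
    # so this skips the 512-byte tar header block without extracting anything
    with bz2.BZ2File(path) as f:
        f.seek(512)
        return f.read(header_size)

headerdict = parseHeader(read_member_header(fn))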
I have a .gz file and I need to get the name of files inside it using python.
This question is the same as this one. The only difference is that my file is .gz, not .tar.gz, so the tarfile library did not help me here.
I am using the requests library to request a URL. The response is a compressed file.
Here is the code I am using to download the file:
response = requests.get(line.rstrip(), stream=True)
if response.status_code == 200:
    with open(str(base_output_dir)+"/"+str(current_dir)+"/"+str(count)+".gz", 'wb') as out_file:
        shutil.copyfileobj(response.raw, out_file)
del response
This code downloads the file with a name like 1.gz. Now, if I open the file with an archive manager, it contains something like my_latest_data.json.
I need to extract the file so that the output is my_latest_data.json.
Here is the code I am using to extract the file:
inF = gzip.open(f, 'rb')
outfilename = f.split(".")[0]
outF = open(outfilename, 'wb')
outF.write(inF.read())
inF.close()
outF.close()
The outfilename variable is a string I provide in the script, but I need the real file name (my_latest_data.json).
You can't, because Gzip is not an archive format.
That's a bit of a crap explanation on its own, so let me break this down a bit more than I did in the comment...
It's just compression
Being "just a compression system" means that Gzip operates on input bytes (usually from a file) and outputs compressed bytes. You cannot know whether or not the bytes inside represent multiple files or just a single file -- it is just a stream of bytes that has been compressed. That is why you can accept gzipped data over a network, for example. It's bytes_in -> bytes_out.
What's a manifest?
A manifest is a header within an archive that acts as a table of contents for the archive. Note that now I am using the term "archive" and not "compressed stream of bytes". An archive implies that it is a collection of files or segments that are referred to by a manifest -- a compressed stream of bytes is just a stream of bytes.
What's inside a Gzip, anyway?
A somewhat simplified description of a .gz file's contents is:
A header with a magic number to indicate it's gzip, the compression method, and a timestamp (10 bytes)
Optional headers, usually including the original filename (if the compression target was a file)
The body -- the compressed payload
An 8-byte trailer: a CRC-32 checksum plus the length of the uncompressed data
That's it. No manifest.
Archive formats, on the other hand, have a manifest inside. That's where the tar library would come in. Tar is just a way to shove a bunch of files together into a single stream, recording a header for each member so you know the original file names and what sizes they were before being concatenated into the archive. Hence, .tar.gz being so common.
There are utilities that allow you to decompress parts of a gzipped file at a time, or decompress it only in memory to then let you examine a manifest or whatever that may be inside. But the details of any manifest are specific to the archive format contained inside.
Note that this is different from a zip archive. Zip is an archive format, and as such contains a manifest. Gzip is a compression library, like bzip2 and friends.
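To see the difference in code (the file names here are placeholders, not from the question): tarfile can list an archive's members, while the gzip module only gives you back a decompressed byte stream with nothing to list.

import gzip
import tarfile

# An archive format: tar knows the names (and sizes) of its members
with tarfile.open("some_archive.tar.gz", "r:gz") as tf:
    print(tf.getnames())        # e.g. ['my_latest_data.json']

# A compression format: a bare .gz is just one compressed byte stream
with gzip.open("1.gz", "rb") as gz:
    payload = gz.read()         # the decompressed bytes; there is no member list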
As noted in the other answer, your question can only make sense if I take out the plural: "I have a .gz file and I need to get the name of file inside it using python."
A gzip header may or may not have a file name in it. The gzip utility will normally ignore the name in the header, and decompress to a file with the same name as the .gz file, but with the .gz stripped. E.g. your 1.gz would decompress to a file named 1, even if the header has the file name my_latest_data.json in it. The -N option of gzip will use the file name in the header (as well as the time stamp in the header), if there is one. So gzip -dN 1.gz would create the file my_latest_data.json, instead of 1.
You can find the file name in the header in Python by processing the header manually. You can find the details in the gzip specification.
Verify that the first three bytes are 1f 8b 08.
Save the fourth byte. Call it flags. If flags & 8 is zero, then give up -- there is no file name in the header.
Skip the next six bytes (the timestamp, extra flags, and OS byte).
If flags & 4 is not zero, then read the next two bytes. Considering them to be in little-endian order, make an integer out of those two bytes, calling it xlen. Then skip xlen bytes (the extra field).
You are now at the file name. (The header CRC, present when flags & 2 is set, comes after the file name, so it does not need to be skipped first.) Read bytes until you get to a zero byte. The bytes up to, but not including, the zero byte are the file name.
Note: This answer is obsolete as of Python 3.
Using the tips from the Mark Adler reply and a bit of inspection of the gzip module, I've set up this function that extracts the internal filename from gzip files. I noticed that GzipFile objects have a private method called _read_gzip_header() that almost gets the filename, so I based this on it:
import gzip

def get_gzip_filename(filepath):
    f = gzip.open(filepath)
    f._read_gzip_header()
    f.fileobj.seek(0)
    f.fileobj.read(3)
    flag = ord(f.fileobj.read(1))
    mtime = gzip.read32(f.fileobj)
    f.fileobj.read(2)
    if flag & gzip.FEXTRA:
        # Read & discard the extra field, if present
        xlen = ord(f.fileobj.read(1))
        xlen = xlen + 256*ord(f.fileobj.read(1))
        f.fileobj.read(xlen)
    filename = ''
    if flag & gzip.FNAME:
        while True:
            s = f.fileobj.read(1)
            if not s or s == '\000':
                break
            else:
                filename += s
    return filename or None
The Python 3 gzip library discards this information, but you could adapt the code from around the link to do something else with it.
As noted in other answers on this page, this information is optional anyway. But it's not impossible to retrieve if you want to check whether it's there.
import struct

def gzinfo(filename):
    # Copy+paste from gzip.py line 16
    FTEXT, FHCRC, FEXTRA, FNAME, FCOMMENT = 1, 2, 4, 8, 16

    with open(filename, 'rb') as fp:
        # Basically copy+paste from GzipFile module line 429f
        magic = fp.read(2)
        if magic == b'':
            return False
        if magic != b'\037\213':
            raise ValueError('Not a gzipped file (%r)' % magic)
        method, flag, _last_mtime = struct.unpack("<BBIxx", fp.read(8))
        if method != 8:
            raise ValueError('Unknown compression method')
        if flag & FEXTRA:
            # Read & discard the extra field, if present
            extra_len, = struct.unpack("<H", fp.read(2))
            fp.read(extra_len)
        if flag & FNAME:
            fname = []
            while True:
                s = fp.read(1)
                if not s or s == b'\000':
                    break
                fname.append(s.decode('latin-1'))
            return ''.join(fname)

def main():
    from sys import argv
    for filename in argv[1:]:
        print(filename, gzinfo(filename))

if __name__ == '__main__':
    main()
This replaces the exceptions in the original code with a vague ValueError exception (you might want to fix that if you intend to use this more broadly and turn it into a proper module you can import), and it uses the generic read() function instead of the specific _read_exact() method, which goes to some trouble to ensure that it got exactly the number of bytes it requested (that too could be carried over if you wanted).
I've worked with decompressing and reading files on the fly in memory with the bz2 library. However, I've read through the documentation and can't seem to find a way to simply decompress a file into a brand-new file on the file system without holding the decompressed data in memory. Sure, you could read line by line using BZ2Decompressor and then write that to a file, but that would be insanely slow (decompressing massive files, 50GB+). Is there some method or library I have overlooked to achieve the same functionality as the terminal command bzip2 -d myfile.ext.bz2 in Python, without a hacky solution involving a subprocess to call that terminal command?
Example of why bz2 is so slow:
Decompressing that file via bzip2 -d: 104 seconds
Analytics on the decompressed file (just reading it line by line): 183 seconds
with open(file_src) as x:
    for l in x:
Decompressing the file and running the analytics on it directly: over 600 seconds (this time should be at most 104 + 183)
if file_src.endswith(".bz2"):
    bz_file = bz2.BZ2File(file_src)
    for l in bz_file:
You could use the bz2.BZ2File object which provides a transparent file-like handle.
(edit: you seem to use that already, but don't use readlines() on a binary file or on a text file, because in your case the block size isn't big enough, which explains why it's slow)
Then use shutil.copyfileobj to copy to the write handle of your output file (you can adjust block size if you can afford the memory)
import bz2, shutil

with bz2.BZ2File("file.bz2") as fr, open("output.bin", "wb") as fw:
    shutil.copyfileobj(fr, fw)
Even if the file is big, it doesn't take more memory than the block size. Adjust the block size like this:
shutil.copyfileobj(fr,fw,length = 1000000) # read by 1MB chunks
For smaller files that you can store in memory before you save to a file, you can use bz2.open to decompress the file and save it as an uncompressed new file.
import bz2

# decompress data
with bz2.open('compressed_file.bz2', 'rb') as f:
    uncompressed_content = f.read()

# store decompressed file
with open('new_uncompressed_file.dat', 'wb') as f:
    f.write(uncompressed_content)
I want to compress files and compute the checksum of the compressed file using python. My first naive attempt was to use 2 functions:
def compress_file(input_filename, output_filename):
    f_in = open(input_filename, 'rb')
    f_out = gzip.open(output_filename, 'wb')
    f_out.writelines(f_in)
    f_out.close()
    f_in.close()

def md5sum(filename):
    with open(filename) as f:
        md5 = hashlib.md5(f.read()).hexdigest()
    return md5
However, it leads to the compressed file being written and then re-read. With many files (> 10 000), each several MB when compressed, on an NFS-mounted drive, it is slow.
How can I compress the file in a buffer and then compute the checksum from this buffer before writing the output file?
The files are not that big, so I can afford to store everything in memory. However, an incremental version would be nice too.
The last requirement is that it should work with multiprocessing (in order to compress several files in parallel).
I have tried to use zlib.compress, but the returned string misses the header of a gzip file.
Edit: following @abarnert's suggestion, I used Python 3's gzip.compress:
def compress_md5(input_filename, output_filename):
    f_in = open(input_filename, 'rb')
    # Read in buffer
    buff = f_in.read()
    f_in.close()
    # Compress this buffer
    c_buff = gzip.compress(buff)
    # Compute MD5
    md5 = hashlib.md5(c_buff).hexdigest()
    # Write compressed buffer
    f_out = open(output_filename, 'wb')
    f_out.write(c_buff)
    f_out.close()
    return md5
This produces a correct gzip file, but the output is different at each run (the md5 is different):
>>> compress_md5('4327_010.pdf', '4327_010.pdf.gz')
'0d0eb6a5f3fe2c1f3201bc3360201f71'
>>> compress_md5('4327_010.pdf', '4327_010.pdf.gz')
'8e4954ab5914a1dd0d8d0deb114640e5'
The gzip program doesn't have this problem:
$ gzip -c 4327_010.pdf | md5sum
8965184bc4dace5325c41cc75c5837f1 -
$ gzip -c 4327_010.pdf | md5sum
8965184bc4dace5325c41cc75c5837f1 -
I guess it's because the gzip module uses the current time by default when creating a file (the gzip program, I guess, uses the modification time of the input file). There is no way to change that with gzip.compress.
I was thinking of creating a gzip.GzipFile in read/write mode, controlling the mtime, but there is no such mode for gzip.GzipFile.
Inspired by @zwol's suggestion, I wrote the following function, which correctly sets the filename and the OS (Unix) in the header:
def compress_md5(input_filename, output_filename):
    f_in = open(input_filename, 'rb')
    # Read data in buffer
    buff = f_in.read()
    # Create output buffer
    c_buff = cStringIO.StringIO()
    # Create gzip file
    input_file_stat = os.stat(input_filename)
    mtime = input_file_stat[8]
    gzip_obj = gzip.GzipFile(input_filename, mode="wb", fileobj=c_buff, mtime=mtime)
    # Compress data in memory
    gzip_obj.write(buff)
    # Close files
    f_in.close()
    gzip_obj.close()
    # Retrieve compressed data
    c_data = c_buff.getvalue()
    # Change OS value
    c_data = c_data[0:9] + '\003' + c_data[10:]
    # Really write compressed data
    f_out = open(output_filename, "wb")
    f_out.write(c_data)
    f_out.close()
    # Compute MD5
    md5 = hashlib.md5(c_data).hexdigest()
    return md5
The output is the same across runs. Moreover, the output of file is the same as for gzip:
$ gzip -9 -c 4327_010.pdf > ref_max/4327_010.pdf.gz
$ file ref_max/4327_010.pdf.gz
ref_max/4327_010.pdf.gz: gzip compressed data, was "4327_010.pdf", from Unix, last modified: Tue May 5 14:28:16 2015, max compression
$ file 4327_010.pdf.gz
4327_010.pdf.gz: gzip compressed data, was "4327_010.pdf", from Unix, last modified: Tue May 5 14:28:16 2015, max compression
However, md5 is different:
$ md5sum 4327_010.pdf.gz ref_max/4327_010.pdf.gz
39dc3e5a52c71a25c53fcbc02e2702d5 4327_010.pdf.gz
213a599a382cd887f3c4f963e1d3dec4 ref_max/4327_010.pdf.gz
gzip -l is also different:
$ gzip -l ref_max/4327_010.pdf.gz 4327_010.pdf.gz
compressed uncompressed ratio uncompressed_name
7286404 7600522 4.1% ref_max/4327_010.pdf
7297310 7600522 4.0% 4327_010.pdf
I guess it's because the gzip program and the Python gzip module (which is based on the C library zlib) have slightly different algorithms.
Wrap a gzip.GzipFile object around an io.BytesIO object. (In Python 2, use cStringIO.StringIO instead.) After you close the GzipFile, you can retrieve the compressed data from the BytesIO object (using getvalue), hash it, and write it out to a real file.
Incidentally, you really shouldn't be using MD5 at all anymore.
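A minimal sketch of that wrap-around approach (the function name is illustrative; mtime is pinned to 0 here so repeated runs produce identical output, and MD5 is kept only because that is what the question computes):

import gzip
import hashlib
import io

def gzip_to_buffer_and_md5(input_filename, output_filename):
    # Compress into an in-memory buffer instead of an on-disk file
    buf = io.BytesIO()
    with open(input_filename, 'rb') as f_in:
        with gzip.GzipFile(filename=input_filename, mode='wb',
                           fileobj=buf, mtime=0) as gz:
            gz.write(f_in.read())
    c_data = buf.getvalue()                  # complete gzip stream; GzipFile is closed
    md5 = hashlib.md5(c_data).hexdigest()
    with open(output_filename, 'wb') as f_out:
        f_out.write(c_data)
    return md5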
I have tried to use zlib.compress, but the returned string misses the header of a gzip file.
Of course. That's the whole difference between the zlib module and the gzip module; zlib just deals with zlib-deflate compression without gzip headers, gzip deals with zlib-deflate data with gzip headers.
So, just call gzip.compress instead, and the code you wrote but didn't show us should just work.
As a side note:
with open(filename) as f:
    md5 = hashlib.md5(f.read()).hexdigest()
You almost certainly want to open the file in 'rb' mode here. You don't want to convert '\r\n' into '\n' (if on Windows), or decode the binary data as sys.getdefaultencoding() text (if on Python 3), so open it in binary mode.
Another side note:
Don't use line-based APIs on binary files. Instead of this:
f_out.writelines(f_in)
… do this:
f_out.write(f_in.read())
Or, if the files are too large to read into memory all at once:
from functools import partial

for buf in iter(partial(f_in.read, 8192), b''):
    f_out.write(buf)
And one last point:
With many files (> 10 000), each several MB when compressed, in a NFS mounted drive, it is slow.
Does your system not have a tmp directory mounted on a faster drive?
In most cases, you don't need a real file. Either there's a string-based API (zlib.compress, gzip.compress, json.dumps, etc.), or the file-based API only requires a file-like object, like a BytesIO.
But when you do need a real temporary file, with a real file descriptor and everything, you almost always want to create it in the temporary directory.* In Python, you do this with the tempfile module.
For example:
def compress_and_md5(filename):
    with tempfile.NamedTemporaryFile() as f_out:
        with open(filename, 'rb') as f_in:
            # write mode, and close the gzip stream so the trailer is flushed
            with gzip.GzipFile(fileobj=f_out, mode='wb') as g_out:
                g_out.write(f_in.read())
        f_out.seek(0)
        md5 = hashlib.md5(f_out.read()).hexdigest()
    return md5
If you need an actual filename for the temporary file, rather than a file object, you can use f_out.name.
* The one exception is when you only want the temporary file to eventually rename it to a permanent location. In that case, of course, you usually want the temporary file to be in the same directory as the permanent location. But you can do that with tempfile just as easily. Just remember to pass delete=False.
So I'm playing with the Wikipedia dump file. It's an XML file that has been bzipped. I can write all the files to directories, but then when I want to do analysis, I have to reread all the files on the disk. That gives me random access, but it's slow. I have the RAM to keep the entire bzipped file in memory.
I can load the dump file just fine and read all the lines, but I cannot seek in it as it's gigantic. From what I can tell, the bz2 library has to read and capture the offset before it can bring me there (and decompress it all, as the offset is in decompressed bytes).
Anyway, I'm trying to mmap the dump file (~9.5 GB) and load it into bz2. I obviously want to test this on a smaller bzip2 file first.
I want to map the mmap file to a BZ2File so I can seek through it (to get to a specific uncompressed byte offset), but from what I can tell, this is impossible without decompressing the entire mmapped file (which would be well over 30 gigabytes).
Do I have any options?
Here's some code I wrote to test.
import bz2
import mmap

lines = '''This is my first line
This is the second
And the third
'''

with open("bz2TestFile", "wb") as f:
    f.write(bz2.compress(lines))

with open("bz2TestFile", "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)

    print "Part of MMAPPED"
    # This does not work until I hit a minimum length
    # due to (I believe) the checksums in the bz2 algorithm
    for x in range(len(mapped)+2):
        line = mapped[0:x]
        try:
            print x
            print bz2.decompress(line)
        except:
            pass

    # I can decompress the entire mmapped file
    print ":entire mmap file:"
    print bz2.decompress(mapped)

# I can create a bz2File object from the file path
# Is there a way to map the mmap object to this function?
print ":BZ2 File readline:"
bzF = bz2.BZ2File("bz2TestFile")

# Seek to specific offset
bzF.seek(22)
# Read the data
print bzF.readline()
This all makes me wonder, though: what is special about the bz2 file object that allows it to read a line after seeking? Does it have to read everything before that point so that the checksums in the algorithm work out correctly?
I found an answer! James Taylor wrote a couple of scripts for seeking in BZ2 files, and they are part of the bx-python module.
https://bitbucket.org/james_taylor/bx-python/overview
These work pretty well, although they do not allow seeking to arbitrary byte offsets in the BZ2 file; his scripts read out blocks of BZ2 data and allow seeking based on those blocks.
In particular, see bx-python / wiki / IO / SeekingInBzip2Files
Each of my fastq files contains about 20 million reads (or 20 million lines). Now I need to split the big fastq files into chunks, each with only 1 million reads (or 1 million lines), for the ease of further analysis. A fastq file is just like .txt.
My thought is to just count the lines and write them out in groups of 1 million lines. But the input file is in .gz compressed form (fastq.gz); do I need to unzip it first?
How can I do this with python?
I tried the following command:
zless XXX.fastq.gz |split -l 4000000 prefix
(decompress first, then split the file)
However, it doesn't seem to work with a prefix (I also tried "-prefix"; it still doesn't work). Also, with the split command the output is like:
prefix-aa, prefix-ab...
If my prefix is XXX.fastq.gz, then the output will be XXX.fastq.gzab, which will destroy the .fastq.gz format.
So what I need is XXX_aa.fastq.gz, XXX_ab.fastq.gz (i.e. a suffix). How can I do that?
As posted here
zcat XXX.fastq.gz | split -l 1000000 --additional-suffix=".fastq" --filter='gzip > $FILE.gz' - "XXX_"
...I need to unzip it first.
No you don't, at least not by hand. gzip will allow you to open the compressed file, at which point you read out a certain number of bytes and write them out to a separate compressed file. See the examples at the bottom of the linked documentation to see how to both read and write compressed files.
with gzip.open(infile, 'rb') as inp:
    for <some number of loops>:
        with gzip.open(outslice, 'wb') as outp:
            outp.write(inp.read(slicesize))
    else:  # only if you're not sure that you got the whole thing
        with gzip.open(outslice, 'wb') as outp:
            outp.write(inp.read())
Note that gzip-compressed files are not random-accessible so you will need to perform the operation in one go unless you want to decompress the source file to disk first.
You can read a gzipped file just like an uncompressed file:
>>> import gzip
>>> for line in gzip.open('myfile.txt.gz', 'r'):
... process(line)
The process() function would handle the specific line-counting and conditional processing logic that you mentioned.
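As a rough sketch of that line-counting logic (the helper name and output naming pattern are illustrative, not from any of the answers above), reading the compressed input directly and writing each 1-million-read chunk back out gzipped could look like this:

import gzip
from itertools import islice

def split_fastq_gz(src, prefix, reads_per_chunk=1000000):
    lines_per_chunk = reads_per_chunk * 4      # a fastq record spans 4 lines
    with gzip.open(src, 'rt') as inp:
        part = 0
        while True:
            chunk = list(islice(inp, lines_per_chunk))
            if not chunk:
                break
            with gzip.open('%s_%03d.fastq.gz' % (prefix, part), 'wt') as outp:
                outp.writelines(chunk)
            part += 1

split_fastq_gz('XXX.fastq.gz', 'XXX')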