I seem to remember that the Python gzip module previously allowed you to read non-gzipped files transparently. This was really useful: it let you read an input file whether or not it was gzipped, without having to worry about it.
Now, I get an IOError exception (in Python 2.7.5):
Traceback (most recent call last):
  File "tst.py", line 14, in <module>
    rec = fd.readline()
  File "/sw/lib/python2.7/gzip.py", line 455, in readline
    c = self.read(readsize)
  File "/sw/lib/python2.7/gzip.py", line 261, in read
    self._read(readsize)
  File "/sw/lib/python2.7/gzip.py", line 296, in _read
    self._read_gzip_header()
  File "/sw/lib/python2.7/gzip.py", line 190, in _read_gzip_header
    raise IOError, 'Not a gzipped file'
IOError: Not a gzipped file
If anyone has a neat trick, I'd like to hear about it. Yes, I know how to catch the exception, but I find it rather clunky to first read a line, then close the file and open it again.
The best solution for this would be to use something like https://github.com/ahupp/python-magic with libmagic. You simply cannot avoid reading at least a header to identify a file (unless you implicitly trust file extensions).
If you're feeling spartan the magic number for identifying gzip(1) files is the first two bytes being 0x1f 0x8b.
In [1]: f = open('foo.html.gz')
In [2]: print `f.read(2)`
'\x1f\x8b'
gzip.open is just a wrapper around GzipFile, you could have a function like this that just returns the correct type of object depending on what the source is without having to open the file twice:
#!/usr/bin/python
import gzip
def opener(filename):
    f = open(filename, 'rb')
    # gzip streams begin with the two magic bytes 0x1f 0x8b
    if f.read(2) == '\x1f\x8b':
        f.seek(0)
        return gzip.GzipFile(fileobj=f)
    else:
        f.seek(0)
        return f
Maybe you're thinking of zless or zgrep, which will open compressed or uncompressed files without complaining.
Can you trust that the file name ends in .gz?
if file_name.endswith('.gz'):
    opener = gzip.open
else:
    opener = open

with opener(file_name, 'r') as f:
    ...
Read the first four bytes. If the first three are 0x1f, 0x8b, 0x08, and the high three bits of the fourth byte are zero, then start gzip decompression, feeding it those four bytes first. Otherwise, write out the four bytes and continue reading transparently.
You should still keep the clunky solution as a backup, so that if the gzip read fails anyway, you can back up and read transparently. But it is quite unlikely for the first four bytes to mimic a gzip header that well without actually being a gzip file.
You can iterate over files transparently using fileinput.input(files, openhook=fileinput.hook_compressed)
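A minimal sketch of that hook (the file names are hypothetical; note that, depending on the Python version, compressed files may yield bytes while plain files yield str, so the loop normalizes):

```python
import fileinput
import gzip

# One gzipped and one plain file, for illustration.
with gzip.open('a.txt.gz', 'wt') as f:
    f.write('from gzip\n')
with open('b.txt', 'w') as f:
    f.write('from plain\n')

lines = []
for line in fileinput.input(files=['a.txt.gz', 'b.txt'],
                            openhook=fileinput.hook_compressed):
    if isinstance(line, bytes):     # compressed files come back as bytes
        line = line.decode('utf-8')
    lines.append(line)
```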
UPD: I have opened the file and found this format. How can I decode 00000000....
I need to open a .ebc file in Python. The file is approximately 12 GB in size.
I have tried a huge number of tools and Python libraries for this, but it is obvious that I am doing something wrong: I can't find a suitable encoding.
I tried to read the file line by line because of its size.
Python ships code pages for EBCDIC, including "cp424" (Hebrew) and "cp500" (Western scripts).
Use it like this:
with open(path, encoding='cp500') as f:
    for line in f:
        # process one line of text
Note: if the file is 12 GB in size, you'll want to avoid calling f.read() or f.readlines(), as both would read the entire file into memory.
On a laptop, this is likely to freeze your system.
Instead, iterate over the contents line by line using Python's default line iteration.
If you just want to re-encode the file with a modern encoding, e.g. the very popular UTF-8, use the following pattern:
with open(in_path, encoding='cp500') as src, open(out_path, 'w', encoding='utf8') as dest:
    dest.writelines(src)
This should re-encode the file with a low-memory footprint, as it reads, converts and writes the contents line by line.
I'm trying to efficiently read in, and parse, a compressed text file using the gzip module. This link suggests wrapping the gzip file object with io.BufferedReader, like so:
import gzip, io

gz = gzip.open(in_path, 'rb')
f = io.BufferedReader(gz)
for line in f.readlines():
    # do stuff
gz.close()
To do this in Python 3, I think gzip must be called with mode='rb'. So the result is that line is a binary string. However, I need line to be a text/ascii string. Is there a more efficient way to read in the file as a text string using BufferedReader, or will I have to decode line inside the for loop?
You can use io.TextIOWrapper to seamlessly wrap a binary stream to a text stream instead:
f = io.TextIOWrapper(gz)
Or as #ShadowRanger pointed out, you can simply open the gzip file in text mode instead, so that the gzip module will apply the io.TextIOWrapper wrapper for you:
for line in gzip.open(in_path, 'rt'):
    # do stuff
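Both approaches can be sketched side by side (the sample file name is hypothetical, for illustration only):

```python
import gzip
import io

# Create a small sample file to read back.
with gzip.open('sample.gz', 'wt', encoding='ascii') as f:
    f.write('alpha\nbeta\n')

# Option 1: wrap the binary stream in io.TextIOWrapper yourself.
with gzip.open('sample.gz', 'rb') as gz:
    text = io.TextIOWrapper(gz, encoding='ascii')
    lines_wrapped = list(text)

# Option 2: open in text mode ('rt') and let gzip do the wrapping.
with gzip.open('sample.gz', 'rt', encoding='ascii') as f:
    lines_text = list(f)
```

Either way, each line comes back as str rather than bytes, so no per-line decode is needed.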
In python's OS module there is a method to open a file and a method to read a file.
The docs for the open method say:
Open the file file and set various flags according to flags and
possibly its mode according to mode. The default mode is 0777 (octal),
and the current umask value is first masked out. Return the file
descriptor for the newly opened file.
The docs for the read method say;
Read at most n bytes from file descriptor fd. Return a string
containing the bytes read. If the end of the file referred to by fd
has been reached, an empty string is returned.
I understand what it means to read n bytes from a file. But how does this differ from open?
"Opening" a file doesn't actually bring any of the data from the file into your program. It just prepares the file for reading (or writing), so when your program is ready to read the contents of the file it can do so right away.
Opening a file allows you to read from or write to it (depending on the flag you pass as the second argument), whereas reading actually pulls data from the file, typically to be saved into a variable for processing or printed as output.
You do not always read from a file once it is opened. Opening also allows you to write to a file, either by overwriting all the contents or appending to the contents.
To read from a file:
>>> myfile = open('foo.txt', 'r')
>>> myfile.read()
First you open the file with read permission (r)
Then you read() from the file
To write to a file:
>>> myfile = open('foo.txt', 'w')
>>> myfile.write('I am writing to foo.txt')
First you open the file with write permission (w), then you write() to it.
The only thing being done in line 1 of each of these examples is opening the file. Nothing is transferred until we actually read() from or write() to the file.
open gets you a fd (file descriptor); you can read from that fd later.
One may also open a file for other purposes, say, to write to it.
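The fd-based calls from the os module that the question asks about can be sketched like this (Python 3; the file name demo.txt is just for illustration):

```python
import os

# os.open only returns a file descriptor; no data is transferred yet.
fd = os.open('demo.txt', os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b'hello from os.write\n')
os.close(fd)

# Re-open for reading and pull bytes into the program with os.read.
fd = os.open('demo.txt', os.O_RDONLY)
data = os.read(fd, 100)  # read at most 100 bytes
os.close(fd)
```

Note that at this level you work with raw bytes and descriptors; the built-in open() used in the other answers returns a higher-level file object.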
It seems to me that you can read lines from the file handle without invoking the read() method, though I guess read() is what truly puts the data into a variable. In my course we seem to be printing lines, counting lines, and adding numbers from lines without using read().
The rstrip() method needs to be used, however, because iterating over the file handle with a for statement keeps the invisible line-break character at the end of each line, and the print statement adds its own.
From Python for Everybody by Charles Severance, this is the starter code.
"""
7.2
Write a program that prompts for a file name,
then opens that file and reads through the file,
looking for lines of the form:
X-DSPAM-Confidence: 0.8475
Count these lines and extract the floating point
values from each of the lines and compute the
average of those values and produce an output as
shown below. Do not use the sum() function or a
variable named sum in your solution.
You can download the sample data at
http://www.py4e.com/code3/mbox-short.txt when you
are testing below enter mbox-short.txt as the file name.
"""
# Use the file name mbox-short.txt as the file name
fname = input("Enter file name: ")
fh = open(fname)
for line in fh:
if not line.startswith("X-DSPAM-Confidence:") :
continue
print(line)
print("Done")
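One way to finish the exercise is to extract the float after the colon and keep a running count and total. Sketched as a function so the parsing is easy to test (the helper name average_confidence is mine, not from the course):

```python
def average_confidence(lines):
    """Average the float after 'X-DSPAM-Confidence:' over matching lines."""
    count = 0
    total = 0.0
    for line in lines:
        if not line.startswith("X-DSPAM-Confidence:"):
            continue
        total += float(line.split(':')[1])  # float() ignores the whitespace
        count += 1
    return total / count
```

In the program above you would then pass the open file handle to the function and print the result instead of printing each matching line.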
Without using zip64 extensions, a Zip file cannot be more than 2 GB in size, so trying to write to a file that would put it over that limit won't work. I expected that when such a write was attempted, an exception would be raised, but I've not been able to cause one to be raised. (The documentation is silent on the matter.) If an exception doesn't get raised in such a circumstance, how would I (efficiently) go about determining whether the write succeeded?
I've got an exception trying to write big strings to a zip archive:
$ python write-big-zip.py
Traceback (most recent call last):
  File "write-big-zip.py", line 7, in <module>
    myzip.writestr('arcname%d' % i, b'a'*2**30)
  File "/usr/lib/python2.7/zipfile.py", line 1125, in writestr
    self._writecheck(zinfo)
  File "/usr/lib/python2.7/zipfile.py", line 1020, in _writecheck
    raise LargeZipFile("Zipfile size would require ZIP64 extensions")
zipfile.LargeZipFile: Zipfile size would require ZIP64 extensions
Using the script:
#!/usr/bin/env python
"""Write big strings to zip file until error."""
from zipfile import ZipFile

with ZipFile('big.zip', 'w') as myzip:
    for i in range(4):
        myzip.writestr('arcname%d' % i, b'a' * 2**30)
import os

size = os.path.getsize("file")   # get the size of the file in bytes
size = size / 1073741824.0       # convert bytes to GB
if size < 2:  # < is probably safer than <=
    pass  # do the zipping here
else:
    print "The file is too large!"
This isn't ideal, of course, but it might serve as a temporary solution until a better one is found.
Again, I don't think this is a very good way of using zip. But if there were no appropriate exception (and there should be one), it could serve as a stopgap.
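Since Python's zipfile does raise zipfile.LargeZipFile when the limit would be exceeded (as the traceback above shows), a more direct approach is to catch that exception; a minimal sketch, with a hypothetical helper name:

```python
import zipfile

def safe_writestr(archive_path, arcname, data):
    """Write data into a new zip; return False if it would need ZIP64."""
    try:
        with zipfile.ZipFile(archive_path, 'w', allowZip64=False) as zf:
            zf.writestr(arcname, data)
        return True
    except zipfile.LargeZipFile:
        return False
```

Alternatively, passing allowZip64=True (the default in recent Python 3 versions) enables ZIP64 extensions and lifts the limit entirely.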
After some frustration with unzip(1L), I've been trying to create a script that will unzip and print out raw data from all of the files inside a zip archive that is coming from stdin. I currently have the following, which works:
import sys, zipfile, StringIO

stdin = StringIO.StringIO(sys.stdin.read())
zipselect = zipfile.ZipFile(stdin)
filelist = zipselect.namelist()
for filename in filelist:
    print filename, ':'
    print zipselect.read(filename)
When I try to add validation to check if it truly is a zip file, however, it doesn't like it.
...
zipcheck = zipfile.is_zipfile(zipselect)
if zipcheck is not None:
    print 'Input is not a zip file.'
    sys.exit(1)
...
results in
  File "/home/chris/simple/zipcat/zipcat.py", line 13, in <module>
    zipcheck = zipfile.is_zipfile(zipselect)
  File "/usr/lib/python2.7/zipfile.py", line 149, in is_zipfile
    result = _check_zipfile(fp=filename)
  File "/usr/lib/python2.7/zipfile.py", line 135, in _check_zipfile
    if _EndRecData(fp):
  File "/usr/lib/python2.7/zipfile.py", line 203, in _EndRecData
    fpin.seek(0, 2)
AttributeError: ZipFile instance has no attribute 'seek'
I assume it can't seek because it is not a file, as such?
Sorry if this is obvious, this is my first 'go' with Python.
You should pass stdin to is_zipfile, not zipselect. is_zipfile takes a path to a file or a file object, not a ZipFile.
See the zipfile.is_zipfile documentation.
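A Python 3 sketch of that fix, written as a function taking any binary stream (e.g. sys.stdin.buffer) so the buffering and the is_zipfile check are explicit; the name cat_zip is mine:

```python
import io
import zipfile

def cat_zip(stream):
    """Buffer a (possibly non-seekable) binary stream and read its members.
    Returns {name: bytes}, or None if the data is not a zip archive."""
    data = io.BytesIO(stream.read())
    if not zipfile.is_zipfile(data):   # check the buffer, not a ZipFile
        return None
    data.seek(0)                       # is_zipfile moves the file position
    with zipfile.ZipFile(data) as zf:
        return {name: zf.read(name) for name in zf.namelist()}
```

In the original script you would call cat_zip(sys.stdin.buffer) and print the returned members.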
You are correct that a ZipFile can't seek because it isn't a file. It's an archive, so it can contain many files.
To do this entirely in memory will take some work. The AttributeError message means that the is_zipfile method is trying to use the seek method of the file handle you provide. But standard input is not seekable, and therefore your file object for it has no seek method.
If you really, really can't store the file on disk temporarily, then you could buffer the entire file in memory (you would need to enforce a size limit for security), and then implement some "duck" code that looks and acts like a seekable file object but really just uses the byte-string in memory.
It is possible that you could cheat and buffer only enough of the data for is_zipfile to do its work, but I seem to recall that the table-of-contents for ZIP is at the end of the file. I could be wrong about that though.
Your 2011 Python 2 fragment was: StringIO.StringIO(sys.stdin.read())
In 2018 a Python 3 programmer might phrase that as: io.StringIO(...).
What you actually want here is the Python 3 fragment: io.BytesIO(...).
Certainly that works well for me when using the requests module to download binary ZIP files from webservers:
zf = zipfile.ZipFile(io.BytesIO(req.content))