I'm trying to extract data from a zip file in Python, but it's kind of slow. Could anyone take a look and see if I'm doing something that obviously slows it down?
from zipfile import ZipFile

def go_through_zip(zipname):
    out = {}
    with ZipFile(zipname) as z:
        for filename in z.namelist():
            with z.open(filename) as f:
                try:
                    outdict = make_dict(f)
                    out.update(outdict)
                except Exception:
                    print("File is not in the correct format")
    return out
make_dict(f) just takes the file object and builds a dictionary; that function is probably also slow, but it's not what I want to speed up right now.
Try using the following code for file extraction. It works fast as long as the files being extracted are of reasonable size.
# importing required modules
from zipfile import ZipFile

# specifying the zip file name
file_name = "my_python_files.zip"

# opening the zip file in READ mode
with ZipFile(file_name, 'r') as zip:
    # printing all the contents of the zip file
    zip.printdir()
    # extracting all the files
    print('Extracting all the files now...')
    zip.extractall()
    print('Done!')
As the title says, I'm downloading a .bz2 file which has a folder inside with a lot of text files.
My first version decompressed everything in memory. Although the archive is only 90 MB compressed, it contains 60 files of about 750 MB each, so uncompressed it needs roughly 40 GB of RAM, which my computer obviously can't handle.
So the problem is that the files are too big to keep in memory all at the same time. I'm now using this code, which works but is too slow:
import os
import tarfile
import requests

response = requests.get('https://fooweb.com/barfile.bz2')
# save file to disk:
compress_filepath = '{0}/files/sources/{1}'.format(zsets.BASE_DIR, check_time)
with open(compress_filepath, 'wb') as local_file:
    local_file.write(response.content)
# we extract the files into a folder
extract_folder = compress_filepath + '_ext'
with tarfile.open(compress_filepath, "r:bz2") as tar:
    tar.extractall(extract_folder)
# we process one file at a time:
for filename in os.listdir(extract_folder):
    filepath = '{0}/{1}'.format(extract_folder, filename)
    file = open(filepath, 'r').readlines()
    for line in file:
        some_processing(line)
Is there a way I could do this without dumping it to disk, decompressing and reading just one file from the .bz2 at a time?
Thank you very much in advance; I hope somebody knows how to help me with this.
#!/usr/bin/python3
import sys
import requests
import tarfile

got = requests.get(sys.argv[1], stream=True)
with tarfile.open(fileobj=got.raw, mode='r|*') as tar:  # 'r|*' streams without seeking
    for info in tar:
        if info.isreg():  # skip directories and special entries
            ent = tar.extractfile(info)
            # now process ent as a file, however you like
            print(info.name, len(ent.read()))
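If you need the member contents as text lines rather than bytes (as in the some_processing loop above), here is a minimal sketch of the same streaming approach, assuming the files are UTF-8 text and some_processing is your existing handler:

import io
import sys
import tarfile
import requests

def some_processing(line):  # placeholder for your real per-line handler
    pass

got = requests.get(sys.argv[1], stream=True)
with tarfile.open(fileobj=got.raw, mode='r|*') as tar:
    for info in tar:
        if info.isreg():
            ent = tar.extractfile(info)
            # wrap the binary stream so we can iterate over text lines
            for line in io.TextIOWrapper(ent, encoding='utf-8'):
                some_processing(line)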
I did it this way:
import io
import tarfile
import requests

response = requests.get(my_url_to_file)
memfile = io.BytesIO(response.content)

# We extract files in memory, one by one:
filecount = 0
with tarfile.open(fileobj=memfile, mode="r:bz2") as tar:
    for member_name in tar.getnames():
        filecount += 1
        member = tar.extractfile(member_name)
        if member is None:  # skip directories
            continue
        for line in io.TextIOWrapper(member):  # extractfile() returns a binary stream, so wrap it as text
            process_line(line)
test.txt contains the list of files to be downloaded:
http://example.com/example/afaf1.tif
http://example.com/example/afaf2.tif
http://example.com/example/afaf3.tif
http://example.com/example/afaf4.tif
http://example.com/example/afaf5.tif
How can these files be downloaded using Python with maximum download speed?
My thinking was as follows:

import urllib.request

with open('test.txt', 'r') as f:
    lines = f.read().splitlines()
    for line in lines:
        response = urllib.request.urlopen(line)
What comes after that? How do I select the download directory?
Select a path to your desired output directory (output_dir). In your for loop, split every URL on the / character and use the last piece as the filename. Also open the files for writing in binary mode ('wb'), since response.read() returns bytes, not str.
import os
import urllib.request

output_dir = 'path/to/your/output/dir'
with open('test.txt', 'r') as f:
    lines = f.read().splitlines()
    for line in lines:
        response = urllib.request.urlopen(line)
        output_file = os.path.join(output_dir, line.split('/')[-1])
        with open(output_file, 'wb') as writer:
            writer.write(response.read())
Note:
Downloading multiple files can be faster if you use multiple threads, since a single download rarely uses the full bandwidth of your internet connection.
Also, if the files you are downloading are pretty big, you should probably stream the read (reading chunk by chunk). As @Tiran commented, you should use shutil.copyfileobj(response, writer) instead of writer.write(response.read()).
I would only add that you should probably always specify the length parameter too: shutil.copyfileobj(response, writer, 5*1024*1024)  # (at least 5 MB), since the default value of 16 KB is really small and will just slow things down.
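Putting both notes together, a minimal sketch that streams each download with shutil.copyfileobj and runs a few downloads in parallel (output_dir and the fetch helper are placeholders, not part of the original code):

import os
import shutil
import urllib.request
from concurrent.futures import ThreadPoolExecutor

output_dir = 'path/to/your/output/dir'  # placeholder

def fetch(url):  # hypothetical helper: download one URL to output_dir
    filename = os.path.join(output_dir, url.split('/')[-1])
    with urllib.request.urlopen(url) as response, open(filename, 'wb') as writer:
        # stream in 5 MB chunks instead of loading the whole body into memory
        shutil.copyfileobj(response, writer, 5 * 1024 * 1024)
    return filename

with open('test.txt', 'r') as f:
    urls = f.read().splitlines()

# a handful of threads is usually enough to saturate a connection
with ThreadPoolExecutor(max_workers=4) as pool:
    for name in pool.map(fetch, urls):
        print('downloaded', name)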
This works fine for me: (note that name must be absolute, for example 'afaf1.tif')
import urllib, os

def download(baseUrl, fileName, layer=0):
    print 'Trying to download file:', fileName
    url = baseUrl + fileName
    name = os.path.join('foldertodownload', fileName)
    try:
        # Note that the folder needs to exist
        urllib.urlretrieve(url, name)
    except:
        # Upon failure to download, retries up to 5 times in total
        print 'Download failed'
        print 'Could not download file:', fileName
        if layer > 4:
            return
        else:
            layer += 1
            print 'retrying', str(layer) + '/5'
            download(baseUrl, fileName, layer)
    else:
        print fileName + ' downloaded'

for fileName in nameList:
    download(url, fileName)
Moved unnecessary code out from try block
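For reference, a Python 3 sketch of the same retry idea with a loop instead of recursion (the folder name and retry count are placeholders):

import os
import urllib.request

def download(base_url, file_name, retries=5):
    url = base_url + file_name
    name = os.path.join('foldertodownload', file_name)  # folder must exist
    for attempt in range(1, retries + 1):
        try:
            urllib.request.urlretrieve(url, name)
            print(file_name, 'downloaded')
            return
        except OSError:  # URLError and friends are subclasses of OSError
            print('Download failed, retrying %d/%d' % (attempt, retries))
    print('Could not download file:', file_name)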
I need to unzip a .ZIP archive. I already know how to unzip it, but it is a huge file and takes some time to extract. How would I print the percentage complete for the extraction? I would like something like this:
Extracting File
1% Complete
2% Complete
etc, etc
Here's an example you can start with; it's not optimized:

import zipfile

zf = zipfile.ZipFile('test.zip')
uncompress_size = sum((file.file_size for file in zf.infolist()))
extracted_size = 0
for file in zf.infolist():
    zf.extract(file)
    extracted_size += file.file_size
    print "%s %%" % (extracted_size * 100 / uncompress_size)
To make it prettier, print with a trailing carriage return so the line updates in place:
print "%s %%\r" % (extracted_size * 100/uncompress_size),
You can just monitor the progress of each file being extracted with tqdm():
from zipfile import ZipFile
from tqdm import tqdm

# Open your .zip file
with ZipFile(file=path) as zip_file:
    # Loop over each file
    for file in tqdm(iterable=zip_file.namelist(), total=len(zip_file.namelist())):
        # Extract each file to another directory
        # If you want to extract to the current working directory, don't specify path
        zip_file.extract(member=file, path=directory)
In Python 2.6, the ZipFile object has an open method which can open a named file inside the zip as a file object; you can use that to read the data in chunks.
import zipfile

def read_in_chunks(zf, name):
    chunk_size = 4096
    f = zf.open(name)
    data_list = []
    total_read = 0
    while 1:
        data = f.read(chunk_size)
        if not data:
            break
        total_read += len(data)
        print "read", total_read
        data_list.append(data)
    return "".join(data_list)

zip_file_path = r"C:\Users\anurag\Projects\untitled-3.zip"
zf = zipfile.ZipFile(zip_file_path, "r")
for name in zf.namelist():
    data = read_in_chunks(zf, name)
Edit: To get the total size you can do something like this
total_size = sum((file.file_size for file in zf.infolist()))
So now you can print the total progress and the progress per file. For example, if the zip contains only one big file, methods that just count extracted files will not show any progress at all.
ZipFile.infolist() will generate ZipInfo objects for the contents of the zip file. From there you can either total up the number of bytes of all the files in the archive and count how many bytes you've extracted so far, or you can go by the total number of files.
I don't believe you can track the progress of extracting a single file: the zipfile extract function has no callback for progress.
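A minimal sketch of the byte-counting approach ('test.zip' is a placeholder):

import zipfile

with zipfile.ZipFile('test.zip') as zf:
    members = zf.infolist()
    total = sum(info.file_size for info in members) or 1  # avoid division by zero
    done = 0
    for info in members:
        zf.extract(info)
        done += info.file_size
        print('%d%% complete' % (done * 100 // total))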
I have two zip files; both of them open fine with Windows Explorer and 7-Zip.
However, when I open them with Python's zipfile module [ zipfile.ZipFile("filex.zip") ], one of them opens but the other gives the error "BadZipfile: File is not a zip file".
I've made sure that the latter is a valid zip file by opening it with 7-Zip and looking at its properties (it says 7Zip.ZIP). When I open the file with a text editor, the first two characters are "PK", showing that it is indeed a zip file.
I'm using Python 2.5 and really don't have any clue how to go about this. I've tried it on both Windows and Ubuntu and the problem exists on both platforms.
Update: Traceback from Python 2.5.4 on Windows:
Traceback (most recent call last):
  File "<module1>", line 5, in <module>
    zipfile.ZipFile("c:/temp/test.zip")
  File "C:\Python25\lib\zipfile.py", line 346, in __init__
    self._GetContents()
  File "C:\Python25\lib\zipfile.py", line 366, in _GetContents
    self._RealGetContents()
  File "C:\Python25\lib\zipfile.py", line 378, in _RealGetContents
    raise BadZipfile, "File is not a zip file"
BadZipfile: File is not a zip file
Basically, when the _EndRecData function is called to get data from the "End of Central Directory" record, the comment length check fails [ endrec[7] == len(comment) ].
The values of the locals in the _EndRecData function are as follows:
END_BLOCK: 4096,
comment: '\x00',
data: '\xd6\xf6\x03\x00\x88,N8?<e\xf0q\xa8\x1cwK\x87\x0c(\x82a\xee\xc61N\'1qN\x0b\x16K-\x9d\xd57w\x0f\xa31n\xf3dN\x9e\xb1s\xffu\xd1\.....', (truncated)
endrec: ['PK\x05\x06', 0, 0, 4, 4, 268, 199515, 0],
filesize: 199806L,
fpin: <open file 'c:/temp/test.zip', mode 'rb' at 0x045D4F98>,
start: 4073
Files named file can confuse Python; try naming it something else. If it STILL won't work, try this code:

def fixBadZipfile(zipFile):
    f = open(zipFile, 'r+b')
    data = f.read()
    pos = data.find('\x50\x4b\x05\x06')  # End of central directory signature
    if pos > 0:
        print "Truncating file at location " + str(pos + 22) + "."
        f.seek(pos + 22)  # size of the 'ZIP end of central directory record'
        f.truncate()
        f.close()
    else:
        # the end-of-central-directory record is missing: the file is truncated
        raise Exception("File is truncated")
I ran into the same issue. My problem was that it was a gzip file instead of a zip file. I switched to the gzip.GzipFile class and it worked like a charm.
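For example, a minimal sketch, assuming the misnamed "zip" is really a single gzip-compressed file (the filename is a placeholder):

import gzip

with gzip.GzipFile('notreally.zip') as f:  # placeholder filename
    content = f.read()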
astronautlevel's solution works for most cases, but the compressed data and CRCs in the zip can also contain the same 4 bytes. You should do an rfind (not find), seek to pos+20, and then write \x00\x00 to the end of the file (telling zip applications that the length of the 'comments' section is 0 bytes).
# HACK: See http://bugs.python.org/issue10694
# The zip file generated is correct, but because of extra data after the 'central directory' section,
# some versions of Python (and some zip applications) can't read the file. By removing the extra data,
# we ensure that all applications can read the zip without issue.
# The ZIP format: http://www.pkware.com/documents/APPNOTE/APPNOTE-6.3.0.TXT
# Finding the end of the central directory:
#   http://stackoverflow.com/questions/8593904/how-to-find-the-position-of-central-directory-in-a-zip-file
#   http://stackoverflow.com/questions/20276105/why-cant-python-execute-a-zip-archive-passed-via-stdin
#   This second link is only loosely related, but echoes the first: "processing a ZIP archive often requires backwards seeking"
content = zipFileContainer.read()
pos = content.rfind('\x50\x4b\x05\x06')  # reverse find: this string of bytes is the end of the zip's central directory.
if pos > 0:
    zipFileContainer.seek(pos + 20)  # +20: see section V.I in the 'ZIP format' link above.
    zipFileContainer.truncate()
    zipFileContainer.write('\x00\x00')  # zip file comment length: 0 bytes; tells zip applications to stop reading.
    zipFileContainer.seek(0)

return zipFileContainer
I had the same problem and was able to solve it for my files; see my answer at:
zipfile cant handle some type of zip data?
I'm very new to Python and I was facing the exact same issue; none of the previous methods were working.
Trying to print the 'corrupted' file just before unzipping it returned an empty bytes object.
It turned out I was trying to unzip the file right after writing it to disk, without closing the file handler.
with open(path, 'wb') as outFile:
    outFile.write(data)
    outFile.close()  # was missing this (with a "with" block the close happens automatically on exit)

with zipfile.ZipFile(path, 'r') as zip:
    zip.extractall(destination)
Closing the file stream then unzipping the file resolved my issue.
Sometimes a zip file contains corrupted entries, and unzipping it raises a BadZipFile error, but tools like 7-Zip and WinRAR ignore these errors and successfully unzip it. You can create a subprocess and use this code to unzip your zip file without getting the BadZipFile error.
import subprocess

ziploc = "C:/Program Files/7-Zip/7z.exe"  # location where 7-Zip is installed
cmd = [ziploc, 'e', your_zip_file, '-o' + output_directory, '-r']
sp = subprocess.Popen(cmd, stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
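If you go this route, you probably also want to wait for 7z to finish and check its exit code. For reference, a minimal sketch of the same call with subprocess.run (the paths are placeholders):

import subprocess

ziploc = "C:/Program Files/7-Zip/7z.exe"  # where 7-Zip is installed
zip_path = 'your_zip_file.zip'            # placeholder
out_dir = 'extracted'                     # placeholder
result = subprocess.run([ziploc, 'e', zip_path, '-o' + out_dir, '-r'],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if result.returncode != 0:
    print('7z failed:', result.stdout.decode(errors='replace'))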
Show the full traceback that you got from Python -- this may give a hint as to what the specific problem is. Unanswered: What software produced the bad file, and on what platform?
Update: Traceback indicates having problem detecting the "End of Central Directory" record in the file -- see function _EndRecData starting at line 128 of C:\Python25\Lib\zipfile.py
Suggestions:
(1) Trace through the above function
(2) Try it on the latest Python
(3) Answer the question above.
(4) Read this and anything else found by google("BadZipfile: File is not a zip file") that appears to be relevant
I faced this problem and was looking for a good, clean solution; there was none until I found this answer. I had the same problem that @marsl (among the answers) had: in my case it was a gzip file instead of a zip file.
I could unarchive and decompress my gzip file with this approach:

import tarfile

with tarfile.open(archive_path, "r:gz") as gzip_file:
    gzip_file.extractall()
Have you tried a newer Python, or if that is too much trouble, simply a newer zipfile.py? I have successfully used a copy of zipfile.py from Python 2.6.2 (the latest at the time) with Python 2.5 in order to open some zip files that weren't supported by Py2.5's zipfile module.
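A minimal sketch of that workaround, assuming you saved the newer module next to your script as zipfile26.py (the module and archive names are placeholders):

import zipfile26 as zipfile  # a copy of zipfile.py from a newer Python release, renamed

zf = zipfile.ZipFile('problem.zip')  # placeholder archive name
print(zf.namelist())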
In some cases, you have to confirm whether the "zip" file is actually in gzip format. This was the case for me, and I solved it by:
import requests
import tarfile

url = ".tar.gz link"
response = requests.get(url, stream=True)
file = tarfile.open(fileobj=response.raw, mode="r|gz")
file.extractall(path=".")
For me, this happened when the file wasn't fully downloaded, I think. So I just delete it in my download code.
from pathlib import Path
from typing import Optional

def expanduser(path) -> Path:
    # assumed helper (the original imports one from the author's utils): expand '~' and return a Path
    return Path(path).expanduser()

def download_and_extract(url: str,
                         path_used_for_zip: Path = Path('~/data/'),
                         path_used_for_dataset: Path = Path('~/data/tmp/'),
                         rm_zip_file_after_extraction: bool = True,
                         force_rewrite_data_from_url_to_file: bool = False,
                         clean_old_zip_file: bool = False,
                         gdrive_file_id: Optional[str] = None,
                         gdrive_filename: Optional[str] = None,
                         ):
    """
    Downloads data and tries to extract it according to different protocols/file types.

    note:
    - to force a download do:
        force_rewrite_data_from_url_to_file = True
        clean_old_zip_file = True
    - to NOT remove the file after extraction:
        rm_zip_file_after_extraction = False

    Tested with:
    - zip files, yes!

    Later:
    - todo: tar, gz, gdrive

    force_rewrite_data_from_url_to_file = removes the data from url (likely a zip file) and redownloads the zip file.
    """
    path_used_for_zip: Path = expanduser(path_used_for_zip)
    path_used_for_zip.mkdir(parents=True, exist_ok=True)
    path_used_for_dataset: Path = expanduser(path_used_for_dataset)
    path_used_for_dataset.mkdir(parents=True, exist_ok=True)
    # - download data from url
    if gdrive_filename is None:  # get data from url, not using gdrive
        import ssl
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        print("downloading data from url: ", url)
        import urllib.request
        import http
        response: http.client.HTTPResponse = urllib.request.urlopen(url, context=ctx)
        print(f'{type(response)=}')
        data = response
        # save zipfile-like data to the path given
        filename = url.rpartition('/')[2]
        path2file: Path = path_used_for_zip / filename
    else:  # gdrive case
        from torchvision.datasets.utils import download_file_from_google_drive
        # if the zip is not there, re-download it or force-get the data
        path2file: Path = path_used_for_zip / gdrive_filename
        if not path2file.exists():
            download_file_from_google_drive(gdrive_file_id, path_used_for_zip, gdrive_filename)
        filename = gdrive_filename
    # -- write downloaded data from the url to a file
    print(f'{path2file=}')
    print(f'{filename=}')
    if clean_old_zip_file:
        path2file.unlink(missing_ok=True)
    if filename.endswith('.zip') or filename.endswith('.pkl'):
        # if the path to the file does not exist, or we force a write of the data
        if not path2file.exists() or force_rewrite_data_from_url_to_file:
            # delete the file if there is one, if you're going to force a rewrite
            path2file.unlink(missing_ok=True) if force_rewrite_data_from_url_to_file else None
            print(f'about to write downloaded data from url to: {path2file=}')
            # wb+ is used since the zip file is in bytes; otherwise w+ is fine if the data is a string
            with open(path2file, 'wb+') as f:
                print(f'{f=}')
                print(f'{f.name=}')
                f.write(data.read())
            print(f'done writing downloaded from url to: {path2file=}')
    elif filename.endswith('.gz'):
        pass  # the download isn't explicitly handled here; it is done in the extract step by the magic function tarfile.open
    # elif is_tar_file(filename):
    #     os.system(f'tar -xvzf {path_2_zip_with_filename} -C {path_2_dataset}/')
    else:
        raise ValueError(f'File type {filename=} not supported.')
    # - unzip the data written to the file
    extract_to = path_used_for_dataset
    print(f'about to extract: {path2file=}')
    print(f'extract to target: {extract_to=}')
    if filename.endswith('.zip'):
        import zipfile  # this one is for zip files, inspired from l2l
        zip_ref = zipfile.ZipFile(path2file, 'r')
        zip_ref.extractall(extract_to)
        zip_ref.close()
        if rm_zip_file_after_extraction:
            path2file.unlink(missing_ok=True)
    elif filename.endswith('.gz'):
        import tarfile
        # note: reuses the still-open HTTP response from above (url case only)
        file = tarfile.open(fileobj=response, mode="r|gz")
        file.extractall(path=extract_to)
        file.close()
    elif filename.endswith('.pkl'):
        # no need to extract it, but when you use the data make sure you torch.load it or pickle.load it.
        import torch
        print(f'about to test torch.load of: {path2file=}')
        data = torch.load(path2file)  # just to test
        assert data is not None
        print(f'{data=}')
    else:
        raise ValueError(f'File type {filename=} not supported, edit code to support it.')
    print(f'done extracting: {path2file=}')
    print(f'extracted at location: {path_used_for_dataset=}')
    print(f'-->Success downloading & extracting dataset at location: {path_used_for_dataset=}')
You can get the most up-to-date version of my code with pip install ultimate-utils.
In other cases, this error shows up when the ML/DL model file is in a different format than its extension suggests.
For example:
you try to open it as a pickle, but the model's format is .sav
Solution:
you need to change the file back to its original format
pickle --> .pkl
tensorflow --> .h5
etc.
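One way to check what you actually have is to look at the file's first bytes. A rough sketch (the path is a placeholder and the signature list is not exhaustive):

def sniff_format(path):
    with open(path, 'rb') as f:
        head = f.read(8)
    if head.startswith(b'PK\x03\x04'):
        return 'zip'
    if head.startswith(b'\x1f\x8b'):
        return 'gzip'
    if head.startswith(b'\x89HDF'):
        return 'hdf5 (e.g. a Keras/TensorFlow .h5 model)'
    return 'unknown'

print(sniff_format('model.sav'))  # placeholder path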
In my case, the zip file itself was missing from that directory; thus when I tried to unzip it, I got the error "BadZipFile: File is not a zip file". It got resolved after I moved the .zip file into the directory. Please confirm that the file is indeed present in your directory before running the Python script.
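A quick guard for that case (the path is a placeholder):

import os
import zipfile

path = 'archive.zip'  # placeholder
assert os.path.exists(path), 'zip file is missing from the directory: %s' % path
with zipfile.ZipFile(path) as zf:
    zf.extractall()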
In my case, the zip file was corrupted. I was trying to download it with urllib.request.urlretrieve, but the file wouldn't completely download for some reason.
I connected to a VPN, the file downloaded just fine, and I was able to open it.