I need to unzip a .ZIP archive. I already know how to unzip it, but it is a huge file and takes some time to extract. How would I print the percentage complete for the extraction? I would like something like this:
Extracting File
1% Complete
2% Complete
etc, etc
Here's an example you can start with; it's not optimized:
import zipfile

zf = zipfile.ZipFile('test.zip')
uncompress_size = sum((file.file_size for file in zf.infolist()))
extracted_size = 0

for file in zf.infolist():
    extracted_size += file.file_size
    print "%s %%" % (extracted_size * 100/uncompress_size)
    zf.extract(file)
To make the output prettier, print like this instead (the \r returns to the start of the line, and the trailing comma suppresses the newline in Python 2):
print "%s %%\r" % (extracted_size * 100/uncompress_size),
You can just monitor the progress of each file being extracted with tqdm():
from zipfile import ZipFile
from tqdm import tqdm

# Open your .zip file
with ZipFile(file=path) as zip_file:
    # Loop over each file
    for file in tqdm(iterable=zip_file.namelist(), total=len(zip_file.namelist())):
        # Extract each file to another directory
        # If you want to extract to current working directory, don't specify path
        zip_file.extract(member=file, path=directory)
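If the file sizes vary a lot, a variation you might try (a sketch; path and directory are placeholders as above) weights the bar by bytes rather than by file count:

from zipfile import ZipFile
from tqdm import tqdm

with ZipFile(file=path) as zip_file:
    # Total the uncompressed bytes so the bar reflects data, not file count
    with tqdm(total=sum(i.file_size for i in zip_file.infolist()),
              unit='B', unit_scale=True) as bar:
        for info in zip_file.infolist():
            zip_file.extract(member=info, path=directory)
            bar.update(info.file_size)  # advance by this member's size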
In Python 2.6+, the ZipFile object has an open method which can open a named file in the zip as a file object; you can use that to read the data in chunks:
import zipfile

def read_in_chunks(zf, name):
    chunk_size = 4096
    f = zf.open(name)
    data_list = []
    total_read = 0
    while True:
        data = f.read(chunk_size)
        total_read += len(data)
        print "read", total_read
        if not data:
            break
        data_list.append(data)
    return "".join(data_list)

zip_file_path = r"C:\Users\anurag\Projects\untitled-3.zip"
zf = zipfile.ZipFile(zip_file_path, "r")
for name in zf.namelist():
    data = read_in_chunks(zf, name)
Edit: To get the total size, you can do something like this:
total_size = sum((file.file_size for file in zf.infolist()))
So now you can print both the total progress and the progress per file. For example, if the zip contains just one big file, methods that only count files as they are extracted will not show any progress at all until the end.
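Putting the two together, a rough Python 3 sketch (the function name is just illustrative) that reports overall byte progress across the whole archive while reading in chunks:

import zipfile

def read_with_total_progress(zip_path, chunk_size=4096):
    zf = zipfile.ZipFile(zip_path)
    # Assumes a non-empty archive; guard against division by zero otherwise
    total_size = sum(info.file_size for info in zf.infolist())
    done = 0
    for name in zf.namelist():
        with zf.open(name) as f:
            while True:
                data = f.read(chunk_size)
                if not data:
                    break
                done += len(data)
                print("%d%% complete" % (done * 100 // total_size), end="\r")
    print()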
ZipFile.infolist() returns a list of ZipInfo objects describing the contents of the zip file. From there you can either total up the number of bytes of all the files in the archive and count how many bytes you've extracted so far, or you can go by the total number of files.
I don't believe you can track the progress of extracting a single file: the zipfile extract function has no callback for progress.
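If per-file counting is enough, a minimal sketch (assuming 'test.zip' as in the first answer):

import zipfile

zf = zipfile.ZipFile('test.zip')
members = zf.infolist()
for i, member in enumerate(members, 1):
    zf.extract(member)  # no progress visible within this call
    print("%d/%d files extracted" % (i, len(members)))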
Related
I'm trying to extract data from a zip file in Python, but it's kind of slow. Could anyone advise me and see if I'm doing something that obviously makes it slower?
def go_through_zip(zipname):
    out = {}
    with ZipFile(zipname) as z:
        for filename in z.namelist():
            with z.open(filename) as f:
                try:
                    outdict = make_dict(f)
                    out.update(outdict)
                except:
                    print("File is not in the correct format")
    return out
make_dict(f) just takes the file path and makes a dictionary, and this function is probably also slow, but that's not what I want to speed up right now.
Try using the following code for file extraction. It works fast as long as the size of the file being extracted is reasonable:
# importing required modules
from zipfile import ZipFile

# specifying the zip file name
file_name = "my_python_files.zip"

# opening the zip file in READ mode
with ZipFile(file_name, 'r') as zf:
    # printing all the contents of the zip file
    zf.printdir()

    # extracting all the files
    print('Extracting all the files now...')
    zf.extractall()
    print('Done!')
I've tried a few different methods of doing this but each one has errors or will only function in specific ways.
randomData = ("Some Random stuff")
with open("outputFile.txt", "a") as file:
    file.write(randomData)
exit()
What I'm trying to do is write to the "outputFile.txt" file and then, on the next run, output to a different file such as "outputFileTwo.txt".
If you need a different filename at every start, then you have to save information about the current filename in another file (i.e. config.txt). It can be a number which you then use in the filename (file1.txt, file2.txt, etc.).
At start, read the number from config.txt, increase it, use it in the filename, and write the new number back to config.txt.
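A minimal sketch of that idea (config.txt and the fileN.txt naming are just the scheme described above):

import os

config = "config.txt"
number = 1
if os.path.exists(config):
    # Read the last number used and move on to the next one
    with open(config) as f:
        number = int(f.read()) + 1
# Remember the new number for the next run
with open(config, "w") as f:
    f.write(str(number))

with open("file{}.txt".format(number), "a") as f:
    f.write("Some Random stuff")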
Or you can use the datetime module to put the current date and time in the filename.
https://docs.python.org/3.5/library/datetime.html
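For example (a sketch; the format string is just one possible choice):

from datetime import datetime

# Produces names like 2016-01-31_14-05-59.txt
filename = datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".txt"
with open(filename, "a") as f:
    f.write("Some Random stuff")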
There is also the tempfile module, which generates temporary (random and unique) filenames.
https://docs.python.org/3.5/library/tempfile.html
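For instance, a sketch with NamedTemporaryFile (writing to the current directory so the file is easy to find):

import tempfile

# delete=False keeps the file around after it is closed
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", dir=".",
                                 delete=False) as f:
    f.write("Some Random stuff")
    print("wrote to", f.name)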
If you're writing the same data to multiple files, you can do something like this:
data = "some data"
files = ['file1.txt', 'file2.txt', 'file3.txt']
for file in files:
with open(file, "a") as f:
f.write(data)
Based on your comment concerning a new file name for each run:
import time

randomData = ("Some Random stuff")
t, s = str(time.time()).split('.')
filename = t + ".txt"
print("writing to", filename)
with open(filename, "a") as file:
    file.write(randomData)
I want to read the contents of a zip file into memory rather than extracting them to disc, find a particular file in the archive, open the file and extract a line from it.
Can a StringIO instance be opened and parsed? Suggestions? Thanks in advance.
zfile = ZipFile('name.zip', 'r')
for name in zfile.namelist():
    if fnmatch.fnmatch(name, '*_readme.xml'):
        name = StringIO.StringIO()
        print name # prints StringIO instances
        open(name, 'r') # IO Error: No such file or directory...
I found a few similar posts, but none that seem to address this issue: Extracting a zipfile to memory?
IMO just using read is enough:
zfile = ZipFile('name.zip', 'r')
files = []
for name in zfile.namelist():
    if fnmatch.fnmatch(name, '*_readme.xml'):
        files.append(zfile.read(name))
This will make a list with the contents of the files that match the pattern.
You can then parse the contents afterwards by iterating through the list:
for file in files:
    print(file[0:min(35, len(file))].decode()) # "parsing"
Or, better, pass the work to a parsing function:
import zipfile
import sys
import fnmatch

zip_name = sys.argv[1]
zfile = zipfile.ZipFile(zip_name, 'r')

def parse(contents, member_name=""):
    if len(member_name) > 0:
        print("Parsed `{}`:".format(member_name))
    print(contents[0:min(35, len(contents))].decode()) # "parsing"

for name in zfile.namelist():
    if fnmatch.fnmatch(name, '*.cpp'):
        parse(zfile.read(name), name)
This way no data is kept in memory for no reason, and the memory footprint is smaller. That might be important if the files are big.
Don't overthink it. It Just Works:
import zipfile

# 1) I want to read the contents of a zip file ...
with zipfile.ZipFile('A-Zip-File.zip') as zipper:
    # 2) ... find a particular file in the archive, open the file ...
    with zipper.open('A-Particular-File.txt') as fp:
        # 3) ... and extract a line from it.
        first_line = fp.readline()
        print first_line
The question you link shows that you need to read the file, and depending on your use case that may already be enough. In your code you replace the loop variable holding a filename with an empty string buffer. Try something like this:
zfile = ZipFile('name.zip', 'r')
for name in zfile.namelist():
    if fnmatch.fnmatch(name, '*_readme.xml'):
        ex_file = zfile.open(name) # this is a file-like object
        content = ex_file.read() # now the file contents are a single string
If you really want a buffer that you can manipulate, then simply instantiate it with the contents:
buf = StringIO(zfile.open(name).read())
You may also want to look at BytesIO and note that there are differences between Python 2 and 3.
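For instance, on Python 3, where zipfile hands back bytes, a sketch:

import fnmatch
import io
import zipfile

zfile = zipfile.ZipFile('name.zip', 'r')
for name in zfile.namelist():
    if fnmatch.fnmatch(name, '*_readme.xml'):
        buf = io.BytesIO(zfile.read(name))  # a seekable in-memory buffer
        first_line = buf.readline()         # bytes, not str
        print(first_line.decode())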
Thank you to everyone that contributed solutions. This is what ended up working for me:
zfile = ZipFile('name.zip', 'r')
for name in zfile.namelist():
    if fnmatch.fnmatch(name, '*_readme.xml'):
        zopen = zfile.open(name)
        for line in zopen:
            if re.match('(.*)<foo>(.*)</foo>(.*)', line):
                print line
Ok, so I have a zip file that contains gz files (unix gzip).
Here's what I do --
def parseSTS(file):
    import zipfile, re, io, gzip
    with zipfile.ZipFile(file, 'r') as zfile:
        for name in zfile.namelist():
            if re.search(r'\.gz$', name) is not None:
                zfiledata = zfile.open(name)
                print("start for file ", name)
                with gzip.open(zfiledata, 'r') as gzfile:
                    print("done opening")
                    filecontent = gzfile.read()
                    print("done reading")
                    print(filecontent)
This gives the following result --
>>>
start for file XXXXXX.gz
done opening
done reading
Then it stays like that forever until it crashes...
What can I do with filecontent?
Edit: this is not a duplicate, since my gzipped files are inside a zipped file and I'm trying to avoid extracting that zip file to disk. It works with zip files within a zip file, as per How to read from a zip file within zip file in Python?.
I created a zip file containing a gzip'ed PDF file I grabbed from the web.
I ran this code (with two small changes):
1) Fixed indenting of everything under the def statement (which I also corrected in your Question because I'm sure that it's right on your end or it wouldn't get to the problem you have).
2) I changed:
zfiledata = zfile.open(name)
print("start for file ", name)
with gzip.open(zfiledata, 'r') as gzfile:
    print("done opening")
    filecontent = gzfile.read()
    print("done reading")
    print(filecontent)
to:
print("start for file ", name)
with gzip.open(name,'rb') as gzfile:
print("done opening")
filecontent = gzfile.read()
print("done reading")
print(filecontent)
Because you were passing a file object to gzip.open instead of a string. I have no idea how your code is executing without that change, but it was crashing for me until I fixed it.
EDIT: Adding link to GZIP docs from James R's answer --
Also, see here for further documentation:
http://docs.python.org/2/library/gzip.html#examples-of-usage
END EDIT
Now, since my gzip'ed file is small, the behavior I observe is that it pauses for about 3 seconds after printing done reading, then outputs what is in filecontent.
I would suggest adding the following debugging line after your print "done reading" -- print len(filecontent). If this number is very, very large, consider not printing the entire file contents in one shot.
I would also suggest reading this for more insight into what I expect is your problem: Why is printing to stdout so slow? Can it be sped up?
EDIT 2 - an alternative if your system does not handle file io on zip files, causing no such file errors in the above:
def parseSTS(afile):
    import zipfile
    import gzip
    import io
    with zipfile.ZipFile(afile, 'r') as archive:
        for name in archive.namelist():
            if name.endswith('.gz'):
                bfn = archive.read(name)
                bfi = io.BytesIO(bfn)
                g = gzip.GzipFile(fileobj=bfi, mode='rb')
                qqq = g.read()
                print qqq

parseSTS('t.zip')
Most likely your problem lies here:
if name.endswith(".gz"): #as goncalopp said in the comments, use endswith
#zfiledata = zfile.open(name) #don't do this
#print("start for file ", name)
with gzip.open(name,'rb') as gzfile: #gz compressed files should be read in binary and gzip opens the files directly
#print("done opening") #trust in your program, luke
filecontent = gzfile.read()
#print("done reading")
print(filecontent)
See here for further documentation:
http://docs.python.org/2/library/gzip.html#examples-of-usage
test.txt contains the list of files to be downloaded:
http://example.com/example/afaf1.tif
http://example.com/example/afaf2.tif
http://example.com/example/afaf3.tif
http://example.com/example/afaf4.tif
http://example.com/example/afaf5.tif
How can these files be downloaded using Python with maximum download speed?
My thinking was as follows:
import urllib.request

with open('test.txt', 'r') as f:
    lines = f.read().splitlines()
    for line in lines:
        response = urllib.request.urlopen(line)
What comes after that? How do I select the download directory?
Select a path to your desired output directory (output_dir). In your for loop, split every URL on the / character and use the last piece as the filename. Also open the files for writing in binary mode wb, since response.read() returns bytes, not str.
import os
import urllib.request

output_dir = 'path/to/you/output/dir'

with open('test.txt', 'r') as f:
    lines = f.read().splitlines()
    for line in lines:
        response = urllib.request.urlopen(line)
        output_file = os.path.join(output_dir, line.split('/')[-1])
        with open(output_file, 'wb') as writer:
            writer.write(response.read())
Note:
Downloading multiple files can be faster if you use multiple threads, since a single download rarely uses the full bandwidth of your internet connection.
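For instance, a sketch with concurrent.futures (reusing test.txt and output_dir from above; max_workers is an arbitrary choice):

import os
import urllib.request
from concurrent.futures import ThreadPoolExecutor

output_dir = 'path/to/you/output/dir'

def download(url):
    # Filename taken from the last path segment, as above
    output_file = os.path.join(output_dir, url.split('/')[-1])
    with urllib.request.urlopen(url) as response, open(output_file, 'wb') as writer:
        writer.write(response.read())

with open('test.txt') as f:
    urls = f.read().splitlines()

# Five parallel downloads; tune max_workers to your connection
with ThreadPoolExecutor(max_workers=5) as pool:
    pool.map(download, urls)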
Also, if the files you are downloading are pretty big, you should probably stream the read (reading chunk by chunk). As @Tiran commented, you should use shutil.copyfileobj(response, writer) instead of writer.write(response.read()).
I would only add that you should probably always specify the length parameter too: shutil.copyfileobj(response, writer, 5*1024*1024) # (at least 5 MB), since the default value of 16 KB is really small and will just slow things down.
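Put together, the write step might look like this (a sketch; stream_download is a hypothetical helper, not part of any library):

import shutil
import urllib.request

def stream_download(url, output_file):
    with urllib.request.urlopen(url) as response, open(output_file, 'wb') as writer:
        # Copy in 5 MB chunks instead of loading the whole body into memory
        shutil.copyfileobj(response, writer, 5 * 1024 * 1024)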
This works fine for me (note that the name must be just the file name, for example 'afaf1.tif'):
import urllib, os

def download(baseUrl, fileName, layer=0):
    print 'Trying to download file:', fileName
    url = baseUrl + fileName
    name = os.path.join('foldertodwonload', fileName)
    try:
        # Note that the folder needs to exist
        urllib.urlretrieve(url, name)
    except:
        print 'Download failed'
        print 'Could not download file:', fileName
        # Upon failure, retry up to 5 times in total
        if layer > 4:
            return
        layer += 1
        print 'retrying', str(layer) + '/5'
        download(baseUrl, fileName, layer)
        return
    print fileName + ' downloaded'

# nameList and url are defined elsewhere
for fileName in nameList:
    download(url, fileName)
I moved the unnecessary code out of the try block.