Reading appengine backup_info file gives EOFError - python

I'm trying to inspect my appengine backup files to work out when a data corruption occurred. I used gsutil to locate and download the file:
gsutil ls -l gs://my_backup/ > my_backup.txt
gsutil cp gs://my_backup/LongAlphaString.Mymodel.backup_info file://1.backup_info
I then created a small Python program that attempts to read the file and parse it using the appengine libraries.
#!/usr/bin/python
APPENGINE_PATH='/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/'
ADDITIONAL_LIBS = [
    'lib/yaml/lib'
]
import sys
sys.path.append(APPENGINE_PATH)
for l in ADDITIONAL_LIBS:
    sys.path.append(APPENGINE_PATH+l)
import logging
from google.appengine.api.files import records
import cStringIO

def parse_backup_info_file(content):
    """Returns entities iterator from a backup_info file content."""
    reader = records.RecordsReader(cStringIO.StringIO(content))
    version = reader.read()
    if version != '1':
        raise IOError('Unsupported version')
    return (datastore.Entity.FromPb(record) for record in reader)

INPUT_FILE_NAME='1.backup_info'
f=open(INPUT_FILE_NAME, 'rb')
f.seek(0)
content=f.read()
records = parse_backup_info_file(content)
for r in records:
    logging.info(r)
f.close()
The code for parse_backup_info_file was copied from
backup_handler.py
When I run the program, I get the following output:
./view_record.py
Traceback (most recent call last):
File "./view_record.py", line 30, in <module>
records = parse_backup_info_file(content)
File "./view_record.py", line 19, in parse_backup_info_file
version = reader.read()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/files/records.py", line 335, in read
(chunk, record_type) = self.__try_read_record()
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/api/files/records.py", line 307, in __try_read_record
(length, len(data)))
EOFError: Not enough data read. Expected: 24898 but got 2112
I've tried with half a dozen different backup_info files, and they all show the same error (with different numbers).
I initially noticed that they all have the same expected length, but that was because I was reviewing different versions of the same model when I made that observation; it's not true when I view the backup files of other Modules.
EOFError: Not enough data read. Expected: 24932 but got 911
EOFError: Not enough data read. Expected: 25409 but got 2220
Is there anything obviously wrong with my approach?
I guess the other option is that the appengine backup utility is not creating valid backup files.
Anything else you can suggest would be very welcome.
Thanks in Advance

There are multiple metadata files created when an AppEngine Datastore backup is run:
LongAlphaString.backup_info is created once. This contains metadata about all of the entity types and backup files that were created in datastore backup.
LongAlphaString.[EntityType].backup_info is created once per entity type. This contains metadata about the specific backup files created for [EntityType] along with schema information for [EntityType].
Your code works for interrogating the file contents of LongAlphaString.backup_info; however, it seems that you are trying to interrogate the file contents of LongAlphaString.[EntityType].backup_info. Here's a script that will print the contents in a human-readable format for each file type:
import cStringIO
import os
import sys
sys.path.append('/usr/local/google_appengine')
from google.appengine.api import datastore
from google.appengine.api.files import records
from google.appengine.ext.datastore_admin import backup_pb2

ALL_BACKUP_INFO = 'long_string.backup_info'
ENTITY_KINDS = ['long_string.entity_kind.backup_info']

def parse_backup_info_file(content):
    """Returns entities iterator from a backup_info file content."""
    reader = records.RecordsReader(cStringIO.StringIO(content))
    version = reader.read()
    if version != '1':
        raise IOError('Unsupported version')
    return (datastore.Entity.FromPb(record) for record in reader)

print "*****" + ALL_BACKUP_INFO + "*****"
with open(ALL_BACKUP_INFO, 'r') as myfile:
    parsed = parse_backup_info_file(myfile.read())
    for record in parsed:
        print record

for entity_kind in ENTITY_KINDS:
    print os.linesep + "*****" + entity_kind + "*****"
    with open(entity_kind, 'r') as myfile:
        backup = backup_pb2.Backup()
        backup.ParseFromString(myfile.read())
        print backup

Related

Iterating through, and using the contents of, files in a folder

Evening folks.
I'm trying to iterate through the contents of a folder, and use the contents of each file in the folder.
More specifically, I have a folder of JSON files that have CIDRs in them. I need to iterate through the files, read each file, compare the CIDRs to the IP that's searching through it, then move on to the next file if the IP isn't found in the CIDR list within the file.
I've been able to load and iterate through a single file, parse the JSON, and compare the CIDRs against an IP using the "ipaddress" and "json" modules built into Python. But when I try to iterate through the individual files, I get a "file not found" error.
The real catch is that I'm trying to do this entirely with standard-library Python modules, which is throwing me for a loop.
Here's what I've done so far.
This snippet can read the JSON file if one is specified explicitly:
import json

with open('example.json', 'r') as example_file:
    example_data = json.load(example_file)

print(json.dumps(example_data, indent=4))
print(type(example_data))
print(example_data.keys())
print(example_data['JsonKey'])

individual_item = example_data['JsonKey']
print(individual_item)
And this one will read and compare the CIDRs to an input IP address:
import json
from ipaddress import ip_network, ip_address

with open('Example.json', 'r') as example_file:
    example_data = json.load(example_file)

cidrs = example_data['JsonKey']

print("Please provide valid IP: ")
ip = input()

def in_example(ip, cidr):
    return ip_address(ip) in ip_network(cidr)

for data in cidrs:
    if ip_address(ip) in ip_network(data):
        print("The IP is in the list as", data)
    else:
        continue

print("Have nice day.")
And these both work. But when I try to iterate through using this method, I get a "File not Found" error:
import json
import os

working_directory = '/Desktop/ExampleFolder'

for subdir, dirs, files in os.walk(working_directory):
    for file in files:
        if file.endswith('.json'):
            with open(file, 'r') as example_file:
                example_data = json.load(example_file)
            print(json.dumps(example_data, indent=4))
        else:
            print('Well shit, something broke')

print(type(example_data))
print(example_data.keys())
print(example_data['JsonKey'])

cidrs = example_data['JsonKey']
print(cidrs)
Which prints out:
Traceback (most recent call last):
File "Desktop/jsonread.py", line 19, in <module>
with open(file, 'r') as example_file:
FileNotFoundError: [Errno 2] No such file or directory: 'first_file.json'
Would love some feedback and guidance.
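For what it's worth, a minimal sketch of the usual fix for this FileNotFoundError (reusing the directory and names from above): os.walk only yields bare filenames, so the full path has to be rebuilt with os.path.join before opening, otherwise open() looks for the file relative to the current working directory.

import json
import os

working_directory = '/Desktop/ExampleFolder'

for subdir, dirs, files in os.walk(working_directory):
    for file in files:
        if file.endswith('.json'):
            # join the walked directory back onto the bare filename
            full_path = os.path.join(subdir, file)
            with open(full_path, 'r') as example_file:
                example_data = json.load(example_file)
            print(json.dumps(example_data, indent=4))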

JSONDecodeError when parsing funky JSON

Recently, I started working with JSON (with Python 3.7 under Debian 9). This is the first (probably of many) data sets in JSON which I've had the pleasure of working with.
I have used the Python built-in JSON module to interpret arbitrary strings and files. I now have a database with ~5570 rows containing information regarding a given list of servers. There are a lot of things in the pipeline which I have devised a plan for, but I'm stuck on this particular sanitation step.
Here's the code I'm using to parse:
#!/usr/local/bin/python3.7
import json

def servers_from_json(file_name):
    with open(file_name, 'r') as f:
        data = json.loads(f.read())
    servers = [{'asn': item['data']['resource'], 'resource': item['data']['allocations'][0]['asn_name']} for item in data]
    return servers

servers = servers_from_json('working-things/working-format-for-parse')
print(servers)
print(servers)
My motive
I'm trying to match each one of these servers to its ASN_NAME (a field ripped straight from RIPE's API), thus providing me with information about the physical DC each server is located at. Then, once that's done, I'll write them to an existing SQL table, next to a Boolean.
So, here's where it gets funky. If I run the whole dataset through this I get this error message:
Traceback (most recent call last):
File "./parse-test.py", line 12, in <module>
servers = servers_from_json('2servers.json')
File "./parse-test.py", line 7, in servers_from_json
data = json.loads(f.read())
File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.7/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 38 column 2 (char 1098)
I noticed that the problem with my initial data set was that each JSON object wasn't delimited by ,\n.
Did some cleaning, still no luck.
I then added the first 3(?) objects to a completely clean file and.. success. I can get the script to read and interpret them the way I want.
Here's the data set with the comma delimiter:
http://db.farnworth.site/servers.json
Here's the working data set:
http://db.farnworth.site/working-format.json
Anyone got any ideas?
I am assuming here that | will not be present as part of the data. Separate each of the information chunks using |, then convert it into a list and load each list item using the json module. Hope it helps!
You can try:
import json
import re

with open("servers.json", 'r') as f:
    data = f.read()

pattern = re.compile(r'\}\{')
data = pattern.sub('}|{', data).split('|')

for item in data:
    server_info = json.loads(item)
    allocations = server_info['data']['allocations']
    for alloc in allocations:
        print(alloc['asn_name'])
I could read output.json like this:
import json
import re

with open("output.json", 'r') as f:
    data = f.read()

server_info = json.loads(data)
for item in server_info:
    allocations = item['data']['allocations']
    for alloc in allocations:
        print(alloc['asn_name'])
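As an alternative to splitting on a sentinel character, here is a hedged sketch using json.JSONDecoder.raw_decode, which parses one object at a time from a string of back-to-back JSON objects (the key names match those used above; the file name is a placeholder):

import json

def iter_concatenated_json(text):
    # Yield each top-level object from a string of concatenated JSON objects.
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        # skip any whitespace/newlines between objects
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        obj, idx = decoder.raw_decode(text, idx)
        yield obj

with open("servers.json", 'r') as f:
    for item in iter_concatenated_json(f.read()):
        for alloc in item['data']['allocations']:
            print(alloc['asn_name'])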

Cannot import all .JSON files into MongoDB

ISSUE RESOLVED: So it turns out that I never actually had an issue in the first place. When I did a count on the number of records to determine how many records I should expect to be imported, blank lines between .json objects were being counted towards the total record count. However, upon importing, only the objects with content were moved. I'll just leave this post here for reference anyway. Thank you to those who contributed regardless.
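For reference, a minimal sketch of counting only non-empty lines, so the expected total matches what the import loops below actually process (the glob pattern is the one from the first attempt):

import glob

total = 0
for path in glob.glob('/data/twitter/output/*.json'):
    with open(path, 'r') as f:
        # blank lines are skipped, matching the 'if line:' check in the import loops
        total += sum(1 for line in f if line.strip())
print(total)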
I have around ~33GB of .JSON files that were retrieved from Twitter's streaming API stored in a local directory. I am trying to import this data into a MongoDB collection. I have made two attempts:
First attempt: read through each file individually (~70 files). This successfully imported 11,171,885/ 22,343,770 documents.
import json
import glob
from pymongo import MongoClient

directory = '/data/twitter/output/*.json'
client = MongoClient("localhost", 27017)
db = client.twitter
collection = db.test

jsonFiles = glob.glob(directory)
for file in jsonFiles:
    f = open(file, 'r')
    for line in f.read().split("\n"):
        if line:
            try:
                lineJson = json.loads(line)
            except (ValueError, KeyError, TypeError) as e:
                pass
            else:
                postid = collection.insert(lineJson)
                print 'inserted with id: ', postid
    f.close()
Second attempt: concatenate each .JSON file into one large file. This successfully imported 11,171,879/ 22,343,770 documents.
import json
import os
from pymongo import MongoClient
import sys

client = MongoClient("localhost", 27017)
db = client.tweets
collection = db.test

script_dir = os.path.dirname(__file__)
file_path = os.path.join(script_dir, '/data/twitter/blob/historical-tweets.json')
try:
    with open(file_path, 'r') as f:
        for line in f.read().split("\n"):
            if line:
                try:
                    lineJson = json.loads(line)
                except (ValueError, KeyError, TypeError) as e:
                    pass
                else:
                    postid = collection.insert(lineJson)
                    print 'inserted with id: ', postid
        f.close()
The Python script did not error out or output a traceback; it simply stopped running. Any ideas as to what could be causing this? Or any alternative solutions for importing the data more efficiently? Thanks in advance.
You are reading the file one line at a time. Is each line really valid JSON? If not, json.loads will raise an exception, and you are hiding that error with the pass statement.
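For what it's worth, a minimal sketch of that suggestion (the file path is the one from the second attempt): count and log the lines that fail to parse instead of silently passing, so you can see how many documents are being skipped and why.

import json

bad = 0
with open('/data/twitter/blob/historical-tweets.json', 'r') as f:
    for lineno, line in enumerate(f, 1):
        line = line.strip()
        if not line:
            continue
        try:
            json.loads(line)
        except ValueError as e:
            bad += 1
            # log instead of pass so parse failures are visible
            print('line %d failed to parse: %s' % (lineno, e))
print('%d bad lines' % bad)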

How to print the content of zipped gzip'd files

Ok, so I have a zip file that contains gz files (unix gzip).
Here's what I do --
def parseSTS(file):
    import zipfile, re, io, gzip
    with zipfile.ZipFile(file, 'r') as zfile:
        for name in zfile.namelist():
            if re.search(r'\.gz$', name) != None:
                zfiledata = zfile.open(name)
                print("start for file ", name)
                with gzip.open(zfiledata, 'r') as gzfile:
                    print("done opening")
                    filecontent = gzfile.read()
                    print("done reading")
                    print(filecontent)
This gives the following result --
>>>
start for file XXXXXX.gz
done opening
done reading
Then it stays like that forever until it crashes...
What can I do with filecontent?
Edit: this is not a duplicate, since my gzipped files are inside a zipped file and I'm trying to avoid extracting that zip file to disk. It works with zip files in a zip file, as per How to read from a zip file within zip file in Python?
I created a zip file containing a gzip'ed PDF file I grabbed from the web.
I ran this code (with two small changes):
1) Fixed indenting of everything under the def statement (which I also corrected in your Question because I'm sure that it's right on your end or it wouldn't get to the problem you have).
2) I changed:
zfiledata = zfile.open(name)
print("start for file ", name)
with gzip.open(zfiledata,'r') as gzfile:
    print("done opening")
    filecontent = gzfile.read()
    print("done reading")
    print(filecontent)
to:
print("start for file ", name)
with gzip.open(name,'rb') as gzfile:
    print("done opening")
    filecontent = gzfile.read()
    print("done reading")
    print(filecontent)
Because you were passing a file object to gzip.open instead of a string. I have no idea how your code is executing without that change, but it was crashing for me until I fixed it.
EDIT: Adding link to GZIP docs from James R's answer --
Also, see here for further documentation:
http://docs.python.org/2/library/gzip.html#examples-of-usage
END EDIT
Now, since my gzip'ed file is small, the behavior I observe is that it pauses for about 3 seconds after printing done reading, then outputs what is in filecontent.
I would suggest adding the following debugging line after your print "done reading" -- print len(filecontent). If this number is very, very large, consider not printing the entire file contents in one shot.
I would also suggest reading this for more insight into what I expect is your problem: Why is printing to stdout so slow? Can it be sped up?
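To make that concrete, a small hedged sketch (reusing the zip-of-gzip layout from the question; 't.zip' is a placeholder name) that reports the decompressed size and prints only a slice, so a huge filecontent doesn't flood stdout:

import gzip
import io
import zipfile

with zipfile.ZipFile('t.zip', 'r') as zfile:
    for name in zfile.namelist():
        if name.endswith('.gz'):
            # read the inner .gz into memory and decompress it
            with gzip.GzipFile(fileobj=io.BytesIO(zfile.read(name))) as gzfile:
                filecontent = gzfile.read()
            print("done reading", name, "-", len(filecontent), "bytes")
            print(filecontent[:1000])  # print a slice instead of the whole file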
EDIT 2 - an alternative if your system does not handle file I/O on zip files, causing "no such file" errors in the above:
def parseSTS(afile):
    import zipfile
    import zlib
    import gzip
    import io
    with zipfile.ZipFile(afile, 'r') as archive:
        for name in archive.namelist():
            if name.endswith('.gz'):
                bfn = archive.read(name)
                bfi = io.BytesIO(bfn)
                g = gzip.GzipFile(fileobj=bfi, mode='rb')
                qqq = g.read()
                print(qqq)

parseSTS('t.zip')
Most likely your problem lies here:
if name.endswith(".gz"): #as goncalopp said in the comments, use endswith
    #zfiledata = zfile.open(name) #don't do this
    #print("start for file ", name)
    with gzip.open(name,'rb') as gzfile: #gz compressed files should be read in binary and gzip opens the files directly
        #print("done opening") #trust in your program, luke
        filecontent = gzfile.read()
        #print("done reading")
        print(filecontent)
See here for further documentation:
http://docs.python.org/2/library/gzip.html#examples-of-usage

unzipping file results in "BadZipFile: File is not a zip file"

I have two zip files, both of them open well with Windows Explorer and 7-zip.
However, when I open them with Python's zipfile module [ zipfile.ZipFile("filex.zip") ], one of them opens but the other one gives the error "BadZipfile: File is not a zip file".
I've made sure that the latter one is a valid Zip File by opening it with 7-Zip and looking at its properties (says 7Zip.ZIP). When I open the file with a text editor, the first two characters are "PK", showing that it is indeed a zip file.
I'm using Python 2.5 and really don't have any clue how to go about this. I've tried it both on Windows and Ubuntu, and the problem exists on both platforms.
Update: Traceback from Python 2.5.4 on Windows:
Traceback (most recent call last):
File "<module1>", line 5, in <module>
zipfile.ZipFile("c:/temp/test.zip")
File "C:\Python25\lib\zipfile.py", line 346, in __init__
self._GetContents()
File "C:\Python25\lib\zipfile.py", line 366, in _GetContents
self._RealGetContents()
File "C:\Python25\lib\zipfile.py", line 378, in _RealGetContents
raise BadZipfile, "File is not a zip file"
BadZipfile: File is not a zip file
Basically, when the _EndRecData function is called to get data from the "End of Central Directory" record, the comment length check fails [ endrec[7] == len(comment) ].
The values of locals in the _EndRecData function are as following:
END_BLOCK: 4096,
comment: '\x00',
data: '\xd6\xf6\x03\x00\x88,N8?<e\xf0q\xa8\x1cwK\x87\x0c(\x82a\xee\xc61N\'1qN\x0b\x16K-\x9d\xd57w\x0f\xa31n\xf3dN\x9e\xb1s\xffu\xd1\.....', (truncated)
endrec: ['PK\x05\x06', 0, 0, 4, 4, 268, 199515, 0],
filesize: 199806L,
fpin: <open file 'c:/temp/test.zip', mode 'rb' at 0x045D4F98>,
start: 4073
Files named "file" can confuse Python - try naming it something else. If it STILL won't work, try this code:
def fixBadZipfile(zipFile):
    f = open(zipFile, 'r+b')
    data = f.read()
    pos = data.find('\x50\x4b\x05\x06')  # End of central directory signature
    if pos > 0:
        print("Truncating file at location " + str(pos + 22) + ".")
        f.seek(pos + 22)  # size of 'ZIP end of central directory record'
        f.truncate()
        f.close()
    else:
        # raise error, file is truncated
        pass
I ran into the same issue. My problem was that it was a gzip file instead of a zip file. I switched to the gzip.GzipFile class and it worked like a charm.
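For illustration, a minimal sketch of that switch (assuming the file really is gzip data despite its name; 'data.zip' is a placeholder path):

import gzip

# open the misnamed file as gzip instead of zip
with gzip.GzipFile('data.zip', 'rb') as f:
    content = f.read()
print(len(content))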
astronautlevel's solution works for most cases, but the compressed data and CRCs in the zip can also contain the same 4 bytes. You should do an rfind (not find), seek to pos+20 and then write \x00\x00 to the end of the file (telling zip applications that the 'comments' section is 0 bytes long).
# HACK: See http://bugs.python.org/issue10694
# The zip file generated is correct, but because of extra data after the 'central directory' section,
# some versions of python (and some zip applications) can't read the file. By removing the extra data,
# we ensure that all applications can read the zip without issue.
# The ZIP format: http://www.pkware.com/documents/APPNOTE/APPNOTE-6.3.0.TXT
# Finding the end of the central directory:
#   http://stackoverflow.com/questions/8593904/how-to-find-the-position-of-central-directory-in-a-zip-file
#   http://stackoverflow.com/questions/20276105/why-cant-python-execute-a-zip-archive-passed-via-stdin
#   This second link is only loosely related, but echoes the first: "processing a ZIP archive often requires backwards seeking"
content = zipFileContainer.read()
pos = content.rfind('\x50\x4b\x05\x06')  # reverse find: this string of bytes is the end of the zip's central directory.
if pos > 0:
    zipFileContainer.seek(pos + 20)  # +20: see section V.I in 'ZIP format' link above.
    zipFileContainer.truncate()
    zipFileContainer.write('\x00\x00')  # Zip file comment length: 0 byte length; tell zip applications to stop reading.
    zipFileContainer.seek(0)

return zipFileContainer
I had the same problem and was able to solve this issue for my files, see my answer at
zipfile cant handle some type of zip data?
I'm very new at Python and I was facing the exact same issue; none of the previous methods were working.
Trying to print the 'corrupted' file just before unzipping it returned an empty bytes object.
It turned out I was trying to unzip the file right after writing it to disk, without closing the file handler.
with open(path, 'wb') as outFile:
    outFile.write(data)
    outFile.close() # was missing this

with zipfile.ZipFile(path, 'r') as zip:
    zip.extractall(destination)
Closing the file stream then unzipping the file resolved my issue.
Sometimes there are zip files that contain corrupted files, and unzipping the zip gives a BadZipFile error, but tools like 7-Zip and WinRAR ignore these errors and successfully unzip the zip file. You can create a subprocess and use this code to unzip your zip file without getting a BadZipFile error:
import subprocess

ziploc = "C:/Program Files/7-Zip/7z.exe"  # location where 7-Zip is installed
cmd = [ziploc, 'e', your_Zip_file, '-o' + OutputDirectory, '-r']  # your_Zip_file and OutputDirectory are your own paths
sp = subprocess.Popen(cmd, stderr=subprocess.STDOUT, stdout=subprocess.PIPE)
Show the full traceback that you got from Python -- this may give a hint as to what the specific problem is. Unanswered: What software produced the bad file, and on what platform?
Update: Traceback indicates having problem detecting the "End of Central Directory" record in the file -- see function _EndRecData starting at line 128 of C:\Python25\Lib\zipfile.py
Suggestions:
(1) Trace through the above function
(2) Try it on the latest Python
(3) Answer the question above.
(4) Read this and anything else found by google("BadZipfile: File is not a zip file") that appears to be relevant
I faced this problem and was looking for a good, clean solution, but there was no solution until I found this answer. I had the same problem that @marsl (among the answers) had: it was a gzip file instead of a zip file in my case.
I could unarchive and decompress my gzipfile with this approach:
with tarfile.open(archive_path, "r:gz") as gzip_file:
    gzip_file.extractall()
Have you tried a newer Python, or if that is too much trouble, simply a newer zipfile.py? I have successfully used a copy of zipfile.py from Python 2.6.2 (the latest at the time) with Python 2.5 in order to open some zip files that weren't supported by Py2.5's zipfile module.
In some cases, you have to confirm whether the zip file is actually in gzip format. This was the case for me and I solved it by:
import requests
import tarfile
url = ".tar.gz link"
response = requests.get(url, stream=True)
file = tarfile.open(fileobj=response.raw, mode="r|gz")
file.extractall(path=".")
For me this happened when the file wasn't fully downloaded, I think. So I just delete it in my download code:
from pathlib import Path
from typing import Optional

# note: `expanduser` (a Path-returning helper) and `torch` are assumed to come from the
# author's environment (the ultimate-utils package mentioned below, and PyTorch).

def download_and_extract(url: str,
                         path_used_for_zip: Path = Path('~/data/'),
                         path_used_for_dataset: Path = Path('~/data/tmp/'),
                         rm_zip_file_after_extraction: bool = True,
                         force_rewrite_data_from_url_to_file: bool = False,
                         clean_old_zip_file: bool = False,
                         gdrive_file_id: Optional[str] = None,
                         gdrive_filename: Optional[str] = None,
                         ):
    """
    Downloads data and tries to extract it according to different protocols/file types.

    note:
        - to force a download do:
            force_rewrite_data_from_url_to_file = True
            clean_old_zip_file = True
        - to NOT remove file after extraction:
            rm_zip_file_after_extraction = False

    Tested with:
        - zip files, yes!

    Later:
        - todo: tar, gz, gdrive
    force_rewrite_data_from_url_to_file = removes the data from url (likely a zip file) and redownloads the zip file.
    """
    path_used_for_zip: Path = expanduser(path_used_for_zip)
    path_used_for_zip.mkdir(parents=True, exist_ok=True)
    path_used_for_dataset: Path = expanduser(path_used_for_dataset)
    path_used_for_dataset.mkdir(parents=True, exist_ok=True)
    # - download data from url
    if gdrive_filename is None:  # get data from url, not using gdrive
        import ssl
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        print("downloading data from url: ", url)
        import urllib
        import http
        response: http.client.HTTPResponse = urllib.request.urlopen(url, context=ctx)
        print(f'{type(response)=}')
        data = response
        # save zipfile like data to path given
        filename = url.rpartition('/')[2]
        path2file: Path = path_used_for_zip / filename
    else:  # gdrive case
        from torchvision.datasets.utils import download_file_from_google_drive
        # if zip not there re-download it or force get the data
        path2file: Path = path_used_for_zip / gdrive_filename
        if not path2file.exists():
            download_file_from_google_drive(gdrive_file_id, path_used_for_zip, gdrive_filename)
        filename = gdrive_filename
    # -- write downloaded data from the url to a file
    print(f'{path2file=}')
    print(f'{filename=}')
    if clean_old_zip_file:
        path2file.unlink(missing_ok=True)
    if filename.endswith('.zip') or filename.endswith('.pkl'):
        # if path to file does not exist or force to write down the data
        if not path2file.exists() or force_rewrite_data_from_url_to_file:
            # delete file if there is one if you're going to force a rewrite
            path2file.unlink(missing_ok=True) if force_rewrite_data_from_url_to_file else None
            print(f'about to write downloaded data from url to: {path2file=}')
            # wb+ is used since the zip file was in bytes, otherwise w+ is fine if the data is a string
            with open(path2file, 'wb+') as f:
                # with open(path2file, 'w+') as f:
                print(f'{f=}')
                print(f'{f.name=}')
                f.write(data.read())
            print(f'done writing downloaded from url to: {path2file=}')
    elif filename.endswith('.gz'):
        pass  # the download of the data doesn't seem to be explicitly handled by me, that is done in the extract step by a magic function tarfile.open
    # elif is_tar_file(filename):
    #     os.system(f'tar -xvzf {path_2_zip_with_filename} -C {path_2_dataset}/')
    else:
        raise ValueError(f'File type {filename=} not supported.')

    # - unzip data written in the file
    extract_to = path_used_for_dataset
    print(f'about to extract: {path2file=}')
    print(f'extract to target: {extract_to=}')
    if filename.endswith('.zip'):
        import zipfile  # this one is for zip files, inspired from l2l
        zip_ref = zipfile.ZipFile(path2file, 'r')
        zip_ref.extractall(extract_to)
        zip_ref.close()
        if rm_zip_file_after_extraction:
            path2file.unlink(missing_ok=True)
    elif filename.endswith('.gz'):
        import tarfile
        file = tarfile.open(fileobj=response, mode="r|gz")
        file.extractall(path=extract_to)
        file.close()
    elif filename.endswith('.pkl'):
        # no need to extract it, but when you use the data make sure you torch.load it or pickle.load it.
        print(f'about to test torch.load of: {path2file=}')
        data = torch.load(path2file)  # just to test
        assert data is not None
        print(f'{data=}')
        pass
    else:
        raise ValueError(f'File type {filename=} not supported, edit code to support it.')
    # path_2_zip_with_filename = path_2_ziplike / filename
    # os.system(f'tar -xvzf {path_2_zip_with_filename} -C {path_2_dataset}/')
    # if rm_zip_file:
    #     path_2_zip_with_filename.unlink(missing_ok=True)
    # # raise ValueError(f'File type {filename=} not supported.')
    print(f'done extracting: {path2file=}')
    print(f'extracted at location: {path_used_for_dataset=}')
    print(f'-->Success downloading & extracting dataset at location: {path_used_for_dataset=}')
You can use my code with pip install ultimate-utils for the most up-to-date version.
In another case, this error shows up when the ML/DL model is saved in a different format.
For example:
you want to open it as a pickle, but the model format is .sav
Solution:
you need to change the format to the original format
pickle --> .pkl
tensorflow --> .h5
etc.
In my case, the zip file itself was missing from that directory - thus when I tried to unzip it, I got the error "BadZipFile: File is not a zip file". It got resolved after I moved the .zip file to the directory. Please confirm that the file is indeed present in your directory before running the python script.
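A small hedged sketch of that sanity check ('archive.zip' and the extraction call are placeholders):

import os
import zipfile

path = 'archive.zip'
if os.path.exists(path):
    with zipfile.ZipFile(path, 'r') as zf:
        zf.extractall()
else:
    # the file never made it into this directory, so zipfile would fail
    print('zip file is missing from this directory:', path)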
In my case, the zip file was corrupted. I was trying to download the zip file with urllib.request.urlretrieve but the file wouldn't completely download for some reason.
I connected to a VPN, the file downloaded just fine, and I was able to open the file.
