Error when downloading zip file TensorFlow Keras - python

I have been following this TensorFlow tutorial on machine translation: Neural machine translation with attention.
I wanted to use the same code on a Japanese-to-English dataset, but it threw the following error:
When I try to download the file with my browser, it works without problems.
My code:
# Download the file
path_to_zip = tf.keras.utils.get_file(
'jpn-eng.zip', origin='http://www.manythings.org/anki/jpn-eng.zip', extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/jpn-eng/jpn.txt"

From comments
It seems the website www.manythings.org rejects the request made by
tf.keras.utils.get_file, probably because the HTTP request headers don't
satisfy some minimum requirements. You can work around it by downloading
the file first (e.g. !wget http://www.manythings.org/anki/jpn-eng.zip and
!unzip jpn-eng.zip) and then loading it. (paraphrased from Mr. For Example)
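If shelling out to wget is not an option, the same workaround can be sketched in pure Python by sending a browser-like User-Agent header; that the server checks this particular header is an assumption based on the behavior described above:

```python
import os
import zipfile
import urllib.request

def fetch_zip(url, filename):
    """Download a zip file while sending a browser-like User-Agent
    header, then extract it next to the archive (mirroring the
    extract=True behavior of tf.keras.utils.get_file)."""
    request = urllib.request.Request(
        url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(request) as response:
        with open(filename, "wb") as f:
            f.write(response.read())
    with zipfile.ZipFile(filename) as zf:
        zf.extractall(os.path.dirname(os.path.abspath(filename)))

# Usage (downloads ~jpn-eng.zip into the current directory):
# fetch_zip("http://www.manythings.org/anki/jpn-eng.zip", "jpn-eng.zip")
# path_to_file = "jpn-eng/jpn.txt"
```

After extraction, the rest of the tutorial code can point at the local jpn.txt path instead of calling get_file.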

Related

How To Download Google Pegasus Library Model

I am very new to this and currently working on my final project. I watched a YouTube video that taught me to code abstractive text summarization with Google's Pegasus library. It works fine, but I need it to be more efficient.
So here is the code
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")
Every time I run that code, it downloads the "Google Pegasus-xsum" model, which is about 2.2 GB.
So here is the sample of the code in notebook : https://github.com/nicknochnack/PegasusSummarization/blob/main/Pegasus%20Tutorial.ipynb
and every time it runs, it downloads the model again.
Is there any way to download the model once, save it locally, and have the code load the local copy on every run?
Something like caching or saving the model locally, maybe?
Thanks.
Using inspect you can find and locate the modules easily.
import inspect
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")
print(inspect.getfile(PegasusForConditionalGeneration))
print(inspect.getfile(PegasusTokenizer))
You will get their paths, something like this:
/usr/local/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py
/usr/local/lib/python3.9/site-packages/transformers/models/pegasus/tokenization_pegasus.py
Now, if you look at what is inside the tokenization_pegasus.py file, you will notice that the model for google/pegasus-xsum is probably fetched via the following lines:
PRETRAINED_VOCAB_FILES_MAP = {
"vocab_file": {"google/pegasus-xsum": "https://huggingface.co/google/pegasus-xsum/resolve/main/spiece.model"}
}
If you open
https://huggingface.co/google/pegasus-xsum/resolve/main/spiece.model
the model file is downloaded directly to your machine.
UPDATE
After some searching on Google, I found something important: you can get the models in use, with all their related files, downloaded to your working directory as follows:
tokenizer.save_pretrained("local_pegasus-xsum_tokenizer")
model.save_pretrained("local_pegasus-xsum_tokenizer_model")
Ref:
https://github.com/huggingface/transformers/issues/14561
After running it, you will see the files saved automatically in your working directory, and you can then load the models directly from those folders.
As for the roughly 2.2 GB file whose local path you wanted to know, it is hosted online here:
https://huggingface.co/google/pegasus-xsum/tree/main
After downloading the models to your directory, you will see that the weights file is named pytorch_model.bin, just as it is named online.

Dataset not found or corrupted. You can use download=True to download it

Recently I downloaded CelebA dataset from this page. I want to apply some transformations to this data set:
To do it firstly let's define transformations:
from torchvision import transforms
from torchvision.datasets import CelebA
celeba_transforms = transforms.Compose([
transforms.CenterCrop(130),
transforms.Resize([64, 64]),
transforms.ToTensor()
])
And now execute it:
CelebA(root='img_align_celeba',
split='train',
download=False,
transform=celeba_transforms)
However result of this code is an error:
Dataset not found or corrupted. You can use download=True to download it
Setting download=True is also not working. Could you please help me with applying those transformations to this data set?
It seems that, for some copyright/privacy/legal considerations, the CelebA dataset is slowly going "off-grid".
If you really have to use it, try downloading it from the baidu drive.
Other users report that there might be download quota issues, and retrying might resolve the issue.
What exactly is the error you get when you try download=True?
Finally I resolved the issue. I'm posting my solution:
Problem number one
There is a problem with downloading the zip file img_align_celeba.zip because the daily download quota is reached. The solution to this problem is simply to download this file from elsewhere on the internet, e.g. from Kaggle.
Problem number two
When using the CelebA function with download=True, the program will think for a while and then return the error mentioned in the question title. The cause of the problem is broken .txt metadata files (those files are also downloaded via the CelebA function):
For this function to work correctly you have to download those .txt files directly from the internet. I found them here. When you download all of them and replace the old ones, the CelebA function should work without any problems.
Unfortunately, I can't comment on John's answer due to lack of reputation. I just wanted to add that you also need to unzip the image folder, and the image-containing folder should be: data/celeba/img_align_celeba/000001.jpg
and so on (data is a freely chosen folder name and the parameter you pass as root="./data" to torch's dataset function). In my case, all images had to be moved up one directory.

Get a local text file in tensorflow keras

I was following a tutorial online on using tensorflow and he used this code:
prepWork = tf.keras.utils.get_file('shakespeare.txt', urlToTextFile)
If I want to use this code for my own project, I need to read a local text file, say 'prepWork.txt', from my machine. I can't use get_file, because that only works for remote files. How would I do this? Everything I've tried so far doesn't work.
You can find a TextLoader class which reads a text file and transforms it into batches of consecutive fixed-size sequences of words in the following repository (in the utils.py file): https://github.com/sherjilozair/char-rnn-tensorflow/blob/master/utils.py
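For a local file, no Keras helper is needed at all; Python's built-in open does the job. A minimal self-contained sketch (it writes a sample file first so it runs anywhere; in practice prepWork.txt would already exist on disk):

```python
# Create a small sample file so the example is self-contained;
# replace this step with your own existing prepWork.txt.
with open("prepWork.txt", "w", encoding="utf-8") as f:
    f.write("First Citizen:\nBefore we proceed any further, hear me speak.\n")

# Reading a local text file needs no get_file, just built-in open():
with open("prepWork.txt", encoding="utf-8") as f:
    text = f.read()

print(text[:13])  # → First Citizen
```

The resulting string can then be fed to the same preprocessing code the tutorial applies to the downloaded shakespeare.txt.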

using pipelines with a local model

I am trying to use a simple pipeline offline. I am only allowed to download files directly from the web.
I went to https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/tree/main and downloaded all the files in a local folder C:\\Users\\me\\mymodel
However, when I tried to load the model I get a strange error
from transformers import pipeline
classifier = pipeline(task= 'sentiment-analysis',
model= "C:\\Users\\me\\mymodel",
tokenizer = "C:\\Users\\me\\mymodel")
ValueError: unable to parse C:\Users\me\mymodel\modelcard.json as a URL or as a local path
What is the issue here?
Thanks!
It must be one of two cases:
You didn't download all the required files properly
The folder path is wrong
FYI, I am listing out the required contents in the directory:
config.json
pytorch_model.bin / tf_model.h5
special_tokens_map.json
tokenizer.json
tokenizer_config.json
vocab.txt
The solution was slightly indirect:
load the model on a computer with internet access
save the model with save_pretrained()
transfer the folder obtained above to the offline machine and point to its path in the pipeline call
The folder will contain all the expected files.
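The three steps above might look like this in code (a sketch; the folder name ./my_local_model is arbitrary):

```python
from transformers import pipeline

# Steps 1-2: on a machine WITH internet access, build the pipeline
# and save its model and tokenizer into one folder
classifier = pipeline(task="sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")
classifier.model.save_pretrained("./my_local_model")
classifier.tokenizer.save_pretrained("./my_local_model")

# Step 3: transfer ./my_local_model to the offline machine, then:
offline_classifier = pipeline(task="sentiment-analysis",
                              model="./my_local_model",
                              tokenizer="./my_local_model")
print(offline_classifier("This works offline now!"))
```

Saving via save_pretrained ensures the folder contains exactly the set of files listed above, which avoids the modelcard.json parsing error from hand-picking files off the Hub page.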

How to use tf-hub models locally

I've been trying to use a BERT model from TF-Hub: https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2.
import tensorflow_hub as hub
bert_layer = hub.KerasLayer('./bert_en_uncased_L-12_H-768_A-12_2', trainable=True)
But the problem is that it downloads the data after every run.
So I downloaded the .tar file from tf-hub: https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2
Now I'm trying to use this downloaded tar file (after untarring it).
I've followed this tutorial: https://medium.com/@xianbao.qian/how-to-run-tf-hub-locally-without-internet-connection-4506b850a915
But it didn't work out well, and no further information or script is provided in that blog post.
Could someone provide a complete script to use the downloaded model locally (without internet), or improve the above blog post (Medium)?
I've also tried
untarredFilePath = './bert_en_uncased_L-12_H-768_A-12_2'
bert_lyr = hub.load(untarredFilePath)
print(bert_lyr)
Output
<tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x7f05c46e6a10>
It doesn't seem to work.
Or is there any other method to do so?
Hmm I cannot reproduce your problem. What worked for me:
script.sh
# download the model file using the 'wget' program
wget "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2?tf-hub-format=compressed"
# rename the downloaded file name to 'tar_file.tar.gz'
mv 2\?tf-hub-format\=compressed tar_file.tar.gz
# extract tar_file.tar.gz to the local directory
tar -zxvf tar_file.tar.gz
# turn off internet
# run a test script
python3 test.py
# running the last command prints some tensorflow warnings, and then '<tensorflow_hub.keras_layer.KerasLayer object at 0x7fd702a7d8d0>'
test.py
import tensorflow_hub as hub
print(hub.KerasLayer('.'))
I wrote this script using this Medium article (https://medium.com/@xianbao.qian/how-to-run-tf-hub-locally-without-internet-connection-4506b850a915) as a reference. I am creating a cache directory within my project; the TensorFlow model is cached locally in this cache directory, and I am able to load the model locally. Hope this helps you.
import os
os.environ["TFHUB_CACHE_DIR"] = r'C:\Users\USERX\PycharmProjects\PROJECTX\tf_hub'
import tensorflow as tf
import tensorflow_hub as hub
import hashlib
handle = "https://tfhub.dev/google/universal-sentence-encoder/4"
# the cache subdirectory for a model is named after the SHA-1 hash of its handle
print(hashlib.sha1(handle.encode("utf8")).hexdigest())
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
def get_sentence_embeddings(paragraph_array):
embeddings=embed(paragraph_array)
return embeddings
After getting in touch with the tf-hub team, they provided this solution.
Let's say you have downloaded the .tar.gz file from the official tf-hub model page via the download button.
You have extracted it and got a folder which contains assets, variables and the saved model.
You put it in your working directory.
In the script, just add the path to that folder:
import tensorflow_hub as hub
model_path = './bert_en_uncased_L-12_H-768_A-12_2' # in my case
# the path you provide must be the folder which contains assets,
# variables and saved_model.pb, not the saved_model.pb file itself
lyr = hub.KerasLayer(model_path, trainable=True)
Hope it works for you as well. Give it a try.
The tensorflow_hub library caches downloaded and uncompressed models on disk to avoid repeated downloads. The documentation at tensorflow.org/hub/caching has been expanded to discuss this and other cases.
