How to load tf.keras model directly from cloud bucket? - python

I'm trying to load a tf.keras model directly from a cloud bucket, but I can't see an easy way to do it.
I would like to load the whole model structure, not only the weights.
I see 3 possible directions:
1. Is it possible to load a Keras model directly from a Google Cloud bucket? The command tf.keras.models.load_model('gs://my_bucket/model.h5') doesn't work.
2. I tried to use tensorflow.python.lib.io.file_io, but I don't know how to load the result as a model.
3. I copied the model to a local directory with the gsutil cp command, but I don't know how to wait until the operation is complete. tf tries to load the model before the download has finished, so errors occur.
I will be thankful for any suggestions.
Peter

Load the file from Google Cloud Storage:
from tensorflow.python.lib.io import file_io
model_file = file_io.FileIO('gs://mybucket/model.h5', mode='rb')
Save a temporary copy of the model locally:
temp_model_location = './temp_model.h5'
temp_model_file = open(temp_model_location, 'wb')
temp_model_file.write(model_file.read())
temp_model_file.close()
model_file.close()
Load the model saved locally:
import tensorflow as tf
model = tf.keras.models.load_model(temp_model_location)
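As an alternative to the manual FileIO read/write above, tf.io.gfile can do the download in one call. This is a sketch, assuming TensorFlow was built with GCS support and your credentials are visible to it; the bucket path is an example:

```python
import tensorflow as tf

def load_model_from_gcs(gcs_path, local_path='./temp_model.h5'):
    """Copy a .h5 model out of a GCS bucket, then load it locally."""
    # tf.io.gfile understands gs:// paths when TensorFlow's GCS support
    # is available; copy() blocks until the download is done, so there
    # is no race between downloading and loading.
    tf.io.gfile.copy(gcs_path, local_path, overwrite=True)
    return tf.keras.models.load_model(local_path)

# model = load_model_from_gcs('gs://my_bucket/model.h5')
```

Because copy() only returns once the object is fully written, this also sidesteps the "load before download finishes" problem from the question.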

Related

can I use .h5 file in Django project?

I'm making an AI web page using Django and TensorFlow, and I wonder how to add a .h5 file to a Django project.
I'm currently writing all the code in the views.py file,
but I want to use a pre-trained model,
not do online learning in the web page.
Yes, you can use a .h5 file in Django. You can use h5py for operations on .h5 files. Example:
import h5py
filename = "filename.h5"
h5 = h5py.File(filename, 'r')
# logic
...
h5.close()

Can't train model from checkpoint on Google Colab as session expires

I'm using Google Colab for finetuning a pre-trained model.
I successfully preprocessed a dataset and created an instance of the Seq2SeqTrainer class:
trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
The problem is resuming training from the last checkpoint after the session is over.
If I run trainer.train(), it runs correctly. As it takes a long time, I sometimes come back to the Colab tab after a few hours, and I know that if the session has crashed I can continue training from the last checkpoint like this: trainer.train("checkpoint-5500")
But the checkpoint data no longer exists on Google Colab if I come back too late, so even though I know the point the training has reached, I have to start all over again.
Is there any way to solve this problem, i.e. extend the session?
To fix your problem, try using a full fixed path, for example on your Google Drive, and saving checkpoint-5500 to it.
With your trainer you can set the output directory to your Google Drive path when creating the instance of Seq2SeqTrainingArguments.
When you come back to your code, if the session is indeed over, you'll just need to load checkpoint-5500 from your Google Drive instead of retraining everything.
Add the following code:
from google.colab import drive
drive.mount('/content/drive')
Then, after trainer.train("checkpoint-5500") has finished (or as its last step), save your checkpoint to your Google Drive.
Or, if you prefer, you can add a callback so a checkpoint is saved after every single epoch (that way, if for some reason the session crashes before training finishes, you'll still have some progress saved).

'UnpicklingError: invalid load key, '\x0a'. Trying to save and load a model

I have been stuck on this error for days.
I have created and saved a model on my Google Colab. It is saved in a '.tar' file. I want to save and load this model with the help of the torch library in Python. This is my code so far.
import torch
import pickle
import json
torch.save('/content/drive/MyDrive/model.tar',open('/content/drive/MyDrive/saved.tar', 'wb'))
filename = '/content/drive/MyDrive/saved.tar'
loaded =(torch.load(filename, map_location=torch.device('cpu')))
'model.tar' is the tar file of the model I have on my Colab which I need to load. I know that 'loaded' is now of type str, which means I am doing something wrong with my torch.save() call. It would be great if anyone could help. Thanks in advance.
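The str result points at the torch.save call: its first argument is the object to serialize, not a source path, so the code above pickles the literal string '/content/drive/MyDrive/model.tar' and torch.load faithfully returns that string. A minimal sketch of the intended round trip, using a small dict of tensors as a stand-in for a real state_dict:

```python
import torch

# torch.save(obj, path): the OBJECT comes first, then the destination.
# Passing a path string as the first argument pickles the string itself.
state = {'weights': torch.zeros(2, 2)}   # stand-in for a real state_dict
torch.save(state, 'saved.tar')

# torch.load returns the original object, not a string.
loaded = torch.load('saved.tar', map_location=torch.device('cpu'))
```

Note that torch.load can only read files that were written by torch.save; an archive produced by some other tool will raise exactly the kind of UnpicklingError in the title.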

Can I pickle a tensorflow model?

Will I be able to pickle all the .meta, .data and checkpoint files of a TensorFlow model? I'm asking because I want to run predictions on my model, and if I deploy it, the files can't be on disk, right? I know about TensorFlow Serving, but I don't really understand it. I want to be able to load the TensorFlow files without accessing the drive all the time.
Using pickle is not recommended. Instead, TensorFlow provides a format called the "SavedModel format" that serves this exact purpose.
See: https://www.tensorflow.org/guide/saved_model
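For completeness, a minimal sketch of the SavedModel round trip; the model is a toy and the directory name is an example:

```python
import tensorflow as tf

# A tiny model just to have something to export.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model(tf.zeros((1, 4)))  # call once so the model is built and traceable

# Export in the SavedModel format: saved_model.pb plus a variables/ folder.
if hasattr(model, 'export'):        # TF >= 2.13 / Keras 3
    model.export('my_saved_model')
else:                               # older TF 2.x
    tf.saved_model.save(model, 'my_saved_model')

restored = tf.saved_model.load('my_saved_model')
```

Everything the model needs (graph plus weights) lives in that one directory, which is exactly what serving tools expect instead of a pickle.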

How to get variables and .pb file from checkpoint in TensorFlow?

I am looking to serve TensorFlow models by building a Docker image and deploying with AWS. For this I need the .pb and variables files, which are required for serving any TensorFlow model. But I only have a checkpoint file of the model. Is there any way to restore the variables folder from the checkpoint file?
I am able to create the .pb file, but not sure how to get the variables folder.
import os
import tensorflow as tf

# 'model' and 'args' come from your own training setup.
ckpt = tf.train.latest_checkpoint(args.model_path)
model.load_weights(ckpt)   # restore weights from the latest checkpoint
ckpt_filename = os.path.basename(ckpt)
saved_model_path = os.path.join('pb_files', ckpt_filename)
model.save(saved_model_path)  # writes saved_model.pb, variables/, assets/
I created this snippet from the following guide: https://www.tensorflow.org/guide/saved_model
This code will create the .pb file, the variables folder, and the assets folder.
