How to get variables and .pb file from checkpoint in TensorFlow? - python

I am looking to serve TensorFlow models by building a Docker image and deploying it on AWS. For this I need the .pb file and the variables folder, which are required when serving any TensorFlow model. However, I only have the checkpoint file of the model. Is there any way to restore the variables folder from the checkpoint file?
I am able to create the .pb file, but I am not sure how to get the variables folder.

import os
import tensorflow as tf

# Restore the weights from the latest checkpoint, then export as a SavedModel.
ckpt = tf.train.latest_checkpoint(args.model_path)
model.load_weights(ckpt)
ckpt_filename = os.path.basename(ckpt)
saved_model_path = os.path.join('pb_files', ckpt_filename)
model.save(saved_model_path)  # writes saved_model.pb, variables/, and assets/
https://www.tensorflow.org/guide/saved_model
Hello, I created this snippet from the document above. This code will create the .pb file, the variables folder, and the assets folder.

Related

Can I use a .h5 file in a Django project?

I'm making an AI web page using Django and TensorFlow, and I wonder how to add a .h5 file to a Django project.
I am writing all the code in the views.py file,
but I want to use a pre-trained model,
not online learning in the web page.
Yes, you can use a .h5 file in Django. You can use h5py for operations on .h5 files. Example:
import h5py

filename = "filename.h5"
with h5py.File(filename, 'r') as h5:
    # logic
    ...
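If the .h5 file is a saved Keras model rather than raw HDF5 data, a common pattern is to load it once at module level in views.py so every request reuses the same model. A minimal sketch, assuming a hypothetical model.h5 and a single-feature input:

import numpy as np
import tensorflow as tf
from django.http import JsonResponse

model = tf.keras.models.load_model("model.h5")  # loaded once at import time

def predict(request):
    # hypothetical single-feature input taken from the query string
    x = np.array([[float(request.GET.get("value", 0))]])
    y = model.predict(x)
    return JsonResponse({"prediction": y.tolist()})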

How to use config.json file for Hugging Face transformer model loading from Google Cloud Storage (GCS)?

I'd like to be able to load a Hugging Face transformer base model (xlm-roberta-base) from GCS. However, when loading from the pytorch_model.bin file, a directory containing a config.json file must be given as an argument, and GCS buckets obviously do not act like regular directories. How can I achieve this?
So far what I have attempted is something like this:
fs = gcsfs.GCSFileSystem(project="{project_name}")
XLMRobertaModel.from_pretrained(
    fs.cat("{bucket}/xlm-roberta-base/pytorch_model.bin"),
    from_pt=True,
    config=fs.cat("{bucket}/xlm-roberta-base/config.json"),
)
This produces the error message:
OSError: Can't load the configuration of '<File-like object GCSFileSystem, bucket/xlm-roberta-base/config.json>'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '<File-like object GCSFileSystem, bucket/xlm-roberta-base/config.json>' is the correct path to a directory containing a config.json file
I know fs.cat("{bucket}/xlm-roberta-base/config.json") is not going to return a path to a directory, but I'm not sure what I should give as the argument, given that the directory is in a GCS bucket.
Is it possible to do this at all?
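A workaround that sidesteps the directory problem is to copy the model files from the bucket to local disk first and then point from_pretrained at the local directory. A minimal sketch, assuming the bucket layout from the question:

import gcsfs
from transformers import XLMRobertaModel

fs = gcsfs.GCSFileSystem(project="{project_name}")
# Download config.json, pytorch_model.bin, etc. into a local directory.
fs.get("{bucket}/xlm-roberta-base/", "/tmp/xlm-roberta-base/", recursive=True)
model = XLMRobertaModel.from_pretrained("/tmp/xlm-roberta-base")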

Using pipelines with a local model

I am trying to use a simple pipeline offline. I am only allowed to download files directly from the web.
I went to https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english/tree/main and downloaded all the files into a local folder, C:\\Users\\me\\mymodel.
However, when I try to load the model I get a strange error:
from transformers import pipeline
classifier = pipeline(task='sentiment-analysis',
                      model="C:\\Users\\me\\mymodel",
                      tokenizer="C:\\Users\\me\\mymodel")
ValueError: unable to parse C:\Users\me\mymodel\modelcard.json as a URL or as a local path
What is the issue here?
Thanks!
It must be one of two cases:
You didn't download all the required files properly
The folder path is wrong
FYI, I am listing the required contents of the directory:
config.json
pytorch_model.bin / tf_model.h5
special_tokens_map.json
tokenizer.json
tokenizer_config.json
vocab.txt
The solution was slightly indirect:
Load the model on a computer with internet access.
Save the model with save_pretrained().
Transfer the folder obtained above to the offline machine and point the pipeline call at its path, as shown in the sketch below.
The folder will contain all the expected files.
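A minimal sketch of that workflow, using the model name from the question:

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# On the machine with internet access: download and save locally.
name = "distilbert-base-uncased-finetuned-sst-2-english"
AutoModelForSequenceClassification.from_pretrained(name).save_pretrained("mymodel")
AutoTokenizer.from_pretrained(name).save_pretrained("mymodel")

# On the offline machine, after transferring the folder:
classifier = pipeline(task='sentiment-analysis',
                      model="C:\\Users\\me\\mymodel",
                      tokenizer="C:\\Users\\me\\mymodel")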

How to load tf.keras model directly from cloud bucket?

I am trying to load a tf.keras model directly from a cloud bucket, but I can't see an easy way to do it.
I would like to load the whole model structure, not only the weights.
I see 3 possible directions:
Is it possible to load a Keras model directly from a Google Cloud bucket? The command tf.keras.models.load_model('gs://my_bucket/model.h5') doesn't work.
I tried to use tensorflow.python.lib.io.file_io, but I don't know how to load the result as a model.
I copied the model to a local directory with the gsutil cp command, but I don't know how to wait until the operation is complete; tf tries to load the model before the download has finished, so errors occur.
I will be thankful for any suggestions.
Peter
Load the file from GCS:
import tensorflow as tf
from tensorflow.python.lib.io import file_io

model_file = file_io.FileIO('gs://mybucket/model.h5', mode='rb')
Save a temporary copy of the model locally:
temp_model_location = './temp_model.h5'
temp_model_file = open(temp_model_location, 'wb')
temp_model_file.write(model_file.read())
temp_model_file.close()
model_file.close()
Load the model saved locally:
model = tf.keras.models.load_model(temp_model_location)
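An alternative sketch using the public tf.io.gfile API instead of the internal file_io module; tf.io.gfile.copy blocks until the download completes, which also addresses the gsutil timing issue from the question:

import tensorflow as tf

# copy() understands gs:// paths and returns only after the file is fully written locally.
tf.io.gfile.copy('gs://mybucket/model.h5', './temp_model.h5', overwrite=True)
model = tf.keras.models.load_model('./temp_model.h5')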

Can I pickle a tensorflow model?

Will I be able to pickle all the .meta, .data, and checkpoint files of a TensorFlow model? I ask because I want to run predictions with my model, and if I deploy it, the files can't be on disk, right? I know about TensorFlow Serving, but I don't really understand it. I want to be able to load the TensorFlow files without accessing the drive all the time.
Using pickle is not recommended. Instead, TensorFlow provides a dedicated format, the SavedModel format, that serves this exact purpose.
See: https://www.tensorflow.org/guide/saved_model
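A minimal sketch of that approach, assuming an already-built Keras model object and hypothetical paths:

import tensorflow as tf

# 'model' is an existing tf.keras model (assumption).
# Export: writes saved_model.pb plus the variables/ and assets/ folders.
tf.saved_model.save(model, 'export/my_model')

# Reload for inference; no pickling and no repeated checkpoint parsing needed.
loaded = tf.saved_model.load('export/my_model')
infer = loaded.signatures['serving_default']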
