Standardisation of saving TensorFlow models - Python

I am dynamically creating TensorFlow models (all types: classification, time series, etc.) and I want to standardise the process of saving and calling them. So far I have been able to save a model's architecture to model.json using
model_json = model.to_json()
and the model weights using
model.save_weights('model_weights.h5')
and then load them back in from the JSON and H5 files. Will this method work with all TensorFlow models (i.e. can I standardise this part of the pipeline)?
Thanks in advance
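
For reference, a minimal round trip along those lines (a sketch, assuming model is your existing Keras model; note that to_json() works for Sequential and Functional models but not for subclassed models, and custom layers need custom_objects when rebuilding):
from tensorflow.keras.models import model_from_json

# Save: architecture as JSON, weights as HDF5 (model is your trained Keras model)
with open('model.json', 'w') as f:
    f.write(model.to_json())
model.save_weights('model_weights.h5')

# Load: rebuild the architecture first, then restore the weights into it
with open('model.json') as f:
    model = model_from_json(f.read())
model.load_weights('model_weights.h5')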

Related

Optimized model creation with TensorFlow Lite Model Maker

I am working on creating a custom model for image classification. The model will run on a microcontroller, so the goal is to make the model as small as possible while still being somewhat accurate. The model has three classes: two types of flowers and an "other" class for anything that is not one of the flowers. I have a directory flower_photos which contains the subdirectories roses, daisy, and other; each subdirectory has hundreds of .jpg files. I believe this is the correct directory structure for preparing the data.
For optimization, I believe reducing the image size and converting the images to grayscale will help. Is that right, and does it need to be done before running the data through Model Maker? I haven't been able to find a way to do it directly in Model Maker. I am also using the for_int8() method of QuantizationConfig; my understanding is that this is the best post-training quantization for microcontrollers. Is this correct?
Below is my code so far. Is this correct for optimization for use on a microcontroller? Thanks!
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import image_classifier
from tflite_model_maker.config import QuantizationConfig
from tflite_model_maker.image_classifier import DataLoader

# Load images from subfolders; the folder names become the class labels
data = DataLoader.from_folder('C:\\Users\\username\\Python\\flower_photos')
train_data, test_data = data.split(0.9)  # 90% train / 10% test

# Train with the default image classification spec, then evaluate
model = image_classifier.create(train_data)
loss, accuracy = model.evaluate(test_data)

# Full-integer quantization; the representative dataset calibrates the int8 ranges
config = QuantizationConfig.for_int8(representative_data=test_data)
model.export(export_dir='C:\\Users\\username\\Python\\model_int8', quantization_config=config)
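
If you do want to shrink the images, one option is to resize them on disk before handing the folder to DataLoader; below is a minimal sketch with Pillow, where the paths and the 96x96 target size are assumptions. Note that the default Model Maker image specs expect 3-channel RGB input, so a grayscale conversion may not be compatible with the stock models:
import os
from PIL import Image

src = 'C:\\Users\\username\\Python\\flower_photos'
dst = 'C:\\Users\\username\\Python\\flower_photos_small'

for class_name in os.listdir(src):  # assumes src contains only class subfolders
    os.makedirs(os.path.join(dst, class_name), exist_ok=True)
    for fname in os.listdir(os.path.join(src, class_name)):
        img = Image.open(os.path.join(src, class_name, fname)).convert('RGB')
        img.resize((96, 96)).save(os.path.join(dst, class_name, fname))  # 96x96 is an assumption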

Save entire model but load weights only

I have defined a deep learning model my_unet() in TensorFlow. During training I set save_weights=False since I wanted to save the entire model (not only the weights but the whole configuration). The generated file is path_to_model.hdf5.
However, when loading the model back I used an earlier version of my code (I forgot to update it), in which I first instantiate the model and then load the weights using:
model = my_unet()
model.load_weights('path_to_model.hdf5')
Instead of simply using model = tf.keras.models.load_model('path_to_model.hdf5') to load the entire model.
Both ways of loading the model provided the same predictions when run on some dummy data, and there were no errors.
My question is: why does loading the entire-model file with model.load_weights() not cause any problems? What is the structure of the HDF5 file, and how exactly do these two ways of loading work? Where can I find this information?
For future reference, you can see the HDF5 documentation here: http://davis.lbl.gov/Manuals/HDF5-1.8.7/UG/03_DataModel.html
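
Roughly speaking, a full-model HDF5 file saved by Keras stores the architecture as a JSON string in the file's attributes and the weights under a 'model_weights' group; load_weights() detects that group and reads just the weights from it, which is why it works on a full-model file as long as your reconstructed architecture matches. A minimal sketch using h5py to inspect the layout, assuming the file from the question:
import h5py

with h5py.File('path_to_model.hdf5', 'r') as f:
    print(list(f.attrs))  # a full-model file carries 'model_config' (the architecture) here
    print(list(f.keys())) # ...and a 'model_weights' group holding the per-layer weights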

How to convert a pretrained tensorflow pb frozen graph into a modifiable h5 keras model?

I have been searching for a method to do this for a long time and cannot find an answer. Most threads I found are from people wanting to do the opposite.
Backstory:
I am experimenting with some pre-trained models provided by the tensorflow/models repository. The models are saved as .pb frozen graphs. I want to fine-tune some of these models by changing the final layers to suit my application.
Hence, I want to load the models inside a Jupyter notebook as a normal Keras H5 model.
How can I do that?
Do you have a better way to do this?
Thanks.
It seems like all you would have to do is download the model files and store them in a directory, say c:\models, then load the model. Note that tf.keras.models.load_model() expects a SavedModel directory (a saved_model.pb plus a variables folder), not a bare frozen-graph .pb file.
import os

model = tf.keras.models.load_model(r'c:\models')  # works for a SavedModel directory
model.summary() # prints out the model layers
# modify the model as you typically do for transfer learning
# compile the changed model
# train the model
# save the trained model as a .h5 file
save_dir = r'path to the directory you want to save the model to'
model_identifier = 'abcd.h5' # for abcd use whatever identification you want
save_path = os.path.join(save_dir, model_identifier)
model.save(save_path)
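
The "modify for transfer learning" step might look roughly like the sketch below; it assumes the loaded model is a functional Keras model, and the layer index and num_classes are illustrative:
import tensorflow as tf

base = tf.keras.models.load_model(r'c:\models')  # hypothetical SavedModel path from above
base.trainable = False  # freeze the pre-trained layers

# Drop the original head and attach a new classifier; num_classes is an assumption
num_classes = 3
x = base.layers[-2].output  # output of the layer just before the old head
outputs = tf.keras.layers.Dense(num_classes, activation='softmax', name='new_head')(x)
model = tf.keras.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])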

Loading a pre-trained model in Chainer Deep Learning Framework

I need to load a pre-trained model in the Chainer framework, but as I understand it, the saved (.npz) file only contains the weights, so I have to reconstruct the model and then load the weights into it; there is no way to load the full model in one command, as in TensorFlow.
Is this true? If so, can anyone with Chainer framework experience provide some guidance? If not, what is the proper way to load a pre-trained model in this framework?
Yes, the npz files only contain weights. You need to first construct an instance of the model (a subclass of chainer.Chain), then load the weights into it using load_npz. https://docs.chainer.org/en/stable/guides/serializers.html
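A minimal sketch, assuming MyModel stands in for your own chainer.Chain subclass and 'pretrained.npz' is your saved weights file:
import chainer

class MyModel(chainer.Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.fc = chainer.links.Linear(None, 10)  # illustrative layer

    def forward(self, x):
        return self.fc(x)

model = MyModel()  # reconstruct the architecture first
chainer.serializers.load_npz('pretrained.npz', model)  # then load the weights into it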

Keras to_json(), what does it save?

My impression is that it only saves the model's architecture, so I should be able to call it before I start training? And then save_weights() saves the weights I need to restore the model? Any more details on this?
At what stage can I call to_json()? I.e. do I have to call compile() first? Can it be before fit() ?
As mentioned in the Keras docs, it only saves the architecture of the model:
Saving/loading only a model's architecture
If you only need to save the architecture of a model, and not its
weights or its training configuration, you can do:
# save as JSON
json_string = model.to_json()
# save as YAML
yaml_string = model.to_yaml()
The generated JSON / YAML files are human-readable and can be manually
edited if needed.
You can then build a fresh model from this data:
# model reconstruction from JSON:
from keras.models import model_from_json
model = model_from_json(json_string)
# model reconstruction from YAML
from keras.models import model_from_yaml
model = model_from_yaml(yaml_string)
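
Since to_json() serializes only the architecture, it can be called as soon as the model is defined, before compile() or fit(); a minimal sketch with an illustrative model:
from keras.models import Sequential, model_from_json
from keras.layers import Dense

model = Sequential([Dense(10, activation='relu', input_shape=(4,)),
                    Dense(3, activation='softmax')])

json_string = model.to_json()  # no compile() or fit() needed first

restored = model_from_json(json_string)  # fresh, untrained model with the same architecture
restored.compile(optimizer='adam', loss='categorical_crossentropy')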
