How to save a TensorLayer model in TensorFlow's SavedModel format? - python

I trained a neural network model with TensorLayer (a simple ABR algorithm based on RL policy gradient). Now I want to save it in TensorFlow's SavedModel format because I want to use TensorFlow.js. In TensorLayer I use tl.files.save_weights_to_hdf5('model/pg_policy.hdf5', self.model) to save my model, which gives me an .hdf5 weights file. But to load that file, I have to rebuild the network from the class in my source code first.
So I don't know how to save my TensorLayer model in the SavedModel format (https://tensorflow.google.cn/guide/saved_model). Or is there any way I can convert my TensorLayer model into a plain TensorFlow model?
Here is my simple NN built with TensorLayer; sorry, I am new to this area.
[image: the TensorLayer network definition code]
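Since the .hdf5 file from save_weights_to_hdf5 only holds weights, one possible route is to rebuild the network, reload the weights with the matching loader, and export the forward pass with tf.saved_model.save. A minimal sketch, assuming a TensorLayer 2.x model; build_policy_network and STATE_DIM are placeholders for your own code:

import tensorflow as tf
import tensorlayer as tl

model = build_policy_network()  # hypothetical: rebuild the net exactly as in training
tl.files.load_hdf5_to_weights('model/pg_policy.hdf5', model)
model.eval()  # switch the TensorLayer model to inference mode

class Export(tf.Module):
    # Wrap the TensorLayer forward pass so TensorFlow can trace and save it.
    def __init__(self, net):
        super().__init__()
        self.net = net
        self.weight_list = net.trainable_weights  # track variables in the SavedModel

    @tf.function(input_signature=[tf.TensorSpec([None, STATE_DIM], tf.float32)])
    def __call__(self, x):
        return self.net(x)

tf.saved_model.save(Export(model), 'export/pg_policy')

The resulting export/pg_policy directory can then be fed to tensorflowjs_converter with --input_format=tf_saved_model to produce a TensorFlow.js model.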

Related

How to convert flax saved checkpoints to model?

https://github.com/google-research/scenic/tree/main/scenic/projects/mbt
I am trying to use the pretrained model in that repository, which is distributed as a Flax checkpoint. I want to load it back into a model. How can I do that?
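A minimal sketch of loading such a checkpoint, assuming it was written with flax.training.checkpoints; rebuilding the scenic/MBT module itself is left as a placeholder:

from flax.training import checkpoints

# target=None returns the raw parameter pytree as a nested dict.
restored = checkpoints.restore_checkpoint(ckpt_dir='path/to/checkpoint', target=None)

# Depending on what was stored, the pytree may be a full train state rather
# than bare parameters. Bind the parameters to a re-created module to run it:
# model = MBT(...)  # hypothetical: re-create the model from the scenic code
# logits = model.apply({'params': restored['params']}, batch)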

How to convert darknet image classification weights file to pytorch pt?

I created a darknet53.weights model for image classification using my own data in darknet.
(This isn't a YOLOv3 model.)
Is there a way to convert a darknet53.weights file to a PyTorch .pt model?
I tried various conversion scripts from GitHub and elsewhere, but all of them only convert YOLOv3 weights files to PyTorch .pt models.
I want to compare the accuracy of the darknet53 model trained with darknet against other image classification models built with PyTorch.
Initially, I tried to build a darknet53 model with PyTorch, but that didn't work, so I created the darknet53 model with darknet instead.
If anyone knows a good way, please let me know.
Thanks.
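One possible starting point: a darknet .weights file is just a small header followed by a flat float32 array, so it can be read directly and copied slice by slice into a hand-written DarkNet-53 in PyTorch. A minimal sketch of the reading side, assuming the standard darknet header layout:

import numpy as np
import torch

def load_darknet_weights(path):
    with open(path, 'rb') as f:
        major, minor, revision = np.fromfile(f, dtype=np.int32, count=3)
        # Newer darknet versions store the "seen" counter as int64, older ones as int32.
        seen_dtype = np.int64 if major * 10 + minor >= 2 else np.int32
        np.fromfile(f, dtype=seen_dtype, count=1)
        # The rest is one flat float32 array, laid out layer by layer; a conv
        # layer with batch norm stores [bn_bias, bn_weight, bn_mean, bn_var,
        # conv_weight] in that order.
        return np.fromfile(f, dtype=np.float32)

weights = load_darknet_weights('darknet53.weights')
# Each slice must then be reshaped into the matching module of a PyTorch
# DarkNet-53 that you define yourself, e.g.:
# conv.weight.data.copy_(torch.from_numpy(weights[ptr:ptr + n]).view_as(conv.weight))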

Converting teachable machine model into tflite format

So I have trained a PoseNet model for classifying different poses on the Teachable Machine website (link for the website). I want to use this trained model in a Flutter app, and for that I need to convert the model into tflite format. I checked many online blogs which said that the website has an option to export in that format, but that feature has been removed. So how can I convert this Teachable Machine model into tflite format?
I downloaded a pose model of my own from that site, and the zip turns out to contain a TensorFlow.js model.
Now that we know the unzipped files are just a TF.js model, refer to a tutorial like this to convert the TF.js model back into a Keras model, which can then be converted to a tflite model.
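A minimal sketch of that round trip, assuming the unzipped download is a TF.js layers model (model.json plus weight shards) and the tensorflowjs pip package is installed; file names are placeholders:

# Step 1: TF.js -> Keras .h5, using the converter CLI that ships with tensorflowjs:
#   tensorflowjs_converter --input_format=tfjs_layers_model \
#       --output_format=keras model.json pose_model.h5

# Step 2: Keras .h5 -> tflite
import tensorflow as tf

model = tf.keras.models.load_model('pose_model.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('pose_model.tflite', 'wb') as f:
    f.write(tflite_model)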

How to convert a pretrained tensorflow pb frozen graph into a modifiable h5 keras model?

I have been searching for a method to do this for a long time and cannot find an answer. Most threads I found are from people wanting to do the opposite.
Backstory:
I am experimenting with some pre-trained models provided by the tensorflow/models repository. The models are saved as .pb frozen graphs. I want to fine-tune some of these models by changing the final layers to suit my application.
Hence, I want to load the models inside a Jupyter notebook as a normal Keras .h5 model.
How can I do that?
Or do you have a better way to do this?
Thanks.
It seems all you would have to do is download the model files and store them in a directory, for example c:\models. Then load the model:
import os
import tensorflow as tf

model = tf.keras.models.load_model(r'c:\models')
model.summary()  # print out the model layers

# ...modify the model as you typically would for transfer learning,
# then compile and train the changed model...

# Save the trained model as a .h5 file.
save_dir = r'path to the directory you want to save the model to'
model_identifier = 'abcd.h5'  # for abcd use whatever identification you want
save_path = os.path.join(save_dir, model_identifier)
model.save(save_path)
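Note that tf.keras.models.load_model reads a SavedModel directory (or an .h5 file), not a single frozen-graph .pb. If what you have really is a frozen graph, here is a hedged sketch of at least loading and running it in TF2, with the tensor names as placeholders:

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('frozen_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Wrap the graph as a callable. A frozen graph has its weights baked in as
# constants, so it can be executed but not edited like a Keras model.
wrapped = tf.compat.v1.wrap_function(
    lambda: tf.compat.v1.import_graph_def(graph_def, name=''), [])
infer = wrapped.prune('input:0', 'output:0')  # hypothetical tensor names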

I use TFLiteConvert post_training_quantize=True but my model is still too big for being hosted in Firebase ML Kit's Custom servers

I have written a TensorFlow / Keras Super-Resolution GAN. I've converted the resulting trained .h5 model to a .tflite model, using the below code, executed in Google Colab:
import tensorflow as tf

model = tf.keras.models.load_model('/content/drive/My Drive/srgan/output/srgan.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.post_training_quantize = True
tflite_model = converter.convert()
with open('/content/drive/My Drive/srgan/output/converted_model_quantized.tflite', 'wb') as f:
    f.write(tflite_model)
As you can see, I use converter.post_training_quantize=True, which was supposed to produce a .tflite model lighter than my original .h5 model, which is 159MB. The resulting .tflite model is still 159MB, however.
It's so big that I can't upload it to Google Firebase Machine Learning Kit's servers in the Google Firebase Console.
How could I either:
decrease the size of the current 159MB .tflite model (for example using a tool),
or, after deleting the current 159MB .tflite model, convert the .h5 model to a lighter .tflite model (for example using a tool)?
Related questions
How to decrease size of .tflite which I converted from keras: no answer, but a comment suggesting converter.post_training_quantize=True. However, as explained above, this solution doesn't seem to work in my case.
In general, quantization means shifting from dtype float32 to uint8, so in theory the model size should shrink by roughly a factor of 4. The reduction is clearly visible in files of greater size.
Check whether your model has actually been quantized by loading it into Netron (https://lutzroeder.github.io/netron/) and inspecting a few layers that have weights. In a quantized graph the weight values are stored in uint8 format; in an unquantized graph they are in float32 format.
Setting converter.post_training_quantize=True alone is not enough to quantize your model. The other settings include:
converter.inference_type = tf.uint8
converter.default_ranges_stats = [min_value, max_value]
converter.quantized_input_stats = {"name_of_the_input_layer_for_your_model": [mean, std]}
Assuming you are dealing with images: min_value=0, max_value=255, mean=128 (subjective) and std=128 (subjective). name_of_the_input_layer_for_your_model is the first node name shown when you load your model in the tool mentioned above, or you can get it in code: model.input prints something like tf.Tensor 'input_1:0' shape=(?, 224, 224, 3) dtype=float32, where input_1 is the name of the input layer. (Note: the model must include both the graph configuration and the weights.)
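Putting those settings together, a minimal sketch (these attributes live on the legacy v1 converter; the input name, mean/std, and value range below are assumptions for a typical image model):

import tensorflow as tf

# The v1 converter exposes the attributes listed above.
converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file('srgan.h5')
converter.post_training_quantize = True
converter.inference_type = tf.uint8
converter.default_ranges_stats = [0, 255]                  # min_value, max_value
converter.quantized_input_stats = {'input_1': [128, 128]}  # mean, std
tflite_model = converter.convert()
with open('converted_model_quantized.tflite', 'wb') as f:
    f.write(tflite_model)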
