I am trying to get an HDF5/H5 file from an existing Keras project.
The attention_ocr project is an OCR model written in Python. I would like to generate an HDF5/H5 file so I can convert it with tensorflowjs_converter (see the references below) and use it in the browser.
Reference:
How to import a TensorFlow SavedModel into TensorFlow.js
Importing a Keras model into TensorFlow.js
I am looking for guidance on setting up a Keras environment and generating the HDF5/H5 file.
Once your model is trained in Keras, saving it as an HDF5 file is a single line:
my_model.save('my_filename.h5')
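From there, the browser conversion is a shell command. A minimal sketch, assuming the tensorflowjs pip package is installed and my_filename.h5 contains the full model (architecture plus weights):
pip install tensorflowjs
tensorflowjs_converter --input_format keras my_filename.h5 ./tfjs_model
The output directory will contain a model.json plus binary weight shards, which tf.loadLayersModel can consume in the browser.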
I've been trying to compile my custom trained YoloV5 model using SageMaker Neo.
The compilation fails with this error:
ClientError: InputConfiguration: No pth file found for PyTorch model.
Please make sure the framework you select is correct.
The weights are a .pt file.
Is there a way to convert the .pt file to .pth?
There is no difference between the two file extensions; both are ordinary PyTorch serialization files. If SageMaker is only looking for a file with a .pth extension, you can simply rename your file from filename.pt to filename.pth.
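For example, the rename can be done from Python (the filename here is just a placeholder):
import os
# .pt and .pth hold identical content; only the extension changes
os.rename('filename.pt', 'filename.pth')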
I am trying to load a spaCy text classification model that I trained previously. After training, the model was saved into the en_textcat_demo-0.0.0.tar.gz file.
I want to use this model in a Jupyter notebook, but when I do
import spacy
spacy.load("spacy_files/en_textcat_demo-0.0.0.tar.gz")
I get
OSError: [E053] Could not read meta.json from spacy_files/en_textcat_demo-0.0.0.tar.gz
What is the correct way to load my model here?
You need to either unzip the tar.gz file or install it with pip.
If you unzip it, that will result in a directory, and you can pass the directory path to spacy.load.
If you pip install it, the model is placed alongside your other libraries, and you can load it by name just as you would a pretrained spaCy model.
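A minimal sketch of both options (the nested directory layout below is an assumption based on how spacy package typically structures the archive; check where meta.json actually lands after extraction):
# Option 1: install the archive, then load the model by name
# pip install spacy_files/en_textcat_demo-0.0.0.tar.gz
import spacy
nlp = spacy.load('en_textcat_demo')

# Option 2: extract the archive and load the directory that contains meta.json
import tarfile
with tarfile.open('spacy_files/en_textcat_demo-0.0.0.tar.gz') as tar:
    tar.extractall('spacy_files/')
nlp = spacy.load('spacy_files/en_textcat_demo-0.0.0/en_textcat_demo/en_textcat_demo-0.0.0')  # assumed layout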
Basically, I have been trying to train a custom object detection model with ssd_mobilenet_v1_coco and ssd_inception_v2_coco on Google Colab (TensorFlow 1.15.2) using the TensorFlow Object Detection API. As soon as I start training, it throws an error for each of the two models respectively.
I also ran python object_detection/builders/model_builder_tf1_test.py and it passed all the tests without any errors or warnings.
ValueError: ssd_inception_v2 is not supported. See model_builder.py for features extractors compatible with different versions of Tensorflow.
ValueError: ssd_mobilenet_v1_coco is not supported. See model_builder.py for features extractors compatible with different versions of Tensorflow.
I successfully switched TensorFlow to 1.15.2 with the command below; this was my first step, before installing any of the dependencies.
%tensorflow_version 1.x
import tensorflow
print(tensorflow.__version__)
When I check model_builder.py I can see that it still supports ssd_mobilenet_v1 and ssd_inception_v2. I want to deploy my custom trained ssd_mobilenet_v1 or ssd_inception_v2 model on a Jetson TX2 by converting it to a TF-TRT model. These two documents, https://www.elinux.org/Jetson_Zoo and https://github.com/NVIDIA-AI-IOT/tf_trt_models#od_models, list object detection models that can be converted to TF-TRT. So my question is: how can I train these models on Google Colab with TensorFlow 1.15.2 and then deploy them on the Jetson TX2 for conversion to TF-TRT? Any guidance would be really helpful; I'd like to continue learning. Thanks.
I think you downloaded both models from the TensorFlow 2 model zoo, and TensorFlow 1.15 will obviously not support them.
Download the models from the TF1 detection zoo instead: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md
and try again. Good luck!
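For example, in a Colab cell (the archive name below is the ssd_mobilenet_v1_coco release listed in the TF1 zoo; check the zoo page for the current link):
# fetch and unpack a TF1-compatible checkpoint
!wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz
!tar -xzf ssd_mobilenet_v1_coco_2018_01_28.tar.gz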
I have an LSTM Keras/TensorFlow model trained and exported in .h5 (HDF5) format.
My local machine did not support Keras/TensorFlow; I tried installing it, but it did not work.
Therefore, I used Google Colab and exported the model there.
I would like to know how I can use the exported model in PyCharm.
Edit: I have now installed TensorFlow on my machine.
Thanks in advance.
Found the answer:
I exported the model as follows:
model.save('/content/drive/My Drive/Colab Notebooks/model.h5')
Then I downloaded the file and saved it in the folder where my other code lives. I have TensorFlow installed.
Next I load the saved model and run a prediction as follows:
import keras
# the .h5 file was downloaded from Colab into the same folder as this script
model = keras.models.load_model('model.h5')
model.predict(instance)  # instance: input data shaped like the training data
You still need Keras and TensorFlow installed to use the model.
The accepted answer is correct, but it misses that you first need to mount /content/drive:
from google.colab import drive
drive.mount('/content/drive')
Then you can save the model's weights to your Drive:
model.save_weights('/content/drive/My Drive/my_model_weights.h5')
...or even save the whole model:
model.save('/content/drive/My Drive/my_model.h5')
Once done, flush and unmount the drive with:
drive.flush_and_unmount()
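In a later session, after mounting Drive again, the saved model can be loaded back, e.g.:
from tensorflow import keras
# assumes drive.mount('/content/drive') has been run in the new session
model = keras.models.load_model('/content/drive/My Drive/my_model.h5')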
I'm trying to convert a TFLite model to its quantized version (specifically, the pose estimation model multi_person_mobilenet_v1_075_float.tflite hosted here).
I therefore installed the tflite_convert command line tool, as recommended here. But the examples do not fit my case, where I only have a *.tflite file and no corresponding frozen_graph.pb file.
Thus when I just call
tflite_convert --output_file multi_person_quant.tflite --saved_model_dir ./
from within the directory containing multi_person_mobilenet_v1_075_float.tflite, I get an error message:
IOError: SavedModel file does not exist at: .//{saved_model.pbtxt|saved_model.pb}
I guess I need a .pb file for whatever I want to do... Any idea how to generate it from the *.tflite file?
Any other advice would also be helpful.
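For context, a minimal sketch of the usual post-training quantization path, which starts from the original model rather than from a finished .tflite file (the SavedModel directory name here is hypothetical; you would need access to the source model for this):
import tensorflow as tf

# post-training quantization works from the source model, not a .tflite file
converter = tf.lite.TFLiteConverter.from_saved_model('multi_person_saved_model')  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default weight quantization
tflite_quantized = converter.convert()

with open('multi_person_quant.tflite', 'wb') as f:
    f.write(tflite_quantized)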