How to use the pretrained instance segmentation Mask R-CNN model from TensorFlow? - python

TensorFlow seems to have recently released a pretrained model for instance segmentation using Mask R-CNN, as per the tweet below.
https://twitter.com/TensorFlow/status/963472849511092225
I downloaded mask_rcnn_resnet101_atrous_coco_2018_01_28.tar.gz from this and was trying to figure out how to use it. I found the frozen model (.pb) file and loaded the graph in TensorBoard.
But I can't figure out what the input to the model should be. I couldn't find a node where I can simply feed in an image, though I was able to locate the nodes that output classes, masks, bounding boxes, etc.
Also, there seem to be no details online on how to use it (maybe because it is new).

If you follow this TensorFlow tutorial, it will show you how to run the frozen model on a single image or a group of images. To apply this to the model you downloaded, the simplest way would be to first replace the line:
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
with the path to your downloaded model, i.e.
PATH_TO_CKPT = '/absolute/path/to/frozen_inference_graph.pb'
Then there is no need to run the code under the section Download Model. The rest should work the same.
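For reference, a minimal sketch of that inference loop (TF 1.x style, as in the tutorial) might look like the following. The tensor names are the ones the Object Detection API usually exports (image_tensor, detection_boxes, detection_masks, ...), so verify them against what you see in TensorBoard:
import numpy as np
import tensorflow as tf  # TF 1.x graph/session API, as used in the tutorial

PATH_TO_CKPT = '/absolute/path/to/frozen_inference_graph.pb'

# load the frozen graph
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

# run inference on a dummy uint8 image batch (replace with a real image)
with tf.Session(graph=detection_graph) as sess:
    image = np.zeros((1, 480, 640, 3), dtype=np.uint8)
    num, boxes, scores, classes, masks = sess.run(
        ['num_detections:0', 'detection_boxes:0', 'detection_scores:0',
         'detection_classes:0', 'detection_masks:0'],
        feed_dict={'image_tensor:0': image})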

Related

How to open and analyze Google Tables model using python/tf?

I read several discussions about this and still cannot make it work for my case.
I have a classification model trained using Google Tables.
I exported the model and downloaded the directory with the CLI.
My goal is to get a better understanding of the model trained by Google, study it, and understand its decisions, and later try to prune it to improve performance.
I'm using this code, just to start:
import tensorflow as tf
from tensorflow import keras
import struct2tensor
location = "model_dir"
model = tf.saved_model.load(location)
model.summary()
I get this error:
AttributeError: 'AutoTrackable' object has no attribute 'summary'
the variable model is of type:
<tensorflow.python.training.tracking.tracking.AutoTrackable at 0x7fa8eaa7ed30>
And I'm stuck there and don't know how to continue. I'm using Python 3.8 and the latest versions of those libraries. Any idea of how I can proceed?
Thanks!
The proper method to load your model depends on your file formatting.
You can see in the Tensorflow documentation that "The object returned by tf.saved_model.load is not a Keras object (i.e. doesn't have .fit, .predict, etc. methods)" and "Use tf.keras.models.load_model to restore the Keras model".
I'm not sure if you want to use the keras module or not, but since you have imported it I assume you do. In that case I would recommend checking this other Stack Overflow thread, where it is explained how to use the tf.keras.models.load_model method depending on whether your model is saved as .pb or .h5.
If the model is saved as a .pb, you should call it with the string pointing to the directory where the model is saved, as you did in your code snippet, but in this case using the Keras method:
model = tf.keras.models.load_model('model_dir')
If instead it's saved as an .h5 file, you should call it with the path to the file:
model = tf.keras.models.load_model('my_model_in_h5.h5')
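If neither call gives you a Keras model (Tables exports are often plain SavedModels rather than Keras models), you can at least inspect the object returned by tf.saved_model.load through its serving signatures. A small sketch, assuming the "model_dir" directory from your snippet:
import tensorflow as tf

loaded = tf.saved_model.load("model_dir")
print(list(loaded.signatures.keys()))        # typically ['serving_default']
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)      # expected input names and dtypes
print(infer.structured_outputs)              # outputs produced by the signature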

How to convert a pretrained tensorflow pb frozen graph into a modifiable h5 keras model?

I have been searching for a method to do this for a long time and cannot find an answer. Most threads I found are from people wanting to do the opposite.
Backstory:
I am experimenting with some pre-trained models provided by the tensorflow/models repository. The models are saved as .pb frozen graphs. I want to fine-tune some of these models by changing the final layers to suit my application.
Hence, I want to load the models inside a jupyter notebook as a normal keras h5 model.
How can I do that?
Do you have a better way to do this?
Thanks.
It seems like all you would have to do is download the model files and store them in a directory, for example c:\models. Then load the model:
import os
import tensorflow as tf

model = tf.keras.models.load_model(r'c:\models')
model.summary()  # prints out the model layers

# modify the model as you typically do for transfer learning
# compile the changed model
# train the model

# save the trained model as a .h5 file
save_dir = r'path to the directory you want to save the model to'
model_identifier = 'abcd.h5'  # for abcd use whatever identification you want
save_path = os.path.join(save_dir, model_identifier)
model.save(save_path)

Can I retrain OpenCV DNN face detector using my own face dataset and .pb .pbtxt files provided by OpenCV?

I want to fine-tune the existing OpenCV DNN face detector on a face image database that I own. I have the opencv_face_detector.pbtxt and opencv_face_detector_uint8.pb tensorflow files provided by OpenCV. I wonder whether, based on these files, there is any way to fit the model to my data? So far I also haven't managed to find any tensorflow training script for this model in the OpenCV git repository, and I only know that the given model is an SSD with ResNet-10 as a backbone. I am also not sure, from the information on the internet, whether I can resume training from a .pb file. Are you aware of any available scripts defining the model that could be used for training? Would the pbtxt and pb files be enough to continue training on new data?
Also, I noticed that there is a git repository containing a Caffe version of this model https://github.com/weiliu89/caffe/tree/ssd. Although I have never worked with Caffe before, would it be possible/easier to use the existing weights (Caffe .pg and .pbtxt files are also available in OpenCV's GitHub) and fit the model to my dataset?
I don't see a way to do this in OpenCV, but I think you'd be able to load the model into TensorFlow and use model.fit() to retrain.
The usual advice about transfer learning applies. You'd probably want to freeze most of the early layers and only retrain the last one or two. A slow learning rate would be advised as well.
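As an illustration of that advice (not of the OpenCV files themselves, which cannot be fine-tuned directly), here is a rough Keras sketch using a stock backbone as a stand-in for the detector; the dataset objects are placeholders:
import tensorflow as tf

# stand-in backbone; the real face detector is an SSD with a ResNet-10 backbone
base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False  # freeze the early, pretrained layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. face / no-face
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # slow LR
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your own face images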

How to save a MASK RCNN model after training?

I am using the matterport repository to train Mask R-CNN on a custom dataset. I have been successful in training. Now I want to save the trained model and use it in a web application to detect objects. How do I save the Mask R-CNN model after training? Please guide me.
The link of the repository:
https://github.com/matterport/Mask_RCNN
Based on this discussion on GitHub, it appears that a trained matterport/Mask_RCNN model can be saved as a JSON file (the architecture, not the weights) in a manner similar to models trained via standard Keras:
import keras
import json

def save_model(trained_model, out_fname="model.json"):
    # serialize the architecture of the underlying Keras model to JSON
    jsonObj = trained_model.keras_model.to_json()
    with open(out_fname, "w") as fh:
        fh.write(jsonObj)

save_model(model, "mymodel.json")
Update: If you run into the error related to thread-like object, you might find this post helpful...
In the Inspect_model.ipynb notebook, under the "Load Model" topic, you can save the model after it has been loaded in inference mode.
Training also generates a folder inside Mask_RCNN/logs where the weights are written.
I am not sure if we really need to save the whole model again, since normally when we use the matterport repo we just train new weights on the existing architecture and don't change the architecture. When we used this for a pet project, post-training we defined a new model as the MaskRCNN object (from mrcnn.model import MaskRCNN) with the parameter mode set to inference, and then loaded the newly trained weights with model.load_weights('<logpath/trainedweights.h5>', by_name=True).
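A rough sketch of that pattern, assuming the matterport repo layout; the config values and the weights path are placeholders for your own:
import mrcnn.model as modellib
from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "custom"
    NUM_CLASSES = 1 + 1        # background + your object classes
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(),
                          model_dir="Mask_RCNN/logs")
model.load_weights('<logpath/trainedweights.h5>', by_name=True)
# results = model.detect([image], verbose=1)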

Tensorflow Custom Model in OpenCV

I have trained a new model on top of ssd_mobilenet_v1_coco for a custom dataset. This model works fine in TensorFlow, but now I want to use it in OpenCV.
net = cv2.dnn.readNetFromTensorflow("model/frozen_inference_graph.pb", "model/protobuf.pbtxt")
detections = net.forward()
For the config file, I converted the frozen graph to a .pbtxt and added it. But then I got the following error:
[libprotobuf ERROR /home/chamath/Projects/opencv/opencv/3rdparty/protobuf/src/google/protobuf/text_format.cc:298] Error parsing text-format tensorflow.GraphDef: 731:5: Unknown enumeration value of "DT_RESOURCE" for field "type".
As suggested here, I tried to use the config file mentioned in the thread, but when I use it, object detection does not work properly: an incorrect number of boxes is detected and they are misplaced.
Is there any method to create a pbtxt config file that works with OpenCV? Or any suggestions how to make my model work in OpenCV?
It is likely that you haven't generated the proper graph after training.
You have to convert the graph like this:
python ../opencv/samples/dnn/tf_text_graph_ssd.py \
    --input trained-inference-graphs/inference_graph_v5.pb/frozen_inference_graph.pb \
    --output trained-inference-graphs/inference_graph_v5.pb/graph.pbtxt
Then pass the .pb and the graph.pbtxt to cv2.dnn.readNetFromTensorflow; that should work for you :)
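For completeness, a minimal sketch of running the converted model with OpenCV's DNN module; the 300x300 input size and swapRB follow the usual SSD-MobileNet settings and may need adjusting for your export:
import cv2

net = cv2.dnn.readNetFromTensorflow(
    "trained-inference-graphs/inference_graph_v5.pb/frozen_inference_graph.pb",
    "trained-inference-graphs/inference_graph_v5.pb/graph.pbtxt")

image = cv2.imread("test.jpg")
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True, crop=False)
net.setInput(blob)
detections = net.forward()  # shape (1, 1, N, 7): [imageId, classId, score, x1, y1, x2, y2]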
