Are TensorFlow 2 Keras models compatible with the Edge TPU? - python

I am trying to convert the Keras model located here: https://www.tensorflow.org/lite/performance/post_training_integer_quant into a model that can run on the Edge TPU. What this example fails to mention is that, in order to compile the model into something runnable on the TPU, it first needs to be saved as a "frozen model" in a .pb file. I tried doing that, but the Edge TPU compiler still complains that the model's tensor sizes are not constant. I also read somewhere that TensorFlow 2 does not support frozen graphs yet. Is that true, and if so, how can I convert this Keras model into something that runs on the TPU? Does a complete guide to writing a TPU-compatible model exist somewhere?
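For reference, here is a minimal sketch of the TF2 route that avoids frozen graphs entirely: export the Keras model directly to a fully-integer-quantized .tflite file with a fixed input shape, then feed that file to the Edge TPU compiler. The model path, input shape, and dataset generator below are placeholders, not the tutorial's exact code.

import numpy as np
import tensorflow as tf

# Load (or build) the Keras model. The "tensor sizes are not constant" complaint
# usually comes from a dynamic dimension, so make sure the input shape, including
# the batch size, is fixed. The path and 28x28x1 shape here are illustrative.
model = tf.keras.models.load_model("my_model.h5")

def representative_dataset():
    # Yield ~100 real input samples so the converter can calibrate int8 ranges.
    # Random data is only a placeholder for the real calibration set.
    for _ in range(100):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization; the Edge TPU only executes int8 ops.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())

# Then compile for the Edge TPU on the command line:
# edgetpu_compiler model_quant.tflite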

Related

tf.keras.models: load_model() ValueError: Unknown layer: Functional in OpenCV/DNN module. How to use pre-trained model with old versions of tf/keras?

I did transfer learning with Mask R-CNN for multiple-object detection in an environment with:
python=3.6.12
tensorflow==1.15.3
keras==2.2.4
mrcnn==2.1
And the model works.
Now I would like to run mrcnn in real time with my laptop camera and OpenCV.
First, I would apply face detection with res10_300x300_ssd_iter_140000.caffemodel, because my mrcnn model works better when it is run on a face. I chose res10 because I have already used it in another project and it worked well!
Unfortunately, I noticed that MaskRCNN doesn't work with the latest version of tensorflow. Moreover, res10_300x300_ssd_iter_140000.caffemodel doesn't work with old versions of tensorflow, and I get this error: "ValueError: Unknown layer: Functional".
I would like to know whether it is possible to use res10_300x300_ssd_iter_140000.caffemodel with previous versions of tensorflow.
Is there a way to port MaskRCNN to a more recent version of tensorflow?
Or, is there a way to use res10 with old versions of tensorflow?
Is there a different model for face detection in OpenCV with good accuracy?
Is there a different model than mrcnn that is compatible with res10?
Any advice is welcome!
Thanks!
My Resources:
https://github.com/opencv/opencv/wiki/Deep-Learning-in-OpenCV
https://machinelearningmastery.com/how-to-train-an-object-detection-model-with-keras/
https://www.pyimagesearch.com/2020/05/04/covid-19-face-mask-detector-with-opencv-keras-tensorflow-and-deep-learning/
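On the res10 part of the question above: res10_300x300_ssd_iter_140000 is a Caffe model that OpenCV's DNN module loads and runs by itself, so it does not depend on the installed TensorFlow version at all; the "Unknown layer: Functional" error is typically raised by tf.keras.models.load_model when an old Keras is given a model saved with TF 2.x, which points at the Keras side rather than at res10. A minimal sketch of running res10 next to a TF 1.15 environment (file paths are placeholders):

import cv2
import numpy as np

# Load the Caffe face detector through OpenCV's DNN module -- no TensorFlow involved.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

frame = cv2.imread("frame.jpg")          # or a frame from cv2.VideoCapture(0)
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:                 # keep reasonably confident detections
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * np.array([w, h, w, h])).astype(int)
        face = frame[y1:y2, x1:x2]       # crop to hand over to the Mask R-CNN model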

Tensorflow Object Detection API TF2 not displaying visualizations in Tensorboard

Description
I had a setup for training using the Object Detection API that worked really well. However, I have had to upgrade from TF1.15 to TF2, so instead of model_main.py I am now using model_main_tf2.py, with the MobileNet SSD 320x320 pipeline, to transfer-train a new model.
When training my model in TF1.15 it would display a whole heap of scalars as well as detection-box image samples. It was fantastic.
In TF2 training I get no such data, just loss scalars and 3 input images, and yet the event files are huge (gigabytes), whereas they were hundreds of megabytes in TF1.15.
The thing is, there is nowhere to specify what data is presented. I have not changed anything other than which model_main .py file I use to run the training. I added num_visualizations: to the pipeline config file, but no visualizations of detection boxes appear.
Can someone please explain to me what is going on? I need to be able to see what's happening throughout training!
Thank You
I am training on a PC in a virtual environment before performing TRT optimization on Linux, but I think that is irrelevant here.
Environment
GPU Type: P220
Operating System + Version: Win10 Pro
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2
Relevant Files
TF1.15 vs TF2 screenshots:
TF1 (model_main.py) Tensorboard Results
TF2 (model_main_tf2.py) Tensorboard Results
Steps To Reproduce
The repo I am working with: GitHub Object Detection API
Model
Pipeline Config File
UPDATE: I have investigated further and discovered that the TensorBoard settings are being set in Object Detection Trainer 1 for TF1.15 and Object Detection Trainer 2 for TF2.
So if someone who knows more than I do about this could work out what the difference is, and what I need to do to get the same result in TensorBoard with v2 as I do with the first one, that would be amazing and save me an enormous headache. It would seem that this, even though it is documented as being for TF2, is not actually following TF2 syntax, but I could be wrong.
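One possible explanation, offered as an assumption rather than a confirmed diagnosis: in the TF2 Object Detection API, detection-box images and eval metrics are written by a separate evaluation job, not by the training loop, so TensorBoard only shows them if model_main_tf2.py is also launched in evaluation mode with --checkpoint_dir pointing at the training directory. A sketch with placeholder paths:

# Training job (paths are placeholders):
!python model_main_tf2.py \
    --pipeline_config_path=path/to/pipeline.config \
    --model_dir=training/ \
    --alsologtostderr

# Evaluation job, run alongside or after training. This is the process that writes
# eval metrics and side-by-side detection images (up to num_visualizations from
# eval_config) into the event files that TensorBoard displays:
!python model_main_tf2.py \
    --pipeline_config_path=path/to/pipeline.config \
    --model_dir=training/ \
    --checkpoint_dir=training/ \
    --alsologtostderr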

Yolov4 weights conversion to tflite failure

I am using Google Colab to convert Yolov4 darknet weights to a tflite version.
I used this blog to train my own YOLO detector and I reached acceptable accuracy for my detections.
Then I tried every repository on GitHub (1, 2, 3) to convert my custom yolov4 weights to a tflite version, and I failed every time. I faced a "cannot reshape array" problem here and solved it by modifying file.names and adjusting the corresponding change in the config file. In the end, when I executed this code:
!python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4.tflite
I faced this error:
ValueError: Input 0 of node model_1/batch_normalization/AssignNewValue was passed float from model_1/batch_normalization/FusedBatchNormV3/ReadVariableOp/resource:0 incompatible with expected resource.
This problem was discussed here, but I couldn't find any solution.
I am using Google Colab, which makes things easier with the preinstalled libraries, but maybe there is some incompatibility in the libraries used for the conversion?
Any help would be appreciated.
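One hedged check for the library-mismatch theory: these conversion repos are usually pinned to a specific TensorFlow release, and running them against whatever Colab preinstalls is a common source of errors like the AssignNewValue one above. Pinning the version the repo declares rules this out; 2.3.0 below is only a placeholder, take the real number from the repo's requirements.txt.

# Install the exact TensorFlow version the conversion repo was written against
# (the version number is an example, not a recommendation).
!pip install tensorflow==2.3.0

import tensorflow as tf
print(tf.__version__)   # confirm the pinned version is active, then re-run:
# !python convert_tflite.py --weights ./data/yolov4.weights --output ./data/yolov4.tflite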

How to train a custom object detection model (ssd_mobilenet_v1_coco and ssd_inception_v2_coco) on Google Colab with tensorflow 1.15.2?

Basically, I have been trying to train a custom object detection model with ssd_mobilenet_v1_coco and ssd_inception_v2_coco on Google Colab with tensorflow 1.15.2, using the TensorFlow Object Detection API. As soon as I start training, it throws an error for each of the two models respectively (shown below).
I also ran python object_detection/builders/model_builder_tf1_test.py and it passed all the tests without any errors or warnings.
ValueError: ssd_inception_v2 is not supported. See model_builder.py for features extractors compatible with different versions of Tensorflow.
ValueError: ssd_mobilenet_v1_coco is not supported. See model_builder.py for features extractors compatible with different versions of Tensorflow.
I successfully switched TensorFlow to 1.15.2 using the command below; this is my first step before installing any of the dependencies.
%tensorflow_version 1.x
import tensorflow
print(tensorflow.__version__)
When I checked model_builder.py I can see that they still have support for ssd_mobilenet_v1 and ssd_inception_v2. I want to deploy my custom trained model (ssd_mobilenet_v1 or ssd_inception_v2) on a Jetson TX2 by converting it into a TF-TRT model. In these two documents, https://www.elinux.org/Jetson_Zoo and https://github.com/NVIDIA-AI-IOT/tf_trt_models#od_models, we can see object detection models which can be converted to TF-TRT models. So my question is: how can I train these models, which are supported on Google Colab with tensorflow 1.15.2, and deploy them on the Jetson TX2 by converting them to TF-TRT models? Can anyone guide me through it? It would really help me continue my learning and learn something interesting. Thanks.
I think you downloaded both of the models from the TensorFlow 2 repository, and tensorflow 1.15 will obviously not support them.
Download the models from here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md
and try again. Good luck.
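A short sketch of what that suggestion looks like in a Colab cell, assuming the checkpoint name and config paths below (they are illustrative; verify the exact archive name against the TF1 detection zoo page linked above):

# Download a checkpoint from the TF1 detection zoo and unpack it.
!wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz
!tar -xzf ssd_mobilenet_v1_coco_2018_01_28.tar.gz

# Point fine_tune_checkpoint in the pipeline config at the extracted model.ckpt,
# then train with the TF1 binary (model_main.py, not model_main_tf2.py):
!python object_detection/model_main.py \
    --pipeline_config_path=path/to/ssd_mobilenet_v1_coco.config \
    --model_dir=training/ \
    --alsologtostderr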

Why would tensorflow automatically continue training the model?

I'm running my Python project, which trains a neural network with tensorflow, in PyCharm, and I find that my network has already been well trained from the second restart of the project onward.
I have no command to restore any trained model in my project (I do have commands for saving models). Does anyone know anything about my problem?
Is it possible that tensorflow or PyCharm have default settings to save and restore?
Many thanks!
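A common cause, offered as an assumption since the question doesn't show the training code: Estimator-style training loops silently resume from the latest checkpoint found in the model directory, so having save commands alone is enough for the next run to "continue". A quick way to check whether that is what is happening ("training_run/" is a placeholder for whatever save path the project uses):

import tensorflow as tf

# If a previous run saved checkpoints into this directory, Estimator-based training
# (and anything else that calls tf.train.latest_checkpoint) will pick them up and
# resume instead of starting fresh.
ckpt = tf.train.latest_checkpoint("training_run/")
print("Latest checkpoint:", ckpt)   # None means the next run would start from scratch.
# Deleting the checkpoint files, or pointing the save path somewhere new, forces a fresh start.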
