Description
I had a training setup using the Object Detection API that worked really well. However, I have had to upgrade from TF1.15 to TF2, so instead of model_main.py I am now using model_main_tf2.py, with the MobileNet SSD 320x320 pipeline, to transfer-train a new model.
When training in TF1.15, TensorBoard would display a whole heap of scalars as well as detection-box image samples. It was fantastic.
In TF2 training I get no such data, just the loss scalars and 3 input images, and yet the event files are huge (gigabytes), whereas they were only hundreds of megabytes with TF1.15.
The thing is, there is nowhere to specify what data is presented. I have not changed anything other than which model_main .py file I use to run the training. I added num_visualizations: to the pipeline config file, but no detection-box visualizations appear.
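For reference, num_visualizations belongs in the eval_config block of the pipeline config; a minimal sketch (the value 20 is arbitrary, and metrics_set is whatever your config already uses):

eval_config {
  metrics_set: "coco_detection_metrics"
  num_visualizations: 20
}

From what I can tell, in TF2 the detection-box images are only written by the evaluation job (model_main_tf2.py launched with --checkpoint_dir pointing at the training directory), not by the training job itself, but I could be wrong.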
Can someone please explain to me what is going on? I need to be able to see what's happening throughout training!
Thank You
I am training on a PC in a virtual environment before performing TRT optimization on Linux, but I think that is irrelevant here.
Environment
GPU Type: P220
Operating System + Version: Win10 Pro
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2
Relevant Files
TF1.15 vs TF2 screenshots:
TF1 (model_main.py) TensorBoard results
TF2 (model_main_tf2.py) TensorBoard results
Steps To Reproduce
The repo I am working with: GitHub Object Detection API
Model
Pipeline Config File
UPDATE: I have investigated further and discovered that the TensorBoard settings are set in Object Detection Trainer 1 for TF1.15 and Object Detection Trainer 2 for TF2.
So if someone who knows more about this than I do could work out what the difference is, and what I need to do to get the same results in TensorBoard with v2 as I do with v1, that would be amazing and save me an enormous headache. It would seem that the v2 trainer, even though it is documented as being for TF2, does not actually follow TF2 syntax, but I could be wrong.
Related
I'm running a simple TFLite cloud-detection model for Coral, trained with the help of a Jupyter notebook here. It never returns more than 10 detections; why is that? I don't see any such limit during training. Is there a limit imposed by TFLite? For the actual run I mostly use the code here.
Even the test set in the Jupyter notebook does not show more than 10 boxes. The dataset seems alright, as the same data trained via Roboflow returns hundreds of detections.
Thanks in advance; I have never trained AI models before.
I tried checking the config files and looked on Stack Overflow to see if anybody had had a similar problem, but found nothing. The Jupyter notebook is no help either.
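For reference, the detection cap can be seen in the .tflite file itself; a minimal sketch for inspecting the output tensor shapes (the model path is a placeholder). For SSD models exported through the Object Detection API, the second dimension of the box output, e.g. [1, 10, 4], is the max_detections value baked into the postprocessing op at export time, which defaults to 10:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_output_details():
    print(detail["name"], detail["shape"])  # e.g. boxes [1, 10, 4], scores [1, 10]

If that is the case, the fix is to re-export the model with a larger max_detections rather than to change anything at inference time.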
I did transfer learning using Mask R-CNN (mrcnn) for multiple-object detection in an environment with:
python=3.6.12
tensorflow==1.15.3
keras==2.2.4
mrcnn==2.1
And the model works.
Now I would like to run mrcnn in real time with my laptop camera and OpenCV.
First, I would apply face detection with res10_300x300_ssd_iter_140000.caffemodel, because my mrcnn model works better if it is run on a face. I chose res10 because I have already used it in another project and it worked well!
Unfortunately, I noticed that MaskRCNN doesn't work with the latest version of TensorFlow. Moreover, res10_300x300_ssd_iter_140000.caffemodel doesn't work with old versions of TensorFlow, and I get this error: "ValueError: Unknown layer: Functional".
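For reference, this is roughly how I run res10; note that it goes through OpenCV's DNN module rather than TensorFlow itself (a minimal sketch; file paths are placeholders, and the 300x300 size and mean values are the ones commonly used with this model):

import cv2

# Load the Caffe model through OpenCV's DNN module
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

image = cv2.imread("frame.jpg")
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()  # shape [1, 1, N, 7]: image id, class, confidence, box coords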
I would like to know whether it is possible to use res10_300x300_ssd_iter_140000.caffemodel with previous versions of TensorFlow.
Is there a way to port MaskRCNN to a more recent version of TensorFlow?
Or, is there a way to use res10 with old versions of TensorFlow?
Is there a different model for face detection in OpenCV with good accuracy?
Is there a different model than mrcnn that is compatible with res10?
Any advice is welcome!
Thanks!
My Resources:
https://github.com/opencv/opencv/wiki/Deep-Learning-in-OpenCV
https://machinelearningmastery.com/how-to-train-an-object-detection-model-with-keras/
https://www.pyimagesearch.com/2020/05/04/covid-19-face-mask-detector-with-opencv-keras-tensorflow-and-deep-learning/
I am trying to convert the Keras model located here: https://www.tensorflow.org/lite/performance/post_training_integer_quant to a model that can run on the Edge TPU. What this example fails to mention is that, in order to compile the model into something runnable on the TPU, it first needs to be saved as a "frozen model" in a .pb file. I tried doing that, but the Edge TPU compiler still complains that the model's tensor sizes are not constant. I also read somewhere that TensorFlow 2 does not support frozen graphs yet. Is that true, and if so, how can I convert this Keras model to something that is runnable on the TPU? Does a complete guide for writing a TPU-compatible model exist somewhere?
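For anyone hitting the same wall: with TF2 you should not need a frozen .pb at all; the usual route is to convert the Keras model directly with TFLiteConverter using full-integer quantization (as in the linked guide), making sure the input shape is fully fixed with no None dimensions, and then run edgetpu_compiler on the resulting .tflite. A minimal sketch, assuming the MNIST-style 28x28x1 model from that page (the model path and representative data are placeholders):

import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("model.h5")

def representative_dataset():
    # Yield a few batches of realistic inputs so the converter can calibrate ranges
    for _ in range(100):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization, which the Edge TPU requires
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())

After that, edgetpu_compiler model_quant.tflite should produce an Edge TPU-compatible model, assuming all ops quantize cleanly.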
Basically, I have been trying to train a custom object detection model with ssd_mobilenet_v1_coco and ssd_inception_v2_coco on Google Colab with TensorFlow 1.15.2, using the TensorFlow Object Detection API. As soon as I start training, it throws an error for both models, respectively.
I also ran python object_detection/builders/model_builder_tf1_test.py, and it passed all the tests without any errors or warnings.
ValueError: ssd_inception_v2 is not supported. See model_builder.py for features extractors compatible with different versions of Tensorflow.
ValueError: ssd_mobilenet_v1_coco is not supported. See model_builder.py for features extractors compatible with different versions of Tensorflow.
I successfully switched TensorFlow to 1.15.2 using the commands below; this is my first step, before installing any of the dependencies.
%tensorflow_version 1.x
import tensorflow
print(tensorflow.__version__)
When I checked model_builder.py, I can see that it still supports ssd_mobilenet_v1 and ssd_inception_v2. I want to deploy my custom-trained ssd_mobilenet_v1 or ssd_inception_v2 model on a Jetson TX2 by converting it to a TF-TRT model. In these two documents, https://www.elinux.org/Jetson_Zoo and https://github.com/NVIDIA-AI-IOT/tf_trt_models#od_models, we can see the object detection models which can be converted to TF-TRT models. So my question is: how can I train these models on Google Colab with TensorFlow 1.15.2 and then deploy them on the Jetson TX2 for conversion to TF-TRT models? If anyone can guide me through it, that would be really helpful and would let me continue learning something interesting. Thanks!
I think you downloaded both of the models from the TensorFlow 2 detection zoo, and TensorFlow 1.15 will obviously not support them.
Download models from here instead: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md
and try again. Good luck!
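For example, fetching a TF1 checkpoint in Colab could look like this (a sketch; the URL is the ssd_mobilenet_v1_coco entry from the TF1 detection zoo page linked above):

import tarfile
import urllib.request

URL = "http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tar.gz"
fname, _ = urllib.request.urlretrieve(URL, "ssd_mobilenet_v1_coco_2018_01_28.tar.gz")
with tarfile.open(fname) as tar:
    tar.extractall(".")  # extracts the model.ckpt.* files to fine-tune from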
I am working on a letter-recognition application for a robot. I used my home PC to train the model and want to run recognition on an RPi Zero W with the already-trained model.
I have an HDF5 model. When I try to install TensorFlow on the RPi Zero, it throws a hash error; as far as I can tell, this is because the TF wheels are built for 64-bit machines. When I try to install TensorFlow Lite, the installation stalls and crashes.
For saving the model I use:
classifier.save('test2.h5')
These are the prediction lines:
import numpy as np
import tensorflow.keras as ks

test_image = ks.preprocessing.image.load_img('image.jpg')  # pass target_size=... to match the model's input
test_image = ks.preprocessing.image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)  # predict() expects a batch dimension
result = classifier.predict(test_image)
I also tried to compile the Python script with Nuitka, but since the RPi is ARM and Nuitka does not offer cross-compilation, that option fell through.
You can use the already-available TFLite runtime to solve your issue.
If that does not help, you can also build TFLite from source.
Please refer to the links below:
https://www.tensorflow.org/lite/guide/build_rpi
https://medium.com/@haraldfernengel/compiling-tensorflow-lite-for-a-raspberry-pi-786b1b98e646
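A minimal sketch of that route, assuming the saved test2.h5 model and the tflite_runtime wheel installed on the Pi (paths and the dummy input are placeholders):

# On the PC: convert the trained Keras model to TFLite
import tensorflow as tf

model = tf.keras.models.load_model("test2.h5")
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("test2.tflite", "wb") as f:
    f.write(tflite_model)

# On the RPi Zero: run inference with the lightweight tflite_runtime package
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="test2.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.zeros(inp["shape"], dtype=inp["dtype"])  # replace with a real preprocessed image
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
result = interpreter.get_tensor(out["index"])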