I'm trying to build an inventory-tracking application with the TensorFlow Object Detection API, and I've used this tutorial.
My image dataset is too small to train all the weights in the network, so I want to train only the last few layers, or even just the softmax layer. But I couldn't find any tutorial that explains how to declare which layers to train.
How can I do this?
Can anyone point me to a link or a GitHub issue about this?
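(For reference: if I remember the config schema correctly, the API's train_config has a freeze_variables field that takes regexes of variable names to freeze; treat the exact field names below as assumptions to verify against object_detection/protos/train.proto.)

    train_config {
      fine_tune_checkpoint: "path/to/model.ckpt"
      from_detection_checkpoint: true
      # Freeze every variable matching these regexes -- here the whole
      # feature extractor, so only the prediction heads keep training.
      freeze_variables: ".*FeatureExtractor.*"
    }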
Related
I have a pre-trained TensorFlow model that was trained on a publicly available dataset; I have its .meta and .ckpt files. I'd like to continue training it on new data from a privately obtained dataset. My dataset is small, so I'd like to fine-tune the model according to 'Strategy 2' or 'Strategy 3':
Strategy 2: Train some layers and leave the others frozen.
Strategy 3: Freeze the convolutional base.
Reference site: https://towardsdatascience.com/transfer-learning-from-pre-trained-models-f2393f124751
However, I couldn't find sample code that implements transfer learning and fine-tuning for a plain TensorFlow model; there are many examples for Keras models. How can I implement transfer learning and fine-tuning for my TensorFlow model?
If you don't have to use TensorFlow's low-level functions, you can also use example code based on the tf.keras module of TensorFlow 2.0.
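For example, here is a minimal tf.keras sketch of Strategy 3, freezing the convolutional base and retraining only a new classifier (MobileNetV2, the input size, and the class count are placeholder assumptions):

    import tensorflow as tf

    num_classes = 5  # placeholder: your new dataset's class count

    # Pre-trained convolutional base, without the original classification head.
    base = tf.keras.applications.MobileNetV2(
        weights="imagenet", include_top=False,
        input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False  # Strategy 3: freeze the convolutional base

    # Only this new softmax head gets trained.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

For Strategy 2 you would instead leave base.trainable = True and set trainable = False only on the earlier layers of the base.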
I am new to semantic segmentation. I implemented the FCN network, and now, rather than training from scratch, I want to use the pre-trained VGG16 weights. I saw an implementation like this link, but I am not sure where the new dataset's input enters the network.
To be clearer, in the linked code the VGG part returns the input image tensor of the trained network along with the outputs of layers 3, 4, and 7:
image_input, pool3_out, pool4_out, fc7_out = self._load_vgg16()
I am not sure where the new batch of data enters the model. I appreciate your guidance.
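For what it's worth, in TF1-style code like that linked implementation, image_input is typically the input placeholder of the restored VGG graph, and new batches enter only at session time through feed_dict; a rough sketch reusing the names from the snippet above (the batch source is a placeholder):

    import tensorflow as tf  # TF1-style graph/session API

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for images in new_dataset_batches:  # your own batching code
            # The new data enters the network here, via the placeholder.
            p3, p4, f7 = sess.run([pool3_out, pool4_out, fc7_out],
                                  feed_dict={image_input: images})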
I have an image classification problem where the number of classes grows over time; when a new class is created, I just retrain the model with images of the new class. I know this is not possible with a CNN alone, so to solve it I used transfer learning: I used a pretrained Keras model to extract image features, but instead of replacing the last (classification) layers with new layers, I used a Random Forest, which can grow its number of classes. I achieved 86% accuracy using InceptionResNetV2 trained on the ImageNet dataset, which is good for now.
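For context, a minimal sketch of that setup (the image arrays and forest size are placeholders):

    import tensorflow as tf
    from sklearn.ensemble import RandomForestClassifier

    # Pre-trained CNN as a fixed feature extractor, no classification head.
    extractor = tf.keras.applications.InceptionResNetV2(
        weights="imagenet", include_top=False, pooling="avg")

    def extract_features(images):
        # images: float array of shape (n, 299, 299, 3)
        x = tf.keras.applications.inception_resnet_v2.preprocess_input(images)
        return extractor.predict(x)

    # When a new class appears, refit the forest on old + new features;
    # the CNN itself never changes.
    rf = RandomForestClassifier(n_estimators=200)
    rf.fit(extract_features(train_images), train_labels)  # your own data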
Now I want to do the same on an object detection problem. How can I achieve this? Can I use the TensorFlow Object Detection API?
Is it possible to replace the last layers of a pretrained CNN in a detection algorithm like Faster R-CNN or SSD with a random forest?
Yes, you could implement the above-mentioned approach using the TensorFlow Object Detection API, and you could use your trained InceptionResNetV2 model as a feature extractor. The Object Detection API already ships an InceptionResNetV2 feature extractor trained on the COCO dataset. It's available at https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
Or, if you want to create a custom feature extractor, please follow https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/defining_your_own_model.md
If you are new to the TensorFlow Object Detection API, please follow this tutorial:
https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10
Hope this helps.
I'm working on a project that requires recognizing only people in a video or a live stream from a camera. I'm currently using the TensorFlow object detection API with Python, and I've tried different pre-trained models and frozen inference graphs. I want to recognize only people, and maybe cars, so I don't need my network to recognize all 90 classes that come with the frozen inference graphs (based on MobileNet or R-CNN); this seems to slow down the process, and 89 of these 90 classes are not needed in my project. Do I have to train my own model, or is there a way to modify the inference graphs and the existing models? This is probably a noob question for some of you, but mind that I've worked with TensorFlow and machine learning for just one month.
Thanks in advance
Shrinking the last layer to output one or two classes is not likely to yield a large speed-up, because most of the computation happens in the intermediate layers. You could shrink the intermediate layers, but this would hurt accuracy.
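If the goal is just to ignore the unwanted classes rather than speed up inference, you can also filter the detections after the fact; a minimal sketch, assuming the API's standard output dictionary (as numpy arrays) and the COCO label map, where 'person' has class id 1:

    import numpy as np

    PERSON_CLASS_ID = 1  # 'person' in the COCO label map these models use

    def keep_people(output_dict, min_score=0.5):
        """Keep only confident person detections from an inference result."""
        classes = output_dict["detection_classes"].astype(np.int64)
        scores = output_dict["detection_scores"]
        keep = (classes == PERSON_CLASS_ID) & (scores >= min_score)
        return output_dict["detection_boxes"][keep], scores[keep]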
Yes, you have to train your own model. Let's look briefly at some ways to do it.
OPTION 1. When you want to transfer as much knowledge as possible, you can freeze the CNN layers. Then you change the number of detected classes by changing the output dimension of the classifier (the dense layers), which is the last part of the CNN architecture. Now you retrain only the classifier.
OPTION 2. Suppose you want to transfer knowledge only from the first layers of the CNN (for example, freeze the first 2-3 conv layers) and retrain the rest of the CNN together with the classifier. Again, you change the number of detected classes via the classifier's output dimension, then retrain the remaining CNN layers and the classifier.
OPTION 3. Suppose you want to retrain the whole CNN together with the classifier. You change the number of detected classes via the classifier's output dimension, then retrain everything. (A sketch of all three options follows below.)
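A minimal Keras sketch of how the three options map onto layer freezing (VGG16, the layer cut-off, and the class count are placeholder assumptions):

    import tensorflow as tf

    num_new_classes = 10  # placeholder: your number of detected classes

    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))

    # OPTION 1: base.trainable = False  (freeze the whole CNN)
    # OPTION 2: freeze only the first conv blocks, retrain the rest:
    base.trainable = True
    for layer in base.layers[:7]:  # roughly the first 2-3 conv blocks
        layer.trainable = False
    # OPTION 3: leave base.trainable = True and freeze nothing.

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_new_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")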
Generally, the TensorFlow Object Detection API is a good start for beginners! For how to proceed with your problem, you can see more detail about the whole process here, with extra explanation here.
I need to implement a classification application for neuron signals. In the first step, I need to train a denoising autoencoder (DAE) layer for signal cleaning; then I will feed its output to a DBN network for classification. I tried to find support for these model types in TensorFlow, but all I found were CNNs and RNNs. Does anyone have an idea about a robust implementation of these two models using TensorFlow?
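There is no ready-made DAE or DBN class in TensorFlow, but a denoising autoencoder is easy to express with tf.keras layers; a minimal sketch for 1-D signals (window length and noise level are placeholders), whose encoder output could then feed a downstream classifier standing in for the DBN:

    import tensorflow as tf

    signal_len = 256  # placeholder: samples per neuron-signal window

    inputs = tf.keras.Input(shape=(signal_len,))
    # GaussianNoise corrupts the input during training only --
    # that corruption is what makes the autoencoder "denoising".
    noisy = tf.keras.layers.GaussianNoise(0.1)(inputs)
    encoded = tf.keras.layers.Dense(64, activation="relu")(noisy)
    decoded = tf.keras.layers.Dense(signal_len, activation="linear")(encoded)

    dae = tf.keras.Model(inputs, decoded)
    dae.compile(optimizer="adam", loss="mse")
    # dae.fit(clean_signals, clean_signals, ...)  # reconstruct clean signals

    # The cleaned/encoded representation for the downstream classifier:
    encoder = tf.keras.Model(inputs, encoded)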