How can I train tensorflow deeplab model? - python

I need to train the TensorFlow DeepLab model on my shoes dataset. Then I will use this model to remove the background from shoe images. How can I train it? Could you explain it step by step? Do you have any example for this situation?

You will need to read some parts of the DeepLab code:
1. Download the repo.
2. Put your data into TFRecords in the proper format. Use the scripts in https://github.com/tensorflow/models/tree/master/research/deeplab/datasets to download and generate the example datasets, then prepare an analogous script for your shoes dataset (a sketch of the record layout follows this list).
3. Register your dataset in the DeepLab source file https://github.com/tensorflow/models/blob/master/research/deeplab/datasets/data_generator.py, adding its info in the same format as the example datasets (see the second sketch below).
4. Check the architecture flags in https://github.com/tensorflow/models/blob/master/research/deeplab/common.py.
5. Check the script-specific flags, then train, export, evaluate, or visualize using train.py, export_model.py, eval.py, and vis.py in https://github.com/tensorflow/models/tree/master/research/deeplab.
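
For step 2, here is a minimal sketch of packing one image/mask pair into a tf.train.Example. The key names mirror what the example conversion scripts (build_data.py in the datasets folder) write, but verify them against the repo; the function and argument names here are my own placeholders:

import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def shoe_seg_example(image_jpeg, mask_png, filename, height, width):
    # image_jpeg / mask_png are the already-encoded bytes of the photo
    # and its per-pixel class mask.
    return tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes_feature(image_jpeg),
        'image/filename': _bytes_feature(filename.encode()),
        'image/format': _bytes_feature(b'jpeg'),
        'image/height': _int64_feature(height),
        'image/width': _int64_feature(width),
        'image/segmentation/class/encoded': _bytes_feature(mask_png),
        'image/segmentation/class/format': _bytes_feature(b'png'),
    }))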
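
For step 3, the registration inside data_generator.py could look like this, following the pattern of the datasets already defined there (DatasetDescriptor and the registry dict exist in that file; the split sizes and class count below are made-up placeholders for a shoes dataset):

_SHOES_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        'train': 4000,  # number of images in your train TFRecords
        'val': 500,     # number of images in your val TFRecords
    },
    num_classes=2,      # background + shoe
    ignore_label=255,   # pixel value to ignore in the loss
)

_DATASETS_INFORMATION = {
    'cityscapes': _CITYSCAPES_INFORMATION,
    'pascal_voc_seg': _PASCAL_VOC_SEG_INFORMATION,
    'ade20k': _ADE20K_INFORMATION,
    'shoes': _SHOES_INFORMATION,  # new entry
}

You can then select the new dataset by name through the dataset flag of train.py and the other scripts.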

Related

How to use OpenImages dataset to train binary model in Keras

I am trying to use the Open Images dataset to train a binary CNN model (Orange vs. Not Orange).
I used the OIDv4 toolkit to download images of a few classes, both for train and test.
Now I'm stuck on how to convert the multiclass layout in each directory to a binary one.
I believe I need some tool to rename the subfolders (= classes).
I've since succeeded using the os and shutil packages to manipulate the directories as needed.
Thanks.
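
For reference, a minimal sketch of the kind of os/shutil reshuffle described above: collapse every class folder except Orange into a single Not_Orange folder so the tree becomes binary. The paths and class names are assumptions, and it assumes image file names are unique across classes:

import os
import shutil

root = 'OID/Dataset/train'  # hypothetical OIDv4 toolkit output directory
binary_dir = os.path.join(root, 'Not_Orange')
os.makedirs(binary_dir, exist_ok=True)

for class_name in os.listdir(root):
    class_dir = os.path.join(root, class_name)
    if class_name in ('Orange', 'Not_Orange') or not os.path.isdir(class_dir):
        continue
    for fname in os.listdir(class_dir):
        src = os.path.join(class_dir, fname)
        if os.path.isfile(src):
            shutil.move(src, binary_dir)  # pool images into the negative class
    shutil.rmtree(class_dir)  # drop the emptied folder (and any annotation leftovers)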

Is it possible to create labels.txt manually?

I recently converted my model to TensorFlow Lite, but I only got the .tflite file and not a labels.txt for my Android project. So is it possible to create my own labels.txt using the classes I used for classification? If not, how can I generate labels.txt?
You should be able to generate and use your own labels.txt. The file needs to list the label names in the order you provided them during training, one name per line.
After installing the TFLite Model Maker library, run the following code, pointing it at your classification dataset:
# Older releases of TFLite Model Maker expose this API (pip install tflite-model-maker)
from tflite_model_maker import image_classifier
from tflite_model_maker import ImageClassifierDataLoader
data = ImageClassifierDataLoader.from_folder('folder/')  # one subfolder per class
train_data, test_data = data.split(0.8)                  # 80/20 train/test split
model = image_classifier.create(train_data)              # fine-tunes a default backbone
loss, accuracy = model.evaluate(test_data)
model.export('image_classifier.tflite', 'imageLabels.txt')  # writes model + labels file
When you run it in Colab or locally, the labels file is generated automatically with the categories taken from the names of the subfolders.
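And if you would rather write the file by hand, it really is just one class name per line, in training order; a hypothetical three-class example:

labels = ['boot', 'sandal', 'sneaker']  # your classes, in the order used in training
with open('labels.txt', 'w') as f:
    f.write('\n'.join(labels))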

Tensorflow Object Detection: training from scratch using a .h5 (hdf5) file

I need to train a CNN from scratch on the COCO dataset with a specific configuration: https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/embedded_ssd_mobilenet_v1_coco.config
So I installed the TF Object Detection API and downloaded the COCO dataset. However, the dataset comes as a .h5 file.
Is it possible to run the training with this kind of file, or do I need to convert it back into images somehow? If it is possible, what would the command be?
PS: I was not able to find a pre-trained model with that config, which is why I need to train a CNN from scratch.
My suggestion would be to convert the .hdf5 file to a .tfrecord file; you can find examples of how to do this here. A rough sketch of the idea follows.
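
This is only an illustration, not the Object Detection API's own tooling: it assumes the .h5 file exposes an 'images' dataset of uint8 arrays (adjust the key to your file's layout), and a real detection record would also need the bounding-box and class features described in the API's dataset documentation:

import h5py
import tensorflow as tf

def h5_to_tfrecord(h5_path, tfrecord_path):
    with h5py.File(h5_path, 'r') as f, tf.io.TFRecordWriter(tfrecord_path) as writer:
        for image in f['images']:  # assumed key; iterates raw HxWx3 uint8 arrays
            encoded = tf.io.encode_jpeg(image).numpy()
            example = tf.train.Example(features=tf.train.Features(feature={
                'image/encoded': tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[encoded])),
                'image/format': tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[b'jpeg'])),
            }))
            writer.write(example.SerializeToString())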

Tensorflow: Fine tune Inception model

For a few days I have been following the instructions here: https://github.com/tensorflow/models/tree/master/inception
for fine-tuning the Inception model. The problem is that my dataset is huge, so converting it to the TFRecord format would fill my entire hard disk. Is there a way of fine-tuning without using this format? Thanks!
Fine-tuning is independent of the data format, so you're fine there. TFRecords improve training and evaluation speed; they shouldn't affect the number of iterations or epochs needed, nor the final classification accuracy.
You can train any model without converting your data to TFRecords. There is a great gist here that fine-tunes VGG by reading directly from JPEG files; you can swap the slim architecture for the Inception one and you should be fine!
In that configuration your dataset must be divided into train and test folders, with classes as subfolders, but you can change this to whatever you want.
I've not experienced any big difference in speed compared to the code that uses TFRecords. A sketch of the same idea with the modern tf.keras API follows.
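
Not the gist's TF-Slim code, but a recent-TF-2.x equivalent that reads JPEGs straight from class-named subfolders with no TFRecord step (the 'train/' path, image size, and epoch count are assumptions matching the layout above):

import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    'train/', image_size=(299, 299), batch_size=32)
num_classes = len(train_ds.class_names)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights='imagenet', pooling='avg')
base.trainable = False  # freeze the backbone; train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Inception expects [-1, 1]
    base,
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_ds, epochs=5)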

Train Inception from Scratch in TensorFlow

I wanted to train the Inception model as shown in the TensorFlow GitHub tutorial,
except I wanted to use a self-made dataset of TFRecord files:
bazel build inception/imagenet_train
bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 --train_dir=/tmp/imagenet_train --data_dir=/tmp/imagenet_data
I changed the data directory to the folder with my own TFRecord files.
Now I'm wondering whether I'm really training from scratch, or if this is the same thing as the "retraining the last layer" tutorial.
Yes, you are training from scratch: see the code. The retraining tutorial restores a pre-trained checkpoint and updates only the final layer, whereas imagenet_train initializes all weights randomly unless you explicitly point it at a checkpoint (the repo's fine-tuning path via --pretrained_model_checkpoint_path).
