I need to train a CNN from scratch on the COCO dataset with a specific configuration: https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/embedded_ssd_mobilenet_v1_coco.config
So I installed the TF Object Detection API and downloaded the COCO dataset. However, the dataset is in .h5 format.
Is it possible to run the training with this kind of file, or do I need to convert it to images somehow? If conversion is needed, what would the command be?
PS: I was not able to find a pre-trained model with that config, which is why I need to train a CNN from scratch.
My suggestion would be to convert the .hdf5 file to a .tfrecord file; you can find examples of how to do this here.
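As a rough illustration, here is a minimal sketch of such a conversion using h5py. The file name and the dataset keys ("images", "labels") are assumptions; inspect your .h5 file to find the actual names. Note also that the Object Detection API expects detection-specific fields (boxes, class names, etc.) in its tfrecords; the repo's dataset_tools directory (e.g. create_coco_tf_record.py) shows the full schema.

import h5py
import numpy as np
import tensorflow as tf

# Sketch only: "coco.h5", "images" and "labels" are assumed names --
# inspect your file (e.g. list(f.keys())) for the actual dataset keys.
with h5py.File("coco.h5", "r") as f, tf.io.TFRecordWriter("coco.tfrecord") as writer:
    for image, label in zip(f["images"], f["labels"]):
        # Encode the raw array as JPEG so the record stays compact.
        jpeg = tf.io.encode_jpeg(np.asarray(image, dtype=np.uint8)).numpy()
        example = tf.train.Example(features=tf.train.Features(feature={
            "image/encoded": tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[jpeg])),
            "image/class/label": tf.train.Feature(
                int64_list=tf.train.Int64List(value=[int(label)])),
        }))
        writer.write(example.SerializeToString())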
I am trying to use the Open Images dataset to train a binary CNN model (Orange vs. Not Orange).
I used the OID v4 toolkit to download images of a few classes for both train and test.
Now I'm stuck on how to convert the multi-class layout in each directory to a binary one.
I believe I need some tool to rename the subfolders (= classes).
I succeeded by using the os and shutil packages to manipulate the directories as required, as sketched below.
Thanks.
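A minimal sketch of that approach, assuming the toolkit's usual train/<class_name>/*.jpg layout; the paths and the "Orange"/"not_orange" names are illustrative:

import os
import shutil

SRC, DST = "OID/Dataset/train", "binary/train"  # hypothetical paths
POSITIVE = "Orange"  # everything else is collapsed into "not_orange"

for class_name in os.listdir(SRC):
    src_dir = os.path.join(SRC, class_name)
    if not os.path.isdir(src_dir):
        continue
    target = POSITIVE if class_name == POSITIVE else "not_orange"
    dst_dir = os.path.join(DST, target)
    os.makedirs(dst_dir, exist_ok=True)
    for fname in os.listdir(src_dir):
        # Prefix with the original class to avoid file-name collisions.
        shutil.copy(os.path.join(src_dir, fname),
                    os.path.join(dst_dir, f"{class_name}_{fname}"))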
I trained a darknet53.weights model for image classification on my own data in Darknet.
(This isn't a YOLOv3 model.)
Is there a way to convert a darknet53.weights file to a PyTorch .pt model?
I tried various scripts from GitHub and elsewhere, but all of them only convert YOLOv3 weights files to PyTorch .pt models.
I want to compare the accuracy of the darknet53 model trained with Darknet against other image classification models built with PyTorch.
Initially, I tried to build a darknet53 model with PyTorch, but that didn't work, so I created the darknet53 model with Darknet instead.
If anyone knows a good way, please let me know.
Thanks.
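One way to sidestep the YOLOv3-only converters is to parse the .weights file directly: Darknet stores a small header followed by raw float32 tensors in a fixed per-layer order (for conv+BN blocks: BN biases, BN weights, running mean, running variance, then conv weights; for a conv without BN: biases, then weights). A minimal sketch, assuming you have a PyTorch darknet53 definition whose Conv2d/BatchNorm2d modules appear in the same order as the blocks in the .cfg file; all names here are illustrative:

import numpy as np
import torch
import torch.nn as nn

def load_darknet_weights(model: nn.Module, weights_path: str):
    """Copy tensors from a Darknet .weights file into a PyTorch model.

    Assumes the model's Conv2d/BatchNorm2d modules are registered in the
    same order as the [convolutional] blocks in the Darknet .cfg file.
    """
    with open(weights_path, "rb") as f:
        # Header: major, minor, revision, then an images-seen counter
        # whose width depends on the format version.
        major, minor, _ = np.fromfile(f, dtype=np.int32, count=3)
        seen_dtype = np.int64 if major * 10 + minor >= 2 else np.int32
        np.fromfile(f, dtype=seen_dtype, count=1)
        weights = np.fromfile(f, dtype=np.float32)  # rest of the file

    ptr = 0

    def take(param):
        nonlocal ptr
        n = param.numel()
        param.data.copy_(torch.from_numpy(weights[ptr:ptr + n]).view_as(param))
        ptr += n

    modules = list(model.modules())
    for i, m in enumerate(modules):
        if not isinstance(m, nn.Conv2d):
            continue
        nxt = modules[i + 1] if i + 1 < len(modules) else None
        if isinstance(nxt, nn.BatchNorm2d):
            # Darknet order: bn bias, bn weight, running mean, running var.
            take(nxt.bias)
            take(nxt.weight)
            take(nxt.running_mean)
            take(nxt.running_var)
        else:
            take(m.bias)  # a conv without BN stores its bias first
        take(m.weight)  # conv weights come last
    assert ptr == len(weights), "model layout does not match the file"

After loading, torch.save(model.state_dict(), "darknet53.pt") gives you the .pt file to compare against your other PyTorch models.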
I want to fine-tune the existing OpenCV DNN face detector on a database of face images that I own. I have the opencv_face_detector.pbtxt and opencv_face_detector_uint8.pb TensorFlow files provided by OpenCV. I wonder whether, based on these files, there is any way to fit the model to my data. So far I haven't managed to find any TensorFlow training script for this model in OpenCV's git repository; I only know that the model is an SSD with a ResNet-10 backbone. I am also not sure, from what I have read online, whether I can resume training from a .pb file. Are you aware of any scripts defining the model that could be used for training? Would the pbtxt and pb files be enough to continue training on new data?
Also, I noticed that there is a repository containing a Caffe version of this model: https://github.com/weiliu89/caffe/tree/ssd. Although I have never worked with Caffe before, would it be possible (or easier) to use the existing weights (the Caffe model and .prototxt files are also available in OpenCV's GitHub) and fit the model to my dataset?
I don't see a way to do this in OpenCV, but I think you'd be able to load the model into TensorFlow and use model.fit() to retrain.
The usual advice about transfer learning applies: you'd probably want to freeze most of the early layers and retrain only the last one or two. A slow learning rate would be advisable as well; see the sketch below.
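As a rough illustration of that freeze-and-retrain pattern in Keras (the OpenCV .pb graph is not directly trainable, so MobileNetV2 stands in here as a hypothetical backbone; all names are illustrative):

import tensorflow as tf

# Stand-in backbone; the real face-detector graph would first have to be
# rebuilt as a trainable TF/Keras model.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained early layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new task-specific head
])

# A slow learning rate avoids destroying the pre-trained features.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)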
I am fairly new to Keras and I am trying transfer learning here:
https://www.tensorflow.org/tutorials/images/transfer_learning
However, my dataset is not binary, and I have it as a tfrecord file, which I can read in TensorFlow. I do not want to feed the images directly as input to the network, since the input comes from the pre-trained model. How can I pass the images and labels to the ImageDataGenerator class in Keras?
For anyone who may run into this issue in the future: if the pre-training setup is all correct, you can use the tf.data API to read and prepare the images, and the resulting (image, label) dataset can be fed to your model's .fit() method, as sketched below.
Have a look at this great post to get familiar with reading tfrecord files:
https://medium.com/@moritzkrger/speeding-up-keras-with-tfrecord-datasets-5464f9836c36
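A minimal sketch of such a tf.data pipeline; the feature names ("image", "label") and the file name are assumptions that must match how your tfrecords were written:

import tensorflow as tf

# Hypothetical feature names -- adjust to match your tfrecord schema.
feature_spec = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse_example(proto):
    parsed = tf.io.parse_single_example(proto, feature_spec)
    image = tf.io.decode_jpeg(parsed["image"], channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image, parsed["label"]

dataset = (tf.data.TFRecordDataset("train.tfrecord")
           .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
           .shuffle(1024)
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))

# model.fit(dataset, epochs=10)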
I wanted to train the Inception model as shown in the TensorFlow GitHub tutorial,
except that I wanted to use a self-made dataset of TFRecord files.
bazel build inception/imagenet_train
bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 --train_dir=/tmp/imagenet_train --data_dir=/tmp/imagenet_data
I changed the data directory to the folder containing my own TFRecord files.
Now I'm wondering whether I'm really training from scratch, or whether this is the same thing as the "retraining the last layer" tutorial.
Yes, you are training from scratch: see the code.
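For contrast, fine-tuning from an existing checkpoint in that repo has to be requested explicitly; a hypothetical invocation (flag names as documented in the inception README, so treat the exact command as a sketch) would look like this. Since your command passes neither flag, all weights start from random initialization:

bazel-bin/inception/imagenet_train --num_gpus=1 --batch_size=32 --train_dir=/tmp/imagenet_train --data_dir=/tmp/imagenet_data --pretrained_model_checkpoint_path=/path/to/inception_v3.ckpt --fine_tune=True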