I am trying to learn about object detection models and the strategies they use, but I can't get my head around how YOLO or SSD does it using convolutional neural networks. Can anybody give some insight? How do they structure the CNNs? It would be wonderful if code (Python) could be provided.
I have a real-time problem aimed at detecting 9 objects. As far as I understand, YOLO has promising results on real-time object detection problems, so I am searching for good instructions on training a pre-trained YOLO model on my own custom dataset.
My dataset is already labeled, with bounding-box coordinates in .txt files in YOLO format. However, it is hard to find good instructions on the web about training YOLO on a custom dataset for your own object detection problem, since most instructions use generic datasets such as COCO or PASCAL, or are not detailed enough to implement the object detection model on your own data.
TL;DR
My question is: are there any handy instructions for implementing YOLO object detection on your own dataset? I am looking for frameworks to implement the YOLO model rather than the darknet C implementation, since I am more familiar with Python, so it would be perfect if you could point to a PyTorch or TensorFlow implementation.
It would be even more appreciated if you have already implemented YOLOv3/v4 on your own dataset with the help of instructions you found on the web and are willing to share them.
Thanks in advance.
For training purposes I would highly recommend AlexeyAB's repository, as it is highly optimised for accuracy and speed, although it is also written in C. As far as testing and deployment are concerned, you have a lot of options:
OpenCV's DNN Module: refer to this article.
Tensorflow Model
Pytorch Model
Of these, OpenCV's DNN implementation is the fastest for testing/inference.
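As a concrete starting point, here is a minimal inference sketch with OpenCV's DNN module, assuming you already have a darknet config and weights file from training. The yolov4.cfg/yolov4.weights file names, the 416x416 input size, and the 0.5/0.4 thresholds are placeholders to adjust for your own model:

import cv2
import numpy as np

# Load the darknet model; the file paths are placeholders for your own files.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)

img = cv2.imread("test.jpg")
h, w = img.shape[:2]

# Darknet expects RGB input scaled to [0, 1]; 416x416 must match your cfg.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences = [], []
for output in outputs:
    for detection in output:
        scores = detection[5:]  # class scores follow the 4 box coords + objectness
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            # Box coords are relative to the input; scale back to the image.
            cx, cy, bw, bh = detection[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)

# Non-maximum suppression drops overlapping duplicate boxes.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in np.array(keep).flatten():
    x, y, bw, bh = boxes[i]
    cv2.rectangle(img, (x, y), (x + bw, y + bh), (0, 255, 0), 2)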
I have Data A from accelerometer and gyroscope sensors like this
I want to create a pre-trained model to be used for classification with the LSTM method using Data A in Python. Is that possible? From what I have read, pre-training is used for image data with methods such as CNNs for classification. In addition, I tried to find data that has gone through a pre-training process but have not found any, so I doubt whether it is possible.
And if I do the classification using an LSTM, can I use Data A after it has gone through pre-training?
Is there a tutorial I can study? Any help would be very much appreciated.
I advise you to look at some HAR (human activity recognition) datasets; you can find plenty of examples on Kaggle that use an LSTM. The first one that comes to my mind is the excellent tutorial by Jason Brownlee: https://machinelearningmastery.com/how-to-develop-rnn-models-for-human-activity-recognition-time-series-classification/
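To make the idea concrete, here is a minimal sketch of such a model in Keras. The window length of 128 timesteps, the 6 channels (3 accelerometer + 3 gyroscope axes), and the 6 activity classes are assumptions; adjust them to your own data:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

# Assumed shapes: sliding windows of 128 timesteps, 6 sensor channels.
n_timesteps, n_features, n_classes = 128, 6, 6

model = Sequential([
    LSTM(100, input_shape=(n_timesteps, n_features)),
    Dropout(0.5),
    Dense(100, activation="relu"),
    Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data just to show the expected shapes; replace with your windows.
X = np.random.randn(32, n_timesteps, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=32)
model.fit(X, y, epochs=2, batch_size=16)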
I wish to know whether I can use an Inception or ResNet model to identify faces, and whether transfer learning and retraining are even worth considering for my task.
I just want to be able to identify faces but I am also curious whether I can retrain/optimize a pre-trained model for my task.
Or have I been reading of things wrong; do I need to get a pre-trained model that was designed for faces?
I have tried poking around with Inception and VGG16 but I have not trained them for faces. I am working on it but I want to know whether this is even viable or simply a waste of time. If I use transfer learning with FaceNet I think I'll be better off.
Transfer learning for facial recognition would be a great way to go ahead. And yes, transfer learning with FaceNet is a great idea.
Also, for transfer learning to work, it is not necessary that the model was initially pre-trained only on faces, as with FaceNet. A model pre-trained on ImageNet would also be pretty darn good! This is a very hot topic, so do not try to reinvent the wheel. There are many repositories that have already done this using transfer learning from the ImageNet dataset with ResNet50, with astonishingly good results.
Here is a link to one such repository:
https://github.com/loheden/face_recognition_with_siamese_network
Also note that siamese networks are a technique that is especially good for the facial recognition use case. The concept is really simple: take two images and compare their features. If the similarity of the features is above a set threshold, the two images are the same (the two faces match); otherwise the face is not recognized.
Here is a research paper on siamese networks for facial recognition.
Also, here is a two-part tutorial on how to implement the siamese network for facial recognition using transfer learning:
http://www.loheden.com/2018/07/face-recognition-with-siamese-network.html
http://www.loheden.com/2018/07/face-recognition-with-siamese-network_29.html
The above tutorial's code is in the first Github link I shared at the beginning of this answer.
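For orientation, here is a minimal sketch of the comparison idea, using an ImageNet-pretrained ResNet50 as the shared embedding network rather than the tutorial's trained model. The 224x224 input size and the 0.5 cosine-similarity threshold are illustrative assumptions, not taken from the tutorial:

import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

# Shared embedding network: ImageNet-pretrained ResNet50 without the
# classification head; global average pooling gives a 2048-d vector.
embedder = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def embed(image):
    """Map one 224x224 RGB image (numpy array) to an L2-normalised embedding."""
    x = preprocess_input(image[np.newaxis].astype("float32"))
    v = embedder.predict(x, verbose=0)[0]
    return v / np.linalg.norm(v)

def same_face(img_a, img_b, threshold=0.5):
    # Cosine similarity between the two embeddings; the 0.5 threshold
    # is a placeholder you would tune on a validation set.
    sim = float(np.dot(embed(img_a), embed(img_b)))
    return sim > threshold

# Usage: two aligned face crops, both resized to 224x224x3.
a = np.random.randint(0, 255, (224, 224, 3))
b = np.random.randint(0, 255, (224, 224, 3))
print(same_face(a, b))

In a real siamese setup you would fine-tune the shared network with a contrastive or triplet loss on labeled face pairs, as the tutorial above does, rather than comparing raw ImageNet embeddings.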
I'm currently trying to use the Object Detection framework from Google Tensorflow.
I have a trained model for something similar to MNIST. What is the easiest way to use this as the classification checkpoint?
As I currently understand it I can use a classification or an object detection checkpoint.
I just don't know how to use my classification checkpoint, as I think the structure of the network is specified by the type in
feature_extractor {
  type: "ssd_mobilenet_v2"
  ...
Do I have to provide my own model type using this?
All of the pretrained models have a lot of layers, while the model for my MNIST-like dataset has only 3 and was quite easy to train.
The overall goal is to detect math symbols on a white background with bounding boxes. The classification part was easy, but extending it to object detection seems too hard. Pretrained object detection models that were trained on real-world images seem better than training from scratch, but are still pretty bad in general.
Any ideas appreciated!
I just started learning about machine learning recently and have a project where I have to develop a program for QR code localization so that a QR code can be detected and read at any angle of rotation. Development will be done in Python.
The plan is to gather various images of the QR codes at different angles with different backgrounds. From this I would like to create a dataset for training with neural networks and then testing.
The issue I'm having is that I can't figure out a correct feature design for the dataset, or how to identify the QR code in the images for feature processing. Would I use ground-truth images to isolate the QR code, or edge-magnitude maps? Feature design for images confuses me.
Any help with this would be amazing. Thanks for your time.
You mention that you want to train neural networks. Instead of starting with your problem, start with a beginner example.
Start with the MNIST example for deep learning.
Train your neural network on the notMNIST dataset that is used in the Udacity Deep Learning course.
In these two examples, you will see that you do not design features yourself; the neural network finds the right features on its own. The easiest solution would be to use the same technique for the QR codes in your dataset.
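To illustrate that point, here is a minimal sketch of an MNIST convolutional classifier in Keras; the layer sizes are arbitrary choices, and nothing in it is QR-code-specific, which is exactly the idea. Swapping in your own labeled QR-code images would be the next step:

import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST: 28x28 grayscale digits, 10 classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # add channel dim, scale to [0, 1]
x_test = x_test[..., None] / 255.0

# The conv layers learn the features; no hand-designed features needed.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))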