I would like to know whether I can use an Inception or ResNet model to identify faces, and whether transfer learning and retraining are even feasible for my task.
I just want to be able to identify faces, but I am also curious whether I can retrain/optimize a pre-trained model for my task.
Or have I been reading things wrong; do I need a pre-trained model that was designed specifically for faces?
I have tried poking around with Inception and VGG16, but I have not trained them on faces. I am working on it, but I want to know whether this is even viable or simply a waste of time. My hunch is that I would be better off using transfer learning with FaceNet.
Transfer learning for face recognition is a great way to go, and yes, transfer learning with FaceNet is a good idea.
Note that for transfer learning to work, the model does not have to be pre-trained only on faces, as FaceNet was. A model pre-trained on ImageNet would also be pretty darn good! This is a very hot topic, so do not try to reinvent the wheel: there are many repositories that have already done this using transfer learning from the ImageNet dataset with ResNet50, with astonishingly good results.
Here is a link to one such repository:
https://github.com/loheden/face_recognition_with_siamese_network
Also note that siamese networks are a technique that is especially well suited to the face recognition use case. The concept is really simple: take two images and compare their features. If the similarity of the features is above a set threshold, the two images match (the two faces are the same); otherwise they do not (face not recognized).
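Here is a minimal sketch of that comparison step, assuming an ImageNet-pretrained ResNet50 as a stand-in shared embedding network. A real siamese setup would fine-tune the embedder on face pairs with a contrastive or triplet loss, and the 0.5 threshold is just a placeholder:

```python
# Sketch of the siamese idea: embed two face crops with one shared network
# and compare the embeddings against a distance threshold.
# ResNet50/ImageNet is a stand-in; fine-tune on face pairs for real use.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image

embedder = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def embed(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return embedder.predict(x)[0]

def same_face(path_a, path_b, threshold=0.5):
    a, b = embed(path_a), embed(path_b)
    # Cosine distance between the two embeddings; small distance = similar.
    dist = 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return dist < threshold  # below threshold -> same face
```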
Here is a research paper on siamese networks for facial recognition.
Also, here is a two-part tutorial on how to implement the siamese network for facial recognition using transfer learning:
http://www.loheden.com/2018/07/face-recognition-with-siamese-network.html
http://www.loheden.com/2018/07/face-recognition-with-siamese-network_29.html
The above tutorial's code is in the first GitHub link I shared at the beginning of this answer.
I have a real-time problem in which I aim to detect 9 objects. As far as I understand, YOLO has promising results on real-time object detection problems, so I am searching for good instructions on training a pre-trained YOLO model on my own custom dataset.
My dataset is already labeled, with bounding box coordinates in .txt files in YOLO format. However, it is hard to find good instructions on the web about training YOLO on a custom dataset for your own object detection problem, since most instructions use generic datasets such as COCO or PASCAL, or are not detailed enough to apply to one's own dataset.
TL;DR
My question is: are there any handy instructions for implementing YOLO object detection on one's own dataset? I am looking for frameworks for implementing the YOLO model rather than the Darknet C implementation, since I am more familiar with Python, so it would be perfect if you could point to a PyTorch or TensorFlow implementation.
It would be even more appreciated if you have already implemented YOLOv3/v4 on your own dataset with the help of instructions you found on the web and are willing to share them.
Thanks in advance.
For training purposes I would highly recommend AlexeyAB's repository, as it is highly optimised for accuracy and speed, although it is also written in C. As far as testing and deployment are concerned, you have a lot of options:
OpenCV's DNN module: refer to this article.
TensorFlow model
PyTorch model
Of these, OpenCV's DNN implementation is the fastest for testing/inference; a minimal sketch of that option follows.
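As a rough illustration of the OpenCV DNN option: the cfg/weights file names, input size, and thresholds below are placeholders for whatever your own training run produces:

```python
# Sketch: run a trained YOLO (Darknet) model with OpenCV's DNN module.
# "yolov4-custom.cfg" / "yolov4-custom.weights" are placeholders for the
# files produced by training on your own dataset.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4-custom.cfg", "yolov4-custom.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

img = cv2.imread("test.jpg")
class_ids, confidences, boxes = model.detect(img, confThreshold=0.5,
                                             nmsThreshold=0.4)
for cid, conf, box in zip(class_ids, confidences, boxes):
    x, y, w, h = box
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("out.jpg", img)
```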
I'd like to implement something like what the title describes, but I wonder whether it is technically possible.
I know that it is possible to recognize images with a CNN,
but I don't know whether the nipple area can be covered automatically.
If there is a library or any related information,
I would appreciate some advice.
CNNs are able to detect whatever you train them for, to varying degrees of accuracy. What you would need are a lot of training samples (i.e. ground-truth samples with the original image and the labeled image) with which to train your models, and then some new data on which to test the accuracy of your model. The point is, CNNs are not innately biased to learn a particular task; you have to tell them what to learn!
I can recommend the machine learning library Keras (https://keras.io/) if you plan to do some machine learning using CNNs, as it is pretty simple and beginner-friendly. Work through some of its CNN tutorials, which are quite good.
Essentially, you have what I can only assume is a pretty niche problem. The main issue will come down to how much data you have to train your model. CNNs need a lot of training data, especially for a problem like this, which is not simple. A way to make this simpler would be a model that detects the, ahem, area of interest and marks it as such on a per-pixel basis. Then a simple mask could be applied to the source image to censor it. This relates to image segmentation, and there are many academic papers on the topic.
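To make the masking step concrete, here is a small sketch; `mask` stands in for the per-pixel output of whatever segmentation model you end up training, so only the censoring step is shown:

```python
# Sketch: censor a region given a per-pixel mask from a (hypothetical)
# segmentation model. Only the masking/blurring step is shown here.
import cv2
import numpy as np

def censor(img, mask, threshold=0.5):
    # mask: float array in [0, 1], same height/width as img
    binary = (mask > threshold).astype(np.uint8)
    blurred = cv2.GaussianBlur(img, (51, 51), 0)
    # Keep original pixels where mask is 0, blurred pixels where mask is 1.
    return np.where(binary[..., None] == 1, blurred, img)
```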
I'm working on a project that requires recognizing only people in a video or a live camera stream. I'm currently using the TensorFlow Object Detection API with Python, and I've tried different pre-trained models and frozen inference graphs. I want to recognize only people, and maybe cars, so I don't need my neural network to recognize all 90 classes that come with the frozen inference graphs (based on MobileNet or R-CNN); this seems to slow down the process, and 89 of these 90 classes are not needed in my project. Do I have to train my own model, or is there a way to modify the inference graphs and the existing models? This is probably a noob question for some of you, but mind that I've worked with TensorFlow and machine learning for just one month.
Thanks in advance
Shrinking the last layer to output one or two classes is not likely to yield large speed-ups, because most of the computation is in the intermediate layers. You could shrink the intermediate layers, but this would result in poorer accuracy.
Yes, you have to train your own model. Let's look briefly at some ways to do this.
OPTION 1. If you want to transfer as much knowledge as possible, freeze all the CNN layers. Then change the number of detected classes by changing the dimension of the classifier (the dense layers); the classifier is the last part of the CNN architecture. Now retrain only the classifier (see the sketch after these options).
OPTION 2. If you want to transfer knowledge only for the first layers of the CNN (for example, freeze the first 2-3 CNN layers), change the number of detected classes via the classifier dimension as above, and then retrain the rest of the CNN layers together with the classifier.
OPTION 3. If you want to retrain the whole CNN, change the number of detected classes via the classifier dimension and then retrain the entire CNN together with the classifier.
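As a rough Keras illustration of OPTION 1 (the base model choice and layer sizes here are arbitrary; for a detection model the surgery happens in the detection head instead, but the freezing idea is the same):

```python
# Sketch of OPTION 1: freeze the convolutional base, replace and retrain
# only a new classifier head. Shown for classification; the same idea
# carries over to detection backbones.
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import layers, models

base = MobileNetV2(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(224, 224, 3))
base.trainable = False  # OPTION 1: freeze all CNN layers

model = models.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),  # e.g. person / car
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# For OPTION 2, instead freeze only the first few base layers and leave
# the rest trainable before compiling.
```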
Generally, the TensorFlow Object Detection API is a good start for beginners! You can see more detail about the whole process here, with extra explanation here.
I recently started learning about machine learning and have a project where I have to develop a program for QR code localization, so that a QR code can be detected and read at any angle of rotation. Development will be done in Python.
The plan is to gather various images of the QR codes at different angles with different backgrounds. From this I would like to create a dataset for training with neural networks and then testing.
The issue that I'm having is that I can't seem to figure out a correct feature design for the dataset and how to identify the QR code from the images for feature processing. Would I use ground-truth images to isolate the QR code or edge magnitude maps? Feature design for images seems to confuse me.
Any help with this would be amazing. Thanks for your time.
You mention that you want to train neural networks. Instead of starting with your problem, start with a beginner example.
Start with the MNIST example for deep learning.
Then train your neural network on the notMNIST dataset that is used in the Udacity Deep Learning course.
In these two examples, you will see that you do not design features yourself; the neural network finds the right features on its own. The easiest solution would be to use the same technique for the QR codes in your dataset.
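For reference, a minimal Keras version of the MNIST example might look like this (the architecture and epoch count are arbitrary):

```python
# Minimal CNN on MNIST: note that no features are hand-designed;
# the convolutional layer learns them from the raw pixels.
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0
x_test = x_test.reshape(-1, 28, 28, 1) / 255.0

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # 10 digit classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```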
I am currently working on a project for identifying a person's mood/emotions.
As the first step, we are working on Python code for image recognition, detection, and tracking.
I went through the various approaches to this problem and found:
1) The Haar cascade method (fast, but no scope for recognition or reading expressions).
2) Neural networks (great at image recognition, i.e. at details such as smiles/anger...).
I am confused about neural networks, i.e. about the approach.
We can first use a Haar cascade to detect the faces with ease (really fast), then use either Canny edge detection or cropping to cut out the face region.
After that is done, I have no clue how to proceed.
This is my idea of it:
continue using the Haar cascade method to detect the features of the face, like the eyes, nose, cheeks, and lips;
then find the distances between them to compute ratios, which we can then feed into a neural network.
Different internal layers would be used to detect different features.
We could use a gradient-based method to optimize the cost by altering the weights of the synapses.
How good is this approach, and is there a better way to do it?
For example, we could use Canny edge detection to find the edges, build a new matrix just out of the edges, and then train on that.
I don't know, I am really confused.
Anyway, thanks in advance for all answers.
Image processing libraries such as scikit-image or OpenCV are a good place to start. For example, here is an example of Canny edge detection in OpenCV.
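A minimal version of that example (the file name and thresholds are placeholders):

```python
# Quick example of Canny edge detection with OpenCV.
import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, threshold1=100, threshold2=200)  # hysteresis thresholds
cv2.imwrite("edges.jpg", edges)
```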
Regarding neural networks, as lejlot pointed out, you've got to ask yourself how much you want to build from scratch.
For an example of building your own neural network based on some parameters (which you would have to define for your facial features), I suggest reading through A Neural Network in 11 Lines of Python, which illustrates some of the problems you might face (especially part 2, which also covers image processing).
What you seem to need are Convolutional Neural Networks (see http://cs231n.github.io/convolutional-networks/ to learn about them).
Convolutional Neural Networks (CNNs for short) are a kind of neural network that learns to extract visual features from an image and to relate those features in order to recognize what is in the image. So you don't need to detect all the features yourself: just give a CNN a bunch of labeled face pictures and it will learn to identify the mood of the person.
What you can do is detect the face in every picture (OpenCV is good enough at detecting faces) and then crop and align each face so all the faces have the same size. Then feed the CNN all the faces, and it will gradually learn to recognize a person's emotions.
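A small sketch of that detect-crop-resize step, using OpenCV's bundled Haar cascade (the file name and crop size are placeholders):

```python
# Sketch of the preprocessing step: detect faces with OpenCV's Haar cascade,
# then crop and resize each face to a fixed size before feeding a CNN.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

crops = []
for (x, y, w, h) in faces:
    face = cv2.resize(img[y:y + h, x:x + w], (224, 224))  # uniform CNN input
    crops.append(face)
```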