I am trying to do face recognition with the Python face_recognition library.
I have tried the code below on the image below.
Code:
import face_recognition

# Load the image and find all face locations in it
image = face_recognition.load_image_file("img/bill.jpeg")
face_locations = face_recognition.face_locations(image)
print(len(face_locations))
For the image below I am getting a total face count of 6.
Image:
But when I try a cartoon image, I get the output: 0.
How can I recognise cartoon faces with face_recognition?
Sorry to say, but if the face recognition is good it should not recognize cartoon faces: it is designed to recognize human faces, and therefore it should only tell you how many human faces are in the image; otherwise it's a badly designed algorithm. If you want a machine-learning algorithm to recognize cartoon faces, you would have to train it yourself for that specific task.
I did a quick search on Google, and the first thing I found was an article named "Cartoon Face Recognition: A Benchmark Dataset" at https://arxiv.org/pdf/1907.13394.pdf . Maybe you can find an already existing machine-learning algorithm that has been trained to recognize cartoon faces.
Hope this helped and I hope you find what you're looking for.
--------------------------------EDIT--------------------------------
I found these two Git repositories; they could be worth looking into further:
https://github.com/srvCodes/Cartoon-Face-Detection-and-Recognition
https://github.com/hako/dissertation
The last link is about the emotions of cartoon characters.
Short answer: You need to train a new model to detect cartoon characters.
Long explanation:
Cartoon characters have facial features that differ from normal human faces: the face edges are smooth, the eyes perfectly round, the mouth smoothly shaped, and the overall face structure cartoonish.
The pre-trained model that you are using doesn't know how to identify these structures; it hasn't seen such images during training.
A model detects a face using many filters; these filters detect lines and shapes in an image. If all these filters combine and give a high output, then there is a face at that location.
So you either have to look for models that are trained on cartoons, or label images and train a model yourself, as in the sketch below.
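If you go the train-it-yourself route, dlib's simple object detector is one low-effort option. The sketch below follows dlib's train_object_detector.py example; the file names and the XML annotation file (which you would create with a labeling tool such as dlib's imglab) are assumptions, not something from this answer.

import dlib

# Training options for dlib's HOG-based simple object detector
options = dlib.simple_object_detector_training_options()
options.add_left_right_image_flips = True  # doubles the data by mirroring
options.C = 5                              # SVM regularization; tune on held-out images
options.num_threads = 4
options.be_verbose = True

# "cartoon_faces.xml" is a hypothetical annotation file listing images and face boxes
dlib.train_simple_object_detector("cartoon_faces.xml", "cartoon_detector.svm", options)

# Later, load the trained detector and run it on a new image
detector = dlib.simple_object_detector("cartoon_detector.svm")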
Related
I have 3 images of different objects: a smartphone, a shirt, and a packet of pasta.
I want to perform recognition of each object in any image containing one of these objects.
For example, if the same phone appears in a picture, I want to see the phone with a bounding box drawn in that picture. If the phone is different, nothing should be drawn.
I first tried to perform object recognition using a neural network like Mask R-CNN with Python and TensorFlow. But I realized that I don't have a huge training dataset, only my 3 images. Neural network algorithms seem suited to recognizing concepts like a dog, a smartphone, or a landscape, but not a particular dog, a specific smartphone, or a specific landscape.
To get to the point: if any input picture contains the same smartphone, the same shirt, or the same packet of pasta, I want the program to detect that.
What algorithms are best suited to perform this recognition?
Try using the COCO dataset. Since the COCO weights have already been trained on thousands of items and images, you should be able to just run Mask R-CNN's splash feature to help with detection.
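As a rough illustration of running a COCO-pretrained Mask R-CNN, here is a minimal sketch using torchvision rather than the TensorFlow Mask_RCNN repo's splash script; the file path and the 0.5 score threshold are placeholders.

import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# Load a Mask R-CNN model with weights pretrained on COCO
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

img = Image.open("photo.jpg").convert("RGB")  # placeholder path
with torch.no_grad():
    outputs = model([F.to_tensor(img)])[0]

# Keep only confident detections; boxes/labels/scores describe what was found
keep = outputs["scores"] > 0.5
print(outputs["boxes"][keep], outputs["labels"][keep])

Note that COCO weights detect object categories (a phone, a shirt), not your specific phone; telling the exact instance apart still needs extra work, such as the training route below.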
Worst case scenario, if you want to train on your own dataset, just find a lot of photos online of the objects you want to detect, annotate them, then train.
I am trying to do handwritten character recognition using TensorFlow in Google Colab.
I have trained and tested a model with an accuracy of 91%.
I tried it on the image given in the tutorial, and it worked correctly; that image was resized to 28×28.
When I try it on my own input image, it predicts wrong results such as 2 or 3, even though my input image is of the digit 6.
The problem may be in the image operations performed before the image is passed to the model.
(Further on, I also want to pass such images in for real-time recognition.)
I am resizing and inverting the image to make it compatible with my trained labels.
An OpenCV input image uses the opposite convention to my training labels: the matrix represents black as 0 and white as 255, whereas the MNIST-style training digits are white on black.
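Here is a minimal sketch of the preprocessing I mean (the file path and the model variable are placeholders from my setup):

import cv2

# Placeholder path: a photo of a handwritten digit
img = cv2.imread("digit.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (28, 28))
img = 255 - img                      # invert: OpenCV is black=0 / white=255, MNIST is white-on-black
img = img.astype("float32") / 255.0  # scale to [0, 1] as during training
img = img.reshape(1, 28, 28, 1)      # add batch and channel dimensions
# prediction = model.predict(img)    # 'model' is the classifier trained in the notebook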
My GitHub Jupyter notebook follows the tutorial from DigitalOcean's blog.
How can I upload an image taken from a phone/webcam and recognize characters from that image?
Where am I making mistakes in processing the image?
Further, I want to pass that image into a project for real-time recognition of characters.
The testing images are:
Did you know that the MNIST dataset images have a specific padding? Each digit is size-normalized to fit a 20×20 box and then centered within the 28×28 image.
So appropriate image preprocessing is needed before real-time recognition.
This is a useful article about that:
https://link.medium.com/0ySCmyMpzU
And the following is my project, a simple MNIST game:
https://github.com/mym0404/Math-Writer
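As a rough sketch of that MNIST-style padding (fit the digit into a 20×20 box, then center it on a 28×28 canvas; the center-of-mass shift used by the real dataset is omitted, and the file path is a placeholder):

import cv2
import numpy as np

# Assumes a white digit on a black background, e.g. after thresholding and inverting
digit = cv2.imread("digit.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
h, w = digit.shape
scale = 20.0 / max(h, w)  # fit the larger side into 20 pixels
digit = cv2.resize(digit, (max(1, int(w * scale)), max(1, int(h * scale))))

canvas = np.zeros((28, 28), dtype=np.uint8)
y0 = (28 - digit.shape[0]) // 2  # center the resized digit on the canvas
x0 = (28 - digit.shape[1]) // 2
canvas[y0:y0 + digit.shape[0], x0:x0 + digit.shape[1]] = digit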
I am running the code provided by Adrian at this link: https://www.pyimagesearch.com/2018/06/25/raspberry-pi-face-recognition/#comment-473194.
I have a dataset containing 6 face images of each of 3 people. I ran this code, and it works fine when detecting my face and my friend's face, but it has trouble detecting the third person's face: it detects it as my face. Does this algorithm work only for binary classification? Will the accuracy improve if I make the dataset bigger?
Increasing your dataset will certainly increase the accuracy. Also, OpenCV's Haar cascade is not a good solution if you want to detect a face in real time accurately, because when the subject is moving the cascade classifier produces misclassifications.
However, you can use dlib's detector to make your processing more robust, or you can use YOLO for face detection.
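For illustration, a minimal sketch of dlib's HOG-based frontal face detector (the file path is a placeholder):

import cv2
import dlib

detector = dlib.get_frontal_face_detector()   # HOG + linear SVM detector

img = cv2.imread("frame.jpg")                 # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector(gray, 1)                     # upsample once to catch smaller faces

for rect in faces:
    cv2.rectangle(img, (rect.left(), rect.top()),
                  (rect.right(), rect.bottom()), (0, 255, 0), 2)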
I have been working on the problem of recognizing faces from given caricatures, using the IIIT-CFW dataset.
So far, I have tried using Python's dlib library for detecting landmark points on the cartoon faces. However, it doesn't seem to work well on faces other than real human ones.
Is there an alternative? Any suggestions regarding face alignment and landmark detection would be appreciated.
I would train a face landmarking model on that dataset using dlib's tools. dlib comes with example programs showing you how to train new models (e.g. http://dlib.net/train_shape_predictor.py.html).
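A rough sketch following that example; the XML annotation file (landmarks you would label yourself, e.g. with dlib's imglab) and the parameter values are assumptions, not tuned for IIIT-CFW.

import dlib

# Training options from dlib's train_shape_predictor.py example
options = dlib.shape_predictor_training_options()
options.oversampling_amount = 300  # heavy oversampling helps small datasets
options.nu = 0.05                  # regularization strength
options.tree_depth = 2
options.be_verbose = True

# "cartoon_landmarks.xml" is a hypothetical file listing images, face boxes, and landmark points
dlib.train_shape_predictor("cartoon_landmarks.xml", "cartoon_predictor.dat", options)

predictor = dlib.shape_predictor("cartoon_predictor.dat")  # load the trained model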
I am currently working on a project for identifying the mood/emotions of a person.
As the first step, we are working on Python code for image recognition, detection, and tracking.
I went through the various approaches to this problem and found:
1) The Haar cascade method (fast, but no scope for recognition or reading expressions).
2) Neural networks (great at image recognition, i.e. details such as smile/anger...).
I am confused about the neural network approach.
We can first use a Haar cascade to detect the faces with ease (it's really fast), then use either Canny edge detection or cropping to cut out the face region.
After that is done, I have no clue how to proceed.
This is my idea of it:
continue using the Haar cascade method to detect facial features like the eyes, nose, cheeks, and lips,
then find the distances between them to compute ratios, which we can then use as input to a neural network.
Different internal layers would be used to detect different features.
We could use a differential (gradient-based) method to optimize the cost by altering the weights of the synapses.
How good is this approach, and is there a better way to do it?
For example, we could use Canny edge detection to find the edges, build a new matrix just out of the edges, and then train on that.
I don't know; I am really confused.
Anyway, thanks in advance for all answers.
Image processing libraries such as scikit-image or OpenCV are a good place to start. For example, here's what Canny edge detection looks like in OpenCV.
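A minimal Canny sketch (the file path and the two hysteresis thresholds are placeholders to tune for your images):

import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
edges = cv2.Canny(img, 100, 200)  # lower/upper hysteresis thresholds
cv2.imwrite("edges.jpg", edges)   # edge map: white edge pixels on black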
Regarding neural networks, as lejlot pointed out, you've got to ask yourself how much you want to build from scratch.
For an example of building your own neural network based on some parameters (which you'd have to define for your facial features), I suggest you read through "A Neural Network in 11 Lines of Python", which illustrates some of the problems you might face (especially part 2, where image processing comes in as well).
What you seem to need are Convolutional Neural Networks (check http://cs231n.github.io/convolutional-networks/ to learn about them).
Convolutional Neural Networks (CNNs for short) are a kind of neural net that learns to extract visual features from an image and to relate those features to recognize what is in the image. So you don't need to detect all the features yourself: just give a CNN a bunch of labeled face pictures and it will learn to identify the mood of the person.
What you can do is detect the face in every picture (OpenCV is good enough at detecting faces), then crop and align each face so all the faces have the same size. Then feed the CNN all the faces, and it will gradually learn to recognize the emotions of a person; a small sketch follows.
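An illustrative sketch only: a tiny Keras CNN for emotion classification on cropped, aligned faces. The 48×48 grayscale input size and the 7 emotion classes are assumptions borrowed from datasets like FER2013, not something from this answer.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),  # e.g. 7 emotion labels
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(face_images, emotion_labels, epochs=10)  # your labeled face crops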