I'm using dlib and MediaPipe to get facial landmarks around a face image. My question is about jittering. In MediaPipe, the researchers advise using a temporal filter such as the One Euro filter to reduce jittering. But I noticed that across two consecutive, nearly identical frames with no movement, the facial landmarks are not stable. So I looked at the detected bounding box around the face, and it is not stable either. Is stabilizing the detected bounding box before estimating the facial landmarks a common solution?
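Stabilizing the box with a temporal filter is indeed a common approach. A minimal sketch using an exponential moving average (a cruder alternative to the One Euro filter; the alpha value here is purely illustrative):

```python
class BoxSmoother:
    """Exponential moving average over (x, y, w, h) boxes.

    alpha closer to 1.0 trusts the new detection more (less smoothing,
    less lag); alpha closer to 0.0 smooths harder but lags behind motion.
    """

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.state = None

    def update(self, box):
        if self.state is None:
            # First frame: nothing to smooth against yet.
            self.state = tuple(float(v) for v in box)
        else:
            self.state = tuple(self.alpha * new + (1 - self.alpha) * old
                               for new, old in zip(box, self.state))
        return self.state
```

Feed the detector's raw box through `update()` each frame and pass the smoothed box to the landmark predictor. The One Euro filter improves on this by adapting alpha to the speed of motion, so it is usually the better choice when both jitter and lag matter.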
I am currently working on a Python implementation of Adrian Rosebrock's blink-detection-with-dlib blog post:
https://www.pyimagesearch.com/author/adrian/
Basically, I am using dlib's frontal face detector and passing the bounding box around the face to dlib's landmark detector as seen in this picture:
https://imgur.com/xvkfNeG
Sometimes dlib's frontal face detector doesn't find a face, but other face detectors, like OpenCV's, do. Adrian's blog made it sound like I could use OpenCV's frontal face detector and pass its bounding box along instead.
However, when I do this, the landmark detector can't find the person's eyes correctly, as seen in this photo:
https://imgur.com/3eAFFsQ
Is there a way I could use an alternative face detector with dlib's landmark detector? Or am I stuck with dlib's frontal face detector because the bounding box passed by a different face detector will be ever so slightly off for dlib's landmark detector?
Thank you for your time!
Checking the images you provided, it looks like you are not passing the correct parameters to the plotting method. The results look correct, just upside-down.
You can use your own face detector. You just have to use the dlib.rectangle() function. First, get the bounding boxes from your face detector, then map each (x, y, w, h) box to dlib.rectangle(x, y, x + w, y + h) — note that dlib.rectangle takes corner coordinates (left, top, right, bottom), not a width and height.
Then you can pass the bounding boxes from this list to predictor(img, rect).
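A minimal sketch of that mapping (the model filename in the comment is the standard dlib 68-point predictor, which you must download separately; the helper name is my own):

```python
def cv_box_to_dlib_rect_args(x, y, w, h):
    """Convert an OpenCV-style (x, y, w, h) box to the
    (left, top, right, bottom) corner arguments dlib.rectangle expects."""
    return int(x), int(y), int(x + w), int(y + h)


# Usage with dlib (sketch, assuming the predictor file is on disk):
#
#   import dlib
#   predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
#   rect = dlib.rectangle(*cv_box_to_dlib_rect_args(x, y, w, h))
#   shape = predictor(gray_image, rect)   # 68 landmark points
```

The common pitfall is passing (x, y, w, h) straight into dlib.rectangle: the "rectangle" then has the wrong right/bottom corner, and the landmark fit ends up misplaced, which can look like the failure in the photo above.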
I want to detect a face in an image with low brightness. I'm using dlib for detecting the face, but the dlib detector is finding no face at all. I have the following code to detect faces in an image:
detector = dlib.get_frontal_face_detector()
faces = detector(image)
When I try to print the length of faces, it displays zero.
Can anybody help me? What should I do? Is there another way to detect faces in low-brightness images? Thanks.
Dlib's face detector is a very precise one, but as a cost it has low recall, especially when images are of poor quality and/or faces are small.
Try another face detector, like
Seeta https://github.com/seetaface/SeetaFaceEngine
Pico https://github.com/nenadmarkus/pico
or OpenCV
Those may provide detections, but false detections as well.
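Before swapping detectors, it can also be worth brightening the image first and retrying dlib. A minimal gamma-correction sketch (pure NumPy; the gamma value is illustrative, and OpenCV's CLAHE is a more adaptive alternative):

```python
import numpy as np


def gamma_brighten(gray, gamma=0.5):
    """Brighten an 8-bit grayscale image with gamma correction.

    gamma < 1 lifts dark pixels proportionally more than bright ones,
    which often helps detectors on underexposed images.
    """
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)],
                   dtype=np.uint8)
    return lut[gray]  # table lookup applied element-wise
```

Run the detector on `gamma_brighten(gray)` instead of the raw frame; since this is a per-pixel lookup table, it is cheap enough to apply on every frame.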
I've been exploring dlib's face detector through its Python API. On most images in my data set it seems to perform slightly better than cv2, so I kept playing around with it in multiple-faces-per-picture scenarios.
Going through dlib's Python examples, it seems it would be possible to train on these images, but I am wondering if anyone has a suggestion for making sure the two faces on the far left and far right are detected out of the box?
This is the image on which I am having trouble finding all 6 faces (https://images2.onionstatic.com/onionstudios/6215/original/600.jpg)
Dlib has a very precise face detector, but it performs poorly on non-frontal faces (like the far left) and/or occluded faces (like the far right).
Seeta (https://github.com/seetaface/SeetaFaceEngine) handles those better, but it is less precise.
I also tried retraining dlib's face detector, and obtained much lower precision than dlib and less recall than Seeta. So retraining dlib does not seem like a good idea.
In my experience, dlib does not do very well out of the box with obscured and profile faces. I would recommend training dlib with more data of this kind.
I have a not-so-simple question.
The Situation:
I'm working on a robust facial detection API in Python, written on top of OpenCV (cv, not cv2).
I am using Haar cascades for face detection, specifically:
front - haarcascade_frontalface_default.xml
profile - haarcascade_profileface.xml
Each worker uses a different Haar classifier (front/profile) and produces a set of ROIs (Regions of Interest); I then take the union of these sets and merge all overlapping bounding boxes.
The result is "your casual red square" around a face, with about 70% accuracy and not too many phantom faces.
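The union-and-merge step described above can be sketched in pure Python as follows (the IoU threshold is a hypothetical tuning parameter, and overlapping boxes are merged into their bounding union):

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0


def merge_overlapping(boxes, thresh=0.1):
    """Greedily merge (x, y, w, h) boxes whose IoU exceeds `thresh`.

    Overlapping boxes are replaced by the smallest box containing both,
    so a face found by two cascades yields one rectangle.
    """
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            if iou(box, m) > thresh:
                x1 = min(m[0], box[0])
                y1 = min(m[1], box[1])
                x2 = max(m[0] + m[2], box[0] + box[2])
                y2 = max(m[1] + m[3], box[1] + box[3])
                merged[i] = (x1, y1, x2 - x1, y2 - y1)
                break
        else:
            merged.append(tuple(box))
    return merged
```

One pass like this is enough when detections cluster tightly; for heavily overlapping cascades you may need to repeat until the list stops shrinking, or use OpenCV's groupRectangles instead.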
The problem:
Simply tilting the face: my algorithm cannot detect a tilted face.
For profile detection I did a simple horizontal flip of the image to detect both left and right profiles.
I was thinking there "should" be a better way to detect a tilted face than calling the algorithm multiple times on slightly rotated copies of the image (that is the only solution that came to my mind).
The question:
Is there an approach, or a specific Haar classifier, for detecting tilted faces?
Thank you :)
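For reference, the rotate-and-detect fallback mentioned in the question can be sketched as below. The OpenCV calls are kept in comments so the sketch stays self-contained; the helpers (my own names) reproduce cv2.getRotationMatrix2D's convention so detections can be mapped back to the original frame:

```python
import math


def rotation_matrix_2d(angle_deg, center):
    """2x3 affine matrix rotating by angle_deg about `center`,
    matching cv2.getRotationMatrix2D(center, angle_deg, 1.0)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    cx, cy = center
    return [[c, s, (1 - c) * cx - s * cy],
            [-s, c, s * cx + (1 - c) * cy]]


def apply_affine(m, point):
    """Apply a 2x3 affine matrix to an (x, y) point."""
    x, y = point
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])


# Sketch of the multi-angle loop (cv2 usage shown in comments):
#
# for angle in (-30, -15, 0, 15, 30):
#     M = cv2.getRotationMatrix2D(center, angle, 1.0)
#     rotated = cv2.warpAffine(img, M, (width, height))
#     for (x, y, bw, bh) in cascade.detectMultiScale(rotated):
#         inv = rotation_matrix_2d(-angle, center)
#         top_left = apply_affine(inv, (x, y))  # back in original coords
```

This is still the brute-force multiple-calls approach the question hoped to avoid; the alternatives are a classifier trained on rotated faces or a detector that is rotation-tolerant by design (e.g. Pico, linked in an answer above, supports rotation-invariant detection).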