How can I use Python and OpenCV to find facial similarity?
I've successfully used OpenCV and Python to extract faces from multiple photographs using Haar Cascades.
I now have a directory of images, all of which are faces of different people.
What I'd like to do is take a sample image, and then see which face it most looks like.
I've tried using pyssim:
pyssim needle.jpg "haystack/*"
But, unfortunately, it's looking at overall image similarity (colours etc.) rather than facial features.
To reiterate - I'm quite happy detecting faces, what I'd like to be able to do is compare them.
Related
First of all, I am a beginner in computer vision field, learning OpenCV from the web.
What I am trying is stitching multispectral (bands > 3) images with OpenCV stitching APIs.
I already know that OpenCV doesn't support multispectral image.
So, the idea I came up with is as follows:
Extract the RGB images from each multispectral image.
Use cv2.Stitcher_create() and the stitcher.stitch method to stitch all the RGB images (reference: https://pyimagesearch.com/2018/12/17/image-stitching-with-opencv-and-python/), and save the warping and arrangement information (e.g. homography, matching keypoints...) used in building the RGB panorama.
Stitch each remaining bands' image by loading the informations that saved in step 2.
The problem is, I can't find the code for saving and loading the information required in steps 2 and 3.
Is the suggested method possible? And if so, are there any tips or references that I can use?
Yes you can do it (I did it before for my paper on stitching construction plans). You need to save the camera parameters after the feature matching and probably also the seam masks.
Look here (cameras) and here (seam masks)
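Below is a minimal sketch of the underlying idea using a plain pairwise-homography pipeline rather than the full Stitcher/detail API: estimate the transform on the RGB composites, save it, then reuse it on the remaining bands. The file names and the single-pair setup are illustrative assumptions, not code from the linked references.

import cv2
import numpy as np

# Estimate a homography between two RGB composites (hypothetical file names)
img1 = cv2.imread("rgb_left.png")
img2 = cv2.imread("rgb_right.png")

orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = orb.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Step 2: save the warping information computed from the RGB images
np.save("homography_right_to_left.npy", H)

# Step 3: load it again and apply the same warp to any other band of the same frame
H = np.load("homography_right_to_left.npy")
band_right = cv2.imread("band5_right.png", cv2.IMREAD_GRAYSCALE)
warped = cv2.warpPerspective(band_right, H,
                             (img1.shape[1] + band_right.shape[1], img1.shape[0]))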
Is there a way to track down and nicely blur faces or parts of faces (like hair) in multiple 360-degree images via Python OpenCV? I'm using Windows OS and Python 3.8.
Two methods with OpenCV and Python:
Using a Gaussian blur to anonymize faces in images and video streams
Applying a “pixelated blur” effect to anonymize faces in images and video
Both methods are well explained here, and you can access the code.
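As a rough sketch of the first (Gaussian blur) method, assuming the stock Haar cascade that ships with OpenCV and a hypothetical input file name:

import cv2

# Stock frontal-face Haar cascade bundled with opencv-python
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("pano.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    roi = img[y:y + h, x:x + w]
    k = max(w // 3 | 1, 3)  # kernel roughly proportional to face size; must be odd
    img[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (k, k), 0)

cv2.imwrite("pano_blurred.jpg", img)

Note that a frontal-face cascade may miss distorted or profile faces in equirectangular 360-degree images, so you may need to detect on reprojected views.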
Now, a more advanced solution, if you are using a GPU and want to run the application on a live video stream, is with NVIDIA DS and deep learning. The GitHub repo here reports results on a T4; I believe you should be able to run it on a Jetson Nano. Here is the link
Yes, there is. First you need to detect the face(s) using Haar-cascade, which will provide you the rectangle coordinates of the face location. Then you can use this answer to blur the desired portion of an image.
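For the "pixelated blur" style mentioned above, the detected rectangle can be shrunk and then enlarged with nearest-neighbour interpolation instead of Gaussian-blurred; a minimal sketch (the block count of 8 is an arbitrary assumption):

import cv2

def pixelate_region(img, x, y, w, h, blocks=8):
    # Shrink the ROI to a tiny grid, then blow it back up with
    # nearest-neighbour interpolation to get the mosaic effect
    roi = img[y:y + h, x:x + w]
    small = cv2.resize(roi, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    img[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                       interpolation=cv2.INTER_NEAREST)
    return img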
I've been exploring dlib's face detector through its Python API. On most images in my data set it seems to perform slightly better than cv2, so I kept playing around with it in scenarios with multiple faces in a picture.
Going through dlib's Python examples, it seems like it would be possible to train on these images, but I am wondering if anyone has a suggestion for how to make sure that the two faces on the far left and right are detected out of the box.
This is the image that I am having trouble finding all 6 faces in (https://images2.onionstatic.com/onionstudios/6215/original/600.jpg).
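For reference, detection with dlib's stock frontal detector looks roughly like this; the second argument upsamples the image before detection, which can help with smaller faces (the file name is assumed):

import cv2
import dlib

detector = dlib.get_frontal_face_detector()

img = cv2.imread("600.jpg")  # the group photo linked above
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Second argument = number of times to upsample before detecting;
# upsampling can recover smaller faces at the cost of speed
rects = detector(rgb, 1)
for r in rects:
    cv2.rectangle(img, (r.left(), r.top()), (r.right(), r.bottom()),
                  (0, 255, 0), 2)
print("Detected", len(rects), "faces")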
Dlib has a very precise face detector, but it performs poorly on non-frontal faces (like the far left) and/or occluded faces (like the far right).
Seeta (https://github.com/seetaface/SeetaFaceEngine) works better with those. But it's less precise.
I also tried retraining Dlib's face detector, and obtained much lower precision than stock Dlib and lower recall than Seeta. So retraining Dlib does not seem like a great idea.
In my experience, Dlib does not do very well out of the box with obscured and profile faces. I would recommend training Dlib with more data of this kind.
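If you go the training route, dlib ships a simple HOG-based object detector trainer; a rough sketch, assuming you have annotated the extra profile/occluded faces in an imglab-style XML file (the file names and the C value are assumptions to tune):

import dlib

options = dlib.simple_object_detector_training_options()
options.add_left_right_image_flips = True  # mirror images to double the training data
options.C = 5                              # SVM regularization; tune on a validation set
options.be_verbose = True

# "faces_train.xml" is an imglab-style annotation file you prepare yourself
dlib.train_simple_object_detector("faces_train.xml", "face_detector.svm", options)

# Load and use the trained detector much like the built-in one
detector = dlib.simple_object_detector("face_detector.svm")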
I'm starting out with OpenCV and Python, and I need to do the following tasks:
Get the profile picture of a person, detect the face and save it
Use the saved face as the replacement of a puppet head in a video
Point 1 is already done.
Can you help me with point 2?
Kind regards and thanks in advance.
You can detect the puppet face and replace it with the image that you got cropped out from the profile picture.
Try out detecting faces with the same algorithm (probably you used Haar object detection) on the puppet video and see if it detects anything. If it detects the puppet face, simply get the coordinates and replace that region with the face. Check out this question.
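A rough sketch of that simple replacement, assuming the default Haar cascade does find the puppet head in each frame (the file names are placeholders):

import cv2

face_img = cv2.imread("saved_face.png")  # the face cropped from the profile picture
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("puppet_video.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Resize the saved face to the detected region and paste it in
        frame[y:y + h, x:x + w] = cv2.resize(face_img, (w, h))
    cv2.imshow("result", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()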
If the puppet face is not similar enough to a human face, you will need your own Haar template to detect the puppet head in the video. For that you would have to prepare the template yourself. Look into this link.
Also look into this link. It's in C but you can convert it to Python without much effort.
I'm kind of new to both OCR and Python.
What I'm trying to achieve is to run Tesseract from a Python script to 'recognize' some particular figures in a .tif.
I thought I could do some training for Tesseract, but I didn't find any similar topic on Google or here on SO.
Basically, I have a .tif that contains several images (like an 'arrow', a 'flower' and other icons), and I want the script to print the name of each icon it finds: if it finds an arrow, then print 'arrow'.
Is it feasible?
This is by no means a complete answer, but if there are multiple images in the tif and if you know the size in advance, you can standardize the image samples prior to classifying them. You would cut up the image into all the possible rectangles in the tif.
So when you create a classifier (I won't go into the methods here), the end result would be a synthesis of the classifications of all the smaller rectangles.
So if, given a tif, the 'arrow' or 'flower' images are 16px by 16px, say, you can use Python PIL to create the samples.
from PIL import Image

image_samples = []
im = Image.open("input.tif")
sample_dimensions = (16, 16)

# get_all_corner_combinations is a placeholder: it should yield every
# candidate (left, upper, right, lower) box of the given size in the image
for box in get_all_corner_combinations(im, sample_dimensions):
    image_samples.append(im.crop(box))

# YourClassifier and fuse_classifications are placeholders for the learning
# step and for combining the per-rectangle results into a single label
classifier = YourClassifier()
classifications = []
for sample in image_samples:
    classifications.append(classifier(sample))

label = fuse_classifications(classifications)
Again, I didn't talk about the learning step of actually writing YourClassifier. But hopefully this helps with laying out part of the problem.
There is a lot of research on the subject of learning to classify images, as well as work on cleaning up noise in images before classifying them.
Consider browsing through this nice collection of existing Python machine learning libraries.
http://scipy-lectures.github.com/advanced/scikit-learn/index.html
There are many techniques that relate to images as well.
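As a rough illustration of what YourClassifier could look like with one of those libraries, assuming you already have a handful of labelled 16x16 example crops (the training files and labels below are assumptions):

import numpy as np
from PIL import Image
from sklearn.svm import SVC

def to_features(pil_image):
    # Grayscale 16x16 crop flattened into a 256-element feature vector
    return np.asarray(pil_image.convert("L").resize((16, 16)),
                      dtype=np.float32).ravel() / 255.0

# Hypothetical labelled training crops prepared beforehand
train_files = [("arrow1.png", "arrow"), ("arrow2.png", "arrow"),
               ("flower1.png", "flower"), ("flower2.png", "flower")]
X = np.array([to_features(Image.open(f)) for f, _ in train_files])
y = [label for _, label in train_files]

classifier = SVC(kernel="linear").fit(X, y)

# Classify one of the rectangles cropped out of the tif
sample = Image.open("input.tif").crop((0, 0, 16, 16))
print(classifier.predict([to_features(sample)])[0])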