I'm starting with OpenCV and Python, and I need to do the following tasks:
Get the profile picture of a person, detect the face and save it
Use the saved face as the replacement of a puppet head in a video
Point 1 is already done.
Can you help me with point 2?
Kind regards and thanks in advance.
You can detect the puppet face and replace it with the image you cropped out of the profile picture.
Try running the same detection algorithm (you probably used Haar object detection) on the puppet video and see whether it picks up the puppet face. If it does, simply get the coordinates and replace that region with the face. Check out this question.
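As a rough illustration of that idea, here is a minimal sketch; it assumes the stock haarcascade_frontalface_default.xml actually fires on the puppet face, and the file names are placeholders:

```python
import cv2

# Placeholder file names: the face cropped in step 1 and the puppet video.
face_img = cv2.imread("cropped_profile_face.png")
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("puppet_video.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        # Resize the saved face to the detected box and paste it over the puppet head.
        frame[y:y + h, x:x + w] = cv2.resize(face_img, (w, h))
    cv2.imshow("replaced", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```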
If the puppet face is not similar enough to a human face, you will need a custom Haar template to detect the puppet head in the video; you would have to prepare that template yourself. Look into this link.
Also look into this link. It's in C but you can convert it to Python without much effort.
Related
Is there a way to detect and cleanly blur faces, or parts of a face (like hair), in multiple 360-degree images using Python and OpenCV? I'm using Windows and Python 3.8.
There are two methods with OpenCV and Python:
Using a Gaussian blur to anonymize faces in images and video streams
Applying a “pixelated blur” effect to anonymize faces in images and video
Both methods are well explained here, and you can access the code.
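For reference, a rough sketch of the second ("pixelated blur") idea, assuming you already have the (x, y, w, h) rectangle of a detected face; the block count and file names are placeholders to tune:

```python
import cv2

def pixelate_region(image, x, y, w, h, blocks=10):
    """Pixelate a rectangular region by shrinking it and blowing it back up."""
    roi = image[y:y + h, x:x + w]
    # Downscale the ROI to a tiny grid, then upscale with nearest-neighbour
    # interpolation so each cell becomes one large "pixel".
    small = cv2.resize(roi, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    image[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                         interpolation=cv2.INTER_NEAREST)
    return image

# Example: pixelate a hypothetical face box in a placeholder image.
img = cv2.imread("photo.jpg")
img = pixelate_region(img, 100, 80, 120, 120)
cv2.imwrite("photo_pixelated.jpg", img)
```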
Now, for a more advanced solution, if you have a GPU and want to run the application on a live video stream, there is NVIDIA DeepStream with deep learning. The GitHub repo here reports results on a T4; I believe you should be able to run it on a Jetson Nano. Here is the link.
Yes, there is. First, detect the face(s) using a Haar cascade, which gives you the rectangle coordinates of each face. Then you can use this answer to blur the desired portion of the image.
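A minimal sketch of that two-step approach (the cascade file ships with opencv-python; the image name and blur kernel size are placeholders to adjust):

```python
import cv2

img = cv2.imread("photo.jpg")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    # Blur only the detected face rectangle; a larger (odd) kernel blurs more.
    img[y:y + h, x:x + w] = cv2.GaussianBlur(img[y:y + h, x:x + w], (51, 51), 0)

cv2.imwrite("photo_anonymized.jpg", img)
```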
Figure: the shoe in the red circle is to be detected.
I am trying to create a Python script using cv2 that can recognize the baller's shoe and determine whether the shoe is beyond, on, or before the white line (refer to the image).
I have no idea what kind of approach to use or which algorithms might be helpful. I need some guidance, please help!
(Image is attached)
I realize this would work better as a comment because it isn't a full answer, but I don't have enough rep yet to leave comments, haha.
You may be interested in OpenCV's Canny Edge detection algorithm:
http://docs.opencv.org/trunk/da/d22/tutorial_py_canny.html
This will let you pick out the edges of shapes within your image.
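For example, a minimal Canny sketch (the file name and thresholds are guesses you will need to tune):

```python
import cv2

img = cv2.imread("court.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
blurred = cv2.GaussianBlur(img, (5, 5), 0)  # smooth a little to suppress noisy edges
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
cv2.imwrite("edges.jpg", edges)
```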
Also, you can find similarly colored blobs using SimpleBlobDetector:
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
This should make it fairly easy to detect the white line.
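A rough sketch of that (the file name and parameters are assumptions to tune; since the line is long and thin rather than a compact blob, you might also want to try cv2.HoughLinesP on the Canny edges instead):

```python
import cv2

img = cv2.imread("court.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255       # look for bright (white) regions
params.filterByArea = True
params.minArea = 200         # ignore tiny specks; tune for your image size

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)

# Draw the detected blobs as circles for inspection.
out = cv2.drawKeypoints(img, keypoints, None, (0, 0, 255),
                        cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("blobs.jpg", out)
```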
In order to detect a more complex object like the shoe, you'll probably have to create an object-detection cascade file and use a CascadeClassifier to find it:
http://docs.opencv.org/2.4/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html#cascade-classifier
http://johnallen.github.io/opencv-object-detection-tutorial/
Basically, you take a bunch of pictures to "teach" the classifier what the object looks like and output that information to a file that a CascadeClassifier can use to detect objects in input images. It may be hard to distinguish between different brands of shoe, though, if you need it to be that specific. Also, you may need to adjust the input images (saturation, brightness, etc.) before trying to detect objects in order to get good results.
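Once you have a trained cascade file, using it looks much like face detection; "shoe_cascade.xml" below is a hypothetical name for whatever your training step produced, and the input image is a placeholder:

```python
import cv2

# "shoe_cascade.xml" is a placeholder for your own trained cascade file.
cascade = cv2.CascadeClassifier("shoe_cascade.xml")

img = cv2.imread("court.jpg")  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4):
    # Draw a box around each detection so you can judge the cascade's quality.
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", img)
```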
I'm fairly new to image processing, so I was looking for some help on where to start, or some input on how to do this if you know the answer.
What I would like to be able to do is take an image with a table in it and detect the table. I'd even be happy to settle for detecting many planar surfaces, as I can sift through those easily. What I get right now with OpenCV when I do some contour detection is:
Original
Contour detected
As you can see, this may have potential with some refinement, but it missed the bulk of the table. I'm currently working in Python for this as well.
Good day. I have a set of geotagged photos. I want to build a system that approximates the location of a query image based on how similar it is to the geotagged photos. I will be using Python and OpenCV to accomplish this task. However, the problem is that most of the geotagged photos have people in them (I'm only after the background scenery).
I found some face-detection algorithms that I can use to detect people in photos. However, what I need is to detect people's whole bodies in the images, not just their faces, and keep only the background.
OpenCV has background-subtraction algorithms (I was hoping to reverse the output and keep the background instead). However, these only apply to videos (separating moving parts from a static background). Can you recommend any solution to this problem (where to start / related studies / algorithms)? I appreciate any help. Thanks!
I am new to the computer vision area and I have been given this task:
I need to recognize a number of images with a camera as soon as they enter the camera's view. These images would be scanned beforehand and stored in some sort of database (maybe a collection of keypoints for each image).
Well, I've been doing some research and found that SIFT may do the trick, but I don't know how to use it properly. I need to do this with Python and OpenCV.
Note: I already found examples of getting the keypoints of an image using SIFT, but the code is very confusing to someone who does not know the language. Any help is appreciated.
Here is a good page for you to get started and learn the basics along the way.
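As a starting point, here is a small sketch of extracting SIFT keypoints and matching a stored image against a camera frame with a brute-force matcher and Lowe's ratio test; the file names and the match threshold are placeholders, and note that in recent opencv-python builds SIFT is exposed as cv2.SIFT_create() (older versions needed cv2.xfeatures2d.SIFT_create() from opencv-contrib-python):

```python
import cv2

# Placeholder file names: one stored/database image and one camera frame.
db_img = cv2.imread("stored_image.jpg", cv2.IMREAD_GRAYSCALE)
query = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(db_img, None)
kp2, des2 = sift.detectAndCompute(query, None)

# Brute-force matcher plus the ratio test from Lowe's SIFT paper.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Crude decision rule: the more good matches, the more likely the stored image
# is visible in the frame. The threshold of 20 is an assumption to tune.
print("good matches:", len(good))
if len(good) > 20:
    print("stored image probably visible in this frame")
```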