I am currently making an application with OpenCV and a web server that finds certain car brands as part of an ongoing game in my family.
However, I don't know where to start. I googled it, but all I found was a post on finding a yellow ball. I want to find a car logo in a picture (which could be angled or have glare) so I can identify the car brand and add points to the score.
I know it seems like a tall order but could anybody help?
You could probably use Haar cascades in OpenCV to do this. You will need to train a Haar detector with both positive and negative samples of the logo, but there are already utilities in OpenCV to help you with this. Just read up on Haar cascades in the OpenCV documentation.
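If it helps, here is a minimal sketch of the detection side, assuming you have already trained a cascade with OpenCV's training tools and saved it as logo_cascade.xml (a placeholder name):

```python
import cv2

# Load a cascade you trained yourself; "logo_cascade.xml" is a
# placeholder for your own trained model file.
logo_cascade = cv2.CascadeClassifier("logo_cascade.xml")

img = cv2.imread("car.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors usually need tuning per cascade.
logos = logo_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a box around each detection so you can eyeball the results.
for (x, y, w, h) in logos:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)
```

Angled and glaring shots are exactly where cascades struggle, so make sure your positive training samples include those conditions.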
I am currently learning opencv to process images in Python.
I have some pictures, and I need to detect which part of each picture represents the sky; then I need to calculate the ratio of sky pixels to total pixels.
In your opinion, is there a way to do this with opencv or should I train a neural network to recognize the sky in a series of pictures?
Any help would be greatly appreciated. Thank you.
I tried thresholding, contouring, and background subtraction in OpenCV.
IMHO, a CNN for that is a bit overpowered. Personally, I would try to select all the correlated pixels, starting from a targeted one that surely represents the sky (something like the luminance/color picker implemented in Adobe Camera Raw).
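A rough sketch of that region-growing idea with cv2.floodFill; the seed choice (top centre of the frame is sky) and the colour tolerances are assumptions you would tune for your photos:

```python
import cv2
import numpy as np

img = cv2.imread("sky.jpg")
h, w = img.shape[:2]

# floodFill needs a mask 2 pixels larger than the image.
mask = np.zeros((h + 2, w + 2), np.uint8)

# Assumption: the pixel near the top centre of the frame belongs to the sky.
seed = (w // 2, 10)

# loDiff/upDiff control how far a neighbour's colour may deviate
# from the seed before it stops being "correlated".
cv2.floodFill(img, mask, seed, newVal=(0, 0, 255),
              loDiff=(20, 20, 20), upDiff=(20, 20, 20),
              flags=cv2.FLOODFILL_FIXED_RANGE)

# The mask is filled with 1s where the flood reached (inside the border).
sky_pixels = int(mask[1:-1, 1:-1].sum())
print("sky fraction:", sky_pixels / (h * w))
```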
I am new to Python and OpenCV. I am analysing images of clouds, and I need to remove the buildings so that the subsequent analysis will have less noise. I tried using Canny edge detection and then filling in the contours, but did not get too far. I also tried thresholding by pixel colours, but I cannot reliably exclude just the buildings without also removing parts of the image containing clouds.
Is there a way I can efficiently and accurately remove the buildings and keep all of the clouds/sky? Thanks for the tips in advance.
You could use a computer vision model that finds the buildings. There may be some open source ones out there. The only one I can think of at the moment is this semantic segmentation model. There should be details on how to implement it, but there could definitely be others out there.
https://github.com/CSAILVision/semantic-segmentation-pytorch
I think one of the classes is buildings, so you could theoretically run the model, get the region labelled as building, and take it out.
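Something like this sketch, assuming you have already run the model and saved its per-pixel label map; BUILDING_ID and both file names are placeholders you would fill in from the model's class list and your own pipeline:

```python
import cv2
import numpy as np

# Assumption: `labels` is the per-pixel class map from the segmentation
# model (same height/width as the image), and BUILDING_ID is whatever
# integer that model assigns to its "building" class.
BUILDING_ID = 1  # placeholder; check the model's class list

img = cv2.imread("clouds.jpg")
labels = np.load("labels.npy")  # placeholder for the model's output

# Zero out every pixel the model called a building.
building_mask = (labels == BUILDING_ID)
cleaned = img.copy()
cleaned[building_mask] = 0

cv2.imwrite("clouds_no_buildings.jpg", cleaned)
```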
I am working on a face recognition project where I can first enroll myself and then start a recognition script that opens my webcam and recognizes me. I am following this article and it's working perfectly fine.
What I have noticed is that if I show a photo of myself to the camera instead of standing in front of it, it still detects and recognizes my face. There is no anti-spoofing involved. I want to include an anti-spoofing method in the code so that it can detect whether the detected/recognized face is real or fake. For this I thought of the following approaches:
1. Eye blink detection: Initially I thought I would implement an eye blink detection algorithm, but it has its disadvantages. What if a real person didn't blink for a while? In that case our code would tag that face as fake. Also, the eyes were not getting detected at a distance of 1-1.5 meters from the camera.
2. Using a temperature sensor: I also interfaced an Omron thermal sensor so that I can get the temperature of the face. A normal human face is always above a certain temperature threshold, while a face in a photo will always be below it. I implemented this and it was working fine, but I later realized that if someone shows a photo on a phone, the phone's warm screen pushes the reading above the threshold, so the photo gets tagged as a real face.
The above methods didn't work for me. I am looking for a simple solution that can work in all these scenarios. I am doing this project on a Raspberry Pi, so I'm looking for a solution that is compatible with it. Please help. Thanks.
Sorry for any mistakes, because I am not from a Raspberry Pi background, but as a decent guy helping people, I think you should try a resolution check (if it is possible), because a phone's screen would always have less resolution than a real face. You can then combine it with the eye blink method to catch a phone, since photos do not blink. The average human blinks 12 times a minute, so once every 5 seconds. This will also help you catch printed photos. Hope this helps.
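If you want to try the blink idea, here is a rough sketch using the Haar cascades that ship with OpenCV; the "eyes missing for up to 5 frames counts as a blink" rule is just an assumption to tune for your camera's frame rate:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
eyes_missing = 0
blinks = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
        if len(eyes) == 0:
            eyes_missing += 1
        else:
            # Eyes absent for a few frames and then back: count one blink.
            if 1 <= eyes_missing <= 5:  # assumption: a blink spans <= 5 frames
                blinks += 1
            eyes_missing = 0
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
print("blinks seen:", blinks)
```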
You should use an object detector on top of the face detector. It can definitely detect a phone.
You could retrain it to detect a photo being held up as well.
Have the object detector run first, save the bounding box coordinates of the phone, then see if the face bounding box falls inside the phone's box.
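The containment check itself is just a few lines; the boxes below are made-up example values standing in for your detectors' outputs:

```python
def box_inside(inner, outer):
    """Return True if bounding box `inner` lies fully inside `outer`.

    Boxes are (x, y, w, h) tuples, as returned by most detectors.
    """
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return (ix >= ox and iy >= oy and
            ix + iw <= ox + ow and iy + ih <= oy + oh)

# Hypothetical detector outputs:
phone_box = (100, 80, 200, 350)
face_box = (150, 140, 80, 100)

if box_inside(face_box, phone_box):
    print("Face is on a phone screen -> treat as spoof")
```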
I'm working on a face recognition system right now as my thesis project. Have you tried this article? Adrian says that it is usable on a Raspberry Pi, but it means we have to install TensorFlow & Keras to do it. I think this could help.
Good day. I have a set of geotagged photos. I want to build a system that approximates the location of a query image based on how similar it is to the geotagged photos. I will be using Python and OpenCV to accomplish this task. However, the problem is that most of the geotagged photos have people in them (I'm only after the background scenery).
I found some face detection algorithms that I can use to detect people in photos. However, what I need is to detect each person's whole body so I can remove them and keep just the background.
OpenCV has algorithms that can be used for background removal (I was hoping to invert the output and keep the background instead). However, these are only applicable to videos (separating static parts from moving parts). Can you guys recommend any solution to this problem (where to start / related studies / algorithms)? I appreciate any help. Thanks!
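Not a full answer, but one common starting point for the whole-body part is OpenCV's built-in HOG pedestrian detector. A minimal sketch that masks detected people out and keeps only the background (the file names are placeholders):

```python
import cv2
import numpy as np

# OpenCV ships a pre-trained HOG pedestrian (full-body) detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("geotagged.jpg")
rects, weights = hog.detectMultiScale(img, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)

# Build a mask of everything that is NOT a detected person,
# leaving only the background scenery for matching.
mask = np.full(img.shape[:2], 255, np.uint8)
for (x, y, w, h) in rects:
    mask[y:y + h, x:x + w] = 0

background_only = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("background.jpg", background_only)
```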
I'm starting with openCV and Python, and I need to do the following tasks:
Get the profile picture of a person, detect the face and save it
Use the saved face as the replacement of a puppet head in a video
Point 1 is already done.
Can you help me with point 2?
Kind regards and thanks in advance.
You can detect the puppet face and replace it with the image that you cropped out of the profile picture.
Try detecting faces with the same algorithm (you probably used Haar object detection) on the puppet video and see if it detects anything. If it detects the puppet face, simply get the coordinates and replace that region with the face. Check out this question.
If the puppet face is not too similar to a human face, you will need a custom Haar cascade to detect the puppet head in the video. For that you would have to prepare the training samples yourself. Look into this link.
Also look into this link. It's in C but you can convert it to Python without much effort.
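Putting the pieces together, here is a rough sketch of the ROI replacement, assuming the stock frontal-face cascade happens to fire on the puppet head (otherwise swap in your custom cascade); face.png and puppet.mp4 are placeholder names:

```python
import cv2

# Placeholder filenames; swap in your cropped profile face and puppet video.
face_img = cv2.imread("face.png")
cap = cv2.VideoCapture("puppet.mp4")

# Assumption: the stock frontal-face cascade detects the puppet head.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
out = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if out is None:
        fourcc = cv2.VideoWriter_fourcc(*"mp4v")
        out = cv2.VideoWriter("swapped.mp4", fourcc, fps,
                              (frame.shape[1], frame.shape[0]))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Resize the saved face to the detected box and paste it in.
        frame[y:y + h, x:x + w] = cv2.resize(face_img, (w, h))
    out.write(frame)

cap.release()
if out is not None:
    out.release()
```

This hard-pastes a rectangle; for a nicer result you could blend the edges (e.g., with cv2.seamlessClone) instead of overwriting the ROI directly.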