I am looking into image processing with an SJ4000 camera connected via USB to a Raspberry Pi (running Raspbian Jessie), using OpenCV in Python. I have achieved quite a bit with my laptop's webcam, but now I need to port the code over to the SJ4000 setup, and I am stuck at this hurdle.
The code I've used is identical to the answer to this question: rotated face detection.
With my laptop's webcam I get a reasonably good framerate, and when the SJ4000 is connected to my laptop via USB I also get a good framerate. However, when I execute the same code on the Raspberry Pi, the image is simply frozen, and I have to force-quit the video viewer window that appears.
EDIT 1: After closing and reopening the Spyder IDE a few times and executing the same code, I can see a feed, but the framerate is very low (2-3 seconds per frame) and it freezes after a while.
EDIT 2: I've done further testing and found that when I include the face detection code, the feed takes a long time to be displayed: there is a TEN second delay. When I display the feed live without any processing, it's very responsive.
How should I get around this? Is the only way to get a more powerful processor?
Thanks for any help!
Like others said, face detection is very computationally expensive using HOG/Haar descriptors. You won't be able to do real time face detection on the Raspberry Pi. On my Raspberry Pi 3, I can do human body detection on a 300x300 image at around 5 fps.
What I recommend is: Do motion detection. When motion is detected, start face detection.
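A minimal sketch of that idea, assuming a Haar cascade from the OpenCV distribution, the default camera index, and rough motion/size thresholds you would tune for your setup:

```python
import cv2

# Hypothetical motion-gated face detection: only run the (expensive) cascade
# when the background subtractor reports enough foreground pixels.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    small = cv2.resize(frame, (320, 240))        # keep the Pi's workload low
    mask = bg.apply(small)                       # foreground (motion) mask
    motion = cv2.countNonZero(mask) > 0.02 * mask.size

    if motion:                                   # only now pay for face detection
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.2, 5):
            cv2.rectangle(small, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("feed", small)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```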
Further optimization can be done by running face detection in its own thread and having motion detection feed a FIFO of frames to be analyzed by the face detector whenever motion is detected in a frame. That way, the face detector can operate asynchronously and won't hold up the main thread that captures the video frames and does motion detection.
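A rough sketch of that threaded layout; the queue size, cascade path, and motion threshold are all assumptions to adapt:

```python
import queue
import threading
import cv2

frame_queue = queue.Queue(maxsize=8)   # bounded FIFO so the detector can't fall far behind

def face_worker():
    # Runs in its own thread: pull frames that contained motion and detect faces.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    while True:
        frame = frame_queue.get()
        if frame is None:               # sentinel value used to shut the worker down
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.2, 5)
        if len(faces):
            print("faces found:", faces)  # or push results onto another queue

threading.Thread(target=face_worker, daemon=True).start()

cap = cv2.VideoCapture(0)
bg = cv2.createBackgroundSubtractorMOG2()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    motion = cv2.countNonZero(bg.apply(frame)) > 500
    if motion and not frame_queue.full():
        frame_queue.put(frame.copy())   # drop frames rather than block the capture loop
    cv2.imshow("live", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

frame_queue.put(None)                   # stop the worker
cap.release()
cv2.destroyAllWindows()
```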
I'm working on code that reads incoming video from a Raspberry Pi, performs face detection on the frames, draws boxes around the faces, and then writes the frames back into an MP4 file at the same FPS. I use OpenCV to open and read from the PiCam.
When I look at the saved video, it appears to be moving too fast. I let my code run for around 2 minutes, but the video is only 30 seconds long. When I disable all post-processing (face detection), the output video plays at a stable speed.
I can understand that the Raspberry Pi has a small processor for heavy computations, but I cannot understand why the video length is shorter. Is it possible that my face detection pipeline runs much slower than the camera FPS, so the camera buffer drops frames that are not grabbed by the pipeline in a timely fashion?
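One way I thought of checking this is to time the full pipeline and open the VideoWriter with the measured rate instead of the camera's nominal FPS (the file name and codec below are just placeholders):

```python
import time
import cv2

cap = cv2.VideoCapture(0)
frames = []
start = time.time()

while len(frames) < 100:                 # sample a short run through the full pipeline
    ok, frame = cap.read()
    if not ok:
        break
    # ... run the actual face detection / box drawing here, so the timing
    # reflects the real per-frame cost ...
    frames.append(frame)

effective_fps = len(frames) / (time.time() - start)
print("effective pipeline FPS:", effective_fps)

h, w = frames[0].shape[:2]
out = cv2.VideoWriter("out.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      effective_fps, (w, h))
for f in frames:
    out.write(f)
out.release()
cap.release()
```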
Any help here is highly appreciated!
I am working on a face recognition project where I can first enroll myself and then start a recognition script which will start my webcam and recognize me. I am following this article and it's working perfectly fine.
What I have noticed is that if I show a photo of myself to the camera instead of standing in front of it, it still detects and recognizes my face. There is no anti-spoofing involved. I want to include an anti-spoofing method in the code so that it can detect whether the detected/recognized face is real or fake. For this I thought of the following approaches:
1. Eye blink detection: Initially I thought I would implement an eye blink detection algorithm, but it has its disadvantages. If a real person doesn't blink for some time, the code will tag that face as fake. Also, the eyes were not detected at a distance of 1-1.5 meters from the camera.
2. Using a temperature sensor: I also interfaced an Omron thermal sensor so that I can get the temperature of the face. A real human face is always above a certain temperature threshold, while a face in a printed photo will always be below it. I implemented this and it was working fine, but I later realized that if someone shows a photo on a phone, the phone's warm screen pushes the reading above the threshold and the photo is tagged as a real face.
The methods mentioned above didn't work for me. I am looking for a simple solution that can work in all these scenarios. I am doing this project on a Raspberry Pi, so I'm looking for a solution that is compatible with it. Please help. Thanks.
Sorry for any mistakes, because I don't come from a Raspberry Pi background, but I think you should try a resolution check (if possible), because a phone's screen will always have a lower resolution than a real face. You can then combine it with the eye-blink method to catch printed photos, since photos don't blink. The average human blinks 12 times a minute, so about once every 5 seconds. Hope this helps.
You should use an object detector on top of the face detector. It can definitely detect a phone.
You could retrain it to detect a photo being held up as well.
Have the object detector run first, save the bounding box coordinates of the phone, then check whether the face's bounding box lies inside the phone's.
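For example, a containment check of that kind might look like this (the boxes below are made-up values; in practice they would come from your detectors):

```python
# Hypothetical sketch of the containment check described above. It assumes you
# already have a phone box from an object detector and a face box from the face
# detector, both as (x, y, w, h) tuples in the same image coordinates.
def box_inside(inner, outer):
    ix, iy, iw, ih = inner
    ox, oy, ow, oh = outer
    return (ix >= ox and iy >= oy and
            ix + iw <= ox + ow and
            iy + ih <= oy + oh)

phone_box = (120, 80, 200, 350)   # example values, not real detections
face_box = (170, 140, 90, 110)

if box_inside(face_box, phone_box):
    print("Face lies inside a detected phone -> treat as spoof")
else:
    print("Face is not inside a phone -> likely real")
```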
I'm working on a face recognition system right now as my thesis project. Have you tried this article? Adrian says it is usable on a Raspberry Pi, but it means we have to install TensorFlow & Keras. I think this could help.
I am trying to create a custom VR headset that displays a live feed from a remote camera. In order for the view to be clear through the VR headset, I need to duplicate the camera image for both eyes and apply barrel distortion (see attached picture) to offset the distortion from the lenses. Duplicating the image should be simple, but I do not know how to apply the distortion.
Most of the solutions I've found online are built into some sort of game engine or VR SDK, but I don't want to use a game engine since I'm only processing a raw camera feed.
I am planning on using OpenCV to do this and I'm hoping to get at least 30fps at 1080p (hardware is an NVIDIA Jetson Nano with a CSI camera). What would be the best way to go about doing this?
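The rough direction I'm considering is to precompute a radial-distortion remap in OpenCV and apply it per frame with cv2.remap, something like the sketch below (the coefficients k1/k2 are placeholders I'd tune for the lenses), but I'm not sure if this is the right approach:

```python
import cv2
import numpy as np

def barrel_maps(w, h, k1=0.25, k2=0.1):
    # Build remap tables once: each output pixel samples the source at a
    # radius scaled by the standard radial distortion polynomial.
    xs = (np.arange(w) - w / 2) / (w / 2)
    ys = (np.arange(h) - h / 2) / (h / 2)
    xv, yv = np.meshgrid(xs, ys)
    r2 = xv ** 2 + yv ** 2
    factor = 1 + k1 * r2 + k2 * r2 ** 2
    map_x = (xv * factor * (w / 2) + w / 2).astype(np.float32)
    map_y = (yv * factor * (h / 2) + h / 2).astype(np.float32)
    return map_x, map_y

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
h, w = frame.shape[:2]
map_x, map_y = barrel_maps(w, h)             # precompute once; per-frame remap is cheap

while ok:
    eye = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
    stereo = np.hstack((eye, eye))           # same distorted view for both eyes
    cv2.imshow("headset", stereo)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
    ok, frame = cap.read()

cap.release()
cv2.destroyAllWindows()
```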
I want to build a webcam-based 3D scanner. Since I'm going to use a lot of webcams, I'm doing some tests beforehand.
I have ordered 3 identical cameras that I will drive in Python to take snapshots at the same time.
Obviously the bus is going to be saturated when there are 50 of them.
What I want to know is whether the cameras are able to hold the picture until it is transferred to the computer.
To simulate this behavior I'd like to slow down the USB bus and take a snapshot with 3 cameras.
I'm on Windows 7 Pro; is this possible?
Thanks.
PS: couldn't I saturate the USB bus by plugging in some external USB hard drives and doing some file transfers?
What I want to know is whether the cameras are able to hold the picture until it is transferred to the computer.
That depends on the camera model, but since you mention in your post you are using "webcams", then the answer is almost certainly no. You could slow down the requests you make to the camera to take a picture though.
This sequence of events is possible:
wait
request camera takes picture
camera returns picture as normal
wait
This sequence of events is not possible (with webcams at least):
wait
request camera takes picture
wait
camera returns picture at a significantly later time, which you want to have control over
wait
If you need the functionality displayed in the last sequence I provide (a controllable time between capture and readout of the picture) you will need to upgrade to a better camera, such as a machine vision camera. These cameras usually cost considerably more than webcams and are unlikely to interface over USB (though you might find some that do).
You might be able to find some other solution to your problem (for instance, what happens if you request 50 photos from 50 cameras and saturate the USB bus? Do the webcams you have buffer the data well enough to achieve your ultimate goal, or does this affect the quality of the picture?)
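If you want to experiment from Python, one possible test sketch with OpenCV is to grab() from all cameras back-to-back and then retrieve() the frames one by one (the device indices are assumptions for your setup):

```python
import cv2

# Hypothetical test: latch a frame on each camera as close together as the bus
# allows, then transfer and save them afterwards.
cams = [cv2.VideoCapture(i) for i in range(3)]

for cam in cams:
    cam.grab()                      # latch a frame on each camera back-to-back

for i, cam in enumerate(cams):
    ok, frame = cam.retrieve()      # transfer the latched frames one by one
    if ok:
        cv2.imwrite(f"snapshot_{i}.png", frame)

for cam in cams:
    cam.release()
```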
I'm new to OpenCV and image processing, but I was wondering: if I have a still image, say a JPEG file, is it possible to run OpenCV or another package on the image and have it identify where the humans are? (I don't know if that sounds impossible or not, since I haven't worked much with this, but any advice would be appreciated.) The photos are from a Raspberry Pi, which takes a photo after a PIR motion sensor detects movement.
You can use OpenCV for this. OpenCV's HOG descriptor may help you, and OpenCV can be used on a Raspberry Pi as well.
Here is the link for the people detection code, or you can find it in the samples of the OpenCV package.
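As a rough illustration of what that sample does, a minimal sketch using OpenCV's built-in HOG people detector on a single JPEG might look like this (the file name is a placeholder for the photo your PIR-triggered Pi saves):

```python
import cv2

# Built-in HOG + linear SVM people detector applied to a still image.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("capture.jpg")
rects, weights = hog.detectMultiScale(img, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)

for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("capture_detected.jpg", img)
print(f"Found {len(rects)} person/people")
```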