I have written a motion detection script in Python, and it returns the time of the detected motion at the end.
The problem:
I use the fps to work out the time of the detected moment, but I have a long video (24 hours) with a dynamic framerate, so the computed time of the detected motion is wrong.
I want help determining the time of the detected motion in my video.
Thank you so much for your help.
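One approach that sidesteps the fps arithmetic entirely, sketched below assuming the file is read with OpenCV: cv2.CAP_PROP_POS_MSEC asks the container for the timestamp at the current position, which stays correct even when the framerate varies (motion_detected and the filename are placeholders for your own code):

import cv2

cap = cv2.VideoCapture("recording.mp4")   # hypothetical path to the 24h video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Timestamp the container reports for the current position, in ms;
    # with a variable framerate this is the per-frame time, not index/fps
    t_ms = cap.get(cv2.CAP_PROP_POS_MSEC)
    if motion_detected(frame):             # placeholder for your detector
        h, rem = divmod(t_ms / 1000.0, 3600)
        m, s = divmod(rem, 60)
        print("motion at %02d:%02d:%05.2f" % (h, m, s))
cap.release()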
Related
I recorded and saved a 20-second video in a folder and then played it back. I don't know why, but it cuts off one second every two seconds. Even with the simplest program, using a low resolution and 15 frames per second, the problem continues. Do you have any ideas what I could change to eliminate this problem?
I have a Raspberry Pi Zero 2 W.
import picamera

# 352x240 at a fixed 15 fps on the Pi Zero 2 W
camera = picamera.PiCamera(resolution=(352, 240), framerate=15)
camera.start_recording('timestamped.mp4')
camera.wait_recording(20)   # record for 20 seconds
camera.stop_recording()
print("stop")
I was wondering if anyone was able to return the frames and the time (in minutes and seconds) of the video where objects were detected.
I'm a beginner in object detection and I was trying to detect objects in a video using YOLOv5. I wanted the time when the objects were detected to be printed. Has anyone been able to achieve this?
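A sketch of one way to print detection times, assuming the standard torch.hub YOLOv5 loader and a constant-framerate file (the video path is a placeholder; for a variable framerate, read cv2.CAP_PROP_POS_MSEC instead, as in the sketch near the top):

import cv2
import torch

# Pretrained YOLOv5s via the standard torch.hub loader
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
cap = cv2.VideoCapture('input.mp4')        # hypothetical video path
fps = cap.get(cv2.CAP_PROP_FPS)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # YOLOv5 expects RGB
    results = model(rgb)
    if len(results.xyxy[0]) > 0:           # any detections in this frame?
        mins, secs = divmod(frame_idx / fps, 60)
        print("objects at %02d:%05.2f (frame %d)" % (mins, secs, frame_idx))
    frame_idx += 1
cap.release()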
I'm working on code that reads incoming video from a Raspberry Pi, performs face detection on the frames, draws boxes around the faces, and then writes the frames back into an MP4 file with the same FPS. I use OpenCV to open and read from the PiCam.
When I look at the saved video, it moves too fast. I let my code run for around 2 minutes, but my video has a length of 30 seconds. When I disable all post-processing (face detection), the output video plays at a stable speed.
I understand that the Raspberry Pi has a small processor for heavy computations, but I cannot understand why the video length is shorter. Is it possible that my face detection pipeline runs much slower than the camera FPS, so the camera buffer drops frames that are not grabbed by the pipeline in a timely fashion?
Any help here is highly appreciated!
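If the writer's nominal FPS is higher than the rate your pipeline actually achieves, the output plays fast (2 minutes becoming 30 seconds suggests roughly a 4x gap). A rough sketch of one workaround, timing a warm-up batch to find the real rate before opening cv2.VideoWriter (the warm-up length, codec, and filenames are arbitrary choices):

import time
import cv2

cap = cv2.VideoCapture(0)

# Warm up: time a batch of fully processed frames to measure the real rate
N = 60
buffered = []
start = time.time()
for _ in range(N):
    ok, frame = cap.read()
    if not ok:
        break
    # ... face detection and box drawing would happen here ...
    buffered.append(frame)
effective_fps = len(buffered) / (time.time() - start)

h, w = buffered[0].shape[:2]
out = cv2.VideoWriter('out.mp4', cv2.VideoWriter_fourcc(*'mp4v'),
                      effective_fps, (w, h))
for f in buffered:
    out.write(f)
# ... keep capturing, processing, and writing at the measured rate ...
out.release()
cap.release()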
I am working on a face recognition project where I can first enroll myself and then start a recognition script, which starts my webcam and recognizes me. I am following this article and it's working perfectly fine.
What I have noticed is that if I show a photo of myself to the camera instead of standing in front of it, it still detects and recognizes my face. There is no anti-spoofing involved. I want to include an anti-spoofing method in the code so that it can detect whether the face detected/recognized is real or fake. I thought of the following approaches:
1. Eye blink detection: Initially I thought I would implement an eye blink detection algorithm, but it has its disadvantages. If a real person doesn't blink for some time, the code will tag that face as fake. Also, the eyes were not detected at a distance of 1-1.5 meters from the camera.
2. Using a temperature sensor: I also interfaced an Omron thermal sensor so that I can get the temperature of the face. A normal human face is always above a threshold temperature, while a face in a photo is always below it. I implemented this and it was working fine, but I later realized that if someone shows a photo on a phone, the phone's warm screen pushes the reading above the threshold, so it gets tagged as real.
The methods above didn't work for me. I am looking for a simple solution that works in all these scenarios. I am doing this project on a Raspberry Pi, so I need something compatible with it. Please help. Thanks.
Sorry for any mistakes, because I am not from a Raspberry Pi background, but as a decent guy helping people, I think you should try a resolution check (if possible): a phone's screen will always have a lower resolution than a real face. You can then combine it with the eye blink method to catch a phone, since photos do not blink. The average human blinks 12 times a minute, so about once every 5 seconds. This will help you catch printed photos too. Hope this helps.
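If you go the blink route, the common measurement is the eye aspect ratio (EAR) from Soukupová and Čech (2016); a minimal sketch, assuming you already extract the 6 landmarks per eye from dlib's 68-point predictor:

import numpy as np

def eye_aspect_ratio(eye):
    # eye: array of 6 (x, y) landmarks around one eye, ordered p1..p6
    a = np.linalg.norm(eye[1] - eye[5])   # vertical distance p2-p6
    b = np.linalg.norm(eye[2] - eye[4])   # vertical distance p3-p5
    c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

# A blink shows up as the EAR dropping below roughly 0.2 for a few
# consecutive frames; the exact threshold needs tuning per camera.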
You should use an object detector on top of the face detector. It can definitely detect a phone.
You could retrain it to detect a photo being held up as well.
Have the object detector run first, save the bounding box coordinates of the phone, then check whether the face bounding box falls inside the phone's box.
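A minimal sketch of that containment check, assuming boxes are (x1, y1, x2, y2) tuples in pixel coordinates:

def box_inside(inner, outer):
    # True if the inner box lies entirely within the outer box
    return (inner[0] >= outer[0] and inner[1] >= outer[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

# Example: flag a face as spoofed if it sits inside a detected phone
face = (120, 80, 220, 200)
phone = (100, 60, 260, 320)
print("spoof" if box_inside(face, phone) else "live")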
I'm working on a face recognition system right now as my thesis project. Have you tried this article? Adrian says it is usable on a Raspberry Pi, but it means we have to install TensorFlow and Keras. I think this could help.
I am looking into image processing with OpenCV in Python, using an SJ4000 camera linked via USB to a Raspberry Pi (running Raspbian Jessie). I have achieved quite a bit with my webcam, but now I need to port it to the SJ4000's environment, and I am stuck at this hurdle.
The code I've used is identical to the answer to this question: rotated face detection.
On my laptop's webcam, I get a reasonably good framerate. When the SJ4000 is connected to my laptop via USB, I also get a good framerate. However, on the Raspberry Pi, when I execute the same code, the image is simply frozen, and I have to force-quit the video viewer window that appears.
EDIT 1: After closing the Spyder IDE and loading it up again a few times, then executing the same code, I can see a feed, but the framerate is very low (2-3 seconds per frame) and it freezes after some time.
EDIT 2: I've done further testing and found that when I include the face detection code, it takes a long time for the feed to be displayed: there is a ten-second delay. When I forward the feed live without any processing, it's very responsive.
How should I get around this? Is the only way getting a more powerful processor?
Thanks for any help!
Like others said, face detection is very computationally expensive using HOG/Haar descriptors. You won't be able to do real time face detection on the Raspberry Pi. On my Raspberry Pi 3, I can do human body detection on a 300x300 image at around 5 fps.
What I recommend is: Do motion detection. When motion is detected, start face detection.
Further optimization can be done by running face detection in its own thread and having motion detection feed a FIFO of frames to be analyzed by the face detector whenever motion is detected in a frame. That way, your face detector operates asynchronously and doesn't hold up the main thread, which keeps capturing video frames and doing motion detection.
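A minimal sketch of that pipeline, assuming OpenCV and simple frame differencing for the motion gate (the queue size, diff threshold, and pixel count are arbitrary tuning choices):

import cv2
import queue
import threading

frames = queue.Queue(maxsize=32)  # FIFO between motion gate and face detector

def face_worker():
    # Haar cascade face detector running in its own thread
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    while True:
        frame = frames.get()
        if frame is None:          # sentinel: stop the worker
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) > 0:
            print("face detected")

threading.Thread(target=face_worker, daemon=True).start()

cap = cv2.VideoCapture(0)
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        diff = cv2.absdiff(gray, prev)
        # Crude motion gate: enqueue the frame only when enough pixels changed
        mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
        if cv2.countNonZero(mask) > 500:
            if not frames.full():  # drop frames rather than block capture
                frames.put(frame)
    prev = gray

frames.put(None)
cap.release()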