How to run specific code every n seconds on a video - python

I am working in Python 3 and using the Microsoft Azure Face API function CF.face.detect to detect faces in a video.
I want to detect faces at one-second intervals in the video, i.e. run CF.face.detect once per second on a video frame.
How can I do this?
Thanks in advance.

If you know how many frames per second your video has, you can read the frames one by one and run the detection on every n-th frame, n being the fps of the video you're processing.
fps = 25  # e.g. 25 for a 25 fps video
cnt = 0
for f in get_frames():
    if cnt % fps == 0:
        # run the detection algorithm here, then optionally save the frame
        cv.imwrite(f"frame{cnt}.jpg", f)
    cnt += 1
After you have gone through the whole video you could run the algorithm on the saved frames, but I would suggest running it inside the loop and saving the frame then, preferably with the results drawn on it (e.g. rectangles around the detections).
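A runnable sketch of the idea above (assumptions: the video is a local file readable by OpenCV, and `detect` is any callable, e.g. a thin wrapper around CF.face.detect):

```python
def detection_step(fps):
    """Number of frames to skip between detections, for one detection
    per second of video; falls back to 25 when fps is unreported."""
    return max(1, round(fps)) if fps and fps > 0 else 25

def detect_every_second(path, detect):
    import cv2  # local import so the helper above is usable without OpenCV
    cap = cv2.VideoCapture(path)
    step = detection_step(cap.get(cv2.CAP_PROP_FPS))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            detect(frame)  # e.g. call CF.face.detect on this frame
        idx += 1
    cap.release()
```

The fallback in detection_step matters because some containers and streams report an FPS of 0, which would otherwise make the modulo step invalid.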

Python CV2 reads out-of-date frames from video stream

I am using Python 3.9 and OpenCV (cv2) to read frames from a video stream and save them as JPGs.
My program seems to run OK: it captures the video stream fine, obtains frames, and saves them as JPGs.
However, the frames it obtains from the stream are out of date - sometimes by several minutes. The clock in the video stream runs accurately, but the clock displays in the saved JPGs are all identical to the second, and one or more minutes earlier than the datetime in the program's print() output (and the saved JPG file times). Moving objects that were in view at the time the JPGs were saved are missing from them completely.
Strangely:
The JPG images are not identical in size. They grow by 10K-20K as the sequence progresses. Even though they look identical to the eye, CV2 reports significant differences between them - but PIL reports no difference (and PIL is about 10-15 times slower for image comparisons).
The camera can be configured to send a snapshot by email when it detects motion. These snapshots are up-to-date, and show moving objects that were in frame at the time (but no clock display). Enabling or disabling this facility has no effect on the out-of-date issue with JPGs extracted from the video stream. And, sadly, the snapshots are only about 60K, and too low resolution for our purposes (which is an AI application that needs images to be 600K or more).
The camera itself is ONVIF - and things like PTZ work nicely from Python code. Synology Surveillance Station works really well with it in every aspect. This model has reasonably good specs - zoom and good LPR anti-glare functionality. It is made in China - but I don't want to be 'a poor workman who blames his tools'.
Can anyone spot something in the program code that may be causing this?
Has anyone encountered this issue, and can suggest a work-around or different library / methodology?
(And if it is indeed an issue with this brand / model of camera, you are welcome to put in a plug for a mid-range LPR camera that works well for you in an application like this.)
Here is the current program code:
import datetime
from time import sleep
import cv2

goCapturedStream = None
# gcCameraLogin, gcCameraURL, & gcPhotoFolder are defined in the program, but omitted for simplicity / obfuscation.

def CaptureVideoStream():
    global goCapturedStream
    print(f"CaptureVideoStream({datetime.datetime.now()}): Capturing video stream...")
    goCapturedStream = cv2.VideoCapture(f"rtsp://{gcCameraLogin}#{gcCameraURL}:554/stream0")
    if not goCapturedStream.isOpened():
        print("Error: Video Capture Stream was not opened.")
    return

def TakePhotoFromVideoStream(pcPhotoName):
    llResult = False
    laFrame = None
    llResult, laFrame = goCapturedStream.read()
    print(f"TakePhotoFromVideoStream({datetime.datetime.now()}): Result is {llResult}, Frame data type is {type(laFrame)}, Frame length is {len(laFrame)}")
    if ".jpg" not in pcPhotoName.lower():
        pcPhotoName += ".jpg"
    lcFullPathName = f"{gcPhotoFolder}/{pcPhotoName}"
    cv2.imwrite(lcFullPathName, laFrame)

def ReleaseVideoStream():
    global goCapturedStream
    goCapturedStream.release()
    goCapturedStream = None

# Main Program: Obtain sequence of JPG images from captured video stream
CaptureVideoStream()
for N in range(1, 7):
    TakePhotoFromVideoStream(f"Test{N}.jpg")
    sleep(2)  # 2 seconds
ReleaseVideoStream()
Dan Masek's suggestions were very valuable.
The program (now enhanced significantly) saves up-to-date images correctly when triggered by the camera's inbuilt motion detection (running in a separate thread and communicating through global variables).
The key tricks were:
A much faster loop reading the frames (and discarding most of them): I reduced the sleep to 0.1 (and later to 0.01), and saved relatively few frames to JPG files, only when required.
Slowing down the camera's frame rate (from 25 to 10 fps - I even tried 5 at one point), so that the camera didn't get ahead of the software and send unpredictable frames.
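The "fast loop that discards most frames" trick can be sketched generically. This is an outline under assumptions, not the poster's actual enhanced program: the reader thread drains an already-opened cv2.VideoCapture continuously, so the slot always holds the newest frame.

```python
import threading

class LatestOnly:
    """Thread-safe slot holding only the newest item put into it."""
    def __init__(self):
        self._lock = threading.Lock()
        self._item = None

    def put(self, item):
        with self._lock:
            self._item = item

    def get(self):
        with self._lock:
            return self._item

def drain_stream(cap, slot, stop_event):
    """Run in a background thread with an opened cv2.VideoCapture `cap`:
    read every frame as fast as possible, keeping only the latest, so a
    snapshot taken from `slot` is never minutes behind the live stream."""
    while not stop_event.is_set():
        ok, frame = cap.read()
        if ok:
            slot.put(frame)
```

The saver side then calls slot.get() only when a photo is actually needed, mirroring the "read fast, save rarely" fix described above.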

How to change frame rate FPS of an existing video using openCV python

I am trying to change the frame rate (FPS) of an existing video using the OpenCV library in Python. Below is the code I am trying to execute. Even after setting the FPS property using cv2.CAP_PROP_FPS, the video does not play faster in the cv2.imshow() window, and the getter still returns the old FPS value. How do I set a higher FPS and make the video play faster?
Versions used: Python 3.7.4, opencv-python 4.1.0.25.
import cv2

video = cv2.VideoCapture("yourVideoPath.mp4")
video.set(cv2.CAP_PROP_FPS, int(60))

if __name__ == '__main__':
    print("Frame rate : {0}".format(video.get(cv2.CAP_PROP_FPS)))
    while video.isOpened():
        ret1, frame2 = video.read()
        cv2.imshow("Changed", frame2)
        if cv2.waitKey(10) & 0xFF == ord('q'):  # press q to quit
            break
    video.release()
    cv2.destroyAllWindows()
If you're only trying to play the video in the displayed window, the limiting factor is not the FPS stored in the video but the waitKey(10) call, which makes the program wait 10 ms between frames.
The read() method of the VideoCapture class simply returns the next frame, with no concept of waiting or frame rate. The only thing preventing this code from running as fast as it can is the waitKey(10) call, which is thus the main factor determining playback speed. To change the frame rate as seen through imshow(), you need to change the time spent waiting. The wait is the dominant factor, but not the only one, since reading a frame also takes some time.
If you're actually trying to change the playback rate of an existing file and save the result back to a file, I am unsure whether OpenCV supports this, and I imagine it would depend on which backend you're using - OpenCV implements the VideoCapture class using different third-party backends. As per the documentation of VideoCapture.set(), I would inspect the return value of video.set(cv2.CAP_PROP_FPS, int(60)): the documentation says it returns true if the call actually changed something.
As an alternative, you could investigate something like FFMPEG, which supports this relatively easily. If you want to stick with OpenCV, I know from personal experience that you can do this with the VideoWriter class: read the video frame by frame with VideoCapture, then save it at the desired frame rate with VideoWriter. I suspect FFMPEG will likely meet your needs, however!
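A sketch of the VideoWriter route just mentioned (assumptions: opencv-python is installed, the source file is readable, the mp4v codec is available on your build, and the file paths are placeholders):

```python
def playback_duration(frame_count, fps):
    """Seconds the re-encoded file will play for: same frames, new clock."""
    return frame_count / fps

def rewrite_at_fps(src_path, dst_path, new_fps):
    import cv2  # local import so the helper above is usable without OpenCV
    cap = cv2.VideoCapture(src_path)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                          new_fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(frame)  # frames are copied unchanged; only the FPS stamp differs
    cap.release()
    out.release()
```

For example, re-encoding 300 frames originally recorded at 30 fps into a 60 fps file halves the playback time: playback_duration(300, 60) gives 5 seconds instead of 10.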

extract equal number of frames for all videos

I am working on action recognition, using OpenCV and Python.
I want to extract an equal number of frames (say n) from each video. My videos are of different lengths, so I want to skip more frames in the videos that are longer.
Can anyone give me an idea how to solve this?
Note that when you use imshow to display the frames of a video, you do so within a for/while loop, which is usually infinite. Setting the loop to run n times will give you the first n frames of the video. In Python, you can save each one using imwrite with a filename that changes on every iteration (the extension is required so imwrite knows the output format):
filename = 'frame' + str(i) + '.jpg'
imwrite(filename, img)
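To get n frames spread evenly across each video (rather than just the first n), one common approach - sketched here under the assumptions that opencv-python is installed and the container is seekable - is to compute the target indices from CAP_PROP_FRAME_COUNT and seek with CAP_PROP_POS_FRAMES:

```python
def evenly_spaced_indices(total_frames, n):
    """Pick n frame indices spread evenly across a video of total_frames,
    always including the first and last frame when n > 1."""
    if n <= 0 or total_frames <= 0:
        return []
    if n == 1:
        return [0]
    n = min(n, total_frames)
    return [round(i * (total_frames - 1) / (n - 1)) for i in range(n)]

def extract_frames(path, n, out_pattern="frame{:03d}.jpg"):
    import cv2  # local import so the helper above is usable without OpenCV
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    for k, idx in enumerate(evenly_spaced_indices(total, n)):
        # Seek, then read one frame; frame-accurate seeking depends on
        # the backend and codec, so expect small offsets on some files.
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(out_pattern.format(k), frame)
    cap.release()
```

Because the indices are derived from each video's own frame count, a longer video simply skips more frames between samples, which is exactly the behavior asked for.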

Unable to get OpenCV 3.1 FPS over ~15 FPS

I have some extremely simple performance-testing code below for measuring the FPS of my webcam with OpenCV 3.1 + Python 3 on a Late 2011 MacBook Pro:
import time
import cv2

cap = cv2.VideoCapture(0)
count = 0
start_time = time.perf_counter()
end_time = time.perf_counter()
while (start_time + 1) > end_time:
    count += 1
    cap.read()
    # Attempt to force camera FPS to be higher
    cap.set(cv2.CAP_PROP_FPS, 30)
    end_time = time.perf_counter()
print("Got count", count)
Doing no processing, not even displaying the image or doing this in another thread, I am only getting around 15 FPS.
Trying to access the FPS of the camera with cap.get(cv2.CAP_PROP_FPS) I get 0.0.
Any ideas why?
I've already searched the internet a fair amount for answers, so things I've thought about:
I built OpenCV with release flags, so it shouldn't be doing extra debugging logic
Tried manually setting the FPS each frame (see above)
My FPS with other apps (e.g. Camera toy in Chrome) is 30FPS
There is no work being done in the app on the main thread, so putting the video capture logic in another thread as most other posts suggest shouldn't make a difference
** EDIT **
Additional details: it seems like the first frame I capture is quick, and then subsequent frames are slower; this could be a buffer issue (i.e. the camera is being paused after the first frame because a new buffer must be allocated to write to?)
Tweaked the code to calculate the average FPS so far after each read:
import time
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_CONVERT_RGB, False)
cap.set(cv2.CAP_PROP_FPS, 30)
start_time = time.perf_counter()
count = 0
while True:
    count += 1
    ret, frame = cap.read()
    end_time = time.perf_counter()
    print("Reached FPS of: ", count / (end_time - start_time))
And I get one frame around 30FPS, and then subsequent frames are slower:
Reached FPS of: 27.805818385257446
Reached FPS of: 19.736237223924398
Reached FPS of: 18.173748156583795
Reached FPS of: 17.214809956810114
Reached FPS of: 16.94737657138959
Reached FPS of: 16.73624509452099
Reached FPS of: 16.33156408530572
** EDIT **
Still no luck as of 10/20. My best guess is that there is some issue with memory transfer, since the camera itself can definitely capture at 30 FPS, judging by what other apps achieve.
This is not an answer, but it is too long for a comment on the original question, so I am posting it here instead.
First, it is normal for CV_CAP_PROP_FPS to return 0. OpenCV for Python is just a wrapper around OpenCV C++, and as far as I know this property only works for video files, not cameras. You have to calculate the FPS yourself (as in your edit).
Second, OpenCV has a bug where it always converts the image obtained from the camera to RGB (https://github.com/opencv/opencv/issues/4831). Cameras usually deliver YUYV, and the conversion takes a lot of time. You can list all the resolution + FPS combinations your camera supports (see https://trac.ffmpeg.org/wiki/Capture/Webcam). Some cameras do not support RGB output at all, and forcing OpenCV to produce RGB yields terrible FPS. Due to camera limitations, within the same codec, the higher the resolution, the lower the FPS; across different supported codecs at the same resolution, the bigger the output, the lower the FPS. For example, my camera supports both YUYV and MJPEG: at HD resolution, YUYV maxes out at 10 fps while MJPEG reaches 30 fps.
So first try the ffmpeg executable to grab frames and identify where the problem comes from. If ffmpeg works well, you can use the ffmpeg libraries (not the executable) to get frames from your camera (OpenCV uses ffmpeg for most video I/O, including cameras).
Be aware that I have only worked with ffmpeg and OpenCV in C++, not Python, and using the ffmpeg libraries is another long story.
Good luck!
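One concrete thing to try from the advice above is requesting MJPEG from the camera before the first read. This is a sketch only (whether the settings stick depends on the camera and backend); the fourcc helper shows how the 4-character codec tag is packed into the integer OpenCV expects, equivalent to cv2.VideoWriter_fourcc:

```python
def fourcc(tag):
    """Pack a 4-character codec tag into an int, little-endian byte order
    (the same packing cv2.VideoWriter_fourcc performs)."""
    return sum(ord(c) << (8 * i) for i, c in enumerate(tag))

def open_mjpeg_camera(index=0, width=1280, height=720, fps=30):
    import cv2  # local import so fourcc() is usable without OpenCV
    cap = cv2.VideoCapture(index)
    # Ask for MJPEG before setting resolution/FPS: raw YUYV at HD often
    # caps out around 10 fps, while MJPEG can reach the camera's full 30.
    cap.set(cv2.CAP_PROP_FOURCC, fourcc("MJPG"))
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    cap.set(cv2.CAP_PROP_FPS, fps)
    return cap
```

Checking the boolean return values of those set() calls tells you which properties the backend actually accepted.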

Python and OpenCV - getting the duration time of a video at certain points

Let's say I have made a program to detect a green ball in a video. Whenever a green ball is detected, I want to print the elapsed time in the video at that moment. Is this possible?
In this answer, you will find a solution for determining the frames per second.
So you'd want to use:
fps = cap.get(cv2.CAP_PROP_FPS)
(cv2.cv.CV_CAP_PROP_FPS in old OpenCV 2.x builds) and count the number of frames you're at. Then you can compute the video time with
videotime = current_frame_number / fps
EDIT:
@Miki suggested using CAP_PROP_POS_MSEC, which should give the same time (in ms).
Corrected my typo, as pointed out by @Swiper-CCCVI.
You can simply measure a certain position in the video in milliseconds using
time_milli = cap.get(cv2.CAP_PROP_POS_MSEC)
and then divide time_milli by 1000 to get the time in seconds.
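Putting the two approaches together, here is a small sketch (assumptions: opencv-python is installed, and found_green_ball is a hypothetical stand-in for your own detector returning True when the ball is present):

```python
def frame_time_seconds(frame_number, fps):
    """Elapsed video time of a frame, via the frame-counting approach."""
    return frame_number / fps

def detection_times(path, found_green_ball):
    import cv2  # local import so the helper above is usable without OpenCV
    cap = cv2.VideoCapture(path)
    times = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if found_green_ball(frame):
            # CAP_PROP_POS_MSEC gives the same answer without manual counting
            times.append(cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0)
    cap.release()
    return times
```

For instance, frame 150 of a 25 fps video corresponds to frame_time_seconds(150, 25) = 6.0 seconds into the clip.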
