imshow() with desired framerate with opencv - python

Is there any workaround for using cv2.imshow() at a specific framerate? I'm capturing the video via VideoCapture and doing some light postprocessing on the frames (both in a separate thread, so all frames are loaded into a Queue and the main thread isn't slowed by the computation). I tried to fix the framerate by measuring the time spent "reading" the image from the queue and subtracting that value from the number of milliseconds available for one frame:
if I have an input video at 50 FPS and I want to play it back in real time, I do 1000/50 => 20 ms per frame,
and then wait out the remaining time using cv2.waitKey().
But I still get laggy output, which plays back slower than the source video.

I don't believe there is such a function in OpenCV, but you could improve your method by adding a dynamic wait time using timers, e.g. timeit.default_timer():
calculate the time taken to process the frame, subtract that from the per-frame budget, and maybe leave a few ms of headroom,
e.g. cv2.waitKey((1000/50) - (time processing finished - time read started) - 10)
or you could use a more rigid schedule, e.g. script start time + frame# * 20 ms - time processing finished.
I haven't tried this personally, so I'm not sure it will actually work; it's also worth adding a check so the number never drops below 1, since cv2.waitKey() expects an integer of at least 1 ms (0 blocks until a keypress). A sketch of this idea follows below.
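A minimal sketch of that dynamic-delay idea, assuming a 50 FPS source and a hypothetical 'video.mp4' input (untested, per the caveat above):

import time
import cv2

TARGET_FPS = 50
frame_budget_ms = 1000.0 / TARGET_FPS  # 20 ms per frame at 50 FPS

cap = cv2.VideoCapture('video.mp4')  # hypothetical input file
while cap.isOpened():
    t0 = time.perf_counter()
    ret, frame = cap.read()
    if not ret:
        break
    # ... postprocessing here ...
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    delay = max(1, int(frame_budget_ms - elapsed_ms))  # waitKey needs an int >= 1
    cv2.imshow('frame', frame)
    if cv2.waitKey(delay) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()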

I faced the same problem in one of my projects, in which my source video had 2 FPS. In order to display it nicely with cv2.imshow, I used a delay before showing each frame. It's a kind of hack, but it works for me. The code for this hack is given below. Hope you will get some help from it. Peace!
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
width = 400
height = 350

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (width, height))
    flipped = cv2.flip(frame, 1)
    framerot = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
    framerot = cv2.resize(framerot, (width, height))
    StackImg = np.hstack([frame, flipped, framerot])
    # Put the sleep time according to your fps
    time.sleep(2)
    cv2.imshow("ImageStacked", StackImg)
    if cv2.waitKey(1) & 0xff == ord('q'):
        break
cv2.destroyAllWindows()

Related

Python 3.7 OpenCV - Slow processing

I'm trying to take pictures with OpenCV 4.4.0.40 and only save them if a switch, read by an Arduino, is pressed.
So far everything works, but it's super slow; it takes about 15 seconds for the Switch value to change.
Arduino = SerialObject()
if os.path.exists(PathCouleur) and os.path.exists(PathGris):
    Images = cv2.VideoCapture(Camera)
    Images.set(3, 1920)
    Images.set(4, 1080)
    Images.set(10, Brightness)
    Compte = 0
    SwitchNumero = 0
    while True:
        Sucess, Video = Images.read()
        cv2.imshow("Camera", Video)
        Video = cv2.resize(Video, (Largeur, Hauteur))
        Switch = Arduino.getData()
        try:
            if Switch[0] == "1":
                blur = cv2.Laplacian(Video, cv2.CV_64F).var()
                if blur < MinBlur:
                    cv2.imwrite(PathCouleur + ".png", Video)
                    cv2.imwrite(PathGris + ".png", cv2.cvtColor(Video, cv2.COLOR_BGR2GRAY))
                    Compte += 1
        except IndexError:
            pass
        if cv2.waitKey(40) == 27:
            break
    Images.release()
    cv2.destroyAllWindows()
else:
    print("Erreur, aucun Path")
The saved images are 640 wide and 480 high, and the displayed image is 1920x1080, but even without the imshow it's slow.
Can someone help me optimize this code, please?
I would say that this snippet of code is responsible for the delay, since you have two major calls that depend on external devices to respond (the OpenCV calls and Arduino.getData()).
Sucess, Video = Images.read()
cv2.imshow("Camera", Video)
Video = cv2.resize(Video, (Largeur, Hauteur))
Switch = Arduino.getData()
One solution that comes to mind is to use the threading module and separate the Arduino.getData() call from the main loop: run it in a separate class in the background (you should sleep on the secondary thread, something like a 50 or 100 ms delay between reads); a sketch follows below.
That way you should get a better response on the Switch event when the main loop reads the value.
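A rough sketch of that idea, assuming the SerialObject/getData() API from the question (both are the asker's own classes, not a standard library):

import threading
import time

class SwitchReader:
    # Polls the Arduino in a background thread so the main loop never blocks on serial I/O.
    def __init__(self, arduino, poll_delay=0.05):
        self.arduino = arduino      # the asker's SerialObject instance
        self.poll_delay = poll_delay
        self.latest = None          # last value returned by getData(); None until the first read
        self.stopped = False
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while not self.stopped:
            self.latest = self.arduino.getData()
            time.sleep(self.poll_delay)  # ~50 ms between serial reads

reader = SwitchReader(Arduino)
# ... inside the main loop, replace Arduino.getData() with:
Switch = reader.latest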
cam = cv2.VideoCapture(0, cv2.CAP_DSHOW)
Use cv2.CAP_DSHOW (the DirectShow backend on Windows); it will improve the camera's loading time because it gives you the video feed directly.

OpenCV buffer problem when using external trigger

I'm using OpenCV with a USB camera on a Raspberry Pi 4. I have activated an external trigger via one of the RPi's GPIO pins. I wrote a short Python script to test the trigger and made sure it worked. Then I wrote another Python program to save a single image captured by the camera. Here is the code:
import time
import cv2
import smbus
from gpiozero import LED

TRIG_ADDR = 17

def setup_trigger_control_gpio(pin):
    trigger = LED(pin)
    return trigger

def setup_camera(frame_width, frame_height, fps):
    cap = cv2.VideoCapture(0, cv2.CAP_ANY)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, frame_width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_height)
    cap.set(cv2.CAP_PROP_FPS, fps)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
    if not cap.isOpened():
        print("Cannot open camera. Exiting.")
        quit()
    return cap

trigger = setup_trigger_control_gpio(TRIG_ADDR)
cap = setup_camera(640, 480, 120)
trigger.on()
ret, frame = cap.read()
if ret:
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite("images/frame.pgm", frame)
trigger.off()
cap.release()
I set the buffer size to 1, since I'm using a trigger to capture the frame exactly when I need it. The program, however, gets stuck on the cap.read() line as if it never received the trigger. When I then ran the short trigger-only program, the main program finished successfully. Sometimes I had to run the trigger program more than once for the main program to finish, which I find a bit scary because of the inconsistency.
I tried setting the buffer size to 10, which seemingly worked, but the saved image was empty (all black). I also tried to "flush" the buffer by reading the first 10 empty frames, which worked, and I finally saved a proper image. The real problems occur when I try to process real-time video this way: even after flushing the buffer, the images do not correspond to the point in time when they should have been taken. Therefore I would love to use a buffer size of one, so that I would know exactly which frame is being processed. I will be thankful for any ideas.
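For reference, a minimal sketch of the flush workaround described above, reusing the cap and trigger objects from the code; the number of frames to discard is a guess that may need tuning per camera:

def flush_buffer(cap, n=10):
    # grab() fetches a frame without decoding it, so this is a cheap
    # way to discard whatever the driver has already buffered
    for _ in range(n):
        cap.grab()

flush_buffer(cap)
trigger.on()
ret, frame = cap.read()  # should now correspond to the triggered exposure
trigger.off()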

Read last frame from PiCamera Ring Buffer

I am building a motion detector using PiCamera that sends an image to a message queue to have ML detect a person (or not) which then responds back to the client. The motion detection code is mostly lifted from pyimagesearch.com and is working well.
I now want to save video (including the original few frames) once I know a person is detected. Because of the delay in detecting a human (~3 secs), I plan on using a PiCameraCircularIO ring buffer to ensure I capture the initial few seconds of movement. This means I need to change how I loop through frames when detecting motion. Currently I am using:
camera = PiCamera()
rawCapture = PiRGBArray(camera, size=tuple(resolution))
for f in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    frame = f.array
    # resize, blur, grayscale frame etc.
    # compare to average of last few frames to detect motion
My understanding is that this just grabs the latest frame (i.e. not every frame) and keeps looping until motion is detected. This is what I want, as I don't want the process to lag behind real time by trying to check every frame.
To change this looping logic to leverage the ring buffer I think I can use something like this:
camera = PiCamera()
with camera:
    ringbuf = picamera.PiCameraCircularIO(camera, seconds=60)
    camera.start_recording(ringbuf, format='h264')
    while True:
        # HOW TO GET LATEST FRAME FROM ringbuf???
        # resize, blur, grayscale frame etc.
        # compare to average of last few frames to detect motion
As indicated in the snippet above, how can I get the latest frame as a numpy array from ringbuf to perform my motion detection check? I have seen examples that process every frame by passing motion_output=detect_function when start_recording is called, but this will check every frame, and I don't want the process to lag behind the real-time action.
So I seem to have solved this; the solution is below. I am guessing I lose a frame every time I switch to the detect_motion function, but that is fine by me. The use of saveAt lets me export the video to a file a fixed number of seconds after motion is detected.
import time
from datetime import datetime, timedelta
from picamera import PiCamera, PiCameraCircularIO
from picamera.array import PiRGBArray

camera = PiCamera()
rawCapture = PiRGBArray(camera)

def detect_motion(camera):
    rawCapture.truncate(0)  # reuse the buffer between captures
    camera.capture(rawCapture, format="bgr", use_video_port=True)
    frame = rawCapture.array
    # resize, blur, grayscale frame etc.
    # compare to average of last few frames to detect motion
    return motion_found  # boolean result of the comparison above (elided here)

saveAt = None
with camera:
    stream = PiCameraCircularIO(camera, seconds=60)
    camera.start_recording(stream, format='h264')
    try:
        while True:
            time.sleep(.3)
            if detect_motion(camera):
                print('Motion detected!')
                saveAt = datetime.now() + timedelta(seconds=30)
            else:
                print('Nothing to see here')
            if saveAt:
                print((saveAt - datetime.now()).seconds)
                if saveAt < datetime.now():
                    write_video(stream)  # dumps the ring buffer to file (defined elsewhere)
                    saveAt = None
    finally:
        camera.stop_recording()

Python OpenCV: control (increase/decrease) the video playback speed at a custom rate

I'm writing a program to control the video playback speed at a custom rate.
Is there any way to achieve that?
What code should be added to control the playback speed?
import cv2
cap = cv2.VideoCapture('video.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
In the docs it is stated:

Note: This function should be followed by the waitKey function, which displays the image for the specified milliseconds. Otherwise, it won't display the image. For example, waitKey(0) will display the window infinitely until any keypress (it is suitable for image display). waitKey(25) will display a frame for 25 ms, after which the display will be automatically closed. (If you put it in a loop to read videos, it will display the video frame by frame.)
In the cv2.waitKey(X) function, X is the number of milliseconds for which an image is displayed on the screen. In your case it is set to 1, so theoretically you could achieve 1000 fps (frames per second). But frame decoding takes time in the VideoCapture object and limits your framerate. To change the playback speed, declare a variable and use it as the parameter of the waitKey function.
import cv2
cap = cv2.VideoCapture('video.mp4')
frameTime = 10  # time of each frame in ms; you can add logic to change this value
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(frameTime) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Alternatively, as frame decoding is the most time-consuming task, you can move it to a second thread and use a queue of decoded frames; a sketch of this pattern follows below. See this link for details.
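A minimal producer/consumer sketch of that idea, assuming a local 'video.mp4' (the linked article covers a more complete version):

import queue
import threading
import cv2

def decode_worker(path, q):
    cap = cv2.VideoCapture(path)
    while True:
        ret, frame = cap.read()
        if not ret:
            q.put(None)  # sentinel marking the end of the stream
            break
        q.put(frame)     # blocks when the queue is full, throttling the decoder
    cap.release()

q = queue.Queue(maxsize=64)
threading.Thread(target=decode_worker, args=('video.mp4', q), daemon=True).start()
while True:
    frame = q.get()
    if frame is None:
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()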
The third approach is to separate the grabbing and decoding processes and simply decode every nth frame. That will display only a subset of the frames from the source video, but from the user's perspective the video will play faster.
import cv2
cap = cv2.VideoCapture('video.mp4')
i = 0  # frame counter
frameTime = 1  # time of each frame in ms; you can add logic to change this value
while cap.isOpened():
    ret = cap.grab()  # grab frame
    i = i + 1  # increment counter
    if i % 3 == 0:  # display only one third of the frames; change this parameter according to your needs
        ret, frame = cap.retrieve()  # decode frame
        cv2.imshow('frame', frame)
        if cv2.waitKey(frameTime) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
You can use ffmpeg to speed up (or slow down) your video, like fast-forwarding, by rescaling the "presentation timestamps" (PTS).
To speed up, an example would be:
ffmpeg -i YOUR_INPUT_MOVIE.mp4 -vf "setpts=0.20*PTS" YOUR_OUTPUT_MOVIE.mp4
which will speed your movie up by 5x.
To slow down, an example would be:
ffmpeg -i YOUR_INPUT_MOVIE.mp4 -vf "setpts=5*PTS" YOUR_OUTPUT_MOVIE.mp4
which will slow your movie down by 5x.
Note: when speeding up, this method will drop frames.
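If you want to drive this from a Python script, a minimal sketch using subprocess might look like this (assuming ffmpeg is on the PATH; the file names are placeholders):

import subprocess

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-vf", "setpts=0.20*PTS",  # 0.20 => 5x speed-up; use e.g. 5*PTS to slow down
    "output.mp4",
], check=True)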

Getting current frame with OpenCV VideoCapture in Python

I am using cv2.VideoCapture to read the frames of an RTSP video stream in a Python script. The .read() call is in a while loop that runs once every second; however, I do not get the most current frame from the stream. I get older frames, and in this way my lag builds up. Is there any way I can get the most current frame rather than the older frames that have piped into the VideoCapture object?
I also faced the same problem. It seems that once the VideoCapture object is initialized, it keeps storing frames in some internal buffer and returns a frame from that buffer on every read operation. What I did is initialize the VideoCapture object every time I wanted to read a frame and then release the stream. The following code captures 10 images at intervals of 10 seconds and stores them. The same can be done with a while True loop.
import cv2
import time

for x in range(0, 10):
    cap = cv2.VideoCapture(0)
    ret, frame = cap.read()
    cv2.imwrite('test' + str(x) + '.png', frame)
    cap.release()
    time.sleep(10)
I've encountered the same problem and found a git repository of Azure samples for their computer vision service.
The relevant part is the Camera Capture module, specifically the Video Stream class.
You can see they've implemented a Queue that is being updated to keep only the latest frame:
def update(self):
    while True:
        if self.stopped:
            return
        if not self.Q.full():
            (grabbed, frame) = self.stream.read()
            # if the `grabbed` boolean is `False`, then we have
            # reached the end of the video file
            if not grabbed:
                self.stop()
                return
            self.Q.put(frame)
            # Clean the queue to keep only the latest frame
            while self.Q.qsize() > 1:
                self.Q.get()
I'm working with a friend on a hack doing the same. We don't want to use all the frames. So far we found that very same thing: grab() (or read()) tries to give you every frame, and I guess with RTSP it will maintain a buffer and drop frames if you're not responsive enough.
Instead of read() you can also use grab() and retrieve(). The first one asks for the frame; retrieve() then decodes it into memory. So if you call grab() several times in a row, it will effectively skip the frames in between.
We got away with doing this:

import cv2

cap = cv2.VideoCapture(url)  # the RTSP stream from the question
# show some initial image
while True:
    cap.grab()                 # fetch the next frame without decoding it
    if cv2.waitKey(10) != -1:  # on a keypress, decode and show the latest frame
        ret, im = cap.retrieve()
        # process
        cv2.imshow('frame', im)
Not production code but...
Inside the 'while' you can use:
while True:
    cap = cv2.VideoCapture()
    urlDir = 'rtsp://ip:port/h264_ulaw.sdp'
    cap.open(urlDir)
    # get the current frame
    _, frame = cap.read()
    cap.release()  # releasing camera
    image = frame
Using the following was causing a lot of issues for me; the frames being passed to the function were not sequential.
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    function_that_uses_frame(frame)
    time.sleep(0.5)
The following also didn't work for me, despite being suggested in other answers. I was STILL getting issues with taking the most recent frame.
cap = cv2.VideoCapture(0)
while True:
    ret = cap.grab()
    ret, frame = cap.retrieve()
    function_that_uses_frame(frame)
    time.sleep(0.5)
Finally, this worked, but it's bloody filthy. I only need to grab a few frames per second, so it will do for the time being. For context, I was using the camera to generate data for an ML model, and my labels were out of sync with what was being captured.
while True:
    ret = cap.grab()
    ret, frame = cap.retrieve()
    ret = cap.grab()
    ret, frame = cap.retrieve()
    function_that_uses_frame(frame)
    time.sleep(0.5)
I made an adaptive system, as the approaches others posted here still resulted in somewhat inaccurate frame representation and gave completely variable results depending on the hardware.
from time import time
import cv2
# ...
cap = cv2.VideoCapture(url)
cap_fps = cap.get(cv2.CAP_PROP_FPS)

time_start = time()
time_end = time_start
while True:
    # skip roughly as many frames as arrived during the last iteration
    time_difference = int(((time_end - time_start) * cap_fps) + 1)  # note that the +1 might be changed to fit the script's bandwidth
    for i in range(0, time_difference):
        cap.grab()
    _, frame = cap.read()
    time_start = time()
    # Put your code here
    variable = function(frame)
    # ...
    time_end = time()
This way the number of skipped frames adapts to the number of frames missed in the video stream, allowing for a much smoother transition and a relatively real-time frame representation.
