Python OpenCV: control (increase/decrease) the video playback speed at a custom rate

I'm writing a program to control the video playback speed at a custom rate.
Is there any way to achieve that?
What code should be added to control the playback speed?
import cv2

cap = cv2.VideoCapture('video.mp4')

while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

In the docs it is stated:
Note
This function should be followed by waitKey function which displays
the image for specified milliseconds. Otherwise, it won’t display the
image. For example, waitKey(0) will display the window infinitely
until any keypress (it is suitable for image display). waitKey(25)
will display a frame for 25 ms, after which display will be
automatically closed. (If you put it in a loop to read videos, it will
display the video frame-by-frame)
In the cv2.waitKey(X) function, X is the number of milliseconds for which an image is displayed on the screen. In your case it is set to 1, so in theory you could reach 1000 fps (frames per second). In practice, frame decoding in the VideoCapture object takes time and limits your frame rate. To change the playback speed, declare a variable and use it as the parameter of the waitKey function.
import cv2

cap = cv2.VideoCapture('video.mp4')
frameTime = 10  # time of each frame in ms, you can add logic to change this value

while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(frameTime) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Alternatively, since frame decoding is the most time-consuming task, you can move it to a second thread and use a queue of decoded frames (a minimal sketch of that idea follows). See this link for details.
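The linked implementation is not reproduced here, but a rough sketch of the idea could look like the following: a reader thread fills a bounded queue.Queue with decoded frames while the main thread only displays them. The queue size and the None sentinel are illustrative choices, not part of the original answer.

import queue
import threading

import cv2

def reader(path, frame_queue):
    # decode frames in a background thread and push them into the queue
    cap = cv2.VideoCapture(path)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        frame_queue.put(frame)   # blocks when the queue is full
    frame_queue.put(None)        # sentinel: no more frames
    cap.release()

frame_queue = queue.Queue(maxsize=64)
threading.Thread(target=reader, args=('video.mp4', frame_queue), daemon=True).start()

while True:
    frame = frame_queue.get()
    if frame is None:            # reader finished
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()

Because the display loop only pulls already-decoded frames from the queue, waitKey(1) comes much closer to the theoretical playback rate.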
The third approach is to separate the grabbing and decoding steps and simply decode every nth frame. That results in displaying only a subset of the frames from the source video, but from the user's perspective the video plays faster.
import cv2

cap = cv2.VideoCapture('video.mp4')
i = 0          # frame counter
frameTime = 1  # time of each frame in ms, you can add logic to change this value

while cap.isOpened():
    ret = cap.grab()  # grab frame
    i = i + 1         # increment counter
    if i % 3 == 0:    # display only one third of the frames, change this parameter according to your needs
        ret, frame = cap.retrieve()  # decode frame
        cv2.imshow('frame', frame)
        if cv2.waitKey(frameTime) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()

You can use ffmpeg to speed up (or slow down) your video, much like fast-forwarding, by rescaling the presentation timestamps (PTS).
To speed up, an example would be:
ffmpeg -i YOUR_INPUT_MOVIE.mp4 -vf "setpts=0.20*PTS" YOUR_OUTPUT_MOVIE.mp4
which speeds up your movie 5x.
To slow down, an example would be:
ffmpeg -i YOUR_INPUT_MOVIE.mp4 -vf "setpts=5*PTS" YOUR_OUTPUT_MOVIE.mp4
which slows down your movie 5x.
Note: this method will drop frames.

Related

Keep a thread that contains an infinite loop that updates a variable, and another thread that contains a timer that closes both threads when it ends

I need a main while() loop that updates the screenshot frame all the time, but at one point in the code the synchronization needs to be very precise, so what I need is to create 2 threads or subprocesses (I think using subprocesses is better in this case).
One keeps updating the frames, and the other thread or subprocess runs a delay of 3 seconds and only then starts working with the last frame that was updated (because of this delay it is so important that the frames keep being updated).
This is my code:
import multiprocessing
import time
import cv2
import numpy as np
# library for Optical Character Recognition (OCR)
import pytesseract  # pip install pytesseract

pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract'

# Create a VideoCapture object
cap = cv2.VideoCapture(1)

# Check if camera opened successfully
if not cap.isOpened():
    print("Unable to read camera feed")

# We convert the resolutions from float to integer.
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))

while True:
    ret, frame = cap.read()
    if ret:
        # HERE SHOULD BE THE FORK IN 2 INDEPENDENT PROCESSES
        text = pytesseract.image_to_string(frame)  # OCR in subprocess_2
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break  # Close main loop
This is the flowchart of how the program should work, showing how subprocess_1 keeps updating the value of the frame variable until subprocess_2 finishes executing (in this particular case I plan to start by trying a 3-second delay).
I thought of using a separate function, but I'm really having trouble implementing it. I would also like to know whether it is possible to implement all the frame updates in a single while() loop.
from threading import Thread  # Thread is used below; this import was missing from the snippet

def handle_frame_requests(conn1):
    try:
        while True:
            request = conn1.recv()
            conn1.send(frame)  # The frame must be pickle-able
    except EOFError:
        pass

def capture_cam(conn1):
    global frame
    frame = None
    Thread(target=handle_frame_requests, args=(conn1,), daemon=True).start()
    cap = cv2.VideoCapture(1)  # the same webcam
    if not cap.isOpened():
        print("Unable to read camera!")
    frame_width = int(cap.get(3))
    frame_height = int(cap.get(4))
    while True:
        ret, frame = cap.read()  # here the frame variable is loaded
        if ret:
            cv2.imshow('frame', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        else:
            break
But regardless of how many nested while loops I use, the problem is that I can't get one process to keep updating the webcam while another process keeps a timer, so that when the timer runs out both processes exit and control returns to the main line, where the main while(True): loop can continue receiving data from the webcam.
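Not from the original post, but as a hedged sketch of the two-thread idea described above: one thread keeps overwriting a shared latest_frame variable with whatever the webcam delivers, while the main thread sleeps for the 3-second delay and then works with the most recent frame. The threading.Event used for stopping and the camera index are illustrative assumptions.

import threading
import time

import cv2

latest_frame = None
stop_event = threading.Event()

def capture_loop():
    # keep updating the shared frame until asked to stop
    global latest_frame
    cap = cv2.VideoCapture(1)  # same webcam index as in the question
    while not stop_event.is_set():
        ret, frame = cap.read()
        if ret:
            latest_frame = frame  # only the most recent frame is kept
    cap.release()

threading.Thread(target=capture_loop, daemon=True).start()

time.sleep(3)  # the 3-second delay from the question
if latest_frame is not None:
    # the OCR step (pytesseract.image_to_string) would go here
    print('Working with a frame of shape', latest_frame.shape)

stop_event.set()  # stop the capture thread and return to the main flow

Because Python threads share memory, the main thread can read latest_frame directly; with multiprocessing you would need a Pipe or Queue instead, which is what the handle_frame_requests/capture_cam sketch above is aiming at.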

Cropping a frame in video

What I'm trying to achieve is to crop my ROI in the video frame into a variable and then pass it on as a parameter.
Consider face detection, where x, y, x+w, y+h are the coordinates of the ROI, which is the face, and my aim is to crop that face and show it.
The code below is just to explain my error and problem.
import cv2

cap = cv2.VideoCapture("D:\\Downloads\\video2.mp4")

# x, y, w, h will change according to the video, i.e. where the face is detected.
# For the purpose of explaining, I took these values.
x = 50
y = 100
w = 75
h = 90

while cap.isOpened():
    _, frame = cap.read()
    crop_frame = frame[y:y+h, x:x+w]
    cv2.imshow("Frame", frame)
    cv2.imshow("crop_frame", crop_frame)

cv2.destroyAllWindows()
cap.release()
But upon doing this, I get this error:
crop_frame=frame[y:y+h,x:x+w]
TypeError: 'NoneType' object is not subscriptable
This error wasn't there when I was working with images, but with video input I get this error.
Is there any solution to this problem, or an alternative approach?
Basically you are slicing None when the video ends and no frame can be read: None[y:y+h, x:x+w].
You should use the retval to check whether there is a frame to process; see the doc here: cv::VideoCapture::read.
So, try this code as an experiment:
import cv2

cap = cv2.VideoCapture("video.mp4")

while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        cv2.imshow("Frame", frame)
    else:
        print('None frame:', frame)
        break

cv2.destroyAllWindows()
cap.release()
There is no rendering because there is no time to do so; that's why you need to look at the next example.
A simple script follows as an example. These are the main points:
First, your loop is missing the waitKey() function needed to allow the rendering; see the doc here.
If you want your variable outside the loop, you must define it outside the loop.
Also, you should choose which frame to crop.
You can also add HighGUI (https://docs.opencv.org/4.1.0/d7/dfc/group__highgui.html) controls for selecting the frame, etc.
import cv2

cap = cv2.VideoCapture("video.mp4")

x = 50
y = 100
w = 75
h = 90

crop_frame = None

while cap.isOpened():
    ret, frame = cap.read()
    # if ret: to be added
    cv2.imshow("Frame", frame)

    keypressed = cv2.waitKey(10)
    if keypressed == ord('q'):
        break
    if keypressed == ord('f'):
        crop_frame = frame[y:y+h, x:x+w]
        cv2.imshow("crop_frame", crop_frame)

cv2.destroyAllWindows()
cap.release()
cv2.imwrite('crop_frame.jpg', crop_frame)
You run it and it shows the video.
Press 'F' and the current frame is cropped and presented in a new window.
Press 'Q': the loop exits and the cropped frame is saved as an image.

OpenCV taking single camera frame while camera still running

I wrote a script for image processing. I need to take a frame from the camera and then perform some operations on it. I can do this, but the time the script needs to initialize the camera is very long. Is there any solution so that my script runs with the camera working all the time and, for example, a frame is saved when I press a button?
This is my code for now:
import cv2

cap = cv2.VideoCapture(1)
cap.set(3, 640)
cap.set(4, 480)

while True:
    _, img = cap.read()
    cv2.imshow('Output', img)
    if cv2.waitKey(1) & 0xFF == ord('s'):
        print('DO IMAGE PROCESSING...')
    elif cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
The problem is that when I press "q", sometimes it doesn't stop. Can you give me advice on which loop, or maybe which library, I should use for that?
Thanks!
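One likely reason the "q" press is sometimes missed is that cv2.waitKey(1) is called twice per iteration, so each call only sees some of the key events. A minimal sketch of one possible fix (an assumption, not the original poster's code) reads the keyboard once per loop and reuses the result:

import cv2

cap = cv2.VideoCapture(1)
cap.set(3, 640)
cap.set(4, 480)

while True:
    _, img = cap.read()
    cv2.imshow('Output', img)

    key = cv2.waitKey(1) & 0xFF  # read the keyboard only once per frame
    if key == ord('s'):
        print('DO IMAGE PROCESSING...')
    elif key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

This also keeps the camera open the whole time, so pressing 's' can trigger the processing (or an imwrite) on the current frame without re-initializing the capture.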

imshow() with desired framerate with opencv

Is there any workaround for using cv2.imshow() with a specific framerate? I'm capturing the video via VideoCapture and doing some easy post-processing on the frames (both in a separate thread, so it loads all frames into a Queue and the main thread isn't slowed down by the computation). I tried to fix the framerate by calculating the time used for "reading" the image from the queue and then subtracting that value from the number of milliseconds available for one frame:
if I have an input video with 50 FPS and I want to play it back in real time, I do 1000/50 => 20 ms per frame,
and then wait that time using cv2.waitKey().
But I still get laggy output, which is slower than the source video.
I don't believe there is such a function in OpenCV, but maybe you could improve your method by adding a dynamic wait time using timers, e.g. timeit.default_timer():
calculate the time taken to process and subtract that from the expected frame time, and maybe add a few ms of buffer,
e.g. cv2.waitKey((1000/50) - (time processing finished - time read started) - 10),
or you could have more rigid timing, e.g. script start time + frame# * 20 ms - time processing finished.
I haven't tried this personally, so I'm not sure if it will actually work; it might also be worth adding a check so the number isn't below 1.
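As a rough sketch of that dynamic-wait idea (untested, assuming a 50 FPS source and a simple lower bound instead of the extra buffer), the per-frame wait could be derived from how long reading and processing actually took:

import timeit

import cv2

cap = cv2.VideoCapture('video.mp4')
frame_ms = 1000 // 50  # assumed 50 FPS source, i.e. about 20 ms per frame

while cap.isOpened():
    start = timeit.default_timer()
    ret, frame = cap.read()
    if not ret:
        break
    # ... any post-processing of the frame would go here ...
    elapsed_ms = int((timeit.default_timer() - start) * 1000)
    wait = max(1, frame_ms - elapsed_ms)  # never wait less than 1 ms
    cv2.imshow('frame', frame)
    if cv2.waitKey(wait) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()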
I faced the same issue in one of my projects, where my source video had 2 fps. So in order to display it nicely with cv2.imshow, I used a delay before displaying each frame. It's a kind of hack, but it worked for me. The code for this hack is given below. Hope you get some help from it. Peace!
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
width = 400
height = 350

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (width, height))
    flipped = cv2.flip(frame, 1)
    framerot = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
    framerot = cv2.resize(framerot, (width, height))
    StackImg = np.hstack([frame, flipped, framerot])
    # Put the sleep time according to your fps
    time.sleep(2)
    cv2.imshow("ImageStacked", StackImg)
    if cv2.waitKey(1) & 0xff == ord('q'):
        break

cv2.destroyAllWindows()

load Video Using OpenCV in python

Here is my code, which runs fine... but I didn't understand why we used:
if cv2.waitKey(1000) & 0xFF == ord('q'):
    break
under the "Display the resulting frame" comment in the code:
import numpy as np
import cv2

cap = cv2.VideoCapture('C:\\Users\\KRK\\Desktop\\Dec17thVideo.mp4')

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1000) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
waitKey displays the image for the specified number of milliseconds. Without it, you actually wouldn't be able to see anything. Then 0xFF == ord('q') detects when the q key is pressed on the keyboard.
Think of waitKey as a pause function. After the code has been executed (at lightning speed :), waitKey says: pause for 1000 milliseconds to display the frame. Within that pause, detect whether the user pressed q. If q was pressed, break out of the infinite while loop; when that happens, the window is no longer shown.
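Spelled out with an intermediate variable (and an added ret check that the question's snippet does not have), the same loop can be read like this:

import cv2

cap = cv2.VideoCapture('video.mp4')  # stand-in path for the file in the question

while True:
    ret, frame = cap.read()
    if not ret:  # stop when the video ends
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)

    key = cv2.waitKey(1000) & 0xFF  # pause up to 1000 ms per frame, keep the low byte of the key code
    if key == ord('q'):             # ord('q') is the numeric code of the letter q
        break

cap.release()
cv2.destroyAllWindows()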
Their documentation is a good resource as well.
