OpenCV - reading every n frames - Python

I have written the following code
import cv2
import datetime
import time
import pandas as pd

cascPath = 'haarcascade_frontalface_dataset.xml' # dataset
faceCascade = cv2.CascadeClassifier(cascPath)
video_capture = cv2.VideoCapture('video1.mp4')
frames = video_capture.get(cv2.CAP_PROP_FRAME_COUNT)
fps = int(video_capture.get(cv2.CAP_PROP_FPS))
print(frames) # 1403 frames
print(fps) # 30 fps

# calculate duration of the video
seconds = int(frames / fps)
print("duration in seconds:", seconds) # 46 seconds

df = pd.DataFrame(columns=['Time(Seconds)', 'Status'])
start = time.time()
print(start)
n = 5

while True:
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # converts frame to grayscale image
    faces = faceCascade.detectMultiScale(
        gray, scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.FONT_HERSHEY_SIMPLEX
    )
    if len(faces) == 0:
        print(time.time() - start, 'No Face Detected')
        df = df.append({'Time(Seconds)': (time.time() - start), 'Status': 'No Face detected'}, ignore_index=True)
    else:
        print(time.time() - start, 'Face Detected')
        df = df.append({'Time(Seconds)': (time.time() - start), 'Status': 'Face Detected'}, ignore_index=True)

    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow('Video', frame)
    df.to_csv('output.csv', index=False)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        # print(df.head(2))
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
If you want to download the video I'm working on, you can download it from here.
You can download the Haar cascade XML file from here.
I have a few doubts about this.
Currently the script runs on all 1403 frames of the video. I want to optimize it so that it runs inference only every n frames, where n is customizable; in the code I have set n = 5. So with n = 5, the number of processed frames should be about 1403 / 5 ≈ 280.
The timestamps in my CSV are not accurate. I want them to be relative to the video: the Time(Seconds) column should give the time within the video and the Status column should say whether a face was detected in the frame at that moment, so the Time(Seconds) column should end at around 46 seconds, the length of the video.
My cv2.imshow window plays the video at roughly 2x speed. I believe I can control the speed with cv2.waitKey(); what parameter should I pass to cv2.waitKey() so that the displayed video plays at about the original speed?
Thanks for going through the whole question.

If you only want to process every n-th frame, you can wrap your VideoCapture.read() call in a loop like this:
for a in range(n):
    ret, frame = video_capture.read()
For the timestamps in the csv file, if that came with the dataset I'd trust it. It's possible the camera isn't capturing at a consistent framerate. If you think the framerate is consistent and want to generate the timestamps yourself, you can keep track of how many frames you've read and scale the video length by that fraction (i.e. at frame 150 the timestamp would be (150 / 1403) * 46 seconds).
cv2.imshow() just shows frames as fast as the loop runs; the pace is mostly controlled through cv2.waitKey(milliseconds). If you think the processing you're doing in the loop takes a negligible amount of time, you can simply set the wait to ((n / 1403) * 46 * 1000) milliseconds. Otherwise you should use the Python time module to track how long the processing takes and subtract that time from the wait.
Edit:
Sorry, I should have been more clear with the first part. That for loop only has the VideoCapture.read() line in it, nothing else. This way you'll read 'n' frames, but only process one out of every 'n' frames. This isn't replacing the overall while loop that you already have. You're just using the for loop to dump the frames you want to skip.
Oh, and you should also have a check for the return value of the read().
if not ret:
    break
The program will probably crash at the end of the video if it doesn't have that check.
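Putting those three pieces together, here is a minimal sketch of how the loop could look, assuming the same video1.mp4, cascade file and CSV columns as in the question (rows are collected in a list instead of calling DataFrame.append on every frame):
import cv2
import pandas as pd

n = 5
cascPath = 'haarcascade_frontalface_dataset.xml'  # same cascade file as in the question
faceCascade = cv2.CascadeClassifier(cascPath)
video_capture = cv2.VideoCapture('video1.mp4')
fps = video_capture.get(cv2.CAP_PROP_FPS)

rows = []
frame_index = 0
while True:
    # Read n frames but keep only the last one; the earlier ones are simply dropped.
    for _ in range(n):
        ret, frame = video_capture.read()
        if not ret:
            break
        frame_index += 1
    if not ret:
        break
    # Timestamp relative to the video: frames read so far divided by the frame rate.
    timestamp = frame_index / fps
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
    status = 'Face Detected' if len(faces) > 0 else 'No Face Detected'
    rows.append({'Time(Seconds)': timestamp, 'Status': status})
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('Video', frame)
    # Wait roughly the real-time duration of the n skipped frames (in milliseconds)
    # so the preview plays back at about the original speed.
    if cv2.waitKey(int(n / fps * 1000)) & 0xFF == ord('q'):
        break

pd.DataFrame(rows).to_csv('output.csv', index=False)
video_capture.release()
cv2.destroyAllWindows()
With n = 5 and a 30 fps video this processes roughly one frame every 166 ms, so about 280 of the 1403 frames end up in the CSV.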

Related

Increase smoothness of video

I am trying to create a Python script that scans a barcode and retrieves the output. A library called pyzbar already exists for this purpose. Using it together with OpenCV, I wrote the code attached below for scanning and drawing bounding boxes on the barcode/QR code. The problem I'm facing is that when I feed in a pre-recorded video above 100 MB, the output video that is displayed/saved is very slow, but with a live stream there is no such issue. I tried several methods to reduce the fps, such as PROP_FPS, but nothing worked. I even tried a multithreading approach and it seems to have no effect. The code I was referring to is attached below. Please help me out with this.
import cv2
import numpy as np
from pyzbar.pyzbar import decode

cv2.namedWindow("Result", cv2.WINDOW_NORMAL)
cap = cv2.VideoCapture('2016_0806_040333_0081.mp4')
cap.set(3, 1280)
cap.set(4, 720)
#frame_width = int(cap.get(3))
#frame_height = int(cap.get(4))
#cap.set(cv2.CAP_PROP_FPS, 0.1)
#size = (frame_width, frame_height)
#result = cv2.VideoWriter('processed_video.avi', cv2.VideoWriter_fourcc(*'MJPG'), 0.1, size)

while True:
    ret, frame = cap.read()
    for barcode in decode(frame):
        myData = barcode.data.decode('utf-8')
        pts = np.array([barcode.polygon], np.int32)
        pts = pts.reshape((-1, 1, 2))
        cv2.polylines(frame, [pts], True, (255, 0, 255), 5)
        pts2 = barcode.rect
        akash = []
        akash.append(myData)
        cv2.putText(frame, myData, (pts2[0], pts2[1]), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 255), 2)
        """
        f = open('output.csv', 'a+')
        for ele in akash:
            f.write(ele + '\n')
        """
    result.write(frame)
    cv2.imshow('Result', frame)
    cv2.waitKey(1)

#video.release()
#result.release()
#cv2.destroyAllWindows()
print("The video was successfully saved")
You could change the frame rate of your project, but this won't add missing frames. Even if your editor can attempt to extrapolate the needed frames, it can result in glitches.
Best approach if you have the option is to shoot the video at a higher frame rate to begin with.
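A different option, closer in spirit to the frame-skipping idea from the first answer on this page: run the expensive decode() call only on every n-th frame and reuse the last result for drawing, while still displaying every frame. A rough sketch (the value of n and the rectangle drawing are illustrative, not tuned):
import cv2
from pyzbar.pyzbar import decode

cap = cv2.VideoCapture('2016_0806_040333_0081.mp4')
n = 5          # decode only every n-th frame (illustrative value)
frame_index = 0
barcodes = []  # results of the most recent decode() call

while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_index += 1
    if frame_index % n == 0:
        barcodes = decode(frame)  # the expensive step, done only occasionally
    for barcode in barcodes:
        x, y, w, h = barcode.rect  # reuse the last known location for display
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 255), 2)
        cv2.putText(frame, barcode.data.decode('utf-8'), (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 255), 2)
    cv2.imshow('Result', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()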

How can I recognize every 30th frame and ignore the rest?

I am developing a GUI with PyQt5 and I am stuck.
Because my program runs on a Raspberry Pi 4, I have limited processing power. I am getting video input from my webcam and want to perform face_recognition operations on this input. Due to the limited processing power I need to ignore most of the input frames and only use every n-th frame for face recognition, to speed up the process.
I tried to program a delay similar to this thread (Call function every x seconds (Python)), but it didn't work. Is there a possibility to refer directly to a frame?
This is the function where I am reading from the webcam:
def run(self):
    checker = 0
    process_this_frame = 0
    # capture from web cam
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 300)
    while True:
        ret, cv_img = cap.read()
        if ret:
            img = cv2.resize(cv_img, (0, 0), fx=0.25, fy=0.25)
            process_this_frame = process_this_frame + 2
            print('process_this_frame: ', process_this_frame)
            if process_this_frame % 20 == 0:
                predictions = predict(img, model_path="trained_knn_model.clf")
                print('showing predicted face')
                cv_img = show_prediction_labels_on_image(cv_img, predictions)
                checker = 1
                self.change_pixmap_signal.emit(cv_img)
            else:
                checker = 0
                self.change_pixmap_signal.emit(cv_img)
Specifically, I am looking for a suitable if condition that runs the predict function only on every n-th frame; when I am not running predict on cv_img, I just want to display the plain frame in the else case. I tried several modulo operators but did not find a suitable solution.
How can I do that? It would be ideal to refer to a number of frames instead of using a time delay, so I can experiment to find the best value.
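A simple frame counter plus a modulo test is usually enough for this. Below is a minimal sketch of the idea outside of PyQt; the predict() and show_prediction_labels_on_image() helpers from the question are only referenced in a comment, and n = 30 is an illustrative value:
import cv2

n = 30  # run the expensive step only on every n-th frame (illustrative)
frame_count = 0

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 300)

while True:
    ret, cv_img = cap.read()
    if not ret:
        break
    frame_count += 1
    if frame_count % n == 0:
        # This is where the expensive call would go, e.g. the predict() /
        # show_prediction_labels_on_image() helpers from the question.
        print('processing frame', frame_count)
    cv2.imshow('preview', cv_img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()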

How does time work for .read() command? Extracting a frame from certain time with OpenCV? [duplicate]

How to capture frame from toll gate camera video only when a vehicle is at a halt, using Python OpenCV?

I have a scenario where only trucks pass a toll gate, and I want to capture the number plate only when the truck has halted (to get a good quality image to run OCR on). The OCR solution is built, but capturing a frame each time a truck comes to a halt seems tricky to me.
Can you help me with the approach, or similar working code, to achieve this using Python 3.6+ and OpenCV? I don't want to run any explicit model to detect motion; a simple background subtraction would do, to avoid overhead.
Sample image frame from the video: click here.
Here is the code I'm currently working on. It checks whether the background subtraction between two consecutive frames exceeds a 10% threshold and, if so, captures the frame. I need to do just the opposite: if the background subtraction is (close to) zero, capture the frame. More logic also needs to be added; for example, after capturing a frame, all the following static frames (which are true positives) have to be skipped until the next truck arrives and comes to a halt.
The code:
import os
import cv2

x_0 = 720
x_1 = 870
y_0 = 190
y_1 = 360

fgbg = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture(r"C:\\Users\\aa\\file.asf")
i = 0

while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame = cv2.GaussianBlur(frame, (21, 21), 0)
        fgmask = fgbg.apply(frame)
        fgmask_crop = fgmask[y_0:y_1, x_0:x_1]
        frame_crop = frame[y_0:y_1, x_0:x_1]
        #out_video.write(frame_crop)
        cv2.imshow("crop", fgmask_crop)
        fg = cv2.copyTo(frame, fgmask)
        bg = cv2.copyTo(frame, cv2.bitwise_not(fgmask))
        pixels = cv2.countNonZero(fgmask_crop)
        image_area = frame_crop.shape[0] * frame_crop.shape[1]
        area_ratio = (pixels / image_area) * 100
        if area_ratio > 10:
            i = i + 1
            print(i)
            target = 'C:\\Users\\aa\\op'
            fileName = ("res%d.png" % (i))
            path_nm = os.path.join(target, fileName)
            cv2.imwrite(path_nm, frame_crop)
        key = cv2.waitKey(25)
        if key == ord('q'):
            break
    else:
        break

cv2.destroyAllWindows()
#out.release()
cap.release()
Any help will be highly appreciated.
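One way to invert the condition and avoid re-capturing the same halted truck is a small state flag that is armed by motion and fired once the crop goes static again. Below is a rough sketch of that idea, reusing the crop coordinates and MOG2 subtractor from the question; the 10% / 1% thresholds and the 'armed' logic are illustrative, not tuned:
import os
import cv2

x_0, x_1, y_0, y_1 = 720, 870, 190, 360
fgbg = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture(r"C:\Users\aa\file.asf")

armed = False  # becomes True once motion has been seen, so we capture the next halt
i = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    fgmask_crop = fgbg.apply(gray)[y_0:y_1, x_0:x_1]
    frame_crop = gray[y_0:y_1, x_0:x_1]
    area_ratio = cv2.countNonZero(fgmask_crop) / (frame_crop.shape[0] * frame_crop.shape[1]) * 100
    if area_ratio > 10:
        # Significant motion in the crop: a truck is passing, arm the capture.
        armed = True
    elif area_ratio < 1 and armed:
        # Motion seen earlier has stopped: save one frame, then wait for the next truck.
        i += 1
        cv2.imwrite(os.path.join(r"C:\Users\aa\op", "res%d.png" % i), frame_crop)
        armed = False
    if cv2.waitKey(25) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()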

OpenCV/Python: read specific frame using VideoCapture

Is there a way to get a specific frame using VideoCapture() method?
My current code is:
import numpy as np
import cv2
cap = cv2.VideoCapture('video.avi')
This is my reference tutorial.
The following code can accomplish that:
import cv2
cap = cv2.VideoCapture(videopath)
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number-1)
res, frame = cap.read()
frame_number is an integer in the range 0 to the number of frames in the video.
Notice: you should set frame_number-1 to force reading frame frame_number. It's not documented well but that is how the VideoCapture module behaves.
You can obtain the number of frames with:
amount_of_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
Thank you GPPK.
The video parameters should be given as integers. Each flag has its own value. See here for the code.
The correct solution is:
import numpy as np
import cv2
#Get video name from user
#Given video name must be in quotes, e.g. "pirkagia.avi" or "plaque.avi"
video_name = input("Please give the video name including its extension. E.g. \"pirkagia.avi\":\n")
#Open the video file
cap = cv2.VideoCapture(video_name)
#Set frame_no in the range 0.0-1.0
#In this example we have a video of 30 seconds at 25 frames per second, thus we have 750 frames.
#The examined frame must get a value from 0 to 749.
#For more info about the video flags see here: https://stackoverflow.com/questions/11420748/setting-camera-parameters-in-opencv-python
#Here we select the last frame as frame sequence = 749. If you want to select another frame, change the value 749.
#BE CAREFUL! Each video has a different length and frame rate.
#So make sure that you have the right parameters for the right video!
time_length = 30.0
fps = 25
frame_seq = 749
frame_no = (frame_seq / (time_length * fps))
#The first argument of cap.set(), number 2, selects the property being set.
#Number 2 is the flag CV_CAP_PROP_POS_AVI_RATIO, the relative position of the video as a value in the range 0.0-1.0.
#The second argument is that relative position, i.e. the frame number divided by the total number of frames.
cap.set(2, frame_no)
#Read the next frame from the video. If you set frame 749 above then the code will return the last frame.
ret, frame = cap.read()
#Set grayscale colorspace for the frame.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
#Cut the video extension to have the name of the video
my_video_name = video_name.split(".")[0]
#Display the resulting frame
cv2.imshow(my_video_name+' frame '+ str(frame_seq),gray)
#Set waitKey
cv2.waitKey()
#Store this frame to an image
cv2.imwrite(my_video_name+'_frame_'+str(frame_seq)+'.jpg',gray)
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
If you want an exact frame, you can simply set the VideoCapture session to that frame; it's much more intuitive to address the frame directly. The "correct" solution above requires you to input known data such as the fps and length, whereas with the code below all you need to know is the frame you want.
import numpy as np
import cv2
cap = cv2.VideoCapture(video_name) # video_name is the video being called
cap.set(1,frame_no) # Where frame_no is the frame you want
ret, frame = cap.read() # Read the frame
cv2.imshow('window_name', frame) # show frame on window
If you want to hold the window open until you press Esc:
while True:
    ch = 0xFF & cv2.waitKey(1) # wait 1 ms for a key press
    if ch == 27: # Esc
        break
Set a specific frame
From the documentation of VideoCaptureProperties (docs) it is possible to see that the way to set the frame in the VideoCapture is:
frame = 30
cap.set(cv2.CAP_PROP_POS_FRAMES, frame)
Notice that you don't have to pass frame - 1 to the function because, as the documentation says, the flag CAP_PROP_POS_FRAMES represents the "0-based index of the frame to be decoded/captured next".
To conclude, a full example that reads one frame every second:
import cv2

cap = cv2.VideoCapture('video.avi')
# Get the frames per second
fps = cap.get(cv2.CAP_PROP_FPS)
# Get the total number of frames in the video.
frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)

frame_number = 0
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number) # optional
success, image = cap.read()
while success and frame_number <= frame_count:
    # do stuff
    frame_number += fps
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number)
    success, image = cap.read()
Set a specific time
In the documentation linked above it is possible to see that the way to set a specific time in the VideoCapture is:
milliseconds = 1000
cap.set(cv2.CAP_PROP_POS_MSEC, milliseconds)
As before, a full example that reads one frame every second can be written this way:
import cv2

cap = cv2.VideoCapture('video.avi')
# Get the frames per second
fps = cap.get(cv2.CAP_PROP_FPS)
# Get the total number of frames in the video.
frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)
# Calculate the duration of the video in seconds
duration = frame_count / fps

second = 0
cap.set(cv2.CAP_PROP_POS_MSEC, second * 1000) # optional
success, image = cap.read()
while success and second <= duration:
    # do stuff
    second += 1
    cap.set(cv2.CAP_PROP_POS_MSEC, second * 1000)
    success, image = cap.read()
For example, to start reading from the 15th frame of the video you can use:
frame = 15
cap.set(cv2.CAP_PROP_POS_FRAMES, frame-1)
In addition, I want to note that using the CAP_PROP_POS_FRAMES property does not always give you the correct result, especially when you deal with compressed files like mp4 (H.264).
In my case, when I call cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number) for an .mp4 file it returns False, but when I call it for an .avi file it returns True.
Take this into consideration when deciding to use this 'feature'.
very-hit recommends using the CV_CAP_PROP_POS_MSEC property.
Read this thread for additional info.
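Based on that recommendation, a minimal sketch that seeks by time rather than by frame index (the file name and target frame number are illustrative):
import cv2

cap = cv2.VideoCapture('video.mp4')  # illustrative file name
fps = cap.get(cv2.CAP_PROP_FPS)

frame_number = 150  # the frame we want to land on (illustrative)
# Seek by timestamp instead of frame index; this tends to behave more
# consistently for compressed formats such as H.264/mp4.
cap.set(cv2.CAP_PROP_POS_MSEC, frame_number / fps * 1000)
ret, frame = cap.read()
if ret:
    cv2.imshow('frame %d' % frame_number, frame)
    cv2.waitKey()
cap.release()
cv2.destroyAllWindows()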
