Saving image frames from a video without losing resolution and quality - Python

I want to save the frames extracted from a video without losing any information, resolution, or quality. I have saved them using OpenCV in four formats: PNG, BMP, JPG and TIF. The code is below:
file = "video.MP4"
video = cv2.VideoCapture(file)
# Find OpenCV version
(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
# With webcam get(CV_CAP_PROP_FPS) does not work.
# Let's see for ourselves.
if int(major_ver) < 3 :
fps = video.get(cv2.cv.CV_CAP_PROP_FPS)
print ("Frames per second using video.get(cv2.cv.CV_CAP_PROP_FPS): {0}".format(fps))
else :
fps = video.get(cv2.CAP_PROP_FPS)
print ("Frames per second using video.get(cv2.CAP_PROP_FPS) : {0}".format(fps))
# Number of frames to capture
num_frames = 120;
print ("Capturing {0} frames".format(num_frames))
# Start time
start = time.time()
data = []
# Grab a few frames
for i in range(0, num_frames) :
ret, frame = video.read()
data.append(frame)
print("Shape",frame.shape)
# End time
end = time.time()
# Time elapsed
seconds = end - start
print ("Time taken : {0} seconds".format(seconds))
# Calculate frames per second
fps = num_frames / seconds;
print ("Estimated frames per second : {0}".format(fps))
for i,img in enumerate(data):
fileName = 'Frames\img_'+ str(i) + '.tif'
cv2.imwrite(fileName,img)
# Release video
video.release()
My questions are:
In which of the formats (PNG, BMP, JPG, TIF) should I save so that the resolution and quality of the video are not lost?
Does just putting an extension like .tif in cv2.imwrite() save the file in that particular format?
Which library is best for saving: OpenCV, scikit-image or Pillow?
Any help or suggestion will be highly appreciated.
Regards

Well, .jpg is a lossy format, so you can lose some information. It'd be better to use the .png format to save the video frames, because then you can also set the IMWRITE_PNG_COMPRESSION level. In cv2 its value ranges from 0 to 9; a higher value means stronger compression and a longer encoding time, but PNG compression is lossless at every level, so no image information is lost either way. The value defaults to 1 in cv2, and you can set it to 0 if you don't want to compress at all. An example is as follows:
# Let's say you stored a single video frame in a variable named 'frame'.
# Save it as follows to avoid any loss of information:
png_compression_level = 0  # no compression whatsoever (means a larger file size)
cv2.imwrite(dest_dir_path, frame,
            [int(cv2.IMWRITE_PNG_COMPRESSION), png_compression_level])
You can then verify whether the stored image lost any information by using a pip package called sewar:
from sewar.full_ref import rmse, psnr, uqi, ssim

img = cv2.imread('read the frame you saved as an image')
print("RMSE: ", rmse(img, frame))  # 0 is the ideal value
print("PSNR: ", psnr(img, frame))  # inf is the ideal value
print("SSIM: ", ssim(img, frame))  # (1, 1) is the ideal value
print("UQI: ", uqi(img, frame))    # 1 is the ideal value

Related

Best way to store images so that they are the same across formats

We are designing an algorithm that builds a Merkle tree from a list of perceptual hashes. The hashes are generated for every frame that we capture from a video. The incentive behind this is that we are able to identify the hashes even if the video format has changed.
To verify this, we had two videos: Video.mp4 and Video.avi. We extracted frames at 30 fps and ran pHash over these images. To test our functionality, it is imperative that the two images at every instant (one from .mp4 and one from .avi) stay the same. However, there are still some differences between those two images.
Including code for reference:
Extract frames from video:
import os
import cv2
import numpy as np
import imagehash
from PIL import Image

def extract_frames(file_path, write_to_path, fps=30):
    cap = cv2.VideoCapture(file_path)
    count = 0
    os.mkdir(f'{write_to_path}/frames')
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            grayed_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            cv2.imwrite(f"{write_to_path}/frames/frame{count}.bmp", grayed_image)
            count += fps  # i.e. at 30 fps, this advances one second
            cap.set(1, count)  # 1 == cv2.CAP_PROP_POS_FRAMES
        else:
            cap.release()
            break
    print(f"Frame Extraction complete. Extracted {count // fps} frames.")
    return count
Test if two images are similar
def check_images(path_1, path_2):
    img_1 = cv2.imread(path_1, 0)
    img_2 = cv2.imread(path_2, 0)
    if img_1.shape == img_2.shape:
        difference = cv2.subtract(img_1, img_2)
        print(difference)
        result = not np.any(difference)
        return result
    print("Unequal shapes, ", img_1.shape, img_2.shape)
    return False
The Perceptual hash function
def generate_p_hashes(count, frame_path, fps=30):
    count_two = 0
    hashes = []
    # fileToWrite = open('/content/hash.txt', 'a')
    while count_two != count:
        temp_hash = imagehash.phash(Image.open(f"{frame_path}/frames/frame{count_two}.bmp"))
        count_two += fps
        str_temp_hash = str(temp_hash)
        hashes.append(str_temp_hash)
    print(f"PHash generation complete. Generated {count_two // fps} hashes")
    return hashes
imagehash is a Python package available at: https://github.com/JohannesBuchner/imagehash
The images (not shown here):
a. Frame captured from the .avi file
b. Frame captured from the .mp4 file
Here's what I've tried:
Convert image to grayscale so color channels are excluded.
Try all different image formats (JPEG, PNG with compression 0, TIFF, BMP)
What is the best way to store these images so that, irrespective of the video source I am extracting from, the image stays the same?
Lossy-compressed files or video streams produced by different technologies will never give you exactly the same content from the same original source; this is not possible. With a high compression ratio, the images can be quite different.
If the goal is to authenticate, watermark or detect copies, you need to use features that are robust to lossy compression/decompression.
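Since the post already uses the imagehash package, one way to get that robustness is to compare the perceptual hashes themselves and accept a small Hamming distance, instead of requiring byte-identical frames. A minimal sketch (the file paths and the threshold are assumptions, not values from the post):
import imagehash
from PIL import Image

# phash returns a 64-bit hash; subtracting two hashes gives their Hamming distance
hash_mp4 = imagehash.phash(Image.open('frames_mp4/frame0.bmp'))
hash_avi = imagehash.phash(Image.open('frames_avi/frame0.bmp'))

distance = hash_mp4 - hash_avi
threshold = 5  # assumed tolerance; tune it on your own data
print("Frames match" if distance <= threshold else "Frames differ")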

Compress video in Python using OpenCV and a custom algorithm

So I want to create a program that compresses a video using this method:
Save one frame of the video each second, and for the rest of the frames in that second save only the changes from that main frame. Then combine all that information to create a smaller-sized video.
My idea is to iterate through the frames, save each frame in an array, and when I reach a main frame, find the differences and rewrite all the other frames. However, I'm not sure how I can create a video from the differences and a main frame afterward... Should I use cv2.VideoWriter, or does that only work with frames that haven't been altered?
This is what I have for now (I haven't saved the frames yet because I'm not sure of the format that I need):
import cv2
import time

imcap = cv2.VideoCapture('test2.mp4')  # test 1
sample_rate = 30

success, img = imcap.read()  # get the next frame
frame_no = 0

fps = imcap.get(cv2.CAP_PROP_FPS)
print("Frames per second: {0}".format(fps))

while success:
    frame_no += 1
    if frame_no % sample_rate == 0:
        cv2.imshow('frame', img)
        cv2.waitKey()
        print(frame_no)
    # read next frame
    success, img = imcap.read()

imcap.release()
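A rough sketch of the keyframe-plus-differences idea described above (not from the thread), assuming the frames fit in memory: differences are stored as signed 16-bit arrays so they can be added back exactly, and the reconstructed frames are written with cv2.VideoWriter. The output file name and codec are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture('test2.mp4')
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
key_interval = int(round(fps))          # one keyframe per second

records = []                            # ('key', frame) or ('diff', signed_diff)
keyframe = None
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_no % key_interval == 0:
        keyframe = frame
        records.append(('key', frame))
    else:
        # signed difference so it can be inverted exactly
        records.append(('diff', frame.astype(np.int16) - keyframe.astype(np.int16)))
    frame_no += 1
cap.release()

# Rebuild the video from the keyframes and differences with cv2.VideoWriter.
h, w = records[0][1].shape[:2]
out = cv2.VideoWriter('rebuilt.avi', cv2.VideoWriter_fourcc(*'XVID'), fps, (w, h))
current_key = None
for kind, data in records:
    if kind == 'key':
        current_key = data
        out.write(data)
    else:
        restored = np.clip(current_key.astype(np.int16) + data, 0, 255).astype(np.uint8)
        out.write(restored)
out.release()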

How does time work for the .read() command? Extracting a frame from a certain time with OpenCV? [duplicate]

Is there a way to get a specific frame using VideoCapture() method?
My current code is:
import numpy as np
import cv2
cap = cv2.VideoCapture('video.avi')
This is my reference tutorial.
The following code can accomplish that:
import cv2
cap = cv2.VideoCapture(videopath)
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number-1)
res, frame = cap.read()
frame_number is an integer in the range 0 to the number of frames in the video.
Notice: you should set frame_number-1 to force reading frame frame_number. It's not documented well but that is how the VideoCapture module behaves.
One may obtain the number of frames with:
amount_of_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
Thank you GPPK.
The video parameters should be given as integers. Each flag has its own value. See here for the code.
The correct solution is:
import numpy as np
import cv2

#Get video name from user
#Given video name must be in quotes, e.g. "pirkagia.avi" or "plaque.avi"
video_name = input("Please give the video name including its extension. E.g. \"pirkagia.avi\":\n")

#Open the video file
cap = cv2.VideoCapture(video_name)

#Set frame_no in range 0.0-1.0
#In this example we have a video of 30 seconds having 25 frames per second, thus we have 750 frames.
#The examined frame must get a value from 0 to 749.
#For more info about the video flags see here: https://stackoverflow.com/questions/11420748/setting-camera-parameters-in-opencv-python
#Here we select the last frame as frame sequence=749. In case you want to select another frame, change the value 749.
#BE CAREFUL! Each video has a different time length and frame rate.
#So make sure that you have the right parameters for the right video!
time_length = 30.0
fps = 25
frame_seq = 749
frame_no = (frame_seq / (time_length * fps))

#The first argument of cap.set(), number 2, selects the positioning property.
#Number 2 is the flag CV_CAP_PROP_POS_AVI_RATIO, the relative position of the video (0 = start, 1 = end).
#The second argument defines the frame position in range 0.0-1.0
cap.set(2, frame_no)

#Read the next frame from the video. If you set frame 749 above then the code will return the last frame.
ret, frame = cap.read()

#Set grayscale colorspace for the frame.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

#Cut the video extension to have the name of the video
my_video_name = video_name.split(".")[0]

#Display the resulting frame
cv2.imshow(my_video_name + ' frame ' + str(frame_seq), gray)

#Set waitKey
cv2.waitKey()

#Store this frame to an image
cv2.imwrite(my_video_name + '_frame_' + str(frame_seq) + '.jpg', gray)

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
If you want an exact frame, you can just set the VideoCapture session to that frame; it's much more intuitive to call for that frame directly. The "correct" solution above requires you to input known data such as the fps and length, whereas with the code below all you need to know is the frame you want.
import numpy as np
import cv2

cap = cv2.VideoCapture(video_name)  # video_name is the video being called
cap.set(1, frame_no)  # Where frame_no is the frame you want (1 == cv2.CAP_PROP_POS_FRAMES)
ret, frame = cap.read()  # Read the frame
cv2.imshow('window_name', frame)  # show frame on window
If you want to hold the window open until you press Esc:
while True:
    ch = 0xFF & cv2.waitKey(1)  # wait 1 ms for a keypress
    if ch == 27:  # Esc
        break
Set a specific frame
From the documentation of VideoCaptureProperties (docs) it is possible to see that the way to set the frame in a VideoCapture is:
frame = 30
cap.set(cv2.CAP_PROP_POS_FRAMES, frame)
Notice that you don't have to pass frame - 1 to the function because, as the documentation says, the flag CAP_PROP_POS_FRAMES represents the "0-based index of the frame to be decoded/captured next".
To conclude, a full example where I want to read one frame each second is:
import cv2

cap = cv2.VideoCapture('video.avi')

# Get the frames per second
fps = cap.get(cv2.CAP_PROP_FPS)

# Get the total number of frames in the video.
frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)

frame_number = 0
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number)  # optional
success, image = cap.read()

while success and frame_number <= frame_count:
    # do stuff
    frame_number += fps
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number)
    success, image = cap.read()
Set a specific time
In the documentation linked above it is possible to see that the way to set a specific time in a VideoCapture is:
milliseconds = 1000
cap.set(cv2.CAP_PROP_POS_MSEC, milliseconds)
And as before, a full example that reads one frame each second can be achieved in this way:
import cv2

cap = cv2.VideoCapture('video.avi')

# Get the frames per second
fps = cap.get(cv2.CAP_PROP_FPS)

# Get the total number of frames in the video.
frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)

# Calculate the duration of the video in seconds
duration = frame_count / fps

second = 0
cap.set(cv2.CAP_PROP_POS_MSEC, second * 1000)  # optional
success, image = cap.read()

while success and second <= duration:
    # do stuff
    second += 1
    cap.set(cv2.CAP_PROP_POS_MSEC, second * 1000)
    success, image = cap.read()
For example, to start reading from the 15th frame of the video you can use:
frame = 15
cap.set(cv2.CAP_PROP_POS_FRAMES, frame-1)
In addition, I want to say that using the CAP_PROP_POS_FRAMES property does not always give you the correct result, especially when you deal with compressed files like mp4 (H.264).
In my case, when I call cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number) for an .mp4 file it returns False, but when I call it for an .avi file it returns True.
Take this into consideration when deciding to use this 'feature'.
very-hit recommends using the CV_CAP_PROP_POS_MSEC property.
Read this thread for additional info.
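A minimal sketch of that workaround, seeking by time instead of by frame index (the file name and frame number are placeholders):
import cv2

cap = cv2.VideoCapture('video.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
frame_number = 100  # the frame you want
cap.set(cv2.CAP_PROP_POS_MSEC, frame_number / fps * 1000.0)
ok, frame = cap.read()
cap.release()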

Python store video as hdf5 results in large file size

I am trying to store a video clip frame by frame in an HDF5 file.
My code works so far, but I noticed that, compared to the source video file, the HDF5 file is more than 10 times bigger.
Input file: avi 200 x 126px, duration: 16 minutes, size: 82 MB
Output file: hdf5, gzip compression, compression = 9, size: 1 GB
The code to store the frames is pretty simple:
import h5py
from skvideo.io import VideoCapture

frames = []

cap = VideoCapture('/home/ubuntu/PycharmProjects/video2H5Test/data/video_F100_scaled2.avi')
cap.open()

it = 0
while True:
    retval, image = cap.read()
    if image is not None:
        frames.append(image)
        it += 1
        if (it % 1000 == 0):
            print('Processed %d frames so far' % (it))
    if not retval:
        break

with h5py.File('./test3.hdf5', 'w') as h5File:
    h5File.create_dataset('camera1', data=frames, compression='gzip', compression_opts=9)
As you can see I already use gzip to compress my dataset.
Is there any other way to save memory consumption?
For those who came across the same problem:
Initialize your dataset with the first image:
myDataSet = myFile.create_dataset('someName', data=image[None, ...],
                                  maxshape=(None, image.shape[0], image.shape[1], image.shape[2]),
                                  chunks=True)
To add an image, simply resize the whole dataset:
myDataSet.resize(myDataSet.len() + 1, axis=0)
myDataSet[myDataSet.len() - 1] = image
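Putting that pattern together with a reading loop, a sketch of the full append loop might look like this (cv2.VideoCapture is used here instead of skvideo just for brevity, and the output file name is an assumption):
import cv2
import h5py

cap = cv2.VideoCapture('/home/ubuntu/PycharmProjects/video2H5Test/data/video_F100_scaled2.avi')
with h5py.File('test_resizable.hdf5', 'w') as myFile:
    myDataSet = None
    while True:
        retval, image = cap.read()
        if not retval:
            break
        if myDataSet is None:
            # first frame: create a resizable, chunked, compressed dataset
            myDataSet = myFile.create_dataset(
                'camera1', data=image[None, ...],
                maxshape=(None, image.shape[0], image.shape[1], image.shape[2]),
                chunks=True, compression='gzip', compression_opts=9)
        else:
            myDataSet.resize(myDataSet.len() + 1, axis=0)
            myDataSet[myDataSet.len() - 1] = image
cap.release()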
What is your chunking scheme in the output HDF5 file? Compression is done per chunk, so considering that most of the information in a video does not change from frame to frame, you should get a much better compression ratio when multiple frames live in the same chunk. I can try it out if you provide a sample video file.
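To act on that point, the chunk shape can be chosen explicitly so that several consecutive frames share a chunk; gzip then compresses the redundancy between frames rather than within a single frame. A sketch (the frame size matches the 200 x 126 px input mentioned in the question; 32 frames per chunk is an assumed value to tune):
import h5py
import numpy as np

with h5py.File('test_chunked.hdf5', 'w') as f:
    dset = f.create_dataset(
        'camera1',
        shape=(0, 126, 200, 3),            # (frames, height, width, channels)
        maxshape=(None, 126, 200, 3),
        dtype=np.uint8,
        chunks=(32, 126, 200, 3),          # 32 consecutive frames per chunk
        compression='gzip',
        compression_opts=9)
    # append frames exactly as above:
    #   dset.resize(dset.shape[0] + 1, axis=0)
    #   dset[-1] = frame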
