I'm trying to detect whether a camera is capturing frozen frames or black frames. Suppose a camera is capturing video and suddenly the same frame is captured again and again. I have spent a long time looking for an approach to this problem but failed. How can this be detected, and what ideas/steps/procedures could solve it?
This was my approach to solving this issue.
Frozen frames: compute the per-pixel absolute difference (over HSV or RGB) between two consecutive frames as numpy arrays, and define a maximum allowed difference below which the frames count as frozen.
Black frames: a black frame naturally has a very low (or zero) sum of V values over the whole frame. Define a maximum V-sum below which the frame counts as "black".
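A minimal Python/OpenCV sketch of that idea (the threshold values are placeholders you would have to tune for your camera's noise level; the mean is used instead of the raw sum so the thresholds do not depend on resolution):

    import cv2

    # Illustrative thresholds: tune them for your camera and its noise level.
    FROZEN_DIFF_THRESH = 2.0   # mean absolute pixel difference below this => "frozen"
    BLACK_V_THRESH = 10.0      # mean V (brightness) below this => "black"

    def is_frozen(prev_frame, frame):
        """Consecutive frames that are (almost) identical indicate a frozen feed."""
        return cv2.absdiff(prev_frame, frame).mean() < FROZEN_DIFF_THRESH

    def is_black(frame):
        """A black frame has a very low overall V (value/brightness) component."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        return hsv[:, :, 2].mean() < BLACK_V_THRESH

    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        if is_black(frame):
            print("black frame")
        elif is_frozen(prev, frame):
            print("frozen frame")
        prev = frame
    cap.release()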
You can use this simple OpenCV check to detect a black frame (note that cv::countNonZero expects a single-channel image, so convert the frame to grayscale first):

    if (cv::countNonZero(frame) == 0)
    {
        // do something if the frame is completely black
    }
I read a video with OpenCV's VideoCapture class and then convert it to frames.
Now I need to increase the FPS, i.e. the number of frames in my video (to create a slow-motion video). I read about frame blending for increasing the frame count in slow-motion videos, so I think that is the approach I need for my problem.
How does frame blending actually work, are there algorithms to implement it in OpenCV, and are there other techniques to increase the number of frames?
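For illustration, here is a basic frame-blending sketch in Python/OpenCV (the file name and the number of inserted in-between frames are assumptions; blending simply averages neighbouring frames, so fast motion will look ghosted compared to optical-flow based interpolation):

    import cv2

    def blend_frames(frame_a, frame_b, n_inserted=1):
        """Create n_inserted intermediate frames by linear blending of two frames."""
        blended = []
        for i in range(1, n_inserted + 1):
            alpha = i / (n_inserted + 1)            # position between frame_a and frame_b
            # Weighted average: (1 - alpha) * frame_a + alpha * frame_b
            blended.append(cv2.addWeighted(frame_a, 1.0 - alpha, frame_b, alpha, 0))
        return blended

    cap = cv2.VideoCapture("input.mp4")
    ok, prev = cap.read()
    out_frames = [prev] if ok else []
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        out_frames.extend(blend_frames(prev, frame, n_inserted=1))  # doubles the frame count
        out_frames.append(frame)
        prev = frame
    cap.release()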
I have two frames taken from a webcam. In these frames there is a white area in which I need to detect small lines. The lines propagate over time, and the white area itself moves slowly up and down. When a line propagates I need to record the time. I have tried the Hough transform to detect the lines, but I think comparing the two images is a better move. Is that possible, given that the frame shifts slightly every time?
Frame 01
Frame 02
As you can see, Frame 02 has moved down a little bit. I want to find the difference between the two frames in a way that is not affected by this small movement.
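One possible way to attack this (my own assumption, not something from the post): estimate the global shift between the two frames with phase correlation, undo it, and only then take the difference, so the slow up/down drift of the white area does not dominate the diff:

    import cv2
    import numpy as np

    f1 = cv2.imread("frame01.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
    f2 = cv2.imread("frame02.png", cv2.IMREAD_GRAYSCALE)

    # Estimate the global (dx, dy) translation between the two frames.
    (dx, dy), _response = cv2.phaseCorrelate(np.float32(f1), np.float32(f2))

    # Shift frame 2 back by that translation so it lines up with frame 1
    # (flip the sign of dx/dy if the shift goes the wrong way for your data).
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])
    f2_aligned = cv2.warpAffine(f2, M, (f2.shape[1], f2.shape[0]))

    # The remaining difference is mostly the newly propagated lines.
    diff = cv2.absdiff(f1, f2_aligned)
    _, changed = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)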
I would like to detect framed video content (frequently used in TV advertising and referred to as single split, program split, etc.).
Example 1:
Example 2 (3 screen captures, 2 seconds offset):
I have the video sequence as well as 3 screen captures available to analyze (taken at the middle, halfway between the middle and the end, and at the end).
To get started, I have already tried a few methods such as bounding-box detection and autocrop algorithms on the screen captures using OpenCV, ImageMagick and PIL. This works to some extent, but not reliably, because:
Every TV station uses its own artwork for the surrounding frame
They sometimes animate the surrounding frame in the first few seconds
The background of the surrounding frame can be static, but also animated, changing colors, etc.
What would be an effective method to get a fairly precise true/false reading on the media examples above? I would appreciate some ideas for building a suitable algorithm.
Thanks
You are essentially looking for two static horizontal lines and two static vertical lines that represent the edges of the inset video clip. What is inside the frame and what is outside the frame may (and will) be changing; only the edges of the inset frame are constant.
I would apply a strong directional filter (Sobel) oriented at 0 and 90 degrees to find the horizontal and vertical lines, then work through some number of frames, accumulating all the edges that the two filters find. The brightest lines in the accumulated image at the end should be the best-defined ones, i.e. the ones that have stayed still the longest.
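A rough sketch of that idea in Python/OpenCV (the file name, frame count and kernel size are assumptions):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("capture.mp4")      # hypothetical input clip
    acc_v = None                               # accumulator for vertical edges
    acc_h = None                               # accumulator for horizontal edges

    for _ in range(200):                       # number of frames to accumulate
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # The x-derivative responds to vertical lines, the y-derivative to horizontal ones.
        sobel_v = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
        sobel_h = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3))
        acc_v = sobel_v if acc_v is None else acc_v + sobel_v
        acc_h = sobel_h if acc_h is None else acc_h + sobel_h
    cap.release()

    # Edges of the static inset frame keep hitting the same pixels, so they end up
    # brightest; projecting the accumulators gives candidate edge positions.
    col_strength = acc_v.sum(axis=0)           # peaks = candidate vertical edge x-positions
    row_strength = acc_h.sum(axis=1)           # peaks = candidate horizontal edge y-positions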
I am getting video input from 2 separate cameras with some area of overlap between the output videos. I have tried out code which combines the video outputs horizontally. Here is the link to that code:
https://github.com/rajatsaxena/NeuroscienceLab/blob/master/positiontracking/combinevid.py
To explain the problem visually:
The red part shows the overlap region between the two image frames. I need the output to look like the second image, with the first frame in blue and the second frame in green (as shown in the third illustration).
A solution I can think of, but have been unable to implement, is: using SIFT/SURF, find the maximum-distance keypoints from both frames, then take the first video frame completely, pick only the non-overlapping region from the second video frame, and combine them horizontally to get the stitched output (see the sketch below).
Let me know of any other possible solutions as well. Thanks!
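To make that proposed idea concrete, here is a rough sketch (ORB is used instead of SIFT/SURF to avoid the contrib module; estimating the overlap from the median horizontal displacement of the matches is my own assumption):

    import cv2
    import numpy as np

    def stitch_horizontal(frame1, frame2):
        """Estimate the horizontal overlap from feature matches and concatenate."""
        orb = cv2.ORB_create(500)
        k1, d1 = orb.detectAndCompute(frame1, None)
        k2, d2 = orb.detectAndCompute(frame2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d1, d2)
        # Horizontal displacement of each matched point between the two frames;
        # the median is a robust estimate of how far frame2 is shifted.
        shifts = [k1[m.queryIdx].pt[0] - k2[m.trainIdx].pt[0] for m in matches]
        shift = int(np.median(shifts))
        overlap = frame1.shape[1] - shift       # width of the shared region
        # Keep frame1 completely, append only the non-overlapping part of frame2.
        return cv2.hconcat([frame1, frame2[:, overlap:]])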
I read this post an hour ago and tried a really simple approach. It is not perfect, but in some cases it should work well, for example when both cameras are mounted side by side on one frame.
I took 2 images with a phone, as in the picture (color images). The program selects rectangular regions from both source images, resizes them and extracts these ROI rectangles. The idea is to find the "best" overlapping rectangular regions by normalized correlation.
M1 and M2 are the Mat ROIs to compare:

    matchTemplate(M1, M2, res, TM_CCOEFF_NORMED);
Afterwards, I take the best overlapping Rect, use it to crop the source images, and combine them with the hconcat() function.
My code is in C++, but it is really simple to replicate in Python. It is not the best solution, but it is one of the simplest. If your cameras are fixed in a stable position relative to each other, I think this is a good solution.
I held my phone in my hand :)
You can also use this simple approach on video. The speed depends only on the number of candidate rectangles you compare.
You can improve this with a smarter selection of the regions to compare.
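A minimal Python version of that approach (the strip width and file names are assumptions; the real code tries several candidate rectangles instead of a single fixed strip):

    import cv2

    img1 = cv2.imread("left.jpg")               # hypothetical file names
    img2 = cv2.imread("right.jpg")

    # Use a strip from the left edge of the second image as the template ...
    strip_w = 100                               # candidate width, tune as needed
    template = img2[:, :strip_w]

    # ... and find its best position in the first image by normalized correlation.
    res = cv2.matchTemplate(img1, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    x = max_loc[0]                              # column where img2 starts inside img1

    # Crop img1 up to that column and put the full img2 next to it.
    stitched = cv2.hconcat([img1[:, :x], img2])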
I am also thinking about another idea: use optical flow, by putting the images taken by the cameras at the same time into a sequence behind each other. From the possible overlapping regions in one image, extract good features to track and find them in the corresponding region of the second image.
SURF and SIFT are great for this, but the above is the simplest idea that comes to my mind.
My code is here: Code
I have raw RGB video coming from a PAL 50i camera. How can I detect the start of a frame in GStreamer, just like I would detect a keyframe in an h264 video? I would like to do this for indexing/cutting purposes.
If this really is raw RGB video, there is no (realistic) way to detect the start of a frame from the data itself. I would assume your video arrives as whole frames, so one buffer == one frame, and hence there is no need for such detection.
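If you do end up with a single headerless raw RGB file, frame boundaries follow from arithmetic alone; a tiny sketch (assuming packed 24-bit RGB at PAL resolution):

    # Every packed 24-bit RGB PAL frame has a fixed size, so "finding the start of
    # frame n" is simple arithmetic rather than stream parsing.
    WIDTH, HEIGHT, BYTES_PER_PIXEL = 720, 576, 3
    FRAME_SIZE = WIDTH * HEIGHT * BYTES_PER_PIXEL

    def frame_offset(n):
        """Byte offset of frame n in a raw, headerless RGB stream/file."""
        return n * FRAME_SIZE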