I am currently extracting frames from a video so I can draw bounding boxes on each frame. Then I want to put all the frames back together into a new video, using OpenCV. The problem is that every time I want to do that, I have to extract thousands of frames first. Is there a way to do it without having to extract the frames?
Thanks
That assumption isn't correct. You can use OpenCV's VideoCapture to load a video, get a frame, do some processing on it and save it using a VideoWriter object - one frame at a time. There is no need to load all frames into memory and then store them all at once.
Related
Can I modify the pixel values of frames (to hide some information) after extracting them from a video file, then generate a new video from the modified frames, so that I can later recover my hidden information by extracting frames from the new video?
I've used OpenCV to try this, but it seems some compression is applied to the frames while extracting them, or perhaps while putting them back together, because I was not able to recover the information hidden in the pixels.
Do you know whether this is achievable? If yes, please suggest a way to do it in Python.
I have tried the code below, but it loses my pixel-level information, I suspect because compression is applied somewhere.
Code:
clip = ImageSequenceClip(new_frames, fps=fps1)
clip.write_videofile("out.mp4", fps=fps1)
TL;DR: this code produces a black-screen video (fps1 is taken from the original video I stitch on).
I am trying to stitch a video using frames from many videos.
I created an array containing all the images in their respective places, then went frame by frame through each video and assigned the correct frame in the array. Done that way the result was OK, but the process was slow, so I saved each frame to a file and loaded it during the stitching process. Python threw an exception that the array was too big, so I chunked the video into parts and saved each chunk. The result came out as a black screen, even though when I debugged I could display each frame of the ImageSequenceClip correctly. I tried reinstalling moviepy. I use Windows 10, and I converted all frames to PNG.
Well, @BajMile was indeed right in suggesting OpenCV.
What took me a while to realize is that I had to use only OpenCV functions, including for the images I was opening and resizing.
I have a problem that is not so easy to solve, I guess. In general, I have a database of frames from different videos, and I want to find, for a given picture (which is not necessarily one of the frames, but comes from the same source video), the matching source video.
So let's say I have some videos and extracted frames every x seconds. The frames are stored in the DB.
My guess would now be to loop over all video frames in the DB and try to find matching features. So I would somehow have to find features in the source image and then try to find these in the frames stored in the DB.
My question is: how can I achieve this? The problem is that the camera angle and viewing distance can be quite different when the picture in question was not taken close to the time the frame was extracted.
Is this even feasible?
I'm working with Python and OpenCV.
Thanks and best regards
I am trying to rapidly select and process different frames from a video using OpenCV in Python. To select a frame, I have used 'CAP_PROP_POS_FRAMES' (i.e. cap.set(cv2.CAP_PROP_POS_FRAMES, frame_no)). However, when using this I noticed a delay of about 200 ms to decode the selected frame. My script will be jumping between frames a lot (not necessarily chronologically), which means this will cause a big delay on each iteration.
I suspected OpenCV was buffering the upcoming frames after I set the frame number, so I tried pre-decoding the video by putting all of its frames into a list so they could be accessed from RAM. This worked fantastically, except that bigger videos completely eat up my memory.
I was hoping someone knows a way to either set the frame number without this 200ms delay or to decode the video without using all of my memory space. Any suggestions are also welcome!
I don't know how to avoid that 200 ms delay, but I have a suggestion for how you could pre-decode the video even if it is bigger than your RAM. You could use NumPy's memmap:
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.memmap.html.
In practice, you could have a function that initializes this memory-mapped array, iterates over each frame of the video using VideoCapture, and stores each frame in the array. After that you will be able to jump between frames just by indexing the memory-mapped array.
I want to create a VideoClip of only one frame of the video; the first frame will do. I am using moviepy. I have tried this code:
dur=1/fps #fps= frame rate
clip=VideoFileClip("vid.mp4").subclip(0,dur)
but it did not give exact results when dur was a recurring decimal.
Also I need a way to find the frame rate of an existing video.
When you create a clip with clip=VideoFileClip("vid.mp4"), the fps is given by clip.fps.
If you want to get the first frame as a clip you write
clip2 = clip.to_ImageClip(t=0).set_duration(some_duration_in_seconds)
But it is unclear what you want to do with that first frame. Maybe if you explain more about your goal, I can give you a more appropriate solution.