I am seeing a strange result that doesn't make much sense. I am trying to capture from a Raspberry Pi camera using the V4L2 driver, since I need cv2 for image processing, and I am writing the code in Python.
The strangeness revolves around capturing images with cv2. When I type the following commands
import cv2
from matplotlib import pyplot
camera = cv2.VideoCapture(0)
grab,frame = camera.read()
pyplot.imshow(frame)
I am able to grab a frame and display it using matplotlib. When I grab a second frame
grab,frame2 = camera.read()
pyplot.imshow(frame2)
The code will grab a second frame and display it perfectly fine.
However, when I reuse an existing variable like frame or frame2, the camera will not grab a new frame and just displays the prior frame.
I tried to clear the variable by typing
frame = []
grab,frame = camera.read()
pyplot.imshow(frame)
but this didn't fix the issue; it still displays the prior frame.
I think you are "suffering from buffering"!
When OpenCV reads a frame, it tends to buffer a few ahead of time - around 5 frames or so, I believe, or possibly a number determined by available memory or something similar.
Anyway, the answer is to read a few more frames to flush the buffer; after that it will acquire fresh frames.
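As a minimal sketch (the count of discarded frames is a guess and may need tuning for your camera):
import cv2

camera = cv2.VideoCapture(0)

# Throw away a few buffered frames so the next read is current.
# The value 5 is an assumption; adjust it for your setup.
for _ in range(5):
    camera.grab()

grab, frame = camera.read()   # this frame should now be fresh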
I am working with a system that uses an Allied Vision Camera with Vimba Python.
Currently, I grab frames synchronously inside a loop, convert them into numpy arrays and append those to a list.
for _ in range(10):
    frame = cam.get_frame()
    img = np.ndarray(buffer=frame._buffer, dtype=np.uint16, shape=(frame._frame.height, frame._frame.width))
    vTmpImg.append(img)
I need to optimize this process because it takes a significant amount of time. It would be ideal if the camera started streaming, taking frames and putting them in a queue or something, and then I could retrieve them when I needed them. I figured that a good way to handle it is to take the frames asynchronously.
I've read the examples that Vimba has on asynchronous_grab, but it is still not clear to me how I can grab the frames that the camera is taking.
Does anyone know how to approach it?
Thanks in advance.
What is unclear about the asynchronous grab? The code or the concept?
Maybe asynchronous_grab_opencv.py is easier to modify. It converts the frame into an OpenCV image that can then be modified/saved etc. in the Handler class. Basically, switch out the imshow line for whatever you want to do with your frames.
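For illustration, here is a rough sketch of an asynchronous grab that pushes frames into a queue, loosely modelled on the asynchronous_grab_opencv.py example; the exact names (FrameStatus, as_numpy_ndarray, buffer_count) depend on your Vimba Python version, so treat it as an outline rather than a drop-in solution:
import queue
from vimba import Vimba, FrameStatus

frame_queue = queue.Queue()

def frame_handler(cam, frame):
    # Called from the streaming thread for every frame the camera delivers.
    if frame.get_status() == FrameStatus.Complete:
        # Copy the image data out before handing the buffer back to the camera.
        frame_queue.put(frame.as_numpy_ndarray().copy())
    cam.queue_frame(frame)

with Vimba.get_instance() as vimba:
    cam = vimba.get_all_cameras()[0]
    with cam:
        cam.start_streaming(handler=frame_handler, buffer_count=10)
        try:
            # Pull frames out of the queue whenever they are needed.
            vTmpImg = [frame_queue.get(timeout=2) for _ in range(10)]
        finally:
            cam.stop_streaming()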
Code:
clip = ImageSequenceClip(new_frames, fps=fps1)
clip.write_videofile("out.mp4", fps=fps1)
where fps1 is the frame rate taken from the original video I stitch onto.
TL;DR:
This code produces a black-screen video.
I am trying to stitch a video using frames from many videos.
I created an array containing all the images in their respective places and then went frame by frame over each video, assigning the correct frame in the array. Done that way the result was fine, but the process was slow, so I saved each frame to a file and loaded it during the stitching process. Python then threw an exception that the array was too big, so I chunked the video into parts and saved each chunk. The result came out as a black screen, even though, when I debugged, I could display each frame passed to ImageSequenceClip correctly. I tried reinstalling moviepy. I am on Windows 10 and I converted all frames to PNG.
Well, @BajMile was indeed right in suggesting OpenCV.
What took me a while to realize is that I had to use only OpenCV functions, including for the images I was opening and resizing.
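A minimal sketch of that OpenCV-only approach (the file names, frame size and fps below are placeholders; the point is that reading, resizing and writing all go through cv2):
import cv2

# Hypothetical list of the saved PNG frames, in playback order.
frame_paths = ["frame_0000.png", "frame_0001.png", "frame_0002.png"]

first = cv2.imread(frame_paths[0])
height, width = first.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("out.mp4", fourcc, 30.0, (width, height))

for path in frame_paths:
    img = cv2.imread(path)
    img = cv2.resize(img, (width, height))  # resize with OpenCV, not another library
    writer.write(img)

writer.release()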
I am trying to change the frame rate (FPS) of an existing video using the OpenCV library in Python. Below is the code that I am trying to execute. Even after setting the FPS property with cv2.CAP_PROP_FPS, the video does not play faster in cv2.imshow(), and the getter still returns the old FPS value. So how do I set a higher FPS and make the video play faster?
Versions used:
Python 3.7.4
opencv-python 4.1.0.25
import cv2

video = cv2.VideoCapture("yourVideoPath.mp4")
video.set(cv2.CAP_PROP_FPS, int(60))

if __name__ == '__main__':
    print("Frame rate : {0}".format(video.get(cv2.CAP_PROP_FPS)))
    while video.isOpened():
        ret1, frame2 = video.read()
        cv2.imshow("Changed", frame2)
        if cv2.waitKey(10) & 0xFF == ord('q'):  # press q to quit
            break
    video.release()
    cv2.destroyAllWindows()
If you're only trying to play the video in the displayed window, the limiting factor is not the FPS of the video but the time spent waiting in waitKey(10), which makes the program wait 10 ms between each frame.
The read() method of the VideoCapture class simply returns the next frame, with no concept of waiting or frame rate. The waitKey(10) call is therefore the main factor limiting how fast this code runs. To change the frame rate seen through imshow(), you'd need to adjust that waiting time; it is likely the dominant factor, though not the only one, since reading a frame also takes some time.
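For example, a small sketch that derives the waitKey delay from a target playback rate (the 60 is just an example value, not read from the file):
import cv2

target_fps = 60                      # desired playback rate
delay_ms = max(1, int(1000 / target_fps))

video = cv2.VideoCapture("yourVideoPath.mp4")
while video.isOpened():
    ret, frame = video.read()
    if not ret:
        break
    cv2.imshow("Changed", frame)
    if cv2.waitKey(delay_ms) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()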
If you're actually trying to change the playback rate of an existing file and have that saved to the file, I am unsure whether OpenCV actually supports this, and I imagine it would depend on which backend you're using, since OpenCV implements the VideoCapture class on top of different third-party backends. As per the documentation of VideoCapture.set(), I'd investigate the return value of video.set(cv2.CAP_PROP_FPS, int(60)), as the documentation suggests it returns true if the property was actually changed.
As an alternative, you could investigate something like FFmpeg, which supports this relatively easily. If you want to stick with OpenCV, I know from personal experience that you can do this with the VideoWriter class: read the video in frame by frame with VideoCapture, then save it at the desired frame rate with VideoWriter. I suspect FFmpeg will likely meet your needs, however!
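And a rough sketch of the VideoCapture/VideoWriter route (codec and file names are placeholders; the output simply declares a higher FPS, so the same frames play back faster):
import cv2

reader = cv2.VideoCapture("yourVideoPath.mp4")
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("faster.mp4", fourcc, 60.0, (width, height))

while True:
    ret, frame = reader.read()
    if not ret:
        break
    writer.write(frame)

reader.release()
writer.release()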
There is about a 10-20 second delay between when I run my program and when the web camera actually takes the image. Is there any way to speed up this process?
I have looked several places and haven't found a solution.
video_capture = cv2.VideoCapture(1)
ret, frame = video_capture.read()
I just don't get why these two lines of code take so long to execute when I can take a picture with my webcam instantly through the normal camera application.
OK, so it took me a while, but the problem was solved by switching the capture API. I changed the line of code:
video_capture = cv2.VideoCapture(1)
to
video_capture = cv2.VideoCapture(1, cv2.CAP_DSHOW)
By adding this, it now works instantly, removing the delay that was present before.
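Put together, a minimal sketch of the faster capture (the device index 1 comes from the question; on other machines it may be 0):
import cv2

# Ask for the DirectShow backend explicitly instead of the default one;
# this is what removed the long start-up delay here.
video_capture = cv2.VideoCapture(1, cv2.CAP_DSHOW)

ret, frame = video_capture.read()
if ret:
    cv2.imwrite("snapshot.png", frame)

video_capture.release()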
I'm recording video with a Raspberry Pi 2 and camera module in a Python script, using the picamera package. See minimal example below:
import picamera
import time
with picamera.PiCamera(resolution=(730, 1296), framerate=49) as camera:
    camera.rotation = 270
    camera.start_preview()
    time.sleep(0.5)
    camera.start_recording('test.h264')
    time.sleep(3)
    camera.stop_recording()
    camera.stop_preview()
Results
The result is a video with bad encoding:
first frame is ok
in the next 59 frames the scene is barely visible, almost entirely green or purple (it's not clear what causes the switch between the two colors)
frame number 61 is ok
Basically, only the I-frames are correctly encoded. This became clear by experimenting with different values of the intra_period parameter of the start_recording function, as sketched below.
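For reference, the experiment simply varied the keyframe interval when starting the recording; a sketch with a hypothetical value:
# Same setup as above, but with an explicit keyframe interval.
# intra_period=10 is just one of the values tried; changing it changes
# which frames come out correct, pointing to only the I-frames being good.
camera.start_recording('test.h264', intra_period=10)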
Notes and attempts already made
First and foremost, I was using the same code to correctly record video in the past, on the same Raspberry Pi and camera. It's not clear to me whether the problem appeared when reinstalling the complete image, during updates, while installing other packages...
Also:
if I don't set the resolution parameter and rotation, the camera works fine
several video players have been tested, on the same machine and on others, and the frames were also processed one by one with OpenCV; it is really a problem in the video file itself
mjpeg format works fine
the same problem happens when setting sensor_mode=5
Questions
The main question is how to correctly record video at a set resolution, either by correcting the code above or via a workaround.
Secondary question: I'm curious to know what could cause such behaviour.