I am working with a system that uses an Allied Vision Camera with Vimba Python.
Currently, I grab frames synchronously inside a loop, convert them into numpy arrays and append those to a list.
import numpy as np

for _ in range(10):
    frame = cam.get_frame()
    # Wrap the raw frame buffer as a uint16 numpy array (no copy is made)
    img = np.ndarray(buffer=frame._buffer, dtype=np.uint16,
                     shape=(frame._frame.height, frame._frame.width))
    vTmpImg.append(img)
I need to optimize this process because it takes a significant amount of time. It would be ideal if the camera started streaming, taking frames and putting them in a queue or something, and then I could retrieve them when I needed them. I figured that a good way to handle this is to grab the frames asynchronously.
I've read Vimba's asynchronous_grab examples, but it is still not clear to me how I can grab the frames that the camera is taking.
Does anyone know how to approach it?
Thanks in advance.
What is unclear about the asynchronous grab? The code or the concept?
Maybe asynchronous_grab_opencv.py is easier to modify. It transforms the frame into an OpenCV frame that can then be modified/saved etc in the Handler class. Basically, switch out the imshow command line for whatever you want to do with your frames.
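For example, a minimal sketch of that pattern based on my reading of the VimbaPython streaming API (start_streaming / stop_streaming, with frame.as_numpy_ndarray() in the handler) - the handler just copies each frame into a queue.Queue, which gives you the "queue or something" you described. Camera selection and the frame count here are assumptions:

import queue
from vimba import Vimba

frame_queue = queue.Queue()

def frame_handler(cam, frame):
    # Copy the pixel data out before handing the buffer back to the camera
    frame_queue.put(frame.as_numpy_ndarray().copy())
    cam.queue_frame(frame)

with Vimba.get_instance() as vimba:
    with vimba.get_all_cameras()[0] as cam:
        cam.start_streaming(frame_handler)
        try:
            # The camera keeps streaming; pull frames whenever you need them
            imgs = [frame_queue.get(timeout=2) for _ in range(10)]
        finally:
            cam.stop_streaming()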
I'm struggling with a real-time application I'm currently writing. I capture a webcam stream and apply multiple image processing algorithms to each individual frame, e.g. to get the emotion of a person in the frame and to detect objects in it.
Unfortunately, the algorithms have different runtimes, and since some are based on neural networks, those in particular are slow.
My goal is to show the video stream without lag. I don't care if an image processing algorithm only grabs every n-th frame or shows its results with a delay.
To get rid of the lag, I put the image processing in different threads, but I wonder if there is a more sophisticated way to synchronize my analysis with the video stream's frames - or maybe even a library that helps with building pipelines for real-time data analytics?
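For concreteness, my current structure looks roughly like this - a single slow analyzer thread that always samples the newest frame while the display loop runs at full rate (run_emotion_model is a stand-in for one of my neural-network analyzers):

import threading
import time
import cv2

latest_frame = None   # newest frame from the camera
latest_label = ''     # most recent analysis result, shown with a delay
lock = threading.Lock()

def run_emotion_model(frame):
    # Placeholder for a slow neural-network analyzer
    time.sleep(0.5)
    return 'neutral'

def analyzer():
    global latest_label
    while True:
        with lock:
            frame = None if latest_frame is None else latest_frame.copy()
        if frame is None:
            time.sleep(0.01)
            continue
        label = run_emotion_model(frame)  # slow; intermediate frames are skipped
        with lock:
            latest_label = label

threading.Thread(target=analyzer, daemon=True).start()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    with lock:
        latest_frame = frame
        label = latest_label
    # Overlay the (possibly delayed) analysis result on the live stream
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('stream', frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()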
Every hint is welcome!
I have a similar problem to ones I have seen others have, but I don't understand the solutions.
I have a USB camera that uses its own capture library, and the resulting image frame is stored in shared memory; let's call it image.
So, I have an image object that I would like to process further with GStreamer.
So how can I do that? My knowledge ends at cv2.VideoCapture(gstr), which I do not use in this case since I already have the image. What I need to do is take an image I already have in the application and put it into a GStreamer buffer somehow.
From what I understand, you can create an Appsrc name=source... and then push the image into a buffer.
All in all, I just want to go from an image array to being able to send it to a GStreamer UDP sink and a multifilesink.
I hope this makes sense. I start with a Basler USB3 camera and use their API to capture images, hence the confusion.
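Based on that understanding, I imagine the push side would look roughly like this - an untested sketch on my part, where the resolution, frame rate, caps, and sink settings are all guesses to be replaced with real values:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import numpy as np

Gst.init(None)

width, height, fps = 1280, 720, 30  # placeholder camera geometry

pipeline = Gst.parse_launch(
    f'appsrc name=source is-live=true format=time '
    f'caps=video/x-raw,format=BGR,width={width},height={height},framerate={fps}/1 '
    '! videoconvert ! tee name=t '
    't. ! queue ! jpegenc ! multifilesink location=frame_%05d.jpg '
    't. ! queue ! x264enc tune=zerolatency ! rtph264pay '
    '! udpsink host=127.0.0.1 port=5000'
)
appsrc = pipeline.get_by_name('source')
pipeline.set_state(Gst.State.PLAYING)

def push_frame(image: np.ndarray, frame_index: int):
    # Wrap the numpy array in a Gst.Buffer and push it into the pipeline
    buf = Gst.Buffer.new_wrapped(image.tobytes())
    buf.pts = frame_index * Gst.SECOND // fps
    buf.duration = Gst.SECOND // fps
    appsrc.emit('push-buffer', buf)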
Weird coding outcome that isn't making much sense. I am trying to capture from a Raspberry Pi camera using the V4L2 driver, as I need to use cv2 for image processing. I am using Python to write the code.
The weirdness revolves around capturing images using cv2. When I type in the following commands:
import cv2
from matplotlib import pyplot
camera = cv2.VideoCapture(0)
grab,frame = camera.read()
pyplot.imshow(frame)
I am able to grab a frame and display it using matplotlib. When I grab a second frame
grab,frame2 = camera.read()
pyplot.imshow(frame2)
The code will grab a second frame and display it perfectly fine.
However, when I try to reuse an existing variable like frame or frame2, the camera will not grab a new frame and just displays the prior one.
I tried to clear the variable by typing
frame = []
grab,frame = camera.read()
pyplot.imshow(frame)
but this didn't fix the issue; it still displays the prior frame.
I think you are "suffering from buffering"!
When OpenCV reads a frame, it tends to gather a few ahead - I think it is 5 frames or so, or there may be some algorithm based on available memory or something similar.
Anyway, the answer is to read a few more frames to clear the buffer, and then it will acquire fresh frames.
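Something like this should do it (the count of discarded reads is a guess; adjust it to your setup):

# Discard the stale frames sitting in OpenCV's internal buffer
for _ in range(5):
    camera.read()
grab, frame = camera.read()  # this frame should now be fresh
pyplot.imshow(frame)

Some capture backends also honor camera.set(cv2.CAP_PROP_BUFFERSIZE, 1) to shrink that buffer, but support varies, so the read-and-discard loop is the safer bet.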
I am trying to rapidly select and process different frames from a video using OpenCV Python. To select a frame, I have used 'CAP_PROP_POS_FRAMES' (i.e. cap.set(cv2.CAP_PROP_POS_FRAMES, frame_no)). However, when using this I noticed a delay of about 200 ms to decode the selected frame. My script will be jumping between frames a lot (not necessarily chronologically), which means this will cause a big delay on each iteration.
I suspected OpenCV was buffering the upcoming frames after I set the frame number, so I tried pre-decoding the video by putting the entire thing in a list so it could be accessed from RAM. This worked fantastically, except that bigger videos completely eat up my memory.
I was hoping someone knows a way to either set the frame number without this 200ms delay or to decode the video without using all of my memory space. Any suggestions are also welcome!
I don't know how to avoid that 200 ms delay, but I have a suggestion on how you could decode the video first even if its size is greater than your RAM. You could use numpy's memmap:
https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.memmap.html
In practice, you could have a function that initializes this memory-mapped matrix, then iterate over each frame of the video using VideoCapture and store each frame in the matrix. After that you will be able to jump between frames just by accessing the memory-mapped matrix.
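A rough sketch of that idea - the file names are placeholders, and note that frames.dat will be roughly as large as the fully decoded video (n_frames * height * width * 3 bytes):

import cv2
import numpy as np

cap = cv2.VideoCapture('video.mp4')  # placeholder path
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))

# Disk-backed array: pages are loaded on demand instead of filling RAM
frames = np.memmap('frames.dat', dtype=np.uint8, mode='w+',
                   shape=(n_frames, h, w, 3))

for i in range(n_frames):
    ok, frame = cap.read()
    if not ok:
        break
    frames[i] = frame
frames.flush()

# Random, non-chronological access without re-decoding:
some_frame = frames[1234]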
I wrote some image analysis in OpenCV's Python API. Images are acquired in real-time from a webcam with the read() function of a cv2.VideoCapture object within a while True loop.
Processing a frame takes about 100 ms. My camera would be capable of providing 30 fps, but if I even try to set its FPS to 15, my slow processing leads to increasing lag: the processing happens on frames that get older and older relative to "now". I can only run in real time if I set the FPS to 5, which is a bit low. I assume incoming frames are buffered, and once my loop returns to the start, the next frame is read from that buffer instead of straight from the camera.
I read elsewhere that running the frame grabbing and the processing in separate threads would be the solution, but I have never used threading. Maybe I could get the most recent frame from the buffer instead?
I am using Python 3. I would prefer an OpenCV 3 answer if that is relevant, but will happily accept an OpenCV 2 solution too.
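From what I have read, the threaded-grab pattern would look roughly like this - my own untested understanding, not a confirmed solution. A background thread drains the camera continuously and keeps only the newest frame, so the slow processing in the main loop always sees a fresh image:

import threading
import cv2

class LatestFrameGrabber:
    """Reads frames in a background thread, keeping only the newest one."""

    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        # Drain the camera as fast as it delivers, discarding older frames
        while self.running:
            ok, frame = self.cap.read()
            if not ok:
                break
            with self.lock:
                self.frame = frame

    def read(self):
        with self.lock:
            return None if self.frame is None else self.frame.copy()

    def release(self):
        self.running = False
        self.cap.release()

grabber = LatestFrameGrabber(0)
while True:
    frame = grabber.read()
    if frame is None:
        continue
    # ~100 ms of processing here now always operates on the latest frame
    processed = cv2.GaussianBlur(frame, (31, 31), 0)  # stand-in for real analysis
    cv2.imshow('processed', processed)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
grabber.release()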