How to get the latest frame from a capture device (camera) in OpenCV (Python)

I want to connect to a camera, and only capture a frame when an event happens (e.g. keypress). A simplified version of what I'd like to do is this:
cap = cv2.VideoCapture(device_id)
while True:
    if event:
        ret, img = cap.read()
        preprocess(img)
        process(img)
    cv2.waitKey(10)
However, cap.read seems to only capture the next frame in the queue, and not the latest. I did a lot of searching online, and there seem to be a lot of questions on this but no definitive answer. Only some dirty hacks which involve opening and closing the capture device just before and after grabbing (which won't work for me, as my event might be triggered multiple times per second); or assuming a fixed framerate and reading a fixed n times on each event (which won't work for me, as my event is unpredictable and could happen at any interval).
A nice solution would be:
while True:
    if event:
        while capture_has_frames:
            ret, img = cap.read()
        preprocess(img)
        process(img)
    cv2.waitKey(10)
But what is capture_has_frames? Is it possible to get that info? I tried looking into CV_CAP_PROP_POS_FRAMES but it's always -1.
For now I have a separate thread where the capture is running at full fps, and on my event I'm grabbing the latest image from that thread, but this seems overkill.
(I'm on Ubuntu 16.04 btw, but I guess it shouldn't matter. I'm also using pyqtgraph for display)

I think the solution mentioned in the question, namely having a separate thread that clears the buffer, is the easiest non-brittle solution for this. Here is reasonably nice (I think) code for this:
import cv2, queue, threading, time

# bufferless VideoCapture
class VideoCapture:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.q = queue.Queue()
        t = threading.Thread(target=self._reader)
        t.daemon = True
        t.start()

    # read frames as soon as they are available, keeping only most recent one
    def _reader(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                break
            if not self.q.empty():
                try:
                    self.q.get_nowait()  # discard previous (unprocessed) frame
                except queue.Empty:
                    pass
            self.q.put(frame)

    def read(self):
        return self.q.get()

cap = VideoCapture(0)
while True:
    time.sleep(.5)  # simulate time between events
    frame = cap.read()
    cv2.imshow("frame", frame)
    if chr(cv2.waitKey(1) & 255) == 'q':
        break
The frame reader thread is encapsulated inside the custom VideoCapture class, and communication with the main thread is via a queue.
I posted very similar code for a node.js question, where a JavaScript solution would have been better. My comments on another answer to that question give details why a non-brittle solution without separate thread seems difficult.
An alternative solution that is easier, but supported only for some OpenCV backends, is using CAP_PROP_BUFFERSIZE. The 2.4 docs state it is "only supported by DC1394 [Firewire] v 2.x backend currently." For the Linux V4L backend, according to a comment in the 3.4.5 code, support was added on 9 Mar 2018, but I got VIDEOIO ERROR: V4L: Property <unknown property string>(38) not supported by device for exactly this backend. It may be worth a try first; the code is as easy as this:
cap.set(cv2.CAP_PROP_BUFFERSIZE, 0)
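Since backend support varies, it may be worth checking the return value of set() before relying on it; a small sketch (note that some backends accept a property without actually honoring it, so treat this only as a hint):
import cv2

cap = cv2.VideoCapture(0)
# set() returns False when the backend rejects the property outright
if cap.set(cv2.CAP_PROP_BUFFERSIZE, 1):
    print("backend accepted CAP_PROP_BUFFERSIZE")
else:
    print("CAP_PROP_BUFFERSIZE not supported; fall back to the threaded reader above")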

Here's a simplified version of Ulrich's solution.
OpenCV's read() function combines grab() and retrieve() in one call, where grab() just loads the next frame in memory, and retrieve() decodes the latest grabbed frame (demosaicing & motion JPEG decompression).
We're only interested in decoding the frame we're actually reading, so this solution saves some CPU and removes the need for a queue.
import cv2
import threading

# bufferless VideoCapture
class VideoCapture:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.lock = threading.Lock()
        self.t = threading.Thread(target=self._reader)
        self.t.daemon = True
        self.t.start()

    # grab frames as soon as they are available
    def _reader(self):
        while True:
            with self.lock:
                ret = self.cap.grab()
            if not ret:
                break

    # retrieve latest frame
    def read(self):
        with self.lock:
            _, frame = self.cap.retrieve()
        return frame
EDIT: Following Arthur Tacca's comment, added a lock to avoid simultaneous grab & retrieve, which could lead to a crash as OpenCV isn't thread-safe.
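A minimal usage sketch of this class, assuming a webcam at index 0; the None check covers the startup race where read() is called before the first grab() has completed:
cap = VideoCapture(0)
while True:
    frame = cap.read()  # always decodes the most recently grabbed frame
    if frame is None:   # retrieve() can fail before the first grab() lands
        continue
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break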

It's also possible to always get the latest frame by using the cv2.CAP_GSTREAMER backend. If GStreamer support is enabled in cv2.getBuildInformation(), you can initialize your video capture with the appsink parameters sync=false and drop=true.
Example:
cv2.VideoCapture("rtspsrc location=rtsp://... ! decodebin ! videoconvert ! video/x-raw,framerate=30/1 ! appsink drop=true sync=false", cv2.CAP_GSTREAMER)

On my Raspberry Pi 4,
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
does work and was all that I needed for my Pi camera to give me the latest frame, with a consistent 3+ second delay between the scene in front of the camera and that scene appearing in the preview image. My code takes 1.3 seconds to process an image, so I'm not sure where the other 2 seconds of delay come from, but it's consistent and works.
Side note: since my code takes over a second to process an image, I also added
cap.set(cv2.CAP_PROP_FPS, 2)
in case it reduces any unneeded activity, since I can't quite process a frame a second. When I set cv2.CAP_PROP_FPS to 1, though, all my frames came out almost entirely dark, so setting the FPS too low can cause an issue.
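Since cap.set() can silently fail depending on the driver, it may be worth reading the values back to confirm they took effect; a small sketch:
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
cap.set(cv2.CAP_PROP_FPS, 2)
# drivers that ignore a property usually still report their real value
# here, so a mismatch tells you the set() was a no-op
print("buffer size:", cap.get(cv2.CAP_PROP_BUFFERSIZE))
print("fps:", cap.get(cv2.CAP_PROP_FPS))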

If you don't want to capture the frame when there is no event happening, why are you preprocessing/processing your frame? If you do not process your frame, you can simply discard it unless the event occurs. Your program should be able to capture, evaluate your condition, and discard at a sufficient speed, i.e. fast enough compared to your camera's FPS, to always get the last frame in the queue.
I'm not proficient in Python because I do my OpenCV in C++, but it should look similar to this:
vidcap = cv2.VideoCapture(filename)
while True:
    success, frame = vidcap.read()
    if not success:
        break
    if cv2.waitKey(1) != -1:
        process(frame)
As per the OpenCV reference, vidcap.read() returns a bool, which is True if the frame was read correctly; the captured frame is stored in the variable frame. If there is no key press, the loop keeps going; when a key is pressed, you process your last captured frame.
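If decoding every discarded frame wastes too much CPU, a Python variation on this idea is to drain with grab(), which skips the decode step, and only retrieve() on the event. A sketch, reusing the question's event/preprocess/process placeholders:
import cv2

cap = cv2.VideoCapture(0)
while True:
    cap.grab()                       # consume the next frame without decoding it
    if event():                      # placeholder for your own trigger check
        ret, frame = cap.retrieve()  # decode only the frame we actually want
        if ret:
            preprocess(frame)
            process(frame)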

Related

OpenCV buffer problem when using external trigger

I'm using OpenCV with a USB camera on a Raspberry Pi 4. I have activated an external trigger via one of the RPi's GPIO pins. I wrote a short Python program to test the trigger and made sure it worked. Then I wrote another Python program to save a single image captured by the camera. Here is the code:
import time
import cv2
import smbus
from gpiozero import LED

TRIG_ADDR = 17

def setup_trigger_control_gpio(pin):
    trigger = LED(pin)
    return trigger

def setup_camera(frame_width, frame_height, fps):
    cap = cv2.VideoCapture(0, cv2.CAP_ANY)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, frame_width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_height)
    cap.set(cv2.CAP_PROP_FPS, fps)
    cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
    if not cap.isOpened():
        print("Cannot open camera. Exiting.")
        quit()
    return cap

trigger = setup_trigger_control_gpio(TRIG_ADDR)
cap = setup_camera(640, 480, 120)

trigger.on()
ret, frame = cap.read()
if ret:
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite("images/frame.pgm", frame)
trigger.off()
cap.release()
I set the buffer size to 1, since I'm using a trigger to capture the frame exactly when I need to. The program, however, gets stuck on the cap.read() line as if it did not receive the trigger. When I then ran the short trigger-only program, the main program finished successfully. Sometimes I had to run the trigger program more than once for the main program to finish, which I find a bit scary because of the inconsistency.
I tried to set the buffer size to 10, which seemingly worked, however the saved image was empty (all black). I have also tried to "flush" the buffer by reading the first 10 empty frames, which worked, and I finally saved a proper image. The real problems occur when I try to process real-time video this way, as even after flushing the buffer the images do not correspond to the point in time when they should have been taken. Therefore I would love to use the feature of setting the buffer size to one, so that I would know exactly which frame is being processed. I will be thankful for any ideas.
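(For reference, the buffer-flush workaround described above as a sketch; the count of 10 is an empirical value, not a documented buffer depth:)
# flush stale frames before taking the one that matches the trigger
for _ in range(10):  # 10 is empirical, not a documented constant
    cap.grab()
ret, frame = cap.read()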

imshow() with desired framerate with opencv

Is there any workaround for how to use cv2.imshow() with a specific framerate? I'm capturing the video via VideoCapture and doing some easy postprocessing on the frames (both in a separate thread, so it loads all frames into a Queue and the main thread isn't slowed by the computation). I tried to fix the framerate by calculating the time used for "reading" the image from the queue and then subtracting that value from the number of milliseconds available for one frame:
If I have an input video with 50 FPS and I want to play it back in real time, I do 1000/50 => 20 ms per frame,
and then wait that time using cv2.waitKey().
But I still get some laggy output, which is slower than the source video.
I don't believe there is such a function in OpenCV, but maybe you could improve your method by adding a dynamic wait time using timers? timeit.default_timer()
Calculate the time taken to process and subtract that from the expected frame interval, maybe adding a few ms of buffer,
e.g. cv2.waitKey((1000/50) - (time processing finished - time read started) - 10)
Or you could have more rigid timing, e.g. script start time + frame# * 20ms - time processing finished.
I haven't tried this personally so I'm not sure if it will actually work; it also might be worth adding a check so the number isn't below 1.
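A rough sketch of that idea (untested; a 50 FPS source is assumed, q is the queue fed by the reader thread from the question, and the wait is clamped to at least 1 ms since waitKey(0) blocks forever):
import time
import cv2

frame_interval_ms = 1000 / 50        # source video is 50 FPS
while True:
    start = time.perf_counter()
    frame = q.get()                  # assumed: queue filled by the reader thread
    # ... postprocessing here ...
    elapsed_ms = (time.perf_counter() - start) * 1000
    wait = max(1, int(frame_interval_ms - elapsed_ms))
    cv2.imshow("frame", frame)
    if cv2.waitKey(wait) & 0xFF == ord('q'):
        break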
I faced the same issue in one of my projects, in which my source video has 2 FPS. In order to display it nicely using cv2.imshow, I used a delay before displaying each frame. It's a kind of hack, but it works for me. The code for this hack is given below. Hope you get some help from it. Peace!
import cv2
import numpy as np
import time

cap = cv2.VideoCapture(0)
width = 400
height = 350

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (width, height))
    flipped = cv2.flip(frame, 1)
    framerot = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
    framerot = cv2.resize(framerot, (width, height))
    StackImg = np.hstack([frame, flipped, framerot])
    # Put time of sleep according to your fps
    time.sleep(2)
    cv2.imshow("ImageStacked", StackImg)
    if cv2.waitKey(1) & 0xff == ord('q'):
        break
cv2.destroyAllWindows()

How to access 1 webcam with 2 threads

I am using python 3.5 with opencv.
I want to use 2 threads:
Thread 1: Save the video to a file
Thread 2: Display the video to the user
To view/capture the video from the webcam I am using snippets of code from the following website: opencv video docs
I can capture and save the video using the following code:
# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
while True:
    ret, frame = cap.read()
    if ret:
        frame = cv2.flip(frame, 0)
        # write the flipped frame
        out.write(frame)
    else:
        break
out.release()
cv2.destroyAllWindows()
I can view the video using the following code:
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
Each of these pieces of code is in its own function, called capture and display respectively. I then call them in separate threads with Python's threading library as follows:
cap = cv2.VideoCapture(0)
Thread(target=capture).start()
Thread(target=display).start()
cap.release()
I get an error I assume is related to both threads wanting to access the video buffer at the same time.
I understand this can be done without threads but there are other things I would like to do further than can only be done in separate threads.
How can I access the cap video capture from both threads?
My flask/django experience is incredibly limited, so I am not sure how to do it for that exactly, but I will answer the question posted directly.
First, you need to create a thread-safe object to avoid calling the read function from different threads at the same time.
import cv2
import threading

class VideoCamera(object):
    # filename can be 0 to access the webcam
    def __init__(self, filename):
        self.lock = threading.Lock()
        self.openVideo(filename)

    def openVideo(self, filename):
        self.lock.acquire()
        self.videoCap = cv2.VideoCapture(filename)
        self.lock.release()
With this, you should be able to create an object with a lock and to safely open a video (in case you want to open another video with the same object).
Now you have 2 options: either you create a thread that updates the frame and stores the current one internally, or you make the get-next-frame function itself thread-safe. I will do the second one here to show you:
    def getNextFrame(self):
        self.lock.acquire()
        img = None
        # if no video is opened, return None
        if self.videoCap.isOpened():
            ret, img = self.videoCap.read()
        self.lock.release()
        return img
This way you should be able to access the video capture from 2 threads... however, the frames will be different every time the function is called.
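A minimal sketch of how the asker's two threads could then use this class (untested; the writer settings mirror the question's XVID example, and note that imshow outside the main thread can be problematic on some platforms):
camera = VideoCamera(0)

def capture():
    fourcc = cv2.VideoWriter_fourcc(*'XVID')
    out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
    while True:
        img = camera.getNextFrame()
        if img is None:
            break
        out.write(img)
    out.release()

def display():
    while True:
        img = camera.getNextFrame()
        if img is None or (cv2.waitKey(1) & 0xFF == ord('q')):
            break
        cv2.imshow('frame', img)

threading.Thread(target=capture).start()
threading.Thread(target=display).start()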
I hope this helps you.

I want to receive RTSP stream in OpenCV from web URL [duplicate]

I have recently set up a Raspberry Pi camera and am streaming the frames over RTSP. While it may not be completely necessary, here is the command I am using to broadcast the video:
raspivid -o - -t 0 -w 1280 -h 800 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/output.h264}' :demux=h264
This streams the video perfectly.
What I would now like to do is parse this stream with Python and read each frame individually. I would like to do some motion detection for surveillance purposes.
I am completely lost on where to start on this task. Can anyone point me to a good tutorial? If this is not achievable via Python, what tools/languages can I use to accomplish this?
Using the same method listed by "depu" worked perfectly for me.
I just replaced the "video file" with an RTSP URL of an actual camera.
The example below worked on an AXIS IP camera.
(This was not working for a while in previous versions of OpenCV.
It works on OpenCV 3.4.1, Windows 10.)
import cv2

cap = cv2.VideoCapture("rtsp://root:pass@192.168.0.91:554/axis-media/media.amp")
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Bit of a hacky solution, but you can use the VLC python bindings (you can install it with pip install python-vlc) and play the stream:
import time
import vlc

player = vlc.MediaPlayer('rtsp://:8554/output.h264')
player.play()
Then take a snapshot every second or so:
while True:
    time.sleep(1)
    player.video_take_snapshot(0, '.snapshot.tmp.png', 0, 0)
And then you can use SimpleCV or something for processing (just load the image file '.snapshot.tmp.png' into your processing library).
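For example, with OpenCV itself (a sketch; imread returns None if the snapshot has not been written yet, so the check matters):
import cv2

img = cv2.imread('.snapshot.tmp.png')
if img is not None:  # file may not exist yet on the first iteration
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # ... motion detection on 'gray' ...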
Use OpenCV:
video = cv2.VideoCapture("rtsp url")
and then you can capture frames. Read the OpenCV documentation: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_video_display/py_video_display.html
Depending on the stream type, you can probably take a look at this project for some ideas.
https://code.google.com/p/python-mjpeg-over-rtsp-client/
If you want to be mega-pro, you could use something like http://opencv.org/ (Python modules available I believe) for handling the motion detection.
Here is yet one more option.
It's much more complicated than the other answers.
But this way, with just one connection to the camera, you could "fork" the same stream simultaneously to several multiprocesses, to the screen, recast it into multicast, write it to disk, etc.
Of course, just in case you need something like that (otherwise you'd prefer the earlier answers).
Let's create two independent python programs:
Server program (rtsp connection, decoding) server.py
Client program (reads frames from shared memory) client.py
Server must be started before the client, i.e.
python3 server.py
And then in another terminal:
python3 client.py
Here is the code:
(1) server.py
import time
from valkka.core import *

# YUV => RGB interpolation to the small size is done every 1000 milliseconds
# and passed on to the shmem ringbuffer
image_interval = 1000

# define rgb image dimensions
width = 1920 // 4
height = 1080 // 4

# posix shared memory: identification tag and size of the ring buffer
shmem_name = "cam_example"
shmem_buffers = 10

shmem_filter = RGBShmemFrameFilter(shmem_name, shmem_buffers, width, height)
sws_filter = SwScaleFrameFilter("sws_filter", width, height, shmem_filter)
interval_filter = TimeIntervalFrameFilter("interval_filter", image_interval, sws_filter)

avthread = AVThread("avthread", interval_filter)
av_in_filter = avthread.getFrameFilter()

livethread = LiveThread("livethread")
ctx = LiveConnectionContext(LiveConnectionType_rtsp, "rtsp://user:password@192.168.x.x", 1, av_in_filter)

avthread.startCall()
livethread.startCall()
avthread.decodingOnCall()
livethread.registerStreamCall(ctx)
livethread.playStreamCall(ctx)

# all those threads are written in cpp and they are running in the
# background. Sleep for 20 seconds - or do something else while
# the cpp threads are running and streaming video
time.sleep(20)

# stop threads
livethread.stopCall()
avthread.stopCall()

print("bye")
(2) client.py
import cv2
from valkka.api2 import ShmemRGBClient

width = 1920 // 4
height = 1080 // 4

# This identifies posix shared memory - must be the same as on the server side
shmem_name = "cam_example"
# Size of the shmem ringbuffer - must be the same as on the server side
shmem_buffers = 10

client = ShmemRGBClient(
    name=shmem_name,
    n_ringbuffer=shmem_buffers,
    width=width,
    height=height,
    mstimeout=1000,  # client times out if nothing has been received in 1000 milliseconds
    verbose=False
)

while True:
    index, isize = client.pull()
    if index is None:
        print("timeout")
    else:
        data = client.shmem_list[index][0:isize]
        img = data.reshape((height, width, 3))
        img = cv2.GaussianBlur(img, (21, 21), 0)
        cv2.imshow("valkka_opencv_demo", img)
        cv2.waitKey(1)
If you got interested, check out some more in https://elsampsa.github.io/valkka-examples/
Reading frames from a video can be achieved using Python and OpenCV. Below is sample code; it works fine with Python and OpenCV 2.
import cv2
import os

# The code below will capture the video frames and save them to a folder
# (in the current working directory)
dirname = 'myfolder'
os.makedirs(dirname, exist_ok=True)  # make sure the output folder exists

# video path
cap = cv2.VideoCapture("your rtsp url")
count = 0

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    # The received "frame" will be saved. Or you can manipulate "frame" as per your needs.
    name = "rec_frame" + str(count) + ".jpg"
    cv2.imwrite(os.path.join(dirname, name), frame)
    count += 1
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Use this:
cv2.VideoCapture("rtsp://username:password@IPAddress:PortNo(rest of the link after the IP address)")

Getting current frame with OpenCV VideoCapture in Python

I am using cv2.VideoCapture to read the frames of an RTSP video link in a Python script. The .read() function is in a while loop which runs once every second; however, I do not get the most current frame from the stream. I get older frames, and in this way my lag builds up. Is there any way that I can get the most current frame, and not older frames which have piled up in the VideoCapture object?
I also faced the same problem. It seems that once the VideoCapture object is initialized, it keeps storing the frames in a buffer of sorts and returns a frame from that buffer for every read operation. What I did is initialize the VideoCapture object every time I wanted to read a frame, and then release the stream. The following code captures 10 images at an interval of 10 seconds and stores them. The same can be done in a while True loop.
import time
import cv2

for x in range(0, 10):
    cap = cv2.VideoCapture(0)
    ret, frame = cap.read()
    cv2.imwrite('test' + str(x) + '.png', frame)
    cap.release()
    time.sleep(10)
I've encountered the same problem and found a git repository of Azure samples for their computer vision service.
The relevant part is the Camera Capture module, specifically the Video Stream class.
You can see they've implemented a Queue that is being updated to keep only the latest frame:
def update(self):
    # (the repository version wraps this loop in a try/except; omitted here)
    while True:
        if self.stopped:
            return
        if not self.Q.full():
            (grabbed, frame) = self.stream.read()
            # if the `grabbed` boolean is `False`, then we have
            # reached the end of the video file
            if not grabbed:
                self.stop()
                return
            self.Q.put(frame)
            # Clean the queue to keep only the latest frame
            while self.Q.qsize() > 1:
                self.Q.get()
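The matching read side would then be something along these lines (a sketch based on the update() above, not code from the repository):
def read(self):
    # block until update() has put at least one frame, then return the
    # newest frame remaining in the queue
    return self.Q.get()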
I'm working with a friend on a hack doing the same. We don't want to use all the frames. So far we found that very same thing: grab() (or read()) tries to give you every frame, and I guess with RTP it will maintain a buffer and drop frames if you're not responsive enough.
Instead of read() you can also use grab() and retrieve(). The first one asks for the frame; retrieve() reads it into memory. So if you call grab() several times, it will effectively skip those frames.
We got away with doing this:
# show some initial image
while True:
    cap.grab()
    if cv2.waitKey(10) != -1:
        ret, im = cap.retrieve()
        # process
        cv2.imshow("im", im)
Not production code but...
Inside the 'while' you can use:
while True:
    cap = cv2.VideoCapture()
    urlDir = 'rtsp://ip:port/h264_ulaw.sdp'
    cap.open(urlDir)
    # get the current frame
    _, frame = cap.read()
    cap.release()  # releasing camera
    image = frame
Using the following was causing a lot of issues for me: the frames being passed to the function were not sequential.
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    function_that_uses_frame(frame)
    time.sleep(0.5)
The following also didn't work for me, even though it was suggested in other comments. I was STILL having issues with getting the most recent frame.
cap = cv2.VideoCapture(0)
while True:
    ret = cap.grab()
    ret, frame = cap.retrieve()
    function_that_uses_frame(frame)
    time.sleep(0.5)
Finally, this worked, but it's bloody filthy. I only need to grab a few frames per second, so it will do for the time being. For context, I was using the camera to generate some data for an ML model, and my labels were out of sync with what was being captured.
while True:
    ret = cap.grab()
    ret, frame = cap.retrieve()
    ret = cap.grab()
    ret, frame = cap.retrieve()
    function_that_uses_frame(frame)
    time.sleep(0.5)
I made an adaptive system, as the ones others posted here still resulted in somewhat inaccurate frame representation and gave completely variable results depending on the hardware.
from time import time

import cv2

cap = cv2.VideoCapture(url)
cap_fps = cap.get(cv2.CAP_PROP_FPS)

time_start = time()
time_end = time_start

while True:
    # Note that the +1 might be changed to fit script bandwidth
    time_difference = int(((time_end - time_start) * cap_fps) + 1)
    for i in range(0, time_difference):
        a = cap.grab()
    _, frame = cap.read()
    time_start = time()

    # Put your code here
    variable = function(frame)
    # ...

    time_end = time()
This way the number of skipped frames adapts to the number of frames missed in the video stream, allowing for a much smoother transition and a relatively real-time frame representation.
