QueryFrame not advancing frame in OpenCV - python

The code
import cv

capture = cv.CaptureFromFile("a.avi")
while True:
    frame = cv.QueryFrame(capture)
    cv.ShowImage("a", frame)
shows the same initial frame from the video repeatedly (QueryFrame is not advancing the video and grabbing the next frame). It works fine if the video is captured from a webcam.
Any ideas?

I see the same mistakes over and over again, so this is probably the last time I'll address them. Hopefully people will start using the search box in the future and dig a little deeper.
Call cv.WaitKey() after displaying the frame. If you don't have a delay between displaying the frames, problems can happen. I believe this is the problem.
Code defensively: if you are calling a function/method that can fail, believe in Murphy, and add the appropriate check to verify it doesn't:
import cv

capture = cv.CaptureFromFile("a.avi")
if not capture:
    print "Error loading video file"
    # Should exit the application

while True:
    frame = cv.QueryFrame(capture)
    if not frame:
        print "Could not retrieve frame"
        break  # stop instead of passing None to ShowImage

    cv.ShowImage("a", frame)

    k = cv.WaitKey(10)
    if k == 27:
        break  # ESC key was pressed

Related

How should I properly use cv2.waitKey when wanting to start/pause a video?

I've written a small script that allows me to run/pause a video stream using OpenCV. I don't understand why I needed to use cv2.waitKey() in the manner I did. The structure of my code is as follows:
def marker(event, x, y, flags, param):
    # Method called by mouse click
    global run
    if event == cv2.EVENT_LBUTTONDOWN:
        run = not run

...
window_name = 'Editor Window'
cv2.namedWindow(window_name)
cv2.setMouseCallback(window_name, marker)

fvs = cv2.VideoCapture(args["video"])
(grabbed, frame) = fvs.read()

while grabbed:
    # grab the frame from the threaded video file stream, resize
    # it, and convert it to grayscale (while still retaining 3
    # channels)
    if run:
        # Code to display video frames ...
        cv2.waitKey(1) & 0xFF  # Use A
        (grabbed, frame) = fvs.read()
    else:
        cv2.waitKey(1) & 0xFF  # Use B
The code is very sensitive to the use of cv2.waitKey:
If I don't have "Use A", the window freezes, without ever showing the video. I would've expected it to run and then close very quickly. Why is this not the case?
If "Use B" is absent, execution freezes after the first mouse click. Does openCV somehow need to be waiting for a keyboard entry in order to see a mouse click? If so, why?
If "Use B" has a delay of 0 (i.e. wait indefinitely), then it only seems to see mouse click events intermittently, though sometimes I'm also periodically pressing on the space bar as I click. Why is this? Am I somehow getting 'lucky' if I give a mouse-click soon enough before (or after?) a key press? If not, why the intermittent response?
Ultimately, I don't really understand the workings of cv2.waitKey(). I would've thought it waits for the given time delay for a keyboard event and then moves on. The interaction with a mouse click even confuses me.
According to the OpenCV Documentation:
The function waitKey waits for a key event infinitely (when delay <= 0 ) or for delay milliseconds, when it is positive. Since the OS has a minimum time between switching threads, the function will not wait exactly delay ms, it will wait at least delay ms, depending on what else is running on your computer at that time. It returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
So you can pause by pressing the "p" button of the keyboard using the OpenCV cv2.waitKey(delay) function like this:
import cv2

cap = cv2.VideoCapture('your_video.mov')
while True:
    _, frame = cap.read()
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1)
    if key == ord("p"):
        cv2.waitKey(0)
waitKey drives the event loop, which is essential for things like keyboard and mouse events, so you always need to call it if you expect interactive reactions.
Also, you can pull the waitKey in front of the if and just issue it once, as sketched below.
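For illustration, here is a minimal sketch of that restructuring (my own sketch, not the asker's code: the run flag and marker callback mirror the question, and "video.mp4" is a hypothetical stand-in for args["video"]). The single waitKey call at the bottom of the loop pumps the GUI event loop whether the video is running or paused:
import cv2

run = True

def marker(event, x, y, flags, param):
    # Toggle run/pause on left mouse click
    global run
    if event == cv2.EVENT_LBUTTONDOWN:
        run = not run

window_name = 'Editor Window'
cv2.namedWindow(window_name)
cv2.setMouseCallback(window_name, marker)

fvs = cv2.VideoCapture("video.mp4")  # hypothetical stand-in for args["video"]
grabbed, frame = fvs.read()

while grabbed:
    if run:
        cv2.imshow(window_name, frame)
        grabbed, frame = fvs.read()
    # one waitKey per iteration drives the event loop in both states
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

fvs.release()
cv2.destroyAllWindows()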

Direct way to get camera screenshot

I'm working with Python OpenCV on a project that, as an initial step, involves capturing an image from a webcam; I tried to automate this process by using capture = cv2.VideoCapture and capture.read(), but the camera's video-mode activation and its subsequent self-adjustment are too slow for what I want to achieve in the end.
Is there a more direct method of automatically capturing a screenshot with Python (and OpenCV)? If not, do you have any alternative suggestion?
Thanks
If you want your camera screenshot function to be responsive, you need to initialize the camera capture outside of this function.
In the following code snippet, the screenshot function is triggered by pressing c:
import cv2

def screenshot():
    global cam
    cv2.imshow("screenshot", cam.read()[1])  # shows the screenshot directly
    #cv2.imwrite('screenshot.png', cam.read()[1])  # or saves it to disk

if __name__ == '__main__':
    cam = cv2.VideoCapture(0)  # initializes video capture
    while True:
        ret, img = cam.read()
        cv2.imshow("cameraFeed", img)  # a window is needed as a context for key capturing (here, I display the camera feed, but there could be anything in the window)
        ch = cv2.waitKey(5)
        if ch == 27:
            break
        if ch == ord('c'):  # calls screenshot function when 'c' is pressed
            screenshot()
    cv2.destroyAllWindows()
To clarify: cameraFeed window is only here for the purpose of the demo (where screenshot is triggered manually). If screenshot is called automatically in your program, then you don't need this part.
Hope it helps!
Basically you need to do 3 things:
#init the cam
video_capture = cv2.VideoCapture(0)
#get a frame from cam
ret, frame = video_capture.read()
#write that to disk
cv2.imwrite('screenshot.png',frame)
Of course, you should wait a while before capturing; if not, you could save a weird black screen (or just the first thing the camera got :-)).
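For example, a minimal sketch of that warm-up idea (the 30 discarded frames are an arbitrary assumption; tune the number for your camera):
import cv2

video_capture = cv2.VideoCapture(0)

# discard some frames so the camera can finish its auto-exposure / white-balance adjustment
# (30 is an arbitrary guess, not a magic number)
for _ in range(30):
    video_capture.read()

ret, frame = video_capture.read()
if ret:
    cv2.imwrite('screenshot.png', frame)
video_capture.release()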

How to get the latest frame from capture device (camera) in opencv

I want to connect to a camera, and only capture a frame when an event happens (e.g. keypress). A simplified version of what I'd like to do is this:
cap = cv2.VideoCapture(device_id)
while True:
    if event:
        img = cap.read()
        preprocess(img)
        process(img)
    cv2.waitKey(10)
However, cap.read seems to only capture the next frame in the queue, and not the latest. I did a lot of searching online, and there seems to be a lot of questions on this but no definitive answer. Only some dirty hacks which involve opening and closing the capture device just before and after grabbing (which won't work for me as my event might be triggered multiple times per second); or assuming a fixed framerate and reading a fixed-n times on each event (which won't work for me as my event is unpredictable and could happen at any interval).
A nice solution would be:
while True:
    if event:
        while capture_has_frames:
            img = cap.read()
        preprocess(img)
        process(img)
    cv2.waitKey(10)
But what is capture_has_frames? Is it possible to get that info? I tried looking into CV_CAP_PROP_POS_FRAMES but it's always -1.
For now I have a separate thread where the capture is running at full fps, and on my event I'm grabbing the latest image from that thread, but this seems overkill.
(I'm on Ubuntu 16.04 btw, but I guess it shouldn't matter. I'm also using pyqtgraph for display)
I think the solution mentioned in the question, namely having a separate thread that clears the buffer, is the easiest non-brittle solution for this. Here is reasonably nice (I think) code for it:
import cv2, queue, threading, time

# bufferless VideoCapture
class VideoCapture:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.q = queue.Queue()
        t = threading.Thread(target=self._reader)
        t.daemon = True
        t.start()

    # read frames as soon as they are available, keeping only most recent one
    def _reader(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                break
            if not self.q.empty():
                try:
                    self.q.get_nowait()  # discard previous (unprocessed) frame
                except queue.Empty:
                    pass
            self.q.put(frame)

    def read(self):
        return self.q.get()

cap = VideoCapture(0)
while True:
    time.sleep(.5)  # simulate time between events
    frame = cap.read()
    cv2.imshow("frame", frame)
    if chr(cv2.waitKey(1)&255) == 'q':
        break
The frame reader thread is encapsulated inside the custom VideoCapture class, and communication with the main thread is via a queue.
I posted very similar code for a node.js question, where a JavaScript solution would have been better. My comments on another answer to that question give details why a non-brittle solution without separate thread seems difficult.
An alternative solution that is easier but supported only for some OpenCV backends is using CAP_PROP_BUFFERSIZE. The 2.4 docs state it is "only supported by DC1394 [Firewire] v 2.x backend currently." For Linux backend V4L, according to a comment in the 3.4.5 code, support was added on 9 Mar 2018, but I got VIDEOIO ERROR: V4L: Property <unknown property string>(38) not supported by device for exactly this backend. It may be worth a try first; the code is as easy as this:
cap.set(cv2.CAP_PROP_BUFFERSIZE, 0)
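If you want to probe whether your backend honors the property before relying on it, here is a quick sketch (camera index 0 and the probe value of 1 are my assumptions); some backends silently ignore the call, so reading the value back is a useful sanity check:
import cv2

cap = cv2.VideoCapture(0)
ok = cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
# some backends ignore the property without reporting an error,
# so read it back to verify it actually took effect
print("set() returned:", ok, "read back:", cap.get(cv2.CAP_PROP_BUFFERSIZE))
cap.release()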
Here's a simplified version of Ulrich's solution.
OpenCV's read() function combines grab() and retrieve() in one call, where grab() just loads the next frame in memory, and retrieve decodes the latest grabbed frame (demosaicing & motion jpeg decompression).
We're only interested in decoding the frame we're actually reading, so this solution saves some CPU, and removes the need for a queue
import cv2
import threading

# bufferless VideoCapture
class VideoCapture:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.lock = threading.Lock()
        self.t = threading.Thread(target=self._reader)
        self.t.daemon = True
        self.t.start()

    # grab frames as soon as they are available
    def _reader(self):
        while True:
            with self.lock:
                ret = self.cap.grab()
            if not ret:
                break

    # retrieve latest frame
    def read(self):
        with self.lock:
            _, frame = self.cap.retrieve()
        return frame
EDIT: Following Arthur Tacca's comment, added a lock to avoid simultaneous grab & retrieve, which could lead to a crash as OpenCV isn't thread-safe.
It's also possible to always get the latest frame by using the cv2.CAP_GSTREAMER backend. If you have GStreamer support enabled in cv2.getBuildInformation(), you can initialize your video capture with the appsink parameters sync=false and drop=true.
Example:
cv2.VideoCapture("rtspsrc location=rtsp://... ! decodebin ! videoconvert ! video/x-raw,framerate=30/1 ! appsink drop=true sync=false", cv2.CAP_GSTREAMER)
On my Raspberry Pi 4,
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
does work and was all that I needed for my pi camera to give me the latest frame, with a consistent 3+ second delay between the scene in front of the camera and displaying that scene in the preview image. My code takes 1.3 seconds to process an image, so I'm not sure why the other 2 seconds of delay are present, but it's consistent and works.
Side note: since my code takes over a second to process an image, I also added
cap.set( cv2.CAP_PROP_FPS, 2 )
in case it reduces any unneeded activity, since I can't quite get a frame a second. When I put cv2.CAP_PROP_FPS to 1, though, I got a strange output of all my frames being almost entirely dark, so setting FPS too low can cause an issue
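For reference, a sketch of how those two settings might sit together in one capture loop (camera index 0 and the preview window are my assumptions; the imshow call stands in for the answer's ~1.3 s processing step):
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)  # keep at most one buffered frame
cap.set(cv2.CAP_PROP_FPS, 2)         # low FPS, since processing takes over a second per frame

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("preview", frame)  # stand-in for the slow per-frame processing
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()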
If you don't want to capture the frame when there is no event happening, why are you preprocessing/processing your frame? If you do not process your frame, you can simply discard it unless the event occur. Your program should be able to capture, evaluate your condition and discard at a sufficient speed, i.e. fast enough compared to your camera FPS capture rate, to always get the last frame in the queue.
I'm not proficient in Python because I do my OpenCV in C++, but it should look similar to this:
vidcap = cv2.VideoCapture(filename)
while True:
    success, frame = vidcap.read()
    if not success:
        break
    if cv2.waitKey(1) != -1:
        process(frame)
As per the OpenCV reference, vidcap.read() returns a bool. If the frame is read correctly, it will be True, and the captured frame is stored in the variable frame. If there is no key press, the loop keeps going. When a key is pressed, you process your last captured frame.

Getting current frame with OpenCV VideoCapture in Python

I am using cv2.VideoCapture to read the frames of an RTSP video link in a Python script. The .read() function is in a while loop which runs once every second; however, I do not get the most current frame from the stream. I get older frames, and in this way my lag builds up. Is there any way that I can get the most current frame and not older frames which have piped into the VideoCapture object?
I also faced the same problem. It seems that once the VideoCapture object is initialized, it keeps storing the frames in some buffer of sorts and returns a frame from that for every read operation. What I did is initialize the VideoCapture object every time I wanted to read a frame and then release the stream. The following code captures 10 images at an interval of 10 seconds and stores them. The same can be done using while(True) in a loop.
import cv2
import time

for x in range(0, 10):
    cap = cv2.VideoCapture(0)
    ret, frame = cap.read()
    cv2.imwrite('test' + str(x) + '.png', frame)
    cap.release()
    time.sleep(10)
I've encountered the same problem and found a git repository of Azure samples for their computer vision service.
The relevant part is the Camera Capture module, specifically the Video Stream class.
You can see they've implemented a Queue that is being updated to keep only the latest frame:
def update(self):
    try:
        while True:
            if self.stopped:
                return

            if not self.Q.full():
                (grabbed, frame) = self.stream.read()

                # if the `grabbed` boolean is `False`, then we have
                # reached the end of the video file
                if not grabbed:
                    self.stop()
                    return

                self.Q.put(frame)

                # Clean the queue to keep only the latest frame
                while self.Q.qsize() > 1:
                    self.Q.get()
I'm working with a friend on a hack doing the same. We don't want to use all the frames. So far we found that very same thing: grab() (or read()) tries to get you all the frames, and I guess with rtp it will maintain a buffer and drop frames if you're not responsive enough.
Instead of read() you can also use grab() and retrieve(). The first one asks for the frame; retrieve() then reads it into memory. So if you call grab() several times it will effectively skip those frames.
We got away with doing this:
# show some initial image
cap = cv2.VideoCapture(...)  # open the camera/stream first
while True:
    cap.grab()
    if cv2.waitKey(10) != -1:
        ret, im = cap.retrieve()
        # process
        cv2.imshow(...)
Not production code but...
Inside the 'while' you can use:
while True:
    cap = cv2.VideoCapture()
    urlDir = 'rtsp://ip:port/h264_ulaw.sdp'
    cap.open(urlDir)
    # get the current frame
    _, frame = cap.read()
    cap.release()  # releasing camera
    image = frame
Using the following was causing a lot of issues for me. The frames being passed to the function were not sequential.
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    function_that_uses_frame(frame)
    time.sleep(0.5)
The following also didn't work for me as suggested by other comments. I was STILL getting issues with taking the most recent frame.
cap = cv2.VideoCapture(0)
while True:
    ret = cap.grab()
    ret, frame = cap.retrieve()
    function_that_uses_frame(frame)
    time.sleep(0.5)
Finally, this worked, but it's bloody filthy. I only need to grab a few frames per second, so it will do for the time being. For context, I was using the camera to generate some data for an ML model, and my labels were out of sync with what was being captured.
while True:
    ret = cap.grab()
    ret, frame = cap.retrieve()
    ret = cap.grab()
    ret, frame = cap.retrieve()
    function_that_uses_frame(frame)
    time.sleep(0.5)
I made an adaptive system, as the ones others posted here still resulted in somewhat inaccurate frame representation and gave completely variable results depending on the hardware.
from time import time
#...
cap = cv2.VideoCapture(url)
cap_fps = cap.get(cv2.CAP_PROP_FPS)

time_start = time()
time_end = time_start

while True:
    # skip roughly as many frames as arrived while the previous iteration was processing
    time_difference = int(((time_end - time_start) * cap_fps) + 1)  # note that the 1 might be changed to fit script bandwidth
    for i in range(0, time_difference):
        a = cap.grab()
    _, frame = cap.read()
    time_start = time()

    # Put your code here
    variable = function(frame)
    #...

    time_end = time()
This way the skipped frames adapt to the amount of frames missed in the video stream - allowing for a much smoother transition and a relatively real-time frame representation.

DestroyWindow does not close window on Mac using Python and OpenCV

My program generates a series of windows using the following code:
def display(img, name, fun):
    global clicked
    cv.NamedWindow(name, 1)
    cv.ShowImage(name, img)
    cv.SetMouseCallback(name, fun, img)
    while cv.WaitKey(33) == -1:
        if clicked == 1:
            clicked = 0
            cv.ShowImage(name, img)
    cv.DestroyWindow(name)
I press "q" within the gui window to close it. However, the code continues to the next call of the display function and displays a second gui window while not closing the first. I'm using a Mac with OpenCV 2.1, running the program in Terminal. How can I close the gui windows? Thanks.
You need to run cv.startWindowThread() after opening the window.
I had the same issue and now this works for me.
Hope this helps future readers. And there is also a cv2 binding (I advise using that instead of cv).
This code works for me:
import cv2 as cv
import time

WINDOW_NAME = "win"
image = cv.imread("ela.jpg", cv.CV_LOAD_IMAGE_COLOR)
cv.namedWindow(WINDOW_NAME, cv.CV_WINDOW_AUTOSIZE)
initialtime = time.time()
cv.startWindowThread()

while (time.time() - initialtime < 5):
    print "in first while"
    cv.imshow(WINDOW_NAME, image)
    cv.waitKey(1000)

cv.waitKey(1)
cv.destroyAllWindows()
cv.waitKey(1)

initialtime = time.time()
while (time.time() - initialtime < 6):
    print "in second while"
The same issue happens with the C++ version, on Linux:
Trying to close OpenCV window has no effect
There are a few peculiarities with the GUI in OpenCV. The destroyWindow call fails to close a window (at least under Linux, where the default backend was Gtk+ until 2.1.0) unless waitKey was called to pump the events. Adding a waitKey(1) call right after destroyWindow may work.
Even so, closing is not guaranteed; keyboard events are only intercepted by the waitKey function if a window has focus, and so if the window didn't have focus at the time you invoked destroyWindow, chances are it'll stay visible till the next destroyWindow call.
I'm assuming this is a behaviour that stems from Gtk+; the function didn't give me any trouble when I used it under Windows.
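As a small illustration of that suggestion (the window name and the synthetic image are my assumptions, just to keep the snippet self-contained):
import cv2
import numpy as np

img = np.zeros((100, 100), dtype=np.uint8)  # placeholder image
cv2.imshow("preview", img)
cv2.waitKey(1000)            # display for roughly a second
cv2.destroyWindow("preview")
cv2.waitKey(1)               # pump the event loop so the destroy is actually processed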
Sayem2603, I tried your solution and it worked for me - thanks! I did some trial and error and discovered that looping 4 times did the trick for me... or posting the same code 4 times, just the same.
Further, I drilled down to:
cv2.destroyAllWindows()
cv2.waitKey(1)
cv2.waitKey(1)
cv2.waitKey(1)
cv2.waitKey(1)
or simply calling DestroyAllWindows and then looping the waitKey() code 4 times:
cv2.destroyAllWindows()
for i in range(1, 5):
    cv2.waitKey(1)
Worked as well. I am not savvy enough to know why this works exactly, though I assume it has something to do with the interruption and delay created by looping that code(?)
Matthäus Brandl said, above, that the third waitKey() worked for him, so perhaps it is slightly different on each system? (I am running Linux Mint with 3.16.1 kernel and python 2.7)
The delay alone doesn't explain it, as simply increasing the delay time on the waitKey() does not do the trick. (I also looped print("Hello") 1000 times instead of using waitKey(), just to see if the delay it created helped any - it did not.) It must have something more to do with how waitKey() interacts with window events.
OpenCV Docs say: "This function is the only method in HighGUI that can fetch and handle events, so it needs to be called periodically for normal event processing unless HighGUI is used within an environment that takes care of event processing."
Perhaps it creates an interrupt of sorts in the GUI display that allows the destroyAllWindows() action to process?
J
Here is what worked for me:
cv2.namedWindow("image")
cv2.imshow('image', img)
cv2.waitKey(0) # close window when a key press is detected
cv2.destroyWindow('image')
cv2.waitKey(1)
This solution works for me (under Ubuntu 12.04 with python open in the shell):
Re-invoke cv.ShowImage after the window is 'destroyed'.
If you are using Spyder (the Anaconda package), that may be the problem.
None of the solutions worked for me.
I discovered that the problem wasn't the functions, but Spyder itself. Try using a plain text editor and running from the terminal, and you will be fine simply using:
WINDOW_NAME = "win"
image = cv.imread("foto.jpg", 0)
cv.namedWindow(WINDOW_NAME, cv.CV_WINDOW_AUTOSIZE)
cv.startWindowThread()
cv.imshow(WINDOW_NAME, image)
cv.waitKey()
cv.destroyAllWindows()
I solved the problem by calling cv2.waitKey(1) in a for loop. I don't know why it worked, but it gets my job done, so I didn't bother digging further.
for i in range(1, 10):
    cv2.destroyAllWindows()
    cv2.waitKey(1)
You are welcome to explain why.
It seems that none of the above solutions worked for me if I run it on Jupyter Notebook (the window hangs when closing and you need to force quit Python to close the window).
I am on macOS High Sierra 10.13.4, Python 3.6.5, OpenCV 3.4.1.
The below code works if you run it as a .py file (source: https://www.learnopencv.com/read-write-and-display-a-video-using-opencv-cpp-python/). It opens the camera, records the video, closes the window successfully upon pressing 'q', and saves the video in .avi format.
import cv2
import numpy as np

# Create a VideoCapture object
cap = cv2.VideoCapture(0)

# Check if camera opened successfully
if (cap.isOpened() == False):
    print("Unable to read camera feed")

# Default resolutions of the frame are obtained. The default resolutions are system dependent.
# We convert the resolutions from float to integer.
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))

# Define the codec and create VideoWriter object. The output is stored in 'outpy.avi' file.
out = cv2.VideoWriter('outpy.avi', cv2.VideoWriter_fourcc('M','J','P','G'), 10, (frame_width, frame_height))

while(True):
    ret, frame = cap.read()

    if ret == True:
        # Write the frame into the file 'output.avi'
        out.write(frame)

        # Display the resulting frame
        cv2.imshow('frame', frame)

        # Press Q on keyboard to stop recording
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # Break the loop
    else:
        break

# When everything done, release the video capture and video write objects
cap.release()
out.release()

# Closes all the frames
cv2.destroyAllWindows()
Fiddling around with this issue in the python console I observed the following behavior:
issuing a cv2.imshow after cv2.destroyWindow sometimes closes the window. Albeit the old window pops up again with the next highgui call, e.g., cv2.namedWindow
the third call of cv2.waitKey after cv2.destroyWindow closed the window every time I tried. Additionally the closed window remained closed, even when using cv2.namedWindow afterwards
Hope this helps somebody.
(I used Ubuntu 12.10 with python 2.7.3 but OpenCV 2.4.2 from the 13.04 repos)
After searching around for some time, none of the solutions provided worked for me, so since there's a bug in this function and I did not have time to fix it, I simply did not use the cv2 window to show the frames. Once a few frames have been saved, you can open the file in a different viewer, like VLC or MoviePlayer (for Linux).
Here's how I did mine.
import cv2

threadDie = True  # change this to false elsewhere to stop getting the video

def getVideo(Message):
    print Message
    print "Opening url"
    video = cv2.VideoCapture("rtsp://username:passwordp#IpAddress:554/axis-media/media.amp")
    print "Opened url"
    fourcc = cv2.cv.CV_FOURCC('X','V','I','D')
    fps = 25.0  # or 30.0 for a better quality stream
    writer = cv2.VideoWriter('out.avi', fourcc, fps, (640, 480), 1)
    i = 0
    print "Reading frames"
    while threadDie:
        ret, img = video.read()
        print "frame number: ", i
        i = i + 1
        writer.write(img)
    del(video)
    print "Finished capturing video"
Then open the file with a different viewer, probably in another function. For example, if you like VLC, you can start it and pass the saved file as a parameter. On the terminal, I would do this:
vlc out.avi  # out.avi is my video file being saved by the function above
This worked for me on Arch Linux.
I had the same issue. The problem is that the while(cap.isOpened()): loop does not finish, so I added the structure below. When the video has no frames left, it returns ret as False. Normally I put the destroyAllWindows command outside the loop, but I moved it into the loop. It works properly in my code.
while(cap.isOpened()):
    ret, frame = cap.read()
    if ret == False:
        cap.release()
        cv2.waitKey(1)
        cv2.destroyAllWindows()
        cv2.waitKey(1)
This worked for me in Spyder:
import cv2 as cv
cv.namedWindow("image")
img = cv.imread("image_name.jpg")
cv.imshow("image",img)
cv.waitKey(5000) # 5 sec delay before image window closes
cv.destroyWindow("image")
Remember to use only cv.waitKey(positive integer) for this to work.
cv2.imshow("the image I want to show ",img)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.waitKey(1) # to close the window.
The above code worked well for me.
I'm using a Mac and Python 3.7.
Close the terminal first, then close the window; that worked for me in Visual Studio Code on Windows. I made a task to compile and run the executable in the terminal. The program used my webcam to capture video and display it in a Qt window. When I clicked the close button it didn't close; it reopened itself and continued with the program. Once I closed the terminal, I could then close the program window without it reopening again.
