Here is my code, which runs, but I don't understand why we use:
if cv2.waitKey(1000) & 0xFF == ord('q'):
    break
in the code, under the "Display the resulting frame" comment:
import numpy as np
import cv2

cap = cv2.VideoCapture('C:\\Users\\KRK\\Desktop\\Dec17thVideo.mp4')

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1000) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
waitKey displays the image for the specified number of milliseconds. Without it, you actually wouldn't be able to see anything. The & 0xFF == ord('q') part then detects whether the q key was pressed on the keyboard during that wait.
Think of waitKey as a pause function. After the code has been executed (at lightning speed :) ), waitKey says: pause for 1000 milliseconds to display the frame. During that pause, also detect whether the user pressed q. If q is pressed, break out of the infinite while loop; when that happens, the window is no longer shown.
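On some systems waitKey returns a value with extra high bits set, so the & 0xFF mask keeps only the low byte before comparing it to ord('q'). A minimal sketch of the same idea on a single still image (the file name is just a placeholder):
import cv2

img = cv2.imread('picture.jpg')  # placeholder path, use any image you have
cv2.imshow('image', img)

# waitKey(0) pauses until any key is pressed; mask the result to 8 bits
key = cv2.waitKey(0) & 0xFF
if key == ord('q'):
    print('q was pressed, closing the window')

cv2.destroyAllWindows()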
Their documentation is a good resource as well.
What I'm trying to achieve is to crop my ROI in the video frame into a variable and then pass it on as a parameter.
Consider face detection, where x, y, x+w, y+h are the coordinates of the ROI, i.e. the face; my aim is to crop that face and show it.
The code below is just to explain my error and problem...
import cv2

cap = cv2.VideoCapture("D:\\Downloads\\video2.mp4")

# x, y, w, h will change according to the video, i.e. where the face is detected.
# For the purpose of explaining, I took these values.
x = 50
y = 100
w = 75
h = 90

while(cap.isOpened()):
    _, frame = cap.read()
    crop_frame = frame[y:y+h, x:x+w]
    cv2.imshow("Frame", frame)
    cv2.imshow("crop_frame", crop_frame)

cv2.destroyAllWindows()
cap.release()
But upon doing this, I get this error:
crop_frame=frame[y:y+h,x:x+w]
TypeError: 'NoneType' object is not subscriptable
This error wasn't there when I was working with images, but with video input I get it.
Is there any solution to this problem, or an alternative approach?
Basically you are slicing None when the video ends and no frame can be read: None[y:y+h,x:x+w]
You should use the retval to check if there is a frame to process, see the doc here: cv::VideoCapture::read.
So, try this code as an experiment:
import cv2

cap = cv2.VideoCapture("video.mp4")

while(cap.isOpened()):
    ret, frame = cap.read()
    if ret:
        cv2.imshow("Frame", frame)
    else:
        print('None frame:', frame)
        break

cv2.destroyAllWindows()
cap.release()
In that experiment there is no rendering because no time is given for it; that's why you need to look at the next example.
A simple script follows as an example. These are the main points:
First, your loop is missing the waitKey() function that allows the rendering; see the doc here.
If you want your variable outside the loop, you must define it outside the loop.
Also, you should choose which frame to crop.
You can also add HighGui (https://docs.opencv.org/4.1.0/d7/dfc/group__highgui.html) controls for selecting the frame, etc.
import cv2

cap = cv2.VideoCapture("video.mp4")

x = 50
y = 100
w = 75
h = 90

crop_frame = None

while(cap.isOpened()):
    ret, frame = cap.read()
    if not ret:  # stop when no frame can be read
        break
    cv2.imshow("Frame", frame)
    keypressed = cv2.waitKey(10)
    if keypressed == ord('q'):
        break
    if keypressed == ord('f'):
        crop_frame = frame[y:y+h, x:x+w]
        cv2.imshow("crop_frame", crop_frame)

cv2.destroyAllWindows()
cap.release()
if crop_frame is not None:
    cv2.imwrite('crop_frame.jpg', crop_frame)
You run it and it shows the video.
Press 'f' and the current frame is cropped and presented in a new window.
Press 'q': the loop exits and the cropped frame is saved as an image.
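Since the question mentions face detection, here is a rough sketch of where x, y, w, h would actually come from. This assumes the opencv-python wheel, which ships the Haar cascades under cv2.data.haarcascades, and a placeholder video path:
import cv2

cap = cv2.VideoCapture("video.mp4")
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, w, h) rectangle around a face
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        crop_frame = frame[y:y+h, x:x+w]
        cv2.imshow("crop_frame", crop_frame)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()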
I wrote a script for image processing. I need to take a frame from the camera and then run some operations on it. I can do this, but the time the script needs to initialize the camera is very long. Is there any solution so that when I run my script the camera keeps working all the time and, for example, a frame is saved when I press a button?
This is my code for now:
import cv2

cap = cv2.VideoCapture(1)
cap.set(3, 640)
cap.set(4, 480)

while True:
    _, img = cap.read()
    cv2.imshow('Output', img)
    if cv2.waitKey(1) & 0xFF == ord('s'):
        print('DO IMAGE PROCESSING...')
    elif cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
The problem is that when I press "q" it sometimes doesn't stop. Can you give me advice on which loop, or maybe which library, I should use for that?
Thanks!
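A likely cause is that cv2.waitKey(1) is called twice per loop iteration, so a key press can be consumed by the first call (the 's' check) and never reach the 'q' check. A minimal sketch that reads the keyboard once per iteration and reuses the value:
import cv2

cap = cv2.VideoCapture(1)
cap.set(3, 640)
cap.set(4, 480)

while True:
    ret, img = cap.read()
    if not ret:
        break
    cv2.imshow('Output', img)
    key = cv2.waitKey(1) & 0xFF  # read the keyboard once per frame
    if key == ord('s'):
        print('DO IMAGE PROCESSING...')
    elif key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()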
I'm writing a program to control the video playback speed at a custom rate.
Is there any way to achieve that?
What code should be added to control the playback speed?
import cv2

cap = cv2.VideoCapture('video.mp4')

while(cap.isOpened()):
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
In the docs it is stated:
Note
This function should be followed by waitKey function which displays the image for specified milliseconds. Otherwise, it won’t display the image. For example, waitKey(0) will display the window infinitely until any keypress (it is suitable for image display). waitKey(25) will display a frame for 25 ms, after which display will be automatically closed. (If you put it in a loop to read videos, it will display the video frame-by-frame)
In the cv2.waitKey(X) function, X is the number of milliseconds for which an image is displayed on the screen. In your case it is set to 1, so theoretically you could achieve 1000 fps (frames per second). But frame decoding takes time in the VideoCapture object and limits your framerate. To change the playback speed you need to declare a variable and use it as the parameter of the waitKey function.
import cv2

cap = cv2.VideoCapture('video.mp4')
frameTime = 10  # time of each frame in ms, you can add logic to change this value

while(cap.isOpened()):
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(frameTime) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
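To get a specific playback rate rather than a fixed delay, one option is to derive frameTime from the FPS the container reports. A minimal sketch, assuming CAP_PROP_FPS returns a sensible value for your file and ignoring per-frame decoding overhead:
import cv2

cap = cv2.VideoCapture('video.mp4')

speed = 2.0  # desired playback rate: 2.0 = twice as fast, 0.5 = half speed
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 if the file reports no FPS
frameTime = max(1, int(1000 / (fps * speed)))  # delay per frame in ms

while(cap.isOpened()):
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(frameTime) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()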
Alternatively, since frame decoding is the most time-consuming task, you can move it to a second thread and use a queue of decoded frames. See this link for details.
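A minimal sketch of that idea, assuming Python 3's built-in threading and queue modules and a placeholder video path; a reader thread decodes frames into a bounded queue while the main thread displays them:
import cv2
import queue
import threading

frames = queue.Queue(maxsize=64)  # bounded buffer of decoded frames

def reader(path):
    # Decode frames in a background thread and push them into the queue
    cap = cv2.VideoCapture(path)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        frames.put(frame)
    cap.release()
    frames.put(None)  # sentinel: no more frames

t = threading.Thread(target=reader, args=('video.mp4',), daemon=True)
t.start()

while True:
    frame = frames.get()
    if frame is None:  # reader finished
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()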
The third approach is to separate the grabbing and decoding processes and simply decode every nth frame. That results in displaying only a subset of the frames from the source video, but from the user's perspective the video plays faster.
import cv2

cap = cv2.VideoCapture('video.mp4')
i = 0  # frame counter
frameTime = 1  # time of each frame in ms, you can add logic to change this value

while(cap.isOpened()):
    ret = cap.grab()  # grab frame
    i = i + 1  # increment counter
    if i % 3 == 0:  # display only one third of the frames, you can change this parameter according to your needs
        ret, frame = cap.retrieve()  # decode frame
        cv2.imshow('frame', frame)
        if cv2.waitKey(frameTime) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
You can use ffmpeg to speed up (or slow down) your video, like fast-forwarding, by using "presentation time stamps".
To speed up, an example would be:
ffmpeg -i YOUR_INPUT_MOVIE.mp4 -vf "setpts=0.20*PTS" YOUR_OUTPUT_MOVIE.mp4
which will speed up your movie by 5x.
To slow down, an example would be:
ffmpeg -i YOUR_INPUT_MOVIE.mp4 -vf "setpts=5*PTS" YOUR_OUTPUT_MOVIE.mp4
which will slow down your movie by 5x.
Note: this method will drop frames.
I would like to launch a USB camera on my Raspberry Pi, but when I try to run my Python (OpenCV) code, it shows this message and aborts:
ASSERT: "false" in file qasciikey.cpp, line 501
Aborted
Can someone please explain to me what this error is?
It doesn't work with other code either... but the camera works fine when I open it in programs like Camorama webcam viewer. I read that this part causes the problem:
cv2.waitKey(20)
That is the case in other code (when I uncomment it, the code launches but doesn't show the camera output). But in this code, even if I uncomment it, the code itself doesn't work: it shows the error message above.
Here is the code
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', frame)
    cv2.imshow('gray', gray)
    if cv2.waitKey(20) & 0xFF == ord('q'):  # HERE
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Are you using a Japanese keyboard layout? The assertion appears to be triggered when a key is pressed that Qt cannot map to ASCII, so don't press any Japanese character key while your code is running. For waitKey you can use Esc or the number keys instead:
if cv2.waitKey(20) & 0xFF == ord('0'):
    break
This is what happens when I execute the code:
You can see the frame window opens but doesn't show anything.
I want to use a USB camera with a Raspberry Pi 3 Model B v1.2, using OpenCV 3.3 and Python 2.7.
I work with OpenCV in a virtual environment.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    ret, frame = cap.read()  # Capture frame-by-frame

    # Our operations on the frame come here
    #gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display resulting frame
    cv2.imshow('frame', frame)
    cv2.waitKey(10)
    #if cv2.waitKey(1) & 0xFF == ord('q'):
    #    break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
I just have no idea how to get around this error. I have already searched for the error and I'm feeling helpless; does anyone have an idea?
EDIT: I am currently playing around with the code and I can get frames, but most of the time the screen stays grey. I use # to show what the code looks like now.
OK, it now opens a window and shows the output of the camera
Because of this code:
import sys
sys.path.append('/home/pi/.virtualenvs/cv/lib/python2.7/site-packages/usr/local/lib/python2.7/site-packages')
and I also use sudo python program.py in the terminal.
But this error, "NameError: name 'CV_CAP_PROP_FRAME_HEIGHT' is not defined", still persists...
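That NameError is most likely just the constant's name: in OpenCV 3.x the CV_ prefix was dropped and the properties live directly on the cv2 module. A minimal sketch of setting and reading the frame size with the 3.x names:
import cv2

cap = cv2.VideoCapture(0)

# OpenCV 3.x uses cv2.CAP_PROP_* instead of the old CV_CAP_PROP_* names
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
print(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

cap.release()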