The error I get while debugging in IDLE (Python 3.7) is:
cv2.error: OpenCV(3.4.3) (some directory files)
- error : (-215:Assertion failed) !_src.empty() in function 'cvtColor'
The program itself is taken from a website:
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier('C:\Program Files\Python38\Lib\site-packages\cv2\data\haarcascade_frontalface_alt2.xml')
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)
    for (x, y, w, h) in faces:
        print(x, y, w, h)
        roi_gray = gray[y:y + h, x:x + w]  # (ycord_start, ycord_end)
        roi_color = frame[y:y + h, x:x + w]
        img_item = 'my-image.png'
        cv2.imwrite(img_item, roi_gray)
        cv2.imshow('frame', frame)
        color = (0, 0, 255)
        stroke = 2
        width = x + w
        height = y + h
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    if cv2.waitKey(20) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
So I would like to know what is causing this error.
The problem is in
cap = cv2.VideoCapture(0)
In fact, 0 is the id of the video capturing device to open (i.e. a camera index). I imagine the program can't detect any camera, which is why it throws this error. You can check whether video capturing was initialized by printing cap.isOpened() before the loop; if it is False, then the video capture failed to initialize.
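A minimal sketch of that check, assuming camera index 0, could look like this:

import cv2

cap = cv2.VideoCapture(0)
# if the device could not be opened, cap.read() returns an empty frame and cvtColor fails
if not cap.isOpened():
    print("Cannot open camera 0 -- try another index, e.g. cv2.VideoCapture(1)")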
Regards
I am trying to do face and eye recognition, drawing rectangles around the eyes and face, using OpenCV in Python.
I read some questions on similar topics and tried the answers below them, but I am still getting the error. The code I tried activates the webcam, and within a second it stops working.
Here is the error message:
error: OpenCV(4.5.4-dev) D:\a\opencv-python\opencv-python\opencv\modules\objdetect\src\cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function 'cv::CascadeClassifier::detectMultiScale'
Here is the code I tried:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye_default.xml')

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 5)
        roi_gray = gray[y:y+w, x:x+w]
        roi_color = frame[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray, 1.3, 5)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 5)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Thank you.
Edit: I noticed that once I run the code, the webcam is activated, and if there is no face in front of the camera it stays active without any error; but as soon as I show my face it stops working.
Hope this will work!
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")

while True:
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 5)
        roi_gray = gray[y:y+w, x:x+w]
        roi_color = frame[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray, 1.3, 5)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 5)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
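If the file name or path is still wrong, the classifier silently loads as empty, and detectMultiScale raises exactly this assertion. A quick sanity check you can add right after loading (a sketch, using the cascade folder bundled with the opencv-python package) is:

import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml')
# empty() is True when the XML could not be loaded
if eye_cascade.empty():
    raise IOError('haarcascade_eye.xml not found -- check the path passed to CascadeClassifier')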
Last time I checked my code, the window size was normal. But when I run it now, the window has become small. Does anybody know how to get it back to normal?
Here's my code:
import cv2 as cv

face_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')
eyeglasses_cascade = cv.CascadeClassifier('haarcascade_eye_tree_eyeglasses.xml')
cap = cv.VideoCapture(0)

while cap.isOpened():
    _, frame = cap.read()
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 4)
    for (x, y, w, h) in faces:
        cv.rectangle(frame, (x, y), (x+w, y+w), (255, 0, 0), 3)
        cv.putText(frame, 'Face', (x, y+h+30), cv.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]
        glasses = eyeglasses_cascade.detectMultiScale(roi_gray)
        for (gx, gy, gw, gh) in glasses:
            cv.rectangle(roi_color, (gx, gy), (gx+gw, gy+gh), (0, 255, 0), 2)
    cv.imshow("img", frame)
    if cv.waitKey(1) & 0xFF == ord('x'):
        break

cap.release()
You have to tell OpenCV what size to use with the capture device:
cap = cv.VideoCapture(0)
cap.set(cv.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv.CAP_PROP_FRAME_HEIGHT, 1080)
Note, your camera may only support certain resolutions, so it's important to check that.
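For instance, one way to see which mode the driver actually applied (a sketch; the supported modes depend on your camera) is to read the properties back after setting them:

cap.set(cv.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv.CAP_PROP_FRAME_HEIGHT, 1080)
# cameras silently fall back to a supported mode, so read back what was actually applied
print(cap.get(cv.CAP_PROP_FRAME_WIDTH), cap.get(cv.CAP_PROP_FRAME_HEIGHT))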
I restarted my PC and, weirdly enough, it works. The window size is now back to normal.
I am trying to run this OpenCV code to detect faces with my video camera. It gives me an error whenever I run the code: the light on my video camera blinks, but then it shuts down with this error in the console: cv2.error: OpenCV(4.5.1) error: (-215:Assertion failed) !empty() in function 'cv::CascadeClassifier::detectMultiScale'
Here's the code
import cv2

# Load the cascade
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# To capture video from webcam.
cap = cv2.VideoCapture(0)
# To use a video file as input
# cap = cv2.VideoCapture('filename.mp4')

while True:
    # Read the frame
    _, img = cap.read()
    # Convert to grayscale
    # THIS IS THE ERROR AREA
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Detect the faces
    faces = face_cascade.detectMultiScale(gray, 1.1, 4)
    # Draw the rectangle around each face
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    # Display
    cv2.imshow('img', img)
    # Stop if escape key is pressed
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

# Release the VideoCapture object
cap.release()
It seems your script is not able to find the haarcascade_frontalface_default.xml file because of the relative path. Try giving an absolute path and check.
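For example (a sketch, assuming you installed OpenCV via the opencv-python package, which ships the cascade files and exposes their folder as cv2.data.haarcascades):

import cv2

# use the cascade bundled with opencv-python instead of relying on the working directory
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
# or point at the file with an absolute path of your own, e.g.
# face_cascade = cv2.CascadeClassifier(r'C:\path\to\haarcascade_frontalface_default.xml')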
cv2.error: OpenCV(4.4.0) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-2y91i_7w\opencv\modules\objdetect\src\cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function 'cv::CascadeClassifier::detectMultiScale'
[ WARN:0] global C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-2y91i_7w\opencv\modules\videoio\src\cap_msmf.cpp (435) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
This is the error I'm facing.
Below is my code:
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
smile_cascade = cv2.CascadeClassifier('haarcascade_smile.xml')

def detect(gray, frame):
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), ((x+w), (y+h)), (2555, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = frame[y:y + h, x:x + w]
        smiles = smile_cascade.detectMultiScale(roi_gray, 1.8, 20)
        for (sx, sy, sw, sh) in smiles:
            cv2.rectangle(roi_color, (sx, sy), ((sx + sw), (sy + sh)), (0, 0, 225), 2)
        return frame

video_capture = cv2.VideoCapture(0)
while True:
    _, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    canvas = detect(gray, frame)
    cv2.imshow('Video', canvas)
    if cv2.waitkey(1) & xff == qrd('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
I got a different error and fixed the indentation of the return line in the detect() method; see the comment.
There were also some errors on the waitkey() line; the function is actually waitKey().
This should work (at least it does on my machine):
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
smile_cascade = cv2.CascadeClassifier('haarcascade_smile.xml')

def detect(gray, frame):
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), ((x+w), (y+h)), (2555, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = frame[y:y + h, x:x + w]
        smiles = smile_cascade.detectMultiScale(roi_gray, 1.8, 20)
        for (sx, sy, sw, sh) in smiles:
            cv2.rectangle(roi_color, (sx, sy), ((sx + sw), (sy + sh)), (0, 0, 225), 2)
    return frame  # << outdent

video_capture = cv2.VideoCapture(0)
while True:
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    canvas = detect(gray, frame)
    cv2.imshow('Video', canvas)
    # changed the waitKey() below and added ret:
    keypressed = cv2.waitKey(10)
    if keypressed == ord('q') or not ret:
        break

video_capture.release()
cv2.destroyAllWindows()
I have the following code:
import numpy as np
import cv2
import pickle
import rtsp
import PIL as Image

face_cascade = cv2.CascadeClassifier('cascades\data\haarcascade_frontalface_alt2.xml')

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trainner.yml")

labels = {"person_name": 1}
with open("labels.pickle", 'rb') as f:
    og_labels = pickle.load(f)
    labels = {v: k for k, v in og_labels.items()}

url = 'rtsp://user:pass#xxx.xxx.x.xxx:YYYY/stream0/mobotix.mjpeg'

with rtsp.Client(url) as client:
    client.preview()
    while True:
        frame = client.read(raw=True)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in faces:
            print(x, y, w, h)
            roi_gray = gray[y:y+h, x:x+w]
            roi_color = frame[y:y+h, x:x+w]
            # recognize?
            id_, conf = recognizer.predict(roi_gray)
            if conf >= 45:  # and conf <= 85:
                print(id_)
                print(labels[id_])
                font = cv2.FONT_HERSHEY_SIMPLEX
                name = labels[id_]
                color = (0, 0, 255)
                stroke = 2
                cv2.putText(frame, name, (x, y), font, 1, color, stroke, cv2.LINE_AA)
            #img_item = "my-image.png"
            #cv2.imwrite(img_item, roi_gray)
            color = (0, 0, 255)
            stroke = 2
            end_cord_x = x + w
            end_cord_y = y + h
            cv2.rectangle(frame, (x, y), (end_cord_x, end_cord_y), color, stroke)
        cv2.imshow('IP cam', frame)
        if cv2.waitKey(20) & 0xFF == ord('q'):
            break

cv2.destroyAllWindows()
Everything is working fine. When I run the code, the cam view from client.preview() opens first, and face detection is not working at that point. When I close that window, the 'IP cam' window opens and everything works (I'm still losing a lot of frames from the RTSP stream, but that is not a direct issue).
If I leave client.preview() out, I get an error from OpenCV because of !_src.empty().
If I change the call to client.read(), the same OpenCV !_src.empty() error occurs.
How should I fix this?
OK, I understand that the RTSP client first returns empty frames while it builds up its buffer.
with rtsp.Client(url) as client:
    while True:
        frame = client.read(raw=True)
        if not frame:
            time.sleep(.30)
        else:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
            for (x, y, w, h) in faces:
                cv2.imshow('IP cam', frame)
        if cv2.waitKey(20) & 0xFF == ord('q'):
            break

cv2.destroyAllWindows()
I deleted the client.preview() line because it duplicated functionality.
The code starts, and after a while I receive the following error:
if not frame:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Before the RTSP client has received its first frame, calling read on the client returns None. When you call client.preview(), the time taken for the window to open gives the RTSP client enough time to receive the first frame from the stream before the first read is called; every read after that returns a frame, which is why everything works when preview is included. The solution to your problem is to check the result of the client's read() and make sure you actually received a frame before processing it.
import numpy as np
import cv2
import pickle
import rtsp
import PIL as Image
import time

face_cascade = cv2.CascadeClassifier('cascades\data\haarcascade_frontalface_alt2.xml')

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("trainner.yml")

labels = {"person_name": 1}
with open("labels.pickle", 'rb') as f:
    og_labels = pickle.load(f)
    labels = {v: k for k, v in og_labels.items()}

url = 'rtsp://user:pass#xxx.xxx.x.xxx:YYYY/stream0/mobotix.mjpeg'

with rtsp.Client(url) as client:
    client.preview()
    while True:
        frame = client.read(raw=True)
        if frame is None:
            # no frame received yet, wait briefly and try again
            time.sleep(.10)
        else:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
            for (x, y, w, h) in faces:
                print(x, y, w, h)
                roi_gray = gray[y:y+h, x:x+w]
                roi_color = frame[y:y+h, x:x+w]
                # recognize?
                id_, conf = recognizer.predict(roi_gray)
                if conf >= 45:  # and conf <= 85:
                    print(id_)
                    print(labels[id_])
                    font = cv2.FONT_HERSHEY_SIMPLEX
                    name = labels[id_]
                    color = (0, 0, 255)
                    stroke = 2
                    cv2.putText(frame, name, (x, y), font, 1, color, stroke, cv2.LINE_AA)
                #img_item = "my-image.png"
                #cv2.imwrite(img_item, roi_gray)
                color = (0, 0, 255)
                stroke = 2
                end_cord_x = x + w
                end_cord_y = y + h
                cv2.rectangle(frame, (x, y), (end_cord_x, end_cord_y), color, stroke)
            cv2.imshow('IP cam', frame)
        if cv2.waitKey(20) & 0xFF == ord('q'):
            break

cv2.destroyAllWindows()