OpenCV has a problem opening the webcam stream - Python

So I'm trying to make a simple computer vision app that draws a differently colored square around your face in a live webcam feed.
The problem is that when I start the app from the VS Code terminal, my laptop webcam turns on for a few seconds and then shuts off, but no app window ever appears.
The error in the terminal:
[ WARN:0] global C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-ttbyx0jz\opencv\modules\videoio\src\cap_msmf.cpp (1021) CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. Error: -2147483638
Traceback (most recent call last):
File "C:\Users\asher\Downloads\Work\Work Stuff\Python Stuff\Learning Python AI blah blah\Face_Realtime.py", line 23, in <module>
grayscaled_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: OpenCV(4.5.2) C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-ttbyx0jz\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
[ WARN:0] global C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-ttbyx0jz\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
My app's code:
import cv2
from random import randrange
# loading pre-trained data from opencv (haarcascade)
# classifier is just detector
trained_face_data = cv2.CascadeClassifier(
    'haarcascade_frontalface_default.xml')
webcam = cv2.VideoCapture(0) # capturing live video
# loop to capture video
while True:
    successful_frame_read, frame = webcam.read()
    # we need to convert to grayscale before detecting faces
    grayscaled_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # we will detect faces using the line below
    face_coordinates = trained_face_data.detectMultiScale(grayscaled_img)
    for (x, y, w, h) in face_coordinates:  # loop to show all faces
        # create rectangles around faces; randrange here creates random colors for the rectangles
        cv2.rectangle(frame, (x, y), (x+w, y+h), (randrange(128, 256), randrange(128, 256), randrange(128, 256)), 10)
    # this is the app name for the window, and the frame to show
    cv2.imshow('Face Detector', frame)
    key = cv2.waitKey(1)
    if key == 81 or key == 113:
        break
webcam.release()
print('Trippin through times lol... but code finished')

Try adding if successful_frame_read: to check whether the frame was read successfully. The if statement ensures that only readable frames get processed. It works because successful_frame_read is a boolean returned by webcam.read() that tells you whether a frame was actually captured; some frames (for example while the camera is still starting up, or when grabbing fails) come back empty, and passing an empty frame to cv2.cvtColor raises exactly the assertion error you are seeing.
Your code should look like this:
import cv2
from random import randrange
# loading pre-trained data from opencv (haarcascade)
# classifier is just detector
trained_face_data = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
webcam = cv2.VideoCapture(0) # capturing live video
# loop to capture video
while True:
    successful_frame_read, frame = webcam.read()
    if successful_frame_read:  # the newly added if statement
        # we need to convert to grayscale before detecting faces
        grayscaled_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # we will detect faces using the line below
        face_coordinates = trained_face_data.detectMultiScale(grayscaled_img)
        for (x, y, w, h) in face_coordinates:  # loop to show all faces
            # create rectangles around faces; randrange here creates random colors for the rectangles
            cv2.rectangle(frame, (x, y), (x+w, y+h), (randrange(128, 256), randrange(128, 256), randrange(128, 256)), 10)
        # this is the app name for the window, and the frame to show
        cv2.imshow('Face Detector', frame)
    key = cv2.waitKey(1)
    if key == 81 or key == 113:
        break
webcam.release()
print('Trippin through times lol... but code finished')
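If the window still never appears, it may also be worth confirming that OpenCV actually opened the camera before entering the loop. This is only a minimal sketch of that extra check (the device index 0 is taken from your code; trying the DirectShow backend instead of the default MSMF one on Windows is an assumption, not something from your post):

import cv2

# Open the camera and verify it is usable; cv2.CAP_DSHOW forces the
# DirectShow backend, which sometimes helps when MSMF can't grab frames.
webcam = cv2.VideoCapture(0, cv2.CAP_DSHOW)
if not webcam.isOpened():
    raise RuntimeError('Could not open webcam 0')

while True:
    successful_frame_read, frame = webcam.read()
    if successful_frame_read:
        cv2.imshow('Face Detector', frame)
    if cv2.waitKey(1) in (81, 113):  # Q or q
        break

webcam.release()
cv2.destroyAllWindows()  # make sure the window is closed on exit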
Fixing the AttributeError
AttributeError: module 'cv2' has no attribute 'CascadeClassifier'
The issue might be a broken installation. To make sure there are no dependency issues, install virtualenv if you haven't already.
Windows:
Install virtualenv:
pip install virtualenv
Go to the directory of the project:
cd C:\Users\asher\Downloads\Work\Work Stuff\Python Stuff\Learning Python AI blah blah
Create a virtualenv called venv:
virtualenv venv
Activate the virtualenv:
venv\Scripts\activate
Then install the Python packages. To install OpenCV, use pip install opencv-python.
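Once the virtualenv is active, a quick sanity check from Python confirms whether the reinstall fixed the AttributeError (just a minimal sketch to run inside the activated venv):

import cv2

# Print the installed OpenCV version and confirm the classifier class exists.
print(cv2.__version__)
print(hasattr(cv2, 'CascadeClassifier'))  # should print True after a clean install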

Related

detectMultiScale function cannot be found in cv2?

First time posting.
I'm currently trying to use OpenCV in Python with Visual Studio on an M1 MacBook Air, but for some reason I cannot use the detectMultiScale function. I have completely uninstalled and reinstalled everything to no avail. The error is: error: (-215) !empty() in function detectMultiScale
For instance:
grayscaled_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
face_coordinates = trained_face_data.detectMultiScale(grayscaled_img)
gets me
face_coordinates = trained_face_data.detectMultiScale(grayscaled_img)
cv2.error: OpenCV(4.5.1) /private/var/folders/nz/vv4_9tw56nv9k3tkvyszvwg80000gn/T/pip-req-build-39p1qqfs/opencv/modules/objdetect/src/cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function 'detectMultiScale'
Also, I am completely new to Python, so a quick question I have is whether I need to create a virtual environment, activate it, and then pip install OpenCV there. I tried installing it from just my Python file's terminal, but I'm unable to import cv2 there. I'm thinking it might be because the file paths are different, but I have no idea how to point pip at a certain file path. I can only import cv2 after installing it inside a venv.
Please advise, thank you!
Before asking this question I've tried :
import cv2
#trained face data import
trained_face_data = cv2.CascadeClassifier('frontfacedetector.xml')
#import my face
img = cv2.imread('Aaron_Prof_Pic.jpg')
#change to grayscale
grayscaled_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#detecting faces
face_coordinates = trained_face_data.detectMultiScale(grayscaled_img)
cv2.imshow('why isnt this working', grayscaled_img)
cv2.waitKey()
print("Code completed")
This means that the Haar cascade XML file isn't present in the same directory as the code, or it's corrupted.
If it is in the same directory, then try using the copy that ships with OpenCV:
trained_face_data = cv2.CascadeClassifier(cv2.data.haarcascades+"haarcascade_frontalface_default.xml")
The final code should look somewhat like :
import cv2
trained_face_data = cv2.CascadeClassifier(cv2.data.haarcascades+"haarcascade_frontalface_default.xml")
img = cv2.imread('Aaron_Prof_Pic.jpg')
imgGray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
faces = trained_face_data.detectMultiScale(imgGray)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow("Face detector", img)
key_press = cv2.waitKey(0)  # wait until a key is pressed
if key_press == 32:  # the ASCII code for spacebar
    quit()  # quits the programme when spacebar is hit
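As a side note, the -215 !empty() assertion from detectMultiScale almost always means the classifier silently failed to load: CascadeClassifier does not raise an error when the XML path is wrong. A small sanity check like this sketch (using the file name from the question) makes the failure explicit:

import cv2

trained_face_data = cv2.CascadeClassifier('frontfacedetector.xml')
# empty() is True when the XML could not be found or parsed, which is
# what later triggers the !empty() assertion inside detectMultiScale.
if trained_face_data.empty():
    raise IOError('Could not load the cascade XML; check the path/filename')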

Python OpenCV images get blurry after successive shots

I'm new to OpenCV and Python, and I've been facing a problem:
I've been doing a project on my Raspberry Pi where the webcam takes a grey scale image, removes the background, and saves it in a folder.
This is used by a machine learning algorithm to detect the object in the image.
The webcam is fixed at a particular point so I first take an image of the background and then take a picture of the object. The background is then removed from the object and it looks fine.
But if I repeat the process and overwrite the image, it becomes blurred.
This keeps happening until, after about three or four shots, the image is so blurry that my program can't identify the object in it.
My code is:
#get Background
import cv2
cam = cv2.VideoCapture(0)
ret, frame = cam.read()
if ret:
    img_name = '/home/pi/Desktop/background.png'
    grey_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(img_name, grey_img)
    print('{} written'.format(img_name))
cam.release()
#takeImage
import cv2
import numpy as np
cam = cv2.VideoCapture(0)  # open the camera again for this script
ret, frame = cam.read()
back = cv2.imread('/home/pi/Desktop/background.png')
if ret:
    img_name = '/home/pi/Desktop/img_capture.png'
    grey_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(img_name, grey_img)
    grey_obj = cv2.subtract(cv2.imread(img_name), back)
    cv2.imwrite(img_name, grey_obj)
    print('{} written'.format(img_name))
cam.release()
I'm using a Logitech webcam but I'm not sure of the exact model
Please help me out, and thanks in advance.
(The images attached to the post showed the capture clear at the start, noticeably blurrier after a few shots, and unrecognizable after that.)
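One thing worth checking (an assumption on my part, not something stated in the post): many webcams need a few frames to settle auto-exposure and auto-focus, so keeping only the very first frame after opening the device can give soft images. A minimal sketch of discarding a few warm-up frames before capturing:

import cv2

cam = cv2.VideoCapture(0)

# Assumption: read and throw away a handful of frames so the camera's
# auto-exposure/auto-focus can settle before the real capture.
for _ in range(10):
    cam.read()

ret, frame = cam.read()
if ret:
    grey_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite('/home/pi/Desktop/img_capture.png', grey_img)
cam.release()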

cv2.imshow image window placement is outside of viewable screen

I am running an Anaconda install of Python 3.5 with cv2 installed from menpo.
I am having trouble with cv2.imshow() inconsistently placing the image window outside of the viewable screen when running code similar to the one below, both as a standalone script and line by line in the console (cmd, spyder, ipython)...
import cv2
img = cv2.imread('Image71.jpg',0)
cv2.startWindowThread()
cv2.namedWindow('image')
cv2.imshow('image',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
I have also tried the above without cv2.startWindowThread() and cv2.namedWindow(), with the same result. The window appears on my taskbar but is not in view, cv2.waitKey(0) responds to the keystroke, and I am not able to bring the window into view using any of the window arrangement shortcut keys for Windows 10 (e.g. alt+tab, Winkey + left, etc.).
My OS is Win10 version 1709.
Any help is much appreciated, thx!
img = cv2.imread("test.png")
winname = "Test"
cv2.namedWindow(winname) # Create a named window
cv2.moveWindow(winname, 40,30) # Move it to (40,30)
cv2.imshow(winname, img)
cv2.waitKey()
cv2.destroyAllWindows()
The answer by Kinght, wrapped in a function for easy calling:
import cv2

def showInMovedWindow(winname, img, x, y):
    cv2.namedWindow(winname)       # Create a named window
    cv2.moveWindow(winname, x, y)  # Move it to (x, y)
    cv2.imshow(winname, img)

img = cv2.imread('path.png')
showInMovedWindow('named_window', img, 0, 200)

opencv, python: How to track live trackers from environment and find changes in environment

I'm trying my hand at OpenCV with Python and I'm kind of stuck.
I want to find specific trackers in every frame of a live camera feed and mark changes in the environment with a tracker of a different color (say red).
Right now, my code takes a frame of the video that I select and shows far too many trackers to make sense of.
Can you help me fix this code?
import numpy as np
import cv2
from matplotlib import pyplot as plt
cap = cv2.VideoCapture(0)
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()
    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Display the resulting frame
    cv2.imshow('frame', gray)
    # Initiate feature detector
    orb = cv2.FastFeatureDetector_create()
    # find the keypoints with ORB
    kp = orb.detect(gray)
    img2 = cv2.drawKeypoints(frame, kp, outImage=None, color=(0, 255, 0), flags=0)
    plt.imshow(img2), plt.show()
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
(The post included two images: one of the scene without an object, and one with a bottle present.)
Now I want to track the changes between the two situations (with the bottle and without it), i.e. changes in the environment of the scene should be detected live from the webcam feed.
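No answer is included for this question here, but a minimal sketch of one common way to detect such changes is to keep a reference frame of the empty scene and compare each new frame against it with cv2.absdiff plus a threshold. Everything below (the threshold value, the blur kernel, the window name) is an assumption for illustration, not the poster's code:

import cv2

cap = cv2.VideoCapture(0)

# Assumption: the first readable frame is the "empty" reference scene.
ret, reference = cap.read()
if not ret:
    raise RuntimeError('Could not read a reference frame')
reference_gray = cv2.GaussianBlur(cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY), (21, 21), 0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    # Pixels that differ strongly from the reference count as environment changes.
    diff = cv2.absdiff(reference_gray, gray)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Draw a red box around each changed region.
    # Note: the two-value return is the OpenCV 4.x signature of findContours.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

    cv2.imshow('changes', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()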

How to make Python perform a command after OpenCV detects a face?

I am building a robot. Pretty much all my code has been copied from other people's projects and tutorials.
I'm using a Raspberry Pi camera to detect faces, and I have an electric Nerf gun that I want to fire ONLY after OpenCV detects a face. Right now, my code fires the Nerf gun whether or not a face is detected. Can you tell me what I have done wrong? I think the problem is in the if len(faces) > 0 area. Here is all my code.
program 1, called cbcnn.py
import RPi.GPIO as gpio
import time
def init():
    gpio.setmode(gpio.BOARD)
    gpio.setup(22, gpio.OUT)

def fire(tf):
    init()
    gpio.output(22, True)
    time.sleep(tf)
    gpio.cleanup()

print 'fire'
fire(3)
program 2, called cbfd2.py
import io
import picamera
import cv2
import numpy
from cbcnn import fire
#Create a memory stream so photos doesn't need to be saved in a file
stream = io.BytesIO()
#Get the picture (low resolution, so it should be quite fast)
#Here you can also specify other parameters (e.g.:rotate the image)
with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)
    camera.vflip = True
    camera.capture(stream, format='jpeg')
#Convert the picture into a numpy array
buff = numpy.fromstring(stream.getvalue(), dtype=numpy.uint8)
#Now creates an OpenCV image
image = cv2.imdecode(buff, 1)
#Load a cascade file for detecting faces
face_cascade = cv2.CascadeClassifier('/home/pi/cbot/faces.xml /haarcascade_frontalface_default.xml')
#Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
#Look for faces in the image using the loaded cascade file
faces = face_cascade.detectMultiScale(gray, 1.1, 5)
print "Found "+str(len(faces))+" face(s)"
if len(faces) > 0:
    ("fire")
#Draw a rectangle around every found face
for (x,y,w,h) in faces:
    cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),2)
#Save the result image
cv2.imwrite('result.jpg',image)
It fires because you have a fire(3) call right after you define the fire(tf) function in cbcnn.py, so it runs as soon as that module is imported. And the line below only evaluates the string "fire" wrapped in parentheses; it never calls the fire function.
if len(faces) > 0:
    ("fire")
If you want to fire only when faces are detected, move that fire(3) under this IF and remove it from the top.
BTW, you're also importing something here, from cbcnn import fire, with the same name as your function. This will shadow the function name, and if you put fire(3) below the import line it may not call the one you expect. Either change your fire(tf) function to fire_rocket(tf) and call fire_rocket(3) under the IF,
or add this to your import line (which you actually aren't even using!) so you can keep your fire function name as is:
from cbcnn import fire as Fire
Edit after question was changed:
Fix the IF I mentioned above and put fire(some number) in there.
The reason it fires is that when you import something from another module, Python runs that whole script. Because fire(3) is in there, the function is called automatically on import.
To avoid this you have to either:
remove other parts from your cbcnn.py: print and fire(3)
Or
put those parts in this IF statement to only run them when you actually run cbcnn.py yourself, and not by importing it:
if __name__ == '__main__':
    print('fire')
    fire(3)
I already answered you in another post, but look at where you are putting:
cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),2)
There you can put whatever you want: if the code gets inside that for loop, it means a face has been detected. In the code you posted, a rectangle is drawn in that case, but you can do anything there, and x, y, w and h give you the coordinates and size of the detected face.
import io
import picamera
import cv2
import numpy
import RPi.GPIO as gpio
import time
#Create a memory stream so photos doesn't need to be saved in a file
stream = io.BytesIO()
#Get the picture (low resolution, so it should be quite fast)
#Here you can also specify other parameters (e.g.:rotate the image)
with picamera.PiCamera() as camera:
    camera.resolution = (320, 240)
    camera.vflip = True
    camera.capture(stream, format='jpeg')
#Convert the picture into a numpy array
buff = numpy.fromstring(stream.getvalue(), dtype=numpy.uint8)
#Now creates an OpenCV image
image = cv2.imdecode(buff, 1)
#Load a cascade file for detecting faces
face_cascade = cv2.CascadeClassifier('/home/pi/cbot/faces.xml/haarcascade_frontalface_default.xml')
#Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
#Look for faces in the image using the loaded cascade file
faces = face_cascade.detectMultiScale(gray, 1.1, 5)
print "Found "+str(len(faces))+" face(s)"
#Draw a rectangle around every found face
for (x,y,w,h) in faces:
    cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),2)

def init():
    gpio.setmode(gpio.BOARD)
    gpio.setup(22, gpio.OUT)

def fire(tf):
    init()
    gpio.output(22, True)
    time.sleep(tf)
    gpio.cleanup()

print 'fire'
fire(3)
#Save the result image
cv2.imwrite('result.jpg',image)
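Putting both answers together: the gun should only fire when the face check actually succeeds. The sketch below is only an illustration of that logic, not the poster's program; it reads a saved photo (test.jpg, a hypothetical file) instead of the picamera stream, assumes a plain cascade path, and relies on cbcnn.py having the if __name__ == '__main__': guard so that the import alone no longer fires the gun.

import cv2
from cbcnn import fire  # safe to import once cbcnn.py has the __main__ guard

# Hypothetical inputs, just to keep the sketch short and self-contained.
image = cv2.imread('test.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
face_cascade = cv2.CascadeClassifier('/home/pi/cbot/haarcascade_frontalface_default.xml')

faces = face_cascade.detectMultiScale(gray, 1.1, 5)
print('Found ' + str(len(faces)) + ' face(s)')

if len(faces) > 0:
    # Call the function; the bare ("fire") string from the question did nothing.
    fire(3)
else:
    print('No face detected, not firing')

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite('result.jpg', image)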
