opencv Python face recognizer.load

I have a problem with face detection.
My code:
import cv2
import os
import numpy as np
from PIL import Image
path = 'dataSet'
cascadePath = "Classifiers/face.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
cam = cv2.VideoCapture(0)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.load('trainer/trainer.yml')
This raises:
AttributeError: 'cv2.face_LBPHFaceRecognizer' object has no attribute 'load'
Please help; I have already researched this and still have not found an answer.
I am using Python 3.6.1 and OpenCV 3.0.

That's because it doesn't have that method. You mean read:
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
Here's the help:
| read(...)
|     read(filename) -> None
|     @brief Loads a FaceRecognizer and its model state.
|
|     Loads a persisted model and state from a given XML or YAML file. Every FaceRecognizer has to
|     overwrite FaceRecognizer::load(FileStorage& fs) to enable loading the model state.
|     FaceRecognizer::load(FileStorage& fs) in turn gets called by
|     FaceRecognizer::load(const String& filename), to ease saving a model.
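Putting it together, here is a minimal sketch of loading the trained model and predicting on one webcam frame. The file paths are taken from the question; the detectMultiScale parameters are just example values.

import cv2

faceCascade = cv2.CascadeClassifier("Classifiers/face.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')  # read(), not load()

cam = cv2.VideoCapture(0)
ret, frame = cam.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in faceCascade.detectMultiScale(gray, 1.2, 5):
    # predict() returns the trained label id and a distance-like
    # confidence score (lower means a closer match for LBPH)
    label, confidence = recognizer.predict(gray[y:y+h, x:x+w])
    print(label, confidence)
cam.release()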

AttributeError: module 'cv2' has no attribute 'face' OpenCV 4.7.0

After referring to some Stack Overflow answers I ran pip install opencv-contrib-python, but I am still getting the error.
I am using OpenCV 4.7.0.
This is for a facial-recognition project tutorial that I am following.
import cv2
import numpy as np
from PIL import Image
import os

# Path for face image database
path = 'dataset'

recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# function to get the images and label data
def getImagesAndLabels(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    faceSamples = []
    ids = []
    for imagePath in imagePaths:
        PIL_img = Image.open(imagePath).convert('L')  # grayscale
        img_numpy = np.array(PIL_img, 'uint8')
        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)
        for (x, y, w, h) in faces:
            faceSamples.append(img_numpy[y:y+h, x:x+w])
            ids.append(id)
    return faceSamples, ids

print("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces, ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))

# Save the model into trainer/trainer.yml
recognizer.write('trainer/trainer.yml')

# Print the number of faces trained and end program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))
I keep getting this error:
File "/Users/sashuponnaganti/workspace/Facial Recognition Project/face_trainer.py", line 7, in <module>
recognizer = cv2.face.LBPHFaceRecognizer_create()
AttributeError: module 'cv2' has no attribute 'face'
Any ideas how to fix this?
I already tried pip install opencv-contrib-python, but it was already installed, so it made no difference.
Update: this question has been solved. I had installed all my OpenCV packages in a conda environment but was running the script with the wrong interpreter.
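If you hit the same symptom, a quick sanity check (just a sketch, not specific to this project) is to print which interpreter and which OpenCV build your script actually sees:

import sys
import cv2

print(sys.executable)        # the Python interpreter running this script
print(cv2.__version__)       # the OpenCV build that interpreter sees
print(hasattr(cv2, 'face'))  # True only if the contrib modules are present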

OpenCV: Error in loading Net from onnx file

I'm trying to load a pre-trained Torch model (U2Net, to be precise), saved as ONNX, with cv.dnn.readNetFromONNX.
But I'm receiving this error:
error: OpenCV(4.1.2) /io/opencv/modules/dnn/include/opencv2/dnn/dnn.inl.hpp:349:
error (-204:Requested object was not found) Required argument "starts" not found
into dictionary in function 'get'
This is the code to reproduce the error with Google Colab:
### get U2Net implementation ###
%cd /content
!git clone https://github.com/shreyas-bk/U-2-Net
### download pre-trained model ###
!gdown --id 1ao1ovG1Qtx4b7EoskHXmi2E9rp5CHLcZ -O /content/U-2-Net/u2net.pth
###
%cd /content/U-2-Net
### imports ###
from google.colab import files
from model import U2NET
import torch
import os
### create U2Net model from state ###
model_dir = '/content/U-2-Net/u2net.pth'
net = U2NET(3, 1)
net.load_state_dict(torch.load(model_dir, map_location='cpu'))
net.eval()
### pass to it a dummy input and save to onnx ###
img = torch.randn(1, 3, 320, 320, requires_grad=False)
img = img.to(torch.device('cpu'))
output_dir = os.path.join('/content/u2net.onnx')
torch.onnx.export(net, img, output_dir, opset_version=11, verbose=True)
### load the model in OpenCV ###
import cv2 as cv
net = cv.dnn.readNetFromONNX('/content/u2net.onnx')
[OpenCV => 4.1.2, Platform => Google Colab, Torch => 1.11.0+cu113]
As @berak suggested, the issue was related to the OpenCV version (4.1.2). Updating to 4.5.5 solved the issue.
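For reference, a minimal sketch of the fix in a Colab cell; the version pin mirrors the 4.5.5 mentioned above, and you may need to restart the runtime for the new build to take effect:

!pip install -q "opencv-python>=4.5.5"

import cv2 as cv
print(cv.__version__)  # confirm the runtime picked up the new build
net = cv.dnn.readNetFromONNX('/content/u2net.onnx')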

Frame and keras model

I have a model that predicts the steering angle of a car from a picture, and I would like to implement it in an Android project by getting the frames from the camera.
In my Python code I use an .h5 file instead of the .tflite file; the picture is converted to a NumPy array and processed using the cv2 library.
Python code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model
from PIL import Image
import cv2

def img_preprocess(img):
    img = img[60:135, :, :]                    # crop the region of interest (rows 60-135)
    img = cv2.cvtColor(img, cv2.COLOR_RGB2YUV)
    img = cv2.GaussianBlur(img, (3, 3), 0)
    img = cv2.resize(img, (200, 66))
    img = img / 255                            # normalize to [0, 1]
    return img

if __name__ == '__main__':
    model = load_model('model.h5')
    image1 = Image.open("Gta2.png")
    image1 = np.asarray(image1)
    image1 = img_preprocess(image1)
    image1 = np.array([image1])
    steering_angle = float(model.predict(image1))
    if steering_angle > 0:
        print('turn right')
        print('turn wheel : {}'.format(steering_angle))
    else:
        print('turn left')
        print('turn wheel : {}'.format(steering_angle))
I've imported model.tflite into my project's assets, and now I need to process the CameraBridgeViewBase.CvCameraViewFrame object from the camera to fit my model's input.
[screenshot of the model's input and output specs]
So my questions are:
How do I process the CvCameraViewFrame object the way the Python img_preprocess function does?
How do I reach the input specs?
My Android code:
You can first convert your Mat frame to a Bitmap, as in the solution to this question.
Then use the TFLite Task Library's ImageClassifier to run the inference. Here is an example of how to use ImageClassifier, which comes from the TFLite image classification reference app.
Alternatively, you can write your own code to process the image and run inference using Interpreter. See the example here.
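To answer the input-specs part from the Python side, here is a minimal sketch of the Interpreter route; it uses a dummy array in place of a real preprocessed frame, and model.tflite is the file from the question:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# inspect the input specs the question asks about, e.g. (1, 66, 200, 3)
print(input_details[0]['shape'], input_details[0]['dtype'])

# dummy input standing in for a preprocessed camera frame
img = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], img)
interpreter.invoke()
steering_angle = float(interpreter.get_tensor(output_details[0]['index']))
print(steering_angle)

The same steps (resize/normalize, fill the input tensor, invoke, read the output tensor) carry over to the Android Interpreter API.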

How can I plot my trained model result on video using Detectron2?

I am new to using Detectron2. I want to load a video from my local drive and then run detection with my trained model using Detectron2's VideoVisualizer.
I tried to find a tutorial about this, but it doesn't seem to exist. Could you please tell me what to do?
Thank you.
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import os
import tqdm
import cv2
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.video_visualizer import VideoVisualizer
from detectron2.utils.visualizer import ColorMode, Visualizer
from detectron2.data import MetadataCatalog
import time
video = cv2.VideoCapture('gdrive/My Drive/video.mp4')
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.OUTPUT_DIR = 'gdrive/My Drive/mask_rcnn/'
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set threshold for this model
predictor = DefaultPredictor(cfg)
v = VideoVisualizer(MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), ColorMode.IMAGE)
First, check the following tutorial (you can skip the training parts if you don't want to train on your own data):
https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5#scrollTo=Vk4gID50K03a
Then, look at the following code for inference on video:
https://github.com/facebookresearch/detectron2/blob/master/demo/demo.py
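For the VideoVisualizer part specifically, a minimal per-frame loop might look like the sketch below. It builds on the video, predictor, and v objects defined in the question; note that VideoVisualizer expects RGB frames while OpenCV reads BGR:

import cv2

while video.isOpened():
    ok, frame = video.read()
    if not ok:
        break
    outputs = predictor(frame)  # DefaultPredictor takes a BGR frame
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    vis = v.draw_instance_predictions(frame_rgb, outputs["instances"].to("cpu"))
    # vis.get_image() is RGB; convert back to BGR before writing/showing with cv2
    out_frame = cv2.cvtColor(vis.get_image(), cv2.COLOR_RGB2BGR)
video.release()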

importing opencv modules

I have some simple code, shown below:

import cv
from opencv.cv import *
from opencv.highgui import *

img = cv.LoadImage("test.jpg")
cap = cv.CreateCameraCapture(0)
while cv.WaitKey(1) != 10:
    img = cv.QueryFrame(cap)
    cv.ShowImage("cam view", img)
cascade = cv.LoadHaarClassifierCascade('haarcascade_frontalface_alt.xml', cv.Size(1,1))
But I faced this error:
AttributeError: 'module' object has no attribute 'LoadImage'
When I change the code to the following:
import cv
#from opencv.cv import *
#from opencv.highgui import *

img = cv.LoadImage("test.jpg")
cap = cv.CreateCameraCapture(0)
while cv.WaitKey(1) != 10:
    img = cv.QueryFrame(cap)
    cv.ShowImage("cam view", img)
cascade = cv.LoadHaarClassifierCascade('haarcascade_frontalface_alt.xml', cv.Size(1,1))
Now the first error is solved, but another error is raised:
AttributeError: 'module' object has no attribute 'LoadHaarClassifierCascade'
I need both of the modules, but they seem to conflict with each other.
What should I do now?
In OpenCV, to load a Haar classifier (in the Python interface, anyway), you just use cv.Load:
import cv
cascade = cv.Load('haarcascade_frontalface_alt.xml')
See the examples here.
Also, the samples that come with the OpenCV source are really good (in OpenCV-2.xx/samples/python).
How do you access the stuff you've imported?
# imports the cv module; everything contained in it and
# the module itself are now accessible via cv.classname, cv.functionname,
# where classname/functionname is the name of the class/function
# the cv module provides
import cv

# imports everything contained in the opencv.cv module;
# after this import it is available via its class name, function name, etc.
# Attention: without the prefix!
from opencv.cv import *

# see the opencv.cv import above
from opencv.highgui import *

# See the Python documentation on modules for more details about modules and imports.
If you can tell me which classes are contained in which module, I can add a specific solution for your problem.
