How to solve: internal_error: 'numpy.ndarray' object is not callable? - python

I have this problem. I run this code in a Flask API:
# face verification with the VGGFace2 model
from matplotlib import pyplot
from PIL import Image
from numpy import asarray
from scipy.spatial.distance import cosine
from mtcnn.mtcnn import MTCNN
from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input

# extract a single face from a given photograph
def extract_face(filename, required_size=(254, 254)):
    # load image from file
    pixels = pyplot.imread(filename)
    # create the detector, using default weights
    detector = MTCNN()
    # detect faces in the image
    results = detector.detect_faces(pixels)
    # extract the bounding box from the first face
    x1, y1, width, height = results[0]['box']
    x2, y2 = x1 + width, y1 + height
    # extract the face
    face = pixels[y1:y2, x1:x2]
    # resize pixels to the model size
    image = Image.fromarray(face)
    image = image.resize(required_size)
    face_array = asarray(image)
    # print(face_array)
    return face_array

# extract faces and calculate face embeddings for a list of photo files
def get_embeddings(filenames):
    # extract faces
    faces = [extract_face(f) for f in filenames]
    # convert into an array of samples
    samples = asarray(faces, 'float32')
    # prepare the face for the model, e.g. center pixels
    samples = preprocess_input(samples, version=2)
    # create a vggface model
    model = VGGFace(model='vgg16', include_top=False, input_shape=(254, 254, 3), pooling='max')
    # perform prediction
    yhat = model.predict(samples)
    return yhat

# determine if a candidate face is a match for a known face
def is_match(known_embedding, candidate_embedding, thresh=0.45):
    # calculate distance between embeddings
    score = cosine(known_embedding, candidate_embedding)
    print('Match percentage (%.3f)' % (100 - (100 * score)))
    print('>face is a Match (%.3f <= %.3f)' % (score, thresh))

# define filenames
filenames = ['audacious.jpg', 'face-20190717050545949130_123.jpg']
# get embeddings for the filenames
embeddings = get_embeddings(filenames)
# define sharon stone
sharon_id = embeddings[0]
# verify known photos of sharon
print('Positive Tests')
is_match(embeddings[0], embeddings[1])
On the first request the process works well, but the second request gives this error:
'numpy.ndarray' object is not callable
'Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(3, 3, 3, 64), dtype=float32) is not an element of this graph.'
If I don't run it through the API, but just as a file with python3 file.py, it never gives any errors, no matter how many times I run it.
Any clue?

Check this line:
samples = asarray(faces, 'float32')
and try to replace it with:
samples = asarray(faces, dtype=np.float32)
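Note that this replacement assumes NumPy is imported under its usual alias; in context the change would look something like:
import numpy as np  # provides the np.float32 dtype used below

# convert the list of face crops into a single float32 batch
samples = np.asarray(faces, dtype=np.float32)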

Related

Pytorch transforms.Compose usage for pair of images in segmentation tasks

I'm trying to use transforms.Compose() in my segmentation task, but I'm not sure how to use the same (almost) random transforms for both the image and the mask.
In my segmentation task I have the raw picture and the corresponding mask, and I'd like to generate more randomly transformed image pairs for training purposes. Meaning: if I apply some transform to my raw pictures, the same transformation should also happen on my mask pictures, and then this pair can go into my CNN. My transformer is something like:
train_transform = transforms.Compose([
    transforms.Resize(512),  # resize, the smaller edge will be matched.
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(90),
    transforms.RandomResizedCrop(320, scale=(0.3, 1.0)),
    AddGaussianNoise(0., 1.),
    transforms.ToTensor(),  # convert a PIL image or ndarray to tensor.
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))  # normalize to Imagenet mean and std
])
mask_transform = transforms.Compose([
    transforms.Resize(512),  # resize, the smaller edge will be matched.
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(90),
    transforms.RandomResizedCrop(320, scale=(0.3, 1.0)),
    ##---------------------!------------------
    transforms.ToTensor(),  # convert a PIL image or ndarray to tensor.
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))  # normalize to Imagenet mean and std
])
Notice that in the code block I added a class that adds random noise to the raw image transformation, which is not in the mask transformation: I want my mask images to follow the raw image transformation but ignore the random noise. So how can these two transformations happen in pairs (with the same random behaviour)?
This seems to have an answer here: How to apply same transform on a pair of picture.
Basically, you can use the torchvision functional API to get a handle to the randomly generated parameters of a random transform such as RandomCrop. Then call torchvision.transforms.functional.crop() on both images with the same parameter values. It seems a bit lengthy but gets the job done. You can skip some transforms on some images, as per your need.
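As a rough sketch of that approach (assuming a recent torchvision; the 320 crop size mirrors the question's transforms, and paired_transform is just an illustrative name):
import random
import torchvision.transforms as transforms
import torchvision.transforms.functional as TF

def paired_transform(image, mask):
    # draw the crop parameters once, then apply the exact same crop to both
    i, j, h, w = transforms.RandomCrop.get_params(image, output_size=(320, 320))
    image = TF.crop(image, i, j, h, w)
    mask = TF.crop(mask, i, j, h, w)
    # share the flip decision as well
    if random.random() > 0.5:
        image = TF.hflip(image)
        mask = TF.hflip(mask)
    # image-only steps (noise, normalization, ...) can be applied to image here
    return image, mask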
Another option that I've seen elsewhere is to re-seed the random generator with the same seed, to force generation of the same random transformations twice. I would think that such implementations are hacky and keep changing with pytorch versions (e.g. whether to re-seed np.random, random, or torch.manual_seed() ?)
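For completeness, the re-seeding variant looks roughly like the sketch below; as noted, it is fragile across PyTorch/torchvision versions because it depends on which RNG the transforms actually consume:
import random
import torch

def apply_with_same_seed(transform, image, mask):
    # replay the same random draws for both calls
    seed = random.randint(0, 2**31 - 1)
    torch.manual_seed(seed)
    random.seed(seed)
    image = transform(image)
    torch.manual_seed(seed)
    random.seed(seed)
    mask = transform(mask)
    return image, mask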
Sabyasachi's answer was really helpful for me, and I was able to use the transformer in PyTorch to transform my images. But since this usage of torchvision.transforms is not the most straightforward way of transforming images, I'm adding my solution, which includes an example of using torchvision.transforms.functional together with skimage.filters; lots of transform functions are available here: https://scikit-image.org/docs/dev/api/skimage.filters.html#skimage.filters.unsharp_mask.
import random
import numpy as np
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
import torchvision.transforms.functional as TF
from skimage.filters import gaussian
from skimage.filters import unsharp_mask

def transformer(image, mask):
    # image and mask are PIL image objects.
    img_w, img_h = image.size
    # Random horizontal flipping
    if random.random() > 0.5:
        image = TF.hflip(image)
        mask = TF.hflip(mask)
    # Random vertical flipping
    if random.random() > 0.5:
        image = TF.vflip(image)
        mask = TF.vflip(mask)
    # Random affine
    affine_param = transforms.RandomAffine.get_params(
        degrees=[-180, 180], translate=[0.3, 0.3],
        img_size=[img_w, img_h], scale_ranges=[1, 1.3],
        shears=[2, 2])
    image = TF.affine(image,
                      affine_param[0], affine_param[1],
                      affine_param[2], affine_param[3])
    mask = TF.affine(mask,
                     affine_param[0], affine_param[1],
                     affine_param[2], affine_param[3])
    image = np.array(image)
    mask = np.array(mask)
    # Random GaussianBlur -- only for images
    if random.random() < 0.25:
        sigma_param = random.uniform(0.01, 1)
        image = gaussian(image, sigma=sigma_param)
    # Random Gaussian noise -- only for images
    if random.random() < 0.25:
        factor_param = random.uniform(0.01, 0.5)
        image = image + factor_param * image.std() * np.random.randn(image.shape[0], image.shape[1])
    # Unsharp filter -- only for images
    if random.random() < 0.25:
        radius_param = random.uniform(0, 5)
        amount_param = random.uniform(0.5, 2)
        image = unsharp_mask(image, radius=radius_param, amount=amount_param)
    f, ax = plt.subplots(1, 2, figsize=(8, 8))
    ax[0].imshow(image)
    ax[1].imshow(mask)
    return image, mask
I think I have a simple solution:
If the images are concatenated, the transformations are applied to all of them identically:
import torch
import torchvision.transforms as T
# Create two fake images (identical for test purposes):
image = torch.randn((3, 128, 128))
target = image.clone()
# This is the trick (concatenate the images):
both_images = torch.cat((image.unsqueeze(0), target.unsqueeze(0)),0)
# Apply the transformations to both images simultaneously:
transformed_images = T.RandomRotation(180)(both_images)
# Get the transformed images:
image_trans = transformed_images[0]
target_trans = transformed_images[1]
# Compare the transformed images:
torch.all(image_trans == target_trans).item()
>> True
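Applied to the image/mask case from the question, a sketch of the same trick could look like this (assuming a 3-channel image tensor, a 1-channel mask tensor, and a torchvision version whose transforms accept tensors; Normalize and any noise stay on the image branch only):
import torch
import torchvision.transforms as T

def paired_geometric_transform(image, mask):
    # image: (3, H, W) float tensor, mask: (1, H, W) float tensor
    both = torch.cat([image, mask], dim=0)                  # (4, H, W)
    both = T.RandomRotation(90)(both)                       # same rotation for all channels
    both = T.RandomResizedCrop(320, scale=(0.3, 1.0))(both) # same crop for all channels
    image, mask = both[:3], both[3:]
    # image-only steps
    image = T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))(image)
    return image, mask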

X.shape[1] size doesn't fit the expected value

I'm currently working on my final degree project in robotics, and I decided to create an open-source robot capable of replicating human emotions. The robot is all set up and ready to receive orders, but I'm still busy coding it. I'm currently basing my code off this method. The idea is to extract 68 facial landmarks from a low-FPS video feed (using an RPi Camera V2), feed those landmarks to a trained SVM classifier and have it return a number from 0-6 depending on the expression it detects (Angry, Disgust, Fear, Happy, Sad, Surprise or Neutral). I'm testing out the capabilities of my model with some pictures I took using the RPi Camera, and this is what I've managed to put together so far in terms of code:
# import the necessary packages
from imutils import face_utils
import dlib
import cv2
import numpy as np
import time
import argparse
import os
import sys

if sys.version_info >= (3, 0):
    import _pickle as cPickle
else:
    import cPickle

from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from data_loader import load_data
from parameters import DATASET, TRAINING, HYPERPARAMS

def get_landmarks(image, rects):
    if len(rects) > 1:
        raise BaseException("TooManyFaces")
    if len(rects) == 0:
        raise BaseException("NoFaces")
    return np.matrix([[p.x, p.y] for p in predictor(image, rects[0]).parts()])

# initialize dlib's face detector (HOG-based) and then create
# the facial landmark predictor
print("Initializing variables...")
p = "shape_predictor_68_face_landmarks.dat"
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(p)
# path to pretrained model
path = "saved_model.bin"
# load pretrained model
print("Loading model...")
model = cPickle.load(open(path, 'rb'))
# initialize final image height & width
height = 48
width = 48
# initialize landmarks variable as empty array
landmarks = []
# load the input image and convert it to grayscale
print("Loading image...")
gray = cv2.imread("foo.jpg")
# detect faces in the grayscale image
print("Detecting faces in loaded image...")
rects = detector(gray, 0)
# loop over the face detections
print("Looping over detections...")
for (i, rect) in enumerate(rects):
    # determine the facial landmarks for the face region, then
    # convert the facial landmark (x, y)-coordinates to a NumPy
    # array
    shape = predictor(gray, rect)
    shape = face_utils.shape_to_np(shape)
    # loop over the (x, y)-coordinates for the facial landmarks
    # and draw them on the image
    for (x, y) in shape:
        cv2.circle(gray, (x, y), 2, (0, 255, 0), -1)
# show the output image with the face detections + facial landmarks
print("Storing saved image...")
cv2.imwrite("output.jpg", gray)
print("Image stored as 'output.jpg'")
# arrange landmarks in array
print("Collecting and arranging landmarks...")
# scipy.misc.imsave('temp.jpg', image)
# image2 = cv2.imread('temp.jpg')
face_rects = [dlib.rectangle(left=1, top=1, right=47, bottom=47)]
landmarks = get_landmarks(gray, face_rects)
# load data
print("Loading collected data into predictor...")
print("Extracted landmarks: ", landmarks)
landmarks = np.array(landmarks.flatten())
# predict expression
print("Making prediction")
predicted = model.predict(landmarks)
However, after running the code everything seems to be fine up until this point:
Making prediction
Traceback (most recent call last):
File "face.py", line 97, in <module>
predicted = model.predict(landmarks)
File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 576, in predict
y = super(BaseSVC, self).predict(X)
File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 325, in predict
X = self._validate_for_predict(X)
File "/usr/local/lib/python2.7/dist-packages/sklearn/svm/base.py", line 478, in _validate_for_predict
(n_features, self.shape_fit_[1]))
ValueError: X.shape[1] = 136 should be equal to 2728, the number of features at training time
I searched for similar issues on this website, but this being such a specific use case, I didn't quite find what I needed. I've been working on the design and research for quite some time, but finding all the snippets needed to make the code work has taken most of my time, and I'd love to polish this concept as soon as possible since the presentation date is approaching quickly. Any and all contributions are greatly welcomed!
Here's the trained model I'm currently using, by the way.
I am probably being silly, but it looks like you define path after you use it to load your model.
Also path seems like a very bad name for a variable containing a file location, perhaps modelFileLocation is less likely to already be defined.
Solved it! It turns out my model was trained on a combination of HOG features and dlib landmarks, but I was only feeding the landmarks to the predictor, which resulted in the size discrepancy.
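For anyone hitting the same mismatch, a hypothetical sketch of building the combined feature vector follows (the variable names refer to the question's code; the HOG parameters are placeholders and must match whatever was used when the SVM was trained):
from skimage.feature import hog

# build the same feature layout the model was trained on: landmarks + HOG
face_gray = cv2.cvtColor(gray, cv2.COLOR_BGR2GRAY)
face_48 = cv2.resize(face_gray, (48, 48))
hog_features = hog(face_48, orientations=8, pixels_per_cell=(16, 16),
                   cells_per_block=(1, 1))
features = np.concatenate([np.asarray(landmarks).flatten(), hog_features])
# scikit-learn's predict expects a 2-D array of shape (n_samples, n_features)
predicted = model.predict(features.reshape(1, -1))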

Incorrect facenet recognition

I've been working on a face recognition attendance management system. I've built the pipeline from scratch, but in the end the script recognizes the wrong face among a group of 10 classes.
I've implemented the following pipeline using TensorFlow and Python:
1. Capture images, resize and align them using dlib's shape predictor, and store them in named folders for later comparison while performing recognition.
2. Pickle the images into a data.pickle file for later deserialization.
3. Use OpenCV and MTCNN to detect faces in a frame captured by the webcam.
4. Pass these faces into a FaceNet network to create 128-D embeddings and compare them with the embeddings present in the pickle database.
Given below is the main file, which runs steps 3 and 4:
from keras import backend as K
import time
from multiprocessing.dummy import Pool
K.set_image_data_format('channels_first')
import cv2
import os
import glob
import numpy as np
from numpy import genfromtxt
import tensorflow as tf
from keras.models import load_model
from fr_utils import *
from inception_blocks_v2 import *
from mtcnn.mtcnn import MTCNN
import dlib
from imutils import face_utils
import imutils
import pickle
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

FRmodel = load_model('face-rec_Google.h5')
# detector = dlib.get_frontal_face_detector()
detector = MTCNN()

# FRmodel = faceRecoModel(input_shape=(3, 96, 96))
#
# # detector = dlib.get_frontal_face_detector()
# # predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
# def triplet_loss(y_true, y_pred, alpha = 0.3):
#     """
#     Implementation of the triplet loss as defined by formula (3)
#
#     Arguments:
#     y_pred -- python list containing three objects:
#         anchor -- the encodings for the anchor images, of shape (None, 128)
#         positive -- the encodings for the positive images, of shape (None, 128)
#         negative -- the encodings for the negative images, of shape (None, 128)
#
#     Returns:
#     loss -- real number, value of the loss
#     """
#
#     anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
#
#     pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
#     neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
#     basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
#     loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
#
#     return loss
#
# FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
# load_weights_from_FaceNet(FRmodel)

def ret_model():
    return FRmodel

def prepare_database():
    pickle_in = open("data.pickle", "rb")
    database = pickle.load(pickle_in)
    return database

def unpickle_something(pickle_file):
    pickle_in = open(pickle_file, "rb")
    unpickled_file = pickle.load(pickle_in)
    return unpickled_file

def webcam_face_recognizer(database):
    cv2.namedWindow("preview")
    vc = cv2.VideoCapture(0)
    while vc.isOpened():
        ret, frame = vc.read()
        img_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = frame
        # We do not want to detect a new identity while the program is in the process of identifying another person
        img = process_frame(img, img)
        cv2.imshow("Preview", img)
        cv2.waitKey(1)
    vc.release()

def process_frame(img, frame):
    """
    Determine whether the current frame contains the faces of people from our database
    """
    # rects = detector(img)
    rects = detector.detect_faces(img)
    # Loop through all the faces detected and determine whether or not they are in the database
    identities = []
    for (i, rect) in enumerate(rects):
        (x, y, w, h) = rect['box'][0], rect['box'][1], rect['box'][2], rect['box'][3]
        img = cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        identity = find_identity(frame, x - 50, y - 50, x + w + 50, y + h + 50)
        cv2.putText(img, identity, (10, 500), cv2.FONT_HERSHEY_SIMPLEX, 4, (255, 255, 255), 2, cv2.LINE_AA)
        if identity is not None:
            identities.append(identity)
    if identities != []:
        cv2.imwrite('example.png', img)
    return img

def find_identity(frame, x, y, w, h):
    """
    Determine whether the face contained within the bounding box exists in our database

    x1,y1_____________
    |                 |
    |                 |
    |_________________x2,y2
    """
    height, width, channels = frame.shape
    # The padding is necessary since the OpenCV face detector creates the bounding box around the face and not the head
    part_image = frame[y:y + h, x:x + w]
    return who_is_it(part_image, database, FRmodel)

def who_is_it(image, database, model):
    encoding = img_to_encoding(image, model)
    min_dist = 100
    # Loop over the database dictionary's names and encodings.
    for (name, db_enc) in database.items():
        # Compute L2 distance between the target "encoding" and the current "emb" from the database.
        dist = np.linalg.norm(db_enc.flatten() - encoding.flatten())
        print('distance for %s is %s' % (name, dist))
        # If this distance is less than the min_dist, then set min_dist to dist, and identity to name
        if dist < min_dist:
            min_dist = dist
            identity = name
    if min_dist > 0.1:
        print('Unknown person')
    else:
        print(identity)
    return identity

if __name__ == "__main__":
    database = prepare_database()
    webcam_face_recognizer(database)
What am I doing wrong here?
Here FRmodel is the trained FaceNet model.
A few points (a rough sketch of the first two follows below):
I don't see resizing, aligning and whitening of the input face image that is fed into the network.
You cannot add a fixed margin of 50 to a variable-sized face. There has to be a scaling such that the face region fills almost the same region in every input image.
I am not sure about the model you are using, but if you are using FaceNet, your accepted matching threshold, 0.1, seems to be very low. It will not accept any matches unless it is the same exact image (with a distance of 0.0), or has a very minimal variation from the gallery image.
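As a sketch of the kind of preprocessing the first two points are asking for (this is not the poster's code; the 96x96 size and the exact normalization would have to match whatever img_to_encoding and FRmodel expect):
import cv2
import numpy as np

def preprocess_face(frame, box, margin_ratio=0.2, size=96):
    # crop with a margin proportional to the detected box, resize to the
    # network's input size, and prewhiten (zero mean, unit variance)
    x, y, w, h = box
    m_x, m_y = int(w * margin_ratio), int(h * margin_ratio)
    x1, y1 = max(x - m_x, 0), max(y - m_y, 0)
    x2 = min(x + w + m_x, frame.shape[1])
    y2 = min(y + h + m_y, frame.shape[0])
    face = cv2.resize(frame[y1:y2, x1:x2], (size, size)).astype(np.float32)
    return (face - face.mean()) / max(face.std(), 1e-6)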

tensorflow dataset api create 100 images from 1

What is the most efficient way to load an image with TensorFlow and crop 100 images from that one image?
What I tried is:
import numpy as np
import tensorflow as tf
import cv2

filenames_train = ['image1.jpg', 'image2.jpg', 'image3.jpg']

def _opencv_operation(image, label):
    # operation with image without tensorflow
    kernel = np.ones((5, 5), np.float32) / 25
    image = cv2.filter2D(image, -1, kernel)
    return image, int(label)

def _read_images_and_crop(image_path):
    image = tf.read_file(image_path)
    image = tf.image.decode_jpeg(image)
    print image.shape
    image.set_shape([None, None, None])
    image = tf.cast(image, tf.float32)
    image = tf.scalar_mul(2./255., image) - 1.
    image = tf.image.resize_images(image, [299, 299])
    image = tf.reshape(image, (299, 299, 3))
    label = 1
    #r_values1 = #random values#
    #image1 = tf.image.crop_and_resize(image, r_values1)
    # ...
    #r_values100 = # random values#
    #image100 = tf.image.crop_and_resize(image, r_values100)
    #label = r_values1 ... r_values100
    return image, label
    # but what i actually want to return is: return [image1, image2,..image100], [label1, label2,.. label100]

# Training dataset
dataset_train = tf.data.Dataset.from_tensor_slices((filenames_train))
dataset_train = dataset_train.map(_read_images_and_crop)
dataset_train = dataset_train.map(
    lambda filename, label: tuple(tf.py_func(
        _opencv_operation, [filename, label], [tf.float32, tf.int64])))
dataset_train = dataset_train.batch(5)

iterator = tf.data.Iterator.from_structure(dataset_train.output_types,
                                           dataset_train.output_shapes)
(next_images, next_labels) = iterator.get_next()
training_init_op = iterator.make_initializer(dataset_train)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i_epoch in xrange(5):
        sess.run(training_init_op)
        curr_images, curr_labels = sess.run([next_images, next_labels])
So what my script does is read one image from a file, resize it, and give that image as output.
What I need is to crop that image afterwards with 100 different crop parameters, so I have 100 images and 100 labels as output.
But in the end I only want as many outputs as the batch size.
Is this possible with the Dataset API, or is it only possible to load one image from a file and process that single image until it is the output of dataset_train?
I don't want to load an image 100 times and process it 100 times.
I want to load it once and process it 100 times (e.g. crop, blur with different parameters and so on).
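One way this could be expressed with the (TF1-era) Dataset API is to have a second stage return a Dataset of crops and flatten it with flat_map, so each source image is decoded once but yields 100 examples. A sketch under that assumption (the random box sampling below is illustrative, not tuned):
NUM_CROPS = 100

def _random_crops(image, label):
    # image is the (299, 299, 3) tensor returned by _read_images_and_crop
    y1 = tf.random_uniform([NUM_CROPS, 1], 0.0, 0.5)
    x1 = tf.random_uniform([NUM_CROPS, 1], 0.0, 0.5)
    boxes = tf.concat([y1, x1, y1 + 0.5, x1 + 0.5], axis=1)  # normalized coords
    crops = tf.image.crop_and_resize(
        tf.expand_dims(image, 0),                       # crop_and_resize expects a batch dim
        boxes,
        box_ind=tf.zeros([NUM_CROPS], dtype=tf.int32),  # every box comes from image 0
        crop_size=[299, 299])
    labels = tf.fill([NUM_CROPS], label)
    return tf.data.Dataset.from_tensor_slices((crops, labels))

dataset_train = (tf.data.Dataset.from_tensor_slices(filenames_train)
                 .map(_read_images_and_crop)
                 .flat_map(_random_crops)   # 100 examples per loaded image
                 .batch(5))                 # the batch now counts crops, not source files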

Face Recognization in Python & Open CV

I am able to find faces in a video and save them to my local directory using Python and OpenCV, as per the code below:
import cv2
import numpy as np
import os

vc = cv2.VideoCapture('new1.avi')
c = 1

if vc.isOpened():
    rval, frame = vc.read()
else:
    rval = False

while rval:
    rval, frame = vc.read()
    cv2.imwrite(str(c) + '.jpg', frame)
    image_name = str(c) + '.jpg'
    cascPath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(cascPath)
    image = cv2.imread(image_name)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.cv.CV_HAAR_SCALE_IMAGE
    )
    print "Found {0} faces!".format(len(faces))
    if len(faces) >= 1:
        for (x, y, w, h) in faces:
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("Faces found", image)
        cv2.waitKey(0)
    else:
        a = "rm " + image_name
        os.popen(a)
    c = c + 1
    cv2.waitKey(1)

vc.release()
But now I want to identify the person whose face appears in that video.
How can I determine the person's identity?
That is, scan the face, match it against my local face database and, if a match is found, return the name and so on.
Differentiating between people in photos is not a trivial task, but there are some examples out there. As mentioned by Derman in an earlier comment, the best way is to use machine learning to teach the program what different persons' faces look like. One way is to manually find and extract features in people's faces, such as the distance between the eyes relative to the distance between the eyes and the mouth, and so on. This would, though, need attention to the effects of lens distortion and perspective. There are multiple research papers discussing the best techniques, like this paper using eigenvectors from a set of faces to find the most probable match:
Face Recognition Using Eigen Faces
There is a machine learning toolbox for Python called scikit-learn, which implements support for classification, regression, clustering and so on. You can use it to train neural networks and support vector machines, among others. Here is a complete example of how to implement the Eigenface method using an SVM with scikit-learn and Python:
Complete implementation using Python
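A condensed sketch of the eigenfaces-plus-SVM idea with scikit-learn (here X is assumed to be an array of flattened, equally sized grayscale face images from your own database, and y the corresponding person labels):
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

pca = PCA(n_components=150, whiten=True)           # the "eigenfaces"
X_train_pca = pca.fit_transform(X_train)
X_test_pca = pca.transform(X_test)

clf = SVC(kernel='rbf', class_weight='balanced')   # classify in eigenface space
clf.fit(X_train_pca, y_train)
print(clf.score(X_test_pca, y_test))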
You can use either EigenFaceRecognizer, FisherFaceRecognizer or LBPH.
All three algorithms are built into OpenCV's Python bindings.
import os
import sys
import cv2
import numpy as np
from PIL import Image

# path to the directory that contains the training images (adjust to your setup)
path = "./faces"
# Haar cascade used to locate the face region in each image
faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# Create a recognizer object
recognizer = cv2.face.createEigenFaceRecognizer()
# But remember: for EigenFaces all the images, whether training or testing, have to be of the same shape

#==========================================================================
# get_images_and_labels will give us a list of images and a list of labels to train the recognizer created above
# the function requires the path of the directory where all the images are stored
#===========================================================================
def get_images_and_labels(path):
    # Append all the absolute image paths in a list image_paths
    image_paths = [os.path.join(path, f) for f in os.listdir(path) if not f.endswith('.sad')]
    # images will contain face images
    images = []
    # labels will contain the label that is assigned to the image
    labels = []
    final_images = []
    largest_image_size = 0
    largest_width = 0
    largest_height = 0
    for image_path in image_paths:
        # Read the image and convert to grayscale
        image_pil = Image.open(image_path).convert('L')
        # Convert the image format into a numpy array
        image = np.array(image_pil, 'uint8')
        # Get the label of the image
        nbr = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
        # Detect the face in the image
        faces = faceCascade.detectMultiScale(image)
        # If a face is detected, append the face to images and the label to labels
        for (x, y, w, h) in faces:
            images.append(image[y: y + h, x: x + w])
            labels.append(nbr)
            cv2.imshow("Adding faces to traning set...", image[y: y + h, x: x + w])
            cv2.waitKey(50)
    # find the largest face so that all faces can be resized to the same shape
    for image in images:
        if image.size > largest_image_size:
            largest_image_size = image.size
            largest_width, largest_height = image.shape
    for image in images:
        image = cv2.resize(image, (largest_width, largest_height), interpolation=cv2.INTER_CUBIC)
        final_images.append(image)
    # return the images list and labels list
    return final_images, labels, largest_width, largest_height

#===================================================================
# Perform the training
# the trainer takes two parameters as input
# the first parameter is the list of images
# the second parameter is a numpy array of their corresponding labels
#===================================================================
images, labels, max_width, max_height = get_images_and_labels(path)
recognizer.train(images, np.array(labels))  # training takes the image list as input

image_paths = [os.path.join(path, f) for f in os.listdir(path) if f.endswith('.sad')]
for image_path in image_paths:
    predict_image_pil = Image.open(image_path).convert('L')
    predict_image = np.array(predict_image_pil, 'uint8')
    faces = faceCascade.detectMultiScale(predict_image)
    for (x, y, w, h) in faces:
        result = cv2.face.MinDistancePredictCollector()
        predict_image = predict_image[y: y + h, x: x + w]
        predict_image = cv2.resize(predict_image, (max_width, max_height), interpolation=cv2.INTER_CUBIC)
        # =========================================================
        # the predict method will give us the prediction
        # we will get the label in the next statement
        # predict_image is the image that you want to recognize
        # =========================================================
        recognizer.predict(predict_image, result, 0)  # this statement will give the prediction
        # ==========================================
        # This statement below will give us the label
        # ==========================================
        nbr_predicted = result.getLabel()
        # ==========================================
        # conf will tell us how confident the recognizer is in its prediction
        # ==========================================
        conf = result.getDist()
        nbr_actual = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
        if nbr_actual == nbr_predicted:
            print("{} is Correctly Recognized with confidence {}".format(nbr_actual, conf))
        else:
            print("{} is Incorrectly Recognized as {}".format(nbr_actual, nbr_predicted))
sys.exit()
