If a car is recognized, take a picture - Python

I am running this code with Python and OpenCV. What I want to do is save an image of every car the tool detects to my dataset/test directory.
I run the script with:
python3 build_car_dataset.py -c cars.xml -o dataset/test
So when a face is detected and a rectangle is drawn around it, I added an if statement saying: if the face is recognized and has a rectangle on the image, then save a picture of that face to my desired output:
if rects:
    p = os.path.sep.join([args["output"], "{}.png".format(str(total).zfill(5))])
    cv2.imwrite(p, orig)
    total += 1
The error I get is:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
What should I do? Thank you in advance!
My full code is:
# USAGE
# python3 build_car_dataset.py --cascade haarcascade_frontalface_default.xml --output dataset/test
# python3 build_face_dataset.py -c haarcascade_licence_plate_rus_16stages_original.xml -o dataset/test
# python3 build_car_dataset.py -c cars.xml -o dataset/test
from imutils.video import VideoStream
import argparse, imutils, time, cv2, os

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--cascade", required=True,
    help="path to where the face cascade resides")
ap.add_argument("-o", "--output", required=True,
    help="path to output directory")
args = vars(ap.parse_args())

# load OpenCV's Haar cascade for face detection from disk
detector = cv2.CascadeClassifier(args["cascade"])

# initialize the video stream, allow the camera sensor to warm up,
# and initialize the total number of example faces written to disk so far
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
# vs = VideoStream(usePiCamera=True).start()
time.sleep(2.0)
total = 0

# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream, clone it (just in case
    # we want to write it to disk), and then resize the frame so we can
    # apply face detection faster
    frame = vs.read()
    orig = frame.copy()
    frame = imutils.resize(frame, width=400)

    # detect faces in the grayscale frame
    rects = detector.detectMultiScale(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), scaleFactor=1.1,
        minNeighbors=5, minSize=(30, 30))

    # loop over the face detections and draw them on the frame
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        if rects:
            p = os.path.sep.join([args["output"], "{}.png".format(str(total).zfill(5))])
            cv2.imwrite(p, orig)
            total += 1

    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

# do a bit of cleanup
print("[INFO] {} face images stored".format(total))
print("[INFO] cleaning up...")
cv2.destroyAllWindows()
vs.stop()

Replace:
if rects:
with:
if rects is not None:
or with:
if len(rects) > 0:
and you'll be golden =)
I mean, you still won't be able to detect cars properly, but at least the error will go away. For the car detection I'd recommend using a CNN (convolutional neural network); google for "YOLO CNN" or "SSD CNN" -- there are plenty of existing projects that detect cars, so you can easily give yourself a good head start.
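For reference, here is a minimal sketch of that DNN approach using OpenCV's dnn module with a pre-trained MobileNet-SSD. The prototxt/caffemodel file names below are placeholders for files you would download separately, and class index 7 ("car") assumes the usual PASCAL VOC label ordering for that model:

import cv2
import numpy as np

# placeholder paths -- download a MobileNet-SSD prototxt/caffemodel pair yourself
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")

frame = cv2.imread("street.jpg")  # or a frame grabbed from VideoStream, as in the question
(h, w) = frame.shape[:2]

# the model expects 300x300 inputs, mean-subtracted and scaled
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()

CAR_CLASS_ID = 7  # "car" in the usual PASCAL VOC ordering for this model

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    class_id = int(detections[0, 0, i, 1])
    if class_id == CAR_CLASS_ID and confidence > 0.5:
        # scale the box back to the original frame size and draw it
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        (startX, startY, endX, endY) = box.astype("int")
        cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 255, 0), 2)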

Let's say rects = [[1, 2, 3, 4], [3, 4, 5, 6]]
for (x, y, w, h) in rects:
    print("I got here:", x, y, w, h)
would print:
I got here: 1 2 3 4
I got here: 3 4 5 6
But if rects = None, you'd get the error 'NoneType' object is not iterable.
If rects = [], you get no output and nothing inside the loop runs.
Basically, what I'm saying is that because your if rects check is inside a loop that iterates over rects, you are already guaranteed that rects has something in it: your code needed rects to be a non-empty iterable to get that far.
What you probably really want to do is check rects before looping over it. To be Pythonic, we'll ask forgiveness rather than permission:
rects = None
try:
    for (x, y, w, h) in rects:
        print("I got here:", x, y, w, h)
except TypeError:
    print("no rects")
# no rects
Note that your error has little to do with the majority of your code. Be sure to try to reduce your problem to the smallest possible reproducible example that has the same issue. Often by doing so, it helps solve the issue.
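Applied to the code in the question, a minimal look-before-you-leap sketch (same variable names as above) would move the save logic out of the drawing loop and gate it on the number of detections:

# detectMultiScale returns an empty tuple or an array of boxes, never None,
# so len() is a safe way to ask "did we detect anything?"
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

if len(rects) > 0:
    p = os.path.sep.join([args["output"], "{}.png".format(str(total).zfill(5))])
    cv2.imwrite(p, orig)
    total += 1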

Related

detectMultiScale() returns an empty tuple sometimes

I am trying to use the Haar cascade in OpenCV 4.0 to detect faces for emotion, gender & age estimation. Sometimes the detectMultiScale() function returns an empty tuple, which raises an error in the later parts of recognition.
I tried creating a while loop that runs until a face is detected, but it seems that once a face is not detected it is not detected again (in the same captured frame); I just keep getting empty tuples back. The weird thing is that sometimes the program works flawlessly.
The detection model is being loaded correctly, since cv2.CascadeClassifier.empty(face_cascade) returns False.
There seems to be no problem with the captured frame either, since I can display it properly.
After searching I found that detectMultiScale() does, in fact, return an empty tuple when no faces are detected.
Python OpenCV face detection code sometimes raises `'tuple' object has no attribute 'shape'`
face_cascade = cv2.CascadeClassifier(
    'C:\\Users\\kj\\Desktop\\jeffery 1\\trained_models\\detection_models\\haarcascade_frontalface_alt.xml')
retval = cv2.CascadeClassifier.empty(face_cascade)
print(retval)
returns False
def video_cap(out_queue):
    video_capture = cv2.VideoCapture(0, cv2.CAP_DSHOW)
    #video_capture.set(3, 768)
    #video_capture.set(4, 1024)
    while True:
        ret, bgr_image = video_capture.read()
        cv2.imshow('frame', bgr_image)
        cv2.waitKey(1000)
        cv2.destroyAllWindows()
        if video_capture.isOpened() == False:
            video_capture.open(0)
        if(ret):
            gray_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            rgb_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
            faces = detect_faces(face_detection, gray_image)
            ret_list = [gray_image, rgb_image, faces]
            print("DEBUG: VIDEO_CAPTURE MODULE WORKING")
            out_queue.put(ret_list)
            return
The video_cap function is threaded.
def detect_faces(detection_model, gray_image_array):
    faces1 = detection_model.detectMultiScale(gray_image_array, scaleFactor=2, minNeighbors=10, minSize=(64, 64))
    while(len(faces1) == 0):
        faces1 = detection_model.detectMultiScale(gray_image_array, scaleFactor=2, minNeighbors=10, minSize=(64, 64))
        print(faces1)
        if(len(faces1) != 0):
            break
    return faces1
I get the output:
()
()
()
()....
and it goes on like that until I terminate the program.
How do I fix the problem?
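One way to see the problem: the retry loop inside detect_faces keeps re-scanning the same gray_image_array, so once it fails on a frame it can never succeed. A rough illustrative sketch (not from the original post) that retries on a fresh frame instead, reusing the face_cascade and video_capture objects from the code above with milder, assumed detection parameters:

faces = ()
while len(faces) == 0:
    ret, bgr_image = video_capture.read()  # grab a new frame on every attempt
    if not ret:
        break  # the camera stopped delivering frames
    gray_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.3,
                                          minNeighbors=5, minSize=(64, 64))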
This is a snippet of the code I used. I removed the ARGUMENTS in the detectMultiScale() function and it ran fine.
Also, make sure you have the correct path to the xml files.
classifier = cv2.CascadeClassifier("../../../l-admin/anaconda3/lib/python3.6/site-packages/cv2/data/haarcascade_frontalface_default.xml")
img = cv2.imread('../Tolulope/Adetula Tolulope (2).jpg')
face = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = classifier.detectMultiScale(face)
print(type(faces), faces)
for (x, y, w, h) in faces:
    img = cv2.imwrite("facesa.png", cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3))
On a secondary note, the reason mine worked might be that my camera could locate my face thanks to the lighting. So I suggest you try it out with a picture first before using the video.
I had a similar issue when I used the jpg format; the main problem was the format of the image, because when I used png it gave the tuple with the correct values.
classifier = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
# reading the image
img = cv2.imread('i.png')
# showing the image
#cv2.imshow('shaswat face detection ',img)
# making image to gray scale as black and white
grayscaled_img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# cv2.imshow('shaswat face detection ',grayscaled_img)
# detecting the image
# return top left and bottom right points
faces = classifier.detectMultiScale(grayscaled_img)
print(faces)
#cv2.rectangle(img , face_coordinates[0] , face_coordinates[1] , (255,0,0) , 10)
The output shows:
[[ 87 114 361 361]]

Motion Tracking in opencv python

So I have been trying to make a motion tracker to track a dog moving in a video (recorded top-down), retrieve a cropped video showing the dog, and ignore the rest of the background.
I tried first with object tracking using the available algorithms in OpenCV 3 (BOOSTING, MIL, KCF, TLD, MEDIANFLOW, GOTURN (returns an error, couldn't solve it yet)) from this link, and I even tried a basic algorithm for motion tracking by subtracting the first frame, but none of them gives a good result. Link
I would prefer a code with a preset rectangle box that surrounds the area of motion once it is detected. Something like in this video
I'm not very familiar with OpenCV, but I believe single motion tracking is not supposed to be an issue since a lot of work has been done already. Should I consider other libraries/APIs or is there a better code/tutorial I can follow to get this done? My point is to use this later with a neural network (which is why I'm trying to solve it using Python/OpenCV).
Thanks for any help/advice
Edit:
I removed the previous code to make the post cleaner.
Also, based on the feedback I got and further research, I was able to modify some code to make it close to my wanted result. However, I still have an annoying problem with the tracking. It seems like the first frame affects the rest of the tracking since even after the dog moves, it keeps detecting its first location. I tried to limit the tracking to only 1 action using a flag, but the detection gets messed up. This is the code and pictures showing results:
import imutils
import time
import cv2

previousFrame = None

def searchForMovement(cnts, frame, min_area):
    text = "Undetected"
    flag = 0
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < min_area:
            continue
        # Use the flag to prevent the detection of other motions in the video
        if flag == 0:
            (x, y, w, h) = cv2.boundingRect(c)
            #print("x y w h")
            #print(x,y,w,h)
            cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
            text = "Detected"
            flag = 1
    return frame, text

def trackMotion(ret, frame, gaussian_kernel, sensitivity_value, min_area):
    if ret:
        # Convert to grayscale and blur it for better frame difference
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (gaussian_kernel, gaussian_kernel), 0)

        global previousFrame
        if previousFrame is None:
            previousFrame = gray
            return frame, "Uninitialized", frame, frame

        frameDiff = cv2.absdiff(previousFrame, gray)
        thresh = cv2.threshold(frameDiff, sensitivity_value, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)
        _, cnts, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        frame, text = searchForMovement(cnts, frame, min_area)
        #previousFrame = gray

        return frame, text, thresh, frameDiff

if __name__ == '__main__':
    video = "Track.avi"
    video0 = "Track.mp4"
    video1 = "Ntest1.avi"
    video2 = "Ntest2.avi"

    camera = cv2.VideoCapture(video1)
    time.sleep(0.25)
    min_area = 5000 #int(sys.argv[1])

    cv2.namedWindow("Security Camera Feed")

    while camera.isOpened():
        gaussian_kernel = 27
        sensitivity_value = 5
        min_area = 2500

        ret, frame = camera.read()

        # Check if the next camera read is not null
        if ret:
            frame, text, thresh, frameDiff = trackMotion(ret, frame, gaussian_kernel, sensitivity_value, min_area)
        else:
            print("Video Finished")
            break

        cv2.namedWindow('Thresh', cv2.WINDOW_NORMAL)
        cv2.namedWindow('Frame Difference', cv2.WINDOW_NORMAL)
        cv2.namedWindow('Security Camera Feed', cv2.WINDOW_NORMAL)

        cv2.resizeWindow('Thresh', 800, 600)
        cv2.resizeWindow('Frame Difference', 800, 600)
        cv2.resizeWindow('Security Camera Feed', 800, 600)

        # uncomment to see the thresh and frame-difference displays
        cv2.imshow("Thresh", thresh)
        cv2.imshow("Frame Difference", frameDiff)

        cv2.putText(frame, text, (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
        cv2.imshow("Security Camera Feed", frame)

        key = cv2.waitKey(3) & 0xFF
        if key == 27 or key == ord('q'):
            print("Bye")
            break

    camera.release()
    cv2.destroyAllWindows()
This picture shows how the very first frame is still affecting the frame difference results, which forces the box to cover an area with no motion.
This one shows a case where actual motion is ignored and no-longer-existing motion (the frame difference between the second and first frames of the video) is falsely detected. When I allow multiple tracking it tracks both, which is still wrong since it detects an empty area.
Does anyone have an idea where the code is wrong or lacking? I keep trying but cannot get it to work properly.
Thank you in advance !!
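For what it's worth, the symptom described above is consistent with previousFrame never being updated after the first frame (the update line in trackMotion is commented out), so every difference is taken against the very start of the video. One common alternative is to keep a slowly updated running average of the background instead of a single frozen frame; a minimal sketch of that idea with cv2.accumulateWeighted, using the same variable names as the code above:

# inside trackMotion, keep a float running average instead of one frozen frame
global previousFrame
if previousFrame is None:
    previousFrame = gray.copy().astype("float")
    return frame, "Uninitialized", frame, frame

cv2.accumulateWeighted(gray, previousFrame, 0.5)  # slowly blend the new frame into the background model
frameDiff = cv2.absdiff(gray, cv2.convertScaleAbs(previousFrame))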
To include motion detection I have created generic components on the NPM Registry and Docker Hub.
This detects motion on the client webcam (React app) and uses a server in Python based on OpenCV,
so the client just captures webcam images and the server analyses these images using OpenCV to determine whether there is motion or not.
The client can specify a callback function which the server calls each time there is motion.
The server is just a Docker image which you can pull and run, then specify its URL to the client.
NPM Registry (Client)
Registry Link:
https://www.npmjs.com/settings/kunalpimparkhede/packages
Command
npm install motion-detector-client
Docker Image (Server)
Link
https://hub.docker.com/r/kunalpimparkhede/motiondetectorwebcam
Command
docker pull kunalpimparkhede/motiondetectorwebcam
You just need to write the following code to have motion detection.
Usage:
import MotionDetectingClient from './MotionDetectingClient';

<MotionDetectingClient server="http://0.0.0.0:8080" callback={handleMovement}/>

function handleMovement(pixels) {
    console.log("Movement By Pixel=" + pixels)
}
On the server side, just start the Docker server on port 8080:
docker run --name motion-detector-server-app -p 8080:5000 kunalpimparkhede/motiondetectorwebcam

Partial human detection in OpenCV 3

I am working on a human detection program using OpenCV and Python. I saw this very good example and ran it on the samples it had. It can detect people regardless of which way they are facing, and has decent overlap detection as well as handling blurred motion reasonably well.
However, when I ran it on some images I had (mostly knee-up, waist-up, and chest-up photos of people), I found out that the software doesn't quite detect people.
You can get the photos from this link. This is the code I am using:
# import the necessary packages
from __future__ import print_function
from imutils.object_detection import non_max_suppression
from imutils import paths
import numpy as np
import argparse
import imutils
import cv2

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--images", required=True, help="path to images directory")
args = vars(ap.parse_args())

# initialize the HOG descriptor/person detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# loop over the image paths
imagePaths = list(paths.list_images(args["images"]))

for imagePath in imagePaths:
    # load the image and resize it to (1) reduce detection time
    # and (2) improve detection accuracy
    image = cv2.imread(imagePath)
    image = imutils.resize(image, width=min(400, image.shape[1]))
    orig = image.copy()

    # detect people in the image
    (rects, weights) = hog.detectMultiScale(image, winStride=(4, 4),
        padding=(8, 8), scale=1.05)

    # draw the original bounding boxes
    for (x, y, w, h) in rects:
        cv2.rectangle(orig, (x, y), (x + w, y + h), (0, 0, 255), 2)

    # apply non-maxima suppression to the bounding boxes using a
    # fairly large overlap threshold to try to maintain overlapping
    # boxes that are still people
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    pick = non_max_suppression(rects, probs=None, overlapThresh=0.65)

    # draw the final bounding boxes
    for (xA, yA, xB, yB) in pick:
        cv2.rectangle(image, (xA, yA), (xB, yB), (0, 255, 0), 2)

    # show some information on the number of bounding boxes
    filename = imagePath[imagePath.rfind("/") + 1:]
    print("[INFO] {}: {} original boxes, {} after suppression".format(
        filename, len(rects), len(pick)))

    # show the output images
    cv2.imshow("Before NMS", orig)
    cv2.imshow("After NMS", image)
    cv2.waitKey(0)
It is pretty straightforward. It goes through the images, finds the people in them, then draws bounding rectangles. If rectangles overlap, they are joined together to prevent false positives and to avoid detecting more than one person in a single person.
However, as I mentioned above, the code fails to recognize people if parts of their feet aren't present.
Is there a way to make OpenCV recognize people who may have only part of their body (knee up, waist up, chest up) present in a video? In my use case scenarios, I don't think it will be critical to look for arms and legs; as long as the torso and head are present, I should be able to see it.
I found the Haar upper-body cascade. Though it may not always work (I'll post a new question regarding this), it's a good start.
Here's the code:
import numpy as np
import cv2

img = cv2.imread('path/to/img.jpg', 0)

upperBody_cascade = cv2.CascadeClassifier('../path/to/haarcascade_upperbody.xml')

arrUpperBody = upperBody_cascade.detectMultiScale(img)
if len(arrUpperBody) > 0:
    for (x, y, w, h) in arrUpperBody:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    print('body found')

cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
But it's not as refined as the solution I lifted off of pyimagesearch.
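If the upper-body cascade produces lots of overlapping boxes, the non_max_suppression step from the pyimagesearch-style code in the question can be reused with it; a small sketch (the cascade and image paths are placeholders):

import cv2
import numpy as np
from imutils.object_detection import non_max_suppression

upperBody_cascade = cv2.CascadeClassifier('path/to/haarcascade_upperbody.xml')  # placeholder path
img = cv2.imread('path/to/img.jpg')  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

rects = upperBody_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
if len(rects) > 0:
    # convert (x, y, w, h) boxes to (x1, y1, x2, y2) before suppression
    boxes = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    pick = non_max_suppression(boxes, probs=None, overlapThresh=0.65)
    for (xA, yA, xB, yB) in pick:
        cv2.rectangle(img, (xA, yA), (xB, yB), (255, 0, 0), 2)

cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()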

HOG Descriptor using Python + OpenCV

I am trying to implement the HOG descriptor with OpenCV to detect pedestrians in a video. I am currently using the pre-made OpenCV file hogcascade_pedestrians.xml. Unfortunately the documentation on this part is very poor on the internet, although the HOG descriptor is very effective for human detection. I have been writing code for pedestrian detection with Python, and I have stopped at the following:
import cv2
import numpy as np
import imutils

VidCap = cv2.VideoCapture('pedestrians.mp4')

HOGCascade = cv2.HOGDescriptor('hogcascade_pedestrians.xml')
HOGCascade.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

while True:
    _, image = VidCap.read()
    image = imutils.resize(image, width=700)

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=15.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)

    winStride = (8, 8)
    padding = (16, 16)
    scale = 1.05
    meanshift = -1

    (rects, weights) = HOGCascade.detectMultiScale(gray, winStride=winStride,
                                                   padding=padding,
                                                   scale=scale,
                                                   useMeanshiftGrouping=meanshift)

    for (x, y, w, h) in rects:
        cv2.rectangle(image, (x, y), (x+w, y+h), (0, 200, 255), 2)

    cv2.imshow('Image', image)
    if cv2.waitKey(5) == 27:
        break

VidCap.release()
cv2.destroyAllWindows()
I presume the code would be something like what is written for Haar cascades. But I have tried that and I got errors. Does anyone have an idea of how to implement the HOG descriptor with OpenCV and Python?
I have read the following question, but I get nothing from the second answer.
My problem is that I can't find a way to write the code, as the documentation about this part is very poor.
Note: I am using OpenCV 3.1.0-dev with Python 2.7.11
Use
HOGCascade = cv2.HOGDescriptor()
instead. If you want to use that .xml file, you have a lot of preparation work to do.
When you finally have a usable descriptor of your own, you would replace the cv2.HOGDescriptor_getDefaultPeopleDetector() in
setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
with it.
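In other words, the usual route that needs no extra files is to build the descriptor with its default parameters and plug in the built-in people detector; a minimal sketch adapted from the question's own loop:

import cv2
import imutils

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

VidCap = cv2.VideoCapture('pedestrians.mp4')
while True:
    ret, image = VidCap.read()
    if not ret:
        break
    image = imutils.resize(image, width=700)

    # the default people detector works on the BGR frame directly
    (rects, weights) = hog.detectMultiScale(image, winStride=(8, 8),
                                            padding=(16, 16), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 200, 255), 2)

    cv2.imshow('Image', image)
    if cv2.waitKey(5) == 27:
        break

VidCap.release()
cv2.destroyAllWindows()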

Face Recognition in Python & OpenCV

I am able to find faces in a video and save them to my local directory, using Python and OpenCV, with the code below:
import cv2
import numpy as np
import os

vc = cv2.VideoCapture('new1.avi')
c = 1

if vc.isOpened():
    rval, frame = vc.read()
else:
    rval = False

while rval:
    rval, frame = vc.read()
    cv2.imwrite(str(c) + '.jpg', frame)
    image_name = str(c) + '.jpg'
    cascPath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(cascPath)
    image = cv2.imread(image_name)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.cv.CV_HAAR_SCALE_IMAGE
    )
    print "Found {0} faces!".format(len(faces))
    if len(faces) >= 1:
        for (x, y, w, h) in faces:
            cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)
        cv2.imshow("Faces found", image)
        cv2.waitKey(0)
    else:
        a = "rm " + image_name
        os.popen(a)
    c = c + 1
    cv2.waitKey(1)

vc.release()
But now I want to identify the person whose face appears in that video.
How can I establish the person's identity?
For example: scan the face, match it against my local face database, and if a match is found, return the name and so on.
Differentiating between people in photos is not a trivial task, but there are some examples out there. As mentioned by Derman in an earlier comment, the best way is to use machine learning to teach the program what different people's faces look like. One way is to manually find and extract features in people's faces, such as the distance between the eyes relative to the distance between the eyes and the mouth, and so on. This would, though, need attention paid to the effects of lens distortion and perspective. There are multiple research papers discussing the best techniques, like this paper using eigenvectors from a set of faces to find the most probable match:
Face Recognition Using Eigen Faces
There is a machine learning toolbox for Python called scikit-learn which implements support for classification, regression, clustering and so on. You can use it to train neural networks and support vector machines, among others. Here is a complete example of how to implement the Eigenface method using an SVM with scikit-learn and Python:
Complete implementation using Python
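The linked example boils down to roughly this pipeline; the sketch below condenses it, using the LFW dataset that scikit-learn can download for you (the component count and SVM parameters are illustrative, not tuned):

from sklearn.datasets import fetch_lfw_people
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# X: flattened grayscale face crops, y: integer person labels
lfw = fetch_lfw_people(min_faces_per_person=70)
X, y = lfw.data, lfw.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# project the faces onto the top "eigenfaces"
pca = PCA(n_components=150, whiten=True).fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)

# classify in eigenface space with an RBF SVM
clf = SVC(kernel='rbf', class_weight='balanced', C=1000, gamma=0.001)
clf.fit(X_train_pca, y_train)
print("test accuracy:", clf.score(X_test_pca, y_test))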
You can use either EigenFaceRecognizer, FisherFaceRecognizer or LBPH.
All three algorithms are built into OpenCV's Python bindings.
# imports the snippet relies on
import os
import sys
import cv2
import numpy as np
from PIL import Image

# Haar cascade used to crop faces out of the training images
faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# Create a recognizer object
recognizer = cv2.face.createEigenFaceRecognizer()
# But remember: for EigenFaces all the images, whether training or testing, have to be of the same shape

#==========================================================================
# get_images_and_labels will give us a list of images and a list of labels
# to train the recognizer that we created above
# the function requires the path of the directory where all the images are stored
#===========================================================================
def get_images_and_labels(path):
    # Append all the absolute image paths in a list image_paths
    image_paths = [os.path.join(path, f) for f in os.listdir(path) if not
                   f.endswith('.sad')]
    # images will contain face images
    images = []
    # labels will contain the label that is assigned to the image
    labels = []
    final_images = []
    largest_image_size = 0
    largest_width = 0
    largest_height = 0

    for image_path in image_paths:
        # Read the image and convert to grayscale
        image_pil = Image.open(image_path).convert('L')
        # Convert the image format into numpy array
        image = np.array(image_pil, 'uint8')
        # Get the label of the image
        nbr = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
        # Detect the face in the image
        faces = faceCascade.detectMultiScale(image)
        # If a face is detected, append the face to images and the label to labels
        for (x, y, w, h) in faces:
            images.append(image[y: y + h, x: x + w])
            labels.append(nbr)
            cv2.imshow("Adding faces to training set...", image[y: y + h, x: x + w])
            cv2.waitKey(50)

    # find the largest face crop so every image can be resized to the same shape
    for image in images:
        if image.size > largest_image_size:
            largest_image_size = image.size
            largest_width, largest_height = image.shape

    for image in images:
        image = cv2.resize(image, (largest_width, largest_height), interpolation=cv2.INTER_CUBIC)
        final_images.append(image)

    # return the images list and labels list
    return final_images, labels, largest_width, largest_height

#===================================================================
# Perform the training
# the trainer takes two parameters as input
# the first parameter is the list of images
# the second parameter is a numpy array of their corresponding labels
#===================================================================
path = "path/to/training/images"  # directory holding the training images
images, labels, max_width, max_height = get_images_and_labels(path)
recognizer.train(images, np.array(labels))  # training takes the list of images as input

image_paths = [os.path.join(path, f) for f in os.listdir(path) if f.endswith('.sad')]
for image_path in image_paths:
    predict_image_pil = Image.open(image_path).convert('L')
    predict_image = np.array(predict_image_pil, 'uint8')
    faces = faceCascade.detectMultiScale(predict_image)
    for (x, y, w, h) in faces:
        result = cv2.face.MinDistancePredictCollector()
        predict_image = predict_image[y: y + h, x: x + w]
        predict_image = cv2.resize(predict_image, (max_width, max_height), interpolation=cv2.INTER_CUBIC)

        # =========================================================
        # the predict method will give us the prediction
        # we will get the label in the next statement
        # predict_image is the image that you want to recognize
        # =========================================================
        recognizer.predict(predict_image, result, 0)  # this statement will give the prediction

        # ==========================================
        # This statement below will give us the label
        # ==========================================
        nbr_predicted = result.getLabel()

        # ==========================================
        # conf will tell us how confident our recognizer is in its prediction
        # ==========================================
        conf = result.getDist()

        nbr_actual = int(os.path.split(image_path)[1].split(".")[0].replace("subject", ""))
        if nbr_actual == nbr_predicted:
            print("{} is Correctly Recognized with confidence {}".format(nbr_actual, conf))
        else:
            print("{} is Incorrectly Recognized as {}".format(nbr_actual, nbr_predicted))

sys.exit()
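As a side note, in newer OpenCV builds (the opencv-contrib-python package) the factory names differ slightly and predict() returns the label and distance directly, so the collector object is not needed; a minimal sketch, where test_face is assumed to be a grayscale face crop like the ones prepared above:

import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()  # or cv2.face.EigenFaceRecognizer_create()
recognizer.train(images, np.array(labels))         # same images/labels lists as above

label, distance = recognizer.predict(test_face)    # test_face: one grayscale face crop
print("predicted label {} with distance {}".format(label, distance))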
