Unable to identify the iris region in human eye images - python

I am using OpenCV to identify the iris region + pupil region (outer grey area + inner black circle), as seen in this image.
I tried the following approaches, but was unable to extract the iris region completely.
Approach 1
Iris area detection based on the color values of the pixels in the image
import cv2
from PIL import Image

img = cv2.imread('i1.jpg')
im = Image.open('i1.jpg')
pix = im.load()

height, width = img.shape[:2]
print(height, width)
height = height - 1
width = width - 1
print(pix[width, height])
print(pix[0, 0])

# Mark every pixel dark enough to belong to the pupil/iris.
for eh in range(height):
    for ew in range(width):
        r, g, b = pix[ew, eh]
        if r <= 30 and g <= 30 and b <= 30:
            print(eh, ew)
            cv2.circle(img, (ew, eh), 1, (0, 255, 0), 1)

print(height, width)
cv2.imshow('detected Edge', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Approach 2
Iris area detection using the Hough Circles method
import cv2

img1 = cv2.imread('i.jpg')
img = cv2.imread('i.jpg', 0)

ret, thresh = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY)
edges = cv2.Canny(thresh, 100, 200)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 10000,
                           param1=50, param2=30, minRadius=0, maxRadius=0)
# circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20,
#                            param1=50, param2=30, minRadius=0, maxRadius=0)
print(circles)

# Draw every detected circle, slightly enlarged, on the colour image.
for i in circles[0, :]:
    i[2] = i[2] + 4
    cv2.circle(img1, (int(i[0]), int(i[1])), int(i[2]), (0, 255, 0), 1)

# Code to close the window
cv2.imshow('detected Edge', img1)
cv2.waitKey(0)
cv2.destroyAllWindows()
for i in range(1, 5):
    cv2.waitKey(1)
Kindly guide us on how we can automatically extract the circular black area in human eye pictures.

I used the following reference.
http://www.cvip.uofl.edu/wwwcvip/education/ECE523/Iris%20Biometrics.pdf
To identify the iris region in human eye images, you can use the following steps:
1) Identification of the pupil region: As the pupil region's intensity is very close to zero, you can use a binary threshold to find it. You can use connected components labelling to get regions of the same intensity and then select the region whose eccentricity is near zero as the pupil circle. The centroid of this connected region gives the circle's centre, and you can get the radius from the dimensions of the connected component's bounding box.
2) Identification of the iris region: Now that you have the pupil region, you can use the Hough circle method to get the iris region. Use Canny edge detection to get an edge map. Constrain the centre of the iris circle to a box around the pupil centre, and constrain the iris radius to be larger than the pupil radius and smaller than a fixed amount. Generate multiple circles with varying centre and radius as specified above and count the number of edge-map points lying on each circle. The circle with the maximum number of edge points lying on it is the iris circle.
Note: In my experience, getting the iris circle was very costly because you have to generate multiple circles with varying centre and radius. One shortcut was to keep the circle centre fixed at the pupil centre and vary only the radius, since the iris centre is very near the pupil centre. However, that gave wrong results because the eyelash edges at the top and bottom produced spurious edge-map points. To solve this, I used a workaround: I kept the iris centre fixed at the pupil centre and found the iris radius using only the left-hand part of the image from the pupil centre. Similarly, I found the iris radius for the right-hand side of the image from the pupil centre. I used the average of both radii, with the pupil centre as the circle centre, to get the iris boundary. It worked for me.
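As an illustration, here is a minimal sketch of step 1 using cv2.connectedComponentsWithStats; the file name, the threshold value, the minimum area and the bounding-box circularity score (used here as a simple stand-in for the eccentricity test) are all assumptions:
import cv2

# Sketch: find the pupil as the most circular dark blob (assumed file name and thresholds).
img = cv2.imread('eye.jpg', cv2.IMREAD_GRAYSCALE)

# Pupil pixels are nearly black, so an inverse binary threshold isolates them.
_, mask = cv2.threshold(img, 30, 255, cv2.THRESH_BINARY_INV)

# Label connected components and keep the most circular, reasonably sized blob.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
best_label, best_score = None, 0.0
for label in range(1, num_labels):              # label 0 is the background
    x, y, w, h, area = stats[label]
    if area < 100:                              # skip tiny noise blobs (assumed minimum)
        continue
    fill = area / float(w * h)                  # a circle fills ~pi/4 of its bounding box
    aspect = min(w, h) / float(max(w, h))       # a circle's bounding box is square
    score = fill * aspect
    if score > best_score:
        best_label, best_score = label, score

if best_label is not None:
    cx, cy = centroids[best_label]
    w, h = stats[best_label][2], stats[best_label][3]
    pupil_center = (int(cx), int(cy))
    pupil_radius = (w + h) // 4                 # average half-dimension of the box
    print(pupil_center, pupil_radius)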

Using approach 2, could you start at the center of the pupil and then travel outwards, staying in the same row (travelling left or right of the pupil center), until you hit the sclera of the eye? Use this distance as the radius of the circle containing the iris.
radius_iris = abs(first_column_of_sclera - column_of_pupil_center)
# this is the yellow line in the attached image
To find the sclera: take a small pixel region, like a 3x3 block (or similar; this is the green box in the image), and check two criteria:
The variance of the R, G, B channels is small. White (or grey shades) have R = G = B, so white has low variance.
The RGB values are also above some brightness threshold. Someone with grey or black eyes will meet criterion 1, but unless the pixels are very light (near white) we haven't reached the sclera.
Create an iris mask by drawing a circle centered at the pupil with radius_iris. If you want, you can also use the pupil mask to extract ONLY the iris.
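For illustration, a rough sketch of this row-scan idea; the image name, the pupil centre coordinates and the variance/brightness thresholds are assumptions:
import cv2
import numpy as np

img = cv2.imread('eye.jpg')                     # BGR image (assumed)
pupil_cx, pupil_cy = 150, 120                   # assumed pupil centre in pixels

def looks_like_sclera(block, min_brightness=150, max_variance=60):
    # Criterion 1: R, G and B are close to each other (low variance across channels).
    # Criterion 2: the block is bright enough to be white rather than grey/black.
    channel_means = block.reshape(-1, 3).mean(axis=0)
    return channel_means.var() < max_variance and channel_means.mean() > min_brightness

radius_iris = None
for col in range(pupil_cx + 5, img.shape[1] - 2):                # walk right along the pupil row
    block = img[pupil_cy - 1:pupil_cy + 2, col - 1:col + 2]      # 3x3 block
    if looks_like_sclera(block):
        radius_iris = abs(col - pupil_cx)
        break

if radius_iris is not None:
    mask = np.zeros(img.shape[:2], np.uint8)
    cv2.circle(mask, (pupil_cx, pupil_cy), radius_iris, 255, -1)  # iris mask
    iris_only = cv2.bitwise_and(img, img, mask=mask)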

To avoid wrong results and improve performance, you should always pass proper radius bounds to HoughCircles; iris and pupil radii will fall within a certain range.
I would look for a black blob of reasonable size in the image to locate the pupil. Once you know where the pupil is, you know where to look for the iris. Extract a region of interest that will contain the iris (use the pupil size to estimate the iris size) but not much more. Then do two Hough transforms to get the iris and pupil position and radius.
Afterwards you can further improve accuracy by fitting a circle/ellipse using the knowledge from your Hough transform, if necessary.
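For example, a minimal sketch of a bounded two-pass HoughCircles search; the file name, all parameter values and the radius ranges are placeholders that would need tuning:
import cv2
import numpy as np

gray = cv2.imread('eye.jpg', cv2.IMREAD_GRAYSCALE)
gray = cv2.medianBlur(gray, 5)

# First pass: the pupil is a small, very dark circle (assumed radius range).
pupil = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                         param1=100, param2=30, minRadius=15, maxRadius=40)

if pupil is not None:
    px, py, pr = np.around(pupil[0, 0]).astype(int)
    # Second pass: restrict the iris radius to a plausible multiple of the pupil radius.
    iris = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                            param1=100, param2=30,
                            minRadius=int(1.5 * pr), maxRadius=int(4 * pr))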

Related

Trying to detect all the circles with HoughCircles in openCV (python)

I am following this tutorial: https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/
I was playing around with the parameters of HoughCircles (even those you don't see in the code, e.g. param2) and it seems very inaccurate. In my project, the disks you see in the picture will be placed on random spots, and I need to be able to detect them and their color.
Currently I am only able to detect a few circles, and sometimes random circles are drawn where there are none, so I am a bit confused.
Is this the best way to do circle detection with OpenCV, or is there a more accurate way of doing it?
Also, why is my code not detecting every circle?
Initial board : https://imgur.com/BrPB5Ox
Circle drawn : https://imgur.com/dT7k29E
My code :
import cv2
import numpy as np

img = cv2.imread('Photos/board.jpg')
output = img.copy()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 100)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")
    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

# show the output image
cv2.imshow("output", np.hstack([img, output]))
cv2.waitKey(0)
Thanks a lot.
First of all, you cannot expect HoughCircles to detect all circles in every kind of situation. It is not an AI. It has different parameters that you must tune to get the desired results. You can check here to learn more about those parameters.
HoughCircles relies on edge/contour information, so you should make sure the edges are being detected properly. In your example I am sure bad edge results will come up because of the lighting problem. Metal materials cause strong specular highlights in image processing, and this badly affects finding contours.
What you should do:
Solve the lighting problem
Be sure about the HoughCircle parameters to get desired output
Instead of using HoughCircles you can detect each contour and its mass center (moments help you find the mass center). Then measure the distance from each contour point to that mass center; if they are all (roughly) equal, it is a circle. A sketch of this check follows below.
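A rough sketch of that moments-based circle test; the image name, the Otsu threshold and the 10% tolerance are assumptions:
import cv2
import numpy as np

gray = cv2.imread('Photos/board.jpg', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# [-2] keeps this working on both OpenCV 3 (three return values) and OpenCV 4 (two).
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]

for cnt in contours:
    m = cv2.moments(cnt)
    if m['m00'] == 0:
        continue
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']        # mass centre
    # Distance of every contour point to the mass centre.
    dists = np.sqrt(((cnt.reshape(-1, 2) - (cx, cy)) ** 2).sum(axis=1))
    # For a circle all distances are (nearly) equal, so their spread is small.
    if dists.std() / dists.mean() < 0.1:
        cv2.circle(gray, (int(cx), int(cy)), int(dists.mean()), 255, 2)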
The Hough transform works best on a monochromatic/binary image, so you may want to preprocess it with some sort of threshold function. Parameter values for the function are very important for proper recognition.
Is this the best way to do circle detection with OpenCV or is there a more accurate way of doing it? Also why is my code not detecting every circle?
There is also the findContours function:
https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#gadf1ad6a0b82947fa1fe3c3d497f260e0
which, to my liking, is more robust and general; you may want to give it a try.

How to plot centroids on image after kmeans clustering?

I have a color image and wanted to do k-means clustering on it using OpenCV.
This is the image on which I wanted to do k-means clustering.
This is my code:
import numpy as np
import cv2
import matplotlib.pyplot as plt

image1 = cv2.imread("./triangle.jpg", 0)
Z1 = image1.reshape((-1))
Z1 = np.float32(Z1)

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K1 = 2
ret, mask, center = cv2.kmeans(Z1, K1, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

center = np.uint8(center)
print(center)
res_image1 = center[mask.flatten()]
clustered_image1 = res_image1.reshape((image1.shape))

for c in center:
    plt.hlines(c, xmin=0, xmax=max(clustered_image1.shape[0], clustered_image1.shape[1]), lw=1.)
plt.imshow(clustered_image1)
plt.show()
This is what I get from the center variable.
[[112]
[255]]
This is the output image
My problem is that I'm unable to understand the output. I have two lists in the center variable because I wanted two classes. But why do they have only one value?
Shouldn't it be something like this (which makes sense because centroids should be points):
[[x1, y1]
[x2, y2]]
instead of this:
[[x]
[y]]
and if I read the image as a color image like this:
image1 = cv2.imread("./triangle.jpg")
Z1 = image1.reshape((-1, 3))
I get this output:
[[255 255 255]
[ 89 173 1]]
Color image output
Can someone explain to me how I can get 2d points instead of lines? Also, how do I interpret the output I got from the center variable when using the color image?
Please let me know if I'm unclear anywhere. Thanks!!
K-means clustering finds clusters of similar values. Your input is an array of color values, hence you find the colors that describe the 2 clusters. [255 255 255] is the white color, [ 89 173 1] is the green color. Similarly for [112] and [255] in the grayscale version. What you're doing is color quantization.
They are indeed the centroids, but their dimension is color, not location. Therefore you cannot plot them as positions in the image. Well, you can, but it looks like this:
See how the 'color location' determines to which class each pixel belongs?
This is not something you can locate in your image. What you can do is find the pixels that belong to the different clusters, and use the locations of the found pixels to determine their centroid or 'average' position.
To get the 'average' position of each color, you have to separate out the pixel coordinates according to the class/color to which they belong. In the code below I used np.where(img <= 240), where 240 is the threshold. I used 240 for convenience, but you could use K-means to determine where the threshold should be (inRange() might be useful at some point). If you sum the coordinates and divide that by the number of pixels found, you'll have what I think you are looking for:
Result:
Code:
import cv2
import numpy as np

# load image as grayscale
img = cv2.imread('D21VU.jpg', 0)

# get the positions of all pixels that are not full white (= triangle)
triangle_px = np.where(img <= 240)

# dividing the sum of the coordinates by the number of pixels
# to get the average location
ty = int(sum(triangle_px[0]) / len(triangle_px[0]))
tx = int(sum(triangle_px[1]) / len(triangle_px[1]))

# print location and draw filled black circle
print("Triangle ({},{})".format(tx, ty))
cv2.circle(img, (tx, ty), 10, (0), -1)

# the same process, but now with only white pixels
white_px = np.where(img > 240)
wy = int(sum(white_px[0]) / len(white_px[0]))
wx = int(sum(white_px[1]) / len(white_px[1]))

# print location and draw white filled circle
print("White: ({},{})".format(wx, wy))
cv2.circle(img, (wx, wy), 10, (255), -1)

# display result
cv2.imshow('Result', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here is an Imagemagick solution, since I am not proficient with OpenCV.
Basically, I convert your actual image (from your link in the comments) to binary, then use image moments to extract the centroid and other statistics.
I suspect you can do something similar in OpenCV, Skimage, or Python Wand, which is based upon Imagemagick. (See for example:
https://docs.opencv.org/3.4/d3/dc0/group__imgproc__shape.html#ga556a180f43cab22649c23ada36a8a139
https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.moments_coords_central
https://en.wikipedia.org/wiki/Image_moment)
Input:
Your image does not have just two colors. Perhaps this image did not have kmeans clustering applied with 2 colors only. So I will do that with an Imagemagick script that I have built.
kmeans -n 2 -m 5 img.png img2.png
final colors:
count,hexcolor
99234,#65345DFF
36926,#27AD0EFF
Then I convert the two colors to black and white by simply thresholding and stretching the dynamic range to full black and white.
convert img2.png -threshold 50% -auto-level img3.png
Then I get all the image moment statistics for the white pixels, which includes the x,y centroid in pixels relative to the top left corner of the image. It also includes the equivalent ellipse major and minor axes, angle of major axis, eccentricity of the ellipse, and equivalent brightness of the ellipse, plus the 8 Hu image moments.
identify -verbose -moments img3.png
Channel moments:
Gray:
--> Centroid: 208.523,196.302 <--
Ellipse Semi-Major/Minor axis: 170.99,164.34
Ellipse angle: 140.853
Ellipse eccentricity: 0.197209
Ellipse intensity: 106.661 (0.41828)
I1: 0.00149333 (0.380798)
I2: 3.50537e-09 (0.000227937)
I3: 2.10942e-10 (0.00349771)
I4: 7.75424e-13 (1.28576e-05)
I5: 9.78445e-24 (2.69016e-09)
I6: -4.20164e-17 (-1.77656e-07)
I7: 1.61745e-24 (4.44704e-10)
I8: 9.25127e-18 (3.91167e-08)
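If you want the same centroid in OpenCV, a minimal sketch using cv2.moments would be the following, assuming img3.png is the thresholded black/white image produced above:
import cv2

binary = cv2.imread('img3.png', cv2.IMREAD_GRAYSCALE)
m = cv2.moments(binary, binaryImage=True)
cx = m['m10'] / m['m00']        # x centroid of the white pixels
cy = m['m01'] / m['m00']        # y centroid of the white pixels
print("Centroid: {:.3f},{:.3f}".format(cx, cy))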

Image Processing: Algorithm Improvement for Real-Time FedEx Logo Detector

I've been working on a project involving image processing for logo detection. Specifically, the goal is to develop an automated system for a real-time FedEx truck/logo detector that reads frames from an IP camera stream and sends a notification on detection. Here's a sample of the system in action with the recognized logo surrounded by the green rectangle.
Some constraints on the project:
Uses raw OpenCV (no deep learning, AI, or trained neural networks)
Image background can be noisy
The brightness of the image can vary greatly (morning, afternoon, night)
The FedEx truck/logo can have any scale, rotation, or orientation since it could be parked anywhere on the sidewalk
The logo could potentially be fuzzy or blurry with different shades depending on the time of day
There may be many other vehicles with similar sizes or colors in the same frame
Real-time detection (~25 FPS from IP camera)
The IP camera is in a fixed position and the FedEx truck will always be in the same orientation (never backwards or upside down)
The Fedex Truck will always be the "red" variation instead of the "green" variation
Current Implementation/Algorithm
I have two threads:
Thread #1 - Captures frames from the IP camera using cv2.VideoCapture() and resizes the frame for further processing. I decided to handle grabbing frames in a separate thread to improve FPS by reducing I/O latency, since cv2.VideoCapture() is blocking. Dedicating an independent thread just to capturing frames allows the main processing thread to always have a frame available to perform detection on.
Thread #2 - Main processing/detection thread to detect FedEx logo using color thresholding and contour detection.
Overall Pseudo-algorithm
For each frame:
Find bounding box for purple color of logo
Find bounding box for red/orange color of logo
If both bounding boxes are valid/adjacent and contours pass checks:
Combine bounding boxes
Draw combined bounding boxes on original frame
Play sound notification for detected logo
Color thresholding for logo detection
For color thresholding, I have defined HSV (low, high) thresholds for purple and red to detect the logo.
colors = {
'purple': ([120,45,45], [150,255,255]),
'red': ([0,130,0], [15,255,255])
}
To find the bounding box coordinates for each color, I follow this algorithm:
Blur the frame
Erode and dilate the frame with a kernel to remove background noise
Convert frame from BGR to HSV color format
Perform a mask on the frame using the lower and upper HSV color bounds with set color thresholds
Find largest contour in the mask and obtain bounding coordinates
After performing a mask, I obtain these isolated purple (left) and red (right) sections of the logo.
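For illustration, a minimal sketch of those masking steps for one colour; the blur/kernel sizes and the helper name color_mask are assumptions, not the exact implementation:
import cv2
import numpy as np

def color_mask(frame, lower, upper):
    # 1. blur, 2. erode/dilate to suppress background noise
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.dilate(cv2.erode(blurred, kernel), kernel)
    # 3. BGR -> HSV, 4. mask with the lower/upper HSV bounds
    hsv = cv2.cvtColor(cleaned, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, dtype="uint8"), np.array(upper, dtype="uint8"))
    # 5. largest contour in the mask -> bounding coordinates
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return mask, None
    largest = max(contours, key=cv2.contourArea)
    return mask, cv2.boundingRect(largest)      # (x, y, w, h) of the largest blob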
False positive checks
Now that I have the two masks, I perform checks to ensure that the found bounding boxes actually form a logo. To do this, I use cv2.matchShapes() which compares the two contours and returns a metric showing the similarity. The lower the result, the higher the match. In addition, I use cv2.pointPolygonTest() which finds the shortest distance between a point in the image and a contour for additional verification. My false positive process involves:
Checking if the bounding boxes are valid
Ensuring the two bounding boxes are adjacent based on their relative proximity
If the bounding boxes pass the adjacency and similarity metric test, the bounding boxes are combined and a FedEx notification is triggered.
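For illustration, a rough sketch of such a check; the helper name is_fedex_logo, the gap/similarity thresholds and the use of cv2.CONTOURS_MATCH_I1 (OpenCV 3.4+) are assumptions rather than the original implementation:
import cv2

def is_fedex_logo(purple_cnt, red_cnt, max_gap=20, max_shape_diff=1.5):
    px, py, pw, ph = cv2.boundingRect(purple_cnt)
    rx, ry, rw, rh = cv2.boundingRect(red_cnt)
    # Adjacency: the red box should start close to the right edge of the purple box
    # and roughly share its vertical position.
    horizontally_adjacent = 0 <= rx - (px + pw) <= max_gap
    vertically_aligned = abs(py - ry) <= max(ph, rh)
    # Shape similarity: a lower matchShapes result means more similar contours.
    similarity = cv2.matchShapes(purple_cnt, red_cnt, cv2.CONTOURS_MATCH_I1, 0.0)
    return horizontally_adjacent and vertically_aligned and similarity < max_shape_diff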
Results
This check algorithm is not really robust as there are many false positives and failed detections. For instance, these false positives were triggered.
While this color thresholding and contour detection approach worked in basic cases where the logo was clear, it was severely lacking in some areas:
There are latency problems from having to compute bounding boxes on each frame
It occasionally detects the logo falsely when it is not present
Brightness and time of day had a great impact on detection accuracy
When the logo was at a skewed angle, the color thresholding still worked, but the check algorithm failed to detect the logo.
Would anyone be able to help me improve my algorithm or suggest alternative detection strategies? Is there any other way to perform this detection since color thresholding is highly dependent on exact calibration? If possible, I would like to move away from color thresholding and the multiple layers of filters since it's not very robust. Any insight or advice is greatly appreciated!
You might want to take a look at feature matching. The goal is to find features in two images, a template image, and a noisy image and match them. This would allow you to find the template (the logo) in the noisy image (the camera image).
A feature is, in essence, things that humans would find interesting in an image, such as corners or open spaces. I would recommend using a scale-invariant feature transform (SIFT) as a feature detection algorithm. The reason I suggest using SIFT is that it is invariant to image translation, scaling, and rotation, partially invariant to illumination changes and robust to local geometric distortion. This matches your specification.
I generated the above image using code modified from the OpenCV docs on SIFT feature detection:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('main.jpg',0) # target Image
# Create the sift object
sift = cv2.xfeatures2d.SIFT_create(700)
# Find keypoints and descriptors directly
kp, des = sift.detectAndCompute(img, None)
# Add the keypoints to the final image
img2 = cv2.drawKeypoints(img, kp, None, (255, 0, 0), 4)
# Show the image
plt.imshow(img2)
plt.show()
You will notice when doing this that a large number of the features do land on the FedEx logo (Above).
The next thing I did was try matching the features from the video feed to the features in the FedEx logo. I did this using the FLANN feature matcher. You could have gone with many approaches (including brute force), but because you are working on a video feed this is probably your best option. The code below is inspired by the OpenCV docs on feature matching:
import numpy as np
import cv2
from matplotlib import pyplot as plt

logo = cv2.imread('logo.jpg', 0)  # query image
img = cv2.imread('main2.jpg', 0)  # target image

# Create the sift object
sift = cv2.xfeatures2d.SIFT_create(700)

# Find keypoints and descriptors directly
kp1, des1 = sift.detectAndCompute(img, None)
kp2, des2 = sift.detectAndCompute(logo, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)  # or pass empty dictionary

flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Need to draw only good matches, so create a mask
matchesMask = [[0, 0] for i in range(len(matches))]

# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]

# Draw lines
draw_params = dict(matchColor=(0, 255, 0),
                   singlePointColor=(255, 0, 0),
                   matchesMask=matchesMask,
                   flags=0)

# Display the matches
img3 = cv2.drawMatchesKnn(img, kp1, logo, kp2, matches, None, **draw_params)
plt.imshow(img3)
plt.show()
Using this I was able to get the following features matched as seen below. You will notice that there are outliers. However the majority of features match:
The final step would then be to simply draw a bounding box around the matched region. I will link you to another Stack Overflow question which does something similar but with the ORB detector. Here is another way to get a bounding box using the OpenCV docs.
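As a hedged sketch of that final step, reusing kp1, kp2, matches, logo and img from the snippet above; MIN_MATCHES and the RANSAC threshold are assumptions:
import cv2
import numpy as np

MIN_MATCHES = 10
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

if len(good) >= MIN_MATCHES:
    # des1/kp1 came from the scene image and des2/kp2 from the logo,
    # so queryIdx indexes the scene and trainIdx indexes the logo.
    src_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)  # logo
    dst_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)  # scene
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    # Project the logo corners into the scene and draw the bounding polygon.
    h, w = logo.shape[:2]
    corners = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, H)
    img = cv2.polylines(img, [np.int32(projected)], True, 255, 3, cv2.LINE_AA)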
I hope this helps!
You can help the detector by preprocessing the image; then you don't need as many training images.
First we reduce the barrel distortion.
import cv2
import numpy as np

img = cv2.imread('fedex.jpg')

# add border as the undistorted image is going to be larger
margin = 150
img = cv2.copyMakeBorder(img, margin, margin, margin, margin, cv2.BORDER_CONSTANT, 0)

width = img.shape[1]
height = img.shape[0]

# radial and tangential distortion coefficients
distCoeff = np.zeros((4, 1), np.float64)
k1 = -4.5e-5
k2 = 0.0
p1 = 0.0
p2 = 0.0
distCoeff[0, 0] = k1
distCoeff[1, 0] = k2
distCoeff[2, 0] = p1
distCoeff[3, 0] = p2

# approximate camera matrix
cam = np.eye(3, dtype=np.float32)
cam[0, 2] = width / 2.0   # define center x
cam[1, 2] = height / 2.0  # define center y
cam[0, 0] = 12.           # define focal length x
cam[1, 1] = 12.           # define focal length y

dst = cv2.undistort(img, cam, distCoeff)
Then we transform the image in a way as if the camera is facing the FedEx truck right on. That is wherever along the curb the truck is parked, the FedEx logo will have almost the same size and orientation.
# use four points for homography estimation, coordinates taken from the undistorted image
# 1. top-left corner of F
# 2. bottom-left corner of F
# 3. top-right of E
# 4. bottom-right of E
pts_src = np.array([[1083, 235], [1069, 343], [1238, 301],[1201, 454]])
pts_dst = np.array([[1069, 235],[1069, 320],[1201, 235],[1201, 320]])
h, status = cv2.findHomography(pts_src, pts_dst)
im_out = cv2.warpPerspective(dst, h, (dst.shape[1], dst.shape[0]))

Detecting corners using Opencv Python

I am trying to find the corners of the 4 pillars, which are yellow in colour, and also to detect the extreme corners of the board, which is white in colour.
Basically I want to calculate the area of the whole space after subtracting the area of each pillar.
For that, I am first trying to identify the corners of the pillars to find the area of each pillar.
Here is the code I tried; I am almost halfway through it.
import numpy as np
import cv2

img = cv2.imread('Corner_0.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)

corners = cv2.goodFeaturesToTrack(gray, 100, 0.01, 10)
corners = np.int0(corners)

for corner in corners:
    x, y = corner.ravel()
    cv2.circle(img, (x, y), 3, 255, -1)

cv2.imwrite('Detected_Corner_0.jpg', img)
I would like to detect the corners and calculate the area of each pillar.
When I use GrabCut I am able to apply it to one pillar; does this make sense?
Corner detectors often cannot be relied on. They show extra corners and miss the ones you expect. What's more, you have to identify and regroup them.
You can obtain interesting results by computing a saturation image (the S channel of an HSL/HSV representation). Then, by binarization and blob analysis, you can easily find the areas.
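A minimal sketch of that saturation/blob idea; the file name, the Otsu threshold and the minimum blob area are assumptions:
import cv2

img = cv2.imread('Corner_0.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
saturation = hsv[:, :, 1]                        # the yellow pillars are highly saturated

_, binary = cv2.threshold(saturation, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Blob analysis: one connected component per pillar, with its area in pixels.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for label in range(1, num_labels):               # label 0 is the background
    area = stats[label, cv2.CC_STAT_AREA]
    if area > 500:                               # skip small noise blobs
        print("pillar at", tuple(centroids[label].astype(int)), "area =", area)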

Opencv: How can I get the eye color

I am using dlib to get the eyes of a face. Below are some examples of the results.
I have tried several methods to accomplish the objective. For instance, I tried to detect the center of the eye based on this project; from that, it would be easy to detect the pupil and the iris, however, I did not achieve good results. I also have tried to use Hough Circles but in some cases the results are quite bad.
My best bet is to detect the pupil, which is the only part of the eye with a common color (black) for every eye. I would like to get some ideas to do so.
My first idea is to set a region (between 20 and 60 in the x axis), then, in gray-scale, make the dark pixels (less than 25, for instance) black, and the rest, white. That would create a mask, that can be blurred to use Hough Circles and detect the region of the pupil. Finally, I can set a radius for the iris.
Any idea would be appreciated.
Thanks.
Actually, your idea of detecting the shape of the pupil is good, but your pictures are not good enough to do it directly. An easy way is to preprocess them to remove all useless data.
I made an example with one of your original pics to show you (done in Gimp):
Go to grey scale
Do a high pass filter to remove all small color fluctuations (you have very distinct colors so it should enhance borders very well)
Link to example filtered pic
Apply a threshold on your picture to remove remaining fluctuations (you can calculate the reference threshold value by analyzing your grey scale image color histogram)
Link to example thresholded pic
After those three steps you should have enough data to run your shape detection.
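A rough OpenCV equivalent of those three Gimp steps, assuming a cropped eye image; the kernel size and the use of Otsu (standing in for a histogram-derived threshold) are assumptions:
import cv2

img = cv2.imread('eye_roi.jpg')                              # assumed cropped eye image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                 # 1. grey scale

# 2. high pass: subtract a heavily blurred (low-pass) copy to keep only the edges
blurred = cv2.GaussianBlur(gray, (21, 21), 0)
high_pass = cv2.subtract(gray, blurred)
high_pass = cv2.normalize(high_pass, None, 0, 255, cv2.NORM_MINMAX)

# 3. threshold to remove the remaining fluctuations
_, shape_mask = cv2.threshold(high_pass, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)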
Most of the answers I have read till now say to use the Hough circle method to detect the iris region, but it doesn't really work on all images.
So my approach is pretty simple and involves the following steps:
Detect face from the image
Find eye region from the face
Get the RGB values just below the pupil region (thereby getting the iris region RGB values)
Pass the obtained RGB values to the find_color function
NOTE: Pass High-resolution image as the input for better results. If you pass low-resolution images such as 480x620, 320x240, you might end up getting poor results.
Below is the code for the same
import cv2
import imutils
from imutils import face_utils
import dlib
import numpy as np
import webcolors

flag = 0

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread('blue2.jpg')
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # convert to RGB
#cap = cv2.VideoCapture(0)  # turns on the webcam

# points for left eye and right eye
(left_Start, left_End) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
(right_Start, right_End) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]

def find_color(requested_colour):  # finds the color name from RGB values
    min_colours = {}
    for name, key in webcolors.CSS3_HEX_TO_NAMES.items():
        r_c, g_c, b_c = webcolors.hex_to_rgb(name)
        rd = (r_c - requested_colour[0]) ** 2
        gd = (g_c - requested_colour[1]) ** 2
        bd = (b_c - requested_colour[2]) ** 2
        min_colours[(rd + gd + bd)] = key
    closest_name = min_colours[min(min_colours.keys())]
    return closest_name

#ret, frame = cap.read()
#frame = cv2.flip(frame, 1)
#cv2.imshow(winname='face', mat=frame)

gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)

# detect dlib face rectangles in the grayscale frame
dlib_faces = detector(gray, 0)

for face in dlib_faces:
    eyes = []  # store 2 eyes

    # convert dlib rect to a bounding box
    (x, y, w, h) = face_utils.rect_to_bb(face)
    cv2.rectangle(img_rgb, (x, y), (x + w, y + h), (255, 0, 0), 1)  # draws blue box over face

    shape = predictor(gray, face)
    shape = face_utils.shape_to_np(shape)

    leftEye = shape[left_Start:left_End]    # indexes for left eye key points
    rightEye = shape[right_Start:right_End]

    eyes.append(leftEye)  # wrap in a list
    eyes.append(rightEye)

    for index, eye in enumerate(eyes):
        flag += 1
        left_side_eye = eye[0]    # left edge of eye
        right_side_eye = eye[3]   # right edge of eye
        top_side_eye = eye[1]     # top side of eye
        bottom_side_eye = eye[4]  # bottom side of eye

        # calculate height and width of dlib eye keypoints
        eye_width = right_side_eye[0] - left_side_eye[0]
        eye_height = bottom_side_eye[1] - top_side_eye[1]

        # create bounding box with buffer around keypoints
        eye_x1 = int(left_side_eye[0] - 0 * eye_width)
        eye_x2 = int(right_side_eye[0] + 0 * eye_width)
        eye_y1 = int(top_side_eye[1] - 1 * eye_height)
        eye_y2 = int(bottom_side_eye[1] + 0.75 * eye_height)

        # draw bounding box around eye roi
        #cv2.rectangle(img_rgb, (eye_x1, eye_y1), (eye_x2, eye_y2), (0, 255, 0), 2)

        roi_eye = img_rgb[eye_y1:eye_y2, eye_x1:eye_x2]  # desired eye region (RGB)
        if flag == 1:
            break

x = roi_eye.shape
row = x[0]
col = x[1]

# this is the main part,
# where you pick RGB values from the area just below the pupil
array1 = roi_eye[row // 2:(row // 2) + 1, int((col // 3) + 3):int((col // 3)) + 6]
array1 = array1[0][2]
array1 = tuple(array1)  # store it in a tuple and pass this tuple to the find_color function
print(find_color(array1))

cv2.imshow("frame", roi_eye)
cv2.waitKey(0)
cv2.destroyAllWindows()
Below are some examples.
An actress with blue eyes
Now this is the output of our code when the above image is given as the input: lightsteelblue
An actress with brown eyes
The output of our code when the above image is given as the input: saddlebrown
Mila kunis (one brown eye and other is green)
The output of our code when the above image is given as the input: sienna(shade of brown)
An actress with grey eyes
The output of our code when the above image is given as the input: darkgrey
So, you can see how close the results are to the actual eye color. This works pretty well with high-resolution images as I already mentioned.
PS: Correct me if I am wrong; I am open to suggestions.
