Eye pupil corner detection (OpenCV) - Python

I am trying to make a program in OpenCV (Python) that can tell whether an eye pupil is looking straight ahead or towards a corner (left or right). What I have done so far is:
Took an image and cropped the eye region (detected through eye_cascade.detectMultiScale(gray)) (picture is attached).
Got the BGR value of the pixel at img[(3*h)/4, w/2] (h = height, w = width).
Tried to mask the skin by converting BGR to HSV, then applied THRESH_BINARY so that only the white shade around the pupil remains.
Counted the white pixels and, if either side has less than 40% of them, decided the position of the pupil.
This method gives a reasonably good answer for a still picture, but when I start the webcam the masking fails, which breaks the rest of the pipeline.
Does anyone have a better idea of how I can do this? (I have already read all the answered questions on this forum but couldn't find a satisfactory solution.)

I think it is because binary thresholding marks pixels greater than 127 as white and the rest as black. This gives really bad results in dark images (where most pixels are below 127) and bright ones (where most pixels are above 127).
What you can do instead is compute the mean and standard deviation of the image, then define the mask so that all pixels above
mean + k * std_deviation
count as white. You can find the constant k by experimenting with various images.
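A minimal sketch of that idea, assuming a grayscale eye crop (the filename and k are placeholders to tune on your own images):
```python
import cv2

# Threshold at mean + k*std instead of a fixed 127.
gray = cv2.imread("eye_crop.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

mean, std = cv2.meanStdDev(gray)
k = 1.0  # find experimentally
thresh_value = mean[0, 0] + k * std[0, 0]

# Pixels above mean + k*std become white, everything else black.
_, mask = cv2.threshold(gray, thresh_value, 255, cv2.THRESH_BINARY)
cv2.imshow("mask", mask)
cv2.waitKey(0)
```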
Good Luck!

Related

Color black holes to white in binary image

I've got the following image:
I want to fill in the black hole and make it white. How can I fill in this hole?
Thank you
You could flood-fill with white starting at the top-left corner, which will leave you with the image below and should allow you to locate the "hole".
I have added an artificial red border so you can see the extent of the image.
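A sketch of that flood-fill approach in Python (the filenames are placeholders):
```python
import cv2
import numpy as np

im = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
_, im_bin = cv2.threshold(im, 127, 255, cv2.THRESH_BINARY)

# Flood-fill white from the top-left corner; only the hole stays black.
flood = im_bin.copy()
h, w = flood.shape
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a mask 2 px larger
cv2.floodFill(flood, mask, (0, 0), 255)

# The hole is whatever is still black: invert it and OR with the original.
filled = im_bin | cv2.bitwise_not(flood)
cv2.imwrite("filled.png", filled)
```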
You could also apply the method from Anyone knows an algorithm for finding "shapes" in 2d arrays? Images are basically arrays, so with a little modification that algorithm can find the holes (closed black regions) and set every black pixel inside them to white.

General questions about (canny) edge detection

I'm facing some general problems regarding edge detection in an image (the specific image should be irrelevant to my question).
I want the Canny edge detector to ignore a certain pixel value. For example: it should only look for edges where the grey value is not 0; otherwise "false edges" will be detected.
I usually use the cv2.Canny function, which works quite fast and well. The problem is that it is not customizable. So I took this code for a custom Canny edge detector (https://rosettacode.org/wiki/Canny_edge_detector#Python) in order to customize it. It works, but it calculates the edges far too slowly (it takes several minutes, whereas cv2.Canny takes a fraction of a second).
This is my first problem.
Is there another way to make the cv2.Canny function "ignore" pixels of a certain value? Imagine somewhere in the picture there is an area filled with black (see the image below). I don't want the edge detector to detect the edge of this black area.
Once I have some clear edges detected in my image, I want to create masks based on those edges. I couldn't find any examples for this online, so if anyone knows where to find a good tutorial on how to create masks from edges, it would be great if you could help me out.
Thanks in advance
Here's an approach:
Calculate your Canny as usual using the fast OpenCV function.
Now locate all the black pixels in the image - you can do that with _, thr = cv2.threshold(im, 1, 255, cv2.THRESH_BINARY), which leaves thr white wherever the source is non-black - and grow the black areas by 1 pixel with morphology (i.e. erode the white mask), to allow for edges being offset a little, as they often are.
Multiply the normal Canny image by the mask you created, so that anything Canny found in the black areas gets multiplied by zero, i.e. lost.
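In Python that approach might look like this (the Canny thresholds and filenames are placeholders):
```python
import cv2
import numpy as np

im = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

# 1. Normal, fast Canny.
edges = cv2.Canny(im, 100, 200)  # placeholder thresholds

# 2. Mask: white wherever the source is non-black.
_, thr = cv2.threshold(im, 1, 255, cv2.THRESH_BINARY)

# Grow the black areas by one pixel (erode the white mask) so edges that
# sit just outside the black region are suppressed too.
thr = cv2.erode(thr, np.ones((3, 3), np.uint8), iterations=1)

# 3. Zero out any edges that fall inside the (grown) black areas.
edges_clean = cv2.bitwise_and(edges, thr)
cv2.imwrite("edges_clean.png", edges_clean)
```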

Increase the degree of a specific color range in an image for object detection

I have a number of lobster images, as shown in the photo. My goal is to identify the edge between the carapace and the body, and its coordinates. However, finding contours based on HSV thresholds or on Canny edge detection output didn't work well when I applied those methods to these images.
My idea is to find a way to 'amplify' the color difference between the two areas of the image, to make it easier to find a mask/contours based on a color threshold.
For example, if I can make the yellow of the edge in the image stronger, then I can easily find the mask of this area using a color threshold. Can we do that?
Thanks.
This color is closer to the rest of the picture than one might think (in the HLS space).
IMO, the best way to enhance it is by means of the Euclidean distance to that particular orange in RGB space.
For instance, the pixels at distance 32√3:
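A sketch of that distance-based selection (the BGR triple for the orange is a made-up stand-in to replace with the actual carapace color):
```python
import cv2
import numpy as np

im = cv2.imread("lobster.png").astype(np.float32)  # placeholder filename

# Hypothetical reference orange, in OpenCV's BGR channel order.
target = np.array([0, 128, 255], np.float32)

# Per-pixel Euclidean distance to the reference color in RGB space.
dist = np.sqrt(((im - target) ** 2).sum(axis=2))

# Keep the pixels at distance at most 32*sqrt(3), as in the answer.
mask = (dist <= 32 * np.sqrt(3)).astype(np.uint8) * 255
cv2.imwrite("mask.png", mask)
```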

How to calculate a marked area within boundary in SimpleCV or OpenCV

I have this image:
Here I have an image on a green background with an area marked by a red line inside it. I want to calculate the area of the marked portion relative to the image.
I am cropping the image to remove the green background and calculating the area of the cropped image. From here I don't know how to proceed.
I have noticed that contours can be used for this, but the problem is how to draw the contour in this case.
I guess that if I can create the contour and fill the marked area with some color, I can subtract it from the whole (cropped) image and get both areas.
In your link, they use the threshold method with a colour as parameter. Basically it takes your source image and sets all pixels greater than this value to white, and the rest to black (this means that your source image needs to be a greyscale image). This threshold is what enables you to "fill the marked area" in order to make contour detection possible.
However, I think you should try the inRange method on your cropped picture instead. It is pretty much the same as threshold, but instead of a single threshold you have a minimum and a maximum boundary. If a pixel is in the range of colours given by your boundaries, it is set to white; otherwise it is set to black. I don't know if this will work, but if you try to isolate the "most green" colours in your range, you might get your big white area in the top right.
Then you apply the findContours method to your binarized image. It will give you all the contours it found, so if there are small white dots elsewhere in your image it doesn't matter: you only have to select the biggest contour the method found.
Be careful: if the inRange boundaries aren't appropriate, the big white zone you should find in the top right might contain some noise, which could interfere with the contour detection. To avoid that, you could blur the image and apply some erosion/dilation. This way you might get a better detection.
EDIT
I'll add some code here, but it can't be used as-is. As I said, I have no knowledge of Python, so all I can do here is give you the OpenCV methods with the parameters to pass. (A Python sketch of the whole sequence follows after the steps.)
Let's also review the steps:
Binarize your image with inRange. You need to find appropriate values for your minimum and maximum boundaries. What you want to do here is isolate the green colours, since green is mostly what composes the area inside your contour. I can't really suggest anything better than trial and error to find the best thresholds. Let's start with these min and max values: (0, 125, 0) and (255, 250, 255).
inRange(source_image, Scalar(0, 125, 0), Scalar(255, 250, 255), binarized_image)
Check your result with imshow
imshow("bin", binarized_image)
If your binarization is OK (you can detect the area you want quite well), apply findContours. I'm sorry, I don't understand the syntax used in your tutorial nor in the documentation, but here are the parameters:
binarized_mat: your binarized image
contours: an array of arrays of Point which will contain all the contours detected. Each contour is stored as an array of points.
mode: you can choose whatever you want, but I'd suggest RETR_EXTERNAL in your case.
Get the array with the biggest size, since it is likely the contour with the highest number of points (and thus the largest one).
Calculate the area inside it.
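A hypothetical Python translation of these steps (the boundaries are the starting values suggested above; "cropped.png" is a placeholder):
```python
import cv2

im = cv2.imread("cropped.png")  # placeholder filename

# 1. Binarize with inRange (boundaries in BGR order; tune by trial and error).
binarized = cv2.inRange(im, (0, 125, 0), (255, 250, 255))

# 2. Check the result.
cv2.imshow("bin", binarized)
cv2.waitKey(0)

# 3. Find the external contours (OpenCV 4.x return signature).
contours, _ = cv2.findContours(binarized, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# 4-5. Take the largest contour and compute its area in pixels.
largest = max(contours, key=cv2.contourArea)
print("marked area in pixels:", cv2.contourArea(largest))
```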
Hope this helps!

Finding images with pure colours

I've read a number of questions on finding the colour palette of an image, but my problem is slightly different. I'm looking for images made up of pure colours: pictures of the open sky, colourful photo backgrounds, red brick walls etc.
So far I've used the App Engine Image.histogram() function to produce a histogram, filter out values below a certain occurrence threshold, and average the remaining ones down. That still seems to let through a lot of extraneous photographs that merely contain blobs of pure colour among otherwise mixed content.
Any ideas much appreciated!
How about doing this?
Blur the image using some fast blurring algorithm. (Search for stack blur or box blur)
Compute standard deviation of the pixels in RGB domain, once for each color.
Discard the image if the standard deviation is beyond a certain threshold.
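A sketch of that check (the blur kernel size and threshold are placeholders to calibrate on your own images):
```python
import cv2

im = cv2.imread("photo.png")  # placeholder filename

# 1. Fast box blur to suppress fine texture and noise.
blurred = cv2.blur(im, (15, 15))  # placeholder kernel size

# 2. Per-channel standard deviation (OpenCV works in BGR order).
_, std = cv2.meanStdDev(blurred)

# 3. Keep the image only if every channel's deviation is small.
THRESHOLD = 30.0  # placeholder cut-off
print("pure-colour image:", bool((std < THRESHOLD).all()))
```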
In my opinion a histogram will not be the ideal tool for this task, since it typically looks at each colour channel separately, so you lose information. For example, if you get peaks at 255 in red, green and blue, this can either mean that there is a lot of red (0xFF0000), green (0x00FF00) and blue (0x0000FF) in the image, or that the whole image is simply entirely white (0xFFFFFF).
I recommend you use a colour quantization algorithm on your image (http://en.wikipedia.org/wiki/Color_quantization) and have it return the 16 most dominant colours. Then you could convert them to HSL and check for values with high saturation.
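One way to get those dominant colours is k-means quantization; a sketch (the filename and k-means parameters are placeholders):
```python
import cv2
import numpy as np

im = cv2.imread("photo.png")  # placeholder filename
pixels = im.reshape(-1, 3).astype(np.float32)

# Quantize to the 16 most dominant colours with k-means.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 16, None, criteria, 3,
                                cv2.KMEANS_RANDOM_CENTERS)

# Convert the dominant colours to HLS and inspect their saturation
# (in OpenCV's HLS ordering, channel 2 is saturation).
centers_hls = cv2.cvtColor(centers.astype(np.uint8).reshape(1, -1, 3),
                           cv2.COLOR_BGR2HLS)
print("dominant-colour saturations:", centers_hls[0, :, 2])
```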
