I've read a number of questions on finding the colour palette of an image, but my problem is slightly different. I'm looking for images made up of pure colours: pictures of the open sky, colourful photo backgrounds, red brick walls etc.
So far I've used the App Engine Image.histogram() function to produce a histogram, filter out values below a certain occurrence threshold, and average the remaining ones. That still seems to let through a lot of extraneous photographs that merely contain blobs of pure colour among everything else.
Any ideas much appreciated!
How about doing this?
Blur the image using some fast blurring algorithm. (Search for stack blur or box blur)
Compute standard deviation of the pixels in RGB domain, once for each color.
Discard the image if the standard deviation is beyond a certain threshold.
In my opinion a histogram will not be the ideal tool for this task, since it typically looks at each colour channel separately, so you lose information. For example, if you get peaks at 255 in red, green and blue, this can either mean that there is lots of red (0xFF0000), green (0x00FF00) and blue (0x0000FF) in the image, or that the whole image is simply entirely white (0xFFFFFF).
I recommend using a colour quantization algorithm on your image (http://en.wikipedia.org/wiki/Color_quantization) and having it return the 16 most dominant colours. Then convert them to HSL and check for values with high saturation.
Related
I have a use case where I need to classify images as grey scale or colour. My initial approach was based on the feature that in a grey-scale image the r, g, b values at each pixel should be equal, since it is really a single channel, whereas in a colour image the r, g, b values at a pixel may differ.
So I am checking the differences (r,g), (b,g) and (r,b): if all three are zero everywhere, it's grey scale; otherwise it's colour.
This approach helped me identify many grey-scale images, but there are still some images that do not follow this logic. Can anyone suggest some good features on which we can classify an image as colour or grey scale using OpenCV?
Please don't suggest checking the number of channels: it gives 3 for both classes, since we are loading the images in .jpg format.
Thanks in advance
I suspect some of these never were grey-scale images after digitizing (e.g. a colour scan of a grey-scale picture). Due to noise, there are minimal differences in the RGB values. A low threshold slightly above perfect zero should do the trick.
Please note that JPEG does have a grey-scale mode, but you have to request it when storing the picture; compressors usually do not pick it up automatically. Also, you explicitly need to set the flag IMREAD_UNCHANGED when reading with OpenCV's imread, otherwise the image is expanded to three channels.
With the method suggested by @QuangHoang I got a result of 85+% accuracy.
Here is the approach explained.
import cv2
import numpy as np

# test image
img = cv2.imread('test.jpg')

# splitting into b, g, r (OpenCV loads images in BGR order, not RGB)
b, g, r = cv2.split(img)

# counting pixels where the channels differ; cv2.absdiff avoids the
# uint8 wrap-around that a plain subtraction like abs(r-g) would cause
r_g = np.count_nonzero(cv2.absdiff(r, g))
r_b = np.count_nonzero(cv2.absdiff(r, b))
g_b = np.count_nonzero(cv2.absdiff(g, b))
diff_sum = float(r_g + r_b + g_b)

# ratio of differing pixels with respect to the size of the image
ratio = diff_sum / img.size
if ratio > 0.005:
    label = 'color'
else:
    label = 'grey'
Thanks for all the suggestions.
I am trying to make a program in OpenCV (Python) that can tell whether an eye pupil is looking straight ahead or towards a corner (left or right). What I have done so far is:
Took an image and cropped the eye part (detected the region through eye_cascade.detectMultiScale(gray)) (picture is attached).
Got the BGR value of the pixel at img[(3*h)/4, w/2] (h = height, w = width).
Tried to mask the skin by converting BGR to HSV, then applied THRESH_BINARY so that only the white shade around the pupil remains.
Counted white pixels and checked whether either side has less than 40% of the white pixels to decide the position of the pupil.
This method gives a somewhat good answer for a still picture, but when I start the webcam the masking fails, which breaks the entire process that follows.
Does anyone have a better idea of how I can do this? (I've already read all the answered questions on this forum but couldn't find a satisfactory solution.)
Image
I think it is because binary thresholding outputs pixels greater than 127 as white and the remaining ones as black. This gives really bad results in dark images (where most pixels are below 127) and bright ones (where most pixels are above 127).
What you can do is find the mean and standard deviation of the image.
Then define a mask in which all pixels above
mean + k*std_deviation
are white. You can find the constant k by experimenting with various images.
Good Luck!
I have a number of lobster images as shown in the photo. My goal is to identify the edge between the carapace and the body, and its coordinates. However, the methods of finding contours based on HSV thresholds or on Canny edge detection output didn't work well when I applied them to these images.
My idea is to find a way to 'amplify' the colour difference between the two areas of the image, to make it easier to find a mask/contours based on a colour threshold.
For example, if I can make the yellow of the edge in the image stronger, then I can easily find the mask of this area using a colour threshold. Can we do that?
Thanks.
This colour is closer to the rest of the picture than one might think (in the HLS space).
IMO, the best way to enhance it is by means of the Euclidean distance to that particular orange in RGB space.
For instance, the pixels at distance 32√3:
I have this image:
Here I have an image on a green background and an area marked with a red line within it. I want to calculate the area of the marked portion with respect to the whole image.
I am cropping the image to remove the green background and calculating the area of the cropped image. From here I don't know how to proceed.
I have noticed that Contour can be used for this but the problem is how do I draw the contour in this case.
I guess if I can create the contour and fill the marked area with some color, I can subtract it from the whole(cropped) image and get both the areas.
In your link, they use the method threshold with a colour as parameter. Basically it takes your source image and sets as white all pixels greater than this value, and as black otherwise (this means that your source image needs to be a greyscale image). This threshold is what enables you to "fill the marked area" so that contour detection becomes possible.
However, I think you should try to use the method inRange on your cropped picture. It is pretty much the same as threshold, but instead of having one threshold, you have a minimum and a maximum boundary. If your pixel is in the range of colours given by your boundaries, then it will be set as white. If it isn't, then it will be set as black. I don't know if this will work, but if you try to isolate the "most green" colours in your range, then you might get your big white area on the top right.
Then you apply the method findContours on your binarized image. It will give you all the contours it found, so if you have small white dots on other places in your image it doesn't matter, you'll only have to select the biggest contour found by the method.
Be careful, if the range of inRange isn't appropriate, the big white zone you should find on top right might contain some noise, and it could mess with the detection of contours. To avoid that, you could blur your image and do some stuff like erosion/dilation. This way you might get a better detection.
EDIT
I'll add some code here, but it can't be used as is. As I said, I have no knowledge in Python so all I can do here is provide you the OpenCV methods with the parameters to provide.
Let's make also a review of the steps:
Binarize your image with inRange. You need to find appropriate values for your minimum and maximum boundaries. What you want to do here is isolate the green colours since it is mostly what composes the area inside your contour. I can't really suggest you something better than trial and error to find the best thresholds. Let's start with those min and max values : (0, 125, 0) and (255, 250, 255)
inRange(source_image, Scalar(0, 125, 0), Scalar(255, 250, 255), binarized_image)
Check your result with imshow
imshow("bin", binarized_image)
If your binarization is ok (you can detect the area you want quite well), apply findContours. I'm sorry I don't understand the syntax used in your tutorial nor in the documentation, but here are the parameters:
binarized_mat: your binarized image
contours: an array of arrays of Point which will contain all the contours detected. Each contour is stored as an array of points.
mode: you can choose whatever you want, but I'd suggest RETR_EXTERNAL in your case.
Get the array with the biggest size, since it might be the contour with the highest number of points (the largest one then).
Calculate the area inside
Hope this helps!
I've got the following image.
Other Samples
I want to detect the six square-shaped green portions and the one circular portion above them. I basically want a binary image with these portions marked 1 (white) and everything else 0 (black).
What have I done so far?
I found a range of H, S, and V within which these colors fall which works fine for a single image, but I've got multiple such images, some under different illumination (brightness) conditions and the ranges do not work in those cases. What should I do to make the thresholding as invariant to brightness as possible? Is there a different approach I should take for thresholding?
What you did was manually analyze the values you need for thresholding for a specific image, and then apply that. What you see is that analysis done on one image doesn't necessarily fit other images.
The solution is to do the analysis automatically for each image. This can be achieved by creating a histogram for each of the channels, and if you're working in HSV, I'm guessing that the H channel would be pretty much useless in this case.
Anyway, once you have the histograms, you should analyze the threshold using something like Lloyd-Max, which is basically a K-Means type clustering of intensities. This should give the centroids for the intensity of the white background, and the other colors. Then you choose the threshold based on the cluster standard deviation.
For example, in the image you gave above, the histogram of the S channel looks like:
You can see the large blob near 0 is the white background that has the lowest saturation.