Image segmentation of an RGB image by K-means clustering in Python

I want to segment RGB images (satellite imagery) for land cover using K-means clustering, in such a fashion that the different regions of the image are marked by different colors and, if possible, boundaries are created separating the regions.
Is it possible to achieve this with K-means clustering?
I have been searching all over the internet, and many tutorials do it with K-means clustering, but only after converting the image to greyscale. I want to do it with an RGB image directly. Is there any source that could help me get started?
Please suggest something.

I imagine this is no longer relevant for RachJain, but in case someone needs it in the future:
A simple use of sklearn's KMeans algorithm gives the desired outcome:
from imageio import imread            # scipy.misc.imread was removed; imageio replaces it
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

pic = imread(filepath) / 255.0        # uint8 -> float in [0, 1]
h, w, d = pic.shape
# KMeans expects a 2D array of samples, so flatten the image to (n_pixels, 3)
kmeans = KMeans(n_clusters=13, random_state=0).fit(pic.reshape(-1, d))
pic2show = kmeans.cluster_centers_[kmeans.labels_].reshape(h, w, d)
plt.imshow(pic2show)
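Since the question also asks for boundaries separating the regions, here is a minimal follow-up sketch using skimage's mark_boundaries (assuming the pic, h, w, and kmeans variables from the snippet above):
from skimage.segmentation import mark_boundaries

labels = kmeans.labels_.reshape(h, w)        # per-pixel cluster index
plt.imshow(mark_boundaries(pic, labels))     # draws region outlines over the image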

What do you mean, they convert the image to greyscale? K-means computes the Euclidean distance of a point from a centroid, so the R, G, and B values can be used directly. Read this student report for a comparison of using different colour spaces (RGB vs. HSV): http://www.cs.bgu.ac.il/~ben-shahar/Teaching/Computational-Vision/StudentProjects/ICBV121/ICBV-2012-1-OfirNijinsky-AvivPeled/report.pdf

Related

How to separate monochromatic objects of different sizes in opencv

I want to separate a noiseless 1-bit (black and white) image with white circles based on the concave part of the outline.
Please refer to the picture below.
This is the white object to separate:
The target result is:
Here is my implementation with the watershed algorithm:
The above result is not what I want.
If the size of the separated objects is similar, my algorithm is fine, but if the size difference is large, a problem occurs as shown in the picture above.
I would like to implement an opencv algorithm that can segment a region like the second picture.
However, the input photo is not necessarily a perfect circle.
It can be oval like the picture below:
Or it can be squished:
However, I would like to separate it based on the concave part of the outline anyway.
I think it can be implemented by using the distanceTransform function well, but I'm not sure how to approach it.
Please point me toward a suitable approach.
Thank you.
Here is an algorithm which should give you a good start.
Compute all contours.
For each contour, compute the convexity defects. If there are no defects, the contour is an isolated circle and you can segment it out.
After you have handled all the isolated circles, you can work out the remaining contours by counting the convexity defects: the number of circles N in each contour is the number of convexity defects divided by 2.
Use a clustering algorithm (a Gaussian mixture, https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html, should do well given the shapes you have) and cluster the "white" points, using N as the number of clusters to be found.
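A rough sketch of those steps, assuming OpenCV 4 and scikit-learn, with a hypothetical binary input file blobs.png (the defect-depth cutoff of 5 px is a guess you would tune):
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

mask = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

labels = np.zeros(mask.shape, dtype=np.int32)
next_id = 1
for cnt in contours:
    hull = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull)
    # defect depth is fixed point (1/256 px); keep only deep defects
    n_defects = 0 if defects is None else int((defects[:, 0, 3] / 256.0 > 5).sum())
    n_circles = max(1, n_defects // 2)       # no defects -> isolated circle

    blob = np.zeros_like(mask)
    cv2.drawContours(blob, [cnt], -1, 255, -1)
    pts = np.column_stack(np.nonzero(blob))  # (row, col) of this blob's white pixels
    gmm = GaussianMixture(n_components=n_circles, random_state=0).fit(pts)
    labels[pts[:, 0], pts[:, 1]] = gmm.predict(pts) + next_id
    next_id += n_circles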
If you want to find the minimal openings, you can use a medial axis based approach.
Pseudocode:
compute contours of bitmap
compute medial axis of bitmap
for each point on medial axis:
    get minimal distance d from medial-axis algorithm
for each local minimum of distance d:
    get two points on the bitmap contours with minimal distance that are at least d apart from each other
    use these points for the dividing line
If you need a working implementation in python, please let me know. I would use skimage lib. For other languages you might have to implement medial-axis on your own. But that shouldn't be a big deal.
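In the meantime, a minimal starting point with skimage (ordering the skeleton pixels into paths to locate the local minima is the part left out here; blobs.png is a hypothetical input):
import numpy as np
from skimage.io import imread
from skimage.morphology import medial_axis

mask = imread("blobs.png", as_gray=True) > 0.5
skel, dist = medial_axis(mask, return_distance=True)

# For every medial-axis pixel, dist holds the distance to the nearest
# contour point; local minima of this profile mark the "necks" where
# the dividing lines should be drawn.
axis_points = np.column_stack(np.nonzero(skel))
axis_dist = dist[skel]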

Extract feature vector from 2d image in numpy

I have a series of 2d images of two types, either a star or a pentagon. My aim is to classify all of these images respectively. I have 30 star images and 30 pentagon images. An example of each image is shown side by side here:
Before I apply the KNN classification algorithm, I need to extract a feature vector from all the images. The feature vectors must all be of the same size; however, the 2d images all vary in size. I have read in my image, and I get back a 2d array of zeros and ones.
image = pl.imread('imagepath.png')
My question is: how do I process the image in order to produce a meaningful feature vector that contains enough information for the classification? It has to be a single vector per image, which I will use for training and testing.
If you want to use OpenCV, then:
Resize the images to a standard size:
import cv2
import numpy as np
# read as greyscale, since the images are binary (zeros and ones)
src = cv2.imread("/path.jpg", cv2.IMREAD_GRAYSCALE)
target_size = (64, 64)
dst = cv2.resize(src, target_size)
Convert to a 1D vector:
dst = dst.reshape(target_size[0] * target_size[1])   # target_size is a plain tuple, so index it directly
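For completeness, a hedged sketch of feeding such vectors to scikit-learn's kNN classifier (images and y are hypothetical lists of greyscale arrays and labels; cv2 and target_size as above):
from sklearn.neighbors import KNeighborsClassifier

# one flattened, resized vector per image -> feature matrix
X = np.stack([cv2.resize(im, target_size).reshape(-1) for im in images])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict(X[:5]))    # sanity check on a few training samples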
Before you start coding, you have to decide which features are useful for this task:
The easiest way out is trying the approach in @Jordan's answer and converting the entire image to a feature vector. This could work because the classes are simple patterns, and it is an interesting option if you are using KNN. If this does not work well, the following points show how you should approach the problem.
The number of black pixels might not help, because the sizes of the star and pentagon can vary.
The number of sharp corners is very likely to be useful.
The number of straight line segments might be useful, but this could be unreliable because the shapes are hand-drawn.
Supposing you want to have a go at using the number of corners as a feature, you can refer to this page to learn how to extract corners; a rough sketch follows below.
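A minimal sketch of counting corners with OpenCV's goodFeaturesToTrack (star.png is a hypothetical input; the quality and distance parameters are guesses to tune on your data):
import cv2

img = cv2.imread("star.png", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (64, 64))
corners = cv2.goodFeaturesToTrack(img, maxCorners=20, qualityLevel=0.3, minDistance=5)
n_corners = 0 if corners is None else len(corners)   # scalar feature: the corner count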

Object (simple shapes) Detection in Image

I've got the following image.
Other Samples
I want to detect the six square-shaped green portions and the one circular portion above them. I basically want a binary image with these portions marked 1 (white) and everything else 0 (black).
What have I done so far?
I found a range of H, S, and V within which these colors fall which works fine for a single image, but I've got multiple such images, some under different illumination (brightness) conditions and the ranges do not work in those cases. What should I do to make the thresholding as invariant to brightness as possible? Is there a different approach I should take for thresholding?
What you did was manually analyze the values you need for thresholding for a specific image, and then apply that. What you see is that analysis done on one image doesn't necessarily fit other images.
The solution is to do the analysis automatically for each image. This can be achieved by creating a histogram for each of the channels, and if you're working in HSV, I'm guessing that the H channel would be pretty much useless in this case.
Anyway, once you have the histograms, you should analyze the threshold using something like Lloyd-Max, which is basically a K-Means type clustering of intensities. This should give the centroids for the intensity of the white background, and the other colors. Then you choose the threshold based on the cluster standard deviation.
For example, in the image you gave above, the histogram of the S channel has a large blob near 0: that is the white background, which has the lowest saturation.
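As a concrete starting point, Otsu's method is the two-cluster special case of this idea and is built into OpenCV; a minimal sketch on the S channel (shapes.jpg is a hypothetical input):
import cv2

img = cv2.imread("shapes.jpg")
s = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 1]    # saturation channel
# THRESH_OTSU picks the threshold from the histogram automatically,
# splitting the low-saturation background from the colored shapes
_, mask = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)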

Can anyone provide me with some clustering examples?

I am having a hard time understanding what scipy.cluster.vq really does!
On Wikipedia it says clustering can be used to divide a digital image into distinct regions for border detection or object recognition.
Other sites and books say we can use clustering methods to find groups of similar images.
As I am interested in image processing, I really need to fully understand what clustering is.
So, can anyone show me simple examples of using scipy.cluster.vq with images?
The kind of clustering performed by scipy.cluster.vq is definitely of the latter (groups of similar images) variety.
The only clustering algorithm implemented in scipy.cluster.vq is the K-Means algorithm, which typically treats input data as points in n-dimensional euclidean space, and attempts to divide that space so that new, incoming data can be summarized by saying "example x is most like centroid y". Centroids can be thought of as prototypical examples of the input data. Vector quantization leads to concise, or compressed representations because, instead of remembering all 100 pixels of each new image we see, we can remember a single integer which points at the prototypical example that the new image is most like.
If you had many small grayscale images:
>>> import numpy as np
>>> images = np.random.random_sample((100,10,10))
So, we've got 100 10x10 pixel images. Let's assume they already all have similar brightness and contrast. The scipy kmeans implementation expects flat vectors:
>>> images = images.reshape((100,100))
>>> images.shape
(100,100)
Now, let's train the K-Means algorithm so that any new incoming image can be assigned to one of 10 clusters:
>>> from scipy.cluster.vq import kmeans, vq
>>> codebook,distortion = kmeans(images,10)
Finally, let's say we have five new images we'd like to assign to one of the ten clusters:
>>> newimages = np.random.random_sample((5,10,10))
>>> clusters, _ = vq(newimages.reshape((5,100)), codebook)
clusters will contain the integer index of the best matching centroid for each of the five examples.
This is kind of a toy example, and won't yield great results unless the objects of interest in the images you're working with are all centered. Since objects of interest might appear anywhere in larger images, it's typical to learn centroids for smaller image "patches", and then convolve them (compare them at many different locations) with larger images to promote translation-invariance.
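A hedged sketch of that patch idea with scikit-learn's patch extractor (big_image is hypothetical, the patch size and counts are arbitrary choices, and kmeans is the scipy.cluster.vq function imported above):
from sklearn.feature_extraction.image import extract_patches_2d

# learn centroids on small patches instead of whole images
patches = extract_patches_2d(big_image, (5, 5), max_patches=1000, random_state=0)
codebook, _ = kmeans(patches.reshape(len(patches), -1).astype(float), 10)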
The second description is what clustering actually is: grouping objects that are somewhat similar (and those objects could be images). Clustering is not a purely imaging technique.
When processing a single image, clustering can, for example, be applied to the colors. This is quite a good approach for reducing the number of colors in an image. If you cluster by colors and pixel coordinates, you can also use it for image segmentation, as it will group pixels that have a similar color and are close to each other. But this is an application domain of clustering, not clustering itself.
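A minimal sketch of that color-plus-position segmentation, reusing scipy.cluster.vq (img is a hypothetical float RGB array; the 0.01 position weight is a knob to tune):
import numpy as np
from scipy.cluster.vq import kmeans, vq

h, w, _ = img.shape
yy, xx = np.mgrid[0:h, 0:w]
# per-pixel feature: color plus (down-weighted) position, so clusters
# are both similar in color and spatially compact
feats = np.column_stack([img.reshape(-1, 3),
                         0.01 * xx.reshape(-1, 1),
                         0.01 * yy.reshape(-1, 1)])
codebook, _ = kmeans(feats, 5)
labels, _ = vq(feats, codebook)
segments = labels.reshape(h, w)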

Automatically recognize patterns in images

Recently I downloaded some flags from the CIA World Factbook. Now I want to "classify" them:
Get the colors
Get some shapes (stars, moons etc.)
While browsing I came across the Python Imaging Library, which allows me to extract the colors (e.g., for Austria):
#!/usr/bin/env python
from PIL import Image   # modern Pillow; old-style PIL used "import Image"
bild = Image.open("au-lgflag.gif").convert("RGB")
print(bild.getcolors())
[(44748, (255, 255, 255)), (452, (236, 145, 146)), (653, (191, 147, 149)), ...)]
What I found strange here is that the Austrian flag only has two colors in it, but the above output shows more than ten. Do you know why? My idea was to count only the top 5 colors, and as I'm not interested in every exact color I would "normalize" the values to multiples of 64 (so (236, 145, 146) becomes (192, 128, 128)).
However, at the moment I have no idea of the best way to extract more information (is there a star in the image? and so on). Could you give me some hints on how to do it?
Thanks in advance
The Python Imaging Library (PIL) just does basic image manipulation: opening, some transforms or filters, and saving to other formats.
Pattern recognition is part of an advanced, still-evolving image processing field, and it uses algorithms far different from those present in PIL.
There are some libraries and frameworks you can use in Python for pattern recognition (recognising stars, moons, and so on), although I'll tell you in advance: if you want this just to classify one-hundred-and-a-few country flags, you should do it manually rather than try to dive into pattern recognition.
Your comment on the number of colors suggests that you are not very familiar with computer images yet. And pattern recognition is hardcore, even with a Python front-end. (You can't expect any current framework to know beforehand what a "moon" or a "star" is, for example.)
So, for fewer than 500 images, you can resort to software that allows you to tag images manually, and write some code to link the tags to each flag.
As for the colors: computer rasterized images are made of pixels, which are square. At the boundary between different colors, if a pixel is one color (say white) and its neighbor is a completely different color (like red), this boundary shows up jagged. This is known as "aliasing". To diminish it, software mixes colors at hard boundaries, creating intermediate colors; that is why a PNG with 2 apparent colors can contain several colors internally. For JPEG it is even worse, because the compression is lossy: the exact RGB values are not even stored as-is in the image.
Unlike pattern recognition, reducing the number of colours seen is easy: just keep the most significant bits of each component. I'd say the two most significant bits would be enough.
The following python function could do that using a color count given by PIL:
def get_main_colors(col_list):
    main_colors = set()
    for index, color in col_list:
        # keep only the two most significant bits of each component
        main_colors.add(tuple(component >> 6 for component in color))
    return [tuple(component << 6 for component in color) for color in main_colors]
Call it with "get_main_colors(bild.getcolors())", for example.
Here is another question dealing with the pattern recognition part:
python image recognition
First some quick terminology, just in case:
A classifier learns a map from inputs to outputs. You train a classifier by giving it input/output pairs, for example feature vectors like color information and labels like 'czech flag'. In practice, the labels are represented as scalar numbers. In your example, you have a multi-class problem, which simply means that there are more than two possible labels (obviously, since there are more than two country flags). Training a multi-class classifier can be a little trickier than the vanilla binary classifier, so you may want to search for terms like "multi-class classifier" or "one-vs-many classifier" to investigate the best approach for you.
On to the problem:
I think your problem might be easily-solved using a simple classifier, like k-nearest neighbors, with color histograms as feature vectors. In particular, I would use HSV feature vectors as opposed to RGB feature vectors. Some great results have been reported in the literature using just this kind of simple classifier system, for example: SVMs for Histogram-Based Image Classification. In that paper, the authors use a particular classifier known as a Support Vector Machine (SVM) and HSV feature vectors. HSV feature vectors also sidestep the issue of image scale and rotation, for example a flag that is 1024x768 vs 640x480, or a flag that is rotated in an image by 45 degrees.
The pseudocode for training the algorithm would look something like this:
# training simple kNN -- just compute feature vectors, collect labels
X = []  # list of (feature vector, label) tuples
for training_image in data:
    x = get_hsv_vector(training_image)
    y = get_label(training_image)
    X.append((x, y))

# classification -- pick the K closest feature vectors
K = 3  # the 'k' in kNN -- how many similar featvecs to use
d = []  # (distance, label) tuples for scoring
x_test = get_hsv_vector(test_image)  # feature vector to be classified
for x_train, y_train in X:
    d.append((distance(x_test, x_train), y_train))

# sort distances by closeness and vote with the top K labels
d.sort()
output = get_majority_vote([label for _, label in d[:K]])
The kNN classifier is available in several python packages, with good documentation. It should be pretty easy to convert to HSV colorspace as well. If you don't achieve your desired results, you can try to improve your feature vectors or your classifier.
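To make that concrete, a hedged sketch with OpenCV histograms and scikit-learn's kNN (train_paths and train_labels are hypothetical; note OpenCV cannot read GIFs, so convert the flags to PNG first):
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def get_hsv_vector(path, bins=8):
    """Flattened, normalized 3D HSV color histogram."""
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins] * 3,
                        [0, 180, 0, 256, 0, 256])   # H range in OpenCV is [0, 180]
    return cv2.normalize(hist, hist).flatten()

X = [get_hsv_vector(p) for p in train_paths]
clf = KNeighborsClassifier(n_neighbors=3).fit(X, train_labels)
print(clf.predict([get_hsv_vector("au-lgflag.png")]))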
