Can anyone provide me with some clustering examples? - python

I am having a hard time understanding what scipy.cluster.vq really does!!
On Wikipedia it says clustering can be used to divide a digital image into distinct regions for border detection or object recognition.
On other sites and in books it says we can use clustering methods for grouping similar images.
As I am interested in image processing, I really need to fully understand what clustering is.
So can anyone show me simple examples of using scipy.cluster.vq with images?

The kind of clustering performed by scipy.cluster.vq is definitely of the latter (groups of similar images) variety.
The only clustering algorithm implemented in scipy.cluster.vq is the K-Means algorithm, which typically treats input data as points in n-dimensional Euclidean space and attempts to divide that space so that new, incoming data can be summarized by saying "example x is most like centroid y". Centroids can be thought of as prototypical examples of the input data. Vector quantization leads to concise, compressed representations: instead of remembering all 100 pixels of each new image we see, we can remember a single integer which points at the prototypical example that the new image is most like.
If you had many small grayscale images:
>>> import numpy as np
>>> images = np.random.random_sample((100,10,10))
So, we've got 100 10x10 pixel images. Let's assume they already all have similar brightness and contrast. The scipy kmeans implementation expects flat vectors:
>>> images = images.reshape((100,100))
>>> images.shape
(100,100)
Now, let's train the K-Means algorithm so that any new incoming image can be assigned to one of 10 clusters:
>>> from scipy.cluster.vq import kmeans, vq
>>> codebook,distortion = kmeans(images,10)
Finally, let's say we have five new images we'd like to assign to one of the ten clusters:
>>> newimages = np.random.random_sample((5,10,10))
>>> clusters, distances = vq(newimages.reshape((5,100)), codebook)
clusters will contain the integer index of the best-matching centroid for each of the five examples (vq also returns the distance to that centroid, here distances).
This is kind of a toy example, and won't yield great results unless the objects of interest in the images you're working with are all centered. Since objects of interest might appear anywhere in larger images, it's typical to learn centroids for smaller image "patches", and then convolve them (compare them at many different locations) with larger images to promote translation-invariance.
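As a rough illustration of that patch idea (not part of the original answer; the image sizes, patch size and cluster count are arbitrary), you could cut each larger image into small patches and run the same kmeans/vq machinery on the patches instead of on whole images:
import numpy as np
from scipy.cluster.vq import kmeans, vq

bigimages = np.random.random_sample((20, 64, 64))   # stand-in for 20 larger grayscale images
patch = 8                                            # patch side length (arbitrary)
patches = []
for img in bigimages:
    for r in range(0, img.shape[0] - patch + 1, patch):
        for c in range(0, img.shape[1] - patch + 1, patch):
            patches.append(img[r:r + patch, c:c + patch].ravel())
patches = np.asarray(patches)

codebook, distortion = kmeans(patches, 16)   # 16 prototype patches
codes, dists = vq(patches, codebook)         # assign every patch to its nearest prototype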

The second description is what clustering is: grouping objects that are somewhat similar (and those objects can be images). Clustering is not a pure imaging technique.
When processing a single image, it can for example be applied to colors. This is quite a good approach for reducing the number of colors in an image. If you cluster by colors and pixel coordinates, you can also use it for image segmentation, as it will group pixels that have a similar color and are close to each other. But this is an application domain of clustering, not pure clustering.
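A minimal sketch of that color-plus-coordinates idea, assuming img is an RGB array; the spatial weight and cluster count are arbitrary and this is an illustration, not a tuned segmentation pipeline:
import numpy as np
from scipy.cluster.vq import kmeans2

img = (np.random.random_sample((120, 160, 3)) * 255).astype(np.uint8)  # stand-in RGB image
h, w, _ = img.shape
ys, xs = np.mgrid[0:h, 0:w]
spatial_weight = 0.5                        # how strongly position influences the grouping
features = np.column_stack([
    img.reshape(-1, 3).astype(float),       # R, G, B per pixel
    spatial_weight * xs.ravel(),            # x coordinate
    spatial_weight * ys.ravel(),            # y coordinate
])
centroids, labels = kmeans2(features, 8)    # 8 segments (arbitrary)
segments = labels.reshape(h, w)             # per-pixel segment index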

Related

Apply bundle adjustment to rectify images globally in the context of image stitching in python/openCV

I am trying to perform image registration on potentially hundreds of aerial images taken from a camera mounted on a UAV. I think it is safe to assume that I know the ordering of the images, and hopefully, sequential images will overlap.
I have read some papers that suggest using a CNN to find the homography matrix can vastly outperform the old-school feature descriptor matching with the RANSAC song and dance. My issue is that I don't quite understand how to stitch more than 2 images together. It seems to me that to register image 100 in the same coordinate frame as image 1 using the cv2.warpPerspective function, I would warp I100 by the chained product H1*H2*H3*...*H99. Even if the error in each transform is small, after 100 applications it seems like it would become huge. My understanding is that the solution to this problem is bundle adjustment.
I have looked into bundle adjustment a little bit, but I'm struggling to see how exactly I can use it. I have read the paper that many related Stack Overflow posts suggest, "Automatic Panoramic Image Stitching using Invariant Features". In the section on bundle adjustment, if I understand correctly, the authors suggest that after building the initial panorama it is likely that image A will eventually overlap with multiple other images. Using the matched feature points in any images that overlap with A, they basically calculate some adjustment...? I think to image A?
My question is using openCV how do I apply this adjustment? Let's say I have 3 images I1, I2, I3 all overlapping for a minimal example.
# assuming the CNN model predicts the transform
# I think the first step is to find the homographies between all image pairs
H12 = cnnMod.predict(I1,I2)
H13 = cnnMod.predict(I1,I3)
H23 = cnnMod.predict(I2,I3)
outI2 = cv2.warpPerspective(I2, H12, (maxWidth, maxHeight), flags=cv2.INTER_LINEAR)
outI3 = cv2.warpPerspective(I3, H13, (maxWidth, maxHeight), flags=cv2.INTER_LINEAR)
#now would I do some bundle voodoo?
#what would it look like?
#which of the bundler classes should I use?
#would it look like this?
#or maybe the input is features?
voodoo = cv2.bundleVoodoo([H12,H13,H23])
globallyRectifiedI2 = cv2.warpPerspective(outI2, voodoo[2], (maxWidth, maxHeight), flags=cv2.INTER_LINEAR)
The code is my best guess at what a solution might look like but clearly I have no idea what I am doing. I've not been able to find anything that actually shows how the bundle adjustment is done.
The basic idea underlying image alignment through bundle adjustment is that, rather than matching pairs of 2D points (x, x') across pairs of images, you posit the existence of 3D points X that, ideally, project onto matched tuples of 2D points (x, x', x'', ...) in corresponding tuples of images. You then solve for the locations of the X's and the camera parameters (extrinsics, and intrinsics if the camera is uncalibrated) that minimize the (usually robustified) RMS reprojection error over all 2D points and images; a toy sketch of that optimization is given after the simplifying assumptions below.
Depending on your particular setup and scene, you may make some simplifying assumptions, e.g.:
That the X's all belong to the same plane (which you can arbitrarily choose as the world's Z=0 plane). This is useful, for example, when stitching images of a painting, or aerial images on relatively flat ground with relatively small extent so one can ignore the earth's curvature.
Or that the X's are all on the WGS84 ellipsoid.
Both the above assumptions remove one free coordinate from X, effectively reducing the problem's dimensionality.
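As a toy sketch of that optimization (this is not the method from the paper, and all names and inputs below are hypothetical), you can write the reprojection error yourself and hand it to scipy.optimize.least_squares, which jointly refines a simple pose-plus-focal-length model per camera and the 3D points:
import numpy as np
import cv2
from scipy.optimize import least_squares

def project(point3d, rvec, tvec, f):
    """Project one 3D point with a Rodrigues rotation, a translation and a focal length."""
    R, _ = cv2.Rodrigues(rvec)
    p_cam = R @ point3d + tvec
    return f * p_cam[:2] / p_cam[2]

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed_xy):
    """Reprojection errors for every (camera, point) observation, flattened."""
    cams = params[:n_cams * 7].reshape(n_cams, 7)   # rvec(3), tvec(3), f(1) per camera
    pts = params[n_cams * 7:].reshape(n_pts, 3)     # one row per 3D point X
    errs = []
    for c, p, xy in zip(cam_idx, pt_idx, observed_xy):
        pred = project(pts[p], cams[c, :3], cams[c, 3:6], cams[c, 6])
        errs.append(pred - xy)
    return np.concatenate(errs)

# cam_idx[k], pt_idx[k]: which camera and which 3D point observation k belongs to
# observed_xy: (K, 2) matched 2D feature locations; x0: initial guess for all parameters
# result = least_squares(residuals, x0,
#                        args=(n_cams, n_pts, cam_idx, pt_idx, observed_xy))
Under the planar-scene assumption from the list above you would additionally fix the Z coordinate of every X to 0, removing one free coordinate per point.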

shape detection

I have tried 3 algorithms:
1. Structural similarity (compare_ssim).
2. Difference detection with PIL (ImageChops.difference).
3. Image subtraction (cv2.subtract).
The first algorithm:
from skimage.measure import compare_ssim  # skimage.metrics.structural_similarity in newer scikit-image
(score, diff) = compare_ssim(img1, img2, full=True)
diff = (diff * 255).astype("uint8")
The second algorithm:
from PIL import Image, ImageChops
img1 = Image.open("canny1.jpg")
img2 = Image.open("canny2.jpg")
diff = ImageChops.difference(img1, img2)
if diff.getbbox():
    diff.show()
The third algorithm:
import cv2
image3 = cv2.subtract(image1, image2)
The problem is that these algorithms are very sensitive: if the images differ only in noise, they report the two images as totally different. Any ideas how to fix that?
These pictures are different in many ways (deformation, lighting, colors, shape) and simple image processing just cannot handle all of this.
I would recommend a higher level method that tries to extract the geometry and color of those tubes, in the form of a simple geometric graph. Then compare the graphs rather than the images.
I acknowledge that this is easier said than done, and will only work with this particular kind of scene.
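Purely as an illustration of the graph idea (it assumes you have already reduced each scene to a small graph of tube junctions and segments, which is the hard part; the graphs below are made up), you could compare such graphs with networkx:
import networkx as nx

# hypothetical graphs: nodes are tube endpoints/junctions with a colour attribute,
# edges are tube segments
g1 = nx.Graph()
g1.add_nodes_from([(1, {"color": "red"}), (2, {"color": "red"}), (3, {"color": "blue"})])
g1.add_edges_from([(1, 2), (2, 3)])

g2 = nx.Graph()
g2.add_nodes_from([(1, {"color": "red"}), (2, {"color": "blue"}), (3, {"color": "blue"})])
g2.add_edges_from([(1, 2), (2, 3)])

# graph edit distance, requiring matched nodes to have the same colour
dist = nx.graph_edit_distance(g1, g2, node_match=lambda a, b: a["color"] == b["color"])
print(dist)  # smaller distance = more similar scenes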
It is very difficult to help since we don't really know which parameters you can change: can you keep your camera fixed? Will it always be just about tubes? What about the tubes' colors?
Nevertheless, I think what you are looking for is a framework for image registration, and I propose you use SimpleElastix. It is mainly used for medical images, so you might have to get familiar with the library SimpleITK. What's interesting is that you have a lot of parameters to control the registration. I think you will have to look into the documentation to find out how to control a specific image frequency, the one that creates the waves and deforms the images. In the example below I did not configure it to allow enough local distortion; you'll have to find the best trade-off, but I think it should be flexible enough.
Anyway, you can get such a result with the following code; I don't know if it helps, but I hope so:
import cv2
import numpy as np
import matplotlib.pyplot as plt
import SimpleITK as sitk
fixedImage = sitk.ReadImage('1.jpg', sitk.sitkFloat32)
movingImage = sitk.ReadImage('2.jpg', sitk.sitkFloat32)
elastixImageFilter = sitk.ElastixImageFilter()
affine_registration_parameters = sitk.GetDefaultParameterMap('affine')
affine_registration_parameters["NumberOfResolutions"] = ['6']
affine_registration_parameters["WriteResultImage"] = ['false']
affine_registration_parameters["MaximumNumberOfSamplingAttempts"] = ['4']
parameterMapVector = sitk.VectorOfParameterMap()
parameterMapVector.append(affine_registration_parameters)
parameterMapVector.append(sitk.GetDefaultParameterMap("bspline"))
elastixImageFilter.SetFixedImage(fixedImage)
elastixImageFilter.SetMovingImage(movingImage)
elastixImageFilter.SetParameterMap(parameterMapVector)
elastixImageFilter.Execute()
registeredImage = elastixImageFilter.GetResultImage()
transformParameterMap = elastixImageFilter.GetTransformParameterMap()
resultImage = sitk.Subtract(registeredImage, fixedImage)
resultImageNp = np.abs(sitk.GetArrayFromImage(resultImage))  # absolute difference for display
cv2.imwrite('gray_1.png', sitk.GetArrayFromImage(fixedImage))
cv2.imwrite('gray_2.png', sitk.GetArrayFromImage(movingImage))
cv2.imwrite('gray_2r.png', sitk.GetArrayFromImage(registeredImage))
cv2.imwrite('gray_diff.png', resultImageNp)
Your first image resized to 256x256:
Your second image:
Your second image registered with the first one:
Here is the difference between the first and second image which could show what's different:
This is one of the classical problems of image processing, and one which does not have a universally valid answer. The possible answers depend highly on what type of images you have and what type of information you want to extract from them and from the differences between them.
You can reduce noise in two ways:
a) Take several images of the same object, such that the object does not change. Averaging the stack reduces the noise by the square root of the number of images.
b) Run a blur filter over the image. The more you blur, the more noise is averaged. Noise is here reduced by the square root of the number of pixels you average over - but so is the detail in the images.
In both cases (a) and (b) you run the difference analysis only after the noise reduction has been applied.
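Here is a hedged sketch of both options with OpenCV and NumPy before taking the difference (the extra file names and the blur kernel size are placeholders):
import cv2
import numpy as np

# (a) average a stack of repeated exposures of the same, unchanged scene
stack = [cv2.imread(f, cv2.IMREAD_GRAYSCALE).astype(np.float32)
         for f in ["shot_1.png", "shot_2.png", "shot_3.png", "shot_4.png"]]
reference = np.mean(stack, axis=0)   # noise drops roughly with sqrt(len(stack))

# (b) blur both images so noise is averaged over neighbouring pixels (detail is lost too)
img1 = cv2.GaussianBlur(cv2.imread("canny1.jpg", cv2.IMREAD_GRAYSCALE), (5, 5), 0)
img2 = cv2.GaussianBlur(cv2.imread("canny2.jpg", cv2.IMREAD_GRAYSCALE), (5, 5), 0)

# only now run the difference analysis
diff = cv2.absdiff(img1, img2)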
Probably not applicable to you, as you likely cannot obtain either: it helps if you can get hold of flat fields, which capture the inhomogeneity of illumination and the pixel sensitivity of your camera and allow correcting the images before any processing. The same goes for dark fields, which estimate the influence of the camera's read-out noise and allow correcting the images for that.
There is also a third, more high-level option: run your object analysis first, at a detailed-enough level, and compare the results.

Extract feature vector from 2d image in numpy

I have a series of 2d images of two types, either a star or a pentagon. My aim is to classify all of these images respectively. I have 30 star images and 30 pentagon images. An example of each image is shown side by side here:
Before I apply the KNN classification algorithm, I need to extract a feature vector from all the images. The feature vectors must all be of the same size; however, the 2D images all vary in size. I have read in my images and I get back a 2D array of zeros and ones.
image = pl.imread('imagepath.png')
My question is how do I process image in order to produce a meaningful feature vector that contains enough information to allow me to do the classification. It has to be a single vector per image, which I will use for training and testing.
If you want to use opencv then:
Resize images to a standard size:
import cv2
import numpy as np
src = cv2.imread("/path.jpg")
target_size = (64,64)
dst = cv2.resize(src, target_size)
Convert to a 1D vector:
dst = dst.reshape(-1)  # flatten to a 1D vector (length 64*64*3 for a colour image)
Before you start coding, you have to decide which features are useful for this task:
The easiest way out is to try the approach in #Jordan's answer and convert the entire image to a feature vector. This could work because the classes are simple patterns, and it is interesting if you are using KNN. If this does not work well, the following points show how you should approach the problem.
The number of black pixels might not help, because the size of the star and pentagon can vary.
The number of sharp corners is very likely to be useful.
The number of straight line segments might be useful, but this could be unreliable because the shapes are hand-drawn.
Supposing you want to have a go at using the number of corners as a feature, you can refer to this page to learn how to extract corners.
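One possible way to turn the corner idea into a single number per image is OpenCV's Shi-Tomasi detector; this is a sketch under the assumption of roughly binary shape images, and the file names and detector parameters are placeholders:
import cv2

def corner_count(path, max_corners=50, quality=0.3, min_dist=10):
    """Count strong corners in a grayscale shape image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    corners = cv2.goodFeaturesToTrack(gray, max_corners, quality, min_dist)
    return 0 if corners is None else len(corners)

# a star should report more corners than a pentagon, so the count can be
# appended to (or used instead of) the flattened-pixel feature vector
print(corner_count("star_01.png"), corner_count("pentagon_01.png"))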

image segmentation of RGB image by K means clustering in python

I want to segment RGB images (satellite imagery) for land cover using k-means clustering, in such a fashion that the different regions of the image are marked by different colors and, if possible, boundaries are created separating the different regions.
Is it possible to achieve this by k-means clustering?
I have been searching all over the internet, and many tutorials do it with k-means clustering but only after converting the image to greyscale. I want to do it with an RGB image directly. Is there any source that could help me get started?
Please suggest something.
I imagine it is no longer relevant for RachJain, but in case someone needs it in the future:
A simple use of the sklearn KMeans algorithm will give the wanted outcome:
from sklearn.cluster import KMeans
import numpy as np
import matplotlib.pyplot as plt
from imageio import imread  # scipy.misc.imread is deprecated in newer SciPy
pic = np.float64(imread(filepath)) / 255
h, w, d = pic.shape
pixels = pic.reshape(h * w, d)  # one row per pixel, columns are the colour channels
kmeans = KMeans(n_clusters=13, random_state=0).fit(pixels)
pic2show = kmeans.cluster_centers_[kmeans.labels_].reshape(h, w, d)
plt.imshow(pic2show)
What do you mean that they convert the image to greyscale? The formula calculates the Euclidean distance of a point from a centroid. Therefore R, G, B values are used. Read this student report for a comparison of using different colour spaces - RGB or HSV: http://www.cs.bgu.ac.il/~ben-shahar/Teaching/Computational-Vision/StudentProjects/ICBV121/ICBV-2012-1-OfirNijinsky-AvivPeled/report.pdf

Automatically recognize patterns in images

Recently I downloaded some flags from the CIA World Factbook. Now I want to "classify" them:
Get the colors
Get some shapes (stars, moons etc.)
While browsing I came across the Python Imaging Library, which allows me to extract the colors (e.g. for Austria):
#!/usr/bin/env python
from PIL import Image  # plain "import Image" in very old PIL versions
bild = Image.open("au-lgflag.gif").convert("RGB")
bild.getcolors()
[(44748, (255, 255, 255)), (452, (236, 145, 146)), (653, (191, 147, 149)), ...)]
What I found strange here is that the Austrian flag only has two colors in it, but the above output shows more than ten. Do you know why? My idea was to count only the top 5 colors, and as I'm not interested in every exact color, I would "normalize" the values to multiples of 64 (so (236, 145, 146) becomes (192, 128, 128)).
However, at the moment I have no idea what the best way is to extract more information (is there a star in the image? and so on). Could you give me some hints on how to do it?
Thanks in advance
The Python Imaging Library - PIL just does basic image manipulation - opening, some transforms or filters, and saving to other formats.
Pattern recognition is part of an advanced and still-evolving image processing field - it uses algorithms far different from those present in PIL.
There are some libraries and frameworks you can use in Python for pattern recognition (recognizing stars, moons and so on), although I warn you: if you want this just to classify one-hundred-and-a-few country flags, you should do it manually rather than try to dive into pattern recognition.
Your comment on the number of colors suggests that you are not used to working with computer images at all, and pattern recognition is hardcore, even with a Python front-end. (You can't expect any current framework to know beforehand what a "moon" or a "star" is, for example.)
So, for fewer than 500 images, you can resort to software that allows you to tag images manually and write some code to link the tags to each flag.
As for the colors: computer rasterized images are formed of square pixels. At the boundary between different colors, if a pixel is one color (say white) and its neighbor is a completely different color (like red), that boundary shows up jagged. This is known as "aliasing". To diminish it, software mixes colors at hard boundaries, creating intermediate colors - that is why a PNG with only 2 apparent colors can contain several colors internally. For JPEG it is even worse, because the compression is lossy and even the original RGB values are not stored exactly in the image.
Unlike pattern recognition, reducing the number of colours seen is straightforward: just keep the most significant bits of each component. I'd say the two most significant bits would be enough.
The following python function could do that using a color count given by PIL:
def get_main_colors(col_list):
    main_colors = set()
    for index, color in col_list:
        main_colors.add(tuple(component >> 6 for component in color))
    return [tuple(component << 6 for component in color) for color in main_colors]
call it with "get_main_colors(bild.get_colors()) " for example.
Here is another question dealing with the pattern recognition part:
python image recognition
First some quick terminology, just in case:
A classifier learns a map from inputs to outputs. You train a classifier by giving it input/output pairs, for example feature vectors like color information and labels like 'czech flag'. In practice, the labels are represented as scalar numbers. In your example, you have a multi-class problem, which simply means that there are more than two possible labels (obviously, since there are more than two country flags). Training a multi-class classifier can be a little trickier than the vanilla binary classifier, so you may want to search for terms like "multi-class classifier" or "one-vs-many classifier" to investigate the best approach for you.
On to the problem:
I think your problem might be easily solved using a simple classifier, like k-nearest neighbors, with color histograms as feature vectors. In particular, I would use HSV feature vectors as opposed to RGB feature vectors. Some great results have been reported in the literature using just this kind of simple classifier system, for example: SVMs for Histogram-Based Image Classification. In that paper, the authors use a particular classifier known as a Support Vector Machine (SVM) and HSV feature vectors. HSV feature vectors also sidestep the issue of image scale and rotation, for example a flag that is 1024x768 vs 640x480, or a flag that is rotated in an image by 45 degrees.
The pseudocode for training the algorithm would look something like this:
# training simple kNN -- just compute feature vectors, collect labels
X = []  # list of (feature vector, label) tuples
for training_image in data:
    x = get_hsv_vector(training_image)
    y = get_label(training_image)
    X.append((x, y))

# classification -- pick the K closest feature vectors
K = 3  # the 'k' in kNN -- how many similar featvecs to use
d = []  # (distance, label) tuples for scoring
x_test = get_hsv_vector(test_image)  # feature vector to be classified
for x_train, y_train in X:
    d.append((distance(x_test, x_train), y_train))

# sort distances, d, by closeness and pick the top K labels for scoring
d.sort()
output = get_majority_vote([label for _, label in d[:K]])
The kNN classifier is available in several python packages, with good documentation. It should be pretty easy to convert to HSV colorspace as well. If you don't achieve your desired results, you can try to improve your feature vectors or your classifier.
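A hedged sketch of how those pieces could fit together with OpenCV and scikit-learn (the flag file names, labels and histogram bin counts below are placeholders, not part of the original answer):
import cv2
from sklearn.neighbors import KNeighborsClassifier

def hsv_histogram(path, bins=(8, 8, 8)):
    """Normalized 3D HSV colour histogram, flattened into a feature vector."""
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

paths = ["austria.png", "czechia.png", "germany.png", "hungary.png"]  # placeholder training images
labels = ["AT", "CZ", "DE", "HU"]
X = [hsv_histogram(p) for p in paths]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, labels)
print(knn.predict([hsv_histogram("unknown-flag.png")]))  # predicted country label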
