Vectorized implementation of kmeans++ - python

I have implemented the kmeans++ algorithm to initialize the clusters when performing K-means clustering. The loop has to run k times. Is there any way to vectorize the algorithm so it runs faster?
points is an array of points in d dimensions and k is the number of centroids to return.
It works by computing, for every point, the minimum distance to the centroids found so far, and then using those distances as sampling probabilities for choosing the next centroid.
The issue is that it scales badly when k is large.
import numpy as np

def init_plus_plus(points, k):
    centroids = np.zeros_like(points[:k])
    # Pick the first centroid uniformly at random.
    r = np.random.randint(0, points.shape[0])
    centroids[0] = points[r]
    for i in range(1, k):
        # Euclidean distance from every point to its nearest chosen centroid
        # (inlined in place of the undefined self.euclidian_distance helper).
        diffs = points[:, None, :] - centroids[None, :i, :]
        min_distances = np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)
        # Turn the distances into sampling probabilities and draw the next centroid.
        prob = min_distances / min_distances.sum()
        cs = np.cumsum(prob)
        idx = np.sum(cs < np.random.rand())
        centroids[i] = points[int(idx)]
    return centroids
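
Since each pass adds only one centroid, you don't have to recompute distances to all i existing centroids every iteration: keep a running array of per-point minimum distances and fold in the distances to the newest centroid only, which drops the per-iteration cost from O(n·i·d) to O(n·d). A minimal sketch of that idea (the function name and the use of np.random.default_rng are mine, and it keeps this question's convention of weighting by plain rather than squared distance; canonical k-means++ squares min_distances before normalizing):

import numpy as np

def init_plus_plus_incremental(points, k, seed=None):
    rng = np.random.default_rng(seed)
    centroids = np.zeros_like(points[:k])
    centroids[0] = points[rng.integers(points.shape[0])]
    # Running minimum distance from every point to the centroids chosen so far.
    min_distances = np.linalg.norm(points - centroids[0], axis=1)
    for i in range(1, k):
        # Sample the next centroid in proportion to the current minimum distances.
        prob = min_distances / min_distances.sum()
        idx = rng.choice(points.shape[0], p=prob)
        centroids[i] = points[idx]
        # Only distances to the newly added centroid need computing.
        new_distances = np.linalg.norm(points - centroids[i], axis=1)
        np.minimum(min_distances, new_distances, out=min_distances)
    return centroids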


Why does my k-means convergence condition give different results than sklearn?

I've written a function that executes k-means clustering with data, K, and n (number of iterations) as inputs. I can set n=100 so that the function calculates Euclidean distance, assigns clusters and calculates new cluster centroids 100 times over before completing. When comparing the results of this with sklearn's KMeans (using the same data and K) I get the same centroids.
I would like my function to run only until convergence (so as to avoid unnecessary iterations), so that it no longer needs the number of iterations as an input. To implement this I've used a while loop, whereas in my original function I used a for loop that iterates n times. The general steps of my code are as follows:

1. Randomly create the centroids and store them in the variable centroids.
2. Calculate the Euclidean distance of each data point from the centroids generated above and store it in the variable EucDist.
3. Assign each data point a cluster based on the EucDist from centroids.
4. Regroup the data points based on the cluster index cluster, then compute the mean of each separated cluster and assign it as the new centroids.
Steps 2-4 are then looped over until convergence. Below is the code for my while loop (I'm aware there is probably some unnecessary code in here; however, I'm unsure where the problem originates, if it even is a problem).
while True:
    previouscentroids = centroids
    EucDist = np.array([]).reshape(a, 0)
    for k in range(K):
        Dist = np.sqrt(np.sum((data - centroids[:, k]) ** 2, axis=1))
        EucDist = np.c_[EucDist, Dist]
    cluster = np.argmin(EucDist, axis=1) + 1
    points = {}
    for k in range(K):
        points[k + 1] = np.array([]).reshape(2, 0)
    for i in range(a):
        points[cluster[i]] = np.c_[points[cluster[i]], data[i]]
    for k in range(K):
        points[k + 1] = points[k + 1].T
    for k in range(K):
        centroids[:, k] = np.mean(points[k + 1], axis=0)
    if np.all(previouscentroids - centroids == 0):
        break
My understanding of this is that once centroids is the same as it was at the start of the iteration (previouscentroids), the loop will break and I'll have my final clusters. I assumed this would be the same as the centroids produced from my original function (and the same as from sklearn), as I thought once convergence is reached, no matter how many times you iterate the loop, the clusters will remain unchanged (thus so will the centroids). Below is the for loop from my original function for comparison (very little has been changed).
for i in range(n):
    EucDist = np.array([]).reshape(a, 0)
    for k in range(K):
        Dist = np.sqrt(np.sum((data - centroids[:, k]) ** 2, axis=1))
        EucDist = np.c_[EucDist, Dist]
    cluster = np.argmin(EucDist, axis=1) + 1
    points = {}
    for k in range(K):
        points[k + 1] = np.array([]).reshape(2, 0)
    for i in range(a):
        points[cluster[i]] = np.c_[points[cluster[i]], data[i]]
    for k in range(K):
        points[k + 1] = points[k + 1].T
    for k in range(K):
        centroids[:, k] = np.mean(points[k + 1], axis=0)
K-means is a non-deterministic algorithm. If you don't use the same initialization as scikit-learn, you are not guaranteed to find the same clusters. The scikit-learn documentation states:
Given enough time, K-means will always converge, however this may be to a local minimum. This is highly dependent on the initialization of the centroids. As a result, the computation is often done several times, with different initializations of the centroids.
K-means results will change from run to run because the initial cluster centers are chosen at random. So unless you start from exactly the same initial cluster centers, the results won't be the same.
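
One further thing worth checking in the while loop above, independent of initialization: previouscentroids = centroids binds a second name to the same NumPy array rather than taking a snapshot, and since centroids is then modified in place via centroids[:, k] = ..., the convergence test compares the array with itself and passes after the first iteration. A minimal self-contained illustration:

import numpy as np

centroids = np.zeros((2, 3))
previouscentroids = centroids          # plain assignment aliases the array
centroids[:, 0] = 1.0                  # in-place update changes both names
print(np.all(previouscentroids - centroids == 0))   # True: test always passes

previouscentroids = centroids.copy()   # snapshot the values instead
centroids[:, 1] = 2.0
print(np.all(previouscentroids - centroids == 0))   # False: change is detected

Taking centroids.copy() at the top of each iteration makes the test compare the old values against the new ones.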

Get inertia for nltk k means clustering using cosine_similarity

I have used nltk for k-means clustering, as I would like to change the distance metric. Does nltk's k-means have an inertia attribute similar to sklearn's? I can't seem to find it in their documentation or online...
The code below is how people usually compute inertia using sklearn's KMeans.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

inertia = []
for n_clusters in range(2, 26, 1):
    clusterer = KMeans(n_clusters=n_clusters)
    preds = clusterer.fit_predict(features)  # features: your data matrix
    centers = clusterer.cluster_centers_
    inertia.append(clusterer.inertia_)

plt.plot([i for i in range(2, 26, 1)], inertia, 'bx-')
plt.xlabel('k')
plt.ylabel('Sum_of_squared_distances')
plt.title('Elbow Method For Optimal k')
plt.show()
You can write your own function to obtain the inertia for the k-means clusterer in nltk.
As per your earlier question, How do I obtain individual centroids of K mean cluster using nltk (python), and using the same dummy data after forming 2 clusters:
Referring to the docs (https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html), inertia is the sum of squared distances of samples to their closest cluster center.
import numpy as np
from sklearn.cluster import KMeans

feature_matrix = df[['feature1', 'feature2', 'feature3']].to_numpy()
centroid = df['centroid'].to_numpy()

def nltk_inertia(feature_matrix, centroid):
    sum_ = []
    for i in range(feature_matrix.shape[0]):
        # Inertia as defined in the scikit-learn docs: the sum of squared
        # distances from each sample to its closest cluster center.
        sum_.append(np.sum((feature_matrix[i] - centroid[i]) ** 2))
    return sum(sum_)

nltk_inertia(feature_matrix, centroid)
# output: 27.495250000000002

# Now run k-means on feature1, feature2 and feature3 with the same number of clusters (2):
scikit_kmeans = KMeans(n_clusters=2)
scikit_kmeans.fit(vectors)  # vectors = [np.array(f) for f in df.values], containing feature1, feature2, feature3
scikit_kmeans.inertia_
# output: 27.495250000000006
The previous answer is actually missing a small detail:
feature_matrix = df[['feature1', 'feature2', 'feature3']].to_numpy()
centroid = df['centroid'].to_numpy()
cluster = df['predicted_cluster'].to_numpy()

def nltk_inertia(feature_matrix, centroid):
    sum_ = []
    for i in range(feature_matrix.shape[0]):
        # Index the centroids by each point's assigned cluster.
        sum_.append(np.sum((feature_matrix[i] - centroid[cluster[i]]) ** 2))
    return sum(sum_)
You have to select the corresponding cluster centroid when calculating the distance between centroids and data points. Notice the cluster variable in the code above.
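
For completeness, here is a sketch of how this can fit together end to end with nltk itself. KMeansClusterer, cosine_distance, cluster, and means are real nltk APIs; the DataFrame columns are the dummy data assumed above, and note the inertia computed here is sklearn-style (squared Euclidean) even though the clustering used cosine distance:

import numpy as np
from nltk.cluster import KMeansClusterer
from nltk.cluster.util import cosine_distance

vectors = [np.array(f) for f in df[['feature1', 'feature2', 'feature3']].values]

clusterer = KMeansClusterer(2, distance=cosine_distance, repeats=10)
assignments = clusterer.cluster(vectors, assign_clusters=True)
means = clusterer.means()

# Sum of squared Euclidean distances from each sample to its assigned cluster mean.
inertia = sum(np.sum((v - means[c]) ** 2) for v, c in zip(vectors, assignments))
print(inertia)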

Is there a better and faster way to convert from Scipy condensed distance matrix to a Scipy sparse distance matrix under a threshold

I'm trying to calculate the Euclidean distance between n-dimensional points and then get a sparse distance matrix of all pairs whose distance is under a set threshold.
I've already got a method working, but it is too slow. For 12,000 points in 3D, it takes about 8 seconds. The rest of the program runs in under a second, so this is the main bottleneck; it will be run hundreds of times, so improving this step will improve performance by a large amount.
This is my current implementation:
import numpy as np
from scipy import sparse, spatial

def make_sparse_dm(points: np.ndarray, thresh: float):
    n = points.shape[0]
    # Dense n x n pairwise Euclidean distance matrix.
    distance_matrix = spatial.distance.squareform(spatial.distance.pdist(points))
    i, j = np.meshgrid(np.arange(n), np.arange(n))
    points_under_thresh = distance_matrix <= thresh
    i = i[points_under_thresh]
    j = j[points_under_thresh]
    v = distance_matrix[points_under_thresh]
    return sparse.coo_matrix((v, (i, j)), shape=(n, n)).tocsr()
The output is then fed into a library which is much faster when the input is in scipy sparse distance matrix form.
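
One way to sidestep the dense n x n intermediate entirely, assuming Euclidean distance, is a KD-tree query: scipy's cKDTree.sparse_distance_matrix returns only the pairs within max_distance. A sketch (the function name is mine):

import numpy as np
from scipy import spatial

def make_sparse_dm_kdtree(points: np.ndarray, thresh: float):
    # Query the tree against itself; pairs beyond thresh are never generated,
    # so no dense n x n matrix is materialized.
    tree = spatial.cKDTree(points)
    sdm = tree.sparse_distance_matrix(tree, max_distance=thresh,
                                      output_type='coo_matrix')
    return sdm.tocsr()

Note that, depending on the scipy version, exact-zero distances (e.g., the diagonal) may end up stored implicitly rather than explicitly, which can matter to the downstream library.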

Should np.linalg.norm be squared when implementing k-means clustering algorithm?

The objective of the k-means clustering algorithm is to find cluster sets $S = \{S_1, \ldots, S_k\}$ minimizing the within-cluster sum of squares:

$$\arg\min_{S} \sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - \mu_i \rVert^2$$

where $\mu_i$ is the mean of the points in $S_i$.
I looked at several implementations of it in python, and in some of them the norm is not squared.
For example (taken from here):
def form_clusters(labelled_data, unlabelled_centroids):
    """
    given some data and centroids for the data, allocate each
    datapoint to its closest centroid. This forms clusters.
    """
    # enumerate because centroids are arrays which are unhashable
    centroids_indices = range(len(unlabelled_centroids))
    # initialize an empty list for each centroid. The list will
    # contain all the datapoints that are closer to that centroid
    # than to any other. That list is the cluster of that centroid.
    clusters = {c: [] for c in centroids_indices}
    for (label, Xi) in labelled_data:
        # for each datapoint, pick the closest centroid.
        smallest_distance = float("inf")
        for cj_index in centroids_indices:
            cj = unlabelled_centroids[cj_index]
            distance = np.linalg.norm(Xi - cj)
            if distance < smallest_distance:
                closest_centroid_index = cj_index
                smallest_distance = distance
        # allocate that datapoint to the cluster of that centroid.
        clusters[closest_centroid_index].append((label, Xi))
    return clusters.values()
And, for contrast, the expected implementation (taken from here; this is just the distance calculation):
import numpy as np
from numpy.linalg import norm

def compute_distance(self, X, centroids):
    distance = np.zeros((X.shape[0], self.n_clusters))
    for k in range(self.n_clusters):
        row_norm = norm(X - centroids[k, :], axis=1)
        distance[:, k] = np.square(row_norm)
    return distance
Now, I know there are several ways to calculate the norm/distance, but I looked only at implementations that used np.linalg.norm with ord=None or ord=2, and, as I said, in some of them the norm is not squared, yet they cluster correctly.
Why?
In my experience, using either the norm or the squared norm as the objective function of an optimization algorithm yields similar results: the minimum value of the objective changes, but the parameters obtained are the same, because squaring is a monotonically increasing transformation on nonnegative values and therefore preserves which candidate attains the minimum. I always figured that the inner product generates a quadratic function and that taking its root only changes the magnitude, not the topology, of the objective. A more detailed answer can be found here: https://math.stackexchange.com/questions/2253443/difference-between-least-squares-and-minimum-norm-solution
Hope it helps.
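
To make the monotonicity point concrete, a small self-contained check (with random, purely illustrative data): every point's nearest centroid is the same whether you compare plain or squared Euclidean distances.

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 4))        # illustrative data points
centroids = rng.random((3, 4))  # illustrative centroids

# Point-to-centroid distances, shape (100, 3).
d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)

# Squaring never changes which centroid is closest.
assert np.array_equal(d.argmin(axis=1), (d ** 2).argmin(axis=1))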

Python while loop execution speed up [duplicate]

I am trying to implement a very simple greedy clustering algorithm in Python, but am hard-pressed to optimize it for speed. The algorithm takes a distance matrix, finds the column with the most components less than a predetermined distance cutoff, and stores the row indices (of the components less than the cutoff) as the members of the cluster. The centroid of the cluster is the column index. The columns and rows of each member index are then removed from the distance matrix (resulting in a smaller, but still square, matrix), and the algorithm iterates through successively smaller distance matrices until all clusters are found.

Because each iteration depends on the last (a new distance matrix is formed so that there are no overlapping members between clusters), I think I cannot avoid a slow for loop in Python. I've tried numba (jit) to speed it up, but I think it is falling back to object (Python) mode and so yields no speed gains.

Here are two implementations of the algorithm; the first is slower than the second. Any suggestions for speedups are most welcome. I am aware of other clustering algorithms as implemented in scipy or sklearn (such as DBSCAN and k-means/medoids), but am very keen to use the current one for my application. Thanks in advance for any suggestions.
Method 1 (slower):
import numpy as np

def cluster(distance_matrix, cutoff=1):
    indices = np.arange(0, len(distance_matrix))
    boolean_distance_matrix = distance_matrix <= cutoff
    centroids = []
    members = []
    while boolean_distance_matrix.any():
        # The column with the most points within the cutoff becomes the centroid.
        centroid = np.argmax(np.sum(boolean_distance_matrix, axis=0))
        mem_indices = boolean_distance_matrix[:, centroid]
        mems = indices[mem_indices]
        # Remove the members' rows and columns from further consideration.
        boolean_distance_matrix[mems, :] = False
        boolean_distance_matrix[:, mems] = False
        centroids.append(centroid)
        members.append(mems)
    return members, centroids
Method 2 (faster, but still slow for large matrices):
It takes as input a sparse adjacency matrix formed from sklearn's nearest-neighbors implementation. This is the simplest and fastest way I could think of to get the relevant distance matrix for clustering. I believe working with the sparse matrix also speeds up the clustering algorithm.
import numpy as np
from sklearn.neighbors import NearestNeighbors

nbrs = NearestNeighbors(metric='euclidean', radius=1.5, algorithm='kd_tree')
nbrs.fit(data)
adjacency_matrix = nbrs.radius_neighbors_graph(data)

def cluster(adjacency_matrix, gt=1):
    rows = adjacency_matrix.nonzero()[0]
    cols = adjacency_matrix.nonzero()[1]
    members = []
    member = np.ones(gt + 1)  # dummy value so the loop starts
    centroids = []
    appendc = centroids.append
    appendm = members.append
    while len(member) > gt:
        # The most frequent column index is the next centroid.
        un, coun = np.unique(cols, return_counts=True)
        centroid = un[np.argmax(coun)]
        appendc(centroid)
        member = rows[cols == centroid]
        appendm(member)
        # Drop all edges whose row belongs to the new cluster.
        keep = np.in1d(rows, member, invert=True)
        cols = cols[keep]
        rows = rows[keep]
    return members, centroids
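
If the np.unique call dominates, one variant worth trying (a sketch, not a guaranteed drop-in speedup) replaces it with np.bincount, which is typically faster for counting non-negative integer labels; note it also stops before appending a cluster of size <= gt, which differs slightly from the loop above:

import numpy as np

def cluster_bincount(adjacency_matrix, gt=1):
    rows, cols = adjacency_matrix.nonzero()
    n = adjacency_matrix.shape[0]
    members, centroids = [], []
    while len(rows) > gt:
        # Count how many neighbors each column index still has.
        counts = np.bincount(cols, minlength=n)
        centroid = counts.argmax()
        if counts[centroid] <= gt:
            break  # largest remaining cluster is too small
        member = rows[cols == centroid]
        centroids.append(centroid)
        members.append(member)
        # Drop all edges whose row belongs to the new cluster.
        keep = ~np.isin(rows, member)
        rows, cols = rows[keep], cols[keep]
    return members, centroids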
