I have some food images stored in a single folder. All of the images are unlabeled, and they are not sorted into separate folders such as "pasta" or "meat". My current goal is to cluster the images into a number of categories so that I can later assess whether the taste of the foods depicted in images of the same cluster is similar.
To do that, I load the images and process them into a format that can be fed into VGG16 for feature extraction, and then pass the features to KMeans to cluster the images. The code I am using is:
import os
import glob
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image
from sklearn.cluster import KMeans

path = r'C:\Users\Hi\Documents\folder'
train_dir = os.path.join(path)

model = VGG16(weights='imagenet', include_top=False)
vgg16_feature_list = []

files = glob.glob(r'C:\Users\Hi\Documents\folder\*.jpg')
for img_path in files:
    img = image.load_img(img_path, target_size=(224, 224))
    img_data = image.img_to_array(img)
    img_data = np.expand_dims(img_data, axis=0)
    img_data = preprocess_input(img_data)

    vgg16_feature = model.predict(img_data)
    vgg16_feature_np = np.array(vgg16_feature)
    vgg16_feature_list.append(vgg16_feature_np.flatten())

vgg16_feature_list_np = np.array(vgg16_feature_list)
print(vgg16_feature_list_np.shape)
print(vgg16_feature_np.shape)

kmeans = KMeans(n_clusters=3, random_state=0).fit(vgg16_feature_list_np)
print(kmeans.labels_)
The issue is that I get the following warning:
ConvergenceWarning: Number of distinct clusters (1) found smaller than n_clusters (3). Possibly due to duplicate points in X.
How can I fix that?
This is one of those situations where, although your code is fine from a programming point of view, it does not produce satisfactory results due to an ML-related issue (with the data, the model, or both), and hence it is rather difficult to "debug" (I put the word in quotes since this is not the typical debugging procedure: the code itself runs fine).
At first glance, the situation seems to imply that there is not enough diversity in your features to justify 3 different clusters. And, provided that we remain in a K-means context, there is not much you can do; the few options available are listed below (refer to the documentation for details of the respective parameters; a sketch combining them follows the list):
Increase the number of iterations max_iter (default 300)
Increase the number of different centroid initializations n_init (default 10)
Change the init argument to random (the default is k-means++) or, even better, provide a 3-element array with one sample from each of your targeted clusters (if you already have an idea of what these clusters may actually be in your data)
Run the model with different random_state values
Combine the above
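For illustration, here is a minimal sketch combining the options above; the specific values are arbitrary and would need tuning, and vgg16_feature_list_np is the feature matrix from your own code:

from sklearn.cluster import KMeans

kmeans = KMeans(
    n_clusters=3,
    init='random',     # or the default 'k-means++', or an array of 3 seed samples
    n_init=50,         # more centroid initializations than the default 10
    max_iter=1000,     # more iterations than the default 300
    random_state=42,   # try a few different values
).fit(vgg16_feature_list_np)
print(kmeans.labels_)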
If none of the above works, it very likely means that K-means is simply not applicable here, and you may have to look for alternative approaches (which are out of the scope of this thread). Truth is, as correctly pointed out in the comment below, K-means does not usually work that well with data of such high dimensionality.
I'm trying to understand how to use Stellargraph's EdgeSplitter class. In particular, the examples in the documentation for training a link prediction model based on Node2Vec split the graph into the following parts:
Distribution of samples across the train, val and test sets
Following the examples in the documentation, first you sample 10% of the links of the full graph in order to obtain the test set:
# Define an edge splitter on the original graph:
edge_splitter_test = EdgeSplitter(graph)
# Randomly sample a fraction p=0.1 of all positive links, and same number of negative links, from graph, and obtain the
# reduced graph graph_test with the sampled links removed:
graph_test, examples_test, labels_test = edge_splitter_test.train_test_split(
    p=0.1, method="global"
)
As far as I understand from the docs, graph_test is the original graph but with the test links removed. Then you perform the same operation to obtain the training set:
# Do the same process to compute a training subset from within the test graph
edge_splitter_train = EdgeSplitter(graph_test)
graph_train, examples, labels = edge_splitter_train.train_test_split(
    p=0.1, method="global"
)
Following the previous logic, graph_train corresponds to graph_test with the training links removed.
Further down the code, my understanding is that we use graph_train to train the embedding and the training samples (examples, labels) to train the classifier. So I have several questions here:
Why are we using disjoint sets of training data to train different parts of the model? Shouldn't we train both the embedding and the classifier with the full training set of links?
Why is the test set so big? Wouldn't it be better to have most samples in the training set?
What is the correct way of using the EdgeSplitter class?
Thank you in advance for your help!
Why disjoint sets:
This may or may not matter depending on the embedding algorithm.
The risk with edges that are both seen by the embedding algorithm and the classifier as targets is that the embedding algorithm may encode non-generalizable features.
For example, theoretically one feature of the embedding could be the node id, and then you could have other features encoding the entire neighborhood of the node. When combining two nodes' embeddings into a link vector in a weird way, or when using a multilayer model, one could therefore create a binary feature which is 1 if the two nodes were connected during embedding training and 0 otherwise.
In this case the classifier would perhaps just learn to use this trivial feature which is not present (i.e. has value 0) when you go to the test data.
The above would not happen in a real scenario, but more subtle features could have the same effect to a lesser degree.
In the end, this only risks making model selection worse.
That is, the first split is to make the test reliable. The second split is to improve model selection. You can therefore omit the second split if you wish.
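As a rough sketch of the pattern described above: the EdgeSplitter calls mirror the question's code, while embed_nodes() and link_features() are hypothetical placeholders for the Node2Vec embedding and the link-feature construction, not StellarGraph API.

from stellargraph.data import EdgeSplitter
from sklearn.linear_model import LogisticRegression

# First split: hold out test links so the final evaluation is reliable.
graph_test, examples_test, labels_test = EdgeSplitter(graph).train_test_split(
    p=0.1, method="global"
)

# Second split: keep the classifier's training links out of the embedding's graph,
# so the embedding cannot encode which training pairs were connected.
graph_train, examples_train, labels_train = EdgeSplitter(graph_test).train_test_split(
    p=0.1, method="global"
)

embedding = embed_nodes(graph_train)   # hypothetical: Node2Vec trained on graph_train
clf = LogisticRegression().fit(link_features(embedding, examples_train), labels_train)

# The test links were removed before anything was trained, so this score is honest.
print(clf.score(link_features(embedding, examples_test), labels_test))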
Why test set so big:
You are likely to get a higher score with a bigger training set. As long as the experiment is repeated with different splits and the variance is under control, it should be fine to increase the training-set size.
What is the correct way to use EdgeSplitter:
I don't know what 'correct' means here. I think graph splitting is still an active research field.
I'm trying to do multilabel classification with SVM.
I have nearly 8k features and a y vector of length nearly 400. My Y vectors are already binarized, so I didn't use MultiLabelBinarizer(), but when I use it with the raw form of my Y data, it still gives the same result.
I'm running this code:
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X = np.genfromtxt('data_X', delimiter=";")
Y = np.genfromtxt('data_y', delimiter=";")
training_X = X[:2600, :]
training_y = Y[:2600, :]
test_sample = X[2600:2601, :]
test_result = Y[2600:2601, :]

classif = OneVsRestClassifier(SVC(kernel='rbf'))
classif.fit(training_X, training_y)
print(classif.predict(test_sample))
print(test_result)
After the fitting process, when it comes to the prediction part, it says Label not x is present in all training examples (where x is a few different numbers in the range of my y vector length, which is 400). After that it gives a predicted y vector which is always the zero vector of length 400 (the y vector length).
I'm new to scikit-learn and also to machine learning. I couldn't figure out the problem here. What's the problem and what should I do to fix it?
Thanks.
There are 2 problems here:
1) The missing label warning
2) You are getting all 0's for predictions
The warning means that some of your classes are missing from the training data. This is a common problem. If you have 400 classes, then some of them must only occur very rarely, and on any split of the data, some classes may be missing from one side of the split. There may also be classes that simply don't occur in your data at all. You could try Y.sum(axis=0).all() and if that is False, then some classes do not occur even in Y. This all sounds horrible, but realistically, you aren't going to be able to correctly predict classes that occur 0, 1, or any very small number of times anyway, so predicting 0 for those is probably about the best you can do.
As for the all-0 predictions, I'll point out that with 400 classes, probably all of your classes occur much less than half the time. You could check Y.mean(axis=0).max() to get the highest label frequency. With 400 classes, it might only be a few percent. If so, a binary classifier that has to make a 0-1 prediction for each class will probably pick 0 for all classes on all instances. This isn't really an error, it is just because all of the class frequencies are low.
If you know that each instance has a positive label (at least one), you could get the decision values (clf.decision_function) and pick the class with the highest one for each instance. You'll have to write some code to do that, though.
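For illustration, here is a minimal sketch of that idea, reusing Y, classif, and test_sample from the question; the threshold value is a made-up example that you would tune on a holdout set:

import numpy as np

# Sanity checks on label frequencies (see above).
print(Y.sum(axis=0).all())    # False -> some labels never occur at all
print(Y.mean(axis=0).max())   # frequency of the most common label

# One decision value per (instance, label) pair.
scores = classif.decision_function(test_sample)

# Take the top-scoring label for each instance, plus anything above a threshold.
pred = np.zeros_like(scores, dtype=int)
pred[np.arange(scores.shape[0]), scores.argmax(axis=1)] = 1
pred[scores > -0.25] = 1      # hypothetical threshold; tune it on held-out data
print(pred)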
I once had a top-10 finish in a Kaggle contest that was similar to this. It was a multilabel problem with ~200 classes, none of which occurred with even a 10% frequency, and we needed 0-1 predictions. In that case I got the decision values and took the highest one, plus anything that was above a threshold. I chose the threshold that worked the best on a holdout set. The code for that entry is on Github: Kaggle Greek Media code. You might take a look at it.
If you made it this far, thanks for reading. Hope that helps.
I have a bunch of sentences and I want to cluster them using scikit-learn's spectral clustering. I've run the code and get the results with no problem. But every time I run it I get different results. I know this is a problem with the initialization, but I don't know how to fix it. This is the part of my code that runs on the sentences:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import kneighbors_graph
from sklearn.metrics.pairwise import euclidean_distances
from sklearn import cluster

vectorizer = TfidfVectorizer(norm='l2',sublinear_tf=True,tokenizer=tokenize,stop_words='english',charset_error="ignore",ngram_range=(1, 5),min_df=1)
X = vectorizer.fit_transform(data)
# connectivity matrix for structured Ward
connectivity = kneighbors_graph(X, n_neighbors=5)
# make connectivity symmetric
connectivity = 0.5 * (connectivity + connectivity.T)
distances = euclidean_distances(X)
spectral = cluster.SpectralClustering(n_clusters=number_of_k,eigen_solver='arpack',affinity="nearest_neighbors",assign_labels="discretize")
spectral.fit(X)
data is a list of sentences. Every time the code runs, my clustering results differ. How can I get consistent results using spectral clustering? I also have the same problem with KMeans. This is my code for KMeans:
vectorizer = TfidfVectorizer(sublinear_tf=True,stop_words='english',charset_error="ignore")
X_data = vectorizer.fit_transform(data)
km = KMeans(n_clusters=number_of_k, init='k-means++', max_iter=100, n_init=1,verbose=0)
km.fit(X_data)
I appreciate your help.
When using k-means, you want to set the random_state parameter in KMeans (see the documentation). Set this to either an int or a RandomState instance.
km = KMeans(n_clusters=number_of_k, init='k-means++',
max_iter=100, n_init=1, verbose=0, random_state=3425)
km.fit(X_data)
This is important because k-means is not a deterministic algorithm. It usually starts with some randomized initialization procedure, and this randomness means that different runs will start at different points. Seeding the pseudo-random number generator ensures that this randomness will always be the same for identical seeds.
I'm not sure about the spectral clustering example, though. From the documentation on the random_state parameter: "A pseudo random number generator used for the initialization of the lobpcg eigen vectors decomposition when eigen_solver == 'amg' and by the K-Means initialization." The OP's code doesn't seem to fall into either of those cases, though setting the parameter might be worth a shot.
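If you want to try it, a minimal sketch reusing the objects from the question (the seed value is arbitrary):

spectral = cluster.SpectralClustering(n_clusters=number_of_k, eigen_solver='arpack',
                                      affinity="nearest_neighbors",
                                      assign_labels="discretize",
                                      random_state=3425)
spectral.fit(X)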
As the others already noted, k-means is usually implemented with randomized initialization. It is intentional that you can get different results.
The algorithm is only a heuristic. It may yield suboptimal results. Running it multiple times gives you a better chance of finding a good result.
In my opinion, when the results vary highly from run to run, this indicates that the data just does not cluster well with k-means at all. Your results are not much better than random in such a case. If the data is really suited for k-means clustering, the results will be rather stable! If they vary, the clusters may not have the same size, or may not be well separated; and other algorithms may yield better results.
I had a similar issue, but in my case I wanted a data set from another distribution to be clustered the same way as the original data set. For example, all color images of the original data set were in cluster 0 and all gray images of the original data set were in cluster 1. For another data set, I want color images / gray images to end up in cluster 0 and cluster 1 as well.
Here is the code I stole from a Kaggler: in addition to setting random_state to a seed, you reuse the fitted KMeans model to cluster the other data set. This works reasonably well. However, I can't find an official scikit-learn document saying that.
# reference - https://www.kaggle.com/kmader/normalizing-brightfield-stained-and-fluorescence
import numpy as np
from sklearn.cluster import KMeans

seed = 42

def create_color_clusters(img_df, cluster_count=2, cluster_maker=None):
    if cluster_maker is None:
        cluster_maker = KMeans(cluster_count, random_state=seed)
        cluster_maker.fit(img_df[['Green', 'Red-Green', 'Red-Green-Sd']])
    img_df['cluster-id'] = np.argmin(cluster_maker.transform(img_df[['Green', 'Red-Green', 'Red-Green-Sd']]), -1)
    return img_df, cluster_maker

# Now K-Mean your images `img_df` to two clusters
img_df, cluster_maker = create_color_clusters(img_df, 2)
# Cluster another set of images using the same KMeans model
another_img_df, _ = create_color_clusters(another_img_df, 2, cluster_maker)
However, even setting random_state to an int seed cannot ensure that the same data will always be grouped in the same order across machines. The same data may be clustered as group 0 on one machine and as group 1 on another. But at least with the same KMeans model (cluster_maker in my code) we make sure that data from another distribution will be clustered in the same way as the original data set.
Typically when running algorithms with many local minima it's common to take a stochastic approach and run the algorithm many times with different initial states. This will give you multiple results, and the one with the lowest error is usually chosen to be the best result.
When I use K-Means I always run it several times and use the best result.
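As a sketch of that idea with scikit-learn's KMeans (reusing number_of_k and X_data from the question): raising n_init does the repeated restarts internally and keeps the run with the lowest inertia, or you can loop over seeds yourself.

from sklearn.cluster import KMeans

# Option 1: internal restarts; the run with the lowest inertia_ is kept automatically.
km = KMeans(n_clusters=number_of_k, n_init=50).fit(X_data)

# Option 2: explicit loop over seeds, keeping the model with the lowest error.
best = min(
    (KMeans(n_clusters=number_of_k, n_init=1, random_state=seed).fit(X_data)
     for seed in range(20)),
    key=lambda model: model.inertia_,
)
print(best.inertia_)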
I have some images with which I will test an object detection classifier.
I'll have the classifier output the coordinates for the rectangles where it believes the target objects are, but I wonder how the results are tested?
I'm guessing I should have a reference file of coordinates of true object positions against which I can compare the classifier's results.
What if the classifier does make a correct classification, just with the coordinates not exactly the same as the ones in the reference file?
How's this usually solved?
The answer depends on the method you are going to use. You have already proposed one of them, and in that case I would allow a fixed error tolerance for the detected coordinates: if the classifier's result lies within that tolerance of one of the examples in the reference file, it is counted as a properly classified example. Of course, this tolerance should be small enough that you do not "over-detect" on the data set.
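As an illustration of that first approach, here is a small sketch that counts a detection as correct if every coordinate of a predicted rectangle lies within a fixed tolerance of some ground-truth rectangle; the (x1, y1, x2, y2) box format and the tolerance value are assumptions, not a standard:

import numpy as np

def matches(pred_box, true_box, tol=10):
    # Boxes are (x1, y1, x2, y2); tol is the allowed per-coordinate error in pixels.
    return np.all(np.abs(np.asarray(pred_box) - np.asarray(true_box)) <= tol)

def detection_rate(pred_boxes, true_boxes, tol=10):
    # Fraction of ground-truth boxes matched by at least one prediction.
    hits = sum(any(matches(p, t, tol) for p in pred_boxes) for t in true_boxes)
    return hits / len(true_boxes)

# One detection within tolerance, one ground-truth box missed -> 0.5
print(detection_rate([(12, 9, 108, 212)], [(10, 10, 110, 210), (300, 300, 400, 400)]))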
Alternatively, I would suggest trying cross-validation as a technique to test the classifier. From the data set, it chooses some vectors (images) as a test set and the rest of them as a training set. Repeat a few times and average the errors, resulting in an estimated classifier error. That way you don't have to have a separate test data set, and you don't have to worry about the problem you stated.
I am using a classification tree from sklearn, and when I train the model twice using the same data and predict with the same test data, I get different results. I tried reproducing this on the smaller iris data set and it worked as expected. Here is some code:
from sklearn import tree
from sklearn.datasets import load_iris

iris = load_iris()

clf = tree.DecisionTreeClassifier()
clf.fit(iris.data, iris.target)
r1 = clf.predict_proba(iris.data)

clf.fit(iris.data, iris.target)
r2 = clf.predict_proba(iris.data)
r1 and r2 are the same for this small example, but when I run on my own much larger data set I get differing results. Is there a reason why this would occur?
EDIT: After looking into some documentation, I see that DecisionTreeClassifier has an input random_state which controls the starting point. By setting this value to a constant I get rid of the problem I was previously having. However, now I'm concerned that my model is not as optimal as it could be. What is the recommended method for doing this? Try some random values? Or are all results expected to be about the same?
The DecisionTreeClassifier works by repeatedly splitting the training data, based on the value of some feature. The Scikit-learn implementation lets you choose between a few splitting algorithms by providing a value to the splitter keyword argument.
"best" randomly chooses a feature and finds the 'best' possible split for it, according to some criterion (which you can also choose; see the methods signature and the criterion argument). It looks like the code does this N_feature times, so it's actually quite like a bootstrap.
"random" chooses the feature to consider at random, as above. However, it also then tests randomly-generated thresholds on that feature (random, subject to the constraint that it's between its minimum and maximum values). This may help avoid 'quantization' errors on the tree where the threshold is strongly influenced by the exact values in the training data.
Both of these randomization methods can improve the trees' performance. There are some relevant experimental results in Liu, Ting, and Fan's (2005) KDD paper.
If you absolutely must have an identical tree every time, then I'd re-use the same random_state. Otherwise, I'd expect the trees to end up more or less equivalent every time and, in the absence of a ton of held-out data, I'm not sure how you'd decide which random tree is best.
See also: Source code for the splitter
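For illustration, a minimal sketch of selecting the splitter (both values are valid options of DecisionTreeClassifier):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()

# 'best' searches for the best threshold among the considered features;
# 'random' also draws the candidate thresholds at random within each feature's range.
for splitter in ('best', 'random'):
    clf = DecisionTreeClassifier(splitter=splitter, random_state=0)
    clf.fit(iris.data, iris.target)
    print(splitter, clf.score(iris.data, iris.target))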
The answer provided by Matt Krause does not answer the question entirely correctly.
The reason for the observed behaviour in scikit-learn's DecisionTreeClassifier is explained in this issue on GitHub.
When using the default settings, all features are considered at each split. This is governed by the max_features parameter, which specifies how many features should be considered at each split. At each node, the classifier randomly samples max_features without replacement (!).
Thus, when using max_features=n_features, all features are considered at each split. However, the implementation will still sample them at random from the list of features (even though this means all features will be sampled, in this case). Thus, the order in which the features are considered is pseudo-random. If two possible splits are tied, the first one encountered will be used as the best split.
This is exactly the reason why your decision tree is yielding different results each time you call it: the order of features considered is randomized at each node, and when two possible splits are then tied, the split to use will depend on which one was considered first.
As has been said before, the seed used for the randomization can be specified using the random_state parameter.
The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data and max_features=n_features, if the improvement of the criterion is identical for several splits enumerated during the search of the best split. To obtain a deterministic behaviour during fitting, random_state has to be fixed.
Source: http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier#Notes
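A minimal sketch of that fix, reusing the iris example from the question:

import numpy as np
from sklearn import tree
from sklearn.datasets import load_iris

iris = load_iris()

# With random_state fixed, two fits on the same data give identical predictions.
clf = tree.DecisionTreeClassifier(random_state=0)
r1 = clf.fit(iris.data, iris.target).predict_proba(iris.data)
r2 = clf.fit(iris.data, iris.target).predict_proba(iris.data)
print(np.array_equal(r1, r2))   # True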
I don't know anything about sklearn but...
I guess DecisionTreeClassifier has some internal state, created by fit, which only gets updated/extended.
You should create a new one?