Say I have some sklearn training data:
features, labels = assign_dataSets() #assignment operation
Here features is a 2-D array, whereas labels is a 1-D array consisting of the values [0, 1].
Splitting the features by class label:
f1x = [features[i][0] for i in range(0, len(features)) if labels[i]==0]
f2x = [features[i][0] for i in range(0, len(features)) if labels[i]==1]
f1y = [features[i][1] for i in range(0, len(features)) if labels[i]==0]
f2y = [features[i][1] for i in range(0, len(features)) if labels[i]==1]
Now I plot the said data:
import matplotlib.pyplot as plt
plt.scatter(f1x,f1y,color='b')
plt.scatter(f2x,f2y,color='y')
plt.show()
Now I want to run the fitting operation with a classifier, for example SVC.
from sklearn.svm import SVC
clf = SVC()
clf.fit(features, labels)
Now my question: since training support vector machines is really slow, is there a way to monitor the decision boundary of the classifier in real time, i.e. while the fitting operation is occurring? I know that I can plot the decision boundary after the fit has finished, but I want the plotting to happen in real time, perhaps with threading and by running predictions on an array of points defined by a linspace. Does the fit function even allow such operations, or do I need to go for some other library?
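For reference, this is roughly the static, after-the-fact version I already know about: a minimal sketch (variable names taken from the code above) that predicts over a grid built with np.linspace and shades the two decision regions:
import numpy as np
import matplotlib.pyplot as plt
# build a grid of points covering the feature space
xs = np.linspace(min(f1x + f2x), max(f1x + f2x), 200)
ys = np.linspace(min(f1y + f2y), max(f1y + f2y), 200)
xx, yy = np.meshgrid(xs, ys)
# classify every grid point with the already-fitted classifier
grid_preds = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
# shade the decision regions and overlay the training points
plt.contourf(xx, yy, grid_preds, alpha=0.3)
plt.scatter(f1x, f1y, color='b')
plt.scatter(f2x, f2y, color='y')
plt.show()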
Just so you know, I am new to machine-learning.
scikit-learn has this feature, but from my understanding it is limited to a few classifiers (e.g. GradientBoostingClassifier, MLPClassifier). To turn it on, you need to set verbose=True. For example:
clf = GradientBoostingClassifier(verbose=True)
I tried it with SVC and it didn't work as expected (probably for the reason sascha mentioned in the comment section). Here is a different variation of your question on Stack Overflow.
With regard to your second question, if you switch to TensorFlow (another machine learning library), you can use TensorBoard to monitor a few metrics (e.g. error decay) in real time.
However, to the best of my knowledge the SVM implementation is still experimental in v1.5. TensorFlow is really good when working with neural-network-based models.
If you decide to use a DNN for classification with TensorFlow, here is a discussion about the implementation on Stack Overflow: No easy way to add Tensorboard output to pre-defined estimator functions DnnClassifier?
Useful References:
Tensorflow SVM (only linear support for now - v1.5): https://www.tensorflow.org/api_docs/python/tf/contrib/learn/SVM
Tensorflow Kernel Methods: https://www.tensorflow.org/versions/master/tutorials/kernel_methods
Tensorflow Tensorboard: https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard
Tensorflow DNNClassifier Estimator: https://www.tensorflow.org/api_docs/python/tf/estimator/DNNClassifier
Usually people use scikit-learn to train a model this way:
from sklearn.ensemble import GradientBoostingClassifier as gbc
clf = gbc()
clf.fit(X_train, y_train)
predicted = clf.predict(X_test)
It works fine as long as the user's memory is large enough to accommodate the entire dataset. My dilemma is exactly this: the dataset is too big for my memory. My current workaround is to enlarge my machine's virtual memory, which has already made the system extremely slow, so I started to wonder whether it is possible to feed the fit() method with samples in batches like this (the answer is no, please keep reading rather than reminding me of that):
clf = gbc()
for i in range(X_train.shape[0]):
    clf.fit(X_train[i], y_train[i])
so that I can read the training set from the hard drive only when needed. I read sklearn's manual and it seems that fit() does not support this:
Calling fit() more than once will overwrite what was learned by any previous fit()
So, is this possible?
This does not work in scikit-learn, as explained in the comment section as well as in the documentation. However, you can use river (a Python package for online/streaming machine learning). This package should be well suited to your problem.
Below is an example of training a LinearRegression using river.
from river import datasets
from river import linear_model
from river import metrics
from river import preprocessing
dataset = datasets.TrumpApproval()
model = (
    preprocessing.StandardScaler() |
    linear_model.LinearRegression(intercept_lr=.1)
)
metric = metrics.MAE()
for x, y in dataset:
    y_pred = model.predict_one(x)
    # Update the running metric with the prediction and the ground-truth value
    metric.update(y, y_pred)
    # Train the model with the new sample
    model.learn_one(x, y)
It is not clear from your question which steps of the machine learning workflow are slow for you. As also noted in the riverml manual and this post, in sklearn there is an option to do a partial fit. You will be restricted in terms of the models you can use for this kind of incremental learning.
So, using your example, let's say we use a stochastic gradient descent classifier:
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

X, y = make_classification(100000)
clf = SGDClassifier(loss='log')
all_classes = list(set(y))

# feed the data in 100 batches of 1000 samples each
for ix in np.split(np.arange(0, X.shape[0]), 100):
    clf.partial_fit(X[ix, :], y[ix], classes=all_classes)
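If the data truly does not fit in memory, the same partial_fit loop can pull batches from disk instead of slicing an in-memory array. A rough sketch, assuming the training set has been saved beforehand as hypothetical batch files X_batch_0.npy ... X_batch_99.npy (the file names are placeholders):
import numpy as np
from sklearn.linear_model import SGDClassifier

n_batches = 100
all_classes = [0, 1]  # the full set of class labels must be known up front

clf = SGDClassifier(loss='log')
for b in range(n_batches):
    # only one batch is held in memory at a time
    X_batch = np.load('X_batch_%d.npy' % b)
    y_batch = np.load('y_batch_%d.npy' % b)
    clf.partial_fit(X_batch, y_batch, classes=all_classes)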
After reading the section "6. Strategies to scale computationally: bigger data" of the official manual mentioned by @StupidWolf in this post, I realise that there is more to this question than meets the eye.
The real difficulty lies in the design of many of the models.
Take Random Forest as an example: one of the most important techniques used to improve its performance over a single Decision Tree is bagging, which means the algorithm has to draw random samples from the entire dataset to construct several weak learners as the basis of the forest. Feeding the model one sample after another simply does not fit this design.
Although scikit-learn could, in principle, define an interface for end users to implement, so that scikit-learn would request a random sample through that interface and the user's implementation would decide how to return the needed data by scanning the dataset on the hard drive, this is far more complicated than I initially thought, and the performance gain might not be significant given how often an IO-heavy "full table scan" (in database terms) would be needed.
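To make the point about bagging concrete, here is a rough sketch (not scikit-learn's actual implementation) of the bootstrap sampling each tree needs; it assumes random access to the full index range of the dataset, which is exactly what one-sample-at-a-time feeding cannot provide:
import numpy as np

n_samples = 1_000_000   # size of the full training set
n_trees = 100

rng = np.random.default_rng(0)
for tree in range(n_trees):
    # bootstrap: draw n_samples indices with replacement from the whole dataset,
    # so any row may be requested at any moment
    bootstrap_idx = rng.choice(n_samples, size=n_samples, replace=True)
    # X[bootstrap_idx], y[bootstrap_idx] would then be used to grow one tree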
I am using the scikit-learn implementation of Gaussian Process Regression here, and I want to fit single points iteratively instead of fitting a whole set of points at once, while the resulting alpha coefficients should remain the same, e.g.
gpr2 = GaussianProcessRegressor()
for i in range(x.shape[0]):
    gpr2.fit(x[i], y[i])
should be the same as
gpr = GaussianProcessRegressor().fit(x, y)
But when accessing gpr2.alpha_ and gpr.alpha_, they are not the same. Why is that?
Indeed, I am working on a project where new data points keep arriving. I don't want to append them to the x, y arrays and fit on the whole dataset again, as that is very time-intensive. If x has size n, I end up with:
n + (n-1) + (n-2) + ... + 1 ∈ O(n^2) fittings
Considering that the fitting itself is quadratic (correct me if I'm wrong), the overall runtime complexity should be in O(n^3). It would be more optimal to do a single fitting on n points:
1 + 1 + ... + 1 = n ∈ O(n)
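Spelling the arithmetic out, under my (possibly wrong) assumption that a single fit on $k$ points costs $O(k^2)$:
$$n + (n-1) + \dots + 1 = \frac{n(n+1)}{2} \in O(n^2) \ \text{points processed}, \qquad \sum_{k=1}^{n} O(k^2) \subseteq O(n^3) \ \text{total runtime},$$
whereas a single fit on all $n$ points would cost only $O(n^2)$.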
What you refer to is actually called online learning or incremental learning; it is in itself a huge sub-field in machine learning, and is not available out-of-the-box for all scikit-learn models. Quoting from the relevant documentation:
Although not all algorithms can learn incrementally (i.e. without seeing all the instances at once), all estimators implementing the partial_fit API are candidates. Actually, the ability to learn incrementally from a mini-batch of instances (sometimes called “online learning”) is key to out-of-core learning as it guarantees that at any given time there will be only a small amount of instances in the main memory.
Following this excerpt in the linked document above, there is a complete list of all scikit-learn models currently supporting incremental learning, from where you can see that GaussianProcessRegressor is not one of them.
Although sklearn.gaussian_process.GaussianProcessRegressor does not directly implement incremental learning, it is not necessary to fully retrain your model from scratch.
To fully understand how this works, you should understand the GPR fundamentals. The key idea is that training a GPR model mainly consists of optimising the kernel parameters to minimise some objective function (the log-marginal likelihood by default). When using the same kernel on similar data these parameters can be reused. Since the optimiser has a stopping condition based on convergence, reoptimisation can be sped up by initialising the parameters with pre-trained values (a so-called warm-start).
Below is an example based on the one in the sklearn docs.
from time import time
from sklearn.datasets import make_friedman2
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
X, y = make_friedman2(n_samples=1000, noise=0.1, random_state=0)
kernel = DotProduct() + WhiteKernel()
start = time()
gpr = GaussianProcessRegressor(kernel=kernel,
                               random_state=0).fit(X, y)
print(f'Time: {time()-start:.3f}')
# Time: 4.851
print(gpr.score(X, y))
# 0.3530096529277589
# the kernel is copied into the regressor, so we need to
# retrieve the trained parameters
kernel.set_params(**(gpr.kernel_.get_params()))
# use slightly different data
X, y = make_friedman2(n_samples=1000, noise=0.1, random_state=1)
# note that this fit starts from the already-optimised kernel parameters (warm start)
start = time()
gpr2 = GaussianProcessRegressor(kernel=kernel,
                                random_state=0).fit(X, y)
print(f'Time: {time()-start:.3f}')
# Time: 1.661
print(gpr2.score(X, y))
# 0.38599549162834046
You can see that retraining takes significantly less time than training from scratch. Although this is not fully incremental, it can help speed up training in a setting with streaming data.
I used eli5 to apply the permutation procedure for feature importance. In the documentation there is some explanation and a small example, but it is not clear.
I am using a sklearn SVC model for a classification problem.
My question is: are these weights the change (decrease/increase) in accuracy when the specific feature is shuffled, OR are they the SVC weights of these features?
In this Medium article, the author states that these values show the reduction in model performance caused by reshuffling that feature, but I am not sure that is indeed the case.
Small example:
from sklearn import datasets
import eli5
from eli5.sklearn import PermutationImportance
from sklearn.svm import SVC, SVR
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target
clf = SVC(kernel='linear')
perms = PermutationImportance(clf, n_iter=1000, cv=10, scoring='accuracy').fit(X, y)
print(perms.feature_importances_)
print(perms.feature_importances_std_)
# Output:
# [0.38117333 0.16214   ]
# [0.1349115  0.11182505]
eli5.show_weights(perms)
I did some deep research.
After going through the source code, here is what I believe happens in the case where cv is used and is not prefit or None. I use a K-Fold scheme for my application, and I also use an SVC model, so the score is the accuracy in this case.
Looking at the fit method of the PermutationImportance object, the _cv_scores_importances are computed (https://github.com/TeamHG-Memex/eli5/blob/master/eli5/sklearn/permutation_importance.py#L202). The specified cross-validation scheme is used, and the base_scores and feature_importances are returned using the test data (function: _get_score_importances inside _cv_scores_importances).
Looking at the get_score_importances function (https://github.com/TeamHG-Memex/eli5/blob/master/eli5/permutation_importance.py#L55), we can see that base_score is the score on the non-shuffled data, and the feature_importances (called scores_decreases there) are defined as the non-shuffled score minus the shuffled score (see https://github.com/TeamHG-Memex/eli5/blob/master/eli5/permutation_importance.py#L93).
Finally, the errors (feature_importances_std_) are the standard deviation of the above feature_importances (https://github.com/TeamHG-Memex/eli5/blob/master/eli5/sklearn/permutation_importance.py#L209), and feature_importances_ is the mean of the above feature_importances (non-shuffled score minus shuffled score).
As a much shorter answer to your original question: regardless of the setting for the cv parameter, eli5 will calculate the average decrease in the scorer you provide. Because you're using the sklearn wrapper, the scorer will come from scikit-learn: in your case, accuracy. As a general word on the package, some of these details are particularly difficult to figure out without digging deeper into the source code; it might be worth submitting a pull request to make the documentation more detailed where possible.
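To make the mechanism concrete, here is a rough hand-rolled sketch of the same idea. This is not eli5's actual code, just the "non-shuffled score minus shuffled score" logic on a single train/test split (so it ignores the cv part):
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel='linear').fit(X_train, y_train)
base_score = clf.score(X_test, y_test)            # accuracy on the non-shuffled data

rng = np.random.default_rng(0)
n_iter = 100
importances, stds = [], []
for col in range(X_test.shape[1]):
    decreases = []
    for _ in range(n_iter):
        X_shuffled = X_test.copy()
        rng.shuffle(X_shuffled[:, col])           # permute one feature column
        decreases.append(base_score - clf.score(X_shuffled, y_test))
    importances.append(np.mean(decreases))        # analogue of feature_importances_
    stds.append(np.std(decreases))                # analogue of feature_importances_std_

print(importances, stds)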
I'm trying to figure out how to configure a neural network using Neupy. The problem is that I can't seem to find many options for a GRNN, only the sigma value, as described here:
There is a parameter, y_i, that I want to be able to adjust, but there doesn't seem to be a way to do it in the package. I'm reading through the code, but I'm not a developer and have trouble following all the steps; maybe a more experienced set of eyes can find a way to tweak that parameter.
Thanks
From the link that you've provided it looks like y_i is the target variable. In your case it's your target training variable. In the neupy code it's used during the prediction. https://github.com/itdxer/neupy/blob/master/neupy/algorithms/rbfn/grnn.py#L140
GRNN uses lazy learning, which means that it doesn't really train; it just re-uses all of your training data for each prediction. The self.target_train variable is just a copy that you set during the training phase. You can update this value before making a prediction:
from neupy import algorithms
grnn = algorithms.GRNN(std=0.1)
grnn.train(x_train, y_train)
grnn.target_train = modify_grnn_algorithm(grnn.target_train)
predicted = grnn.predict(x_test)
Or you can use the GRNN code for the prediction directly, instead of the default predict function:
import numpy as np
from neupy import algorithms
from neupy.algorithms.rbfn.utils import pdf_between_data
grnn = algorithms.GRNN(std=0.1)
grnn.train(x_train, y_train)
# In this part of the code you can make any modifications you want
ratios = pdf_between_data(grnn.input_train, x_test, grnn.std)
predicted = (np.dot(grnn.target_train.T, ratios) / ratios.sum(axis=0)).T
I'm trying to get the top 10 most informative (best) features for an SVM classifier with an RBF kernel. As I'm a beginner in programming, I tried some code that I found online. Unfortunately, none of it works. I always get the error: ValueError: coef_ is only available when using a linear kernel.
This is the last code I tested:
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn import model_selection

scaler = StandardScaler(with_mean=False)
enc = LabelEncoder()
y = enc.fit_transform(labels)
vec = DictVectorizer()
feat_sel = SelectKBest(mutual_info_classif, k=200)

# Pipeline for the SVM classifier
clf = SVC()
pipe = Pipeline([('vectorizer', vec),
                 ('scaler', StandardScaler(with_mean=False)),
                 ('mutual_info', feat_sel),
                 ('svc', clf)])

y_pred = model_selection.cross_val_predict(pipe, instances, y, cv=10)

# Now fit the pipeline using your data
pipe.fit(instances, y)

def show_most_informative_features(vec, clf, n=10):
    feature_names = vec.get_feature_names()
    coefs_with_fns = sorted(zip(clf.coef_[0], feature_names))
    top = zip(coefs_with_fns[:n], coefs_with_fns[:-(n + 1):-1])
    for (coef_1, fn_1), (coef_2, fn_2) in top:
        return ('\t%.4f\t%-15s\t\t%.4f\t%-15s' % (coef_1, fn_1, coef_2, fn_2))

print(show_most_informative_features(vec, clf))
Does someone know a way to get the top 10 features from a classifier with an RBF kernel? Or another way to visualize the best features?
I am not sure if what you are asking is possible for an RBF kernel in a similar way to the example you show (which, as your error suggests, only works with a linear kernel).
However, you could always try feature ablation; remove each feature one by one and test how that affects performance. The 10 features that most affect performance are your "top 10 features".
Obviously, this is only possible if (1) you have relatively few features and/or (2) training and testing your model does not take a long time.
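A minimal sketch of what I mean, using cross-validated accuracy as the performance measure (the iris data is just a placeholder for your own features and labels):
import numpy as np
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)   # placeholder dataset

full_score = cross_val_score(SVC(kernel='rbf'), X, y, cv=5).mean()
for col in range(X.shape[1]):
    X_ablated = np.delete(X, col, axis=1)    # drop one feature
    score = cross_val_score(SVC(kernel='rbf'), X_ablated, y, cv=5).mean()
    print('feature %d: score drop = %.4f' % (col, full_score - score))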
There are at least two options available for feature selection for an SVM classifier with an RBF kernel within the scikit-learn Python module.
For univariate feature selection, you can use SelectKBest.
For classification tasks, one of the following scoring functions can be specified within SelectKBest (a short sketch follows the list):
chi-squared, for non-negative features
ANOVA F-value, assuming linear dependency
Mutual information, for general dependency (requires more samples)
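For example, a minimal sketch of pulling out the top 10 features with SelectKBest and mutual information; it assumes you already have a plain feature matrix X, labels y, and a list feature_names (these are placeholders for your own data):
from sklearn.feature_selection import SelectKBest, mutual_info_classif

selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
top_features = [name for name, keep in zip(feature_names, selector.get_support()) if keep]
print(top_features)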
For sparse data sets, the SequentialFeatureSelector can progressively add or remove features (forward or backward selection) based on the cross-validation score for the classifier.
While SFS does not require the classifier model to expose a "coef_" or "feature_importances_" attribute, it may be slow as it will require running m*k fits (i.e., adding/removing m features with k-fold cross-validation).
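A similar sketch with SequentialFeatureSelector wrapped around an RBF-kernel SVC (again assuming X, y and feature_names exist; the parameters are illustrative):
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

sfs = SequentialFeatureSelector(SVC(kernel='rbf'), n_features_to_select=10,
                                direction='forward', cv=5)
sfs.fit(X, y)
top_features = [name for name, keep in zip(feature_names, sfs.get_support()) if keep]
print(top_features)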
A recursive feature elimination (RFE) approach has also been proposed by Liu et al. in "Feature selection for support vector machines with RBF kernel".