partial_fit Sklearn's MLPClassifier - python

I've been trying to use Sklearn's neural network MLPClassifier. I have a dataset of 1000 instances (with binary outputs), and I want to apply a basic neural net with one hidden layer to it.
The issue is that my data instances are not all available at the same time. At any point in time, I only have access to one data instance. I thought that the partial_fit method of MLPClassifier could be used for this, so I simulated the problem with an imaginary dataset of 1000 inputs, looped over the inputs one at a time, and called partial_fit on each instance. But when I run the code, the neural net learns nothing and the predicted output is all zeros.
I am clueless as to what might be causing the problem. Any thoughts are hugely appreciated.
from __future__ import division
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
#Creating an imaginary dataset
input, output = make_classification(1000, 30, n_informative=10, n_classes=2)
input= input / input.max(axis=0)
N = input.shape[0]
train_input = input[0:N/2,:]
train_target = output[0:N/2]
test_input= input[N/2:N,:]
test_target = output[N/2:N]
#Creating and training the Neural Net
clf = MLPClassifier(activation='tanh', algorithm='sgd', learning_rate='constant',
                    alpha=1e-4, hidden_layer_sizes=(15,), random_state=1, batch_size=1, verbose=True,
                    max_iter=1, warm_start=True)
classes=[0,1]
for j in xrange(0,100):
    for i in xrange(0,train_input.shape[0]):
        input_inst = [train_input[i,:]]
        input_inst = np.asarray(input_inst)
        target_inst= [train_target[i]]
        target_inst = np.asarray(target_inst)
        clf=clf.partial_fit(input_inst,target_inst,classes)
#Testing the Neural Net
y_pred = clf.predict(test_input)
print y_pred

Explanation of the problem
The problem is with self.label_binarizer_.fit(y) in line 895 in multilayer_perceptron.py.
Whenever you call clf.partial_fit(input_inst, target_inst, classes), you call self.label_binarizer_.fit(y), where y in this case has only one sample, corresponding to one class. Therefore, if the last sample is of class 0, your clf will classify everything as class 0.
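To see why that is a problem, here is a small sketch (mine, not part of the original answer) of what LabelBinarizer does when it is fitted on a single-class y, assuming the scikit-learn behaviour of the time: with only one known class, transform maps every sample to the negative label (0).

from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()

# What partial_fit effectively does when y holds a single sample:
lb.fit([1])
print(lb.classes_)        # [1]  -> only one class is known
print(lb.transform([1]))  # [[0]] -> with a single class, everything becomes the negative label

# What should happen instead: fit on the full list of classes
lb.fit([0, 1])
print(lb.transform([0]))  # [[0]]
print(lb.transform([1]))  # [[1]]

So the network only ever sees zero targets, which matches the all-zero predictions described in the question.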
Solution
As a temporary fix, you can edit multilayer_perceptron.py at line 895. It is found in a directory similar to python2.7/site-packages/sklearn/neural_network/.
At line 895, change,
self.label_binarizer_.fit(y)
to
if not incremental:
    self.label_binarizer_.fit(y)
else:
    self.label_binarizer_.fit(self.classes_)
That way, if you are using partial_fit, then self.label_binarizer_ fits on the classes rather than on the individual sample.
Further, the code you posted can be changed to the following to make it work:
from __future__ import division
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
#Creating an imaginary dataset
input, output = make_classification(1000, 30, n_informative=10, n_classes=2)
input= input / input.max(axis=0)
N = input.shape[0]
train_input = input[0:N/2,:]
train_target = output[0:N/2]
test_input= input[N/2:N,:]
test_target = output[N/2:N]
#Creating and training the Neural Net
# 1. Disable verbose (verbose is annoying with partial_fit)
clf = MLPClassifier(activation='tanh', algorithm='sgd', learning_rate='constant',
                    alpha=1e-4, hidden_layer_sizes=(15,), random_state=1, batch_size=1, verbose=False,
                    max_iter=1, warm_start=True)
# 2. Set what the classes are
clf.classes_ = [0,1]
for j in xrange(0,100):
    for i in xrange(0,train_input.shape[0]):
        input_inst = train_input[[i]]
        target_inst= train_target[[i]]
        clf=clf.partial_fit(input_inst,target_inst)
    # 3. Monitor progress
    print "Score on training set: %0.8f" % clf.score(train_input, train_target)
#Testing the Neural Net
y_pred = clf.predict(test_input)
print y_pred
# 4. Compute score on testing set
print clf.score(test_input, test_target)
There are 4 main changes in the code. This should give you a good prediction on both the training and the testing set!
Cheers.

Related

AttributeError: 'MLPClassifier' object has no attribute '_label_binarizer'

I'm trying to implement batch training using sklearn's MLPClassifier, leveraging the partial_fit() function, but I get the following error:
AttributeError: 'MLPClassifier' object has no attribute '_label_binarizer'.
I have consulted some issues related to this (partial_fit Sklearn's MLPClassifier). This is the piece of code that I have used to reproduce the error (from the attached reference):
from __future__ import division
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
#Creating an imaginary dataset
input, output = make_classification(1000, 30, n_informative=10, n_classes=2)
input= input / input.max(axis=0)
N = input.shape[0]
train_input = input[0:500,:]
train_target = output[0:500]
test_input= input[500:N,:]
test_target = output[500:N]
#Creating and training the Neural Net
# 1. Disable verbose (verbose is annoying with partial_fit)
clf = MLPClassifier(activation='tanh', learning_rate='constant',
                    alpha=1e-4, hidden_layer_sizes=(15,), random_state=1, batch_size=1, verbose=False,
                    max_iter=1, warm_start=False)
# 2. Set what the classes are
clf.classes_ = [0,1]
for j in range(0,100):
    for i in range(0,train_input.shape[0]):
        input_inst = train_input[[i]]
        target_inst= train_target[[i]]
        clf=clf.partial_fit(input_inst,target_inst)
    # 3. Monitor progress
    print("Score on training set: %0.8f" % clf.score(train_input, train_target))
#Testing the Neural Net
y_pred = clf.predict(test_input)
print(y_pred)
# 4. Compute score on testing set
print(clf.score(test_input, test_target))
I have also modified multilayer_perceptron.py code at line 895 to replace this, as mentioned here:
self.label_binarizer_.fit(y)
With this:
if not incremental:
    self.label_binarizer_.fit(y)
else:
    self.label_binarizer_.fit(self.classes_)
And it still doesn't work. Any help is really appreciated.
Thanks!
This would work:
from __future__ import division
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
#Creating an imaginary dataset
input, output = make_classification(1000, 30, n_informative=10, n_classes=2)
input= input / input.max(axis=0)
N = input.shape[0]
train_input = input[0:500,:]
train_target = output[0:500]
test_input= input[500:N,:]
test_target = output[500:N]
#Creating and training the Neural Net
# 1. Disable verbose (verbose is annoying with partial_fit)
clf = MLPClassifier(activation='tanh', learning_rate='constant',
                    alpha=1e-4, hidden_layer_sizes=(15,), random_state=1, batch_size=1, verbose=False,
                    max_iter=1, warm_start=False)
for j in range(0,100):
    for i in range(0,train_input.shape[0]):
        input_inst = train_input[[i]]
        target_inst= train_target[[i]]
        clf.partial_fit(input_inst,target_inst,[0,1])
    # 3. Monitor progress
    print("Score on training set: %0.8f" % clf.score(train_input, train_target))
#Testing the Neural Net
y_pred = clf.predict(test_input)
print(y_pred)
# 4. Compute score on testing set
print(clf.score(test_input, test_target))
This line was causing the error:
# 2. Set what the classes are
clf.classes_ = [0,1]
And you have to pass classes here:
clf.partial_fit(input_inst,target_inst,[0,1])
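As a side note, here is a minimal sketch of the same incremental-learning pattern (my own illustration, not from the answer above): the classes argument is only required on the first call to partial_fit, and passing the same list on every call, as done above, is also fine.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy data standing in for the streamed instances
X = np.random.rand(20, 3)
y = np.random.choice([0, 1], 20)

clf = MLPClassifier(hidden_layer_sizes=(5,))
clf.partial_fit(X[:1], y[:1], classes=[0, 1])   # first call: classes is required
for i in range(1, len(X)):
    clf.partial_fit(X[i:i + 1], y[i:i + 1])     # later calls: classes can be omitted
print(clf.predict(X))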

Looking for ways and experiences to improve Keras model accuracy

I am an electrical engineer and I am looking for a solution to calculate the DC current of a permanent magnet synchronous motor. So I decided to try ANN solutions with Keras and so on. Long story short, I'll show you a screenshot of some measured signals.
The first 5 signals are the measured signals. The last one is the DC current, which I will estimate. Here the value was recorded with the help of a current clamp. Okay, I started building a model in Python and tried some things that I assume will increase the accuracy of the model. But after all that, I am not getting very good results from the model, and my suspicion is that maybe I am choosing the wrong parameters or a model that is not ideal for this purpose.
Here is my code:
import numpy as np
from keras.layers import Dense, LSTM
from keras.models import Sequential
from keras.callbacks import EarlyStopping
import pandas as pd
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from matplotlib import pyplot as plt
import seaborn as sns
# Import input (x) and output (y) data, and assign these to X and Y
df = pd.read_csv('train_data.csv')
df = df[['rpm','iq','uq','udc','idc']]
X = df[df.columns[:-1]]
Y = df.idc
plt.figure()
sns.heatmap(df.corr(),annot=True)
plt.show()
# Split the data into input (x) training and testing data, and output (y) training and testing data,
# with training data being 80% of the data, and testing data being the remaining 20% of the data
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)#, shuffle=True)
# Scale both training and testing input data
X_train = preprocessing.maxabs_scale(X_train)
X_test = preprocessing.maxabs_scale(X_test)
model = Sequential()
model.add(Dense(4, input_shape=(4,)))
model.add(Dense(4, input_shape=(4,)))
model.add(Dense(1, input_shape=(4,)))
model.compile(optimizer="adam", loss="msle", metrics=['mean_squared_logarithmic_error','accuracy'])
# Pass several parameters to 'EarlyStopping' function and assign it to 'earlystopper'
earlystopper = EarlyStopping(monitor='val_loss', min_delta=0, patience=15, verbose=1, mode='auto')
model.summary()
history = model.fit(X_train, y_train, epochs = 2000, validation_split = 0.3, verbose = 2, callbacks = [earlystopper])
# Runs model (the one with the activation function, although this doesn't really matter as they perform the same)
# with its current weights on the training and testing data
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
# Calculates and prints r2 score of training and testing data
print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred)))
print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred)))
df = pd.read_csv('test_two_data.csv')
df = df[['rpm','iq','uq','udc','idc']]
X = df[df.columns[:-1]]
Y = df.idc
X_validate = preprocessing.maxabs_scale(X)
y_pred = model.predict(X_validate)
plt.plot(Y)
plt.plot(y_pred)
plt.show()
(weight_0,bias_0) = model.layers[0].get_weights()
(weight_1,bias_1) = model.layers[1].get_weights()
One limitation is that I can't use LSTM layers or other complex algorithms because I need to implement the trained model in a microcontroller on a motor application later.
I hope you can give me some pointers to make my model a little more accurate.
At the end, here is a figure where I show you the worst prediction performance. Orange is the prediction and blue is the measured current.
The training dataset was this one.
The correlation between the individual values can be found here. Since the values of id and ud have no correlation to idc, I decided to delete them.
The most important thing to keep in mind when trying to improve the accuracy of the model is to ALWAYS normalise the input data, which basically means rescaling the real-valued numeric attributes into the range 0 to 1. I am not able to understand the way you are providing the training data to the model; could you please explain that? It would help in understanding and identifying the scope for higher accuracy.
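For illustration, a minimal scaling sketch (my own, not the asker's pipeline; it uses sklearn's MinMaxScaler instead of the maxabs_scale call in the posted code, and the value ranges are made up):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy data standing in for the motor signals (rpm, iq, uq, udc)
X_train = np.random.rand(100, 4) * [3000, 50, 400, 48]
X_test = np.random.rand(20, 4) * [3000, 50, 400, 48]

scaler = MinMaxScaler()                          # rescales each feature to [0, 1]
X_train_scaled = scaler.fit_transform(X_train)   # fit on the training data only
X_test_scaled = scaler.transform(X_test)         # reuse the same scaling on the test data

The important detail is that the scaler is fitted on the training split and only applied to the test split, so no information leaks from the test set.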
Now, if we talk about parameters, I would suggest adding a tuning algorithm to find an optimized value for each parameter.
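A rough sketch of what such a search could look like (my own illustration with made-up placeholder data and search range; tools like keras-tuner or GridSearchCV automate this):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Placeholder data standing in for the scaled signals and the DC current
X = np.random.rand(200, 4)
y = np.random.rand(200)

best_units, best_val = None, np.inf
for units in [4, 8, 16, 32]:            # candidate hidden-layer widths
    model = Sequential([
        Dense(units, activation='relu', input_shape=(4,)),
        Dense(units, activation='relu'),
        Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')
    history = model.fit(X, y, epochs=50, validation_split=0.3, verbose=0)
    val = min(history.history['val_loss'])   # best validation loss for this width
    if val < best_val:
        best_units, best_val = units, val
print("best width:", best_units, "val_loss:", best_val)

The same loop can be extended to the learning rate, batch size or activation function.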
It is always good practice to include hidden layers, which can provide better feature extraction.

How can I correct sample_weight in sklearn.naive_bayes?

I'm implementing Naive Bayes by sklearn with imbalanced data.
My data has more than 16k records and 6 output categories.
I tried to fit the model with the sample_weight calculated by sklearn.utils.class_weight
The sample_weight I received looks something like:
sample_weight = [11.77540107 1.82284768 0.64688602 2.47138047 0.38577435 1.21389195]
import numpy as np
data_set = np.loadtxt("./data/_vector21.csv", delimiter=",")
inp_vec = data_set[:, 1:22]
out_vec = data_set[:, 22:]
#
# # Split dataset into training set and test set
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(inp_vec, out_vec, test_size=0.2) # 80% training and 20% test
#
# class weight
from keras.utils.np_utils import to_categorical
output_vec_categorical = to_categorical(y_train)
from sklearn.utils import class_weight
y_ints = [y.argmax() for y in output_vec_categorical]
c_w = class_weight.compute_class_weight('balanced', np.unique(y_ints), y_ints)
cw = {}
for i in set(y_ints):
    cw[i] = c_w[i]
# Create a Gaussian Classifier
from sklearn.naive_bayes import *
model = GaussianNB()
# Train the model using the training sets
print(c_w)
model.fit(X_train, y_train, c_w)
# Predict the response for test dataset
y_pred = model.predict(X_test)
# Import scikit-learn metrics module for accuracy calculation
from sklearn import metrics
# Model Accuracy, how often is the classifier correct?
print("\nClassification Report: \n", (metrics.classification_report(y_test, y_pred)))
print("\nAccuracy: %.3f%%" % (metrics.accuracy_score(y_test, y_pred)*100))
I got this message:
ValueError: Found input variables with inconsistent numbers of samples: [13212, 6]
Can anyone tell me what I did wrong and how I can fix it?
Thanks a lot.
The sample_weight and class_weight are two different things.
As their name suggests:
sample_weight is to be applied to individual samples (rows in your data). So the length of sample_weight must match the number of samples in your X.
class_weight is to make the classifier give more importance and attention to the classes. So the length of class_weight must match the number of classes in your targets.
You are calculating class_weight and not sample_weight by using sklearn.utils.class_weight, but then you try to pass it as the sample_weight. Hence the dimension mismatch error.
Please see the following questions for more understanding of how these two weights interact internally:
What is the difference between sample weight and class weight options in scikit learn?
https://stats.stackexchange.com/questions/244630/difference-between-sample-weight-and-class-weight-randomforest-classifier
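To make the shape difference concrete, a small sketch (my own, on toy labels):

import numpy as np
from sklearn.utils import class_weight

y = np.array([0, 0, 0, 1, 1, 2])   # 6 samples, 3 classes

cw = class_weight.compute_class_weight('balanced', classes=np.unique(y), y=y)
print(cw.shape)   # (3,) -> one weight per class

sw = class_weight.compute_sample_weight('balanced', y)
print(sw.shape)   # (6,) -> one weight per sample, which is what fit(..., sample_weight=...) expects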
This way I was able to calculate the weights to deal with class imbalance.
from sklearn.utils import class_weight
from sklearn import naive_bayes

sample = class_weight.compute_sample_weight('balanced', y_train)

# Classifier: Naive Bayes
naive = naive_bayes.MultinomialNB()
naive.fit(X_train, y_train, sample_weight=sample)
predictions_NB = naive.predict(X_test)

sklearn class_weight in random forest does the opposite of what I expect

I'm using the RandomForestClassifier of sklearn on data with highly imbalanced classes--lots of 0 and few of 1. I'm interested in the number of 1s in the prediction. Example (credit):
# Load libraries
from sklearn.ensemble import RandomForestClassifier
import numpy as np
from sklearn import datasets
# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Make class highly imbalanced by removing the first 46 observations
X = X[46:,:]
y = y[46:]
# Create target vector: 1 if original class is 0, otherwise 0
y = np.where((y == 0), 1, 0)
#split into training and testing
trainx = X[::2]
trainy = y[::2]
testx = X[1::2]
testy = y[1::2]
# Create random forest classifier object
clf = RandomForestClassifier()
# Train model
clf.fit(trainx, trainy)
print(clf.predict(testx).sum())
This returns 2, which is fine, except that on my real data the result is a bit lower than the true answer. I want to deal with this by using the class_weight parameter. However, when I do:
clf = RandomForestClassifier(class_weight="balanced")
# Train model
clf.fit(trainx, trainy)
print(clf.predict(testx).sum())
I get a result of 0. Same if I use class_weight={1:10}. If I use class_weight={1:.1} I get 2 again.
I get similar behavior on my real data: the higher the weight I give to class 1, the fewer 1s I get in the prediction.
This is the opposite of the behavior I expect (and the opposite of what the class_weight parameter does in svm). What's going on here? This question suggests sklearn is assigning the class labels by some kind of default, but that seems bizarre. Why wouldn't it use the class labels I gave it?

Fetching the loss values (MAE) per iteration using sklearn MLPRegressor

I want to check my loss values during the training process. How can I fetch the loss value (MAE) at each iteration? Thank you.
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error
dataset = open_dataset("forex.csv")
dataset_vector = [float(i[-1]) for i in dataset]
normalized_dataset_vector = normalize_vector(dataset_vector)
training_vector, validation_vector, testing_vector = split_dataset(training_size, validation_size, testing_size, normalized_dataset_vector)
training_features = get_features(training_vector)
training_fact = get_fact(training_vector)
validation_features = get_features(validation_vector)
validation_fact = get_fact(validation_vector)
model = MLPRegressor(activation=activation, alpha=alpha, hidden_layer_sizes=(neural_net_structure[1],), max_iter=number_of_iteration, random_state=seed)
model.fit(training_features, training_fact)
pred = model.predict(training_features)
err = mean_absolute_error(pred, validation_fact)
print(err)
There's no callbacks object like there is in Keras, so you'll have to loop over the fitting process to get the loss at each iteration. Something like the below will work for you:
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import mean_absolute_error

# create some toy data
X = np.random.random((100, 5))
y = np.random.choice([0, 1], 100)

max_iter = 500
mlp = MLPClassifier(hidden_layer_sizes=(10, 10, 10), max_iter=max_iter)
errors = []
for i in range(max_iter):
    mlp.partial_fit(X, y, classes=[0, 1])
    pred = mlp.predict(X)
    errors.append(mean_absolute_error(y, pred))
This throws an annoying DeprecationWarning at the moment, but that can be ignored. The only problem with using this method is that you have to manually keep track of whether or not your model has converged. Personally, I would suggest using Keras instead of sklearn if you want to work with neural networks.
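One way to do that manual convergence tracking (a sketch of my own; the tolerance and patience values are arbitrary assumptions, not part of the answer):

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import mean_absolute_error

X = np.random.random((100, 5))
y = np.random.choice([0, 1], 100)

mlp = MLPClassifier(hidden_layer_sizes=(10, 10, 10))
errors, tol, patience, stalled = [], 1e-4, 10, 0
for i in range(500):
    mlp.partial_fit(X, y, classes=[0, 1])
    errors.append(mean_absolute_error(y, mlp.predict(X)))
    # stop once the error has not improved by at least `tol` for `patience` consecutive rounds
    if len(errors) > 1 and errors[-2] - errors[-1] < tol:
        stalled += 1
        if stalled >= patience:
            break
    else:
        stalled = 0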
