When using this code, which I got from a tutorial, I get an error saying the model is not configured to compute accuracy and that I should pass accuracy. The weird part is that I am already passing metrics = ['accuracy'].
I've searched a lot, and all the code I have seen works fine except mine.
Evaluating the ANN
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from tensorflow.python.keras.models import Sequential  # Used to initialize the NN
from tensorflow.python.keras.layers import Dense  # Used to create the layers in the ANN

def build_classifier():
    classifier = Sequential()
    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
    classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return classifier

# Needs to be revised from the evaluating video in the course if needed
classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, epochs = 100)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)
I expect the output to be the accuracies vector; instead I got:
ValueError: The model is not configured to compute accuracy. You should pass metrics=["accuracy"] to the model.compile() method.
Changing the parameter from metrics=['accuracy'] to metrics=['acc'] worked for me.
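For reference, the one-line change (a minimal sketch; with some Keras/TensorFlow version combinations the metric is registered under the short name 'acc' rather than 'accuracy'):

# Only the metric name changes; everything else in build_classifier stays the same.
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['acc'])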
Regards,
Joseph
After running hyperparameter tuning with GridSearchCV using the code below:
## Tuning the ANN
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense

def build_regressor(hidden_nodes, hidden_layers, optimizer):
    regressor = Sequential()
    regressor.add(Dense(units = hidden_nodes, kernel_initializer = 'uniform', activation = 'relu', input_dim = 7))
    for _ in range(hidden_layers):
        regressor.add(Dense(hidden_nodes, kernel_initializer = 'uniform', activation = 'relu'))
    regressor.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'linear'))
    regressor.compile(optimizer = optimizer, loss = 'mse', metrics = ['mse'])
    return regressor

regressor = KerasRegressor(build_fn = build_regressor, epochs = 100)

# Create a dictionary of tuning parameters
parameters = {'hidden_nodes': list(range(2, 101)), 'hidden_layers': [1, 2, 3], 'batch_size': [25, 32], 'optimizer': ['adam', 'nadam', 'RMSprop', 'adamax']}
grid_search = GridSearchCV(estimator = regressor, param_grid = parameters, scoring = 'neg_mean_squared_error', cv = 10, n_jobs = 4)
grid_search = grid_search.fit(X_train, y_train)
best_parameters = grid_search.best_params_
best_score = grid_search.best_score_
best_model = grid_search.best_estimator_
Do we have any way to extract the weights of the best model from GridSearchCV?
Thank you so much in advance,
As you want the model weights saved in a CSV file, you can do the following. Note that best_model is the scikit-learn wrapper; the underlying Keras model lives in its .model attribute:
import numpy as np

# Kernel of the first Dense layer; get_weights() returns [kernel, bias]
weight = best_model.model.layers[0].get_weights()[0]
np.savetxt('weight.csv', weight, fmt = '%s', delimiter = ',')
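If you need every layer's weights rather than just the first, a sketch along the same lines (assuming all the layers are Dense, so get_weights() returns a kernel and a bias for each):

# Hypothetical extension: write each Dense layer's kernel and bias to its own CSV.
for i, layer in enumerate(best_model.model.layers):
    kernel, bias = layer.get_weights()
    np.savetxt('kernel_layer%d.csv' % i, kernel, delimiter = ',')
    np.savetxt('bias_layer%d.csv' % i, bias, delimiter = ',')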
I am not able to get my code to run when using n_jobs = -1 (on the last line).
I get the same message:
"BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable."
The code works with n_jobs = 1, but I need all processors, as the code will otherwise take very long to execute.
I have tried using if __name__ == '__main__':, but I am not sure how to use it and cannot get the code to run.
I have tried for ages, but to no avail. Any help is highly appreciated. Here is the relevant code:
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense

def build_classifier():
    classifier = Sequential()
    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
    classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
    classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
    classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return classifier

classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, epochs = 100)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)
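For reference, a minimal sketch of how the if __name__ == '__main__': guard mentioned above is usually structured (my sketch, not a confirmed fix from the thread; on Windows, joblib's worker processes re-import the script, so anything that spawns processes must sit under the guard, and this only applies when the code runs as a .py script rather than in a notebook):

# The imports and the build_classifier definition stay at module level; only
# the code that spawns worker processes moves under the guard, so the
# re-imported copies of this script in each worker do not re-run it.
if __name__ == '__main__':
    classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, epochs = 100)
    accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)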
Hi, so I'm learning about k-fold cross-validation. This first snippet of code builds a simple ANN:
def buildModel():
    # Fitting classifier to the Training set
    # Create your classifier here
    model = Sequential()
    model.add(Dense(units = 6, input_dim = X.shape[1], activation = 'relu'))
    model.add(Dense(units = 6, activation = 'relu'))
    model.add(Dense(units = 1, activation = 'sigmoid'))
    model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
    return model
I then used cross_val_score from sklearn to run the ANN.
Keras is also running on my GPU.
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
model = KerasClassifier(build_fn = buildModel, batch_size = 10, epochs = 100)
accuracies = cross_val_score(estimator = model, X = X_train, y = y_train, cv = 10, n_jobs = -1)
But if I set n_jobs = -1 to try and use all cores, I get an error (P.S. I have 11 features):
Blas GEMM launch failed : a.shape=(10, 11), b.shape=(11, 6), m=10, n=6, k=11
[[node dense_1/MatMul (defined at C:\Users\Brandon Cardillo\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\framework\ops.py:1751) ]]
[Op:__inference_keras_scratch_graph_1030]
Function call stack:
keras_scratch_graph
P.S. I am also running in a Jupyter notebook.
Any help is very much appreciated.
Thank you.
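One possible direction (my assumption, not an answer from the thread): with n_jobs = -1, every worker process initializes TensorFlow on the same GPU, and by default each tries to claim all GPU memory, which can surface as a failed BLAS GEMM launch. A sketch of the usual mitigation, to run before any model is built in each process; falling back to n_jobs = 1 also sidesteps the contention:

import tensorflow as tf

# Let each process allocate GPU memory on demand instead of grabbing it all
# up front (TF 2.x API; must run before the GPU is first used).
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)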
A neural network trained on the iris dataset using [4, 4] hidden layers, created separately in TensorFlow and Keras, gives different results.
While the TensorFlow model gives 96.6% accuracy on the test set, the Keras model gives only around 50%. The various hyperparameters like learning rate, optimizer, and mini-batch size were the same in both cases.
Keras model
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import Adam
import keras

model = Sequential()
model.add(Dense(units = 4, activation = 'relu', input_dim = 4))
model.add(Dropout(0.25))
model.add(Dense(units = 4, activation = 'relu'))
model.add(Dropout(0.25))
model.add(Dense(units = 3, activation = 'softmax'))
adam = Adam(epsilon = 10**(-6), lr = 0.01)
model.compile(optimizer = 'adagrad', loss = 'categorical_crossentropy', metrics = ['accuracy'])
one_hot_labels = keras.utils.to_categorical(y_train, num_classes = 3)
model.fit(X_train, one_hot_labels, epochs = 50, batch_size = 40)
TensorFlow model
feature_columns = [tf.feature_column.numeric_column(key = name, shape = (1), dtype = tf.float32)
                   for name in list(X_train.columns)]
classifier = tf.estimator.DNNClassifier(hidden_units = [4, 4],
                                        feature_columns = feature_columns,
                                        n_classes = 3,
                                        dropout = 0.25,
                                        model_dir = './DNN_model')
train_input_fn = tf.estimator.inputs.pandas_input_fn(x = X_train,
                                                     y = y_train,
                                                     batch_size = 40,
                                                     num_epochs = 50,
                                                     shuffle = False)
classifier.train(input_fn = train_input_fn, steps = None)
For the Keras model, I did try changing the learning rate, increasing the number of epochs, using different optimizers, etc. Even so, the accuracy remained poor. Clearly, the two models are doing different things, but on the surface they seem identical to me in all the key aspects.
Any help is appreciated.
They have the same architecture, and that's all.
The difference in performance comes from one or more of these factors:
You use Dropout, so the two networks behave differently from the very start of training (check how Dropout works);
Weight initialization: which method are you using in Keras and which in TensorFlow?
Check all the parameters of the optimizer (note, for instance, that the Keras snippet builds an Adam optimizer but then compiles with 'adagrad').
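To make the comparison concrete, a sketch of pinning those factors down on the Keras side (illustrative only; the glorot_uniform default for DNNClassifier and the seed value are my assumptions, not values from the question):

from keras.models import Sequential
from keras.layers import Dense
from keras.initializers import glorot_uniform
from keras.optimizers import Adam

model = Sequential()
# Explicit initializer: my assumption is that tf.estimator's DNNClassifier
# defaults to glorot_uniform, so pin Keras to the same scheme for comparison.
model.add(Dense(units = 4, activation = 'relu', input_dim = 4,
                kernel_initializer = glorot_uniform(seed = 9)))
model.add(Dense(units = 4, activation = 'relu',
                kernel_initializer = glorot_uniform(seed = 9)))
model.add(Dense(units = 3, activation = 'softmax',
                kernel_initializer = glorot_uniform(seed = 9)))
# Dropout omitted to remove one source of run-to-run variation while debugging;
# actually pass the Adam object the question builds but never uses.
model.compile(optimizer = Adam(lr = 0.01, epsilon = 1e-6),
              loss = 'categorical_crossentropy',
              metrics = ['accuracy'])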
How can I tune the network automatically instead of adjusting the number of hidden layers and epochs manually every time? (Using Keras)
from keras.models import Sequential
from keras.layers import Dense
import numpy

seed = 9
numpy.random.seed(seed)

from pandas import read_csv
filename = 'BBCN.csv'
dataframe = read_csv(filename)
array = dataframe.values
x = array[:, 0:11]
y = array[:, 11]

model = Sequential()
model.add(Dense(11, input_dim = 11, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dense(8, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dense(8, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dense(1, kernel_initializer = 'uniform', activation = 'sigmoid'))
model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(x, y, epochs = 50, batch_size = 10)
scores = model.evaluate(x, y)
print("%s, %.2f%%" % (model.metrics_names[1], scores[1]*100))
The result I need is to show the training process and the accuracy percentage.
Thanks a lot!
You could start with a simple loop over some hyperparameters, train with each setting for some epochs, and then compare the results.
You can also look into grid search, which is a more systematic approach. Basically, you set up a function that creates a model, and use it with a set of hyperparameters that you want to try out and an array of values. For more details and boilerplate code I recommend this tutorial.
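A minimal sketch of that grid-search setup (my illustration, not the tutorial's code; the grid values are arbitrary examples), reusing the x and y arrays loaded above and the same scikit-learn wrapper as in the earlier snippets:

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def create_model(hidden_layers = 1, hidden_nodes = 8):
    model = Sequential()
    model.add(Dense(hidden_nodes, input_dim = 11, kernel_initializer = 'uniform', activation = 'relu'))
    for _ in range(hidden_layers):
        model.add(Dense(hidden_nodes, kernel_initializer = 'uniform', activation = 'relu'))
    model.add(Dense(1, kernel_initializer = 'uniform', activation = 'sigmoid'))
    model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
    return model

model = KerasClassifier(build_fn = create_model, verbose = 0)
# Arbitrary example grid; epochs and batch_size are tuned alongside the
# architecture parameters consumed by create_model.
param_grid = {'hidden_layers': [1, 2], 'hidden_nodes': [8, 16],
              'epochs': [50, 100], 'batch_size': [10, 20]}
grid = GridSearchCV(estimator = model, param_grid = param_grid, cv = 3)
grid_result = grid.fit(x, y)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))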