How to fine-tune the network automatically in Keras?

How can I tune the network automatically, instead of adjusting the number of hidden layers and epochs manually every time? (Using Keras)
from keras.models import Sequential
from keras.layers import Dense
import numpy
seed = 9
numpy.random.seed(seed)
from pandas import read_csv
filename = 'BBCN.csv'
dataframe = read_csv(filename)
array = dataframe.values
x = array[:, 0:11]
y = array[:, 11]
model = Sequential()
model.add(Dense(11, input_dim=11, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x, y, epochs=50, batch_size=10)
scores = model.evaluate(x,y)
print("%s, %.2f%%" % (model.metrics_names[1], scores[1]*100))
The result I need is to show the training process and the accuracy as a percentage.
Thanks a lot!

You could start with a simple loop over some hyperparameters, training with each setting for a few epochs and comparing the results.
You can also look into grid search, which is a more systematic approach: you set up a function that creates a model, then run it against the sets of hyperparameter values you want to try. For more details and boilerplate code I recommend this tutorial. A minimal sketch of the grid-search approach is below.
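This sketch follows the common KerasClassifier + GridSearchCV pattern, assuming the same 11-feature binary-classification data (x, y) loaded above; the hidden_units argument of create_model is a name chosen here purely for illustration.

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def create_model(hidden_units=8):
    # build a fresh model for every hyperparameter combination
    model = Sequential()
    model.add(Dense(11, input_dim=11, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(hidden_units, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, verbose=0)
param_grid = {
    'hidden_units': [4, 8, 16],  # width of the hidden layer
    'epochs': [50, 100],         # training length
    'batch_size': [10, 20],
}
grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=3)
grid_result = grid.fit(x, y)
print('Best: %f using %s' % (grid_result.best_score_, grid_result.best_params_))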

Related

Keras MLP extracting predictions from each cross validation

I have built a sequential Keras MLP and would like to extract the predicted labels (not accuracy) at each cross-validation fold (CV=5). I also need to extract the cross-validation X_test data that it tests on at each iteration.
Is this possible?
model = Sequential()
model.add(Dense(units=56, input_dim=11, activation="relu",
                kernel_initializer=initializer))
model.add(Dropout(0.2))
model.add(Dense(units=28, activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(units=1, activation="sigmoid"))
model.compile(loss='binary_crossentropy', optimizer=GradDesc12,
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=20, batch_size=128, verbose=0)
kFold = StratifiedKFold(n_splits=5)
for train, test in kFold.split(X_data, Y_labels):
Thank you
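A minimal sketch of one way to do this, assuming X_data and Y_labels are NumPy arrays and that build_model() is a hypothetical helper returning a freshly compiled copy of the network above (rebuilding it per fold keeps the folds from sharing weights):

import numpy as np
from sklearn.model_selection import StratifiedKFold

kFold = StratifiedKFold(n_splits=5)
fold_predictions = []  # predicted labels for each fold
fold_test_sets = []    # the X rows each fold was tested on

for train_idx, test_idx in kFold.split(X_data, Y_labels):
    model = build_model()  # fresh, untrained model for this fold
    model.fit(X_data[train_idx], Y_labels[train_idx],
              epochs=20, batch_size=128, verbose=0)
    # threshold the sigmoid outputs to get hard labels
    preds = (model.predict(X_data[test_idx]) > 0.5).astype(int)
    fold_predictions.append(preds)
    fold_test_sets.append(X_data[test_idx])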

Setting output variable in deep learning

I have this code:
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=',')
X = dataset[:,0:8]
y = dataset[:,8]
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=150, batch_size=10)
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
I need to change the output column so it predicts/learns a score (for instance, 1 to a million) instead of 0 or 1 (sigmoid).
For your case you need to use relu as the activation function in the last (output) layer instead of sigmoid. The range of relu is [0, inf). You then need to use 'MSE' (mean squared error) as your loss.
Conceptually, the problem you are trying to solve is a regression problem. A sketch of the changed model is below.
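A minimal sketch of the regression variant, assuming the scores are non-negative (relu cannot output negative values); mean absolute error is added as a metric purely for readability:

model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='relu'))  # output in [0, inf) instead of (0, 1)
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
model.fit(X, y, epochs=150, batch_size=10)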

ValueError: The model is not configured to compute accuracy

When using this code, which I got from a tutorial, I get an error saying the model is not configured to compute accuracy and that I should pass accuracy. The weird part is that I am already passing metrics = ['accuracy'].
I've searched a lot, and all the codes I have seen work fine except mine.
Evaluating the ANN
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from tensorflow.python.keras.models import Sequential #Used to initialize the NN
from tensorflow.python.keras.layers import Dense #Used to create the layers in the ANN
def build_classifier():
    classifier = Sequential()
    classifier.add(Dense(units=6, kernel_initializer='uniform', activation='relu', input_dim=11))
    classifier.add(Dense(units=6, kernel_initializer='uniform', activation='relu'))
    classifier.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))
    classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return classifier

# Needs to be revised from the evaluating video in the course if needed
classifier = KerasClassifier(build_fn=build_classifier, batch_size=10, epochs=100)
accuracies = cross_val_score(estimator=classifier, X=X_train, y=y_train, cv=10, n_jobs=-1)
I expect the output to be the accuracies vector; instead I got:
ValueError: The model is not configured to compute accuracy. You should pass metrics=["accuracy"] to the model.compile() method.
Changing the parameter from metrics=['accuracy'] to metrics=['acc'] works for me.
Regards,
Joseph
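For reference, the changed compile() call from that fix would be:

classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])

Note also that the code above mixes keras.wrappers.scikit_learn with tensorflow.python.keras layers; keeping all imports from a single package may also resolve this error, though that is an assumption rather than something confirmed in this thread.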

Same NN architecture giving different accuracies in TensorFlow and Keras

A neural network trained on the iris dataset using [4, 4] hidden layers, created separately in TensorFlow and Keras, gives different results.
While the TensorFlow model gives 96.6% accuracy on the test set, the Keras model gives only around 50%. The various hyperparameters such as learning rate, optimiser, and mini-batch size were the same in both cases.
Keras model
model = Sequential()
model.add(Dense(units=4, activation='relu', input_dim=4))
model.add(Dropout(0.25))
model.add(Dense(units=4, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(units=3, activation='softmax'))
adam = Adam(epsilon=10**(-6), lr=0.01)
# note: `adam` is configured above, but compile() is passed the string
# 'adagrad', so the Adam instance is never actually used
model.compile(optimizer='adagrad', loss='categorical_crossentropy', metrics=['accuracy'])
one_hot_labels = keras.utils.to_categorical(y_train, num_classes=3)
model.fit(X_train, one_hot_labels, epochs=50, batch_size=40)
Tensorflow model
feature_columns = [tf.feature_column.numeric_column(key=name, shape=(1), dtype=tf.float32)
                   for name in list(X_train.columns)]
classifier = tf.estimator.DNNClassifier(hidden_units=[4, 4],
                                        feature_columns=feature_columns,
                                        n_classes=3,
                                        dropout=0.25,
                                        model_dir='./DNN_model')
train_input_fn = tf.estimator.inputs.pandas_input_fn(x=X_train,
                                                     y=y_train,
                                                     batch_size=40,
                                                     num_epochs=50,
                                                     shuffle=False)
classifier.train(input_fn=train_input_fn, steps=None)
For the Keras model, I did try changing the learning rate, increasing the number of epochs, using different optimisers, etc. Even so, the accuracy remained poor. Clearly the two models are doing different things, but on the surface they seem identical to me in all the key aspects.
Any help is appreciated.
They have the same architecture, and that's all.
The difference in performance comes from one or more of these factors (see the sketch after this list):
You have Dropout, so the two networks start each run behaving differently (check how Dropout works).
Weight initialization: which method are you using in Keras, and which in TensorFlow?
Check all parameters of the optimizer.
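One concrete thing to check in the Keras code above: an Adam instance is configured but compile() is passed the string 'adagrad', so the configured optimizer is never used. A minimal sketch of compiling with the intended Adam instance (old Keras 2 argument names, matching the question's code):

from keras.optimizers import Adam

adam = Adam(lr=0.01, epsilon=10**(-6))
model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])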

Concatenate flattened output with another dataset in Keras (Python)

I have 2 datasets. For the first dataset I want to apply convolutions and keep the result of the flatten layer, then concatenate it with the other dataset and do a simple feed-forward network. Is this possible with Keras?
def build_model(x_train, y_train):
    np.random.seed(7)
    left = Sequential()
    left.add(Conv1D(nb_filter=6, filter_length=3, input_shape=(48, 1),
                    activation='relu', kernel_initializer='glorot_uniform'))
    left.add(Conv1D(nb_filter=6, filter_length=3, activation='relu'))
    #model.add(MaxPooling1D())
    print(model)
    #model.add(Dropout(0.2))
    # flatten layer
    #https://www.quora.com/What-is-the-meaning-of-flattening-step-in-a-convolutional-neural-network
    left.add(Flatten())
    left.add(Reshape((48, 1)))
    right = Sequential()
    #model.add(Reshape((48,1)))
    # Compile model
    model.add(Merge([left, right], mode='sum'))
    model.add(Dense(10, 10))
    epochs = 100
    lrate = 0.01
    decay = lrate / epochs
    sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
    #clipvalue=0.5)
    model.compile(loss='mean_squared_error', optimizer='Adam')
    model.fit(x_train, y_train, nb_epoch=epochs, batch_size=10, verbose=1)
    #model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
    return model
You need to look at the functional API. The Sequential model you are using is not designed to take multiple network inputs.
Follow the "Multi-input and multi-output models" example and you will have it working in no time! A minimal sketch is below.
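A minimal sketch with the functional API, assuming the first input is a (48, 1) sequence and the second dataset is a flat vector whose width (10 here) is a placeholder; x_train_conv and x_train_extra are hypothetical names for the two training arrays:

from keras.models import Model
from keras.layers import Input, Conv1D, Flatten, Dense, concatenate

conv_input = Input(shape=(48, 1))
x = Conv1D(filters=6, kernel_size=3, activation='relu',
           kernel_initializer='glorot_uniform')(conv_input)
x = Conv1D(filters=6, kernel_size=3, activation='relu')(x)
x = Flatten()(x)  # flattened convolutional features

extra_input = Input(shape=(10,))        # the second dataset
merged = concatenate([x, extra_input])  # join the two branches
out = Dense(10, activation='relu')(merged)
out = Dense(1)(out)                     # linear output for the MSE loss

model = Model(inputs=[conv_input, extra_input], outputs=out)
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit([x_train_conv, x_train_extra], y_train, epochs=100, batch_size=10)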
