Running Keras Sequential model with different optimizers - python

I want to compare the performance of my model across various optimizers (sgd, rmsprop, adam, adamax, etc.).
So I define a Keras Sequential model and then do this:
epochs = 50

print('--sgd start---')
model.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])
checkpointer_sgd = ModelCheckpoint(filepath='my_model_sgd.h5',
                                   verbose=1, save_best_only=True)
history_sgd = model.fit(X_train, y_train,
                        validation_split=0.2, epochs=epochs, batch_size=32,
                        callbacks=[checkpointer_sgd], verbose=1)
print('--sgd end---')
print('--------------------------------------------')

print('--rmsprop start---')
model.compile(optimizer='rmsprop', loss='mse', metrics=['accuracy'])
checkpointer_rmsprop = ModelCheckpoint(filepath='my_model_rmsprop.h5',
                                       verbose=1, save_best_only=True)
history_rmsprop = model.fit(X_train, y_train,
                            validation_split=0.2, epochs=epochs, batch_size=32,
                            callbacks=[checkpointer_rmsprop], verbose=1)
print('--rmsprop end---')
I do this for all the optimizers (the code above shows only sgd and rmsprop) and then execute the statements. What happens is that the first optimizer starts from a low accuracy which increases as the epochs go by, but each subsequent optimizer already starts from a high accuracy.
Is the above code correct, or do I need to reset the model every time before I compile?
See below the first-epoch output for the different optimizers:
--sgd start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 46s 27ms/step - loss: 0.0510 - acc: 0.2985 - val_loss: 0.0442 - val_acc: 0.6986
--rmsprop start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 46s 27ms/step - loss: 0.0341 - acc: 0.5940 - val_loss: 0.0148 - val_acc: 0.6963
--adagrad start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 44s 26ms/step - loss: 0.0068 - acc: 0.6951 - val_loss: 0.0046 - val_acc: 0.6963
--adadelta start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 52s 30ms/step - loss: 8.0430e-04 - acc: 0.8125 - val_loss: 9.4660e-04 - val_acc: 0.7850
--adam start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 47s 27ms/step - loss: 7.7599e-04 - acc: 0.8201 - val_loss: 9.8981e-04 - val_acc: 0.7757
--adamax start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 54s 31ms/step - loss: 6.4941e-04 - acc: 0.8359 - val_loss: 9.2495e-04 - val_acc: 0.7991

Use K.clear_session(), which will clean up everything.
from keras import backend as K

def get_model():
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    return model

model = get_model()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
K.clear_session()  # it will destroy the Keras session/graph

model1 = get_model()
model1.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
model1.fit(X, Y, epochs=150, batch_size=10, verbose=0)
K.clear_session()
This solution should solve your problem. Let me know if it works.

Recompiling the model does not change its state. Weights learned before compilation will be the same after compilation. You need to delete the model object to clear the weights, and create a new one before compiling again.
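For example, a minimal sketch of that idea, looping over optimizers and rebuilding the model on each iteration (build_model() is a hypothetical helper that recreates your architecture; X_train/y_train are the arrays from the question):

from keras.callbacks import ModelCheckpoint

histories = {}
for opt in ['sgd', 'rmsprop', 'adagrad', 'adadelta', 'adam', 'adamax']:
    model = build_model()  # hypothetical helper returning a freshly initialized Sequential model
    model.compile(optimizer=opt, loss='mse', metrics=['accuracy'])
    checkpointer = ModelCheckpoint(filepath='my_model_%s.h5' % opt,
                                   verbose=1, save_best_only=True)
    # each run now starts from fresh, untrained weights
    histories[opt] = model.fit(X_train, y_train, validation_split=0.2,
                               epochs=50, batch_size=32,
                               callbacks=[checkpointer], verbose=1)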

Related

Loss is not decreasing while training Keras Sequential Model

I'm creating a very simple 2-layer feed-forward network but am finding that the loss is not updating at all. I have some ideas, but I wanted to get additional feedback/guidance.
Details about the data:
X_train:
(336876, 158)
X_dev:
(42109, 158)
Y_train counts:
0 285793
1 51083
Name: default, dtype: int64
Y_dev counts:
0 35724
1 6385
Name: default, dtype: int64
And here is my model architecture:
# define the architecture of the network
model = Sequential()
model.add(Dense(50, input_dim=X_train.shape[1], init="uniform", activation="relu"))

print("[INFO] compiling model...")
adam = Adam(lr=0.01)
model.compile(loss="binary_crossentropy", optimizer=adam,
              metrics=['accuracy'])
model.fit(np.array(X_train), np.array(Y_train), epochs=12, batch_size=128, verbose=1)
model.add(Dense(1, activation='sigmoid'))
Now, with this, my loss after the first few epochs is as follows:
Epoch 1/12
336876/336876 [==============================] - 8s - loss: 2.4441 - acc: 0.8484
Epoch 2/12
336876/336876 [==============================] - 7s - loss: 2.4441 - acc: 0.8484
Epoch 3/12
336876/336876 [==============================] - 6s - loss: 2.4441 - acc: 0.8484
Epoch 4/12
336876/336876 [==============================] - 7s - loss: 2.4441 - acc: 0.8484
Epoch 5/12
336876/336876 [==============================] - 7s - loss: 2.4441 - acc: 0.8484
Epoch 6/12
336876/336876 [==============================] - 7s - loss: 2.4441 - acc: 0.8484
Epoch 7/12
336876/336876 [==============================] - 7s - loss: 2.4441 - acc: 0.8484
Epoch 8/12
336876/336876 [==============================] - 6s - loss: 2.4441 - acc: 0.8484
Epoch 9/12
336876/336876 [==============================] - 6s - loss: 2.4441 - acc: 0.8484
And when I test the model after, my f1_score is 0. My main thought was that I may need more data but I'd still expect it to perform better than it is now on the test set. Could it be that it is overfitting? I added Dropout but no luck there either.
Any help would be much appreciated.
At first glance, I believe your learning rate is too high. Also, please consider normalizing your data, especially if different features have different ranges of values (look at scaling). Also consider changing your layer activations depending on whether your labels are multi-class or not. Assuming your code is meant to be of this form (you seem to have some typos in the problem description):
# define the architecture of the network
model = Sequential()
# also, what is the init="uniform" argument? I did not find this in the Keras documentation; consider removing it.
model.add(Dense(50, input_dim=X_train.shape[1], init="uniform",
                activation="relu"))
model.add(Dense(1, activation='sigmoid'))
# a slightly more conservative learning rate, play around with this.
adam = Adam(lr=0.0001)
model.compile(loss="binary_crossentropy", optimizer=adam,
              metrics=['accuracy'])
model.fit(np.array(X_train), np.array(Y_train), epochs=12, batch_size=128,
          verbose=1)
This should lead the loss to converge. If not, please consider deepening your neural net (think about how many parameters you may need).
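As for the normalization suggestion above, a minimal sketch using scikit-learn's StandardScaler (assuming X_train and X_dev are the NumPy arrays described in the question):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit the scaling statistics on the training set only
X_dev_scaled = scaler.transform(X_dev)          # reuse the same statistics for the dev set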
Consider adding the classification layer before compiling your model.
model.add(Dense(1, activation = 'sigmoid'))
adam = Adam(lr=0.01)
model.compile(loss="binary_crossentropy", optimizer=adam,
metrics=['accuracy'])
model.fit(np.array(X_train), np.array(Y_train), epochs=12, batch_size=128, verbose=1)

Keras network fit: loss is 'nan', accuracy doesn't change

I'm trying to fit a Keras network, but in every epoch the loss is 'nan' and the accuracy doesn't change... I tried changing the number of epochs, layers and neurons, the learning rate and the optimizer; I checked the datasets for NaN values and normalized the data in different ways, but the problem was not solved. Thanks for your help.
np.random.seed(1337)
# example of input vector: [-1.459746, 0.2694708, ... 0.90043]
# example of output vector: [1, 0] or [0, 1]
model = Sequential()
model.add(Dense(1000, activation='tanh', init='normal', input_dim=503))
model.add(Dense(2, init='normal', activation='softmax'))
opt = optimizers.sgd(lr=0.01)
model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=['accuracy'])
print(model.summary())
model.fit(x_train, y_train, batch_size=1000, nb_epoch=100, verbose=1)
99804/99804 [==============================] - 5s 52us/step - loss: nan - acc: 0.4938
Epoch 1/100
99804/99804 [==============================] - 5s 49us/step - loss: nan - acc: 0.4938
Epoch 2/100
99804/99804 [==============================] - 5s 51us/step - loss: nan - acc: 0.4938
Epoch 3/100
99804/99804 [==============================] - 5s 52us/step - loss: nan - acc: 0.4938
Epoch 4/100
99804/99804 [==============================] - 5s 52us/step - loss: nan - acc: 0.4938
Epoch 5/100
99804/99804 [==============================] - 5s 51us/step - loss: nan - acc: 0.4938
...
Oh, the problem has been found! After normalization, one NaN value appeared in the input vector.
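For reference, a quick way to check for that after normalization (a minimal sketch, assuming x_train is a NumPy array):

import numpy as np

print(np.isnan(x_train).any())         # True if any NaN slipped into the inputs
print(np.argwhere(np.isnan(x_train)))  # indices of the offending entries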
First, convert your output to categorical, as described in the Keras documentation:
Note: when using the categorical_crossentropy loss, your targets should be in categorical format. In order to convert integer targets into categorical targets, you can use the Keras utility to_categorical:
from keras.utils import to_categorical
categorical_labels = to_categorical(int_labels, num_classes=None)

Constant Validation Accuracy with a high loss in machine learning

I'm currently trying to create an image classification model using Inception V3 with 2 classes. I have 1428 images, split about 70/30 between the classes. When I run my model I get a pretty high loss as well as a constant validation accuracy. What might be causing this constant value?
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels, dtype="uint8")
(trainX, testX, trainY, testY) = train_test_split(
    data, labels,
    test_size=0.2,
    random_state=42)

img_width, img_height = 320, 320  # InceptionV3 size
train_samples = 1145
validation_samples = 287
epochs = 20
batch_size = 32

base_model = keras.applications.InceptionV3(
    weights='imagenet',
    include_top=False,
    input_shape=(img_width, img_height, 3))

model_top = keras.models.Sequential()
model_top.add(keras.layers.GlobalAveragePooling2D(input_shape=base_model.output_shape[1:], data_format=None))
model_top.add(keras.layers.Dense(350, activation='relu'))
model_top.add(keras.layers.Dropout(0.2))
model_top.add(keras.layers.Dense(1, activation='sigmoid'))
model = keras.models.Model(inputs=base_model.input, outputs=model_top(base_model.output))

for layer in model.layers[:30]:
    layer.trainable = False

model.compile(optimizer=keras.optimizers.Adam(
                  lr=0.00001,
                  beta_1=0.9,
                  beta_2=0.999,
                  epsilon=1e-08),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Image Processing and Augmentation
train_datagen = keras.preprocessing.image.ImageDataGenerator(
    zoom_range=0.05,
    # width_shift_range=0.05,
    height_shift_range=0.05,
    horizontal_flip=True,
    vertical_flip=True,
    fill_mode='nearest')
val_datagen = keras.preprocessing.image.ImageDataGenerator()

train_generator = train_datagen.flow(
    trainX,
    trainY,
    batch_size=batch_size,
    shuffle=True)
validation_generator = val_datagen.flow(
    testX,
    testY,
    batch_size=batch_size)

history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_samples // batch_size,
    callbacks=[ModelCheckpoint])
This is my log when I run my model:
Epoch 1/20
35/35 [==============================] - 52s 1s/step - loss: 0.6347 - acc: 0.6830 - val_loss: 0.6237 - val_acc: 0.6875
Epoch 2/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6364 - acc: 0.6756 - val_loss: 0.6265 - val_acc: 0.6875
Epoch 3/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6420 - acc: 0.6743 - val_loss: 0.6254 - val_acc: 0.6875
Epoch 4/20
35/35 [==============================] - 14s 414ms/step - loss: 0.6365 - acc: 0.6851 - val_loss: 0.6289 - val_acc: 0.6875
Epoch 5/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6359 - acc: 0.6727 - val_loss: 0.6244 - val_acc: 0.6875
Epoch 6/20
35/35 [==============================] - 15s 415ms/step - loss: 0.6342 - acc: 0.6862 - val_loss: 0.6243 - val_acc: 0.6875
I think your learning rate is too low and you have too few epochs. Try lr = 0.001 and epochs = 100.
Your accuracy is 68.25%. Given that your classes are split roughly 70/30 it is likely that your model is just predicting the same thing every time, ignoring the input. That would give the accuracy you are seeing. Your model has not yet learned from your data.
As Novak said, your learning rate seems very low, so maybe try increasing that first to see if that helps.
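A minimal sketch of that suggestion, recompiling with the larger learning rate proposed above (0.001 is just a starting point to experiment with):

model.compile(optimizer=keras.optimizers.Adam(lr=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])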

Neural network in keras not converging

I'm building a simple Neural network in Keras, like the following:
# create model
model = Sequential()
model.add(Dense(1000, input_dim=x_train.shape[1], activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='mean_squared_error', metrics=['accuracy'], optimizer='RMSprop')
# Fit the model
model.fit(x_train, y_train, epochs=20, batch_size=700, verbose=2)
# evaluate the model
scores = model.evaluate(x_test, y_test, verbose=0)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
The shape of the used data is:
x_train = (49972, 601)
y_train = (49972, 1)
My problem is that the network is not converging; the accuracy is stuck around 0.0168, as shown below:
Epoch 1/20
- 1s - loss: 3.2222 - acc: 0.0174
Epoch 2/20
- 1s - loss: 3.1757 - acc: 0.0187
Epoch 3/20
- 1s - loss: 3.1731 - acc: 0.0212
Epoch 4/20
- 1s - loss: 3.1721 - acc: 0.0220
Epoch 5/20
- 1s - loss: 3.1716 - acc: 0.0225
Epoch 6/20
- 1s - loss: 3.1711 - acc: 0.0235
Epoch 7/20
- 1s - loss: 3.1698 - acc: 0.0245
Epoch 8/20
- 1s - loss: 3.1690 - acc: 0.0251
Epoch 9/20
- 1s - loss: 3.1686 - acc: 0.0257
Epoch 10/20
- 1s - loss: 3.1679 - acc: 0.0261
Epoch 11/20
- 1s - loss: 3.1674 - acc: 0.0267
Epoch 12/20
- 1s - loss: 3.1667 - acc: 0.0277
Epoch 13/20
- 1s - loss: 3.1656 - acc: 0.0285
Epoch 14/20
- 1s - loss: 3.1653 - acc: 0.0288
Epoch 15/20
- 1s - loss: 3.1653 - acc: 0.0291
I used the scikit-learn library to build the same structure with the same data, and it works perfectly, showing an accuracy higher than 0.5:
model = Pipeline([
    ('classifier', MLPClassifier(hidden_layer_sizes=(1000), activation='relu',
                                 max_iter=20, verbose=2, batch_size=700, random_state=0))
])
I'm totally sure that I used the same data for both models, and this is how I prepare it:
def load_data():
    le = preprocessing.LabelEncoder()
    with open('_DATA_train.txt', 'rb') as fp:
        train = pickle.load(fp)
    with open('_DATA_test.txt', 'rb') as fp:
        test = pickle.load(fp)
    x_train = train[:, 0:(train.shape[1] - 1)]
    y_train = train[:, (train.shape[1] - 1)]
    y_train = le.fit_transform(y_train).reshape([-1, 1])
    x_test = test[:, 0:(test.shape[1] - 1)]
    y_test = test[:, (test.shape[1] - 1)]
    y_test = le.fit_transform(y_test).reshape([-1, 1])
    print(x_train.shape, ' ', y_train.shape)
    print(x_test.shape, ' ', y_test.shape)
    return x_train, y_train, x_test, y_test
What is the problem with the Keras structure?
Edited:
It's a multi-class classification problem: the y_train labels are [0, 1, 2, 3].
For a multiclass problem your labels should be one-hot encoded. For example, if the options are [0, 1, 2, 3] and the label is 1, then it should be [0, 1, 0, 0].
Your final layer should be a dense layer with 4 units and an activation of softmax.
model.add(Dense(4, activation='softmax'))
And your loss should be categorical_crossentropy
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='RMSprop')
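Putting these pieces together, a minimal sketch (assuming the integer labels y_train returned by load_data() above):

from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

y_train_cat = to_categorical(y_train, num_classes=4)  # one-hot encode the 4 classes

model = Sequential()
model.add(Dense(1000, input_dim=x_train.shape[1], activation='relu'))
model.add(Dense(4, activation='softmax'))
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='RMSprop')
model.fit(x_train, y_train_cat, epochs=20, batch_size=700, verbose=2)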

Issue with Combining LSTM and CNN? (Python, Keras)

I want to predict 8-character license plates, so I wrote the model below in Keras:
x = Input(shape=(HEIGHT, WIDTH, CHANNELS))
base_model = InceptionV3(include_top=False, weights='imagenet', input_shape=(HEIGHT, WIDTH, CHANNELS))
base_model.trainable = False
y = base_model(x)
y = Reshape((8, 9 * 256))(y)
y = LSTM(units=20, return_sequences='true')(y)
y = Dropout(0.5)(y)
y = TimeDistributed(Dense(TOTAL_CHARS, activation="softmax", activity_regularizer=regularizers.l2(REGUL_PARAM)))(y)
y = Dropout(0.25)(y)
model = Model(input=x, output=y)
model.compile(loss="categorical_crossentropy", optimizer='rmsprop', metrics=['accuracy'])
I have about 6000 samples for training, which I augment with ImageDataGenerator. My problem is that the loss and accuracy are approximately constant over time:
************************************************************
Epoch: 1
************************************************************
Train on 6869 samples, validate on 1718 samples
Epoch 1/1
6856/6869 [============================>.] - ETA: 0s - loss: 5.4525 - acc: 0.1924Epoch 00001: val_loss improved from 2.17175 to 2.15020, saving model to ./trained_model_V10.hdf5
6869/6869 [==============================] - 25s 4ms/step - loss: 5.4535 - acc: 0.1924 - val_loss: 2.1502 - val_acc: 0.2232
************************************************************
Epoch: 2
************************************************************
Train on 6869 samples, validate on 1718 samples
Epoch 1/1
6848/6869 [============================>.] - ETA: 0s - loss: 5.4543 - acc: 0.1959Epoch 00001: val_loss improved from 2.15020 to 2.11809, saving model to ./trained_model_V10.hdf5
6869/6869 [==============================] - 26s 4ms/step - loss: 5.4537 - acc: 0.1958 - val_loss: 2.1181 - val_acc: 0.2281
************************************************************
Epoch: 3
************************************************************
Train on 6869 samples, validate on 1718 samples
Epoch 1/1
6856/6869 [============================>.] - ETA: 0s - loss: 5.4284 - acc: 0.1977Epoch 00001: val_loss improved from 2.11809 to 2.09679, saving model to ./trained_model_V10.hdf5
6869/6869 [==============================] - 25s 4ms/step - loss: 5.4282 - acc: 0.1978 - val_loss: 2.0968 - val_acc: 0.2304
************************************************************
Epoch: 4
************************************************************
Train on 6869 samples, validate on 1718 samples
Epoch 1/1
6856/6869 [============================>.] - ETA: 0s - loss: 5.4500 - acc: 0.2004Epoch 00001: val_loss did not improve
6869/6869 [==============================] - 25s 4ms/step - loss: 5.4490 - acc: 0.2004 - val_loss: 2.1146 - val_acc: 0.2355
************************************************************
Epoch: 5
************************************************************
Train on 6869 samples, validate on 1718 samples
Epoch 1/1
6848/6869 [============================>.] - ETA: 0s - loss: 5.4399 - acc: 0.2006Epoch 00001: val_loss did not improve
6869/6869 [==============================] - 25s 4ms/step - loss: 5.4374 - acc: 0.2009 - val_loss: 2.1102 - val_acc: 0.2324
************************************************************
Epoch: 6
************************************************************
Train on 6869 samples, validate on 1718 samples
Epoch 1/1
6856/6869 [============================>.] - ETA: 0s - loss: 5.4636 - acc: 0.1977Epoch 00001: val_loss improved from 2.09679 to 2.09076, saving model to ./trained_model_V10.hdf5
6869/6869 [==============================] - 25s 4ms/step - loss: 5.4629 - acc: 0.1978 - val_loss: 2.0908 - val_acc: 0.2341
************************************************************
Now, I am not sure about the correctness of my model, and I think the problem is the model itself. Is this the correct way to combine a CNN and an LSTM?
I have also tried the model below:
REGUL_PARAM = 0
image = Input(shape=(HEIGHT, WIDTH, CHANNELS))
x = Reshape((8, HEIGHT, int(WIDTH/8), CHANNELS))(image)
y = TimeDistributed(Conv2D(16, (3, 3), activation='relu', padding='same', activity_regularizer=regularizers.l2(REGUL_PARAM)))(x)
y = TimeDistributed(MaxPooling2D((2, 2)))(y)
y = TimeDistributed(Conv2D(32, (3, 3), activation='relu', padding='same', activity_regularizer=regularizers.l2(REGUL_PARAM)))(y)
y = TimeDistributed(MaxPooling2D((2, 2)))(y)
y = TimeDistributed(Conv2D(64, (3, 3), activation='relu', padding='same', activity_regularizer=regularizers.l2(REGUL_PARAM)))(y)
y = Reshape((int(y.shape[1]), int(y.shape[4]*y.shape[3]*y.shape[2])))(y)
y = Bidirectional(LSTM(units=50, return_sequences='true'))(y)
y = TimeDistributed(Dense(64, activity_regularizer=regularizers.l2(REGUL_PARAM), activation='relu'))(y)
y = Dropout(0.25)(y)
y = TimeDistributed(Dense(TOTAL_CHARS, activity_regularizer=regularizers.l2(REGUL_PARAM), activation='softmax'))(y)
y = Dropout(0.25)(y)
model = Model(inputs=image, outputs=y)
The accuracy for this is about 70%, but the point is that I cannot overfit even on a small portion of my data.
Apparently, your model doesn't work well.
You may take a look at this code.
'''Train a recurrent convolutional network on the IMDB sentiment
classification task.
Gets to 0.8498 test accuracy after 2 epochs. 41s/epoch on K520 GPU.
'''
from __future__ import print_function
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import LSTM
from keras.layers import Conv1D, MaxPooling1D
from keras.datasets import imdb
# Embedding
max_features = 20000
maxlen = 100
embedding_size = 128
# Convolution
kernel_size = 5
filters = 64
pool_size = 4
# LSTM
lstm_output_size = 70
# Training
batch_size = 30
epochs = 2
'''
Note:
batch_size is highly sensitive.
Only 2 epochs are needed as the dataset is very small.
'''
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, embedding_size, input_length=maxlen))
model.add(Dropout(0.25))
model.add(Conv1D(filters,
                 kernel_size,
                 padding='valid',
                 activation='relu',
                 strides=1))
model.add(MaxPooling1D(pool_size=pool_size))
model.add(LSTM(lstm_output_size))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
