Keras - Save output of intermediate layer - python

What I have done:
I'm using the following code to implement my Keras model:
X, tx, Y, ty = train_test_split(X, Y, test_size=0.2, random_state=np.random.seed(7), shuffle=True)
X = np.reshape(X, (X.shape[0], 1, X.shape[1]))
tx = np.reshape(tx, (tx.shape[0], 1, tx.shape[1]))
model = Sequential()
model.add(LSTM(100, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))
model.add(Dense(Y.shape[1], activation='softmax'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
filepath="weights-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
model.fit(X, Y, validation_split=.20,
          epochs=1000, batch_size=50, callbacks=callbacks_list, verbose=0)
OUTPUT:
Below is a part of the program's output:
Epoch 00993: val_acc did not improve
Epoch 00994: val_acc did not improve
Epoch 00995: val_acc did not improve
Epoch 00996: val_acc did not improve
Epoch 00997: val_acc did not improve
Epoch 00998: val_acc improved from 0.93900 to 0.94543, saving model to weights-998-0.94.hdf5
Epoch 00999: val_acc did not improve
PROBLEM:
I need to save the output of the LSTM layer at every epoch, but I do not know how.
Any ideas?

You can use the Keras functional API. You will have to rewrite your model creation, but it's not much work. Then, when you write something like this:
lstm_output = LSTM(128, ...)(x)
the symbolic output of the LSTM layer will be held in the lstm_output variable, and you can use it to read and save the layer's activations at every epoch.
I hope this answers your question.
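For example, here is a minimal sketch of that idea (it reuses the variable names from the question; the SaveLSTMOutput callback and the file names are only illustrative, and it assumes TensorFlow's bundled Keras):
import numpy as np
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import Callback

# Build the model with the functional API
inp = Input(shape=(X.shape[1], X.shape[2]))
lstm_output = LSTM(100, return_sequences=False)(inp)
out = Dense(Y.shape[1], activation='softmax')(lstm_output)

model = Model(inputs=inp, outputs=out)
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

# Second model sharing the same layers, but stopping at the LSTM output
feature_model = Model(inputs=inp, outputs=lstm_output)

class SaveLSTMOutput(Callback):
    # Save the LSTM activations for the training data at the end of each epoch
    def __init__(self, feature_model, data):
        super().__init__()
        self.feature_model = feature_model
        self.data = data
    def on_epoch_end(self, epoch, logs=None):
        features = self.feature_model.predict(self.data, verbose=0)
        np.save('lstm_output_epoch_{:03d}.npy'.format(epoch), features)

model.fit(X, Y, validation_split=.20, epochs=1000, batch_size=50,
          callbacks=[SaveLSTMOutput(feature_model, X)], verbose=0)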

Related

Keras CNN: high training accuracy but low testing accuracy; each time my training accuracy increases, the validation accuracy decreases

I am doing text classification; my dataset size is 16000 KB. My problem is that I get 95% accuracy in training but only 90% in testing. Some values in my dataset are zero; does that have an effect? Can I increase the testing accuracy, and how?
Here are my training and validation curves.
Here is my code:
model = Sequential()
model.add(Conv1D(filters=256, kernel_size=5, activation='relu', input_shape=(7, 1)))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(Dense(11, activation='softmax'))
model.summary()
model.compile(Adam(lr=0.001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(X_train, y_train,
                    epochs=200,
                    verbose=True,
                    validation_data=(X_test, y_test),
                    batch_size=128)
loss, accuracy = model.evaluate(X_train, y_train, verbose=True)
print("Training Accuracy: {:.4f}".format(accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print("Testing Accuracy: {:.4f}".format(accuracy))

Python: Improve accuracy of a Keras sequential model

While training the model, its accuracy stays low relative to the loss. I tried changing the number of epochs and the loss, but the accuracy is still not improving.
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.33, random_state = 42)
num_class = len(np.unique(data.IsAddedToReport.values))
model = Sequential()
model.add(Embedding(2000, 128,input_length = x.shape[1]))
model.add(SpatialDropout1D(0.5))
model.add(Dense(32, activation='relu'))
model.add(LSTM(196, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(num_class, activation='softmax'))
model.compile(loss = 'binary_crossentropy', optimizer='adam',metrics = ['acc'])
checkpointer = ModelCheckpoint(filepath, monitor='val_acc', verbose=2, save_best_only=True, mode='max')
history = model.fit(X_train,y_train,batch_size=128,epochs=10, validation_split=0.2, callbacks=[checkpointer])
Output :
Epoch 10/10
17013/17013 [==============================] - 1s 57us/step - loss: 0.6295 - acc: 0.6115 - val_loss: 0.7401 - val_acc: 0.5461
Epoch 00010: val_acc did not improve from 0.55125

How can I restart model.fit after x epochs if loss remains high?

I sometimes have trouble getting a good fit on my data, and when I restart the fit (with shuffle=True) I sometimes get a good fit.
See my previous question:
https://datascience.stackexchange.com/questions/62516/why-does-my-model-sometimes-not-learn-well-from-same-data
As a workaround, I want to automatically restart the fitting process if the loss is still high after x epochs. How can I achieve this?
I assume I would need a custom version of the EarlyStopping callback? How could I differentiate between stopping because a low loss (< 0.5) was found, meaning training is finished, and stopping because the loss is still > 0.5 after x epochs, meaning training needs to restart?
Here is a simplified structure:
def train_till_good():
    while not_finished:
        train()

def train():
    load_data()
    model = VerySimpleNet2()
    checkpoint = keras.callbacks.ModelCheckpoint(filepath=images_root + dataset_name + '\\CheckPoint.hdf5')
    myOpt = keras.optimizers.Adam(lr=0.001, decay=0.01)
    model.compile(optimizer=myOpt, loss='categorical_crossentropy', metrics=['accuracy'])
    LRS = CyclicLR(base_lr=0.000005, max_lr=0.0003, step_size=200.)
    tensorboard = keras.callbacks.TensorBoard(log_dir='C:\\Tensorflow', histogram_freq=0, write_graph=True, write_images=False)
    ES = keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)
    model.fit(train_images, train_labels, shuffle=True, epochs=num_epochs,
              callbacks=[checkpoint, tensorboard, ES, LRS],
              validation_data=(test_images, test_labels))

def VerySimpleNet2():
    model = keras.Sequential([
        keras.layers.Dense(112, activation=tf.nn.relu, input_shape=(224, 224, 3)),
        keras.layers.Dropout(0.4),
        keras.layers.Flatten(),
        keras.layers.Dense(3, activation=tf.nn.softmax)
    ])
    return model
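One possible approach (a rough sketch, not from the original post; the RestartIfLossHigh name, threshold, and epoch count are placeholders) is a custom callback that stops training and sets a flag when the loss is still above the threshold after a given number of epochs, so the outer loop can rebuild the model and fit again:
import tensorflow as tf

class RestartIfLossHigh(tf.keras.callbacks.Callback):
    # Stop training early and flag a restart if loss is still high after check_epoch epochs
    def __init__(self, threshold=0.5, check_epoch=20):
        super().__init__()
        self.threshold = threshold
        self.check_epoch = check_epoch
        self.needs_restart = False
    def on_epoch_end(self, epoch, logs=None):
        loss = (logs or {}).get('loss')
        if epoch + 1 >= self.check_epoch and loss is not None and loss > self.threshold:
            self.needs_restart = True        # tell the outer loop to start over
            self.model.stop_training = True  # end this fit() run now

# Outer loop: rebuild and refit until the callback no longer flags a restart
while True:
    model = VerySimpleNet2()
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    restart_cb = RestartIfLossHigh(threshold=0.5, check_epoch=20)
    model.fit(train_images, train_labels, shuffle=True, epochs=num_epochs,
              validation_data=(test_images, test_labels),
              callbacks=[restart_cb])
    if not restart_cb.needs_restart:
        break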

Keras, get output of a layer at each epoch

What I have done:
I implemented a Keras model as follows:
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size=0.2, random_state=np.random.seed(7), shuffle=True)
train_X = np.reshape(train_X, (train_X.shape[0], 1, train_X.shape[1]))
test_X = np.reshape(test_X, (test_X.shape[0], 1, test_X.shape[1]))
model = Sequential()
model.add(LSTM(100, return_sequences=False, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(train_Y.shape[1], activation='softmax'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(train_X, train_Y, validation_split=.20,
          epochs=1000, batch_size=50)
What I want:
I want to feed the output of the penultimate layer (the LSTM) to a support vector machine (SVM) at every epoch (all 1000 of them), so that the SVM is also trained.
But I do not know how to do this.
Any ideas?
UPDATED:
I used ModelCheckpoint as follows:
model = Sequential()
model.add(LSTM(100, return_sequences=False, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(train_Y.shape[1], activation='softmax'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
# checkpoint
filepath="weights-{epoch:02d}-{val_acc:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
model.fit(train_X, train_Y, validation_split=.20,
          epochs=1000, batch_size=50, callbacks=callbacks_list, verbose=0)
OUTPUT:
Epoch 00991: val_acc did not improve
Epoch 00992: val_acc improved from 0.93465 to 0.93900, saving model to weights-992-0.94.hdf5
Epoch 00993: val_acc did not improve
Epoch 00994: val_acc did not improve
Epoch 00995: val_acc did not improve
Epoch 00996: val_acc did not improve
Epoch 00997: val_acc did not improve
Epoch 00998: val_acc improved from 0.93900 to 0.94543, saving model to weights-998-0.94.hdf5
Epoch 00999: val_acc did not improve
PROBLEM:
How do I load all these models to obtain the output of the LSTM layer at each epoch, as @IonicSolutions suggested?
What works best in your situation depends on how exactly you set up and train your SVM, but there are at least two options using callbacks:
You could use the ModelCheckpoint callback to save a copy of the model you are training at each epoch and then later load all these models to obtain the output of the LSTM layer.
You can also create your own callback by implementing the Callback base class. Within the callback, the model can be accessed and you can use on_epoch_end to extract the LSTM output at the end of each epoch.
Edit: To get convenient access to the penultimate layer, you can do the following:
# Create the model with the functional API
inp = Input((train_X.shape[1], train_X.shape[2],))
lstm = LSTM(100, return_sequences=False)(inp)
dense = Dense(train_Y.shape[1], activation='softmax')(lstm)
# Create the full model
model = Model(inputs=inp, outputs=dense)
# Create the model for access to the LSTM layer
access = Model(inputs=inp, outputs=lstm)
Then, you can pass access to your callback when you instantiate it. The key thing to note here is that model and access share the very same LSTM layer, whose weights will change when training model.
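As a rough illustration (the LSTMFeatureCallback name and the SVM details are only placeholders, not part of the original answer; it assumes train_Y is one-hot encoded), such a callback could look like this:
import numpy as np
from keras.callbacks import Callback
from sklearn.svm import SVC

class LSTMFeatureCallback(Callback):
    # At the end of each epoch, extract LSTM features via `access` and (re)fit an SVM on them
    def __init__(self, access, train_X, train_labels):
        super().__init__()
        self.access = access
        self.train_X = train_X
        self.train_labels = train_labels  # integer class labels for the SVM
    def on_epoch_end(self, epoch, logs=None):
        features = self.access.predict(self.train_X)  # shape: (samples, 100)
        svm = SVC()
        svm.fit(features, self.train_labels)
        # alternatively, just save the features and train the SVM offline later
        np.save('lstm_features_epoch_{:03d}.npy'.format(epoch), features)

model.fit(train_X, train_Y, validation_split=.20, epochs=1000, batch_size=50,
          callbacks=[LSTMFeatureCallback(access, train_X, np.argmax(train_Y, axis=1))])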
In order to get the prediction output at each epoch, here is what we can do:
import tensorflow as tf
import keras
# define your custom callback for prediction
class PredictionCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        y_pred = self.model.predict(self.validation_data[0])
        print('prediction: {} at epoch: {}'.format(y_pred, epoch))

# ...
# register the callback before training starts
model.fit(X_train, y_train, batch_size=32, epochs=25,
          validation_data=(X_valid, y_valid),
          callbacks=[PredictionCallback()])

Keras earlystopping: print selected epoch

Simple question. I am using Keras earlystopping in the following form:
Earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1, mode='auto')
How can I get Keras to print the selected epoch once the model has been fit? I think you have to use logs but don't quite know how.
Thanks.
Edit:
The full code is very long! Let me add a bit more than I gave. Hopefully it will help.
# Define model
def design_flexiNN(m_type, neurons, shape_timestep, shape_feature, activation, kernel_ini):
    model = Sequential()
    model.add(Dense(neurons, input_dim=shape_feature, activation=activation, use_bias=True, kernel_initializer=kernel_ini))
    model.add(Dense(1, use_bias=True))
    model.compile(loss='mae', optimizer='Adam')
    return model

# fit model
def fit_flexiNN(m_type, train_X, train_y, epochs, batch_size, test_X, test_y):
    history = model.fit(train_X, train_y, epochs=epochs, batch_size=batch_size, callbacks=callbacks_list, validation_data=(test_X, test_y), verbose=0, shuffle=False)
    return history  # return the history so the chosen epoch can be inspected later

Earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1, mode='auto')
callbacks_list = [Earlystop]
model = design_flexiNN(m_type, neurons, neurons_step, train_X_feature_shape, activation, kernel_ini)
history = fit_flexiNN(m_type, train_X, train_y, ini_epochs, batch_size, test_X, test_y)
I've been able to infer the selected epoch by doing len(history.history['val_loss']) minus 1, but that doesn't work if you have a patience above zero.
I've been trying to solve this myself and realised that the len(history.history['val_loss']) method is almost correct. All you need to add is:
len(history.history['val_loss']) - patience
which should give you the epoch number of the selected model (assuming the model didn't run for the full number of epochs).
A slightly more thorough method would be:
model_loss = history.history["val_loss"]
epoch_chosen = model_loss.index(min(model_loss)) +1
print(epoch_chosen)
Hope this helps!
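Another option, if you keep a reference to the EarlyStopping callback (a small sketch; exact behaviour may vary slightly across Keras versions), is to read its stopped_epoch attribute after fitting:
Earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1, mode='auto')
history = model.fit(train_X, train_y, epochs=ini_epochs, batch_size=batch_size,
                    callbacks=[Earlystop], validation_data=(test_X, test_y), verbose=0)

# stopped_epoch stays 0 if training ran for the full number of epochs;
# otherwise the selected epoch is roughly stopped_epoch - patience (1-based)
if Earlystop.stopped_epoch > 0:
    print('Stopped at epoch {}, selected epoch ~ {}'.format(Earlystop.stopped_epoch + 1,
                                                            Earlystop.stopped_epoch + 1 - 5))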
