Keep training a Keras model by loading and saving the weights - python

Since I cannot install h5py due to a package inconsistency, I am wondering if it is possible to save and load the weights in Keras so I can keep training my model on new data. I know I can do the following:
old_weights = model.get_weights()
del model
new_model.set_weights(old_weights)
where model is the old model and new_model is the new one. Here is a complete example:
for i in training_data:
    model = Sequential()
    model.add(Dense(20, activation='tanh', input_dim=Input))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
    model.fit(X, y, epochs=8, batch_size=16, shuffle=False, verbose=0)
    new_model = Sequential()
    new_model.add(Dense(20, activation='tanh', input_dim=Input))
    new_model.add(Dense(1))
    new_model.compile(optimizer='adam', loss='mse')
    old_weights = model.get_weights()
    del model
    new_model.set_weights(old_weights)
    model = new_model
After reading each training example (X and y are different at each iteration), I want to save the weights, load them again, and continue from the pre-trained model. I am not sure my code does that, since I am defining the optimizer and calling model.compile again. Can anyone tell me whether this code saves the model and whether every iteration starts from the pre-trained model?

You don't need to keep recompiling the model. Instead, just fit the model multiple times after loading your samples.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(20, activation='tanh', input_dim=Input))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

# load the data into training_data
for data in training_data:
    model.fit(data[0], data[1], epochs=8, batch_size=16, shuffle=False, verbose=0)
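If you do need to persist the weights to disk between runs, here is a minimal sketch assuming tf.keras (TensorFlow 2.x): save_weights falls back to the TensorFlow checkpoint format when the filename has no .h5 suffix, so h5py is not required (the filename 'my_checkpoint' is just a placeholder):
# no .h5 suffix -> TensorFlow checkpoint format, no h5py needed
model.save_weights('my_checkpoint')

# later, in a fresh script: rebuild the same architecture, compile, then restore
model.load_weights('my_checkpoint')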

Related

ValueError: Input 0 of layer "sequential_8" is incompatible with the layer: expected shape=(None, 6557, 40), found shape=(None, 40)

I'm working with audio data: extracting features, storing them in a CSV file, and loading the data back from the CSV file. I'm getting the above error for the fit() function. I've made all the attempts I could think of and I'm pretty lost here. My code is added below:
def build_model(input_shape):
    # build network topology
    model = keras.Sequential()

    # 2 LSTM layers
    model.add(keras.layers.LSTM(64, input_shape=input_shape, return_sequences=True))
    model.add(keras.layers.LSTM(64))

    # dense layer
    model.add(keras.layers.Dense(64, activation='relu'))
    model.add(keras.layers.Dropout(0.3))

    # output layer
    model.add(keras.layers.Dense(10, activation='softmax'))

    return model
Creating network:
# create network
input_shape = (X_train.shape[0], X_train.shape[1])
model = build_model(input_shape)
# compile model
optimiser = keras.optimizers.Adam(learning_rate=0.0001)
model.compile(optimizer=optimiser,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
# train model
history = model.fit(X_train, y_train, validation_data=(X_validation, y_validation), batch_size=32, epochs=30)
# plot accuracy/error for training and validation
plot_history(history)
# evaluate model on test set
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=2)
print('\nTest accuracy:', test_acc)
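A hedged note on the likely cause: input_shape describes a single sample and must not include the number of samples, and an LSTM expects 3-D input of shape (batch, timesteps, features). Building input_shape from (X_train.shape[0], X_train.shape[1]) bakes the sample count into the model, which matches the 6557 in the error message. A sketch, assuming X_train has been reshaped to 3-D (num_samples, timesteps, features):
# use only the per-sample dimensions, not the sample count
input_shape = (X_train.shape[1], X_train.shape[2])
model = build_model(input_shape)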

feature importance in neural network (classification problem)

I have a classification problem and I need to find the important features.
My code is as follows:
model = Sequential()
model.add(Dropout(0.99, input_dim=len(X_train.columns)))
model.add(Dense(100, activation='relu',name='layer1'))
model.add(Dense(1, activation='sigmoid',name='layer'))
print(model.summary())
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100, batch_size=1)
How can I find the important features? As you can see in the code, it is not a regression problem.
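One model-agnostic way to get at this is permutation importance: shuffle one feature at a time in the validation data and measure how much the score drops. A minimal sketch, assuming X_val is a pandas DataFrame and model is the fitted Keras model from above:
import numpy as np

# baseline validation accuracy (evaluate returns [loss, accuracy] here)
baseline = model.evaluate(X_val, y_val, verbose=0)[1]

rng = np.random.default_rng(0)
importances = {}
for col in X_val.columns:
    X_perm = X_val.copy()
    X_perm[col] = rng.permutation(X_perm[col].values)  # break this feature's link to y
    importances[col] = baseline - model.evaluate(X_perm, y_val, verbose=0)[1]

# larger drop in accuracy -> more important feature
for col, imp in sorted(importances.items(), key=lambda kv: -kv[1]):
    print(col, round(imp, 4))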

Got different accuracy for training model and load model

I've tried both model.save_weights() and model.save(), and their corresponding load statements (model.load_weights() and load_model()). If I just put all my evaluation code at the end of the training script, everything works out fine.
The problem is with stopping Python, starting a new Python script, and reading in the weights/model to use them for inference.
The main method I've tried when loading is to define the model (using the same code from the training run that saved the weights), then run model.load_weights(), then compile the model. This is what's not working. And yet, as I say, going the model.save() and load_model() route produces similarly garbage output.
My Code:
import tensorflow as tf  # used below for tf.keras.models.save_model / load_model
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense, Embedding, Conv1D, GlobalMaxPooling1D, Dropout
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
model = Sequential()
model.add(Embedding(vocab_size, embedding_dim, input_length=train_padded.shape[1]))
model.add(Conv1D(48, 5, activation='relu', padding='valid'))
model.add(GlobalMaxPooling1D())
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(11, activation='softmax'))
model.compile(loss= 'categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
epochs = 50
batch_size = 32
history = model.fit(train_padded, training_labels, shuffle=False,
                    epochs=epochs, batch_size=batch_size,
                    validation_split=0.2,
                    callbacks=[ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.0001),
                               EarlyStopping(monitor='val_loss', mode='min', patience=2, verbose=1),
                               EarlyStopping(monitor='val_accuracy', mode='max', patience=5, verbose=1)])
score = model.evaluate(train_padded, training_labels, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))
#save model
tf.keras.models.save_model(model,'model.h5',overwrite=True)
#load model
model = tf.keras.models.load_model('model.h5')
score = model.evaluate(train_padded, training_labels, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))
#or
#save weights
model.save_weights('model.h5')
#load weights
model.load_weights('model.h5')
model.compile(loss= 'categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
score = model.evaluate(train_padded, training_labels, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))
Output
83.14% #training model
17.60% #load_model
21.37% #load_weights
Please help me. The weights for the trained model and the loaded model are the same, but the accuracy is different: when using load_model() it has gone from ~80% (training run) to ~20% on the same data.
Thanks
My issue was solved by fixing the seed for Keras, which uses the NumPy random generator, and, since I am using TensorFlow as the backend, fixing its seed as well. I added these 4 lines at the top of the file where the model is defined.
from numpy.random import seed
seed(1)  # fix the NumPy seed used by Keras
import tensorflow
tensorflow.random.set_seed(2)  # fix the TensorFlow seed
For more information have a look at this- https://machinelearningmastery.com/reproducible-results-neural-networks-keras/

Machine Learning with Keras: Different Validation Loss for the Same Model

I am trying to use Keras to train a simple feedforward network. I tried two different methods of building what I think is the same network, but one performs significantly better. The first, better-performing one is the following:
inputs = keras.Input(shape=(384,))
dense = layers.Dense(64, activation="relu")
x = dense(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(384)(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="simple_model")
model.compile(loss='mse', optimizer='Adam')
history = model.fit(X_train,
                    y_train_tf,
                    epochs=20,
                    validation_data=(X_test, y_test),
                    steps_per_epoch=100,
                    validation_steps=50)
and it settles on a validation loss of about 0.2. The second model performs much worse:
model = keras.models.Sequential()
model.add(Dense(64, input_shape=(384,), activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(384, activation='relu'))
optimizer = tf.keras.optimizers.Adam()
model.compile(loss='mse', optimizer=optimizer)
history = model.fit(X_train,
                    y_train_tf,
                    epochs=20,
                    validation_data=(X_test, y_test),
                    steps_per_epoch=100,
                    validation_steps=50)
and this has a validation loss of around 5. But when I call model.summary(), they look virtually identical. Is there something wrong with the second model?
I am not sure they are the same, since the second model has a relu activation after the last layer (384 units) and the first doesn't. This might be the issue, since the default activation of a Keras Dense layer is None (linear).
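If that is the cause, the fix is to drop the activation from the last layer of the second model so it defaults to linear, matching the first one. A sketch:
model = keras.models.Sequential()
model.add(Dense(64, input_shape=(384,), activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(384))  # no activation -> linear output, as in the functional model
model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam())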

Running into key error when building simple feedforward neural network in Keras

Here is a snapshot of my dataset, including its shape:
Now, here is the code I am using to build the NN:
# define the architecture of the network
model = Sequential()
model.add(Dense(50, input_dim=X_train.shape[1], kernel_initializer="uniform", activation="relu"))
model.add(Dense(38, activation="relu", kernel_initializer="uniform"))
model.add(Dense(1, activation='sigmoid'))
print("[INFO] compiling model...")
adam = Adam(lr=0.01)
model.compile(loss="binary_crossentropy", optimizer=adam,
metrics=["accuracy"])
model.fit(X_train, Y_train, epochs=50, batch_size=128,
verbose=1)
When I do this, I get the following error:
KeyError: '[233946 164308 296688 166151 276165 88219 117980 163503 182033 164328\n 188083 30380 37984 245771 308534 6215 181186 307488 172375 60446\n 29397 166681 5587 243263 103579 262182 107823 234790 258973 116433\n 199283 86118 172148 257334 286452 248407 81280 ...] not in index'
I haven't been able to find a solution to this. Any help would be much appreciated.
I believe the input is not a NumPy array, as described in this GitHub issue on the Keras page.
Try fitting the model using this:
model.fit(np.array(X_train), np.array(Y_train), epochs=50, batch_size=128,
          verbose=1)
This casts the inputs to NumPy arrays when fitting the data.
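For background: this KeyError typically shows up when X_train/Y_train are pandas objects whose index no longer matches the positional row numbers used to select batches (for example, after train_test_split on a DataFrame). If that is the case here, pandas' own .values attribute is an equivalent fix:
# assuming X_train/Y_train are a pandas DataFrame/Series
model.fit(X_train.values, Y_train.values, epochs=50, batch_size=128, verbose=1)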
