I created a neural network to classify messages. Now I want to collect the predictions into a list in Python. How do I do this?
So here is the model:
model = Sequential()
model.add(layers.Dense(500, activation="relu", input_shape=(7600,)))
# Hidden layers
model.add(layers.Dropout(0.4, noise_shape=None, seed=None))
model.add(layers.Dense(300, activation="relu"))
model.add(layers.Dropout(0.4, noise_shape=None, seed=None))
model.add(layers.Dense(100, activation="relu"))
model.add(layers.Dropout(0.4, noise_shape=None, seed=None))
model.add(layers.Dense(20, activation="softmax"))
model.summary()
model.compile(loss="categorical_crossentropy",
              optimizer="adam",
              metrics=['accuracy'])
model.fit(np.array(vectorized_training), np.array(y_train_neralnet),
          batch_size=2000,
          epochs=3,
          verbose=1,
          validation_data=(np.array(vectorized_validation), np.array(y_validation_neralnet)))
Here I tried to print the shape of the validation_data passed inside the model.fit() call, but it gives an error:
NameError: name 'validation_data' is not defined
This is what you are looking for:
preds = model.predict(X_test)
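Since the last layer is a 20-unit softmax, predict returns a NumPy array of class probabilities, one row per message. To get an actual Python list of predicted labels, take the argmax per row; a minimal sketch, assuming X_test is your vectorized test data:

import numpy as np

preds = model.predict(X_test)  # shape: (num_messages, 20) softmax probabilities
pred_labels = np.argmax(preds, axis=1).tolist()  # e.g. [3, 17, 0, ...]

# Or keep the full probability vectors as a list of lists:
pred_probs = preds.tolist()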
sequence_input = Input(shape=(MAX_LENGTH_SEQUENCE,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
l_lstm = Bidirectional(LSTM(10))(embedded_sequences)
preds = Dense(len(macronum), activation='softmax')(l_lstm)
model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['acc'])
I need to add an additional hidden layer and an attention layer to the above LSTM model. Usually I construct a model this way:
model = tensorflow.keras.Sequential()
model.add(tensorflow.keras.layers.LSTM(128, dropout=0.3,
          recurrent_dropout=0.2, input_shape=(N, K), return_sequences=True))
# model.add(tensorflow.keras.layers.LSTM(128, dropout=0.3,
#           recurrent_dropout=0.2, input_shape=(N, K), return_sequences=True))
model.add(Attention(name='attention_weight'))
model.add(tensorflow.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
print(_x_train.shape)
model.fit(_x_train, _y_train, epochs=10, batch_size=1)
scores = model.evaluate(_x_test, _y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
As shown in the second snippet, I normally stack as many layers as I need using
model.add
but I'm not familiar with the approach used in the first snippet.
If I want to add an extra LSTM layer and an attention layer to the first code, where should I include them, and what is the right syntax? See the sketch below.
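In the functional API there is no model.add; each layer is called on the output tensor of the previous layer, so extra layers are inserted by chaining calls between the embedding and the final Dense. A minimal sketch, assuming your Attention layer is the same one as in the second snippet (i.e. it consumes a sequence and returns one vector per sample, so the LSTMs feeding it need return_sequences=True):

sequence_input = Input(shape=(MAX_LENGTH_SEQUENCE,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)

# Extra stacked LSTM layer: return_sequences=True so the next layer
# still receives the full sequence of hidden states.
l_lstm = Bidirectional(LSTM(10, return_sequences=True))(embedded_sequences)
l_lstm2 = Bidirectional(LSTM(10, return_sequences=True))(l_lstm)

# The attention layer is inserted the same way: call it on the previous
# output. Assumes it collapses the sequence to one vector per sample.
l_att = Attention(name='attention_weight')(l_lstm2)

preds = Dense(len(macronum), activation='softmax')(l_att)
model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['acc'])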
I am trying to run a simple RNN with some data extracted from a CSV file. I have already preprocessed my data and split it into a train set and a validation set, but I get an error about incompatible shapes.
This is my network structure and what I tried so far. My shapes are (33714, 12) for x_train, (33714,) for y_train, (3745, 12) for x_val and (3745,) for y_val.
model = Sequential()
# LSTM LAYER IS ADDED TO MODEL WITH 128 CELLS IN IT
model.add(LSTM(128, input_shape=x_train.shape, activation='tanh', return_sequences=True))
model.add(Dropout(0.2)) # 20% DROPOUT ADDED FOR REGULARIZATION
model.add(BatchNormalization())
model.add(LSTM(128, input_shape=x_train.shape, activation='tanh', return_sequences=True)) # ADD ANOTHER LAYER
model.add(Dropout(0.1))
model.add(BatchNormalization())
model.add(LSTM(128, input_shape=x_train.shape, activation='tanh', return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(32, activation='relu')) # ADD A DENSE LAYER
model.add(Dropout(0.2))
model.add(Dense(2, activation='softmax')) # FINAL CLASSIFICATION LAYER WITH 2 CLASSES AND SOFTMAX
# ---------------------------------------------------------------------------------------------------
# OPTIMIZER SETTINGS
opt = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE, decay=DECAY)
# MODEL COMPILE
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
# CALLBACKS
tensorboard = TensorBoard(log_dir=f"logs/{NAME}")
filepath = "RNN_Final-{epoch:02d}-{val_acc:.3f}"
checkpoint = ModelCheckpoint("models/{}.model".format(filepath),
                             monitor='val_acc', verbose=1,
                             save_best_only=True, mode='max')  # save only the best ones
# RUN THE MODEL
history = model.fit(x_train, y_train, epochs=EPOCHS, batch_size=BATCH_SIZE,
                    validation_data=(x_val, y_val), callbacks=[tensorboard, checkpoint])
Though it will give you a large dimension, what may be best is to flatten the tensor with the larger rank.
A tensorflow.keras.layers.Flatten() will make your output shape the product of the non-batch dimensions, i.e. input: (None, 5, 5) -> Flatten() -> (None, 25).
For your example, this will give you:
(None, 33714, 12) -> (None, 404568).
I'm not entirely sure if this will work when you change the shape sizes, but that is how I overcame my own issue with incompatible shapes: expected: (None, x), got: (None, y, x).
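A minimal, runnable sketch of that shape behaviour, using the same hypothetical (5, 5) input as above:

import tensorflow as tf

# Flatten collapses every dimension except the batch axis into one.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(5, 5)),
])
model.summary()  # final output shape: (None, 25), i.e. 5 * 5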
I have the following code to run a VGG:
checkpoint = tf.keras.callbacks.ModelCheckpoint(checkpoint_filepath + 'MAssCalcVGG-noAug.h5',
                                                monitor='val_loss', mode='auto', verbose=1,
                                                save_best_only=True, save_freq='epoch')
#add custom fully-connected network on top of the already-trained base network
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation="relu"))
model.add(layers.Dense(1, activation="sigmoid"))
#freeze convolutional base
conv_base.trainable = False
model.compile(loss="binary_crossentropy",
optimizer=optimizers.Adam(lr=1e-3), # lr = 0.0001
metrics=METRICS)
#train fully-connected added part
history = model.fit(train_generat.flow(train_dataset_split,
                                       train_labels_split,
                                       batch_size=BATCH_SIZE,
                                       shuffle=False),
                    steps_per_epoch=len(train_dataset_split) // BATCH_SIZE,
                    epochs=100,
                    validation_data=valid_generat.flow(valid_dataset_split,
                                                       valid_labels_split,
                                                       batch_size=BATCH_SIZE,
                                                       shuffle=False),
                    validation_steps=len(valid_labels_split) // BATCH_SIZE,
                    callbacks=[es, checkpoint, GarbageCollectorCallback()])
#model.save(os.path.join(checkpoint_filepath, 'MAssCalcVGG-noAug.h5'))
model.summary()
but I get this:
NotImplementedError: Layer ModuleWrapper has arguments in `__init__` and therefore must override `get_config`
How can I fix it?
The code was working perfectly before; I probably made a change unintentionally that caused this.
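One common cause of this particular error (an assumption here, since the offending change isn't shown) is mixing layers imported from the standalone keras package into a tf.keras model: tf.keras wraps them in ModuleWrapper objects that have no get_config(), which breaks saving from the checkpoint callback. A minimal sketch of the usual remedy, keeping every import on the tf.keras side:

# Keep all Keras imports consistent; mixing `import keras` with
# `tensorflow.keras` produces ModuleWrapper layers that cannot be
# serialized when ModelCheckpoint saves the model.
from tensorflow.keras import models, layers, optimizers

model = models.Sequential()
model.add(conv_base)  # conv_base should likewise come from tf.keras.applications
model.add(layers.Flatten())
model.add(layers.Dense(256, activation="relu"))
model.add(layers.Dense(1, activation="sigmoid"))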
I am trying to use Keras to train a simple feedforward network. I tried two different methods of what I think is the same network, but one is performing significantly better. The first, and better performing, one is the following:
inputs = keras.Input(shape=(384,))
dense = layers.Dense(64, activation="relu")
x = dense(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(384)(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="simple_model")
model.compile(loss='mse',optimizer='Adam')
history = model.fit(X_train,
                    y_train_tf,
                    epochs=20,
                    validation_data=(X_test, y_test),
                    steps_per_epoch=100,
                    validation_steps=50)
and it settles on a validation loss of about 0.2. The second model performs much worse:
model = keras.models.Sequential()
model.add(Dense(64, input_shape=(384,), activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(384, activation='relu'))
optimizer = tf.keras.optimizers.Adam()
model.compile(loss='mse', optimizer=optimizer)
history = model.fit(X_train,
                    y_train_tf,
                    epochs=20,
                    validation_data=(X_test, y_test),
                    steps_per_epoch=100,
                    validation_steps=50)
and this has a validation loss of around 5. But when I call model.summary(), they look virtually the same. Is there something wrong with the second model?
I am not sure that they are the same, since the second model has a relu activation on the last layer (384 units) and the first doesn't. This might be the issue, since the default activation of a Keras Dense layer is None (i.e. linear).
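Concretely, dropping the activation from the final Dense layer should make the Sequential version match the functional one; a minimal sketch:

model = keras.models.Sequential()
model.add(Dense(64, input_shape=(384,), activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(384))  # no activation, i.e. linear output, matching the functional model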
model = Sequential()
model.add(Conv2D(50, (5,5), activation='relu', input_shape =(5,5,1), kernel_initializer='he_normal'))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.summary()
# compile the model
model.compile(loss='binary_crossentropy', optimizer= 'adam', metrics=['accuracy'])
model_checkpoint = ModelCheckpoint(r'C:\Users\globo\Desktop\Test_CNN\Results\Kernel5x5\Weights' + '\\' + test + r'\model_test{epoch:02d}.h5',
                                   save_freq=1, save_weights_only=True)
# fit the model
history = model.fit(X_train, Y_train, epochs=10, batch_size=32, verbose=1, callbacks=[model_checkpoint], shuffle=True, validation_split=0.5)
I'm already extracting the weights for each epoch with "ModelCheckpoint", but how can I extract the flatten layer's output for each epoch and save it?
Doing this with Sequential models is not feasible; you should use the functional API instead:
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense
from tensorflow.keras.models import Model

inp = Input((5, 5, 1))
x = Conv2D(50, (5, 5), activation='relu', kernel_initializer='he_normal')(inp)
xflatten = Flatten()(x)
out = Dense(1, activation='sigmoid')(xflatten)

main_model = Model(inp, out)  # this works the same as your model
# This second model shares the same layers but outputs the flatten layer.
# It doesn't need to be compiled, because we won't train it; it just
# exposes the output of an intermediate layer.
flatten_model = Model(inp, xflatten)

main_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = main_model.fit(X_train, Y_train, epochs=10, batch_size=32, verbose=1,
                         callbacks=[model_checkpoint], shuffle=True, validation_split=0.5)
To see the flatten layer's output:
flatten_model.predict(X)
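To save that output for each epoch, as the question asks, one option is a small custom callback that runs flatten_model at the end of every epoch; a sketch, assuming you want the flatten outputs for X_train written to .npy files:

import os
import numpy as np
from tensorflow.keras.callbacks import Callback

class SaveFlattenOutputs(Callback):
    """Saves flatten_model's output on some data at the end of every epoch."""
    def __init__(self, flatten_model, data, out_dir='flatten_outputs'):
        super().__init__()
        self.flatten_model = flatten_model
        self.data = data
        self.out_dir = out_dir
        os.makedirs(out_dir, exist_ok=True)

    def on_epoch_end(self, epoch, logs=None):
        # flatten_model shares its layers with main_model, so its weights
        # are always the current ones when this runs.
        outputs = self.flatten_model.predict(self.data, verbose=0)
        np.save(os.path.join(self.out_dir, 'flatten_epoch_{:02d}.npy'.format(epoch)), outputs)

history = main_model.fit(X_train, Y_train, epochs=10, batch_size=32, verbose=1,
                         callbacks=[model_checkpoint,
                                    SaveFlattenOutputs(flatten_model, X_train)],
                         shuffle=True, validation_split=0.5)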