Saving Layer Values of a CNN - Keras - Python

I have created the following simple CNN in Keras (borrowed from a DeepLizard tutorial).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([
    Conv2D(filters = 10, kernel_size = (3, 3), activation = 'relu', padding = 'same', input_shape = (320, 320, 3)),
    MaxPool2D(pool_size = (2, 2), strides = 2),
    Conv2D(filters = 10, kernel_size = (3, 3), activation = 'relu', padding = 'same'),
    MaxPool2D(pool_size = (2, 2), strides = 2),
    Flatten(),
    Dense(20, activation = 'softmax'),
])
model.summary()
model.compile(optimizer = Adam(learning_rate = 0.0001), loss = 'categorical_crossentropy', metrics = ['accuracy'])
model.fit(x = train_batches, validation_data = valid_batches, epochs = 10, verbose = 2)
predictions = model.predict(x = test_batches, verbose = 0)
As you can see, I am saving the predictions generated by the model to a variable named "predictions" (model.predict actually returns a NumPy array rather than a dataframe). But I am also interested in saving the outputs of each of the MaxPool2D layers, the Conv2D layers, and the Flatten layer. Is there a way to save the outputs of those layers to dataframes/lists as well? Is there functionality for this in Keras?
Thank you!

You can use the model.get_layer() method to extract any layer of your model. See the documentation here: https://keras.io/api/models/model/#getlayer-method
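For example (a minimal sketch; layer names follow Keras's automatic naming, so check model.summary() for the exact names in your model):
first_conv = model.get_layer('conv2d')  # fetch a layer by name...
first_pool = model.get_layer(index=1)   # ...or by positional index
print(first_conv.output.shape)          # symbolic output shape of that layer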

Thank you for your responses. They led me in the right direction. Here is the solution I ended up using: I recreated the model, but configured the predictions to come from the desired layer (in this case "conv2d", the first convolutional layer). This produces a 4-D array as output, where the 1st dimension corresponds to the input image, the 2nd and 3rd dimensions are the two dimensions of a filter's output feature map, and the 4th dimension indexes the filters used in that layer (here, the 4th dimension has size 10). My next quest is to find a way to split that 4-dimensional array into separate 3-dimensional arrays, one per filter; in this case, I would be looking for 10 3-dimensional arrays, one for each of the filters used in the first convolutional layer (a sketch of that split follows the code below).
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential([
    Conv2D(filters = 10, kernel_size = (3, 3), activation = 'relu', padding = 'same', input_shape = (320, 320, 3)),
    MaxPool2D(pool_size = (2, 2), strides = 2),
    Conv2D(filters = 10, kernel_size = (3, 3), activation = 'relu', padding = 'same'),
    MaxPool2D(pool_size = (2, 2), strides = 2),
    Flatten(),
    Dense(20, activation = 'softmax'),
])
layer_name = 'conv2d'
intermediate_layer_model = Model(inputs = model.input, outputs = model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(valid_batches)
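To split that 4-D output into one 3-D array per filter, moving the filter axis to the front and listing along it works; a minimal sketch, assuming intermediate_output has shape (n_images, height, width, 10):
import numpy as np
# Move the filter axis first: (10, n_images, height, width) ...
# ... then list() splits along it into 10 arrays of shape (n_images, height, width).
per_filter = list(np.moveaxis(intermediate_output, -1, 0))
per_filter[0]  # feature maps produced by the first filter, one per input image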

Related

Different accuracy on same CNN

I have this CNN:
import numpy as np
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense
from sklearn import metrics

def cnn(trainImages, trainLabels, testImages, testLabels):
    trainImages = np.array(trainImages)
    trainLabels = np.array(trainLabels)
    testImages = np.array(testImages)
    testLabels = np.array(testLabels)
    trainImages = trainImages / 255
    testImages = testImages / 255
    model = Sequential()
    model.add(Conv2D(filters = 32, kernel_size = (3, 3), padding = 'same', activation = 'relu', input_shape = (224, 224, 3)))
    model.add(MaxPool2D(pool_size = (2, 2), strides = (2, 2)))
    model.add(Conv2D(filters = 64, kernel_size = (3, 3), padding = 'same', activation = 'relu'))
    model.add(MaxPool2D(pool_size = (2, 2), strides = (2, 2)))
    model.add(Conv2D(filters = 128, kernel_size = (3, 3), padding = 'same', activation = 'relu'))
    model.add(MaxPool2D(pool_size = (2, 2), strides = (2, 2)))
    model.add(Flatten())
    model.add(Dense(256, activation = 'relu'))
    model.add(Dense(9))
    model.compile(optimizer = 'adam', loss = tensorflow.keras.losses.SparseCategoricalCrossentropy(from_logits = True), metrics = ['accuracy'])
    model.fit(trainImages, trainLabels, epochs = 10)
    predictionResult = model.predict(testImages)
    pred = []
    for i in range(len(predictionResult)):
        pred.append(np.argmax(predictionResult[i], axis = -1))  # class with the highest logit per test image
    print('Accuracy: ', metrics.accuracy_score(testLabels, pred))
    print(metrics.classification_report(testLabels, pred))
    print(metrics.confusion_matrix(testLabels, pred))
1) I get a different accuracy every time I run the CNN, between 87% and 93%. How can I get the same accuracy on every run? I tried tensorflow.set_random_seed(), but without effect.
2) What should I improve in my network to get over 95%? The input has shape (224, 224, 3), with 2831 training images and 665 test images, and 9 output classes. It is a color recognition problem.
As I posted in a comment, one possibility for such behavior is the use of GPUs: CUDA introduces some small variability, so you can see fluctuations in accuracy between two models trained in the same way. You could try to disable the GPU:
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
and set the seed as before. However, this will limit your performance: training will take longer, since you would be using only CPUs and not your GPU.
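A sketch of a fuller seeding setup, assuming TensorFlow 2.x (where tensorflow.set_random_seed was renamed tf.random.set_seed):
import os
import random
import numpy as np
import tensorflow as tf

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # optionally hide the GPU entirely
seed = 42
random.seed(seed)         # Python's own RNG
np.random.seed(seed)      # NumPy (weight init, shuffling in some pipelines)
tf.random.set_seed(seed)  # TensorFlow ops and Keras initializers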

Concatenating parallel layers in tensorflow

I am going to implement the neural network below in TensorFlow.
[Figure: neural network with parallel layers]
I wrote the code below for it:
# Defining model input
input_ = Input(shape=(224, 224, 3))
# Defining first parallel layer
in_1 = Conv2D(filters=16, kernel_size=(3, 3), activation=relu)(input_)
conv_1 = BatchNormalization()(in_1)
conv_1 = AveragePooling2D(pool_size=(2, 2), strides=(3, 3))(conv_1)
# Defining second parallel layer
in_2 = Conv2D(filters=16, kernel_size=(5, 5), activation=relu)(input_)
conv_2 = BatchNormalization()(in_2)
conv_2 = AveragePooling2D(pool_size=(2, 2), strides=(3, 3))(conv_2)
# Defining third parallel layer
in_3 = Conv2D(filters=16, kernel_size=(5, 5), activation=relu)(input_)
conv_3 = BatchNormalization()(in_3)
conv_3 = MaxPooling2D(pool_size=(2, 2), strides=(3, 3))(conv_3)
# Defining fourth parallel layer
in_4 = Conv2D(filters=16, kernel_size=(9, 9), activation=relu)(input_)
conv_4 = BatchNormalization()(in_4)
conv_4 = MaxPooling2D(pool_size=(2, 2), strides=(3, 3))(conv_4)
# Concatenating layers
concat = Concatenate([conv_1, conv_2, conv_3, conv_4])
flat = Flatten()(concat)
out = Dense(units=4, activation=softmax)(flat)
model = Model(inputs=[in_1, in_2, in_3, in_4], outputs=[out])
model.summary()
After running the code i got error below:
TypeError: Inputs to a layer should be tensors.
Got: <tensorflow.python.keras.layers.merge.Concatenate object at 0x7febd46f6ac0>
There were various errors in your code: no padding, wrong concatenation, wrong model inputs, and the activations are referenced in a non-reproducible way. This works:
from tensorflow.keras.layers import concatenate  # please share the imports next time
from tensorflow.keras.layers import Conv2D, AveragePooling2D, MaxPooling2D, BatchNormalization, Flatten, Dense, Concatenate, Input
from tensorflow.keras import Model
# Defining model input
input_ = Input(shape=(224, 224, 3))
# Defining first parallel layer
in_1 = Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same')(input_)
conv_1 = BatchNormalization()(in_1)
conv_1 = AveragePooling2D(pool_size=(2, 2), strides=(3, 3))(conv_1)
# Defining second parallel layer
in_2 = Conv2D(filters=16, kernel_size=(5, 5), activation='relu', padding='same')(input_)
conv_2 = BatchNormalization()(in_2)
conv_2 = AveragePooling2D(pool_size=(2, 2), strides=(3, 3))(conv_2)
# Defining third parallel layer
in_3 = Conv2D(filters=16, kernel_size=(5, 5), activation='relu', padding='same')(input_)
conv_3 = BatchNormalization()(in_3)
conv_3 = MaxPooling2D(pool_size=(2, 2), strides=(3, 3))(conv_3)
# Defining fourth parallel layer
in_4 = Conv2D(filters=16, kernel_size=(9, 9), activation='relu', padding='same')(input_)
conv_4 = BatchNormalization()(in_4)
conv_4 = MaxPooling2D(pool_size=(2, 2), strides=(3, 3))(conv_4)
# Concatenating layers
concat = concatenate([conv_1, conv_2, conv_3, conv_4])
flat = Flatten()(concat)
out = Dense(units=4, activation='softmax')(flat)
model = Model(inputs=[input_], outputs=[out])
model.summary()
So you either do:
concat = Concatenate()([conv_1, conv_2, conv_3, conv_4])
or:
concat = concatenate([conv_1, conv_2, conv_3, conv_4])
Concatenate is a layer class, so it must be instantiated first and then called on the list of tensors, while concatenate is a functional helper that does both in one step. Your original Concatenate([conv_1, ...]) built a layer object without ever calling it on the tensors, which is exactly the object the TypeError complains about.

Get the output of just bottleneck layer from autoencoder

I'm new to autoencoders. I have built a simple convolutional autoencoder, shown below:
import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Reshape, UpSampling2D

# ENCODER
input_img = Input(shape=(64, 64, 1))
encode1 = Conv2D(32, (3, 3), activation=tf.nn.leaky_relu, padding='same')(input_img)
encode2 = MaxPooling2D((2, 2), padding='same')(encode1)
l = Flatten()(encode2)
l = Dense(100, activation='linear')(l)
# DECODER
d = Dense(1024, activation='linear')(l)
d = Reshape((32, 32, 1))(d)
decode3 = Conv2D(64, (3, 3), activation=tf.nn.leaky_relu, padding='same')(d)
decode4 = UpSampling2D((2, 2))(decode3)
model = models.Model(input_img, decode4)
model.compile(optimizer='adam', loss='mse')
# Train it by providing training images
model.fit(x, y, epochs=20, batch_size=16)
Now, after training this model, I want to get the output from the bottleneck layer, i.e. the Dense layer. That means if I feed an array of shape (1000, 64, 64) to the model, I want back a compressed array of shape (1000, 100).
I have tried the method shown below, but it's giving me an error.
model = Model(inputs=[x], outputs=[l])
err:
ValueError: Input tensors to a Functional must come from `tf.keras.Input`.
I have also tried some other methods, but those don't work either. Can someone tell me how I can get the compressed array back after training the model?
You need to create a separate model for the encoder. After training the whole encoder-decoder system, you can use the encoder alone for prediction. Code example:
import tensorflow as tf
from tensorflow.keras import layers, Model

# ENCODER
input_img = layers.Input(shape=(64, 64, 1))
encode1 = layers.Conv2D(32, (3, 3), activation=tf.nn.leaky_relu, padding='same')(input_img)
encode2 = layers.MaxPooling2D((2, 2), padding='same')(encode1)
l = layers.Flatten()(encode2)
encoder_output = layers.Dense(100, activation='linear')(l)
# DECODER
d = layers.Dense(1024, activation='linear')(encoder_output)
d = layers.Reshape((32, 32, 1))(d)
decode3 = layers.Conv2D(64, (3, 3), activation=tf.nn.leaky_relu, padding='same')(d)
decode4 = layers.UpSampling2D((2, 2))(decode3)
model_encoder = Model(input_img, encoder_output)
model = Model(input_img, decode4)
model.compile(optimizer='adam', loss='mse')  # compile before fit
model.fit(X, y, epochs=20, batch_size=16)
model_encoder.predict(X) should return a vector for each image.
A second approach: get the output of the intermediate (bottleneck) layer by name.
import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Reshape, UpSampling2D

# ENCODER
input_img = Input(shape=(64, 64, 1))
encode1 = Conv2D(32, (3, 3), activation=tf.nn.leaky_relu, padding='same')(input_img)
encode2 = MaxPooling2D((2, 2), padding='same')(encode1)
l = Flatten()(encode2)
bottleneck = Dense(100, activation='linear', name='bottleneck_layer')(l)
# DECODER
d = Dense(1024, activation='linear')(bottleneck)
d = Reshape((32, 32, 1))(d)
decode3 = Conv2D(64, (3, 3), activation=tf.nn.leaky_relu, padding='same')(d)
decode4 = UpSampling2D((2, 2))(decode3)
# full model
model_full = models.Model(input_img, decode4)
model_full.compile(optimizer='adam', loss='mse')
model_full.fit(x, y, epochs=20, batch_size=16)
# bottleneck model
bottleneck_output = model_full.get_layer('bottleneck_layer').output
model_bottleneck = models.Model(inputs = model_full.input, outputs = bottleneck_output)
bottleneck_predictions = model_bottleneck.predict(X_test)
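As a quick sanity check, assuming X_test holds N test images of shape (64, 64, 1), the compressed output should be one 100-dimensional vector per image:
print(bottleneck_predictions.shape)  # expected: (N, 100)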

How to store the flattened result of a CNN?

I have the following convolutional neural network to apply to images:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Convolution2D, MaxPooling2D, Flatten

classifier = Sequential()
classifier.add(Convolution2D(128, (3, 3), input_shape = (128, 128, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Convolution2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Flatten())
After applying the convolutional and max-pooling layers, I flatten the results and want to store only that result (later I want to work with it using unsupervised methods). How do I do that? The only examples I have continue the process by fitting the model, and they never store the flattened layer.
This is covered in the Keras documentation for pretrained models. See the examples about feature extraction, https://keras.io/applications/#extract-features-with-vgg16
Once you have your model, you just do:
features = model.predict(x)
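In this particular case there is no need for an intermediate model: since classifier ends with Flatten(), predict() already returns the flattened vectors. A minimal sketch, where images is assumed to be a NumPy array of shape (n, 128, 128, 3), preprocessed the same way as any training data:
import numpy as np
# predict() on a model ending in Flatten() yields shape (n, 57600) here (30 * 30 * 64).
flat_features = classifier.predict(images)
np.save('flat_features.npy', flat_features)  # store for later unsupervised work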

How do I insert a Keras layer before previously defined layers?

I'm training an autoencoder in Keras right now; below is the network structure.
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D

# target_size is assumed to be defined earlier, e.g. target_size = (128, 128)
input_img = Input(shape=(target_size[0], target_size[1], 3))
x = Conv2D(8, (3, 3), activation = 'relu', padding = 'same')(input_img)
x = MaxPooling2D((2, 2), padding = 'same')(x)
x = Conv2D(16, (3, 3), activation = 'relu', padding = 'same')(x)
x = MaxPooling2D((2, 2), padding = 'same')(x)
x = Conv2D(32, (3, 3), activation = 'relu', padding = 'same')(x)
encoded = MaxPooling2D((2, 2), padding = 'same')(x)
x = Conv2D(32, (3, 3), padding = 'same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation = 'relu', padding = 'same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation = 'relu', padding = 'same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(3, (3, 3), activation = 'sigmoid', padding = 'same')(x)
I'm having trouble thinking of a way to:
1) Insert an input layer between where "encoded" is defined and the Conv2D layer after it. The intention is to get the encodings of two different images, create a bunch of iterative "steps" between their encodings, then feed these encodings into the "decoder" half of this network to generate an image for every step. I want to make a gif of the output "morphing" from one image to the other.
2) Insert another Conv2D(...) and MaxPooling2D(...) pair right after the input layer, and a corresponding UpSampling2D(...) and Conv2D(...) pair at the end. I had this idea from NVIDIA's "Progressive Growing of GANs for Improved Quality, Stability, and Variation" paper, where they trained their GAN to generate really good images at low resolutions, then progressively added more layers at the beginning and end of the network and trained the whole network with the new layers.
Does this make sense? Please let me know if I can clarify anything; I feel like this is a very specific problem that's hard to explain over text all at once.
Thanks!
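No answer was posted here, but a hedged sketch of the first idea (latent interpolation) might look like the following. It assumes the network above is rebuilt so the decoder layers are kept in a hypothetical list decoder_layers, and that img_a and img_b are two preprocessed images of shape (target_size[0], target_size[1], 3):
import numpy as np
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model

# Encoder half: from the original input to the bottleneck tensor.
encoder = Model(input_img, encoded)

# Decoder half: rewire the (hypothetical) stored decoder layer objects
# onto a fresh Input with the bottleneck's shape.
latent_in = Input(shape=encoded.shape[1:])
x = latent_in
for layer in decoder_layers:  # e.g. [Conv2D(32, ...), UpSampling2D(...), ..., Conv2D(3, ...)]
    x = layer(x)
decoder = Model(latent_in, x)

# Encode both images, then decode a linear path between the two codes.
z_a = encoder.predict(img_a[None, ...])
z_b = encoder.predict(img_b[None, ...])
frames = [decoder.predict(z_a + t * (z_b - z_a))[0] for t in np.linspace(0.0, 1.0, num=20)]
# frames can then be written out as a gif, one decoded image per step.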
