Is it possible to see the output after a Conv2D layer in Keras? - python

I am trying to understand each layer of Keras while implementing a CNN.
For the Conv2D layer, I understand that it produces several feature maps by applying different filters to the input.
Now, my questions are:
Can I see the different filter matrices that are applied to the input image to produce the feature maps?
Can I see the value of the matrix that is generated after the Conv2D step completes?
Thanks in advance

You can get the output of a certain convolutional layer in this way:
import keras.backend as K

func = K.function([model.get_layer('input').input],
                  [model.get_layer('conv').output])
conv_output = func([numpy_input])[0]  # numpy array holding the layer's output
where 'input' and 'conv' denote the names of your input layer and convolutional layer. And you can get the weights of a certain layer like this:
conv_weights = model.get_layer('conv').get_weights()  # list of numpy arrays: [kernels, biases]
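For context, here is a minimal self-contained sketch showing where those names come from (the layer names 'input' and 'conv', the 28x28 size, and the random input are illustrative assumptions; this uses the standalone-Keras backend API as in the answer above):

import numpy as np
import keras.backend as K
from keras.layers import Input, Conv2D
from keras.models import Model

# Tiny model with explicitly named layers.
inp = Input((28, 28, 1), name='input')
out = Conv2D(8, 3, padding='same', name='conv')(inp)
model = Model(inp, out)

func = K.function([model.get_layer('input').input],
                  [model.get_layer('conv').output])

numpy_input = np.random.rand(1, 28, 28, 1)   # one fake grayscale image
conv_output = func([numpy_input])[0]          # shape (1, 28, 28, 8)
kernels, biases = model.get_layer('conv').get_weights()
print(conv_output.shape, kernels.shape, biases.shape)  # (1, 28, 28, 8) (3, 3, 1, 8) (8,)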

Related

Validation of Keras Conv2D convolution using NumPy returns different layer output

I am trying to validate the output of the first layer of my network built using standard Keras. The name of the first layer is conv2d.
I built a new Model just to get the output of the first layer, using the following code:
inter_layer = None
weights = None
biases = None

for layer in qmodel.layers:
    if layer.name == "conv2d":
        print("Found layer: " + layer.name)
        inter_layer = layer
        weights = layer.get_weights()[0]
        biases = layer.get_weights()[1]

inter_model = Model(qmodel.input, inter_layer.output)
inter_model.compile()
Then, I did the following (img_test is one of the cifar10 images):
first_layer_output = inter_model.predict(img_test)
# Get the 3x3 pixel upper left patch of the 3 channels of the input image
img_test_slice = img_test[0,:3,:3,:]
# Get only the first filter of the layer
weights_slice = weights[:,:,:,0]
# Get the bias of the first filter of the layer
bias_slice = biases[0]
# Get the 3x3 pixel upper left patch of the first channel of the output of the layer
output_slice = first_layer_output[0,:3,:3,0]
I printed the shape of each slice, and got the correct shapes:
img_test_slice: (3,3,3)
weights_slice: (3,3,3)
output_slice: (3,3)
As far as I understand, if I make this:
partial_sum = np.multiply(img_test_slice,weights_slice)
output_pixel = partial_sum.sum() + bias_slice
output_pixel should be one of the values of output_slice (the value at index [1,1], actually, because the layer has padding='same').
But.... it is not.
Perhaps I am missing something very simple about how the calculation of the convolution works, but as far as I understand, doing the elementwise multiplication and then doing the sum of all values plus the bias should be one of the output pixels of the layer.
Perhaps the output data of the layer is arranged in a different manner than the input of the layer?
The problem was the use of the get_weights method.
My model was using QKeras layers, and when you use these layers you shouldn't use get_weights to get the layer weights; instead, do something like:
for quantizer, weight in zip(layer.get_quantizers(), layer.get_weights()):
    if quantizer:
        weight = tf.constant(weight)
        weight = tf.keras.backend.eval(quantizer(weight))
If you extract the weights using this for loop, you get the real quantized weights, so now the calculations are correct.
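Putting the two pieces together, a sketch of the manual check with the dequantized weights (this reuses the question's variable names, assumes QKeras is available, and assumes img_test carries a leading batch dimension):

import numpy as np
import tensorflow as tf

# Extract the weights the QKeras layer actually computes with.
quantized = []
for quantizer, weight in zip(inter_layer.get_quantizers(), inter_layer.get_weights()):
    if quantizer:
        weight = tf.keras.backend.eval(quantizer(tf.constant(weight)))
    quantized.append(weight)
weights, biases = quantized

# Manual convolution for the output pixel at (1, 1) (padding='same').
img_test_slice = img_test[0, :3, :3, :]   # (3, 3, 3) input patch
weights_slice = weights[:, :, :, 0]       # (3, 3, 3) first filter
output_pixel = np.multiply(img_test_slice, weights_slice).sum() + biases[0]
print(np.isclose(output_pixel, first_layer_output[0, 1, 1, 0]))  # expect True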

What value do I pass in Input(shape=...) in Keras

Suppose I have a NumPy array with shape (1303, 3988, 1). What value do I need to pass to Input() so that my model learns, or do I need to reshape the array first?
I understand that your data is 1303 instances of vectors of size (3988, 1).
The answer depends on the layer that comes after the input:
If you feed it into a Conv1D layer, the input layer should be:
Input(shape=(3988, 1))
Otherwise you should squeeze the array with:
np.squeeze(your_numpy_array)
or just flatten the input after the first layer:
inputs = Input(shape=(3988, 1))
x = Flatten()(inputs)
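As a minimal sketch of the Conv1D case (the filter count, kernel size, and output layer are placeholders):

import numpy as np
from tensorflow.keras.layers import Input, Conv1D, Flatten, Dense
from tensorflow.keras.models import Model

data = np.zeros((1303, 3988, 1))          # 1303 samples of shape (3988, 1)

inputs = Input(shape=(3988, 1))           # shape excludes the batch dimension
x = Conv1D(16, 5, activation='relu')(inputs)
x = Flatten()(x)
outputs = Dense(1)(x)
model = Model(inputs, outputs)
print(model.predict(data[:2]).shape)      # (2, 1)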

Fully connected layer for each channel individually in tensorflow

Is there a way to apply a different dense layer to each channel of the input?
I.e. given an input tensor of shape [batch, height, width, channels], I would like to apply a different dense layer to each channel, thus having #channels layers, each with an input of size height*width.
From tensorflow dense docs I see that
Note: that if inputs have a rank greater than 2, then inputs is flattened prior to the initial matrix multiply by weights.
which is not the desired outcome in this situation.
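For reference, one way to get this effect (a sketch, not an official recipe; all sizes are hypothetical) is to slice the channels apart and give each its own Dense layer:

import tensorflow as tf
from tensorflow.keras import layers

height, width, channels = 16, 16, 3        # hypothetical sizes

inputs = tf.keras.Input(shape=(height, width, channels))
per_channel = []
for i in range(channels):
    # Take channel i -> (batch, height, width), flatten to (batch, height*width),
    # and give it its own Dense layer (weights are not shared across channels).
    ch = layers.Lambda(lambda t, i=i: t[:, :, :, i])(inputs)
    ch = layers.Flatten()(ch)
    per_channel.append(layers.Dense(64, activation='relu')(ch))

outputs = layers.Concatenate()(per_channel)  # (batch, channels * 64)
model = tf.keras.Model(inputs, outputs)
model.summary()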

Keras: feed images into CNN and get image output

So far, I've been practicing neural networks on numerical datasets in pandas, but now I need to create a model that will take an image as input and output a binary mask of that image.
I have my training data as numpy arrays of shape (602, 2048, 2048, 1). 602 images of dimensions 2048x2048 with one channel. The array of output masks have the same dimensions.
What I can't figure out is how to define the first layer or how to correctly feed the data into the model. I would greatly appreciate your help on this issue.
Well, this is not a "rule", but you will probably be using mostly Conv2D and related layers.
You feed everything in as numpy arrays, as usual, possibly normalizing the values first. Common options (sketched in code after the list) are:
Between 0 and 1 (just divide by 255.)
Between -1 and 1 (divide by 255., multiply by 2, subtract 1)
Caffe style: subtract from each channel a specific value to "center" the values based on their usual mean without rescaling them.
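In code, the three options look roughly like this (x_train is a placeholder name for your image array):

import numpy as np

x = x_train.astype('float32')         # x_train: your (602, 2048, 2048, 1) array

x_01 = x / 255.                       # between 0 and 1
x_11 = x / 255. * 2. - 1.             # between -1 and 1
x_c = x - x.mean(axis=(0, 1, 2))      # Caffe style: center each channel, no rescaling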
Your model should start with something like:
inputTensor = Input((2048, 2048, 1))
output = Conv2D(filters, kernel_size, ...)(inputTensor)
Or, in sequential models: model.add(Conv2D(..., input_shape=(2048, 2048, 1)))
Later, it's up to you to decide which layers to use.
Conv2D
MaxPooling2D
UpSampling2D
Whether you're going to create a linear model or if you're going to divide branches, join branches, etc. is also your call.
Models in a U-Net style should be a good start for you.
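A minimal U-Net-flavoured sketch (the filter counts and depth are arbitrary; real U-Nets go deeper):

from tensorflow.keras import layers, Model

inp = layers.Input((2048, 2048, 1))
# Encoder: shrink spatial dims, grow feature maps.
c1 = layers.Conv2D(16, 3, padding='same', activation='relu')(inp)
p1 = layers.MaxPooling2D()(c1)
c2 = layers.Conv2D(32, 3, padding='same', activation='relu')(p1)
p2 = layers.MaxPooling2D()(c2)
# Decoder: restore spatial dims, reusing encoder features (skip connections).
u2 = layers.UpSampling2D()(p2)
u2 = layers.concatenate([u2, c2])
c3 = layers.Conv2D(32, 3, padding='same', activation='relu')(u2)
u1 = layers.UpSampling2D()(c3)
u1 = layers.concatenate([u1, c1])
c4 = layers.Conv2D(16, 3, padding='same', activation='relu')(u1)
# A 1x1 convolution with sigmoid gives a per-pixel binary mask.
out = layers.Conv2D(1, 1, activation='sigmoid')(c4)
model = Model(inp, out)
model.compile(optimizer='adam', loss='binary_crossentropy')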
What you can't do:
Don't use Flatten layers (actually you can, if you later reshape the output to have image dimensions... but why?)
Don't use Global Pooling layers (you don't want to sacrifice your spatial dimensions)

How to pass a pair of images first through a conv net and then through a recurrent net in Keras?

I would like to compare two images with both a convolutional and a recurrent network. First I want to pass my first image through a VGG-like stack and feed the result into the first RNN input. Then the second image should pass through THE SAME VGG stack and go into a second input of the RNN.
How do I implement this topology with Keras?
The recurrent network should remember the first image while processing the second.
UPDATE
Suppose I have two inputs:
input1 = layers.Input(...)
input2 = layers.Input(...)
Currently I have two VGG branches:
x1 = vgg_stack(...)(input1)
x2 = vgg_stack(...)(input2)
x = layers.concatenate([x1, x2])
x = final_MLP(...)(x)
How would I replace this with a single vgg_stack applied to both inputs, with the results then passed to an RNN?
You should try the TimeDistributed wrapper. You can find the docs here.
It basically treats the first dimension after the batch as a 'temporal dimension' and applies the layer (or model) that you give as an argument to every temporal step. So use it like this:
from keras.layers import Input, TimeDistributed

input_layer = Input((num_of_images, image_dims...))
# m_cnn is your VGG-like model, taking one image as input.
layer1 = TimeDistributed(m_cnn)(input_layer)
layer2 = YourRNNLayer(...)(layer1)
I hope this makes sense to you :)
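For completeness, a minimal end-to-end sketch of the topology (the stand-in CNN, layer sizes, and the binary output are all assumptions; the key point is that the single m_cnn instance is shared across both time steps):

from tensorflow.keras import layers, Model

num_images, h, w, c = 2, 224, 224, 3       # hypothetical sizes

# Stand-in for the VGG-like stack; any CNN mapping one image to a vector works.
cnn_in = layers.Input((h, w, c))
x = layers.Conv2D(32, 3, activation='relu')(cnn_in)
x = layers.GlobalAveragePooling2D()(x)
m_cnn = Model(cnn_in, x)

# TimeDistributed applies the SAME m_cnn (shared weights) to each image in turn.
seq_in = layers.Input((num_images, h, w, c))
features = layers.TimeDistributed(m_cnn)(seq_in)   # (batch, num_images, 32)
rnn_out = layers.LSTM(64)(features)                # sees image 1, then image 2
out = layers.Dense(1, activation='sigmoid')(rnn_out)
model = Model(seq_in, out)
model.summary()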
