I trained a UNet-based image segmentation model in tf.keras which predicts if and where an object is in a given image. I train with an input shape of (None, 256, 256, 1) and output a (None, 256, 256, 3) shaped prediction.
I now want to predict larger images (e.g. (520, 696)) with the same model. I am aware that one can change the input shape of the model to (None, None, None, 1). However, it still only seems to handle square images: for the image mentioned above it raises a dimensionality error because the shapes don't match (520 != 696).
Does anyone know how to avoid this or have a working function to stitch together smaller square outputs?
Steps to error:
img = skimage.io.imread(X) # shaped (520, 696)
pred = model.predict(img[None,...,None])
InvalidArgumentError: _MklConcatOp : Dimensions of inputs should match: shape[0][1]= 64 vs. shape[1][1] = 65
[[{{node concatenate_4/concat}}]]
I found a solution: because I trained a UNet (with concatenation layers after upsampling), it can only combine feature maps whose sizes line up, e.g. powers of two such as 256 or 512. I therefore pad the input up to the next power of two before prediction and crop the padding off the output afterwards.
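A minimal sketch of that workaround (assuming the model has been rebuilt with input shape (None, None, None, 1); the helper name is made up):

import numpy as np

def predict_padded(model, img):
    """Pad a 2D grayscale image up to the next power of two, predict, then crop."""
    h, w = img.shape[:2]
    H = 2 ** int(np.ceil(np.log2(h)))
    W = 2 ** int(np.ceil(np.log2(w)))
    padded = np.pad(img, ((0, H - h), (0, W - w)), mode='reflect')
    pred = model.predict(padded[None, ..., None])   # (1, H, W, 3)
    return pred[0, :h, :w, :]                       # crop back to (h, w, 3)

# pred = predict_padded(model, skimage.io.imread(X))  # (520, 696) -> (520, 696, 3)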
I am using the Qubvel segmentation models repository (https://github.com/qubvel/segmentation_models) to train an Inception-V3-encoder based model for a binary segmentation task. I am using (256 width x 256 height) images to train the models and they work well. If I double one of the dimensions, say (256 width x 512 height), it works fine as well. However, when I adjust for the aspect ratio and resize the images to a custom dimension, say (272 width x 256 height), the model throws the following error:
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concatenation axis. Received: input_shape=[(None, 16, 18, 2048), (None, 16, 17, 768)]
Is there a way to use such custom dimensions to train these models?
Your ValueError says that a Concatenate layer inside the model is receiving two inputs with mismatched spatial dimensions.
This is caused by your aspect-ratio-based resizing: with an input size like (272 x 256), the upsampling path of the decoder produces a feature map of shape (None, 16, 18, 2048) while the encoder skip connection it is concatenated with has shape (None, 16, 17, 768), so the two cannot be joined.
Concatenate operation requires inputs with matching shapes except for
the concatenation axis.
A compatible concatenation will have inputs like (3, 256, 512, 3) and (15, 256, 512, 3) if we concatenate on axis=0: the shapes match everywhere except along the concatenation axis, and the output has shape (18, 256, 512, 3).
Clearly, with your input shapes that is not possible along any axis. Keep the height and width fixed while training, and if an image doesn't fit that size, resize it before passing it to the model. This resizing can be done as part of preprocessing, as sketched below.
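For example, a preprocessing step along these lines (a sketch assuming a tf.data pipeline of (image, mask) pairs; 256 x 256 is just the size the model was trained with):

import tensorflow as tf

def preprocess(image, mask):
    # Resize every sample to one fixed, model-compatible size.
    image = tf.image.resize(image, (256, 256), method='bilinear')
    # Nearest-neighbour resizing keeps the mask labels intact.
    mask = tf.image.resize(mask, (256, 256), method='nearest')
    return image, mask

# dataset = dataset.map(preprocess).batch(16)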
I have a dataset containing a huge number of samples, 1686663, each with 107 features, i.e. shape (1686663, 107). I'm building a neural network using Keras and want to apply a 1D convolution (Conv1D).
The input for the Conv1D is (batch size, number_features, timestep). The batch size is basically the number of samples; however, in my case I cannot use the full number of samples, which is too large for my RAM, so I selected a batch size of 512.
in_shape = (batch_size,x_train.shape[1],1)
Hence, my input shape is now (512, 107, 1).
I reshaped the training vectors to match the convolution :
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1)
When running training i get the following error:
ValueError: Input 0 of layer "sequential_10" is incompatible with the layer: expected shape=(None, 512, 107, 1), found shape=(None, 107, 1)
Could anyone tell me what I am missing here ?
When you specify the input shape, either by adding a tf.keras.Input layer as the first layer or by setting the input_shape argument directly in the first layer of your model, you must not include the batch size. So in your case it would be:
in_shape = (x_train.shape[1], 1)
The batch size is automatically set as the first dimension of your input, taking the value you pass to the batch_size argument of the fit() method.
But if you write (batch_size, x_train.shape[1], 1), the batch size gets added twice.
The error is basically saying that it expected to find (batch size, 512, 107, 1) but found (batch size, 107, 1). It was expecting that additional 512, because you added the batch size twice.
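Putting that together, a minimal sketch (the layer sizes, the loss and the y_train labels are placeholders for whatever your task needs):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(x_train.shape[1], 1)),            # (107, 1) -- no batch size here
    tf.keras.layers.Conv1D(64, kernel_size=3, activation='relu'),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),          # placeholder output head
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(x_train, y_train, batch_size=512, epochs=10)       # batch size goes to fit()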
I trained a CNN model with an input shape of (5x128x128x3)
and got trained weights for that input shape.
Now I want to use these weights to train on input data of shape (7x128x128x3).
So, this is my question:
should I only use the same shape of input?
I wonder if I can use a different input size (in this case, 7x128x128x3) for transfer learning.
ValueError: Error when checking input: expected input_1 to have shape (5, 128, 128, 3) but got array with shape (7, 128, 128, 3)
Let's break down the dimensions (5x128x128x3):
The first dimension is the batch size (which was 5 when the original model was trained). This is irrelevant and you can set it to None as pointed out in the comments to feed arbitrary sized batches to the model.
The second and third dimensions (128x128) are the width and height of the image, and you may be able to change these, but it's hard to say for sure without knowing the model architecture and which layer output you're using for transfer learning. The reason you can change them is that 2D convolutional filters are repeated across the spatial dimensions (width and height) of the image, so they remain valid for different widths and heights (assuming compatible padding). But if you change the spatial dimensions too much, the receptive fields of the layers may change in a way that hurts transfer-learning performance. E.g. if the 7th conv layer in the network can see the entire 128x128 input image in each activation (a receptive field of 128x128), then after doubling the width and height it no longer can, and the layer may not recognize certain global features.
The fourth dimension is the number of channels in the input images and you can't change this, as the filters in the first layer will have 3 weights across the depth dimension.
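In practice (a rough sketch; build_model and old_model are placeholder names for your own architecture and the already-trained model): rebuild the architecture with the batch dimension left as None, which Keras does by default, copy the trained weights across, and then any batch size works.

import tensorflow as tf

new_model = build_model(input_shape=(128, 128, 3))  # same architecture, batch dim is None
new_model.set_weights(old_model.get_weights())      # layer-wise weight shapes match, so this works

batch_of_7 = tf.random.uniform((7, 128, 128, 3))
preds = new_model.predict(batch_of_7)               # batch size 7 is no problem now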
I'm running a classification and prediction neural network using a pre-trained model with Keras.
Now, I know the expected input shape for this Keras model is (224, 224, 3), but my input has shape (180, 200, 20), and I get the following error:
ValueError: Dimension 0 in both shapes must be equal, but are 3 and 64. Shapes are [3,3,20,64] and [64,3,3,3]. for 'Assign_32' (op: 'Assign') with input shapes: [3,3,20,64], [64,3,3,3].
and here is the code:
from keras import applications
from keras.layers import Input
input_tensor = Input(shape = (180, 200, 20))
vgg_model = applications.VGG16(weights = 'imagenet', include_top = False, input_tensor = input_tensor)
vgg_model.summary()
Any idea how to get around this? Thank you
From Documentation:
input_shape: optional shape tuple, only to be specified if include_top
is False (otherwise the input shape has to be (224, 224, 3) (with
'channels_last' data format) or (3, 224, 224) (with 'channels_first'
data format). It should have exactly 3 inputs channels, and width and
height should be no smaller than 32. E.g. (200, 200, 3) would be one
valid value.
You can try to create a VGG16 from scratch instead; see "VGG16 model for Keras".
You need to resize your input image
from keras.preprocessing import image
img = image.load_img("image1.jpeg",target_size=(224,224))
If you want to learn how to do transfer learning from scratch in Keras, you can read this article, which has a step-by-step implementation:
https://medium.com/#1297rohit/transfer-learning-from-scratch-using-keras-339834b153b9
In your case, since you are not dealing with images of the right size (or number of channels), you may want to cut off parts of the VGG network while keeping the information contained in its middle layers, although I am not sure how effective that would be.
You would need to remove the first convolution layer and all the dense layers at the end, replacing them with your own layers. You would certainly need to retrain the whole network, so rather than transfer learning you would be doing a very smart initialization.
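A rough sketch of that idea (assuming tf.keras; the new first conv layer and the classification head are freshly initialized placeholders, and the reused VGG16 blocks would still need fine-tuning):

from tensorflow.keras import layers, models, applications

# Pretrained VGG16 body, used only as a source of its middle/late conv layers.
vgg = applications.VGG16(weights='imagenet', include_top=False, input_shape=(180, 200, 3))

inputs = layers.Input(shape=(180, 200, 20))
# New first conv that accepts 20 channels (mirrors block1_conv1: 64 filters, 3x3, same padding).
x = layers.Conv2D(64, (3, 3), padding='same', activation='relu', name='block1_conv1_20ch')(inputs)
# Reapply the remaining pretrained layers; without the top, VGG16 is purely sequential.
for layer in vgg.layers[2:]:
    x = layer(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation='sigmoid')(x)   # placeholder head for your task
model = models.Model(inputs=inputs, outputs=outputs)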
I do not understand why the channel dimension is not included in the output dimension of a Conv2D layer in Keras.
I have the following model:
from keras.layers import Input, Conv2D, Flatten, Dense
from keras.models import Model

def create_model():
    image = Input(shape=(128, 128, 3))
    x = Conv2D(24, kernel_size=(8, 8), strides=(2, 2), activation='relu', name='conv_1')(image)
    x = Conv2D(24, kernel_size=(8, 8), strides=(2, 2), activation='relu', name='conv_2')(x)
    x = Conv2D(24, kernel_size=(8, 8), strides=(2, 2), activation='relu', name='conv_3')(x)
    flatten = Flatten(name='flatten')(x)
    output = Dense(1, activation='relu', name='output')(flatten)
    model = Model(inputs=image, outputs=output)
    return model
model = create_model()
model.summary()
The model summary is given in the figure at the end of my question. The input layer takes RGB images with width = 128 and height = 128. The first Conv2D layer tells me the output dimension is (None, 61, 61, 24). I used a kernel size of (8, 8), a stride of (2, 2) and no padding. The values 61 = floor((128 - 8 + 2 * 0)/2 + 1) and 24 (the number of kernels/filters) make sense. But why isn't the dimension for the different channels included in the output? As far as I can see, the parameters for the 24 filters on each of the channels are included in the number of parameters. So I would expect the output dimension to be (None, 61, 61, 24, 3) or (None, 61, 61, 24 * 3). Is this just a strange notation in Keras, or am I confused about something else?
This question is asked in various forms all over the internet and has a simple answer which is often missed or confused:
SIMPLE ANSWER:
The Keras Conv2D layer, given a multi-channel input (e.g. a color image), will apply the filter across ALL the color channels and sum the results, producing the equivalent of a monochrome convolved output image.
An example, from a CIFAR-10 CNN example:
(1) You're training with the CIFAR image dataset, which is made up of 32x32 color images, i.e. each image is shape (32,32,3) (RGB = 3 channels)
(2) Your first layer of your network is a Conv2D Layer with 32 filters, each specified as 3x3, so:
Conv2D(32, (3,3), padding='same', input_shape=(32,32,3))
(3) Counter-intuitively, Keras will configure each filter as (3,3,3), i.e. a 3D volume covering the 3x3 pixels PLUS all the color channels. As a minor detail each filter has an additional weight for a BIAS value, as per normal neural network layer arithmetic.
(4) Convolution proceeds absolutely as normal, except a 3x3x3 VOLUME from the input image is convolved at each step with the 3x3x3 filter, and a single (monochrome) output value (i.e. like a pixel) is produced at each step.
(5) The result is that a Keras Conv2D convolution with a specified (3,3) filter on a (32,32,3) image produces a (32,32) result, because the actual filter used is (3,3,3).
(6) In this example, we have also specified 32 filters in the Conv2D layer, so the actual output is (32,32,32) for each input image (i.e. you might think of this as 32 images, one for each filter, each 32x32 monochrome pixels).
As a check, you can look at the count of weights (Param #) for the layer produced by model.summary():
Layer (type)         Output Shape          Param #
conv2d_1 (Conv2D)    (None, 32, 32, 32)    896
There are 32 filters, each 3x3x3 (i.e. 27 weights) plus 1 for the bias (i.e. total 28 weights each). And 32 filters x 28 weights each = 896 Parameters.
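You can reproduce that count in a couple of lines (a small sketch of the check described above):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, (3, 3), padding='same')
])
model.summary()   # conv2d: output (None, 32, 32, 32), 896 params = 32 * (3*3*3 + 1)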
Each of the (8 x 8) convolutional filters is connected to an (8 x 8) receptive field across all the channels of the image. That is why we have (61, 61, 24) as the output of the first Conv2D layer. The different channels are encoded implicitly in the weights of the 24 filters. This means that each filter does not have 8 x 8 = 64 weights but instead 8 x 8 x number of channels = 8 x 8 x 3 = 192 weights.
See this quote from CS231n:
Left: An example input volume in red (e.g. a 32x32x3 CIFAR-10 image),
and an example volume of neurons in the first Convolutional layer.
Each neuron in the convolutional layer is connected only to a local
region in the input volume spatially, but to the full depth (i.e. all
color channels). Note, there are multiple neurons (5 in this example)
along the depth, all looking at the same region in the input - see
discussion of depth columns in the text below. Right: The neurons from the
Neural Network chapter remains unchanged: They still compute a dot
product of their weights with the input followed by a non-linearity,
but their connectivity is now restricted to be local spatially.
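For the model in the question you can check this directly: 192 weights per filter plus a bias gives 24 * (8*8*3 + 1) = 4632 parameters for the first layer, and the channel dimension of the input is contracted away (a small sketch):

import tensorflow as tf

conv_1 = tf.keras.layers.Conv2D(24, kernel_size=(8, 8), strides=(2, 2), activation='relu')
out = conv_1(tf.zeros((1, 128, 128, 3)))
print(out.shape)              # (1, 61, 61, 24) -- no separate channel dimension in the output
print(conv_1.count_params())  # 4632 = 24 * (8*8*3 + 1)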
My guess is that you're misunderstanding how convolutional layers are defined.
My notation for the shape of the convolutional layer is (out_channels, in_channels, k, k), where k is the size of the kernel. out_channels is the number of filters (i.e. convolutional neurons). Consider the following image:
The 3D convolutional kernel weights in the picture slide across different data windows of A_{i-1} (i.e. the input image). Patches of 3D data of that image, of shape (in_channels, k, k), are paired with individual 3D convolutional kernels of matching dimensionality. How many such 3D kernels are there? As many as the number of output channels, out_channels. The depth dimension that each kernel adopts is the in_channels of A_{i-1}. Therefore, the in_channels dimension of A_{i-1} is contracted away by the depth-wise dot product that builds up the output tensor with out_channels channels. The precise way in which the sliding windows are constructed is defined by the sampling tuple (kernel_size, stride, padding) and results in an output tensor with spatial dimensions determined by the formula that you correctly applied.
If you want to understand more, including backpropagation and implementation take a look at this paper.
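Note that the (out_channels, in_channels, k, k) layout above is just a notational choice; tf.keras actually stores a Conv2D kernel as (k, k, in_channels, out_channels), which you can verify directly (small sketch):

import tensorflow as tf

layer = tf.keras.layers.Conv2D(24, kernel_size=(8, 8))
layer.build(input_shape=(None, 128, 128, 3))
print(layer.kernel.shape)   # (8, 8, 3, 24): kernel height, kernel width, in_channels, out_channels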
The formula you're using is correct. It may be a little confusing because many popular tutorials use a number of filters equal to the number of channels in the image. The TensorFlow/Keras implementation produces its output by computing num_input_channels * num_output_channels intermediate feature maps, one per (input channel, filter) pair. For each output channel, these per-input-channel maps are then summed, so the final output has shape (output_height, output_width, num_output_channels). Hope this clarifies Vlad's detailed answer.
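You can verify that summation directly by convolving each input channel with its own kernel slice and adding the results (a small check, not part of the original answer's code):

import numpy as np
import tensorflow as tf

x = np.random.rand(1, 32, 32, 3).astype("float32")
conv = tf.keras.layers.Conv2D(8, (3, 3), padding="same", use_bias=False)
full = conv(x)                       # all 3 input channels convolved at once

kernel = conv.kernel.numpy()         # shape (3, 3, 3, 8)
per_channel = [
    tf.nn.conv2d(x[..., c:c + 1], kernel[:, :, c:c + 1, :], strides=1, padding="SAME")
    for c in range(3)
]
print(np.allclose(full.numpy(), tf.add_n(per_channel).numpy(), atol=1e-5))   # True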