How can I fix this problem with Keras's Conv2D layer? - python

I have created the following neural network:
model = keras.Sequential()
model.add(layers.Conv2D(3, (3,3), activation="relu", padding="same", input_shape=constants.GRID_SHAPE))
model.add(layers.MaxPooling2D((3,3)))
model.add(layers.Flatten())
model.add(layers.Dense(constants.NUM_ACTIONS), activation="softmax")
where constants.GRID_SHAPE is (4,12).
I get the following error:
ValueError: Input 0 of layer "conv2d" is incompatible with the layer: expected min_ndim=4, found ndim=3. Full shape received: (None, 4, 12)
How can I fix this problem?

Make sure you have a 3D input shape (excluding the batch size) if you plan to use the Conv2D layer; currently you have a 2D input shape. Also make sure the softmax activation is passed to the Dense layer itself rather than to model.add:
import tensorflow as tf
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(3, (3,3), activation="relu", padding="same", input_shape=(4, 12, 1)))
model.add(tf.keras.layers.MaxPooling2D((3,3)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(5, activation="softmax"))
If your input data has the shape (samples, 4, 12), you can use data = tf.expand_dims(data, axis=-1) to add a channels dimension and make it compatible with the Conv2D layer.
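For instance, a minimal sketch (the random tensor here is just a stand-in for your actual data):
import tensorflow as tf
data = tf.random.normal((8, 4, 12))   # hypothetical batch of 8 grids
data = tf.expand_dims(data, axis=-1)  # add a channels dimension
print(data.shape)                     # (8, 4, 12, 1)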
If you do not want to add a new dimension, you could also simply use a Conv1D layer:
import tensorflow as tf
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv1D(3, 3, activation="relu", padding="same", input_shape=(4, 12)))
model.add(tf.keras.layers.MaxPooling1D(3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(5, activation="softmax"))
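Note that Conv1D then convolves along the first dimension (the 4 rows) and treats the 12 columns as input channels; whether that is appropriate depends on what your grid represents.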

Related

Input 0 of layer conv1d_39 is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (None, 64)

I want to build a CNN model with a convolutional layer followed by fully connected layers followed by another convolutional layer, but I get an error.
#defining model
model=Sequential()
#part 3 CNN followed by fully connected followed by CNN
#adding convolution layer
model.add(Conv1D(32, 3, activation='relu', padding='same',
                 input_shape=(X_train.shape[1], 1)))
#adding fully connected layer
model.add(Flatten())
model.add(Dense(256,activation='relu'))
model.add(Dense(128,activation='relu'))
model.add(Dense(32,activation='relu'))
#adding convolution layer
model.add(Conv1D(64,3, activation='relu', padding='same'))
#adding pooling layer
model.add(MaxPool1D(pool_size=(2,), strides=2, padding='same'))
#adding output layer
model.add(Dense(2,activation='softmax'))
#compiling the model
model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
the error :
ValueError: Input 0 of layer conv1d_39 is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (None, 64)
The Conv1D layer expects 3D input, but the output of model.add(Dense(32,activation='relu')) is 2D, and that is what is being passed into model.add(Conv1D(64,3, activation='relu', padding='same')). You can use a Reshape layer to avoid the error:
model.add(Conv1D(32, 3, activation='relu', padding='same',
                 input_shape=(X_train.shape[1], 1)))
model.add(Flatten())
model.add(Dense(256,activation='relu'))
model.add(Dense(128,activation='relu'))
model.add(Dense(32,activation='relu'))
#adding Reshape layer
model.add(Reshape((1,32)))
model.add(Conv1D(64,3, activation='relu', padding='same'))
model.add(MaxPool1D(pool_size=(2,), strides=2, padding='same'))
model.add(Dense(2,activation='softmax'))
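Note that after Reshape((1,32)) the sequence length is 1, so the final Dense layer produces output of shape (None, 1, 2); if your labels have shape (None,), you may also want a Flatten() before the output layer so the model outputs shape (None, 2).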

ValueError: Input 0 of layer lstm_5 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 43264)

Hello guys, I am trying to build a model with AlexNet + LSTM using raw images as input,
but I got an error like this:
ValueError: Input 0 of layer lstm_5 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 43264)
my model code:
model = tf.keras.models.Sequential([
    # 1st conv
    tf.keras.layers.Conv2D(96, (11,11), strides=(4,4), activation='relu', input_shape=(227, 227, 3)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(2, strides=(2,2)),
    # 2nd conv
    tf.keras.layers.Conv2D(256, (11,11), strides=(1,1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    # 3rd conv
    tf.keras.layers.Conv2D(384, (3,3), strides=(1,1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    # 4th conv
    tf.keras.layers.Conv2D(384, (3,3), strides=(1,1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    # 5th conv
    tf.keras.layers.Conv2D(256, (3,3), strides=(1,1), activation='relu', padding="same"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(2, strides=(2,2)),
    # To Flatten layer
    tf.keras.layers.Flatten(),
    # LSTM layer
    tf.keras.layers.LSTM(3),
    # To FC layer 1
    tf.keras.layers.Dense(4096, activation='relu'),
    # add dropout 0.5 ==> tf.keras.layers.Dropout(0.5),
    # To FC layer 2
    tf.keras.layers.Dense(4096, activation='relu'),
    # add dropout 0.5 ==> tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(output_class_units, activation='softmax')
])
But when I run it with only the AlexNet part it works fine, so I think the problem is in the LSTM layer, but I still have no clue how to fix it.
I'm kinda new to this, so I hope anyone can help me fix it.
Thank you so much!
Let's understand the error:
ValueError: Input 0 of layer lstm_5 is incompatible with the layer:
expected ndim=3, found ndim=2. Full shape received: (None, 43264)
LSTM expects 3D input: the first dim is the batch_size, the second dim is the time steps, and the third dim is your data (the features).
The output of tf.keras.layers.Flatten() is a 2D tensor, where the first dim is the batch_size and the second dim is your data. There is no time dimension here.
To achieve what you want, you should wrap your convolutional layers in a TimeDistributed layer, as in the sketch below.
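A minimal sketch of that idea, assuming the input is a sequence of frames (time_steps and output_class_units are placeholder values here):
import tensorflow as tf
time_steps, output_class_units = 10, 5  # hypothetical values
model = tf.keras.models.Sequential([
    # every frame passes through the same Conv2D weights
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Conv2D(96, (11,11), strides=(4,4), activation='relu'),
        input_shape=(time_steps, 227, 227, 3)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.MaxPooling2D(2, strides=(2,2))),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Flatten()),  # -> (batch, time, features)
    tf.keras.layers.LSTM(3),  # consumes the time dimension
    tf.keras.layers.Dense(output_class_units, activation='softmax')
])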

Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: (None, 1)

I am trying to get the prediction of my model
prediction = model.predict(validation_names)
print(prediction)
but I get the following error:
ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: (None, 1)
I understand that this is because the model expects input with 4 dimensions.
Model:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3,3), activation='relu',
                           input_shape=(300, 300, 3)),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
How can I process the prediction data to solve this problem?
Conv2D expects a 4+D tensor with shape batch_shape + (channels, rows, cols) if data_format='channels_first', or batch_shape + (rows, cols, channels) if data_format='channels_last'.
Working sample code:
# The inputs are 28x28 RGB images with `channels_last` and the batch
# size is 4.
import tensorflow as tf
input_shape = (4, 28, 28, 3)
x = tf.random.normal(input_shape)
y = tf.keras.layers.Conv2D(2, 3, activation='relu', input_shape=input_shape[1:])(x)
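Applied to the question: each image must be loaded, resized to (300, 300), and given a batch dimension before calling model.predict. A minimal sketch, assuming the inputs are image files on disk (the filename is a placeholder):
import numpy as np
import tensorflow as tf
img = tf.keras.preprocessing.image.load_img('image.jpg', target_size=(300, 300))  # hypothetical file
x = tf.keras.preprocessing.image.img_to_array(img)  # (300, 300, 3)
x = np.expand_dims(x, axis=0)                       # (1, 300, 300, 3)
prediction = model.predict(x)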

Combining CNN and bidirectional LSTM

I am trying to combine CNN and LSTM for image classification.
I tried the following code and I am getting an error. I have 4 classes on which I want to train and test.
Following is the code:
from keras.models import Sequential
from keras.layers import LSTM,Conv2D,MaxPooling2D,Dense,Dropout,Input,Bidirectional,Softmax,TimeDistributed
input_shape = (200,300,3)
Model = Sequential()
Model.add(TimeDistributed(Conv2D(
    filters=16, kernel_size=(12, 16), activation='relu', input_shape=input_shape)))
Model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2), strides=2)))
Model.add(TimeDistributed(Conv2D(
    filters=24, kernel_size=(8, 12), activation='relu')))
Model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2), strides=2)))
Model.add(TimeDistributed(Conv2D(
    filters=32, kernel_size=(5, 7), activation='relu')))
Model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2), strides=2)))
Model.add(Bidirectional(LSTM((10),return_sequences=True)))
Model.add(Dense(64,activation='relu'))
Model.add(Dropout(0.5))
Model.add(Softmax(4))
Model.compile(loss='sparse_categorical_crossentropy',optimizer='adam')
Model.build(input_shape)
I am getting the following error:
"Input tensor must be of rank 3, 4 or 5 but was {}.".format(n + 2))
ValueError: Input tensor must be of rank 3, 4 or 5 but was 2.
I found a lot of problems in the code:
- your data is 4D, so plain Conv2D layers are fine; TimeDistributed is not needed
- your output is 2D, so set return_sequences=False in the last LSTM cell
- your last layers are very messy: there is no need to put a dropout between a layer's output and its activation
- you need categorical_crossentropy, not sparse_categorical_crossentropy, because your target is one-hot encoded
- LSTM expects 3D data, so you need to go from 4D (the output of the convolutions) to 3D. There are two possibilities you can adopt: 1) reshape to (batch_size, H, W * channel); 2) reshape to (batch_size, W, H * channel). Either way, you have 3D data to use inside your LSTM
Here is a full model example:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, LSTM, Bidirectional, Dense, Lambda, Reshape, Permute

def ReshapeLayer(x):
    shape = x.shape
    # possibility 1: (H, W*channel)
    reshape = Reshape((shape[1], shape[2]*shape[3]))(x)
    # possibility 2: (W, H*channel)
    # transpose = Permute((2,1,3))(x)
    # reshape = Reshape((shape[2], shape[1]*shape[3]))(transpose)
    return reshape
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(12, 16), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2),strides=2))
model.add(Conv2D(filters=24, kernel_size=(8, 12), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2),strides=2))
model.add(Conv2D(filters=32, kernel_size=(5, 7), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2),strides=2))
model.add(Lambda(ReshapeLayer)) # <========== pass from 4D to 3D
model.add(Bidirectional(LSTM(10, activation='relu', return_sequences=False)))
model.add(Dense(nclasses,activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='adam')
model.summary()
Here is the running notebook.

Adding LSTM layers before the softmax layer

I would like to add an LSTM layer before the softmax layer so that I can keep track of a sequence's context and use it for prediction. Following is my implementation, but I get the following error every time. Please help me solve it.
ValueError: Input 0 is incompatible with layer lstm_1: expected ndim=3, found ndim=2
common_model = Sequential()
common_model.add(Conv2D(32, (3, 3), input_shape=self.state_size, padding='same', activation='relu'))
common_model.add(Dropout(0.2))
common_model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
common_model.add(MaxPooling2D(pool_size=(2, 2)))
common_model.add(Flatten())
common_model.add(Dense(512, activation='relu'))
common_model.add(Dropout(0.5))
common_model.add(Dense(512, activation='relu'))
common_model.add(Dropout(0.5))
common_model.add(Dense(512, activation='relu'))
common_model.add(Dropout(0.5))
agent_model = Sequential()
agent_model.add(common_model)
agent_model.add(LSTM(512, return_sequences=False))
agent_model.add(Dense(self.action_size, activation='softmax'))
agent_model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=self.agent_learning_rate))
critic_model = Sequential()
critic_model.add(common_model)
critic_model.add(Dense(1, activation='linear'))
critic_model.compile(loss="mse", optimizer=Adam(lr=self.critic_learning_rate))
I still don't quite understand the purpose of appending an LSTM after Dense layers, but the error can be explained:
in Keras, LSTM accepts an input tensor of shape (?, m, n), which has 3 dims, while the output of Dense has shape (?, p), which has only 2 dims.
You may want to try an Embedding or Reshape layer, for example:
model.add(Embedding(512, 64, input_length=512))
or
model.add(Reshape((512, 1)))
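With Reshape((512, 1)), the 512 activations of the preceding Dense(512) layer are treated as a sequence of 512 timesteps with one feature each, giving the LSTM the 3D input it needs; note that a Reshape target must contain exactly as many elements as the layer's input.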
It is also good to check some examples of using LSTM: https://github.com/keras-team/keras/tree/master/examples
