ValueError in convolutional neural network for shape input - Python

I am facing a problem with a convolutional neural network in Keras, with TensorFlow as the backend, on Anaconda Python.
While defining and compiling my CNN, the following error occurs:
def cnn_model():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), padding='same',
                     input_shape=(3, 48, 48),
                     activation='relu'))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Conv2D(64, (3, 3), padding='same',
                     activation='relu'))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Conv2D(128, (3, 3), padding='same',
                     activation='relu'))
    model.add(Conv2D(128, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(NUM_CLASSES, activation='softmax'))
    return model
The error that I get is:

  File "C:\Users\pandey\Anaconda3\lib\site-packages\keras\engine\training.py", line 113, in _standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking input: expected conv2d_10_input to have 4 dimensions, but got array with shape (0, 1)
I am using channels-first ordering in Keras and have set the data format to channels_first right at the start.
Any help is appreciated.
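The traceback shows the array reaching the model has shape (0, 1), so the input data rather than the model definition is the likely culprit. As a minimal sketch of a sanity check (the array name X_train is an assumption, not from the question):

import numpy as np
from keras import backend as K

# Match input_shape=(3, 48, 48): each image must be (channels, height, width).
K.set_image_data_format('channels_first')

# Hypothetical placeholder; replace with however the images are actually loaded.
X_train = np.random.rand(100, 3, 48, 48).astype('float32')

# The model expects a 4-D array: (samples, channels, height, width).
# A shape like (0, 1) usually means the loading step produced an empty or ragged array.
print(X_train.ndim, X_train.shape)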

Related

How to get shapes of all the layers in a model?

Consider the following model:

def create_model():
    x_1 = tf.Variable(24)
    bias_initializer = tf.keras.initializers.HeNormal()
    model = Sequential()
    model.add(Conv2D(64, (5, 5), input_shape=(28, 28, 1), activation="relu",
                     name='conv2d_1', use_bias=True, bias_initializer=bias_initializer))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, (5, 5), activation="relu", name='conv2d_2',
                     use_bias=True, bias_initializer=bias_initializer))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(120, name='dense_1', activation="relu", use_bias=True,
                    bias_initializer=bias_initializer))
    model.add(Dense(10, name='dense_2', activation="softmax", use_bias=True,
                    bias_initializer=bias_initializer))
    return model
Is there any way I can get the shape/size/dimensions of all the layers of a model?
For example, in the above model, 'conv2d_1' has a shape of (64, 1, 5, 5) while 'conv2d_2' has a shape of (32, 64, 5, 5)?
You can use model.summary(). Or you can loop through all layers and print each output shape:

for layer in model.layers:
    print(f'{layer.name} {layer.output_shape}')
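If the weight shapes from the question are what you need, rather than the output shapes, a small sketch (assuming the model has been built, e.g. by calling the create_model() above) is to read them from get_weights(). Note that Keras stores Conv2D kernels as (kernel_h, kernel_w, in_channels, filters), so the same numbers appear in a different order than (64, 1, 5, 5):

model = create_model()

for layer in model.layers:
    weights = layer.get_weights()  # [kernel, bias] for Conv2D/Dense, [] for pooling/flatten
    if weights:
        print(layer.name, [w.shape for w in weights])

# conv2d_1 -> kernel (5, 5, 1, 64), bias (64,)
# conv2d_2 -> kernel (5, 5, 64, 32), bias (32,)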

How many hidden layers are there?

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
As far as I understand, model.add(Conv2D(32, (3, 3), input_shape=input_shape)) is the input layer here and model.add(Activation('sigmoid')) is the output layer.
There are a total of 13 other layers between the input and output layers. So are there 13 hidden layers in the model, or fewer? Which layers should be counted as hidden layers?
I am confused about whether Activation, MaxPooling2D, or Dropout should each be counted as a hidden layer.
Activation functions are not hidden layers by themselves.
The layers to count are Conv2D, MaxPooling2D, Flatten, and Dense.
You can use the code below to get the details of the model architecture:

model.summary()
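As a rough sketch of that counting rule (Dropout is left out here, since whether it counts as a layer is a matter of convention):

from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Keep only the layer types the answer above counts.
real_layers = [layer for layer in model.layers
               if isinstance(layer, (Conv2D, MaxPooling2D, Flatten, Dense))]
print(len(real_layers), [layer.name for layer in real_layers])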

Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_3/MaxPool' (op: 'MaxPool') with input shapes: [?,1,148,32]

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 150, 150),padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3),padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
#sgd = optimizers.SGD(lr=0.0001, decay=1e-6, momentum=0.9)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=Adam(lr=0.001),  # Adam optimizer with 1e-3 learning rate
              metrics=['accuracy'])      # metrics to be evaluated by the model
When I compile the above code I get this error:
Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_8/MaxPool' (op: 'MaxPool') with input shapes: [?,1,75,32].
I tried with 'same' padding and it still doesn't work.
Pretty sure if you change
model.add(Conv2D(32, (3, 3), input_shape=(3, 150, 150),padding='same'))
to
model.add(Conv2D(32, (3, 3), input_shape=(150, 150, 3),padding='same'))
(you may have to change the shape of your data too)
it will work as intended.
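An alternative, if the data really is stored channel-first, is to tell Keras to interpret shapes as channels-first instead of reshaping the data. A minimal sketch, assuming the rest of the model stays as posted:

from keras import backend as K

# Interpret input_shape=(3, 150, 150) as (channels, height, width)
# for every Conv2D / MaxPooling2D layer built afterwards.
K.set_image_data_format('channels_first')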

Loading Keras model error: No OpKernel was registered to support Op 'Assign' with these attrs

I've used FloydHub to train the following model and saved it:
# Create the model
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
epochs = 50
adammax = keras.optimizers.Adamax(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='categorical_crossentropy', optimizer=adammax, metrics=['accuracy'])
print(model.summary())
When I try to load it on my PC, it works fine. But when I load it on the Raspberry Pi, I get the following error. I also tried saving just the weights and loading them, but that didn't work and I got the same error. I am using the same version of TensorFlow on the Raspberry Pi as on FloydHub.
As mentioned above, you're passing T=DT_INT64, while that is not one of the supported kernels for this op. You could check whether the int64 version is simply not shipped in the .so file, write the op kernel yourself, or cast to tf.int32 right before this op in the Python code. The last option worked well for me.
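As a minimal sketch of that last option (the tensor here is a placeholder; cast whichever int64 tensor feeds the failing op):

import tensorflow as tf

# Hypothetical example: cast an int64 tensor to int32 before it reaches
# an op whose int64 kernel is not available in the deployed build.
labels_int64 = tf.constant([1, 2, 3], dtype=tf.int64)
labels_int32 = tf.cast(labels_int64, tf.int32)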

Dimension Mismatch in VGG Keras

I want to create a VGG model with Keras.
However, the following error was displayed:
expected lstm_input_2 to have 4 dimensions, but got array with shape
(60000, 10)
I created the following sequential model:
model = Sequential()
model.add(Conv2D(16, kernel_size=(3, 3),
                 padding='same',
                 input_shape=input_shape))
model.add(Activation('relu'))
model.add(Conv2D(16, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dense(50, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Dropout(0.5))
model.add(Activation('softmax'))
Please tell me why this error is raised.
You just need to add a Flatten layer like so:
…
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # <-- this layer is missing in your code
model.add(Dense(50, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Dropout(0.5))
model.add(Activation('softmax'))
…
This transforms your last 2D layer (MaxPooling2D) into a 1-dimensional shape that you can then feed into your Dense layer.
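As a tiny illustration of what Flatten does (the input shape here is an assumption, since input_shape is not given in the question):

from keras.models import Sequential
from keras.layers import Flatten

demo = Sequential()
demo.add(Flatten(input_shape=(6, 6, 64)))  # e.g. the output of a MaxPooling2D layer
print(demo.output_shape)  # (None, 2304) -> 6 * 6 * 64 values per sample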
