ValueError: Shapes (None, 1) and (None, 64) are incompatible (Keras, Python)

I'm trying to build a sequential model. I have 32 features as the input dimension and it's a classification problem.
This is my model:
# Create an ANN using Keras with the TensorFlow backend
from keras.wrappers.scikit_learn import KerasClassifier
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import Adam, SGD

nb_epoch = 200
batch_size = 64
input_d = X_train.shape[1]

model = Sequential()
model.add(Dense(512, activation='relu', input_dim=input_d))
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu', input_dim=input_d))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.3))
model.add(Activation('softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
rms = 'rmsprop'
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
Both the train and test sets have 32 features. I get ValueError: Shapes (None, 1) and (None, 64) are incompatible whenever I try to fit the model, but I have no idea why.
Many thanks.

Your labels have shape (None, 1), but the model's last layer outputs (None, 64): the final Dense(64) feeds straight into the softmax Activation, so the output never matches the target shape. You need to add a Dense layer at the end whose number of neurons matches your targets. Note that a softmax over a single neuron always outputs 1.0, so with one output unit use a sigmoid activation instead:
model = Sequential()
model.add(Dense(512, activation='relu', input_dim=input_d))
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu', input_dim=input_d))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))  # a 1-unit softmax would always output 1.0
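With a single sigmoid unit the model should also be compiled with binary_crossentropy rather than categorical_crossentropy. A minimal sketch, reusing sgd from the question (that y_train holds 0/1 labels is an assumption):
model.compile(loss='binary_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
model.fit(X_train, y_train,
          epochs=nb_epoch,
          batch_size=batch_size)
If there are more than two classes, keep categorical_crossentropy, make the last layer Dense(num_classes, activation='softmax'), and one-hot encode the labels (or use sparse_categorical_crossentropy with integer labels).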

Related

How many layers should I stack in a sequential model?

I am trying to train a sequential model using the LSTM layer.
The shapes of the training sequence data are as follows:
x = np.array(sequences)
y = to_categorical(labels).astype(int)
x.shape => (1800, 34, 48)
y.shape => (1800, 20)
After that, I build a sequential model and try to stack LSTM and Dense layers, but I don't know how many of them to stack.
First, I did something like this:
model = Sequential()
model.add(LSTM(64, return_sequences=True, activation='relu', input_shape=x_train.shape[1:3]))
model.add(LSTM(128, return_sequences=True, activation='relu'))
model.add(LSTM(64, return_sequences=False, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(actions.shape[0], activation='softmax'))
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['acc'])
model.summary()
However, this doesn't seem to fit my case, since I just followed someone else's code.
How many layers should I stack in a sequential model?
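There is no universal answer; a common heuristic is to start with one or two recurrent layers and only add more while validation accuracy keeps improving. As a hedged baseline sketch for the shapes in the question (x of shape (1800, 34, 48), y of shape (1800, 20); the layer width of 64 is an illustrative assumption):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(64, return_sequences=True, input_shape=(34, 48)))  # the default tanh is usually more stable than relu in LSTMs
model.add(LSTM(64))  # last recurrent layer returns only the final state
model.add(Dense(20, activation='softmax'))  # one unit per class
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])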

How to give a sequence of numpy arrays as input to a CNN

How can I reshape a sequence of arrays of shape (90, 30, 1662)? That is, 90 arrays with 30 frames each and 1662 keypoints per frame; the 90 arrays are videos (30 frames per video), one word per video.
x_train, x_test, y_train, y_test=train_test_split(x, y, test_size=0.05)
x_train.shape  # -> (85, 30, 1662)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.callbacks import TensorBoard
model = Sequential()
model.add(LSTM(64, return_sequences=True, activation='relu', input_shape=(30,1662)))
model.add(LSTM(128, return_sequences=True, activation='relu'))
model.add(LSTM(64, return_sequences=False, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(actions.shape[0], activation='softmax'))
How can I add a CNN before the LSTM?
Reference: https://machinelearningmastery.com/cnn-long-short-term-memory-networks/
You can build a CNN model first (with input_shape=(30, 90, 1662)) and wrap it so the LSTM model can consume the CNN's per-frame output.
It will look like this:
cnn = Sequential()
cnn.add(Conv2D(your_output_size, (your_filter_size, your_filter_size),
               activation='relu', padding='same', input_shape=(30, 90, 1662)))
cnn.add(MaxPooling2D(pool_size=(2, 2)))
cnn.add(Flatten())

model = Sequential()
model.add(TimeDistributed(cnn, ...))  # apply the CNN to each timestep before the LSTM
model.add(LSTM(..))
model.add(Dense(...))
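As a hedged, runnable version of that sketch, under the assumption that each frame's 1662 keypoint values can be viewed as a (554, 3, 1) "image" (554 keypoints x 3 coordinates; the filter count of 32, LSTM width of 64, and 20-class output are likewise illustrative assumptions):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten,
                                     TimeDistributed, LSTM, Dense, Reshape)

frame_shape = (554, 3, 1)  # assumption: 554 * 3 = 1662 keypoint values per frame

# per-frame CNN, applied to one frame at a time
cnn = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=frame_shape),
    MaxPooling2D(pool_size=(2, 1)),  # pool only along the keypoint axis (width is just 3)
    Flatten(),
])

model = Sequential([
    Reshape((30,) + frame_shape, input_shape=(30, 1662)),  # (timesteps, h, w, channels)
    TimeDistributed(cnn),  # run the CNN on each of the 30 frames
    LSTM(64),
    Dense(20, activation='softmax'),  # assumption: 20 classes; use actions.shape[0] in practice
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])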

ValueError: Input 0 is incompatible with layer flatten_4: expected min_ndim=3, found ndim=2

I am trying to build a ResNet-50 model, but I am getting the following error: ValueError: Input 0 is incompatible with layer flatten_4: expected min_ndim=3, found ndim=2. Can anyone help me?
This is my code:
from keras.models import Sequential
from keras.layers import Flatten, Dense, Dropout, BatchNormalization
input_shape = (224, 224, 3)
model = Sequential()
model.add(ResNet50(include_top=False,
                   input_tensor=None,
                   input_shape=input_shape,
                   pooling='avg',
                   classes=2,
                   weights=resnet_weights_path))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(BatchNormalization())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))
model.layers[0].trainable = False
model.summary()
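A hedged note on the likely cause: with pooling='avg', ResNet50 already applies global average pooling and returns a 2-D tensor of shape (None, 2048), so the Flatten after it sees ndim=2 where it requires at least 3 (classes=2 is also ignored when include_top=False). Removing the Flatten, or passing pooling=None and keeping it, should clear the error. A minimal sketch of the first option:
model = Sequential()
model.add(ResNet50(include_top=False,
                   input_shape=input_shape,
                   pooling='avg',  # output is already flat: (None, 2048)
                   weights=resnet_weights_path))
model.add(Dense(512, activation='relu'))  # Flatten removed; the rest of the head is unchanged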

Keras CNN model: "The added layer must be an instance of class Layer"

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.layers import Dropout, Flatten, Input, Dense

def create_model():
    def add_conv_block(model, num_filters):
        model.add(Conv2D(num_filters, 3, activation='relu', padding='same'))
        model.add(BatchNormalization())
        model.add(Conv2D(num_filters, 3, activation='relu', padding='valid'))
        model.add(MaxPooling2D(pool_size=2))
        model.add(Dropout(0.2))
        return model

    model = tf.keras.models.Sequential()
    model.add(Input(shape=(32, 32, 3)))
    model = add_conv_block(model, 32)
    model = add_conv_block(model, 64)
    model = add_conv_block(model, 128)
    model.add(Flatten())
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = create_model()
model.summary()
The solution is to use InputLayer instead of Input. InputLayer is meant to be used with Sequential models. You can also omit the InputLayer entirely and specify input_shape in the first layer of the sequential model.
Input is meant to be used with the TensorFlow Keras functional API, not the sequential API.
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.layers import Dropout, Flatten, InputLayer, Dense

def create_model():
    def add_conv_block(model, num_filters):
        model.add(Conv2D(num_filters, 3, activation='relu', padding='same'))
        model.add(BatchNormalization())
        model.add(Conv2D(num_filters, 3, activation='relu', padding='valid'))
        model.add(MaxPooling2D(pool_size=2))
        model.add(Dropout(0.2))
        return model

    model = tf.keras.models.Sequential()
    model.add(InputLayer((32, 32, 3)))
    model = add_conv_block(model, 32)
    model = add_conv_block(model, 64)
    model = add_conv_block(model, 128)
    model.add(Flatten())
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = create_model()
model.summary()
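For contrast, Input belongs in the functional API, where it creates a symbolic tensor rather than a layer; a minimal sketch with shapes mirroring the question:
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense

inputs = Input(shape=(32, 32, 3))  # returns a symbolic tensor, not a Layer instance
x = Conv2D(32, 3, activation='relu')(inputs)
x = Flatten()(x)
outputs = Dense(3, activation='softmax')(x)
model = Model(inputs, outputs)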
I think the problem is related to the TF version; however, I suggest this implementation. This way you specify input_shape in the first layer of the sequential model and avoid the problem:
import tensorflow as tf
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                                     Dropout, Flatten, Dense)

def create_model():
    def add_conv_block(model, num_filters, input_shape=None):
        if input_shape:
            model.add(Conv2D(num_filters, 3, activation='relu', padding='same', input_shape=input_shape))
        else:
            model.add(Conv2D(num_filters, 3, activation='relu', padding='same'))
        model.add(BatchNormalization())
        model.add(Conv2D(num_filters, 3, activation='relu', padding='valid'))
        model.add(MaxPooling2D(pool_size=2))
        model.add(Dropout(0.2))
        return model

    model = tf.keras.models.Sequential()
    model = add_conv_block(model, 32, input_shape=(32, 32, 3))
    model = add_conv_block(model, 64)
    model = add_conv_block(model, 128)
    model.add(Flatten())
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = create_model()
model.summary()

Saving the specific layer from within a sequential Keras model

I am building an auto-encoder and training the model so the targeted output is the same as the input.
I am using a sequential Keras model. When I use model.predict, I would like it to output the activations of a specific layer (the Dense layer with 256 units), not the final output.
This is my current model:
model = Sequential()
model.add(Dense(4096, input_dim = x.shape[1], activation = 'relu'))
model.add(Dense(2048, activation='relu'))
model.add(Dense(1024, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(1024, activation='relu'))
model.add(Dense(2048, activation='relu'))
model.add(Dense(4096, activation='relu'))
model.add(Dense(x.shape[1], activation ='sigmoid'))
model.compile(loss = 'mean_squared_error', optimizer = 'adam')
history = model.fit(data_train, data_train,
                    verbose=1,
                    epochs=10,
                    batch_size=256,
                    shuffle=True,
                    validation_data=(data_test, data_test))
After training, create a new model (model2) from your trained model (model) ending in your desired layer.
You can do so either with layer name:
(In model.summary(), the name of your Dense layer with 256 neurons is dense_5.)
from keras.models import Model
model2= Model(model.input,model.get_layer('dense_5').output)
Or with layer order:
(your dense layer with 256 neurons is fifth in model.summary())
from keras.models import Model
model2= Model(model.input,model.layers[4].output)
Then you can use predict:
preds = model2.predict(x)
layer.get_weights() returns the weights of a layer as a list of numpy arrays, which can then be saved, for example with np.save.
To set the weights from numpy arrays, layer.set_weights(weights) can be used.
You can access your layer either by name (model.get_layer(LAYER_NAME)) or by its index (model.layers[LAYER_INDEX]).
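A minimal sketch of that save/restore round trip, following the dense_5 example above (the file name weights.npy is an arbitrary choice):
import numpy as np

# get_weights() returns a list of arrays (kernel and bias for a Dense layer)
weights = model.get_layer('dense_5').get_weights()
np.save('weights.npy', np.array(weights, dtype=object), allow_pickle=True)

# later: restore them into a layer with matching shapes
restored = np.load('weights.npy', allow_pickle=True)
model.get_layer('dense_5').set_weights(list(restored))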
