I am trying to build a 1D CNN for a numerical dataset. My dataset has 520 rows and 13 features. Here is the code below.
It gives
"ValueError: Input 0 of layer sequential_21 is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (1, 13)" error.
How do I need to set the input shape, or do I have to reshape X_train? Any help is highly appreciated.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
n_features = 13
model = Sequential()
model.add(Conv1D(filters=1, kernel_size=1, activation='relu', input_shape=(1, n_features)))
model.add(Conv1D(filters=1, kernel_size=1, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=1))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=20, batch_size=1)
yhat_classes = model.predict_classes(testX, verbose=0)
The problem is that batching your NumPy dataset yields rows. Since you use batch size 1, each batch is a single row of shape (1, n_features), but Conv1D expects shape (batch_size, 1, n_features).
Adding a dimension to the dataset before splitting it should fix the problem:
X = X.reshape(-1, 1, n_features)
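A minimal end-to-end sketch of the corrected pipeline, with stand-in data in place of the real X and y, and assuming y is one-hot encoded into two columns to match the Dense(2, activation='softmax') output layer (that encoding is an assumption, since y is not shown in the question):

import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dropout, Flatten, Dense

n_features = 13
X = np.random.rand(520, n_features)           # stand-in for the real features
y = np.eye(2)[np.random.randint(0, 2, 520)]   # stand-in one-hot labels

X = X.reshape(-1, 1, n_features)              # (520, 1, 13): add a "steps" axis
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)

model = Sequential()
model.add(Conv1D(filters=1, kernel_size=1, activation='relu', input_shape=(1, n_features)))
model.add(Conv1D(filters=1, kernel_size=1, activation='relu'))
model.add(Dropout(0.5))
model.add(MaxPooling1D(pool_size=1))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=20, batch_size=1)

# predict_classes was removed in newer TensorFlow versions; taking the argmax
# of the softmax output gives the same class indices
yhat_classes = np.argmax(model.predict(X_test, verbose=0), axis=-1)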
I am starting out learning Keras and ran into an issue that does not make sense to me. I have a very simple model and I want to pass it trivial data to train on: two training examples, each 6 elements long, as input, and two 3-element arrays in one-hot-encoding format as labels. I am getting an error:
ValueError: Shapes (None, 3) and (None, 6, 3) are incompatible
I am not sure why this is or if I need to do some preprocessing of the data to be in a format Keras likes. I appreciate any help!
finalModel.add(Dense(6, input_shape=(6,1), activation='relu'))
finalModel.add(Dense(3, activation='relu'))
finalModel.add(Dense(3, activation='relu'))
finalModel.add(Dense(3, activation='sigmoid'))
finalModel.compile(loss="categorical_crossentropy",
metrics=['accuracy'])
x_train = np.asarray([[0,0,1,0,0,0],[1,0,1,0,1,0]]).astype(np.float32)
y_train = np.asarray([[1,0,0],[0,0,1]]).astype(np.float32)
finalModel.fit(x_train, y_train, epochs=5)
Your x_train has shape (BATCH_SIZE, 6), so the input_shape of your model should be (6,), not (6,1). With input_shape=(6, 1), Keras treats each sample as a length-6 sequence of 1-element vectors, and Dense layers act on the last axis, so the model's output has shape (None, 6, 3), which cannot be matched against labels of shape (None, 3). Try this instead (with the imports and model definition included so it runs end to end):
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

finalModel = Sequential()
finalModel.add(Dense(6, input_shape=(6,), activation='relu'))
finalModel.add(Dense(3, activation='relu'))
finalModel.add(Dense(3, activation='relu'))
finalModel.add(Dense(3, activation='sigmoid'))
finalModel.compile(loss="categorical_crossentropy",
                   metrics=['accuracy'])
x_train = np.asarray([[0,0,1,0,0,0],[1,0,1,0,1,0]]).astype(np.float32)
y_train = np.asarray([[1,0,0],[0,0,1]]).astype(np.float32)
finalModel.fit(x_train, y_train, epochs=5)
I'm trying to do binary classification by combining a CNN (Conv1D) with a GRU. My dataset looks like this:
X_train shape: (223461, 5)
y_train shape: (223461,)
X_train holds the feature columns and y_train holds the binary labels (0, 1).
First I convert the training data:
dataset = X_train.values
dataset=dataset[1:]
dataset = dataset.astype('float32')
dataset
and the same for y_train:
dataset_target = y_train.values
dataset_target=dataset_target[1:]
dataset_target = dataset_target.astype('float32')
dataset_target
Now the shapes are dataset.shape = (223460, 5) and dataset_target.shape = (223460,).
Then my model structure is:
verbose, epochs, batch_size = 0, 100, 64
n_timesteps, n_features, n_outputs = dataset.shape[0], dataset.shape[1], dataset_target.shape[0]
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape= (n_timesteps,n_features)))
model.add(MaxPooling1D(pool_size=2))
model.add(GRU(64))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(1, activation='sigmoid'))
opt = Adam(learning_rate=0.01)
model.compile(loss=tf.keras.losses.BinaryCrossentropy(), optimizer=opt , metrics=['accuracy'])
model.summary()
and when I fit the dataset to my model:
# fit network
model.fit(dataset, dataset_target, epochs=epochs, batch_size=batch_size, verbose=1)
# evaluate model
_, accuracy = model.evaluate(X_test, y_test, batch_size=batch_size, verbose=1)
#accuracy
I get the error: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 223460, 5), found shape=(64, 5)
Is the first axis of the dataset (223460 samples) actually time steps, and do you have 5 'channels' of data? In that case, it would help to slice the dataset along the first axis and assign a single label to each slice, for example the last y_train value belonging to that slice. n_timesteps would then be the length of a slice, and the shape of the dataset something like (n_samples, n_timesteps, 5). Basically, Conv1D expects each training sample to be 2D, but in your case it's 1D, because the first dimension is just the number of samples.
I might have interpreted the dataset the wrong way. In that case, please clarify how it works so I can fix my suggestion.
Here's the example:
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.layers import Conv1D, MaxPooling1D, GRU, \
Dropout, Flatten, Dense
from tensorflow.keras import Sequential
from tensorflow.keras.optimizers import Adam
import numpy as np
X_train = np.random.normal(0, 1, (223461, 5))
y_train = np.random.randint(0, 2, 223461)
dataset = X_train[1:]
dataset_target = y_train[1:]
n_timesteps = 10
# Slice dataset and target: each slice is n_timesteps consecutive rows,
# labelled with the last y value of that slice
n_slices = len(dataset) // n_timesteps
dataset = np.stack(np.split(dataset, n_slices)[:-1])
dataset_target = np.stack([y[-1] for y in np.split(dataset_target, n_slices)[:-1]])
Define and train the model:
def get_model(dataset, n_timesteps):
    verbose, epochs, batch_size = 0, 100, 64
    n_timesteps, n_features = dataset.shape[1], dataset.shape[2]
    model = Sequential()
    model.add(Conv1D(filters=64, kernel_size=3, activation='relu', input_shape=(n_timesteps, n_features)))
    model.add(MaxPooling1D(pool_size=2))
    model.add(GRU(64))
    model.add(Dropout(0.4))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.4))
    model.add(Dense(1, activation='sigmoid'))
    opt = Adam(learning_rate=0.01)
    model.compile(loss=tf.keras.losses.BinaryCrossentropy(), optimizer=opt, metrics=['accuracy'])
    model.summary()
    return model
verbose, epochs, batch_size = 0, 1, 64
model = get_model(dataset, n_timesteps)
model.fit(dataset, dataset_target, epochs=epochs, batch_size=batch_size, verbose=1)
Hope it helps!
I am new to programming. I am trying to classify two classes (Crash, Non-Crash) based on two features (Length, Traffic_Volume) using a 1-dimensional CNN. When I try to train the following model,
# Training and Testing Data
X_train, y_train = train[['Traffic_Volume', 'length']].values, train['Crash'].values
X_test, y_test = SH[['Traffic_Volume', 'length']].values, SH['Crash'].values
print ('Training data shape : ', X_train.shape, y_train.shape)
print ('Testing data shape : ', X_test.shape, y_test.shape)
# Training data shape : (316, 2) (316,)
# Testing data shape : (343, 2) (343,)
# Fit and Evaluate a Model
def baseline_model(n_features=343, seed=100):
    numpy.random.seed(seed)
    # set_random_seed(seed)
    tensorflow.random.set_seed(seed)
    # create model
    model = Sequential()
    model.add(Conv1D(32, 3, padding="same", input_shape=(343, 2)))
    model.add(Activation('relu'))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(2))
    model.add(Activation('softmax'))
    # Compile model
    numpy.random.seed(seed)
    tensorflow.random.set_seed(seed)
    model.compile(loss='categorical_crossentropy', optimizer='adagrad', metrics=['accuracy'])
    print(model.summary())
    return model
# Classification
n_features=2
n_classes=2
batch_size=10
from multi_adaboost_CNN import AdaBoostClassifier as Ada_CNN
n_estimators =10
epochs =1
bdt_real_test_CNN = Ada_CNN(
base_estimator=baseline_model(n_features=n_features),
n_estimators=n_estimators,
learning_rate=1,
epochs=epochs)
bdt_real_test_CNN.fit(X_train, y_train, batch_size)
y_pred_CNN = bdt_real_test_CNN.predict(X_train)
print('\n Training accuracy of bdt_real_test_CNN (AdaBoost+CNN): {}'.format(accuracy_score(bdt_real_test_CNN.predict(X_train),y_train)))
I found this ValueError:
ValueError: Input 0 of layer sequential is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (None, 2)
I want to know what I should change (data shape, n_features, n_classes, etc.) to get a working model.
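A hedged sketch of one way to get past the ndim error, assuming each row of two features should be treated by Conv1D as a length-2 sequence with a single channel (how the custom multi_adaboost_CNN wrapper handles the reshaped data is not shown in the question, so that part is left untouched):

# Add a channels axis so Conv1D receives 3-D input: (samples, steps, channels)
X_train_cnn = X_train.reshape(-1, 2, 1)   # (316, 2, 1)
X_test_cnn = X_test.reshape(-1, 2, 1)     # (343, 2, 1)
# ...and make the model's input_shape match the per-sample shape:
# model.add(Conv1D(32, 3, padding="same", input_shape=(2, 1)))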
I am trying to create a multi-channel 1D CNN for analyzing ECG signals. I have 258 twelve-lead ECGs, each 300 samples long, so my input has shape (258, 300, 12).
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=10, activation='relu', input_shape=(n_timesteps,n_features), padding='same'))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(filters=64, kernel_size=10, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(filters=64, kernel_size=10, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(filters=64, kernel_size=10, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=10, batch_size=20, verbose=1, validation_split = 0.2)
I'm running the code above and getting the following error:
ValueError: Error when checking target: expected dense_8 to have shape (2,) but got array with shape (1,)
Thanks for any help!
You are trying to train the model like this,
model.fit(X_train, y_train, epochs=10, batch_size=20, verbose=1, validation_split = 0.2)
The shape of y_train is something like (n, 1), where n is the number of samples used to train.
Now, you are building a model with last layer like this,
model.add(Dense(num_classes, activation='softmax'))
From the error message, it can be deduced that you are setting num_classes=2. So, the last layer will have 2 nodes. Such a model expects y_train to be of shape (n,2). But you are using y_train of shape (n,1).
In order to fix the error, you can change the last layer as,
num_classes = 1
model.add(Dense(num_classes, activation='sigmoid'))
Note that the activation function should be changed to sigmoid, since you are solving a binary classification problem. With a single sigmoid output, the binary_crossentropy loss you already use is the right choice.
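Alternatively, if you want to keep the two-unit softmax output, a minimal sketch (assuming y_train holds integer labels 0/1) is to one-hot encode the labels so they match the (n, 2) shape the model expects, and switch to categorical_crossentropy:

from tensorflow.keras.utils import to_categorical

y_train_onehot = to_categorical(y_train, num_classes=2)   # shape (n, 2)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train_onehot, epochs=10, batch_size=20,
                    verbose=1, validation_split=0.2)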
The error message indicates that your model expects labels of shape (2,), and I assume you are using num_classes=2. However, your labels are either 1 or 0, since the provided label shape is (1,). To solve this error, you have to change the output Dense layer of your model: it should have one neuron with a sigmoid activation function.
model.add(Dense(num_classes, activation='sigmoid')) # num_classes=1
I'm creating a CNN using Keras 2.0.8, with tensorflow backend. I'm trying to get the weight matrix of the first convolution layer as shown below:
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(3,3), input_shape=(9,9,1),
                 activation='relu', kernel_regularizer=l2(regularization_coef)))
model.add(Conv2D(filters=64, kernel_size=(3,3), activation='relu',
                 kernel_regularizer=l2(regularization_coef)))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128, activation='relu', kernel_regularizer=l2(regularization_coef)))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax', kernel_regularizer=l2(regularization_coef)))
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
model.summary()
model.fit(X_train, Y_train, batch_size=batch_size, epochs=nb_epoch,
          verbose=0, validation_split=0.1)
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
filters = model.layers[0].get_weights()[0]
print(filters.shape)
The first layer, as you can see, is a 2D convolution layer with 16 filters, kernel size (3,3), and 1 input channel. So the last line should give me a shape of (16, 1, 3, 3), but instead I get (3, 3, 1, 16). I want to visualise the weights as 16 3x3 matrices, but I'm not able to do that because of this shape problem.
Can someone please help me out?
Thanks in advance!
Keras stores Conv2D kernels with shape (kernel_height, kernel_width, input_channels, filters), which is why you see (3, 3, 1, 16). You can transpose the array to move the 16 to the beginning, and then reshape it to (16, 3, 3).
filters= model.layers[0].get_weights()[0]
print(filters.shape)
# (3,3,1,16)
filters = filters.transpose(3,0,1,2)
print(filters.shape)
# (16, 3, 3, 1)
filters = filters.reshape((16,3,3))
print(filters.shape)
# (16, 3, 3)
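If the goal is to visualise the 16 filters as 3x3 matrices, a minimal sketch using matplotlib (the plotting library is an assumption; the question does not specify one) could look like this:

import matplotlib.pyplot as plt

fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for i, ax in enumerate(axes.flat):
    ax.imshow(filters[i], cmap='gray')   # filters has shape (16, 3, 3)
    ax.set_title(f'filter {i}')
    ax.axis('off')
plt.tight_layout()
plt.show()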