I'm currently trying to build a classification model in Keras, but I keep getting a shape error. This is my model right now. Is there anything I am doing wrong?
predictors = ["Length", "Diameter", "Height", "Shucked weight", "Viscera weight", "Shell weight", "Rings"]
x_train, x_test, y_train, y_test = train_test_split(db[predictors], db["Sex"], test_size=0.2)
x_train = x_train.to_numpy()
x_test = x_test.to_numpy()
y_train = y_train.to_numpy()
y_test = y_test.to_numpy()
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(7,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='softmax'))
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = y_train[:1000]
partial_y_train = y_train[1000:]
partial_x_train.shape
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))
ValueError: Shapes (None, 1) and (None, 64) are incompatible
Data source: https://www.kaggle.com/rodolfomendes/abalone-dataset
The output of your last layer has 64 values per sample, while each label is a single value. The error occurs because your dataset has 3 classes (labels) and your model's last layer does not account for them (as mentioned by @subspring).
model = Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(7,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(3, activation='softmax'))  # the last Dense layer needs one unit per class
The label data in this dataset is not numeric:
y_train.unique()  # array(['I', 'M', 'F'], dtype=object)
You can convert the string labels to integers with LabelEncoder:
from sklearn.preprocessing import LabelEncoder

def Labels(y_train, y_test):
    LabEnc = LabelEncoder()
    LabEnc.fit(y_train)
    Enc_y_train = LabEnc.transform(y_train)
    Enc_y_test = LabEnc.transform(y_test)
    return Enc_y_train, Enc_y_test

y_train, y_test = Labels(y_train, y_test)
y_train  # array([1, 1, 2, ..., 2, 2, 0])
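Alternatively, a minimal sketch (assuming the same three classes): you can one-hot encode the integer labels with keras.utils.to_categorical and keep loss='categorical_crossentropy' instead of the sparse variant used below.

from tensorflow import keras

# integer labels 0..2 -> one-hot rows of length 3, e.g. 1 -> [0, 1, 0]
y_train_onehot = keras.utils.to_categorical(y_train, num_classes=3)
y_test_onehot = keras.utils.to_categorical(y_test, num_classes=3)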
Now convert the input data (x_train, x_test) into arrays and train the model.
x_train= np.array(x_train)
x_test = np.array(x_test)
# compile the model: sparse_categorical_crossentropy matches the
# integer-encoded labels and the softmax output
model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = y_train[:1000]
partial_y_train = y_train[1000:]
partial_x_train.shape
# train the model
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=5,
                    batch_size=512,
                    validation_data=(x_val, y_val))
Output:
Epoch 1/5
5/5 [==============================] - 2s 80ms/step - loss: 0.8610 - accuracy: 0.3302 - val_loss: 0.7966 - val_accuracy: 0.2350
Epoch 2/5
5/5 [==============================] - 0s 13ms/step - loss: 0.7997 - accuracy: 0.2563 - val_loss: 0.7491 - val_accuracy: 0.4620
Epoch 3/5
5/5 [==============================] - 0s 16ms/step - loss: 0.7917 - accuracy: 0.3315 - val_loss: 0.7883 - val_accuracy: 0.2680
Epoch 4/5
5/5 [==============================] - 0s 15ms/step - loss: 0.7949 - accuracy: 0.3405 - val_loss: 0.7499 - val_accuracy: 0.3390
Epoch 5/5
5/5 [==============================] - 0s 13ms/step - loss: 0.7884 - accuracy: 0.3306 - val_loss: 0.7605 - val_accuracy: 0.3670
I am training an autoencoder on a custom dataset in a Google Colab directory (a train folder and a test folder) using the code below (based on 1):
my_data_train_dir = train_path
labels_train = os.listdir(my_data_train_dir)
data_train = tf.keras.utils.image_dataset_from_directory(train_path, batch_size=1, image_size=(224, 224))
data_train_iterator = data_train.as_numpy_iterator()
batch_train = data_train_iterator.next()
my_data_test_dir = test_path
labels_test = os.listdir(my_data_test_dir)
data_test = tf.keras.utils.image_dataset_from_directory(test_path, batch_size=1, image_size=(224, 224))
data_test_iterator = data_test.as_numpy_iterator()
batch_test = data_test_iterator.next()
# Found 10903 files belonging to 67 classes.
# Found 1619 files belonging to 67 classes
encoder = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[224, 224, 3]),
    keras.layers.Dense(400, activation="relu"),
    keras.layers.Dense(200, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(50, activation="relu"),
    keras.layers.Dense(25, activation="relu"),
])
decoder = keras.models.Sequential([
    keras.layers.Dense(50, activation="relu", input_shape=[25]),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(200, activation="relu"),
    keras.layers.Dense(400, activation="relu"),
    keras.layers.Dense(224 * 224 * 3, activation="sigmoid"),
    keras.layers.Reshape([224, 224, 3])
])
stacked_autoencoder = keras.models.Sequential([encoder, decoder])
stacked_autoencoder.compile(loss='binary_crossentropy',
                            optimizer='adam', metrics=['accuracy'])
x_train = batch_train[0] / 255
x_test = batch_test[0] / 255
history = stacked_autoencoder.fit(x_train, x_train, epochs=10,
                                  validation_data=(x_test, x_test))
Epoch 1/10
1/1 [==============================] - 0s 77ms/step - loss: 0.5549 - accuracy: 0.9994 - val_loss: 0.7653 - val_accuracy: 0.5443
Epoch 2/10
1/1 [==============================] - 0s 51ms/step - loss: 0.5549 - accuracy: 0.9992 - val_loss: 0.7669 - val_accuracy: 0.5444
Epoch 3/10
1/1 [==============================] - 0s 51ms/step - loss: 0.5549 - accuracy: 0.9994 - val_loss: 0.7646 - val_accuracy: 0.5443
As you can see, each epoch runs only 1 iteration, even though there are 10903 training images in total (so I would expect 10903/1 = 10903 iterations per epoch). How can I solve this problem?
Thanks in advance!
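A hedged note on the likely cause, with a minimal sketch (not from the original post): data_train_iterator.next() pulls exactly one batch out of the dataset, so x_train above holds a single image and fit trains on that one sample every epoch. Passing the tf.data.Dataset itself to fit uses every file; the batch_size of 32 below is an illustrative choice.

# build batched (input, target) pairs for the autoencoder, rescaled to [0, 1]
data_train = tf.keras.utils.image_dataset_from_directory(
    train_path, batch_size=32, image_size=(224, 224))
data_train = data_train.map(lambda x, y: (x / 255.0, x / 255.0))

# now each epoch iterates over all 10903 images (ceil(10903/32) steps)
history = stacked_autoencoder.fit(data_train, epochs=10)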
Dataset description:
(1) X_train: (6000,4) shape
(2) y_train: (6000,4) shape
(3) X_validation: (2000,4) shape
(4) y_validation: (2000,4) shape
(5) X_test: (2000,4) shape
(6) y_test: (2000,4) shape
Relationship between X and Y is shown here
For single-label classification, the activation function of the last layer is softmax and the loss function is categorical_crossentropy, and I know how that loss is computed mathematically. For multi-class, multi-label classification problems, the activation function of the last layer is sigmoid and the loss function is binary_crossentropy. I want to know how this loss function is computed mathematically. It would be a great help if you could explain it.
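For reference, these are the standard definitions (not from the original post): for one sample with $C$ classes, true labels $y_i \in \{0, 1\}$ and predicted probabilities $\hat{y}_i$,

\text{categorical\_crossentropy} = -\sum_{i=1}^{C} y_i \log \hat{y}_i

\text{binary\_crossentropy} = -\frac{1}{C} \sum_{i=1}^{C} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]

Categorical crossentropy only scores the probability assigned to the one true class (y is one-hot), while binary crossentropy treats each of the C outputs as an independent yes/no decision, which is why it pairs with sigmoid for multi-label targets.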
def MinMaxScaler(data):
    numerator = data - np.min(data)
    denominator = np.max(data) - np.min(data)
    return numerator / (denominator + 1e-5)
kki = pd.read_csv(filename,names=['UE0','UE1','UE2','UE3','selected_UE0','selected_UE1','selected_UE2','selected_UE3'])
print(kki)
def LoadData(file):
    xy = np.loadtxt(file, delimiter=',', dtype=np.float32)
    print("Data set length:", len(xy))
    tr_set_size = int(len(xy) * 0.6)
    xy[:, 0:-number_of_UEs] = MinMaxScaler(xy[:, 0:-number_of_UEs])  # number_of_UEs = 4
    X_train = xy[:tr_set_size, 0:-number_of_UEs]  # 6000 rows
    y_train = xy[:tr_set_size, number_of_UEs:number_of_UEs*2]
    X_valid = xy[tr_set_size:int((tr_set_size/3) + tr_set_size), 0:-number_of_UEs]
    y_valid = xy[tr_set_size:int((tr_set_size/3) + tr_set_size), number_of_UEs:number_of_UEs*2]
    X_test = xy[int((tr_set_size/3) + tr_set_size):, 0:-number_of_UEs]
    y_test = xy[int((tr_set_size/3) + tr_set_size):, number_of_UEs:number_of_UEs*2]
    print("Training X shape:", X_train.shape)
    print("Training Y shape:", y_train.shape)
    print("Validation X shape:", X_valid.shape)
    print("Validation Y shape:", y_valid.shape)
    print("Test X shape:", X_test.shape)
    print("Test Y shape:", y_test.shape)
    return X_train, y_train, X_valid, y_valid, X_test, y_test, tr_set_size
X_train, y_train, X_valid, y_valid, X_test, y_test, tr_set_size = LoadData(filename)
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(X_train.shape[1],)))
model.add(Dense(46, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(12, activation='relu'))
model.add(Dense(4, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
hist = model.fit(X_train, y_train, epochs=5, batch_size=1, verbose=1,
                 validation_data=(X_valid, y_valid), callbacks=es)  # es: callback(s) defined elsewhere
Even as the epochs go by during training, accuracy does not improve:
Epoch 1/10
6000/6000 [==============================] - 14s 2ms/step - loss: 0.2999 - accuracy: 0.5345 - val_loss: 0.1691 - val_accuracy: 0.5465
Epoch 2/10
6000/6000 [==============================] - 14s 2ms/step - loss: 0.1554 - accuracy: 0.4883 - val_loss: 0.1228 - val_accuracy: 0.4710
Epoch 3/10
6000/6000 [==============================] - 14s 2ms/step - loss: 0.1259 - accuracy: 0.4710 - val_loss: 0.0893 - val_accuracy: 0.4910
Epoch 4/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.1094 - accuracy: 0.4990 - val_loss: 0.0918 - val_accuracy: 0.5540
Epoch 5/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0967 - accuracy: 0.5223 - val_loss: 0.0671 - val_accuracy: 0.5405
Epoch 6/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0910 - accuracy: 0.5198 - val_loss: 0.0836 - val_accuracy: 0.5380
Epoch 7/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0870 - accuracy: 0.5348 - val_loss: 0.0853 - val_accuracy: 0.5775
Epoch 8/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0859 - accuracy: 0.5518 - val_loss: 0.0515 - val_accuracy: 0.6520
Epoch 9/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0792 - accuracy: 0.5508 - val_loss: 0.0629 - val_accuracy: 0.4350
Epoch 10/10
6000/6000 [==============================] - 13s 2ms/step - loss: 0.0793 - accuracy: 0.5638 - val_loss: 0.0632 - val_accuracy: 0.6270
Mistake 1 -
If each sample has a single class label, the shapes of y_train, y_validation and y_test should be (6000,), (2000,) and (2000,) respectively.
Mistake 2 -
For multi-class (single-label) classification, the loss should be categorical_crossentropy (or sparse_categorical_crossentropy if the labels remain integer class indices rather than one-hot vectors) and the final activation should be softmax. So change these two lines like this:
model.add(Dense(4, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
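A minimal sketch of the integer-label variant (assuming y_train holds class indices 0-3 with shape (6000,)):

# sparse variant: integer class indices, no one-hot encoding needed
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])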
Suggestion -
Why are you splitting the data yourself? Use scikit-learn's train_test_split. This code will give you proper splits:
from sklearn.model_selection import train_test_split
x, x_test, y, y_test = train_test_split(xtrain, labels, test_size=0.2, train_size=0.8)
x_train, x_validation, y_train, y_validation = train_test_split(x, y, test_size=0.25, train_size=0.75)
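Note that this yields a 60/20/20 train/validation/test split: the second call takes 25% of the remaining 80%, which is 20% of the full dataset.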
I'm trying to predict the next number in a sequence.
You can see the data sample in google colab here:
https://colab.research.google.com/drive/1QnkNtIo56V9wdQ4CMTm3LRSQaa6A9VmP?usp=sharing
(51 columns: c0-c49, plus the last column 'y', which is the first value from the next row)
The data is scaled with StandardScaler:
from sklearn.preprocessing import StandardScaler

scaled_features = df.copy()
features = scaled_features[columns_a]
scaler = StandardScaler().fit(features.values)
scaled_features[columns_a] = scaler.transform(features.values)
After that, it is split into train and test data:
from sklearn.model_selection import train_test_split
train, test = train_test_split(scaled_features, test_size=0.2, shuffle=False)
and reshaped for LSTM input, which expects (samples, timesteps, features):
Y_train = train["y"]
X_train = train.drop("y", axis=1)
Y_test = test["y"]
X_test = test.drop("y", axis=1)
X_train = X_train.to_numpy()
X_test = X_test.to_numpy()
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)
X_train.shape
Creating the model:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, GRU, Dense, Dropout
from matplotlib import pyplot
model = Sequential()
model.add(LSTM(64, input_shape=(X_train.shape[1], X_train.shape[2]), activation='relu', return_sequences=True))
model.add(LSTM(32, activation='relu' ))
model.add(Dense(1))
#model.add(Dropout(0.2))
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
#print(model.summary())
model.fit(X_train, Y_train, epochs=5, batch_size=32, verbose=2)
scores = model.evaluate(X_test, Y_test, batch_size=32, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
The output:
Epoch 1/5
749/749 - 34s - loss: 1.2380 - accuracy: 0.0000e+00
Epoch 2/5
749/749 - 31s - loss: 1.2382 - accuracy: 0.0000e+00
Epoch 3/5
749/749 - 31s - loss: 1.2381 - accuracy: 0.0000e+00
Epoch 4/5
749/749 - 31s - loss: 1.2385 - accuracy: 0.0000e+00
Epoch 5/5
749/749 - 31s - loss: 1.2384 - accuracy: 0.0000e+00
Model Accuracy: 0.00%
I'm pretty new at machine learning/AI and I don't know what's wrong in the code.
Any ideas? Thank you.
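A hedged diagnostic sketch (not from the original post): with a continuous regression target, Keras's plain 'accuracy' metric counts exact matches between prediction and target, which is essentially always zero here, so the 0.00% figure is expected rather than a bug. An error metric such as MAE is more informative:

# track mean absolute error instead of exact-match accuracy
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(X_train, Y_train, epochs=5, batch_size=32, verbose=2)
scores = model.evaluate(X_test, Y_test, batch_size=32, verbose=0)
print("Model MAE: %.4f" % scores[1])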
My problem is to predict an output that has 3 class labels.
Say I have 20000 samples in my dataset, each associated with a label (0, 1, 2).
This is a multiclass classification problem.
Can I give only the labels (0, 1, 2) as input to the network and get predictions based on them?
Is the data fed to the network sufficient for it to learn and predict the output?
Please help me with your inputs.
# Below is the code
X_train, X_test, y_train, y_test = train_test_split(values_train[:, 0],
                                                    values_train[:, 1],
                                                    test_size=0.25,
                                                    random_state=42)
print(" X Training Set size is",X_train.shape )
print(" y Training Set size is",y_train.shape )
print(" X Test Set size is",X_test.shape)
print(" y Test Set size is",y_test.shape )
'X Training Set size is (165081,)'
'y Training Set size is (165081,)'
'X Test Set size is (55028,)'
'y Test Set size is (55028,)'
# convert to LSTM friendly format
X_train = X_train.reshape(len(X_train), 1, 1)
X_test = X_test.reshape(len(X_test), 1, 1)
print(X_train.shape, X_test.shape)
(165081, 1, 1) (55028, 1, 1)
# configure network
n_batch = 1
n_epoch = 100
n_neurons = 10
from keras.optimizers import SGD
opt = SGD(lr=0.01)
# design network
model = Sequential()
model.add(LSTM(n_neurons,
               batch_input_shape=(n_batch, X_train.shape[1], X_train.shape[2]),
               stateful=True))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
# fit network
for i in range(n_epoch):
    model.fit(X_train, y_train, validation_data=(X_test, y_test),
              epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()
df_actual = []
dp_predict = []
for i in range(len(X_test)):
    testX, testy = X_test[i], y_test[i]
    testX = testX.reshape(1, 1, 1)
    yhat = model.predict(testX, batch_size=1)
    df_actual.append(testy)
    dp_predict.append(yhat)
    print('>Actual=%.1f, Predicted=%.1f' % (testy, yhat))
I am not able to get correct predictions from this model.
Update:
Please find below the validation accuracy and training accuracy along with the loss.
Train on 154076 samples, validate on 66033 samples
Epoch 1/5
154076/154076 [==============================] - 289s 2ms/step - loss: 1.0033 - accuracy: 0.3816 - val_loss: 1.0018 - val_accuracy: 0.4286
Epoch 2/5
154076/154076 [==============================] - 291s 2ms/step - loss: 1.0021 - accuracy: 0.3817 - val_loss: 1.0020 - val_accuracy: 0.4286
Epoch 3/5
154076/154076 [==============================] - 293s 2ms/step - loss: 1.0018 - accuracy: 0.3804 - val_loss: 1.0014 - val_accuracy: 0.4286
Epoch 4/5
154076/154076 [==============================] - 290s 2ms/step - loss: 1.0016 - accuracy: 0.3812 - val_loss: 1.0012 - val_accuracy: 0.4286
Epoch 5/5
154076/154076 [==============================] - 290s 2ms/step - loss: 1.0015 - accuracy: 0.3814 - val_loss: 1.0012 - val_accuracy: 0.4286
Can anyone suggest what can be improved?
Note: I have normalized the input data with MinMaxScaler and used the scaled data, but there is no change in the output.
The class labels are categorical, and with categorical_crossentropy the network expects one-hot targets, not raw label values. You have to one-hot encode them, e.g. with keras.utils.to_categorical:
x = values_train[:, 0]
y = values_train[:, 1]
y = keras.utils.to_categorical(y)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=42)
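A small follow-up sketch (assuming numpy is imported as np): with one-hot targets, the model's predictions come back as per-class probability vectors, so the predicted label is recovered with argmax:

yhat = model.predict(testX, batch_size=1)      # shape (1, 3): class probabilities
predicted_label = np.argmax(yhat, axis=-1)[0]  # integer class 0, 1 or 2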
I'm building a simple neural network in Keras, like the following:
# create model
model = Sequential()
model.add(Dense(1000, input_dim=x_train.shape[1], activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='mean_squared_error', metrics=['accuracy'], optimizer='RMSprop')
# Fit the model
model.fit(x_train, y_train, epochs=20, batch_size=700, verbose=2)
# evaluate the model
scores = model.evaluate(x_test, y_test, verbose=0)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
The shape of the used data is:
x_train = (49972, 601)
y_train = (49972, 1)
My problem is that the network does not converge; the accuracy stays stuck around 0.02, as below:
Epoch 1/20
- 1s - loss: 3.2222 - acc: 0.0174
Epoch 2/20
- 1s - loss: 3.1757 - acc: 0.0187
Epoch 3/20
- 1s - loss: 3.1731 - acc: 0.0212
Epoch 4/20
- 1s - loss: 3.1721 - acc: 0.0220
Epoch 5/20
- 1s - loss: 3.1716 - acc: 0.0225
Epoch 6/20
- 1s - loss: 3.1711 - acc: 0.0235
Epoch 7/20
- 1s - loss: 3.1698 - acc: 0.0245
Epoch 8/20
- 1s - loss: 3.1690 - acc: 0.0251
Epoch 9/20
- 1s - loss: 3.1686 - acc: 0.0257
Epoch 10/20
- 1s - loss: 3.1679 - acc: 0.0261
Epoch 11/20
- 1s - loss: 3.1674 - acc: 0.0267
Epoch 12/20
- 1s - loss: 3.1667 - acc: 0.0277
Epoch 13/20
- 1s - loss: 3.1656 - acc: 0.0285
Epoch 14/20
- 1s - loss: 3.1653 - acc: 0.0288
Epoch 15/20
- 1s - loss: 3.1653 - acc: 0.0291
I used the scikit-learn library to build the same structure with the same data, and it works perfectly, showing an accuracy higher than 0.5:
model = Pipeline([
    ('classifier', MLPClassifier(hidden_layer_sizes=(1000,), activation='relu',
                                 max_iter=20, verbose=2, batch_size=700, random_state=0))
])
I'm certain that I used the same data for both models; this is how I prepare it:
def load_data():
    le = preprocessing.LabelEncoder()
    with open('_DATA_train.txt', 'rb') as fp:
        train = pickle.load(fp)
    with open('_DATA_test.txt', 'rb') as fp:
        test = pickle.load(fp)
    x_train = train[:, 0:(train.shape[1] - 1)]
    y_train = train[:, (train.shape[1] - 1)]
    y_train = le.fit_transform(y_train).reshape([-1, 1])
    x_test = test[:, 0:(test.shape[1] - 1)]
    y_test = test[:, (test.shape[1] - 1)]
    y_test = le.fit_transform(y_test).reshape([-1, 1])
    print(x_train.shape, ' ', y_train.shape)
    print(x_test.shape, ' ', y_test.shape)
    return x_train, y_train, x_test, y_test
What is the problem with the Keras structure?
Edit: it's a multi-class classification problem; y_train contains the labels [0, 1, 2, 3].
For a multiclass problem your labels should be one-hot encoded. For example, if the options are [0, 1, 2, 3] and the label is 1, then it should be [0, 1, 0, 0].
Your final layer should be a Dense layer with 4 units and a softmax activation:
model.add(Dense(4, activation='softmax'))
And your loss should be categorical_crossentropy:
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='RMSprop')
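A minimal sketch of the one-hot conversion itself (assuming integer labels 0-3 in y_train and y_test):

from tensorflow.keras.utils import to_categorical

y_train = to_categorical(y_train, num_classes=4)  # (49972, 1) -> (49972, 4)
y_test = to_categorical(y_test, num_classes=4)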