model.fit(X_train, y_train, batch_size=batch_size,
          nb_epoch=4, validation_data=(X_test, y_test),
          show_accuracy=True)
score = model.evaluate(X_test, y_test,
                       batch_size=batch_size, show_accuracy=True, verbose=0)
model.evaluate gives a scalar output, and hence the following code doesn't work:
print("Test score", score[0])
print("Test accuracy:", score[1])
The output that I get is:
Train on 20000 samples, validate on 5000 samples
Epoch 1/4
20000/20000 [==============================] - 352s - loss: 0.4515 - val_loss: 0.4232
Epoch 2/4
20000/20000 [==============================] - 381s - loss: 0.2592 - val_loss: 0.3723
Epoch 3/4
20000/20000 [==============================] - 374s - loss: 0.1513 - val_loss: 0.4329
Epoch 4/4
20000/20000 [==============================] - 380s - loss: 0.0838 - val_loss: 0.5044
Keras version 1.0
How can I get the accuracy as well? Please help.
If you use a Sequential model, you can try:
nb_epochs = 4
history = model.fit(X_train, y_train, batch_size=batch_size,
                    nb_epoch=nb_epochs, validation_data=(X_test, y_test),
                    show_accuracy=True)
print("Test score", history.history["val_loss"][nb_epochs - 1])
print("Test acc", history.history["val_acc"][nb_epochs - 1])
Thanks Marcin, you are correct. The code needs to be like this:
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
show_accuracy serves no purpose in model.fit and needs to be removed from there.
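Putting it together, a minimal sketch of the corrected flow (same model and data as above; once metrics are set at compile time, model.evaluate returns [loss, accuracy], so the original indexing works):

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=batch_size,
          nb_epoch=4, validation_data=(X_test, y_test))
score = model.evaluate(X_test, y_test, batch_size=batch_size, verbose=0)
print("Test score:", score[0])     # loss
print("Test accuracy:", score[1])  # accuracy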
I'm currently trying to build a classification model in Keras, but I keep getting a shape error. This is my model right now. Is there anything I am doing wrong?
from sklearn.model_selection import train_test_split
from tensorflow.keras import models, layers  # assuming the tf.keras API

predictors = ["Length", "Diameter", "Height", "Shucked weight", "Viscera weight", "Shell weight", "Rings"]
x_train, x_test, y_train, y_test = train_test_split(db[predictors], db["Sex"], test_size=.2)
x_train = x_train.to_numpy()
x_test = x_test.to_numpy()
y_train = y_train.to_numpy()
y_test = y_test.to_numpy()
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(7,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(64, activation='softmax'))
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = y_train[:1000]
partial_y_train = y_train[1000:]
partial_x_train.shape
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))
ValueError: Shapes (None, 1) and (None, 64) are incompatible
Data source: https://www.kaggle.com/rodolfomendes/abalone-dataset
The output of your last layer has 64 values, while each of your labels is a single value.
This error occurs because you have 3 classes (labels) in your dataset and you are not defining those in your model's last layer (as mentioned by @subspring).
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(7,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(3))  # the last Dense layer needs one unit per class
Also, the label data in this dataset is not numeric:

y_train.unique()  # array(['I', 'M', 'F'], dtype=object)

For that, you can use LabelEncoder as below:
from sklearn.preprocessing import LabelEncoder

def Labels(y_train, y_test):
    LabEnc = LabelEncoder()
    LabEnc.fit(y_train)
    Enc_y_train = LabEnc.transform(y_train)
    Enc_y_test = LabEnc.transform(y_test)
    return Enc_y_train, Enc_y_test

y_train, y_test = Labels(y_train, y_test)
y_train # array([1, 1, 2, ..., 2, 2, 0])
Now convert the input data (x_train, x_test) to arrays and train the model.
x_train= np.array(x_train)
x_test = np.array(x_test)
# compile the model (requires: import tensorflow as tf)
model.compile(optimizer='rmsprop',
              loss=tf.keras.losses.MeanSquaredError(),
              metrics=['accuracy'])
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = y_train[:1000]
partial_y_train = y_train[1000:]
partial_x_train.shape
# train the model
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=5,
                    batch_size=512,
                    validation_data=(x_val, y_val))
Output:
Epoch 1/5
5/5 [==============================] - 2s 80ms/step - loss: 0.8610 - accuracy: 0.3302 - val_loss: 0.7966 - val_accuracy: 0.2350
Epoch 2/5
5/5 [==============================] - 0s 13ms/step - loss: 0.7997 - accuracy: 0.2563 - val_loss: 0.7491 - val_accuracy: 0.4620
Epoch 3/5
5/5 [==============================] - 0s 16ms/step - loss: 0.7917 - accuracy: 0.3315 - val_loss: 0.7883 - val_accuracy: 0.2680
Epoch 4/5
5/5 [==============================] - 0s 15ms/step - loss: 0.7949 - accuracy: 0.3405 - val_loss: 0.7499 - val_accuracy: 0.3390
Epoch 5/5
5/5 [==============================] - 0s 13ms/step - loss: 0.7884 - accuracy: 0.3306 - val_loss: 0.7605 - val_accuracy: 0.3670
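A side note on the loss: mean squared error is an unusual choice for integer class labels. A sketch of a more conventional setup for this 3-class problem (same model as above; from_logits=True because the final Dense(3) has no activation):

import tensorflow as tf

model.compile(optimizer='rmsprop',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])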
I'm trying to predict the next number in a sequence.
You can see the data sample in google colab here:
https://colab.research.google.com/drive/1QnkNtIo56V9wdQ4CMTm3LRSQaa6A9VmP?usp=sharing
(51 columns: c0-c49, plus a final 'y' column that holds the first value of the next row)
The data is scaled with StandardScaler:
from sklearn.preprocessing import StandardScaler

scaled_features = df.copy()
features = scaled_features[columns_a]
scaler = StandardScaler().fit(features.values)
features = scaler.transform(features.values)
scaled_features[columns_a] = features
After that, it is split into train and test data:
from sklearn.model_selection import train_test_split
train, test = train_test_split(scaled_features, test_size=0.2, shuffle=False)
and reshaped for LSTM input
Y_train = train["y"]
X_train = train.drop("y", axis=1)
Y_test = test["y"]
X_test = test.drop("y", axis=1)
X_train = X_train.to_numpy()
X_test = X_test.to_numpy()
# LSTM expects input of shape (samples, timesteps, features)
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)
X_train.shape
Creating the model:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, GRU, Dense, Dropout
from matplotlib import pyplot
model = Sequential()
model.add(LSTM(64, input_shape=(X_train.shape[1], X_train.shape[2]), activation='relu', return_sequences=True))
model.add(LSTM(32, activation='relu' ))
model.add(Dense(1))
#model.add(Dropout(0.2))
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
#print(model.summary())
model.fit(X_train, Y_train, epochs=5, batch_size=32, verbose=2)
scores = model.evaluate(X_test, Y_test, batch_size=32, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))
The output:
Epoch 1/5
749/749 - 34s - loss: 1.2380 - accuracy: 0.0000e+00
Epoch 2/5
749/749 - 31s - loss: 1.2382 - accuracy: 0.0000e+00
Epoch 3/5
749/749 - 31s - loss: 1.2381 - accuracy: 0.0000e+00
Epoch 4/5
749/749 - 31s - loss: 1.2385 - accuracy: 0.0000e+00
Epoch 5/5
749/749 - 31s - loss: 1.2384 - accuracy: 0.0000e+00
Model Accuracy: 0.00%
I'm pretty new to machine learning/AI and I don't know what's wrong with the code.
Any ideas? Thank you.
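One thing that stands out in the code above: accuracy is a classification metric, so it stays at 0.0 for a continuous regression target like this one. A minimal sketch of a compile/evaluate step that tracks mean absolute error instead (same model and data as in the question):

model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(X_train, Y_train, epochs=5, batch_size=32, verbose=2)
scores = model.evaluate(X_test, Y_test, batch_size=32, verbose=0)
print("Test MSE: %.4f, Test MAE: %.4f" % (scores[0], scores[1]))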
I'm training a deep learning model on 100,000 rows with an 80/20 train/test split. The data splits correctly, yet the training output shows 2242 instead of the expected sample count. Below are the training code, the model, and the output. Any help will be highly appreciated.
Training Code:
import time
start_time = time.time()

import json
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
tweet_table = cleaning_table(tweet_table)
def tokenization_tweets(dataset, features):
    tokenization = TfidfVectorizer(max_features=features)
    tokenization.fit(dataset)
    dataset_transformed = tokenization.transform(dataset).toarray()
    return dataset_transformed
def splitting(table):
    X_train, X_test, y_train, y_test = train_test_split(table.tweet, table.test, test_size=0.2, shuffle=True)
    return X_train, X_test, y_train, y_test
if __name__ == "__main__":
    tweet_table['test'] = tweet_table['Overall_Sentiment'].apply(lambda x: 1 if x == 'Positive' else (0 if x == 'Negative' else 2))

if __name__ == "__main__":
    X_train, X_test, y_train, y_test = splitting(tweet_table)
#print(tweet_table["test"].value_counts())
#print(tweet_table["Overall_Sentiment"].value_counts())
#print(list(set(y_train)))
#print(list(set(y_test)))
# Create the neural network model
def train(X_train_mod, y_train, features, shuffle, drop, layer1, layer2, epoch, lr, epsilon, validation):
    model_nn = Sequential()
    model_nn.add(Dense(layer1, input_shape=(features,), activation='relu'))
    model_nn.add(Dropout(drop))
    model_nn.add(Dense(layer2, activation='sigmoid'))
    model_nn.add(Dropout(drop))
    model_nn.add(Dense(3, activation='softmax'))
    optimizer = keras.optimizers.Adam(lr=lr, beta_1=0.9, beta_2=0.999, epsilon=epsilon, decay=0.0, amsgrad=False)
    model_nn.compile(loss='sparse_categorical_crossentropy',
                     optimizer=optimizer,
                     metrics=['accuracy'])
    model_nn.fit(np.array(X_train_mod), y_train,
                 batch_size=32,
                 epochs=epoch,
                 verbose=1,
                 validation_split=validation,
                 shuffle=shuffle)
    return model_nn
def test(X_test, model_nn):
    prediction = model_nn.predict(X_test)
    return prediction
def model1(X_train, y_train):
    features = 3500
    shuffle = True
    drop = 0.5
    layer1 = 512
    layer2 = 256
    epoch = 5
    lr = 0.001
    epsilon = None
    validation = 0.1
    X_train_mod = tokenization_tweets(X_train, features)
    model = train(X_train_mod, y_train, features, shuffle, drop, layer1, layer2, epoch, lr, epsilon, validation)
    return model
#model1(X_train, y_train)
#model11(X_train, y_train)
def save_model(model):
    # `model` is the trained model to persist
    model_json = model.to_json()  # to_json() already returns a JSON string
    with open("model.json", "w") as json_file:
        json_file.write(model_json)
    model.save_weights("model_weights.h5")
#print(len(X_train))
#print(len(y_train))
model_final = model1(X_train, y_train)
Output:
Epoch 1/5
2242/2242 [==============================] - 6s 3ms/step - loss: 0.3426 - accuracy: 0.8476 - val_loss: 0.2690 - val_accuracy: 0.8857
Epoch 2/5
2242/2242 [==============================] - 6s 3ms/step - loss: 0.2399 - accuracy: 0.9015 - val_loss: 0.2471 - val_accuracy: 0.8991
Epoch 3/5
2242/2242 [==============================] - 6s 3ms/step - loss: 0.1912 - accuracy: 0.9205 - val_loss: 0.2447 - val_accuracy: 0.9028
Epoch 4/5
2242/2242 [==============================] - 6s 3ms/step - loss: 0.1454 - accuracy: 0.9399 - val_loss: 0.2547 - val_accuracy: 0.9083
Epoch 5/5
2242/2242 [==============================] - 6s 3ms/step - loss: 0.1046 - accuracy: 0.9552 - val_loss: 0.2874 - val_accuracy: 0.9084
--- 192.1562056541443 seconds ---
Many Thanks
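Note that the 2242 in the progress bar is the number of batches per epoch, not the number of training samples: Keras 2.x progress bars count steps of size batch_size. A rough sanity check of the arithmetic (assuming close to 100,000 rows survive cleaning):

train_rows = 100_000 * 0.8         # 80% train split -> 80,000 rows
fit_rows = train_rows * (1 - 0.1)  # validation_split=0.1 -> 72,000 rows used for fitting
steps = fit_rows / 32              # batch_size=32 -> 2,250 steps per epoch
# ~2250 matches the observed 2242 once rows dropped during cleaning are accounted for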
My problem is to predict an output that has 3 class labels.
Let's say I have 20,000 samples in my dataset, each associated with a label (0, 1, 2), as this is a multiclass classification problem.
Can I give only the labels (0, 1, 2) as input to the network and get predictions based on them?
Is the data fed to the network sufficient for it to learn and predict the output?
Please help me with your inputs.
# Below is the code
X_train, X_test, y_train, y_test = train_test_split(values_train[:, 0],
                                                    values_train[:, 1],
                                                    test_size=0.25,
                                                    random_state=42)
print(" X Training Set size is",X_train.shape )
print(" y Training Set size is",y_train.shape )
print(" X Test Set size is",X_test.shape)
print(" y Test Set size is",y_test.shape )
'X Training Set size is (165081,)'
'y Training Set size is (165081,)'
'X Test Set size is (55028,)'
'y Test Set size is (55028,)'
# convert to LSTM friendly format
X_train = X_train.reshape(len(X_train), 1, 1)
X_test = X_test.reshape(len(X_test), 1, 1)
print(X_train.shape, X_test.shape)
(165081, 1, 1) (55028, 1, 1)
# configure network
n_batch = 1
n_epoch = 100
n_neurons = 10
from keras.optimizers import SGD
opt = SGD(lr=0.01)
# design network
model = Sequential()
model.add(LSTM(n_neurons,
               batch_input_shape=(n_batch, X_train.shape[1], X_train.shape[2]),
               stateful=True))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
# fit network
for i in range(n_epoch):
    model.fit(X_train, y_train, validation_data=(X_test, y_test),
              epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()
df_actual = []
dp_predict = []
for i in range(len(X_test)):
    testX, testy = X_test[i], y_test[i]
    testX = testX.reshape(1, 1, 1)
    yhat = model.predict(testX, batch_size=1)
    df_actual.append(testy)
    dp_predict.append(yhat)
    print('>Actual=%.1f, Predicted=%.1f' % (testy, yhat))
I am not able to get correct predictions from this model.
Update: please find below the validation accuracy and training accuracy along with the loss.
Train on 154076 samples, validate on 66033 samples
Epoch 1/5
154076/154076 [==============================] - 289s 2ms/step - loss: 1.0033 - accuracy: 0.3816 - val_loss: 1.0018 - val_accuracy: 0.4286
Epoch 2/5
154076/154076 [==============================] - 291s 2ms/step - loss: 1.0021 - accuracy: 0.3817 - val_loss: 1.0020 - val_accuracy: 0.4286
Epoch 3/5
154076/154076 [==============================] - 293s 2ms/step - loss: 1.0018 - accuracy: 0.3804 - val_loss: 1.0014 - val_accuracy: 0.4286
Epoch 4/5
154076/154076 [==============================] - 290s 2ms/step - loss: 1.0016 - accuracy: 0.3812 - val_loss: 1.0012 - val_accuracy: 0.4286
Epoch 5/5
154076/154076 [==============================] - 290s 2ms/step - loss: 1.0015 - accuracy: 0.3814 - val_loss: 1.0012 - val_accuracy: 0.4286
Can anyone suggest what could be improved?
Note: I have normalized the input data with MinMaxScaler and used the scaled data, but there is no change in the output.
Your class labels are categorical, and with categorical_crossentropy as the loss the network expects one-hot encoded targets rather than integer labels. You have to one-hot encode them, e.g. with keras.utils.to_categorical:
x = values_train[:, 0]
y = values_train[:, 1]
y = keras.utils.to_categorical(y)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=42)
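Alternatively, a sketch that keeps the integer labels and switches the loss instead of one-hot encoding:

x = values_train[:, 0]
y = values_train[:, 1]  # integer class ids (0, 1, 2), no one-hot needed
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=42)
# ...build the model as before (softmax output with 3 units), then:
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'])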
I want to check the performance of my model against various optimizers (sgd, rmsprop, adam, adamax, etc.), so I define a Keras sequential model and then do this:
epochs = 50
print('--sgd start---')
model.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])
checkpointer_sgd = ModelCheckpoint(filepath='my_model_sgd.h5',
                                   verbose=1, save_best_only=True)
history_sgd = model.fit(X_train, y_train,
                        validation_split=0.2, epochs=epochs, batch_size=32,
                        callbacks=[checkpointer_sgd], verbose=1)
print('--sgd end---')
print('--------------------------------------------')
print('--rmsprop start---')
model.compile(optimizer='rmsprop', loss='mse', metrics=['accuracy'])
checkpointer_rmsprop = ModelCheckpoint(filepath='my_model_rmsprop.h5',
                                       verbose=1, save_best_only=True)
history_rmsprop = model.fit(X_train, y_train,
                            validation_split=0.2, epochs=epochs, batch_size=32,
                            callbacks=[checkpointer_rmsprop], verbose=1)
print('--rmsprop end---')
I do this for all the optimizers (the code above shows only sgd and rmsprop) and then execute the statements. What happens is that the first optimizer starts from low accuracy, and accuracy increases over the epochs; but each subsequent optimizer starts from an already high accuracy. Is the above code correct, or do I need to reset the model every time before I compile? See below the first-epoch output for the different optimizers.
--sgd start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 46s 27ms/step - loss: 0.0510 - acc: 0.2985 - val_loss: 0.0442 - val_acc: 0.6986
--rmsprop start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 46s 27ms/step - loss: 0.0341 - acc: 0.5940 - val_loss: 0.0148 - val_acc: 0.6963
--adagrad start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 44s 26ms/step - loss: 0.0068 - acc: 0.6951 - val_loss: 0.0046 - val_acc: 0.6963
--adadelta start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 52s 30ms/step - loss: 8.0430e-04 - acc: 0.8125 - val_loss: 9.4660e-04 - val_acc: 0.7850
--adam start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 47s 27ms/step - loss: 7.7599e-04 - acc: 0.8201 - val_loss: 9.8981e-04 - val_acc: 0.7757
--adamax start---
Train on 1712 samples, validate on 428 samples
Epoch 1/50
1712/1712 [==============================] - 54s 31ms/step - loss: 6.4941e-04 - acc: 0.8359 - val_loss: 9.2495e-04 - val_acc: 0.7991
Use K.clear_session(), which will clean up everything:
from keras import backend as K

def get_model():
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    return model
model = get_model()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
K.clear_session()  # destroy the current Keras session/graph so the next model starts fresh
model1 = get_model()
model1.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
model1.fit(X, Y, epochs=150, batch_size=10, verbose=0)
K.clear_session()
This solution should solve your problem. Let me know if it works.
Recompiling the model does not change its state. Weights learned before compilation will be the same after compilation. You need to delete the model object to clear the weights, and create a new one before compiling again.
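Putting it together, a compact sketch of the optimizer sweep from the question, reusing get_model from above (optimizer list and fit parameters taken from the question; the loss is whichever one your problem calls for):

epochs = 50
for opt_name in ['sgd', 'rmsprop', 'adagrad', 'adadelta', 'adam', 'adamax']:
    K.clear_session()    # drop the previous graph and weights
    model = get_model()  # fresh, untrained model for each optimizer
    model.compile(optimizer=opt_name, loss='mse', metrics=['accuracy'])
    history = model.fit(X_train, y_train, validation_split=0.2,
                        epochs=epochs, batch_size=32, verbose=1)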