Good morning, I'm a Python beginner trying to build my first neural network. Is there a way to plot the evolution of R2 vs. epochs? I evaluate R2 in the following way: r2_score(y_test_pred, y_test). I have built a fully connected neural network like this:
optimizer = tf.keras.optimizers.Adam(lr=0.001)
model = Sequential()
# ,kernel_regularizer=l2(c), bias_regularizer=l2(c)
model.add(Dense(100, input_shape = (X_train.shape[1],), activation = 'relu',kernel_initializer='glorot_uniform'))
model.add(Dropout(0.2))
model.add(Dense(100, activation = 'relu',kernel_initializer='glorot_uniform'))
model.add(Dropout(0.2))
model.add(Dense(100, activation = 'relu',kernel_initializer='glorot_uniform'))
model.add(Dense(1,activation = 'linear',kernel_initializer='glorot_uniform'))
model.compile(loss = 'mse', optimizer = optimizer, metrics = ['mse'])
history = model.fit(X_train, y_train, epochs = 100,
validation_split = 0.1, shuffle=False, batch_size=250
)
history_dict = history.history
The dataset is composed of 18 features and 1 label; it is a regression task.
You just have to add it to your compile line:
model.compile(loss = 'mse', optimizer = optimizer, metrics = ['mse', r2_score])
To do that, you first need to define a metric that Keras can understand:
from tensorflow.keras import backend as K

def r2_score(y_true, y_pred):
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return 1 - SS_res / (SS_tot + K.epsilon())
The code is adapted from Kaggle.
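Since Keras records every metric per epoch in history.history, you can also plot the R2 curve directly after training. A minimal sketch, assuming the custom metric above was passed to compile() under the name r2_score and that validation_split was used as in your fit call:
import matplotlib.pyplot as plt

history_dict = history.history
epochs_range = range(1, len(history_dict['r2_score']) + 1)

# training and validation R2 per epoch, as recorded by model.fit()
plt.plot(epochs_range, history_dict['r2_score'], label='train R2')
plt.plot(epochs_range, history_dict['val_r2_score'], label='validation R2')
plt.xlabel('epoch')
plt.ylabel('R2')
plt.legend()
plt.show()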
Sorry, I forgot to add the TensorBoard part.
If you want to see the evolution of the loss and metrics during training, you can use TensorBoard, as in the docs:
logdir = "logs/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)
history = model.fit(X_train, y_train, epochs = 100,
validation_split = 0.1, shuffle=False, batch_size=250, calllbacks=[tensorboard_callback])
Then start TensorBoard by running this line in a terminal:
tensorboard --logdir logs
You can then view TensorBoard in your browser by going to localhost:6006.
I am trying to train a model using Keras for multiclass classification. There are 5 classes to predict. As mentioned, this is an image classification problem with five classes of images: bedroom, bathroom, living room, dining room, and kitchen. The problem is that the model doesn't seem to learn; it is always stuck at 20% accuracy and the loss never changes from epoch 1. I'm using the convolutional base of the Xception model with my own classifier on top. The train, test, and validation datasets are set up using the tf.data API.
Can someone please point out what I am doing wrong?
This is the dataset generation
train_dir = "House_Dataset/Train"
valid_dir = "House_Dataset/Valid"
test_dir = "House_Dataset/Test"
train_ds = trainAug.flow_from_directory(
    train_dir,
    target_size=(224,224),
    shuffle=False,
    class_mode="sparse"
)
valid_ds = image_dataset_from_directory(
    valid_dir,
    image_size=(224,224),
    shuffle=False,
)
test_ds = image_dataset_from_directory(
    test_dir,
    image_size=(224,224),
    shuffle=False,
)
This is the import of the Xception convolutional base.
conv_base = keras.applications.Xception(include_top=False, weights="imagenet", input_shape=(224,224,3))
conv_base.trainable = False
This is the model building function.
def pre_trained():
    inputs = keras.Input(shape=(224,224,3))
    #x = data_augmentation(inputs)
    x = keras.applications.xception.preprocess_input(inputs)
    x = conv_base(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(5, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
Training function call
history = pre_trained_model.fit(train_ds, epochs=25)
(Screenshot of the per-epoch output: accuracy stays at 0.20 and the loss does not change from epoch 1.)
While the exact cause remains unclear to me, I have found where the problem occurred and a solution for it.
I added some parameters in the dataset generation calls.
train_dir = "House_Dataset/Train"
valid_dir = "House_Dataset/Valid"
test_dir = "House_Dataset/Test"
train_ds = image_dataset_from_directory(
    train_dir,
    image_size=(224,224),
    shuffle=True,
    seed=1,
    labels="inferred",
    label_mode="categorical"
)
valid_ds = image_dataset_from_directory(
    valid_dir,
    image_size=(224,224),
    shuffle=True,
    seed=1,
    labels="inferred",
    label_mode="categorical"
)
test_ds = image_dataset_from_directory(
    test_dir,
    image_size=(224,224),
    shuffle=True,
    seed=1,
    labels="inferred",
    label_mode="categorical"
)
I added the option to shuffle with a seed and changed the label mode to categorical, which produces a one-hot encoding of the labels. Likewise, I changed the loss from sparse_categorical_crossentropy to categorical_crossentropy. These changes allowed the model to train, and there have been significant improvements in both training and validation loss as well as accuracy.
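For completeness, the compile call then has to match the one-hot labels; a minimal sketch of that change (same model-building function as above, only the loss differs):
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",  # matches label_mode="categorical"
              metrics=["accuracy"])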
Try my CNN and see if you get 87% accuracy. The CNN extracts features in each layer as filters; the flattened features then feed into the softmax classification layer.
# imports assumed for this snippet (tf.keras)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dropout, Dense

model = Sequential()
model.add(Conv2D(32, (3,3),activation='relu',input_shape=(IMG_SIZE,IMG_SIZE,3)))
#model.add(Dropout(0.25))
model.add(MaxPooling2D(2))
#model.add(BatchNormalization())
model.add(Conv2D(64, (3,3), activation="relu"))
model.add(MaxPooling2D(2,2))
model.add(Conv2D(128, (3,3), activation="relu"))
model.add(MaxPooling2D(2,2))
model.add(Conv2D(128, (3,3), activation="relu"))
model.add(MaxPooling2D(2,2))
#model.add(BatchNormalization())
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(512,activation='relu'))
model.add(Dense(5, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics = ['accuracy'])
model.summary()
I am currently trying to run a CNN model that looks at images of cats and dogs and tries to classify them (an old Kaggle competition). However, I want to see the effect that running different numbers of epochs has on model accuracy.
Unfortunately, when I run model.fit() multiple times, each followed by model.evaluate(), the accuracy scores seem to add on top of one another from the previous run. For example, the INCORRECT output I get is:
100 Epochs Test Accuracy: 67.7%
200 Epochs Test Accuracy: 98.5%
300 Epochs Test Accuracy: 259.0%
My CNN is built as such:
input_layer = Input(shape=(img_shape, img_shape, 3))
convolution_layer_1 = Conv2D(32, kernel_size=(5,5), activation = 'relu')(input_layer)
max_pool_1 = MaxPooling2D(pool_size=(2,2), strides=2)(convolution_layer_1)
convolution_layer_2 = Conv2D(64, kernel_size=(5,5), activation = 'relu')(max_pool_1)
max_pool_2 = MaxPooling2D(pool_size=(2,2),strides=2)(convolution_layer_2)
dense_layer_1 = Dense(32, activation='relu')(max_pool_2)
flatten_layer_1 = Flatten()(dense_layer_1)
dropout_1 = Dropout(0.4)(flatten_layer_1)
output_layer = Dense(1, activation='softmax')(dropout_1)
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
Then I ran the following series of model.fit() and model.evaluate() commands, with only the epochs value (and the print statements) changing each time.
model.fit_generator(
    training_set_data,
    epochs = 100,
    validation_data = test_set_data,
)
score_100 = model.evaluate(test_set_data)
print('\nTest accuracy for 100 epochs: %0.4f%%' % (score_100[0] * 100))
# 100 Epochs Test Accuracy: 67.7%
-------------------------------------------
model.fit_generator(
    training_set_data,
    epochs = 200,
    validation_data = test_set_data,
)
score_200 = model.evaluate(test_set_data)
print('\nTest accuracy for 200 epochs: %0.4f%%' % (score_200[0] * 100))
#200 Epochs Test Accuracy: 98.5%
-------------------------------------------
model.fit_generator(
    training_set_data,
    epochs = 300,
    validation_data = test_set_data,
)
score_300 = model.evaluate(test_set_data)
print('\nTest accuracy for 300 epochs: %0.4f%%' % (score_300[0] * 100))
#300 Epochs Test Accuracy: 259.0%
How do I run these three epoch settings of model.fit() separately, such that the output of model.evaluate() is printed correctly? For example:
100 Epochs Test Accuracy: 67.7%
200 Epochs Test Accuracy: 70.5%
300 Epochs Test Accuracy: 74.0%
I tried creating 3 separate compilations of the CNN to run each individually, as such:
model1 = Model(inputs=input_layer, outputs=output_layer)
model1.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model2 = Model(inputs=input_layer, outputs=output_layer)
model2.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
then
model1.fit ....etc
model2.fit ....etc
However, now the problem seems to be with the model itself, because if I run model3 directly (for 300 epochs) without running model1 and model2, I still get an accuracy score of 200+%.
What's going on!?
It's hard to tell what the problem is in your case, but your code, with some minor modifications, trains a model to recognize whether an image is IS_NUMBER or not.
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras import layers
from tensorflow import keras
(ds_train, ds_test), ds_info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)
inputs_spec, outputs_spec = ds_train.element_spec
input_layer = layers.Input(shape=inputs_spec.shape)
x = input_layer
x = layers.Conv2D(32, kernel_size=(3,3))(x)
x = layers.MaxPooling2D(pool_size=(2,2), strides=2)(x)
x = layers.Conv2D(64, kernel_size=(3,3))(x)
x = layers.MaxPooling2D(pool_size=(2,2),strides=2)(x)
x = layers.Flatten()(x)
x = layers.Dense(32)(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(1, activation='sigmoid')(x)
model = keras.Model(inputs=input_layer, outputs=x)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
IS_NUMBER = 3
def is_number(inp, tar):
    return inp, tf.cast(tf.equal(tar, IS_NUMBER), dtype=tf.int32)
ds_train = ds_train.map(is_number).repeat()
train = ds_train.padded_batch(50)
model.fit(
    train,
    steps_per_epoch=200,
    epochs=5,
    class_weight={0: 0.9, 1: 0.1}
)
I get similar results using:
# ...
x = layers.Dense(2, activation='softmax')(x)
model = keras.Model(inputs=input_layer, outputs=x)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
IS_NUMBER = 3
def is_number(inp, tar):
    return inp, tf.cast(tf.equal(tar, IS_NUMBER), dtype=tf.int32)
# ...
model.fit(
    train,
    steps_per_epoch=200,
    epochs=5
)
Maybe this helps you figure out the issue with your own code/data.
You are referring to the wrong element: score_300[0] gives you the loss on the evaluation dataset, while score_300[1] gives you the accuracy. In all three cases you printed the wrong value from score_100, score_200, and score_300.
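A minimal sketch of the corrected print, taking the 300-epoch case as an example (index [1] is the accuracy because the model was compiled with metrics=['accuracy']):
score_300 = model.evaluate(test_set_data)
print('\nTest loss for 300 epochs: %0.4f' % score_300[0])
print('Test accuracy for 300 epochs: %0.2f%%' % (score_300[1] * 100))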
I want to create a machine learning model with TensorFlow that detects flowers. I went out into nature and took pictures of 4 different species (~600 per class; one class has 700).
I load these images with TensorFlow's ImageDataGenerator:
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.15,
                                   brightness_range=[0.7, 1.4],
                                   fill_mode='nearest',
                                   vertical_flip=True,
                                   horizontal_flip=True,
                                   rotation_range=15,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   validation_split=0.2)
train_generator = train_datagen.flow_from_directory(
    pfad,
    target_size=(imageShape[0],imageShape[1]),
    batch_size=batchSize,
    class_mode='categorical',
    subset='training',
    seed=1,
    shuffle=False,
    #save_to_dir=r'G:\test'
)
validation_generator = train_datagen.flow_from_directory(
    pfad,
    target_size=(imageShape[0],imageShape[1]),
    batch_size=batchSize,
    shuffle=False,
    seed=1,
    class_mode='categorical',
    subset='validation')
Then I create a simple model that looks like this:
model = tf.keras.Sequential([
    keras.layers.Conv2D(128, (3,3), activation='relu', input_shape=(imageShape[0], imageShape[1], 3)),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Dropout(0.5),
    keras.layers.Conv2D(256, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Conv2D(512, (3,3), activation='relu'),
    keras.layers.MaxPooling2D(2,2),
    keras.layers.Flatten(),
    keras.layers.Dense(280, activation='relu'),
    keras.layers.Dense(4, activation='softmax')
])
opt = tf.keras.optimizers.SGD(learning_rate=0.001,decay=1e-5)
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['accuracy'])
Then I start the training process (on CPU):
history = model.fit(
    train_generator,
    steps_per_epoch = train_generator.samples // batchSize,
    validation_data = validation_generator,
    validation_steps = validation_generator.samples // batchSize,
    epochs = 200, callbacks=[checkpoint, early, tensorboard], workers=-1)
The result should be that my validation accuracy improves, but it starts at 0.3375 and stays at that level for the whole training process. The validation loss (1.3737) decreases by only 0.001. The training accuracy starts at 0.15 but does increase.
Why is my validation accuracy stuck?
Am I using the right loss? Or did I build my model wrong? Is my TensorFlow train generator one-hot encoding the labels?
Thanks
I solved the problem by using RMSprop() without any parameters.
So I changed from:
opt = tf.keras.optimizers.SGD(learning_rate=0.001,decay=1e-5)
model.compile(loss='categorical_crossentropy',optimizer= opt, metrics=['accuracy'])
to:
opt = tf.keras.optimizers.RMSprop()
model.compile(loss='categorical_crossentropy',
optimizer= opt,
metrics=['accuracy'])
This is a similar example, except that the code below is binary rather than 4-class categorical. You may want to change the loss to categorical cross-entropy, change class_mode from binary to categorical in the train and test generators, and change the final dense layer activation to softmax. I am still able to use model.fit_generator().
# imports assumed for this snippet (tf.keras)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout

image_dataGen = ImageDataGenerator(rotation_range=20,
                                   width_shift_range=0.2, height_shift_range=0.2, shear_range=0.1,
                                   zoom_range=0.1, fill_mode='nearest', horizontal_flip=True,
                                   vertical_flip=True, rescale=1/255)
train_images = image_dataGen.flow_from_directory(train_path, target_size=image_shape[:2],
                                                 color_mode='rgb', class_mode='binary')
test_images = image_dataGen.flow_from_directory(test_path, target_size=image_shape[:2],
                                                color_mode='rgb', class_mode='binary',
                                                shuffle=False)
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(3,3), input_shape=image_shape, activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Conv2D(filters=48, kernel_size=(3,3), input_shape=image_shape, activation='relu'))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(units=128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', metrics=['accuracy'], optimizer='adam')
# early_stop is assumed to be an EarlyStopping callback defined elsewhere
results = model.fit_generator(train_images, epochs=10, callbacks=[early_stop],
                              validation_data=test_images)
Maybe your learning rate is too high.
Use a learning rate of 0.000001, and if that does not work, try another optimizer such as Adam.
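A minimal sketch of that suggestion applied to the compile step above (the values are just the ones suggested here, not tuned):
# either lower the SGD learning rate ...
opt = tf.keras.optimizers.SGD(learning_rate=0.000001)
# ... or switch to Adam with its default learning rate
# opt = tf.keras.optimizers.Adam()
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])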
Use model.fit_generator() instead of model.fit(). The points below could also be helpful.
In order to use .flow_from_directory, you must organize the images into sub-directories. This is an absolute requirement, otherwise the method won't work. Each directory should contain images of only one class, so one folder per class of images. Also, could you check whether the paths for the training data and the test data are correct? They cannot point to the same location. I have used the ImageDataGenerator class for classification problems. You can also try changing the optimizer to 'Adam'.
Structure Needed:
Image Data Folder
    Class 1
        0.jpg
        1.jpg
        ...
    Class 2
        0.jpg
        1.jpg
        ...
    ...
    Class n
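As a rough sketch of how that layout is consumed (the path "Image Data Folder" and the target size are placeholders, not values from the question):
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1./255)
train_gen = datagen.flow_from_directory(
    "Image Data Folder",        # one sub-folder per class, as in the structure above
    target_size=(224, 224),
    class_mode='categorical')   # yields one-hot labels for categorical_crossentropy
print(train_gen.class_indices)  # class names are inferred from the sub-folder names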
I sometimes have trouble getting a fit on my data, and when I restart the fit (with shuffle=True) I sometimes get a good fit.
See my previous question:
https://datascience.stackexchange.com/questions/62516/why-does-my-model-sometimes-not-learn-well-from-same-data
As a workaround, I want to automatically restart the fitting process if the loss is still high after x epochs. How can I achieve this?
I assume I would need to use a custom version of the EarlyStopping callback? How could I differentiate between early stopping because a low loss (< 0.5) was found, so training is finished, and stopping because the loss is still > 0.5 after x epochs, so training needs to restart?
Here is a simplified structure:
def train_till_good():
    while not_finished:
        train()

def train():
    load_data()
    model = VerySimpleNet2()
    checkpoint = keras.callbacks.ModelCheckpoint(filepath=images_root + dataset_name + '\\CheckPoint.hdf5')
    myOpt = keras.optimizers.Adam(lr=0.001, decay=0.01)
    model.compile(optimizer=myOpt, loss='categorical_crossentropy', metrics=['accuracy'])
    LRS = CyclicLR(base_lr=0.000005, max_lr=0.0003, step_size=200.)
    tensorboard = keras.callbacks.TensorBoard(log_dir='C:\\Tensorflow', histogram_freq=0, write_graph=True, write_images=False)
    ES = keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=5)
    model.fit(train_images, train_labels, shuffle=True, epochs=num_epochs,
              callbacks=[checkpoint,
                         tensorboard,
                         ES,
                         LRS],
              validation_data=(test_images, test_labels)
              )

def VerySimpleNet2():
    model = keras.Sequential([
        keras.layers.Dense(112, activation=tf.nn.relu, input_shape=(224, 224, 3)),
        keras.layers.Dropout(0.4),
        keras.layers.Flatten(),
        keras.layers.Dense(3, activation=tf.nn.softmax)
    ])
    return model
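One way to express the restart logic left open above is a small custom callback that flags a bad run and stops that fit early, plus a loop that retries until the flag stays clear. This is only a sketch under the simplified structure above; the names RestartCheck, check_epoch, and max_attempts are hypothetical, and train() would have to be adjusted to create and return the callback:
class RestartCheck(keras.callbacks.Callback):
    """Stop the current run if the loss is still above `threshold` at epoch `check_epoch`."""
    def __init__(self, check_epoch=10, threshold=0.5):
        super().__init__()
        self.check_epoch = check_epoch
        self.threshold = threshold
        self.needs_restart = False

    def on_epoch_end(self, epoch, logs=None):
        loss = (logs or {}).get('loss')
        if epoch + 1 == self.check_epoch and loss is not None and loss > self.threshold:
            self.needs_restart = True          # mark this run as failed
            self.model.stop_training = True    # abandon this fit early

def train_till_good(max_attempts=5):
    for attempt in range(max_attempts):
        restart_cb = train()                   # train() would add a RestartCheck to its callbacks and return it
        if not restart_cb.needs_restart:       # loss got below the threshold, keep this model
            break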
I am new to Keras and neural networks. I am trying to tune the hyperparameters of a simple neural network using GridSearchCV from scikit-learn with Keras in Python. Below is an example for reference.
def base_model(input_layer_nodes = 150, optimizer = 'adam', kernel_initializer = 'normal', dropout_rate = 0.2):
    model = Sequential()
    model.add(Dense(units = input_layer_nodes, input_dim = 107, kernel_initializer = kernel_initializer, activation='relu'))
    model.add(Dropout(dropout_rate))
    model.add(Dense(units = 1, kernel_initializer = kernel_initializer, activation='sigmoid'))
    # Compile model
    model.compile(loss = 'binary_crossentropy', optimizer = optimizer, metrics = ['accuracy'])
    return model
# Defining parameters for performing GridSearch
# optimizer = ['sgd', 'rmsprop', 'adam']
# dropout_rate = [0.1, 0.2, 0.3, 0.4, 0.5]
# input_layer_nodes = [50, 107, 150, 200]
kernel_initializer = ['uniform', 'normal']
param_grid = dict(kernel_initializer = kernel_initializer)
model = KerasClassifier(build_fn = base_model, epochs = 10, batch_size = 128, verbose = 2)
grid = GridSearchCV(estimator = model, param_grid=param_grid, n_jobs = 1, cv = 5)
grid.fit(X_train, y_train)
# View hyperparameters of best neural network
print("\nBest Training Parameters: ", grid.best_params_)
print("Best Training Accuracy: ", grid.best_score_)
When I execute the above code, I get the below error.
ValueError: ('Some keys in session_kwargs are not supported at this time: %s', dict_keys(['kernel_initializer']))
I am able to tune some of the other parameters of the network, like dropout_rate, optimizer, and epochs. If the same code works for other parameters, why is the kernel_initializer part not working? I am using Keras 2.2.2, tensorflow 1.9.0-gpu, and Python 3.6.6. My OS is Windows 10 x64. Any help on this would be appreciated.