I'm learning neural networks and have implemented object classification on the CIFAR-10 dataset using the Keras library. Here is my model definition in Keras:
# Define the model and train it
model = Sequential()
model.add(Dense(units = 60, input_dim = 1024, activation = 'relu'))
model.add(Dense(units = 50, activation = 'relu'))
model.add(Dense(units = 60, activation = 'relu'))
model.add(Dense(units = 70, activation = 'relu'))
model.add(Dense(units = 30, activation = 'relu'))
model.add(Dense(units = 10, activation = 'sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=10000)
So, as defined in the code above, I have 1 input layer taking input of dimension 1024, i.e. shape (1024,) (each 32 * 32 * 3 image is first converted to grayscale, giving 32 * 32 = 1024 values), 5 hidden layers, and 1 output layer.
When I train the model for 50 epochs, I get an accuracy of 0.9, or 90%. When I evaluate it on the test dataset, I also get an accuracy of approximately 90%. Here is the line of code that evaluates the model:
print (model.evaluate(X_test, y_test))
This prints the following loss and accuracy:
[1.611809492111206, 0.8999999761581421]
But when I calculate the accuracy manually by making a prediction on each test image, I get an accuracy of around 11% (roughly what random guessing would give). Here is my code to calculate it manually:
wrong = 0
for x, y in zip(X_test, y_test):
    if not (np.argmax(model.predict(x.reshape(1, -1))) == np.argmax(y)):
        wrong += 1
print (wrong)
This reports 9002 wrong predictions out of 10000. So what am I missing here? Why do the two accuracies almost exactly complement each other (100 - 89 = 11)? Any intuitive explanation will help! Thanks.
EDIT:
Here is my code which processes the dataset:
# Process the training and testing data into a form the neural network can use
# convert a given colored image to grayscale
def rgb2gray(rgb):
    return np.dot(rgb, [0.2989, 0.5870, 0.1140])

X_train, y_train, X_test, y_test = [], [], [], []

def process_batch(batch_path, is_test = False):
    batch = unpickle(batch_path)
    imgs = batch[b'data']
    labels = batch[b'labels']
    for img in imgs:
        img = img.reshape(3, 32, 32).transpose([1, 2, 0])
        img = rgb2gray(img)
        img = img.reshape(1, -1)
        if not is_test:
            X_train.append(img)
        else:
            X_test.append(img)
    for label in labels:
        if not is_test:
            y_train.append(label)
        else:
            y_test.append(label)
process_batch('cifar-10-batches-py/data_batch_1')
process_batch('cifar-10-batches-py/data_batch_2')
process_batch('cifar-10-batches-py/data_batch_3')
process_batch('cifar-10-batches-py/data_batch_4')
process_batch('cifar-10-batches-py/data_batch_5')
process_batch('cifar-10-batches-py/test_batch', True)
number_of_classes = 10
number_of_batches = 5
number_of_test_batch = 1
X_train = np.array(X_train).reshape(meta_data[b'num_cases_per_batch'] * number_of_batches, -1)
print ('Shape of training data: {0}'.format(X_train.shape))
# create labels to one hot format
y_train = np.array(y_train)
y_train = np.eye(number_of_classes)[y_train]
print ('Shape of training labels: {0}'.format(y_train.shape))
# Process testing data
X_test = np.array(X_test).reshape(meta_data[b'num_cases_per_batch'] * number_of_test_batch, -1)
print ('Shape of testing data: {0}'.format(X_test.shape))
# create labels to one hot format
y_test = np.array(y_test)
y_test = np.eye(number_of_classes)[y_test]
print ('Shape of testing labels: {0}'.format(y_test.shape))
The reason this is happening is the loss function you are using. You are using binary cross-entropy where you should be using categorical cross-entropy as the loss. Binary cross-entropy is only for a two-class problem, but you have 10 classes here because of CIFAR-10.
The accuracy metric you see is in fact misleading you, because it is measuring binary classification performance. The solution is to retrain your model with categorical_crossentropy as the loss; a minimal sketch of the change is below.
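This sketch assumes you keep the one-hot labels from your preprocessing; switching the output activation to softmax is the usual companion change:
model = Sequential()
model.add(Dense(units = 60, input_dim = 1024, activation = 'relu'))
model.add(Dense(units = 50, activation = 'relu'))
model.add(Dense(units = 60, activation = 'relu'))
model.add(Dense(units = 70, activation = 'relu'))
model.add(Dense(units = 30, activation = 'relu'))
model.add(Dense(units = 10, activation = 'softmax'))  # softmax: the 10 classes are mutually exclusive
model.compile(loss='categorical_crossentropy',        # multi-class loss instead of binary_crossentropy
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=10000)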
This post has more details: Keras binary_crossentropy vs categorical_crossentropy performance?
Related: this post answers a different question, but its answer essentially describes your problem: Keras: model.evaluate vs model.predict accuracy difference in multi-class NLP task
Edit
You mentioned in the comments that the accuracy of your model hovers at around 10% and does not improve. Examining your Colab notebook after you switched to categorical cross-entropy, it appears that you are not normalizing your data. Because the pixel values are originally unsigned 8-bit integers, creating your training set promotes them to floating-point, but the dynamic range stays at [0, 255], and that makes it hard for your neural network to learn the right weights. When you try to update the weights, the gradients are so small that there are essentially no updates, and hence your network performs like random chance. The solution is simply to divide your training and test data by 255 before you proceed:
X_train /= 255.0
X_test /= 255.0
This transforms your data so that the dynamic range goes from [0, 255] to [0, 1]. Your model will have an easier time training with the smaller dynamic range, which helps gradients propagate instead of vanishing at the larger pre-normalization scale. Because your original model has a significant number of dense layers, the gradient updates are very likely to vanish at the original dynamic range, which is why performance was poor initially.
When I run your notebook, I get 37% accuracy. This is not unexpected for CIFAR-10 with only a fully-connected / dense network. Also, when you run your notebook now, the accuracy and the fraction of wrong examples match; a vectorized sanity check is below.
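This check assumes one-hot y_test, as produced by your preprocessing, and should agree with model.evaluate once the model is trained with categorical_crossentropy:
import numpy as np

# Predict all test images at once and compare predicted vs. true class indices
preds = model.predict(X_test)                     # shape (10000, 10)
accuracy = np.mean(np.argmax(preds, axis=1) == np.argmax(y_test, axis=1))
print('Manual accuracy: {0:.4f}'.format(accuracy))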
If you want to increase accuracy, I have a couple of suggestions:
Actually include colour information. Each object in CIFAR-10 has a distinct colour profile that should help discrimination.
Add convolutional layers. I'm not sure where you are in your learning, but convolutional layers help to learn and extract the right features from the image, so that the most useful features are fed to the dense layers and classification accuracy increases. Right now you are classifying raw pixels, which is not advisable given how noisy they can be and how unconstrained the problem is (rotation, translation, skew, scale, etc.). A minimal sketch combining both suggestions follows the list.
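The sketch assumes colour input of shape (32, 32, 3); the filter counts are example values, and X_train_rgb is a hypothetical name for your original colour images reshaped to (N, 32, 32, 3) and scaled to [0, 1]:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

cnn = Sequential()
cnn.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))  # keep the colour channels
cnn.add(MaxPooling2D((2, 2)))
cnn.add(Conv2D(64, (3, 3), activation='relu'))
cnn.add(MaxPooling2D((2, 2)))
cnn.add(Flatten())
cnn.add(Dense(128, activation='relu'))
cnn.add(Dense(10, activation='softmax'))
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# cnn.fit(X_train_rgb, y_train, epochs=50, batch_size=128)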
Related
I want to train a network on the ISOLET dataset, which consists of 6238 samples with 300 features each.
This is my code so far:
import tensorflow as tf
import sklearn.preprocessing as prep
import numpy as np
import matplotlib.pyplot as plt
def main():
    X, C, Xtst, Ctst = load_isolet()

    # normalize
    # X = (X - np.mean(X, axis = 1)[:, np.newaxis]) / np.std(X, axis = 1)[:, np.newaxis]
    # Xtst = (Xtst - np.mean(Xtst, axis = 1)[:, np.newaxis]) / np.std(Xtst, axis = 1)[:, np.newaxis]
    scaler = prep.MinMaxScaler(feature_range=(0, 1))
    scaledX = scaler.fit_transform(X)
    scaledXtst = scaler.transform(Xtst)

    # Build the tf.keras.Sequential model by stacking layers. Choose an optimizer and loss function for training:
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(X.shape[1], activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(26, activation='softmax')
    ])

    ES_callback = tf.keras.callbacks.EarlyStopping(monitor='loss', min_delta=1e-2, patience=10, verbose=1)

    initial_learning_rate = 0.01
    lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(initial_learning_rate, decay_steps=100000, decay_rate=0.9999, staircase=True)
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

    model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

    history = model.fit(scaledX, C, epochs=100, callbacks=[ES_callback], batch_size=32)

    plt.figure(1)
    plt.plot(range(len(history.history['loss'])), history.history['loss'])
    plt.plot(range(len(history.history['accuracy'])), history.history['accuracy'])
    plt.show()
Up to now, I have pretty much turned every knob I know:
different number of layers
different sizes of layers
different activation functions
different learning rates
different optimizers (we should test with 'adam' and 'stochastic gradient descent')
different batch sizes
different data preparation (the feature values range from -1 to 1; I tried normalizing along the feature axis, batch normalizing (z_i = (x_i - mean) / std(x_i)), and, as seen in the code above, scaling the values to [0, 1], since I guess 'relu' activation won't work well with negative input values)
Pretty much everything I tried gives weird outputs with extremely high loss values (depending on the learning rate) and very low accuracies during training. The loss increases over epochs almost all of the time, but seems to be independent of the accuracy values.
For the code, I followed the tutorials I was provided, but something is very off, since I'm supposed to find the best hyperparameters and I'm not able to find any good ones whatsoever.
I'd be very glad to get some pointers on where I got the code wrong or whether I need to preprocess the data differently.
Edit: Using loss='categorical_crossentropy' was given, so at least this one should be correct.
First of all:
Your convergence problems may be due to an "incorrect" loss function. tf.keras supports a variety of losses, and the right one depends on the shape of your labels.
Try the different possibilities:
tf.keras.losses.CategoricalCrossentropy if your labels are one-hot vectors,
tf.keras.losses.SparseCategoricalCrossentropy if your labels are integer class indices (0, 1, 2, ...),
or tf.keras.losses.BinaryCrossentropy if your labels are just 0/1 for a two-class problem.
Honestly, this part of tf.keras is a bit tricky and settings like this might need tuning; a quick sketch of the first two cases follows.
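Only one of these compile calls applies, depending on how your labels C are encoded (26 classes for ISOLET):
import tensorflow as tf

# One-hot labels of shape (n_samples, 26) -> CategoricalCrossentropy
model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])

# Integer labels of shape (n_samples,) with values 0..25 -> SparseCategoricalCrossentropy
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),
              metrics=['accuracy'])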
Second of all - this part:
scaler = prep.MinMaxScaler(feature_range=(0,1))
scaledX = scaler.fit_transform(X)
scaledXtst = scaler.fit_transform(Xtst)
Assuming Xtst is your test set, you want to scale it using the statistics fitted on your training set. So the correct scaling would be just:
scaledXtst = scaler.transform(Xtst)
Hope this helps!
I'm currently working with a time series dataset of 46 rows of meteorological measurements, taken approximately every 3 hours over one week. My explanatory variables (X) consist of 26 variables, some of which have different units of measurement (degrees, millimeters, g/m3, etc.). My target variable (y) is a single variable, temperature.
My goal is to predict the temperature (y) over a 12h-24h window using the full set of variables (X).
For that I used Keras (TensorFlow) and Python, with an MLP regressor model:
X = df_forcast_cap.loc[:, ~df_forcast_cap.columns.str.startswith('l')]
X = X.drop(['temperature_Y'],axis=1)
y = df_forcast_cap['temperature_Y']
y = pd.DataFrame(data=y)
# normalize the dataset X
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit_transform(X)
normalized = scaler.transform(X)
# normalize the dataset y
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit_transform(y)
normalized = scaler.transform(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# define base model
def norm_model():
    # create model
    model = Sequential()
    model.add(Dense(26, input_dim=26, kernel_initializer='normal', activation='relu'))  # 26 is the number of neurons in this layer
    #model.add(Dense(6, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# evaluate model with standardized dataset
estimator = KerasRegressor(build_fn=norm_model, epochs=(100), batch_size=5, verbose=1)
kfold = KFold(n_splits=10, random_state=seed)
results = cross_val_score(estimator, X, y, cv=kfold)
print(results)
[-0.00454741 -0.00323181 -0.00345096 -0.00847261 -0.00390925 -0.00334816
-0.00239754 -0.00681044 -0.02098541 -0.00140129]
# invert predictions
X_train = scaler.inverse_transform(X_train)
y_train = scaler.inverse_transform(y_train)
X_test = scaler.inverse_transform(X_test)
y_test = scaler.inverse_transform(y_test)
results = scaler.inverse_transform(results)
print("Results: %.2f (%.2f) MSE" % (results.mean(), results.std()))
Results: -0.01 (0.01) MSE
(1) I read that cross-validation is not adapted to time-series prediction. So I'm wondering which other techniques exist and which are more adapted to time series.
(2) Secondly, I decided to normalize my data because my X dataset is composed of different metrics (degrees, millimeters, g/m3, etc.) and my target variable y is in degrees. I know this means I have to deal with a more complicated interpretation of the MSE, because its result won't be in the same unit as my y variable. But for the next step of my study I need to save the predicted y values (made by the MLP model), and I need these values to be in degrees. So I tried to invert the normalization, but without success: when I print my results, the predicted values are still in normalized form (see my code above). Does anyone see my mistake(s)?
The model that you present above looks at a single instance of 26 measurements to make a prediction. From your description it seems that you would like to make predictions from a sequence of these measurements. I'm not sure I fully understood the description, but I'll assume that you have a sequence of 46 measurements, each with 26 values that you believe should be good predictors of the temperature. If that is the case, the input shape of your model should be (46, 26,). The 46 here is called time_steps and 26 is the number of features.
For a time series you need to select a model design. There are two approaches: a recurrent network or a convolutional network (or a mixture of the two). A convolutional network is typically used to detect patterns in the input data that may be located anywhere in the data. For instance, suppose you want to detect a given shape in an image: convolutional networks are a good starting point. Recurrent networks update their internal state after each time step. They can detect patterns as well as a convolutional network can, but you can think of them as being less position-independent.
A simple example of a convolutional approach:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import *
from tensorflow.keras.models import Sequential, Model
average_tmp = 0.0
model = Sequential([
    InputLayer(input_shape=(46, 26,)),
    Conv1D(16, 4),
    Conv1D(32, 4),
    Conv1D(64, 2),
    Conv1D(128, 4),
    MaxPooling1D(),
    Flatten(),
    Dense(256, activation='relu'),
    Dense(1, bias_initializer=keras.initializers.Constant(average_tmp)),
])
model.compile('adam', 'mse')
model.summary()
A mixed approach would replace the Flatten layer above with an LSTM layer. That would probably be a reasonable starting point for experimenting; a minimal sketch follows.
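It reuses the imports and layers above; the 64 LSTM units are just an example value, not a tuned choice:
model_mixed = Sequential([
    InputLayer(input_shape=(46, 26,)),
    Conv1D(16, 4),
    Conv1D(32, 4),
    Conv1D(64, 2),
    Conv1D(128, 4),
    MaxPooling1D(),
    LSTM(64),   # replaces Flatten: summarises the remaining time steps into one vector
    Dense(256, activation='relu'),
    Dense(1, bias_initializer=keras.initializers.Constant(average_tmp)),
])
model_mixed.compile('adam', 'mse')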
(1) I read that cross-validation is not adapted to time-series prediction. So I'm wondering which other techniques exist and which are more adapted to time series.
Cross-validation is a technique that is very well suited to this problem. If you try the example model above, I can almost guarantee that it will overfit your dataset very significantly. Cross-validation can help you determine the right regularisation parameters for your model in order to avoid overfitting.
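If you are worried about leakage across time when splitting, scikit-learn's TimeSeriesSplit gives a walk-forward variant of cross-validation; a minimal sketch, assuming your rows are in chronological order:
from sklearn.model_selection import TimeSeriesSplit

# Walk-forward splits: each fold trains on the past and validates on the future
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, val_idx in tscv.split(X):
    X_tr, X_val = X.iloc[train_idx], X.iloc[val_idx]   # .iloc because X and y are DataFrames
    y_tr, y_val = y.iloc[train_idx], y.iloc[val_idx]
    # fit the model on (X_tr, y_tr) and evaluate on (X_val, y_val) here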
Examples of regularisation techniques that you probably want to consider (a short sketch follows the list):
Saving the model weights at the epoch with the lowest validation score.
Dropout and/or BatchNormalization.
Kernel regularisation.
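The sketch below shows how these could be wired into the convolutional model above; the rates, factors and checkpoint filename are example values, and the layer imports come from the earlier code block:
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras import regularizers

# Keep only the weights from the epoch with the best validation loss
checkpoint = ModelCheckpoint('best_weights.h5', monitor='val_loss',
                             save_best_only=True, save_weights_only=True)

model_reg = Sequential([
    InputLayer(input_shape=(46, 26,)),
    Conv1D(16, 4, kernel_regularizer=regularizers.l2(1e-4)),   # kernel regularisation
    BatchNormalization(),
    Conv1D(32, 4, kernel_regularizer=regularizers.l2(1e-4)),
    MaxPooling1D(),
    Flatten(),
    Dropout(0.3),                                               # dropout before the dense head
    Dense(256, activation='relu'),
    Dense(1),
])
model_reg.compile('adam', 'mse')
# model_reg.fit(X, y, validation_split=0.2, epochs=100, callbacks=[checkpoint])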
(2) Secondly, I decided to normalize my data because my X dataset is composed of different metrics (degrees, millimeters, g/m3, etc.) and my target variable y is in degrees.
Good call. It avoids training cycles in which your model tries to discover the bias at very high values starting from the random initialisation.
In this way, I know I have to deal with a more complicated interpretation of the MSE, because its result won't be in the same unit as my y variable.
This is orthogonal. The inputs are not assumed to be in the same unit as y. In a DNN we assume that we can create a combination of linear transformations of the weights (plus non-linear activations). There is no implicit assumption about units.
But for the next step of my study I need to save the predicted y values (made by the MLP model), and I need these values to be in degrees. So I tried to invert the normalization, but without success: when I print my results, the predicted values are still in normalized form (see my code above). Does anyone see my mistake(s)?
scaler.inverse_transform(results) should do the trick.
It doesn't make sense to inverse-transform the inputs X_ and y_. And it would probably help you keep your code straight not to use the same variable name for both the X and y scalers.
It is also possible to refrain from scaling y. If you choose to do so, I'd suggest initialising the output layer bias with the mean of the y values. A sketch of the separate-scaler approach is below.
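The variable names x_scaler / y_scaler are just suggestions; everything else reuses pieces from your code:
from sklearn.preprocessing import MinMaxScaler

x_scaler = MinMaxScaler(feature_range=(0, 1))   # one scaler per role, so neither gets overwritten
y_scaler = MinMaxScaler(feature_range=(0, 1))

X_norm = x_scaler.fit_transform(X)
y_norm = y_scaler.fit_transform(y)

estimator = KerasRegressor(build_fn=norm_model, epochs=100, batch_size=5, verbose=1)
estimator.fit(X_norm, y_norm)

# Predictions come back in the scaled space; invert them with the y scaler only
y_pred_norm = estimator.predict(X_norm)
y_pred_degrees = y_scaler.inverse_transform(y_pred_norm.reshape(-1, 1))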
I'm learning simple neural networks with Dense layers in Keras. I'm trying to implement face recognition on a relatively small dataset (about 250 images in total, with 50 images per class).
I've downloaded the images from Google Images and resized them to 100 * 100 PNG files. Then I read those files into a numpy array and also created a one-hot label array for training my model.
Here is my code for processing the training data:
X, Y = [], []
feature_map = {
    'Alia Bhatt': 0,
    'Dipika Padukon': 1,
    'Shahrukh khan': 2,
    'amitabh bachchan': 3,
    'ayushmann khurrana': 4
}

for each_dir in os.listdir('.'):
    if os.path.isdir(each_dir):
        for each_file in os.listdir(each_dir):
            X.append(cv2.imread(os.path.join(each_dir, each_file), -1).reshape(1, -1))
            Y.append(feature_map[os.path.basename(each_file).split('-')[0]])
X = np.squeeze(X)
X = X / 255.0 # normalize the training data
Y = np.array(Y)
Y = np.eye(5)[Y]
print (X.shape)
print (Y.shape)
This is printing (244, 40000) and (244, 5). Here is my model:
model = Sequential()
model.add(Dense(8000, input_dim = 40000, activation = 'relu'))
model.add(Dense(1200, activation = 'relu'))
model.add(Dense(700, activation = 'relu'))
model.add(Dense(100, activation = 'relu'))
model.add(Dense(5, activation = 'softmax'))
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=25, batch_size=15)
When I train the model, it gets stuck at an accuracy of 0.2172, which is almost the same as random guessing (0.20).
I've also tried training the model on grayscale images, but I'm still not getting the expected accuracy. I've also tried different network architectures by changing the number of hidden layers and the number of neurons per hidden layer.
What am I missing here? Is my dataset too small? or am I missing any other technical detail?
For more details of code, here is my notebook: https://colab.research.google.com/drive/1hSVirKYO5NFH3VWtXfr1h6y0sxHjI5Ey
Two suggestions I can make:
Your dataset is probably too small. If you split training and validation 80/20, you are training on only about 200 images, which is probably too few. Try increasing your dataset to see if results improve.
I would recommend adding Dropout to each layer of your network, since your training set is so small. Your network is most likely over-fitting the training set, and Dropout is an easy way to help avoid this (a sketch is below).
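This is a sketch only; the 0.5 dropout rate is a common starting value, not a tuned choice:
from keras.layers import Dropout

model = Sequential()
model.add(Dense(8000, input_dim=40000, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1200, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(700, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(5, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])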
Let me know if these suggestions make a difference!
I agree that the dataset is too small; 50 instances per person is probably not enough. You can use data augmentation with the Keras ImageDataGenerator class to increase the number of images, and rewrite your numpy reshaping code as a pre-processing function for the generator. I also noticed that you haven't shuffled the data, so the network is likely predicting the first class for everything (which may be why the accuracy is near random chance).
If increasing the dataset size doesn't help, you'll probably have to play around with the learning rate for the Adam optimizer (a sketch combining these ideas follows).
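This is only a rough sketch; the augmentation parameters and the lower learning rate are example values, and the reshape assumes the (244, 40000) array unflattens to (244, 100, 100, 4), i.e. RGBA as read by cv2.imread(..., -1):
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam

# 1) Shuffle, so batches are not grouped by class
perm = np.random.permutation(len(X))
X, Y = X[perm], Y[perm]

# 2) Augment: ImageDataGenerator works on image-shaped arrays, so reshape back first;
#    feed datagen.flow(X_img, Y, batch_size=15) to a model whose first layer is Flatten,
#    or flatten inside a preprocessing function, as suggested above
X_img = X.reshape(-1, 100, 100, 4)
datagen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

# 3) If that is not enough, try a smaller Adam learning rate
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(learning_rate=1e-4),   # older Keras versions use lr= instead
              metrics=['accuracy'])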
I have a multi-label text classification task. My dataset has 1369 classes:
# data shape
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
(54629, 500)
(23413, 500)
(54629, 1369)
(23413, 1369)
For this task, I've decided to use an LSTM NN with the following parameters:
# define model
maxlen = 400
inp = Input(shape=(maxlen, ))
embed_size = 128
x = Embedding(max_features, embed_size)(inp)
x = LSTM(60, return_sequences=True,name='lstm_layer')(x)
x = GlobalMaxPool1D()(x)
x = Dropout(0.1)(x)
x = Dense(2000, activation="relu")(x)
x = Dropout(0.1)(x)
x = Dense(1369, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
batch_size = 32
epochs = 2
model.fit(X_train, Y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1)
Question: Are there any scientific methods for determining the Dense and LSTM dimensionality (in my example, the LSTM dimension is 60, the first Dense dimension is 2000, and the second Dense dimension is 1369)?
If there are no scientific methods, maybe there are some heuristics or tips on how to do this for data of similar dimensions.
I chose these parameters randomly. I would like to improve the accuracy of the model and approach similar problems correctly.
I've heard that optimizing hyperparameters is an NP-hard problem; even if there are better ways to do it, it may not be worth it for your project given the overhead cost.
For the dimension of the LSTM layer, I've heard some empirically well-working numbers from conference talks, such as 128 or 256 units and 3 stacked layers. If you can plot your loss during training and you see it decrease dramatically in the first several epochs but then stop decreasing, you may want to increase the capacity of your model, i.e. make it deeper or wider. Otherwise, use as few parameters as possible.
For the dimension of the dense layers: if your task is many-to-many, meaning your labels have a fixed dimension, then the output Dense layer must have exactly that many units (1369 in your case). A sketch of the "wider / deeper" direction applied to your model is below.
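The 128-unit, two-layer LSTM stack here is just one of those empirical values, not something tuned to your data:
x = Embedding(max_features, embed_size)(inp)
x = LSTM(128, return_sequences=True, name='lstm_layer_1')(x)   # wider first LSTM layer
x = LSTM(128, return_sequences=True, name='lstm_layer_2')(x)   # stacked second LSTM layer
x = GlobalMaxPool1D()(x)
x = Dropout(0.1)(x)
x = Dense(2000, activation="relu")(x)
x = Dropout(0.1)(x)
x = Dense(1369, activation="sigmoid")(x)   # output size is fixed by the 1369 labels
model = Model(inputs=inp, outputs=x)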
I am using TFLearn with the MNIST data. I trained my neural network to 0.96 accuracy, but now I am not really sure how to predict a value.
Here is my code:
#getting mnist data to a zip in the computer.
mnist.SOURCE_URL = 'https://web.archive.org/web/20160117040036/http://yann.lecun.com/exdb/mnist/'
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
# Define the neural network
def build_model():
    # This resets all parameters and variables
    tf.reset_default_graph()
    net = tflearn.input_data([None, 784])
    net = tflearn.fully_connected(net, 100, activation='ReLU')
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
    # This model assumes that your network is named "net"
    model = tflearn.DNN(net)
    return model
# Build the model
model = build_model()
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=8)
# Here is the problem:
# let's say I want to predict what my neural network will reply if I feed it
# the second value from my trainX (the label of trainX[2] is 4)
pred = model.predict([trainX[2]])
print(pred)
#What I get is
[[2.6109733880730346e-05, 4.549271125142695e-06, 1.8098366126650944e-05, 0.003199575003236532, 0.20630565285682678, 0.0003870908112730831, 4.902480941382237e-05, 0.006617342587560415, 0.018498118966817856, 0.764894425868988]]
what I want is -> 4
The problem is that I am not sure how to use this predict function and put in the trainX value to get a prediction.
TensorFlow's prediction gives you a probabilistic output. It is sufficient to take the label with the maximum probability from pred to get the prediction of the network.
pred = np.argmax(pred, axis=1)
Which in this case is not 4, but 9.
Here np is the numpy module, imported as import numpy as np, but feel free to use tf.argmax(pred, 1) instead for TensorFlow's argmax.
You are getting a 9, which looks quite similar to a 4.
What model.predict returns are scores, and while the 5th value in the results array (index 4, since indexing starts at zero) gets a relatively high score (about 0.21, the second highest), your model gives the last digit (9) the highest score, about 0.76. It just means your classifier is a bit wrong here, so you should consider using a different one or playing with the hyper-parameters (a sketch of a slightly larger model is below).
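For example, here is a slightly larger variant of the build_model above, using only the tflearn calls you already have; the layer sizes are just example values to play with:
def build_model():
    tf.reset_default_graph()
    net = tflearn.input_data([None, 784])
    net = tflearn.fully_connected(net, 256, activation='ReLU')   # wider hidden layer
    net = tflearn.fully_connected(net, 128, activation='ReLU')   # extra hidden layer
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
    return tflearn.DNN(net)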