I am trying to learn neural networks for fun. I decided to try to classify Pokémon legendary cards, using a dataset from Kaggle. I read the documentation, followed Machine Learning Mastery guides, and read Medium articles to try to understand the process.
My problem/question: I tried predicting, and everything is predicted as "0", which I assume means False. Is my 92% a "false" accuracy? I read something about misleadingly high accuracy online.
Please help!
Some background information: the dataset has 800 rows and 12 columns. I am predicting the last column (True/False). The attributes are a mix of numerical and categorical, and I label-encoded the categorical columns. 92% of these cards are False and 8% are True.
I sampled 200 cards, ran a neural network on them, and got 91% accuracy... I also reset everything and got 92% accuracy on all 800 cards. Am I overfitting?
Thank you in advance for your help.
from sklearn.preprocessing import LabelEncoder
from keras.models import Sequential
from keras.layers import Dense

dataFrame = dataFrame.fillna(value='NaN')
labelencoder = LabelEncoder()
numpy_dataframe = dataFrame.values  # convert the dataframe to a NumPy array
numpy_dataframe[:, 0] = labelencoder.fit_transform(numpy_dataframe[:, 0])
numpy_dataframe[:, 1] = labelencoder.fit_transform(numpy_dataframe[:, 1])
numpy_dataframe
X = numpy_dataframe[:,0:10]
Y = numpy_dataframe[:,10]
model = Sequential()
model.add(Dense(12, input_dim=10, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, Y, epochs=150, batch_size=10)
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
#this shows that we have 91.88% accuracy with the whole dataframe
dataFrame200False = dataFrame.copy()  # work on a copy so the original dataFrame is untouched
dataFrame200False['Legendary'] = dataFrame200False['Legendary'].astype(str)
dataFrame200False = dataFrame200False[dataFrame200False['Legendary'].str.contains("False")]
dataFrame65True = dataFrame.copy()
dataFrame65True['Legendary'] = dataFrame65True['Legendary'].astype(str)
dataFrame65True = dataFrame65True[dataFrame65True['Legendary'].str.contains("True")]
DataFrameFalseSample = dataFrame200False.sample(200)
DataFrameFalseSample
dataFrameSampledTrueFalse = dataFrame65True.append(DataFrameFalseSample, ignore_index=True)
dataFrameSampledTrueFalse
# label encoding the sampled data
labelencoder = LabelEncoder()
numpy_dataSample = dataFrameSampledTrueFalse.values
numpy_dataSample[:, 0] = labelencoder.fit_transform(numpy_dataSample[:, 0])
numpy_dataSample[:, 1] = labelencoder.fit_transform(numpy_dataSample[:, 1])
numpy_dataSample
a = numpy_dataSample[:,0:10]  # features from the resampled data
b = numpy_dataSample[:,10]
model = Sequential()
model.add(Dense(12, input_dim=10, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(a, b, epochs=1000, batch_size=10)
scoresSample = model.evaluate(a, b)
print("\n%s: %.2f%%" % (model.metrics_names[1], scoresSample[1]*100))
dataFramePredictSample = dataFrame.sample(500)
labelencoder = LabelEncoder()
numpy_dataframeSamples = dataFramePredictSample.values
numpy_dataframeSamples[:, 0] = labelencoder.fit_transform(numpy_dataframeSamples[:, 0])
numpy_dataframeSamples[:, 1] = labelencoder.fit_transform(numpy_dataframeSamples[:, 1])
Xnew = numpy_dataframeSamples[:,0:10]
Ynew = numpy_dataframeSamples[:,10]
# make a prediction
Ypred = model.predict_classes(Xnew)
# show the inputs and predicted outputs
for i in range(len(Xnew)):
    print("X=%s, Predicted=%s" % (Xnew[i], Ypred[i]))
Problem:
The problem is that, as you stated, your dataset is heavily imbalanced. This means that you have a lot more training examples for class 0 than class 1. This causes the network, during training, to develop a heavy bias towards predicting class 0.
Evaluation:
The first thing you should do is not use accuracy as your evaluation measure! My suggestion would be to draw a confusion matrix so that you see exactly what the model is predicting. You could also look into macro-averaging (read this if you're not familiar with the technique).
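For example, a quick way to see this with scikit-learn (a sketch, assuming X and Y are the encoded features and 0/1 labels from your code, and the older Sequential.predict_classes API):
from sklearn.metrics import confusion_matrix, classification_report

y_pred = model.predict_classes(X).ravel()             # predicted class (0 or 1) for every card
print(confusion_matrix(Y.astype(int), y_pred))        # rows = true class, columns = predicted class
print(classification_report(Y.astype(int), y_pred))   # per-class precision/recall plus macro averages
An all-zero second column in the confusion matrix would confirm that the model never predicts the Legendary class.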
Dealing with the problem:
There are two ways you can improve the performance of the model:
Resample your data so that it becomes balanced. You have a couple of options here. The most common way is to oversample the minority class (e.g. with SMOTE) so that it reaches the population of the majority. Another option is to undersample the majority class (e.g. with Cluster Centroids) so that its population drops to that of the minority.
Use class weights during training. This forces the network to pay more attention to samples from the minority class (read this post for more info). A small sketch of both options follows below.
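Both options, sketched under the assumption that X and Y are the encoded features and 0/1 labels from the question, that X is fully numeric after encoding, and that imbalanced-learn is installed:
# Option 1: oversample the minority class with SMOTE (imbalanced-learn)
from imblearn.over_sampling import SMOTE
X_res, Y_res = SMOTE().fit_resample(X, Y.astype(int))
model.fit(X_res, Y_res, epochs=150, batch_size=10)

# Option 2: keep the data as-is and weight the loss per class in Keras
class_weights = {0: 1.0, 1: 92 / 8}  # errors on the rare "True" class cost ~11.5x more
model.fit(X, Y.astype(int), epochs=150, batch_size=10, class_weight=class_weights)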
Related
I am a newbie to machine learning. I've been dealing with some problems for a few weeks, and I hope someone can help here:
I have a dataset that involves discrete values, and the rest are the same. Here is my dataset:
I have these features: A, B, C, D, E, and I would like to predict F.
I first decided that my model should be a classifier, and I tried to set up this model:
# first neural network with keras tutorial
import numpy as np
from numpy import genfromtxt
from keras.models import Sequential
from keras.layers import Dense
# load the dataset
data = genfromtxt("C:/Users/Kerim/Desktop/data/30bin.csv", dtype=complex, delimiter=",")
x = np.zeros((30000, 8), dtype=float)
x[:, 0:6] = np.array(data[:, 0:6], dtype="float")       # real part of the first six columns (this raises the ComplexWarning below)
x[:, 6:8] = np.array(data.imag[:, 2:4], dtype="float")  # imaginary part of columns 2-3
dataset = np.zeros((30000, 8), dtype=float)              # combined real/imaginary feature array
dataset[:, 0:8] = np.array(x[:, 0:8], dtype="float")
y = np.array(dataset[:, 5], dtype="float")
# define the keras model
model = Sequential()
model.add(Dense(12, input_dim=5, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='tanh'))
# compile the keras model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the keras model on the dataset
model.fit(x, y, epochs=150, batch_size=10)
# evaluate the keras model
_, accuracy = model.evaluate(x, y)
print('Accuracy: %.2f' % (accuracy*100))
predictions = model.predict_classes(x)
# summarize the first 20 cases
for i in range(20):
    print('%s => %d (expected %d)' % (x[i].tolist(), predictions[i], y[i]))
In this code I get this warning:
ComplexWarning: Casting complex values to real discards the imaginary part
after removing the cwd from sys.path.
When I use tanh my accuracy is 0.00 and my prediction result is always the same. Why is my prediction result always the same?
Then I took only the real part, to deal with the complex-number problem later, by changing the data-loading part:
# load the dataset
dataset = genfromtxt("C:/Users/Kerim/Desktop/data/30bin.csv",dtype=complex,delimiter=",")
x = np.array(dataset.real[:1000,0:5],dtype = "float")
y = np.array(dataset.real[:1000,5],dtype = "float")
I did some research and I am now in doubt about these subjects:
1-) Is my choice of error function correct?
2-) I know sigmoid makes predictions in the range 0-1, and I decided to use tanh. Can this function be used to predict the numbers 0.7, 0.8 and 0.9?
3-) What should I do to constrain the prediction results to these three numbers?
My current CNN has relatively high accuracy but a low AUC score, so I want to train my model considering both accuracy and AUC. However, when I tried to add 'auc' as a second metric, training fails before the first epoch starts.
This is the error message I am getting:
FailedPreconditionError: Error while reading resource variable conv2d_4/kernel from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/conv2d_4/kernel/N10tensorflow3VarE does not exist.
[[{{node conv2d_4/Conv2D/ReadVariableOp}}]]
I have tried the auc function provided in previous discussions; sorry, I can't find the post now.
import tensorflow as tf
import keras
from keras import models, layers
from keras import backend as K

def auc(y_true, y_pred):
    auc = tf.metrics.auc(y_true, y_pred)[1]
    K.get_session().run(tf.local_variables_initializer())
    return auc
auc_model = models.Sequential()
auc_model.add(layers.Conv1D(kernel_size=200, filters=10, input_shape=(1644, 1), activation='relu'))
auc_model.add(layers.MaxPooling1D(pool_size=50, strides=10))
auc_model.add(layers.Reshape((40, 35, 1)))
auc_model.add(layers.Conv2D(16, (3, 3), activation='relu'))
auc_model.add(layers.Conv2D(16, (3, 3), activation='relu'))
auc_model.add(layers.MaxPooling2D((2, 2)))
auc_model.add(layers.Flatten())
auc_model.add(layers.Dense(32, activation='relu', kernel_regularizer=keras.regularizers.l2(0.001)))
auc_model.add(layers.Dropout(rate=0.2))
auc_model.add(layers.Dense(1, activation='sigmoid'))
auc_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy', auc])
auc_model.summary()
from tensorflow.keras.callbacks import EarlyStopping
target = y_tr.columns[0]
rows_tr = np.isfinite(y_tr[target]).values
rows_te = np.isfinite(y_te[target]).values
x_train = x_tr[rows_tr].reshape((x_tr[rows_tr].shape[0], 1644, 1))
x_test = x_te[rows_te].reshape((x_te[rows_te].shape[0], 1644, 1))
auc_model.fit( x_train, y_tr[target][rows_tr],
validation_data=(x_test, y_te[target][rows_te]), epochs = 5)
print('\n# Evaluate on test data')
results = auc_model.evaluate(x_test, y_te[target][rows_te], batch_size = 8, verbose=1)
I want to start my training process considering both accuracy and auc score. Thanks.
A metric is only for reporting an evaluation of your trained model at each epoch; it does not change anything about your training.
If you want your model to actually take the AUC into account, you need to modify the loss. Minimizing the binary_crossentropy loss naturally maximizes accuracy without regard to the AUC, which becomes more problematic when you have an imbalanced dataset, i.e. one skewed class.
If you want it really only for the metric, you can see this post:
How to compute Receiving Operating Characteristic (ROC) and AUC in keras?
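If you are on TensorFlow 2.x / tf.keras, an easier route for the metric alone (a sketch, not tied to your exact versions) is the built-in streaming AUC metric, which avoids the uninitialized-variable error from the hand-rolled tf.metrics.auc wrapper:
import tensorflow as tf

auc_model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy', tf.keras.metrics.AUC(name='auc')])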
But if you truly want your model to maximize the AUC, you should write a custom loss function in Keras and pass it as the loss of your model.
There is a good discussion here:
https://www.kaggle.com/c/invasive-species-monitoring/discussion/32762
I'm learning neural networks and have implemented object classification on the CIFAR-10 dataset using the Keras library. Here is my definition of the neural network in Keras:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Define the model and train it
model = Sequential()
model.add(Dense(units = 60, input_dim = 1024, activation = 'relu'))
model.add(Dense(units = 50, activation = 'relu'))
model.add(Dense(units = 60, activation = 'relu'))
model.add(Dense(units = 70, activation = 'relu'))
model.add(Dense(units = 30, activation = 'relu'))
model.add(Dense(units = 10, activation = 'sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=10000)
So I have 1 input layer with input dimension 1024, i.e. (1024,) (each 32 * 32 * 3 image is first converted to grayscale, giving dimensions of 32 * 32), 5 hidden layers and 1 output layer, as defined in the code above.
When I train my model for 50 epochs, I get an accuracy of 0.9, or 90%. When I evaluate it on the test dataset, I also get an accuracy of approximately 90%. Here is the line of code that evaluates the model:
print (model.evaluate(X_test, y_test))
This prints following loss and accuracy:
[1.611809492111206, 0.8999999761581421]
But when I calculate the accuracy manually, by making predictions on each test image, I get an accuracy of around 11% (almost the same as random guessing). Here is my code to calculate it manually:
wrong = 0
for x, y in zip(X_test, y_test):
    if not (np.argmax(model.predict(x.reshape(1, -1))) == np.argmax(y)):
        wrong += 1
print(wrong)
This prints 9002 wrong predictions out of 10000. So what am I missing here? Why are the two accuracies almost exact complements of each other (100 - 89 = 11%)? Any intuitive explanation will help! Thanks.
EDIT:
Here is my code which processes the dataset:
# Process the training and testing data into a form the neural network can consume
# convert a given colour image to grayscale
def rgb2gray(rgb):
    return np.dot(rgb, [0.2989, 0.5870, 0.1140])
X_train, y_train, X_test, y_test = [], [], [], []
def process_batch(batch_path, is_test=False):
    batch = unpickle(batch_path)
    imgs = batch[b'data']
    labels = batch[b'labels']
    for img in imgs:
        img = img.reshape(3, 32, 32).transpose([1, 2, 0])
        img = rgb2gray(img)
        img = img.reshape(1, -1)
        if not is_test:
            X_train.append(img)
        else:
            X_test.append(img)
    for label in labels:
        if not is_test:
            y_train.append(label)
        else:
            y_test.append(label)
process_batch('cifar-10-batches-py/data_batch_1')
process_batch('cifar-10-batches-py/data_batch_2')
process_batch('cifar-10-batches-py/data_batch_3')
process_batch('cifar-10-batches-py/data_batch_4')
process_batch('cifar-10-batches-py/data_batch_5')
process_batch('cifar-10-batches-py/test_batch', True)
number_of_classes = 10
number_of_batches = 5
number_of_test_batch = 1
X_train = np.array(X_train).reshape(meta_data[b'num_cases_per_batch'] * number_of_batches, -1)
print ('Shape of training data: {0}'.format(X_train.shape))
# create labels to one hot format
y_train = np.array(y_train)
y_train = np.eye(number_of_classes)[y_train]
print ('Shape of training labels: {0}'.format(y_train.shape))
# Process testing data
X_test = np.array(X_test).reshape(meta_data[b'num_cases_per_batch'] * number_of_test_batch, -1)
print ('Shape of testing data: {0}'.format(X_test.shape))
# create labels to one hot format
y_test = np.array(y_test)
y_test = np.eye(number_of_classes)[y_test]
print ('Shape of testing labels: {0}'.format(y_test.shape))
The reason this is happening is the loss function you are using. You are using binary cross-entropy when you should be using categorical cross-entropy as the loss. Binary cross-entropy is only for a two-class problem, but you have 10 classes here because of CIFAR-10.
The accuracy metric it reports is in fact misleading you, because it is showing per-label binary classification performance. The solution is to retrain your model with categorical_crossentropy.
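Concretely, a sketch of the corrected model definition (assuming y_train and y_test are one-hot encoded over the 10 classes, as in the preprocessing code you posted):
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(units=60, input_dim=1024, activation='relu'))
model.add(Dense(units=50, activation='relu'))
model.add(Dense(units=60, activation='relu'))
model.add(Dense(units=70, activation='relu'))
model.add(Dense(units=30, activation='relu'))
model.add(Dense(units=10, activation='softmax'))   # softmax over the 10 classes instead of sigmoid
model.compile(loss='categorical_crossentropy',     # multi-class loss instead of binary
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=10000)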
This post has more details: Keras binary_crossentropy vs categorical_crossentropy performance?
Related - this post is answering a different question, but the answer is essentially what your problem is: Keras: model.evaluate vs model.predict accuracy difference in multi-class NLP task
Edit
You mentioned in the comments that the accuracy of your model is hovering at around 10% and not improving. Examining your Colab notebook after the change to categorical cross-entropy, it appears that you are not normalizing your data. Because the pixel values are originally unsigned 8-bit integers, creating your training set promotes them to floating point, but because of the large dynamic range of the data, your neural network has a hard time learning the right weights. When you try to update the weights, the gradients are so small that there are essentially no updates, and hence your network performs like random chance. The solution is simply to divide your training and test datasets by 255 before you proceed:
X_train /= 255.0
X_test /= 255.0
This will transform your data so that the dynamic range scales from [0, 255] to [0, 1]. Your model will have an easier time training due to the smaller dynamic range, which should help the gradients propagate rather than vanish because of the large scale before normalizing. Because your original model has a significant number of dense layers, with the original dynamic range of your data the gradient updates would most likely vanish, which is why the performance was poor initially.
When I run your notebook, I get 37% accuracy. This is not unexpected with CIFAR-10 and only a fully-connected / dense network. Also when you run your notebook now, the accuracy and the fraction of wrong examples match.
If you want to increase accuracy, I have a couple of suggestions:
Actually include colour information. Each object in CIFAR-10 has a distinct colour profile that should help in discrimination.
Add convolutional layers. I'm not sure where you are in your learning, but convolutional layers help learn and extract the right features from the image, so that the most useful features are presented to the dense layers and classification accuracy increases. Right now you're classifying raw pixels, which is not advisable given how noisy they can be and how unconstrained things can get (rotation, translation, skew, scale, etc.). A minimal sketch of such a network follows below.
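A rough sketch of a small convolutional baseline (assumptions: X_train and X_test reshaped to (N, 32, 32, 3) colour images scaled to [0, 1], labels one-hot encoded as before):
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

cnn = Sequential()
cnn.add(Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
cnn.add(MaxPooling2D((2, 2)))
cnn.add(Conv2D(64, (3, 3), activation='relu'))
cnn.add(MaxPooling2D((2, 2)))
cnn.add(Flatten())
cnn.add(Dense(64, activation='relu'))
cnn.add(Dense(10, activation='softmax'))
cnn.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
cnn.fit(X_train, y_train, epochs=20, batch_size=128, validation_data=(X_test, y_test))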
I'm a student in hydraulic engineering, working on a neural network during my internship, so this is something new for me.
I created my neural network, but it gives me a high loss and I don't know what the problem is... you can see the code:
def create_model():
    model = Sequential()
    # Adding the input layer
    model.add(Dense(26, activation='relu', input_shape=(n_cols,)))
    # Adding the hidden layer
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    # Adding the output layer
    model.add(Dense(2))
    # Compiling the RNN
    model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
    return model
kf = KFold(n_splits=5, shuffle=True)
model = create_model()
scores = []
for i in range(5):
    result = next(kf.split(data_input), None)
    input_train = data_input[result[0]]
    input_test = data_input[result[1]]
    output_train = data_output[result[0]]
    output_test = data_output[result[1]]
    # Fitting the RNN to the Training set
    model.fit(input_train, output_train, epochs=5000, batch_size=200, verbose=2)
    predictions = model.predict(input_test)
    scores.append(model.evaluate(input_test, output_test))
print('Scores from each Iteration: ', scores)
print('Average K-Fold Score :', np.mean(scores))
When I execute my code, the result looks like this:
Scores from each Iteration: [[93.90406122928908, 0.8907562990148529], [89.5892979597845, 0.8907563030218878], [81.26530176050522, 0.9327731132507324], [56.46526102659081, 0.9495798339362905], [54.314151876112994, 0.9579831877676379]]
Average K-Fold Score : 38.0159922589274
Can anyone help me, please? What can I do to make the loss lower?
There are several issues, both with your questions and with your code...
To start with, in general we cannot say that an MSE loss of X value is low or high. Unlike the accuracy in classification problems which is by definition in [0, 1], the loss is not similarly bounded, so there is no general way of saying that a particular value is low or high, as you imply here (it always depends on the specific problem).
Having clarified this, let's go to your code.
First, judging from your loss='mean_squared_error', it would seem that you are in a regression setting, in which accuracy is meaningless; see What function defines accuracy in Keras when the loss is mean squared error (MSE)?. You have not shared what exact problem you are trying to solve here, but if it is indeed a regression one (i.e. prediction of some numeric value), you should get rid of metrics=['accuracy'] in your model compilation, and possibly change your last layer to a single unit, i.e. model.add(Dense(1)).
Second, as your code currently is, you don't actually fit independent models from scratch in each of your CV folds (which is the very essence of CV); in Keras, model.fit works cumulatively, i.e. it does not "reset" the model each time it is called, but it continues fitting from the previous call. That's exactly why if you see your scores, it is evident that the model is significantly better in the later folds (which already gives a hint for improving: add more epochs). To fit independent models as you should do for a proper CV, you should move create_model() inside the for loop.
Third, your usage of np.mean() here is again meaningless, as you average both the loss and the accuracy (i.e. apples with oranges) together; the fact that from 5 values of loss between 54 and 94 you end up with an "average" of 38 should have already alerted you that you are attempting something wrong. Truth is, if you dismiss the accuracy metric, as argued above, you would not have this problem here.
All in all, here is how it seems that your code should be in principle (but again, I have not the slightest idea of the exact problem you are trying to solve, so some details might be different):
def create_model():
    model = Sequential()
    # Adding the input layer
    model.add(Dense(26, activation='relu', input_shape=(n_cols,)))
    # Adding the hidden layer
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    model.add(Dense(60, activation='relu'))
    # Adding the output layer
    model.add(Dense(1))  # change to 1 unit
    # Compiling the RNN
    model.compile(optimizer='adam', loss='mean_squared_error')  # dismiss accuracy
    return model

kf = KFold(n_splits=5, shuffle=True)
scores = []
for i in range(5):
    result = next(kf.split(data_input), None)
    input_train = data_input[result[0]]
    input_test = data_input[result[1]]
    output_train = data_output[result[0]]
    output_test = data_output[result[1]]
    # Fitting the RNN to the Training set
    model = create_model()  # move create_model here
    model.fit(input_train, output_train, epochs=10000, batch_size=200, verbose=2)  # increase the epochs
    predictions = model.predict(input_test)
    scores.append(model.evaluate(input_test, output_test))
print('Loss from each Iteration: ', scores)
print('Average K-Fold Loss :', np.mean(scores))
I am new to neural networks and have two, probably pretty basic, questions. I am setting up a generic LSTM network to predict the future of a sequence, based on multiple features.
My training data is therefore of the shape (number of training sequences, length of each sequence, number of features for each timestep).
Or to make it more specific, something like (2000, 10, 3).
I try to predict the value of one feature, not of all three.
Problem:
If I make my Network deeper and/or wider, the only output I get is the constant mean of the values to be predicted. Take this setup for example:
from keras.models import Model
from keras.layers import Input, LSTM, Dense
from keras.callbacks import ReduceLROnPlateau, EarlyStopping

z0 = Input(shape=[None, len(dataset[0])])
z = LSTM(32, return_sequences=True, activation='softsign', recurrent_activation='softsign')(z0)
z = LSTM(32, return_sequences=True, activation='softsign', recurrent_activation='softsign')(z)
z = LSTM(64, return_sequences=True, activation='softsign', recurrent_activation='softsign')(z)
z = LSTM(64, return_sequences=True, activation='softsign', recurrent_activation='softsign')(z)
z = LSTM(128, activation='softsign', recurrent_activation='softsign')(z)
z = Dense(1)(z)
model = Model(inputs=z0, outputs=z)
print(model.summary())
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
history= model.fit(trainX, trainY,validation_split=0.1, epochs=200, batch_size=32,
callbacks=[ReduceLROnPlateau(factor=0.67, patience=3, verbose=1, min_lr=1E-5),
EarlyStopping(patience=50, verbose=1)])
If I just use one layer, like:
z0 = Input(shape=[None, len(dataset[0])])
z = LSTM(4, activation='softsign', recurrent_activation='softsign')(z0)
z = Dense(1)(z)
model = Model(inputs=z0, outputs=z)
print(model.summary())
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
history= model.fit(trainX, trainY,validation_split=0.1, epochs=200, batch_size=32,
callbacks=[ReduceLROnPlateau(factor=0.67, patience=3, verbose=1, min_lr=1E-5),
EarlyStopping(patience=200, verbose=1)])
The predictions are somewhat reasonable, at least they are not constant anymore.
Why does that happen? Around 2000 samples is not that many, but in the case of overfitting I would expect the predictions to match the training data perfectly...
EDIT: Solved. As stated in the comments, it's just that Keras always expects batches.
When I use:
`test=model.predict(trainX[0])`
to get the prediction for the first sequence, I get a dimension error:
"Error when checking : expected input_1 to have 3 dimensions, but got array with shape (3, 3)"
I need to feed in an array of sequences like:
`test=model.predict(trainX[0:1])`
This is a workaround, but I am not really sure whether this has any deeper meaning or is just a syntax thing...
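For reference, the same thing can be done by explicitly adding a batch axis of size 1, since Keras models always operate on batches (a sketch, assuming trainX is an array of shape (num_sequences, timesteps, features)):
import numpy as np

single = np.expand_dims(trainX[0], axis=0)  # shape (1, timesteps, features)
test = model.predict(single)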
This is because you have not normalised your input data.
Any neural network model will initially have its weights initialised around zero. Since your training dataset contains only positive values, the model tries to adjust its weights to predict only positive values. However, the activation function (in your case softsign) saturates and maps them towards 1, so the model can do little except add a bias. That is why you are getting an almost constant line around the average value of the dataset.
For this you can use a general tool like scikit-learn to pre-process your data. If you are using a pandas DataFrame, something like this will help:
data_df = (data_df - data_df.mean()) / data_df.std()
Or, to keep the normalisation parameters inside the model, you can consider adding a batch normalization layer to your model:
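A minimal sketch of that option, based on the single-layer model from the question (the feature count of 3 per timestep is an assumption):
from keras.models import Model
from keras.layers import Input, LSTM, Dense, BatchNormalization

z0 = Input(shape=[None, 3])        # 3 features per timestep (assumption)
z = BatchNormalization()(z0)       # learn the input scaling as part of the model
z = LSTM(4, activation='softsign', recurrent_activation='softsign')(z)
z = Dense(1)(z)
model = Model(inputs=z0, outputs=z)
model.compile(loss='mean_squared_error', optimizer='adam')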