I've been trying to build an image classifier with a CNN. There are 2,300 images in my dataset and two categories: men and women. Here's the model I used:
early_stopping = EarlyStopping(min_delta = 0.001, patience = 30, restore_best_weights = True)
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(256, (3, 3), input_shape=X.shape[1:], activation = 'relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Conv2D(256, (3, 3), input_shape=X.shape[1:], activation = 'relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(tf.keras.layers.Dense(64))
model.add(tf.keras.layers.Dense(1, activation='softmax'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
h= model.fit(xtrain, ytrain, validation_data=(xval, yval), batch_size=32, epochs=30, callbacks = [early_stopping], verbose = 0)
The accuracy of this model is 0.501897 and the loss is 7.595693 (the model is stuck at these numbers in every epoch), but if I replace the softmax activation with sigmoid, the accuracy is about 0.98 and the loss about 0.06. Why does something this strange happen with softmax? All the information I could find says these two activations are similar and that softmax is even better, but I couldn't find anything about this kind of abnormality. I'd be glad if someone could explain what the problem is.
Summary of your results:
a) CNN with Softmax activation function -> accuracy ~ 0.50, loss ~ 7.60
b) CNN with Sigmoid activation function -> accuracy ~ 0.98, loss ~ 0.06
TLDR
Update:
Now that I also see you are using only 1 output neuron with softmax: you will not be able to capture the second class in binary classification. With softmax you need to define K neurons in the output layer, where K is the number of classes you want to predict, whereas with sigmoid 1 output neuron is sufficient for binary classification.
So, in short, this is what should change in your code when using softmax for 2 classes:
#use 2 neurons with softmax
model.add(tf.keras.layers.Dense(2, activation='softmax'))
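For completeness, a minimal sketch of the matching label and loss changes (my addition, assuming ytrain and yval hold integer 0/1 labels as in the question):
# one-hot encode the 0/1 labels so they match the 2 softmax outputs
ytrain_oh = tf.keras.utils.to_categorical(ytrain, num_classes=2)
yval_oh = tf.keras.utils.to_categorical(yval, num_classes=2)
model.compile(loss='categorical_crossentropy',  # pairs with softmax + one-hot targets
              optimizer='adam',
              metrics=['accuracy'])
h = model.fit(xtrain, ytrain_oh, validation_data=(xval, yval_oh),
              batch_size=32, epochs=30, callbacks=[early_stopping])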
Additionally:
When doing binary classification, a sigmoid function is more suitable, as it is simply computationally more efficient than the more general softmax function (which is normally used for multi-class prediction when you have K > 2 classes).
Further Reading:
Some attributes of selected activation functions
If the short answer above is not enough, I can share some things I've learned from my research on activation functions in neural networks:
To begin with, let's be clear about the terms activation and activation function:
activation (alpha): the state of a neuron, quantified for neurons in hidden or output layers by the weighted sum of input signals from the previous layer
activation function f(alpha): a function that transforms an activation into a neuron's output signal. It is usually non-linear and differentiable, for instance the sigmoid function. A lot of research and many applications have used the sigmoid function (see Goodfellow, Bengio & Courville, 2016, p. 67 ff.). Usually the same activation function is used throughout the network, but it is possible to use several (e.g. different ones in different layers).
Now to the effects of activation functions:
The choice of activation function can have an immense impact on the learning of a neural network (as you have seen in your example). Historically it was common to use the sigmoid function, as it models a saturating neuron well. Today, especially in CNNs, other activation functions, including piecewise-linear ones like relu, are preferred over the sigmoid. There are many different functions, just to name some: sigmoid, tanh, relu, prelu, elu, maxout, max, argmax, softmax, etc.
Now let's only compare sigmoid, relu/maxout and softmax:
# pseudo code / formula
sigmoid = f(alpha) = 1 / (1 + exp(-alpha))
relu = f(alpha) = max(0,alpha)
maxout = f(alpha) = max(alpha1, alpha2)
softmax = f(alpha_j) = exp(alpha_j) / sum_k(exp(alpha_k))
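A quick NumPy rendering of the sigmoid, relu and softmax formulas (my own sketch, with the exp terms of softmax written out explicitly) makes the differences concrete:
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def relu(a):
    return np.maximum(0.0, a)

def softmax(a):
    e = np.exp(a - np.max(a))  # subtract max for numerical stability
    return e / e.sum()

a = np.array([2.0, -1.0])
print(sigmoid(a))  # each element squashed independently into (0, 1)
print(relu(a))     # [2. 0.]
print(softmax(a))  # entries sum to 1 across the whole vector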
sigmoid:
in binary classification preferably used for output layer
values can range between [0,1], suitable for a probabilistic interpretation (+)
saturated neurons can eliminate gradient (-)
not zero centered (-)
exp() is computationally expensive (-)
relu:
no saturated neurons in positive regions (+)
computationally less expensive (+)
not zero centered (-)
saturated neurons in negative regions (-)
maxout:
positive attributes of relu (+)
doubles the number of parameters per neuron, normally requires an increased learning effort (-)
softmax:
can be seen as a generalization of the sigmoid function
mainly used as the output activation function in multi-class prediction problems
values range between [0,1], suitable for a probabilistic interpretation (+)
computationally more expensive because of exp() terms (-)
Some good references for further reading:
http://cs231n.stanford.edu/2020/syllabus
http://deeplearningbook.org (Goodfellow, Bengio & Courville)
https://arxiv.org/pdf/1811.03378.pdf
https://papers.nips.cc/paper/2018/file/6ecbdd6ec859d284dc13885a37ce8d81-Paper.pdf
The reason you see those different results is the size of your output layer: it is 1 neuron.
Softmax by definition requires more than 1 output neuron to make sense. A single softmax neuron will always output 1 (look up the formula and think about it). That is why you see ~50% accuracy: your network always predicts class 1.
Sigmoid doesn't have this problem and can output anything, that's why it trains.
If you want to test softmax, you have to add an output neuron for each class and then one-hot encode your ytrain and yval (look up one-hot encoding for more explanation). In your case this means: label 0 -> [1, 0], label 1 -> [0, 1]; the index of the 1 encodes the class. I'm not sure, but in that case I believe you'd use categorical cross-entropy. I was not able to tell conclusively from the docs, but it seems to me that binary cross-entropy expects 1 output neuron that's either 0 or 1 (where sigmoid is the correct activation to use), whereas categorical cross-entropy expects one output neuron for each class, where softmax makes sense. You could use sigmoid even in the multi-output case, but it's not common.
So in short, it seems to me that binary cross-entropy expects the class encoded by the value of the single output neuron, whereas categorical cross-entropy expects the class encoded by which output neuron is the most active (in simplified terms).
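To see concretely why a single softmax output neuron is degenerate, here is a tiny check (my own snippet, not from the question's code):
import tensorflow as tf

logits = tf.constant([[-3.0], [0.0], [5.0]])  # batch of 3 samples, 1 output neuron each
print(tf.nn.softmax(logits, axis=-1))         # every row is [1.0], since exp(a) / exp(a) == 1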
I am trying to better understand the decision boundary of a binary classifier by getting an explicit set of points on the decision boundary. The approach I am taking is as follows:
I have a simple feedforward neural network with softmax on the final activation layer trained on one-hot encoded data. It returns a vector (1,0) for the first class and (0,1) for the second class.
model1 = keras.Sequential([
    layers.Flatten(input_shape=(2,)),
    layers.Dense(20, activation='relu'),
    layers.Dense(20, activation='relu'),
    layers.Dense(20, activation='relu'),
    layers.Dense(2, activation='softmax')
])
If the model is very confident that the datapoint is in the first class, the vector before applying the softmax function will look like (0.9, 0.1). If it is less certain, the vector before the softmax function will look like (0.6, 0.4). If it is a point on the decision boundary, the vector will look like (0.5, 0.5). If the model is certain that the point is in the second class, it would look something like (0.1, 0.9) before the softmax activation.
In the different cases we can evaluate the difference for a vector (x1,x2) as x1-x2. In the first case this gives us 0.9-0.1=0.8, in the second 0.6-0.4=0.2, in the third 0.5-0.5=0, in the fourth 0.1-0.9=-0.8.
The observation here is that the set of points where this difference is 0 defines the decision boundary of our machine learning problem, which I am trying to understand better.
Is there a way to extract the output of the machine learning model before it passes through the softmax layer to evaluate this difference?
If the above is not possible, can I use transfer learning to achieve the same goal? In particular, can I train model1 as above and construct a model2
model2 = keras.Sequential([
    ...
    layers.Dense(2)
])
and transfer the weights from model1 to model2 to evaluate this difference?
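One way to do this (a minimal sketch of my own, assuming model1 is already trained and model2 repeats the same layer stack with a linear final Dense) is to copy the trained weights into the logits-only model:
from tensorflow import keras
from tensorflow.keras import layers

model2 = keras.Sequential([
    layers.Flatten(input_shape=(2,)),
    layers.Dense(20, activation='relu'),
    layers.Dense(20, activation='relu'),
    layers.Dense(20, activation='relu'),
    layers.Dense(2)  # linear activation: these are the pre-softmax values
])

# The layer and weight shapes match model1 exactly, so the parameters copy over directly.
model2.set_weights(model1.get_weights())

logits = model2.predict(x_points)         # x_points: your candidate inputs, shape (n, 2)
difference = logits[:, 0] - logits[:, 1]  # ~0 near the decision boundary
Here x_points is a placeholder name for whatever inputs you want to evaluate.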
I already trained a neural network whose last layer uses sigmoid. If I cannot retrain the network with softmax, can I convert the final predictions into probabilities? Currently the output of
pred = fin_model.predict_proba(x_train)
is like
array([[0.65247375, 0.45892698],
[0.65919983, 0.4590024 ],
[0.15964866, 0.47771254],
[0.53297156, 0.47564888],
[0.16078213, 0.4779702 ]], dtype=float32)
The sum of each row, e.g. 0.6524 + 0.4589, is not 1 and thus cannot be a probability distribution. Is there a way to change it to probabilities?
The sigmoid function always returns a value between 0 and 1 and is mainly used in binary classification. A sigmoid output marks one class as close to 0 (<= 0.5) and the other as close to 1 (> 0.5).
To use sigmoid, you need to define final layer as:
model.add(Dense(1, activation='sigmoid'))
However, you can also use softmax activation for binary classification. Softmax converts a vector of values into a probability distribution: each output lies in (0, 1) and the outputs sum to 1.
It can be declared in the final layer as:
model.add(Dense(2, activation='softmax'))
You can get more details on softmax and sigmoid here.
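If retraining with a 2-neuron softmax output is not an option, one rough post-hoc fix (my own suggestion; it only rescales the scores and does not give calibrated probabilities) is to renormalize each row so it sums to 1:
import numpy as np

pred = fin_model.predict_proba(x_train)             # shape (n, 2), independent sigmoid scores
pred_norm = pred / pred.sum(axis=1, keepdims=True)  # each row now sums to 1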
I would like to learn image segmentation in TensorFlow with values in {0.0,1.0}. I have two images, ground_truth and prediction and each have shape (120,160). The ground_truth image pixels only contain values that are either 0.0 or 1.0.
The prediction image is the output of a decoder and the last two layers of it are a tf.layers.conv2d_transpose and tf.layers.conv2d like so:
# transforms (?,120,160,30) -> (?,120,160,15)
outputs = tf.layers.conv2d_transpose(outputs, filters=15, kernel_size=1, strides=1, padding='same')
# ReLU
outputs = activation(outputs)
# transforms (?,120,160,15) -> (?,120,160,1)
outputs = tf.layers.conv2d(outputs, filters=1, kernel_size=1, strides=1, padding='same')
The last layer does not carry an activation function, and thus its output is unbounded. I use the following loss function:
logits = tf.reshape(predicted, [-1, predicted.get_shape()[1] * predicted.get_shape()[2]])
labels = tf.reshape(ground_truth, [-1, ground_truth.get_shape()[1] * ground_truth.get_shape()[2]])
loss = 0.5 * tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=labels,logits=logits))
This setup converges nicely. However, I have realized that the outputs of my last NN layer at validation time seem to be in [-inf, inf]. If I visualize the output, I can see that the segmented object is not segmented, since almost all pixels are "activated". The distribution of values for a single output of the last conv2d layer looks like this:
Question:
Do I have to post-process the outputs (clip negative values, run the output through a sigmoid activation, etc.)? What do I need to do to force my output values to be in {0, 1}?
Solved it. The problem was that tf.nn.sigmoid_cross_entropy_with_logits runs the logits through a sigmoid internally, which of course is not applied at validation time, since the loss op is only called during training. The solution therefore is to:
make sure to run the network outputs through a tf.nn.sigmoid at validation/test time like this:
return output if is_training else tf.nn.sigmoid(output)
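If you additionally need hard {0, 1} masks rather than probabilities (a small addition of my own, assuming a 0.5 cutoff is acceptable), threshold the sigmoid output:
probs = tf.nn.sigmoid(output)            # values in (0, 1)
mask = tf.cast(probs > 0.5, tf.float32)  # hard {0.0, 1.0} segmentation mask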
I am trying to train an autoencoder on some simulated data where an input is basically a vector with Gaussian noise applied. The code is almost exactly the same as in this example: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/autoencoder.py
The only differences are that I changed the network parameters and the cost function:
n_hidden_1 = 32 # 1st layer num features
n_hidden_2 = 16 # 2nd layer num features
n_input = 149 # LunaH-Map data input (number of counts per orbit)
cost = tf.reduce_mean(-tf.reduce_sum(y_true * tf.log(y_pred), reduction_indices=[1]))
During training, the error steadily decreases down to 0.00015, but the predicted and true values are very different, e.g.
as shown in this image. In fact, the predicted y vector is almost all ones.
How is it possible to get decreasing error with very wrong predictions? Is it possible that my network is just trying to move the weights closer to log(1) so as to minimize the cross entropy cost? If so, how do I combat this?
Yes, the network simply learns to predict 1, which reduces the loss. The cross-entropy loss you are using is the categorical one, which is meant for the case where y_true is a one-hot code (e.g. [0,0,1,0]) and the final layer is softmax (ensuring all outputs sum to 1). When y_true[idx] is 0 the loss doesn't care about y_pred[idx]; when y_true[idx] is 1 and y_pred[idx] is 0 the loss is infinite (very high); and when y_pred[idx] is 1 the loss is again 0.
Categorical cross-entropy is therefore not suitable for autoencoders like this. For real-valued inputs (and hence outputs) the usual choice is mean squared error, which is what the example you cited uses. But there the final activation layer is sigmoid, implicitly assuming that each element of x lies between 0 and 1. So you either need to transform your data to match that, or make the last layer of the decoder linear.
If you do want to use a cross-entropy loss, you can use binary cross-entropy.
For inputs in [0, 1], binary cross-entropy is: -tf.reduce_mean(y_true * tf.log(y_pred) + (1 - y_true) * tf.log(1 - y_pred)). If you work it out, in both misprediction cases (true 0 / predicted 1, and true 1 / predicted 0) the network gets an infinite (very high) loss. Note that here the final layer should be sigmoid, and the elements of x should be between 0 and 1.
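As a concrete, numerically safer variant (a sketch of my own, using the built-in op instead of hand-written logs; decoder_logits stands for the last decoder layer with no activation applied):
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=decoder_logits))
y_pred = tf.nn.sigmoid(decoder_logits)  # apply the sigmoid explicitly for predictions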
I'm trying to train a multilayer perceptron to classify between true and false, based on the given input.
So far I'm using the example:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/multilayer_perceptron.py
But this gives me the output as a binary value, and I would rather have a decimal or percentage-based output.
What I've tried:
I've tried changing the optimizer to the other available ones, with no success.
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
The optimizer will not change the output that is actually given by the layers.
The provided example uses ReLU for the layers, which is fine for classification, but its outputs are not suited to modeling probabilities directly. You would be better off with a sigmoid function instead.
The sigmoid function can be used to model a probability, whereas ReLU models a positive real number.
In order to make it work for the provided example, change the multilayer_perceptron function to:
def multilayer_perceptron(_X, _weights, _biases):
    layer_1 = tf.sigmoid(tf.add(tf.matmul(_X, _weights['h1']), _biases['b1']), name="sigmoid_l1")  # hidden layer with sigmoid activation
    layer_2 = tf.sigmoid(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2']), name="sigmoid_l2")  # hidden layer with sigmoid activation
    return tf.matmul(layer_2, _weights['out'], name="matmul_lout") + _biases['out']
It basically replaces the ReLU activation with a sigmoid one.
Then, for the evaluation, use softmax as follows:
output1 = tf.nn.softmax((multilayer_perceptron(x, weights, biases)), name="output")
avd = sess.run(output1, feed_dict={x: features_t})
It will give you a value between 0 and 1 for each class. Also, you'll probably have to increase the number of epochs for this to work.