Simple GAN predicts NaN in Tensorflow after 2 steps - python

I'm implementing a basic GAN based on the one in the Tensorflow documentation.
After 2 training steps, the prediction from the generator is all NaN.
I don't know why this happens, but I noticed that the gradients of the 2nd convolution layer of the discriminator are all NaN from the first step onwards:
<tf.Tensor: shape=(5, 5, 64, 128), dtype=float32, numpy=
array([[[[nan, nan, nan, ..., nan, nan, nan], [nan, nan, nan,
My loss functions:
loss = BinaryCrossentropy(from_logits=True)

def generator_loss(discriminator_output):
    loss_value = loss(tf.ones_like(discriminator_output), discriminator_output)
    print(f"Generator loss: {loss_value}")
    return loss_value

def discriminator_loss(outputs_for_reals, outputs_for_fakes):
    real_loss = loss(tf.ones_like(outputs_for_reals), outputs_for_reals)
    fake_loss = loss(tf.zeros_like(outputs_for_fakes), outputs_for_fakes)
    print(f"Disc loss (real/fake): {real_loss}/{fake_loss}")
    return real_loss + fake_loss
The training loop:
def training_step(self, real_images):
    noise = tf.random.normal([BATCH_SIZE, NOISE_DIM])
    with tf.GradientTape() as generator_tape, tf.GradientTape() as discriminator_tape:
        generated_images = self._generator(noise, training=True)
        if True in tf.math.is_nan(generated_images)[0, 0, :, 0]:
            print("NaN result found")
        else:
            print("OK Result")
        results_for_real = self._discriminator(real_images, training=True)
        results_for_fake = self._discriminator(generated_images, training=True)
        generator_loss = self._generator_loss(results_for_fake)
        discriminator_loss = self._discriminator_loss(results_for_real, results_for_fake)
    generator_gradients = generator_tape.gradient(generator_loss,
                                                  self._generator.trainable_variables)
    discriminator_gradients = discriminator_tape.gradient(discriminator_loss,
                                                          self._discriminator.trainable_variables)
    self._generator_optimizer.apply_gradients(
        zip(generator_gradients, self._generator.trainable_variables))
    self._discriminator_optimizer.apply_gradients(
        zip(discriminator_gradients, self._discriminator.trainable_variables))
    return generator_loss, discriminator_loss, generated_images
I build the models exactly the same way as in the documentation.
Things I tried:
Reducing the learning rate
Running the model in different training modes (training=False/training=True)
Decorating the training step with tf.function
No matter what I do, calling the generator with any input will produce exclusively NaN elements.
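One way to locate where the NaNs first appear is TensorFlow's built-in numerics checking; a minimal sketch (it slows execution down considerably, so it is for debugging only):

import tensorflow as tf

# Raises an error that names the offending op as soon as any tensor in the
# program contains a NaN or Inf. Call once, before building the models and
# running the training loop.
tf.debugging.enable_check_numerics()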
Example output:
2020-12-11 18:15:42.783543: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
Relying on driver to perform ptx compilation.
Modify $PATH to customize ptxas location.
This message will be only logged once.
OK Result
Generator loss: 1.5484254360198975
Disc loss (real/fake): 1.4994642734527588/0.2869953215122223
OK Result
Generator loss: 1.2899521589279175
Disc loss (real/fake): 1.3189733028411865/0.3767632842063904
Backend Qt5Agg is interactive backend. Turning interactive mode on.
NaN result found

I was using an RTX 3080, which only works with CUDA 11, but I had CUDA 10.1 installed. With 10.1 the code still runs, but it produces garbage/NaN values.
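For anyone hitting the same symptom, a quick way to compare the CUDA version the installed TensorFlow binary was built against with the GPUs it can actually see at runtime:

import tensorflow as tf

build = tf.sysconfig.get_build_info()
# CUDA/cuDNN versions this TensorFlow wheel was compiled against
# (the keys are absent on CPU-only builds, hence .get()).
print(build.get("cuda_version"), build.get("cudnn_version"))

# GPUs visible at runtime; an empty list, or a card newer than what the
# build's CUDA version supports (e.g. an RTX 30xx with a CUDA 10.1 build),
# is a red flag.
print(tf.config.list_physical_devices("GPU"))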

Related

Gradient Accumulation with Custom model.fit in TF.Keras?

Please add a comment with your thoughts so that I can improve my question. Thank you. :-)
I'm trying to train a tf.keras model with Gradient Accumulation (GA). I don't want to use it in a custom training loop, but instead customize the .fit() method by overriding train_step. Is that possible? How can it be accomplished? The reason is that if we want the benefit of Keras built-in functionality like fit and callbacks, we don't want to use a custom training loop, but at the same time, if we want to override train_step for some reason (like GA or anything else), we can customize the fit method and still get the leverage of those built-in functions.
Also, I know the pros of using GA, but what are the major cons of using it? Why doesn't it come as a default, rather than an optional feature, in the framework?
# overriding train step
# my attempt
# it's not appropriately implemented
# and need to fix
class CustomTrainStep(keras.Model):
    def __init__(self, n_gradients, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.n_gradients = n_gradients
        self.gradient_accumulation = [
            tf.zeros_like(this_var) for this_var in self.trainable_variables
        ]

    def train_step(self, data):
        x, y = data
        batch_size = tf.cast(tf.shape(x)[0], tf.float32)

        # Gradient Tape
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(
                y, y_pred, regularization_losses=self.losses
            )

        # Calculate batch gradients
        gradients = tape.gradient(loss, self.trainable_variables)

        # Accumulate batch gradients
        accum_gradient = [
            (acum_grad + grad) for acum_grad, grad in
            zip(self.gradient_accumulation, gradients)
        ]
        accum_gradient = [
            this_grad / batch_size for this_grad in accum_gradient
        ]

        # apply accumulated gradients
        self.optimizer.apply_gradients(
            zip(accum_gradient, self.trainable_variables)
        )

        # TODO: reset self.gradient_accumulation
        # update metrics
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
Please, run and check with the following toy setup.
# Model
size = 32
input = keras.Input(shape=(size, size, 3))
efnet = keras.applications.DenseNet121(
    weights=None,
    include_top=False,
    input_tensor=input
)
base_maps = keras.layers.GlobalAveragePooling2D()(efnet.output)
base_maps = keras.layers.Dense(
    units=10, activation='softmax',
    name='primary'
)(base_maps)
custom_model = CustomTrainStep(
    n_gradients=10, inputs=[input], outputs=[base_maps]
)

# bind all
custom_model.compile(
    loss=keras.losses.CategoricalCrossentropy(),
    metrics=['accuracy'],
    optimizer=keras.optimizers.Adam()
)

# data
(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()
x_train = tf.expand_dims(x_train, -1)
x_train = tf.repeat(x_train, 3, axis=-1)
x_train = tf.divide(x_train, 255)
x_train = tf.image.resize(x_train, [size, size])  # if we want to resize
y_train = tf.one_hot(y_train, depth=10)

# customized fit
custom_model.fit(x_train, y_train, batch_size=64, epochs=3, verbose=1)
Update
I've found that others have also tried to achieve this and ended up with the same issue. One of them has a workaround, here, but it's too messy and I think there should be a better approach.
Update 2
The accepted answer (by Mr.For Example) is fine and works well with a single strategy. Now I'd like to start a 2nd bounty to extend it to support multi-GPU, TPU, and mixed-precision techniques. There are some complications; see details.
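Regarding the multi-GPU part of the bounty, the usual starting point is to construct and compile the model inside a tf.distribute strategy scope; a minimal sketch, using the CustomTrainStep class from the answer below (this only shows the scope itself and does not resolve the interaction between the accumulation variables and the replicas):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # All variable creation (model layers, the accumulation variables inside
    # CustomTrainStep, the optimizer) has to happen inside the scope so that
    # the variables are mirrored across replicas.
    inputs = tf.keras.Input(shape=(32, 32, 3))
    x = tf.keras.applications.DenseNet121(
        weights=None, include_top=False, input_tensor=inputs).output
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(10, activation='softmax')(x)

    custom_model = CustomTrainStep(n_gradients=10, inputs=[inputs], outputs=[outputs])
    custom_model.compile(
        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=['accuracy'],
        optimizer=tf.keras.optimizers.Adam())

# fit() is then called outside the scope as usual; Keras splits each batch
# across the replicas.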
Yes, it is possible to customize the .fit() method by overriding train_step without a custom training loop. The following simple example shows how to train a simple MNIST classifier with gradient accumulation:
import tensorflow as tf

class CustomTrainStep(tf.keras.Model):
    def __init__(self, n_gradients, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.n_gradients = tf.constant(n_gradients, dtype=tf.int32)
        self.n_acum_step = tf.Variable(0, dtype=tf.int32, trainable=False)
        self.gradient_accumulation = [
            tf.Variable(tf.zeros_like(v, dtype=tf.float32), trainable=False)
            for v in self.trainable_variables
        ]

    def train_step(self, data):
        self.n_acum_step.assign_add(1)

        x, y = data
        # Gradient Tape
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Calculate batch gradients
        gradients = tape.gradient(loss, self.trainable_variables)
        # Accumulate batch gradients
        for i in range(len(self.gradient_accumulation)):
            self.gradient_accumulation[i].assign_add(gradients[i])

        # If n_acum_step reach the n_gradients then we apply accumulated gradients to update the variables otherwise do nothing
        tf.cond(tf.equal(self.n_acum_step, self.n_gradients), self.apply_accu_gradients, lambda: None)

        # update metrics
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

    def apply_accu_gradients(self):
        # apply accumulated gradients
        self.optimizer.apply_gradients(zip(self.gradient_accumulation, self.trainable_variables))

        # reset
        self.n_acum_step.assign(0)
        for i in range(len(self.gradient_accumulation)):
            self.gradient_accumulation[i].assign(tf.zeros_like(self.trainable_variables[i], dtype=tf.float32))


# Model
input = tf.keras.Input(shape=(28, 28))
base_maps = tf.keras.layers.Flatten(input_shape=(28, 28))(input)
base_maps = tf.keras.layers.Dense(128, activation='relu')(base_maps)
base_maps = tf.keras.layers.Dense(units=10, activation='softmax', name='primary')(base_maps)

custom_model = CustomTrainStep(n_gradients=10, inputs=[input], outputs=[base_maps])

# bind all
custom_model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=['accuracy'],
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3)
)

# data
(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()
x_train = tf.divide(x_train, 255)
y_train = tf.one_hot(y_train, depth=10)

# customized fit
custom_model.fit(x_train, y_train, batch_size=6, epochs=3, verbose=1)
Outputs:
Epoch 1/3
10000/10000 [==============================] - 13s 1ms/step - loss: 0.5053 - accuracy: 0.8584
Epoch 2/3
10000/10000 [==============================] - 13s 1ms/step - loss: 0.1389 - accuracy: 0.9600
Epoch 3/3
10000/10000 [==============================] - 13s 1ms/step - loss: 0.0898 - accuracy: 0.9748
Pros:
Gradient accumulation is a mechanism to split the batch of samples —
used for training a neural network — into several mini-batches of
samples that will be run sequentially
Because GA calculates the loss and gradients after each mini-batch, but instead of updating the model parameters it waits and accumulates the gradients over consecutive batches, it can overcome memory constraints, i.e. use less memory to train the model as if a larger batch size were being used.
Example: If you run a gradient accumulation with steps of 5 and batch
size of 4 images, it serves almost the same purpose of running with a
batch size of 20 images.
We can also parallelize the training when using GA, i.e. aggregate gradients from multiple machines.
Things to consider:
This technique works so well that it is widely used; there are a few things to consider before using it, but I don't think they should be called cons. After all, all GA does is turn 4 + 4 into 2 + 2 + 2 + 2.
If your machine already has sufficient memory for a batch size that is large enough, there is no need to use it: it is well known that too large a batch size can lead to poor generalization, and it will certainly run slower if you use GA to reach a batch size that your machine's memory can already handle.
Reference:
What is Gradient Accumulation in Deep Learning?
Thanks to @Mr.For Example for his convenient answer.
Usually, I have also observed that Gradient Accumulation doesn't speed up training, since we are doing n_gradients forward passes and computing all the gradients; but it does speed up the convergence of the model. I also found that using the mixed_precision technique can be really helpful here. Details here.
policy = tf.keras.mixed_precision.Policy('mixed_float16')
tf.keras.mixed_precision.experimental.set_policy(policy)
Here is a complete gist.
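As a side note, in TensorFlow 2.4+ the experimental mixed-precision calls above were promoted to a stable API; a minimal sketch of the equivalent setup:

import tensorflow as tf

# TF 2.4+: stable replacement for the experimental set_policy call above.
tf.keras.mixed_precision.set_global_policy('mixed_float16')

# When gradients are computed and applied manually (as in the custom
# train_step above), wrapping the optimizer handles loss scaling; inside the
# tape the loss is scaled with optimizer.get_scaled_loss() and the gradients
# unscaled with optimizer.get_unscaled_gradients().
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam())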

How to visualize training process with output per patch/epoch?

My neural network in Keras learns a representation of my original data. In order to see exactly how it learns I thought it would be interesting to plot the data for every training batch (or epoch alternatively) and convert the plots into a video.
I'm stuck on how to get the outputs of my model during the training phase.
I thought about doing something like this (pseudo code):
epochs = 200
plt_outputs = []
for i in range(epochs):
    model.fit(x_train, y_train, epochs=1)
    plt_outputs.append(output_layer(x_test))
where output_layer is the layer in my neural network I'm interested in. Afterwards I would use plot_data to generate each plot and turn it into a video. (That part I'm not concerned about yet..)
But that doesn't strike me as a good solution, and I don't know how to get the output for every batch. Any thoughts on this?
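An alternative to calling fit() once per epoch is a Keras callback that runs the model on the test data at the end of every epoch (or batch) and stores the result; a minimal sketch, assuming x_test is already defined:

import tensorflow as tf

class CollectOutputs(tf.keras.callbacks.Callback):
    """Stores the model's predictions on a fixed dataset after every epoch."""

    def __init__(self, x_test):
        super().__init__()
        self.x_test = x_test
        self.outputs = []

    def on_epoch_end(self, epoch, logs=None):
        # self.model is set by Keras when the callback is passed to fit();
        # use on_train_batch_end instead for one snapshot per batch.
        self.outputs.append(self.model.predict(self.x_test, verbose=0))

# Hypothetical usage:
# collector = CollectOutputs(x_test)
# model.fit(x_train, y_train, epochs=200, callbacks=[collector])
# collector.outputs then holds one prediction array per epoch for plotting.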
You can customize what happens in the test step, much like this official tutorial:
import tensorflow as tf
import numpy as np

class CustomModel(tf.keras.Model):
    def test_step(self, data):
        # Unpack the data
        x, y = data
        # Compute predictions
        y_pred = self(x, training=False)
        test_outputs.append(y_pred)  # ADD THIS HERE
        # Updates the metrics tracking the loss
        self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Update the metrics.
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value.
        # Note that it will include the loss (tracked in self.metrics).
        return {m.name: m.result() for m in self.metrics}

# Construct an instance of CustomModel
inputs = tf.keras.Input(shape=(8,))
x = tf.keras.layers.Dense(8, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = CustomModel(inputs, outputs)
model.compile(loss="mse", metrics=["mae"], run_eagerly=True)

test_outputs = list()  # ADD THIS HERE

# Evaluate with our custom test_step
x = np.random.random((1000, 8))
y = np.random.random((1000, 1))
model.evaluate(x, y)
I added a list, and now in the test step it will append the output to this list. You will need to add run_eagerly=True in model.compile() for this to work. This will produce a list of such outputs:
<tf.Tensor: shape=(32, 1), dtype=float32, numpy=
array([[ 0.10866462],
[ 0.2749035 ],
[ 0.08196291],
[ 0.25862294],
[ 0.30985728],
[ 0.20230596],
...
[ 0.17108777],
[ 0.29692617],
[-0.03684975],
[ 0.03525433],
[ 0.26774448],
[ 0.21728781],
[ 0.0840873 ]], dtype=float32)>
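If you are after the output of one specific intermediate layer (the output_layer in the question) rather than the final prediction, a common pattern is a second model that shares the trained layers; a minimal sketch, where 'my_hidden_layer' is a placeholder name:

import tensorflow as tf

# Shares weights with the trained model, so no retraining is involved.
feature_model = tf.keras.Model(
    inputs=model.input,
    outputs=model.get_layer('my_hidden_layer').output)

# Activations of that layer for the test inputs defined above.
hidden_activations = feature_model(x, training=False)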

Training MSE loss larger than theoretical maximum?

I am training a keras model whose last layer is a single sigmoid unit:
output = Dense(units=1, activation='sigmoid')
I am training this model with some training data in which the expected output is always a number between 0.0 and 1.0.
I am compiling the model with mean-squared-error:
model.compile(optimizer='adam', loss='mse')
Since both the expected output and the real output are single floats between 0 and 1, I was expecting a loss between 0 and 1 as well, but when I start the training I get a loss of 3.3932, larger than 1.
Am I missing something?
Edit:
I am adding an example to show the problem:
https://drive.google.com/file/d/1fBBrgW-HlBYhG-BUARjTXn3SpWqrHHPK/view?usp=sharing
(I cannot just paste the code because I need to attach the training data)
After running python stackoverflow.py, the summary of the model will be shown, as well as the training process.
I also print the minimum and maximum values of y_true each step to verify that they are within the [0, 1] range.
There is no need to wait for the training to finish, you will see that the loss during the first few epochs is much larger than 1.
First, we can demystify mse loss - it's a normal callable function in tf.keras:
import tensorflow as tf
import numpy as np
mse = tf.keras.losses.mse
print(mse([1] * 3, [0] * 3)) # tf.Tensor(1, shape=(), dtype=int32)
Next, as the name "mean squared error" implies, it's a mean, so the size of the vectors passed to it does not change the value as long as the mean is the same:
print(mse([1] * 10, [0] * 10)) # tf.Tensor(1, shape=(), dtype=int32)
In order for the mse to exceed 1, the average squared error must exceed 1:
print( mse(np.random.random((100,)), np.random.random((100,))) ) # tf.Tensor(0.14863832582680103, shape=(), dtype=float64)
print( mse( 10 * np.random.random((100,)), np.random.random((100,))) ) # tf.Tensor(30.51209646429651, shape=(), dtype=float64)
Lastly, sigmoid indeed guarantees that output is between 0 and 1:
sigmoid = tf.keras.activations.sigmoid
signal = 10 * np.random.random((100,))
output = sigmoid(signal)
print(f"Raw: {np.mean(signal):.2f}; Sigmoid: {np.mean(output):.2f}" ) # Raw: 5.35; Sigmoid: 0.92
What this implies is that in your code, mean of y_true is NOT between 0 and 1.
You can verify this with np.mean(y_true).
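A quick sanity check along those lines (y_true here stands for the full array of training targets):

import numpy as np

# If the maximum is above 1 (or the minimum below 0), an MSE above 1 is
# perfectly possible even with a sigmoid output.
print(np.min(y_true), np.max(y_true), np.mean(y_true))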
I do not have an answer for the question asked. I am getting NaNs in my MSE loss, with input in the range [0, 1] and a sigmoid at the output, so I thought the question was relevant.
Here are a few observations about sigmoid:
import tensorflow as tf
import numpy as np
x=tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32)
x=tf.keras.activations.sigmoid(x)
x.numpy()
# array([2.0611537e-09, 2.6894143e-01, 5.0000000e-01, 7.3105860e-01,
# 1.0000000e+00], dtype=float32)
x=tf.constant([float('nan')]*5, dtype = tf.float32)
x=tf.keras.activations.sigmoid(x)
x.numpy()
# array([nan, nan, nan, nan, nan], dtype=float32)
x=tf.constant([np.inf]*5, dtype = tf.float32)
x=tf.keras.activations.sigmoid(x)
x.numpy()
# array([1., 1., 1., 1., 1.], dtype=float32)
So it is possible to get NaNs out of sigmoid, but only when the input is already NaN. Just in case someone (me, in the near future) has this doubt again.
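If the NaNs might already be present in the inputs or targets rather than being produced by the network, one way to fail fast is an explicit finiteness assertion on each batch; a minimal sketch:

import tensorflow as tf

x_batch = tf.constant([0.1, 0.5, float('nan')])

# Raises an InvalidArgumentError with the given message as soon as a NaN or
# Inf value shows up, instead of letting it propagate into the loss.
tf.debugging.assert_all_finite(x_batch, message="non-finite value in input batch")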

GradientTape gives different gradients depending on loss function being decorated by tf.function or not

I find that the gradients computed depend on the interplay of tf.function decorators in the following way.
First I create some synthetic data for a binary classification
import numpy as np
import tensorflow as tf

tf.random.set_seed(42)
np.random.seed(42)
x = tf.random.normal((2, 1))
y = tf.constant(np.random.choice([0, 1], 2))
Then I define two loss functions that differ only in the tf.function decorator
weights = tf.constant([1., .1])[tf.newaxis, ...]

def customloss1(y_true, y_pred, sample_weight=None):
    y_true_one_hot = tf.one_hot(tf.cast(y_true, tf.uint8), 2)
    y_true_scale = tf.multiply(weights, y_true_one_hot)
    return tf.reduce_mean(tf.keras.losses.categorical_crossentropy(y_true_scale, y_pred))

@tf.function
def customloss2(y_true, y_pred, sample_weight=None):
    y_true_one_hot = tf.one_hot(tf.cast(y_true, tf.uint8), 2)
    y_true_scale = tf.multiply(weights, y_true_one_hot)
    return tf.reduce_mean(tf.keras.losses.categorical_crossentropy(y_true_scale, y_pred))
Then I make a very simple logistic regression model with all the bells and whistles removed to keep it simple
tf.random.set_seed(42)
np.random.seed(42)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, use_bias=False, activation='softmax', input_shape=[1,])
])
and finally define two functions to calculate the gradients of the aforementioned loss functions with one being decorated by tf.function and the other not being decorated by it
def get_gradients1(x, y):
    with tf.GradientTape() as tape1:
        p1 = model(x)
        l1 = customloss1(y, p1)
    with tf.GradientTape() as tape2:
        p2 = model(x)
        l2 = customloss2(y, p2)
    gradients1 = tape1.gradient(l1, model.trainable_variables)
    gradients2 = tape2.gradient(l2, model.trainable_variables)
    return gradients1, gradients2

@tf.function
def get_gradients2(x, y):
    with tf.GradientTape() as tape1:
        p1 = model(x)
        l1 = customloss1(y, p1)
    with tf.GradientTape() as tape2:
        p2 = model(x)
        l2 = customloss2(y, p2)
    gradients1 = tape1.gradient(l1, model.trainable_variables)
    gradients2 = tape2.gradient(l2, model.trainable_variables)
    return gradients1, gradients2
Now when I run
get_gradients1(x,y)
I get
([<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[ 0.11473544, -0.11473544]], dtype=float32)>],
[<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[ 0.11473544, -0.11473544]], dtype=float32)>])
and the gradients are equal as expected. However when I run
get_gradients2(x,y)
I get
([<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[ 0.02213785, -0.5065186 ]], dtype=float32)>],
[<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[ 0.11473544, -0.11473544]], dtype=float32)>])
where only the second answer is correct. Thus, when my outer function is decorated, I only get the correct answer from the inner function that is decorated as well. I was under the impression that decorating the outer one (which is the training loop in many applications) is sufficient, but here we see it's not. I want to understand why, and also how deep one has to go when decorating the functions being used.
Added some debugging info
I added some debugging info and I show the code only for customloss2 (the other is identical)
@tf.function
def customloss2(y_true, y_pred, sample_weight=None):
    y_true_one_hot = tf.one_hot(tf.cast(y_true, tf.uint8), 2)
    y_true_scale = tf.multiply(weights, y_true_one_hot)
    tf.print('customloss2', type(y_true_scale), type(y_pred))
    tf.print('y_true_scale', '\n', y_true_scale)
    tf.print('y_pred', '\n', y_pred)
    return tf.reduce_mean(tf.keras.losses.categorical_crossentropy(y_true_scale, y_pred))
and on running get_gradients1 I get
customloss1 <type 'EagerTensor'> <type 'EagerTensor'>
y_true_scale
[[1 0]
[0 0.1]]
y_pred
[[0.510775387 0.489224613]
[0.529191136 0.470808864]]
customloss2 <class 'tensorflow.python.framework.ops.Tensor'> <class 'tensorflow.python.framework.ops.Tensor'>
y_true_scale
[[1 0]
[0 0.1]]
y_pred
[[0.510775387 0.489224613]
[0.529191136 0.470808864]]
we see that the tensors for customloss1 are eager (EagerTensor) but those for customloss2 are graph Tensors, and yet we get the same value for the gradients.
On the other hand when I run it on get_gradients2
customloss1 <class 'tensorflow.python.framework.ops.Tensor'> <class 'tensorflow.python.framework.ops.Tensor'>
y_true_scale
[[1 0]
[0 0.1]]
y_pred
[[0.510775387 0.489224613]
[0.529191136 0.470808864]]
customloss2 <class 'tensorflow.python.framework.ops.Tensor'> <class 'tensorflow.python.framework.ops.Tensor'>
y_true_scale
[[1 0]
[0 0.1]]
y_pred
[[0.510775387 0.489224613]
[0.529191136 0.470808864]]
we see everything is identical with no tensors being Eager and yet I get different gradients!
This is a somewhat complicated issue, but it has an explanation. The problem lies within the function tf.keras.backend.categorical_crossentropy, which behaves differently depending on whether you are running in eager or graph (tf.function) mode.
The function considers three possible situations. The first one is that you pass from_logits=True, in which case it just calls tf.nn.softmax_cross_entropy_with_logits:
if from_logits:
    return nn.softmax_cross_entropy_with_logits_v2(
        labels=target, logits=output, axis=axis)
If you give from_logits=False, which is the most common case in Keras, since the output layer for categorical classification is generally a softmax, then it considers two possibilities. The first is that, if the given output value comes from a softmax operation, it can just use the input to that operation and call tf.nn.softmax_cross_entropy_with_logits, which is preferred for computing the actual cross entropy with the softmax values because it prevents "saturated" results. However, this can only be done in graph mode, because eager-mode tensors do not keep track of the operation that produced them, never mind the inputs to that operation.
if not isinstance(output, (ops.EagerTensor, variables_module.Variable)):
    output = _backtrack_identity(output)
    if output.op.type == 'Softmax':
        # When softmax activation function is used for output operation, we
        # use logits from the softmax function directly to compute loss in order
        # to prevent collapsing zero when training.
        # See b/117284466
        assert len(output.op.inputs) == 1
        output = output.op.inputs[0]
        return nn.softmax_cross_entropy_with_logits_v2(
            labels=target, logits=output, axis=axis)
The last case is when you have given from_logits=False and either you are in eager mode or the given output tensor does not directly come from a softmax operation, in which case the only option is to compute the cross entropy from the softmax value.
# scale preds so that the class probas of each sample sum to 1
output = output / math_ops.reduce_sum(output, axis, True)
# Compute cross entropy from probabilities.
epsilon_ = _constant_to_tensor(epsilon(), output.dtype.base_dtype)
output = clip_ops.clip_by_value(output, epsilon_, 1. - epsilon_)
return -math_ops.reduce_sum(target * math_ops.log(output), axis)
The problem is that, even though these are mathematically equivalent ways to compute the cross entropy, they do not have the same precision. They are pretty much the same when logits are small, but if they get big they can diverge a lot. Here is a simple test:
import tensorflow as tf

@tf.function
def test_keras_xent(y, p, from_logits=False, mask_op=False):
    # p is always logits
    if not from_logits:
        # Compute softmax if not using logits
        p = tf.nn.softmax(p)
    if mask_op:
        # A dummy addition prevents Keras from detecting that
        # the value comes from a softmax operation
        p = p + tf.constant(0, p.dtype)
    return tf.keras.backend.categorical_crossentropy(y, p, from_logits=from_logits)

# Test
tf.random.set_seed(0)
y = tf.constant([1., 0., 0., 0.])

# Logits in [0, 1)
p = tf.random.uniform([4], minval=0, maxval=1)
tf.print(test_keras_xent(y, p, from_logits=True))
# 1.50469065
tf.print(test_keras_xent(y, p, from_logits=False, mask_op=False))
# 1.50469065
tf.print(test_keras_xent(y, p, from_logits=False, mask_op=True))
# 1.50469065

# Logits in [0, 10)
p = tf.random.uniform([4], minval=0, maxval=10)
tf.print(test_keras_xent(y, p, from_logits=True))
# 3.47569656
tf.print(test_keras_xent(y, p, from_logits=False, mask_op=False))
# 3.47569656
tf.print(test_keras_xent(y, p, from_logits=False, mask_op=True))
# 3.47569656

# Logits in [0, 100)
p = tf.random.uniform([4], minval=0, maxval=100)
tf.print(test_keras_xent(y, p, from_logits=True))
# 68.0106506
tf.print(test_keras_xent(y, p, from_logits=False, mask_op=False))
# 68.0106506
tf.print(test_keras_xent(y, p, from_logits=False, mask_op=True))
# 16.1180954
Taking your example:
import numpy as np
import tensorflow as tf

tf.random.set_seed(42)
x = tf.random.normal((2, 1))
y = tf.constant(np.random.choice([0, 1], 2))
y1h = tf.one_hot(y, 2, dtype=x.dtype)
model = tf.keras.Sequential([
    # Linear activation because we want the logits for testing
    tf.keras.layers.Dense(2, use_bias=False, activation='linear', input_shape=[1,])
])
p = model(x)
tf.print(test_keras_xent(y1h, p, from_logits=True))
# [0.603375256 0.964639068]
tf.print(test_keras_xent(y1h, p, from_logits=False, mask_op=False))
# [0.603375256 0.964639068]
tf.print(test_keras_xent(y1h, p, from_logits=False, mask_op=True))
# [0.603375256 0.964638948]
The results here are almost identical, but you can see there is a small difference in the second value. This in turn has an effect (probably amplified) on the computed gradients, which of course are also mathematically "equivalent" expressions but with different precision properties.
It turns out this is a bug and I have raised it here.
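Until that is resolved, a practical way to sidestep the precision difference is to have the model output raw logits and pass from_logits=True to the loss, so nothing depends on Keras detecting the Softmax op; a minimal sketch of how the custom loss above could be written under that assumption:

import tensorflow as tf

weights = tf.constant([1., .1])[tf.newaxis, ...]

# Assumes the model's last Dense layer uses activation='linear' (no softmax),
# so y_pred here is logits rather than probabilities.
def customloss_logits(y_true, y_pred, sample_weight=None):
    y_true_one_hot = tf.one_hot(tf.cast(y_true, tf.uint8), 2)
    y_true_scale = tf.multiply(weights, y_true_one_hot)
    return tf.reduce_mean(
        tf.keras.losses.categorical_crossentropy(
            y_true_scale, y_pred, from_logits=True))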

Validation loss not moving with MLP in Regression

Given input features as such, just raw numbers:
tensor([0.2153, 0.2190, 0.0685, 0.2127, 0.2145, 0.1260, 0.1480, 0.1483, 0.1489,
0.1400, 0.1906, 0.1876, 0.1900, 0.1925, 0.0149, 0.1857, 0.1871, 0.2715,
0.1887, 0.1804, 0.1656, 0.1665, 0.1137, 0.1668, 0.1168, 0.0278, 0.1170,
0.1189, 0.1163, 0.2337, 0.2319, 0.2315, 0.2325, 0.0519, 0.0594, 0.0603,
0.0586, 0.0067, 0.0624, 0.2691, 0.0617, 0.2790, 0.2805, 0.2848, 0.2454,
0.1268, 0.2483, 0.2454, 0.2475], device='cuda:0')
And the expected output is a single real number output, e.g.
tensor(-34.8500, device='cuda:0')
Full code on https://www.kaggle.com/alvations/pytorch-mlp-regression
I've tried creating a simple 2 layer network with:
class MLP(nn.Module):
    def __init__(self, input_size, output_size, hidden_size):
        super(MLP, self).__init__()
        self.linear = nn.Linear(input_size, hidden_size)
        self.classifier = nn.Linear(hidden_size, output_size)

    def forward(self, inputs, hidden=None, dropout=0.5):
        inputs = F.dropout(inputs, dropout)  # Drop-in.
        # First Layer.
        output = F.relu(self.linear(inputs))
        # Matrix manipulation magic.
        batch_size, sequence_len, hidden_size = output.shape
        # Technically, linear layer takes a 2-D matrix as input, so more manipulation...
        output = output.contiguous().view(batch_size * sequence_len, hidden_size)
        # Apply dropout.
        output = F.dropout(output, dropout)
        # Put it through the classifier
        # And reshape it to [batch_size x sequence_len x vocab_size]
        output = self.classifier(output).view(batch_size, sequence_len, -1)
        return output
And training as such:
# Training routine.
def train(num_epochs, dataloader, valid_dataset, model, criterion, optimizer):
    losses = []
    valid_losses = []
    learning_rates = []
    plt.ion()
    x_valid, y_valid = valid_dataset
    for _e in range(num_epochs):
        for batch in tqdm(dataloader):
            # Zero gradient.
            optimizer.zero_grad()
            #print(batch)
            this_x = torch.tensor(batch['x'].view(len(batch['x']), 1, -1)).to(device)
            this_y = torch.tensor(batch['y'].view(len(batch['y']), 1, 1)).to(device)
            # Feed forward.
            output = model(this_x)
            prediction, _ = torch.max(output, dim=1)
            loss = criterion(prediction, this_y.view(len(batch['y']), -1))
            loss.backward()
            optimizer.step()
            losses.append(torch.sqrt(loss.float()).data)

            with torch.no_grad():
                # Zero gradient.
                optimizer.zero_grad()
                output = model(x_valid.view(len(x_valid), 1, -1))
                prediction, _ = torch.max(output, dim=1)
                loss = criterion(prediction, y_valid.view(len(y_valid), -1))
                valid_losses.append(torch.sqrt(loss.float()).data)

            clear_output(wait=True)
            plt.plot(losses, label='Train')
            plt.plot(valid_losses, label='Valid')
            plt.legend()
            plt.pause(0.05)
After tuning several hyperparameters, it looks like the model doesn't train well; the validation loss doesn't move at all, e.g. with:
hyperparams = Hyperparams(input_size=train_dataset.x.shape[1],
                          output_size=1,
                          hidden_size=150,
                          loss_func=nn.MSELoss,
                          learning_rate=1e-8,
                          optimizer=optim.Adam,
                          batch_size=500)
And its loss curve:
Any idea what's wrong with the network?
Am I training the regression model with the wrong loss? Or I've just not yet found the right hyperparameters?
Or am I validating the model wrongly?
From the code you provided, it is tough to say why the validation loss is constant but I see several problems in your code.
Why do you validate for each training mini-batch? Instead, you should validate your model after you do the training for one complete epoch (iterating over your full dataset once). So, the skeleton should be like:
for _e in range(num_epochs):
    for batch in tqdm(train_dataloader):
        ...  # training code
    with torch.no_grad():
        for batch in tqdm(valid_dataloader):
            ...  # validation code
    # plot your loss values
Also, you can plot after each epoch, not after each mini-batch training.
Did you check whether the model parameters are getting updated after optimizer.step() during training? How many validation examples do you have? Why don't you use mini-batch computation during validation?
Why do you do: optimizer.zero_grad() during validation? It doesn't make sense because, during validation, you are not going to do anything related to optimization.
You should use model.eval() during validation to turn off the dropouts. See PyTorch documentation to learn about .train() and .eval() methods.
The learning rate is set to 1e-8, isn't it too small? Why don't you use the default learning rate for Adam (1e-3)?
The following requires some reasoning.
Why are you using such a large batch size? What is your training dataset size?
You can directly plot the MSELoss, instead of taking the square root.
My suggestion would be: use some existing resources for MLP in PyTorch. Don't do it from scratch if you do not have sufficient knowledge at this point. It would make you suffer a lot.
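Putting the points above together, a minimal sketch of what the epoch loop could look like (it reuses the names from the question, assumes the model consumes the batch tensors directly without the extra reshaping, and uses mini-batch validation with a separate valid_dataloader):

import torch
import torch.nn as nn
import torch.optim as optim

criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)  # Adam's default learning rate

for epoch in range(num_epochs):
    model.train()  # dropout active (use nn.Dropout modules so train()/eval() take effect)
    for batch in train_dataloader:
        optimizer.zero_grad()
        x = batch['x'].to(device)
        y = batch['y'].to(device)
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    model.eval()  # dropout disabled for validation
    with torch.no_grad():  # no gradients tracked, so no zero_grad() needed here
        valid_loss = sum(
            criterion(model(b['x'].to(device)), b['y'].to(device)).item()
            for b in valid_dataloader
        ) / len(valid_dataloader)
    print(f"epoch {epoch}: validation MSE {valid_loss:.4f}")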
