Why doesn't the Adadelta optimizer decay the learning rate? - python

I have initialised an Adadelta optimizer in Keras (using Tensorflow backend) and assigned it to a model:
my_adadelta = keras.optimizers.Adadelta(learning_rate=0.01, rho=0.95)
my_model.compile(optimizer=my_adadelta, loss="binary_crossentropy")
During training, I am using a callback to print the learning rate after every epoch:
class LRPrintCallback(Callback):
    def on_epoch_end(self, epoch, logs=None):
        lr = self.model.optimizer.lr
        print(K.eval(lr))
However, this prints the same (initial) learning rate after every epoch.
The same thing happens if I initialize the optimizer like this:
my_adadelta = keras.optimizers.Adadelta(learning_rate=0.01, decay=0.95)
Am I doing something wrong in the initialization? Is the learning rate maybe changing but I am not printing the right thing?

As discussed in a relevant Github thread, the decay does not affect the variable lr itself, which is used only to store the initial value of the learning rate. In order to print the decayed value, you need to explicitly compute it yourself and store it in a separate variable lr_with_decay; you can do so by using the following callback:
from keras.callbacks import Callback
import keras.backend as K

class MyCallback(Callback):
    def on_epoch_end(self, epoch, logs=None):
        lr = self.model.optimizer.lr
        decay = self.model.optimizer.decay
        iterations = self.model.optimizer.iterations
        lr_with_decay = lr / (1. + decay * K.cast(iterations, K.dtype(decay)))
        print(K.eval(lr_with_decay))
as explained here and here. In fact, the specific code snippet suggested there, i.e.
lr = self.lr
if self.initial_decay > 0:
    lr *= (1. / (1. + self.decay * K.cast(self.iterations, K.dtype(self.decay))))
comes directly from the underlying Keras source code for Adadelta.
As is clear from inspecting the linked source code, the parameter that decays the learning rate is decay, not rho; although the documentation also uses the word 'decay' when describing rho, that is a different kind of decay and has nothing to do with the learning rate:
rho: float >= 0. Adadelta decay factor, corresponding to fraction of gradient to keep at each time step.
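For intuition, here is a simplified sketch of a single Adadelta parameter update (this is not the actual Keras source; the NumPy formulation and variable names are mine), showing where rho and lr each enter: rho only controls the running averages of squared gradients and squared updates, while lr appears once as a fixed multiplier.
import numpy as np

def adadelta_step(param, grad, accum_grad, accum_delta, lr=0.01, rho=0.95, eps=1e-7):
    # rho decays the running average of squared gradients ...
    accum_grad = rho * accum_grad + (1. - rho) * grad ** 2
    update = grad * np.sqrt(accum_delta + eps) / np.sqrt(accum_grad + eps)
    # ... lr enters once here, as a constant multiplier
    new_param = param - lr * update
    # ... and rho also decays the running average of squared updates; it never touches lr
    accum_delta = rho * accum_delta + (1. - rho) * update ** 2
    return new_param, accum_grad, accum_delta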

Related

Decay parameter of Adam optimizer in Keras

I think that the Adam optimizer is designed such that it automatically adjusts the learning rate.
But there is an option to explicitly mention the decay in the Adam parameter options in Keras.
I want to clarify the effect of decay on Adam optimizer in Keras.
If we compile the model using a decay of, say, 0.01 on lr = 0.001, and then fit the model for 50 epochs, does the learning rate get reduced by a factor of 0.01 after each epoch?
Is there any way to specify that the learning rate should only decay after running for a certain number of epochs?
In pytorch there is a different implementation called AdamW, which is not present in the standard keras library.
Is this the same as varying the decay after every epoch as mentioned above?
Thanks in advance for the reply.
From the source code, decay adjusts lr per iteration according to
lr = lr * (1. / (1. + decay * iterations)) # simplified
This is epoch-independent: iterations is incremented by 1 on each batch fit (e.g. each time train_on_batch is called, or once per batch in model.fit(x) - usually len(x) // batch_size batches per epoch).
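To get a concrete feel for that formula, here is a tiny standalone computation (the decay value below is made up purely for illustration):
initial_lr, decay = 0.001, 0.01
for iteration in (0, 100, 1000, 10000):
    effective_lr = initial_lr * (1. / (1. + decay * iteration))
    print(iteration, effective_lr)
# prints 0.001 at iteration 0, 0.0005 at 100, ~9.1e-05 at 1000, ~9.9e-06 at 10000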
To implement what you've described, you can use a callback as below:
from keras.callbacks import LearningRateScheduler

def decay_schedule(epoch, lr):
    # decay by 0.1 every 5 epochs; use `% 1` to decay after each epoch
    if (epoch % 5 == 0) and (epoch != 0):
        lr = lr * 0.1
    return lr

lr_scheduler = LearningRateScheduler(decay_schedule)
model.fit(x, y, epochs=50, callbacks=[lr_scheduler])
The LearningRateScheduler takes a function as an argument, and that function is fed the epoch index and lr at the beginning of each epoch by .fit. It then updates lr according to that function - so on the next epoch, the function is fed the updated lr.
Also, there is a Keras implementation of AdamW, NadamW, and SGDW, by me - Keras AdamW.
Clarification: the very first call to .fit() invokes on_epoch_begin with epoch = 0 - if we don't wish lr to be decayed immediately, we should add an epoch != 0 check in decay_schedule. Then, epoch denotes how many epochs have already passed - so when epoch = 5, the decay is applied.
Internally, the learning rate decay is applied after each batch, not after each epoch as is commonly believed.
You can read more about it here: https://www.pyimagesearch.com/2019/07/22/keras-learning-rate-schedules-and-decay/
However, you can also implement your own learning_rate scheduler, via a custom callback function:
import tensorflow

def learning_rate_scheduler(epoch, lr):
    # Say you want to divide the lr by 5 after every 10 epochs;
    # (epoch + 1) because epoch counting starts from 0
    if (epoch + 1) % 10 == 0:
        lr = lr / 5
    return lr

callbacks = [
    tensorflow.keras.callbacks.LearningRateScheduler(learning_rate_scheduler, verbose=1)
]
model.fit(..., callbacks=callbacks, ...)
The above method works for all types of optimizers, not only Adam.

Keras learning rate decay in pytorch

I have a question concerning learning rate decay in Keras. I need to understand how the option decay works inside optimizers in order to translate it to an equivalent PyTorch formulation.
From the source code of SGD I see that the update is done this way after every batch update:
lr = self.lr * (1. / (1. + self.decay * self.iterations))
Does this mean that after every batch update the lr is updated starting from its value from the previous update, or from its initial value? I mean, which of the two following interpretations is the correct one?
lr = lr_0 * (1. / (1. + self.decay * self.iterations))
or
lr = lr * (1. / (1. + self.decay * self.iterations)),
where lr is the lr updated after previous iteration and lr_0 is always the initial learning rate.
If the correct answer is the first one, this would mean that, in my case, the learning rate would decay from 0.001 to just 0.0002 after 100 epochs, whereas in the second case it would decay from 0.001 to around 1e-230 after 70 epochs.
Just to give you some context, I'm working with a CNN for a regression problem from images, and I just have to translate Keras code into PyTorch code. So far, with the second of the aforementioned interpretations, my model only ever predicts the same value, regardless of batch size and input at test time.
Thanks in advance for your help!
Based on the implementation in Keras I think your first formulation is the correct one, the one that contains the initial learning rate (note that self.lr is not being updated).
However, I think your calculation is probably not correct: since the denominator is the same, and lr_0 >= lr because you are decaying, the first formulation has to result in the bigger number.
I'm not sure if this decay is available in PyTorch, but you can easily create something similar with torch.optim.lr_scheduler.LambdaLR.
from torch.optim.lr_scheduler import LambdaLR

decay = .001
fcn = lambda step: 1./(1. + decay*step)
scheduler = LambdaLR(optimizer, lr_lambda=fcn)
Finally, don't forget that you will need to call .step() explicitly on the scheduler; it's not enough to step your optimizer. Also, most often learning rate scheduling is only done after a full epoch, not after every single batch, but I see that here you are just recreating Keras behavior.
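For completeness, here is a minimal sketch of where scheduler.step() sits in a PyTorch training loop (model, criterion, loader and num_epochs are placeholders standing in for your own code); stepping the scheduler once per batch mimics Keras' per-iteration decay:
import torch
from torch.optim.lr_scheduler import LambdaLR

decay = .001
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
scheduler = LambdaLR(optimizer, lr_lambda=lambda step: 1. / (1. + decay * step))

for epoch in range(num_epochs):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # stepped per batch, mirroring Keras' per-iteration decay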
Actually, the response of mkisantal might be incorrect, since the actual equation for the learning rate in Keras (at least it was; there is no default decay option any more) was:
lr = lr * (1. / (1. + self.decay * self.iterations))
(see https://github.com/keras-team/keras/blob/2.2.0/keras/optimizers.py#L178)
And the solution presented by mkisantal is missing the recurrent/multiplicative term lr, so the more accurate version should be based on MultiplicativeLR:
from torch.optim.lr_scheduler import MultiplicativeLR

decay = .001
fcn = lambda step: 1./(1. + decay*step)
scheduler = MultiplicativeLR(optimizer, lr_lambda=fcn)
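As a rough comparison of the two schedulers (illustrative arithmetic only, not library source): LambdaLR sets lr_t = initial_lr * fcn(t), while MultiplicativeLR sets lr_t = lr_(t-1) * fcn(t), so only the latter compounds across steps:
decay, initial_lr = .001, 0.1
lr_lambda_style = lr_mult_style = initial_lr
for step in range(1, 4):
    factor = 1. / (1. + decay * step)
    lr_lambda_style = initial_lr * factor   # LambdaLR: always rescales the initial lr
    lr_mult_style *= factor                 # MultiplicativeLR: rescales the current lr
    print(step, lr_lambda_style, lr_mult_style)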

Applying custom learning rates to variables in Tensorflow

In Tensorflow, after I obtain my loss term, I give it to an optimizer and it adds the necessary differentiation and update terms to the computation graph:
global_counter = tf.Variable(0, dtype=DATA_TYPE, trainable=False)
learning_rate = tf.train.exponential_decay(
    INITIAL_LR,      # Base learning rate.
    global_counter,  # Current index into the dataset.
    DECAY_STEP,      # Decay step.
    DECAY_RATE,      # Decay rate.
    staircase=True)
optimizer = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(network.finalLoss, global_step=global_counter)
feed_dict = {TRAIN_DATA_TENSOR: samples, TRAIN_LABEL_TENSOR: labels}
results = sess.run([optimizer], feed_dict=feed_dict)
I want a small modification to this process. I want to scale the learning_rate differently for every distinct parameter in the network. For example, let A and B be two different trainable parameters in the network, and let dL/dA and dL/dB be the partial derivatives of the loss with respect to those parameters. The momentum optimizer updates the variables as:
Ma <- 0.9*Ma + learning_rate*dL/dA
A <- A - Ma
Mb <- 0.9*Mb + learning_rate*dL/dB
B <- B - Mb
I want to modify this as:
Ma <- 0.9*Ma + ca*learning_rate*dL/dA
A <- A - Ma
Mb <- 0.9*Mb + cb*learning_rate*dL/dB
B <- B - Mb
Where ca and cb are special learning rate scales for different parameters. As far as I understand, Tensorflow has compute_gradients and apply_gradients methods we can call for such cases, but the documentation is not very clear about how to use them. Any help would be much appreciated.
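For reference, the compute_gradients / apply_gradients route mentioned above would look roughly like this (a sketch only; scale_map and the surrounding names are illustrative, not from the question), scaling each variable's gradient by its own factor before the momentum update:
optimizer = tf.train.MomentumOptimizer(learning_rate, 0.9)
grads_and_vars = optimizer.compute_gradients(network.finalLoss)
# scale_map maps each variable to its custom factor (e.g. ca, cb); default is 1.0
scaled = [(grad * scale_map.get(var, 1.0), var)
          for grad, var in grads_and_vars if grad is not None]
train_op = optimizer.apply_gradients(scaled, global_step=global_counter)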
To compute the gradients:
self.gradients = tf.gradients(self.loss, tf.trainable_variables())
Now you can access the gradients using sess.run([model.gradients], feed_dict).
Assuming you have declared the learning_rate as a tf.Variable(), you can assign the learning rate using the following code:
sess.run(tf.assign(model.lr, args.learning_rate * (args.decay_rate ** epoch)))
The above code is just an example. You can modify it to be used for your purpose.
Custom learning rates in TensorFlow are very easy to handle:
learning_rate = tf.Variable(INITIAL_LR,trainable=False,name="lr")
and say l1 and l2 are two different learning rates :
l1 = ca * learning_rate
l2 = cb * learning_rate
You can do any type of mathematical manipulation of the learning rate and apply it in this manner:
optimizer=tf.train.MomentumOptimizer(l1,0.9).minimize(network.finalLoss, global_step=global_counter)
Regarding your problem: what you actually want is a different learning rate for different groups of variables, say group L1 (the trainable variables updated through Ma) and group L2 (the trainable variables updated through Mb):
global_counter = tf.Variable(0, dtype=DATA_TYPE, trainable=False)
learning_rate = tf.train.exponential_decay(
    INITIAL_LR,      # Base learning rate.
    global_counter,  # Current index into the dataset.
    DECAY_STEP,      # Decay step.
    DECAY_RATE,      # Decay rate.
    staircase=True)
optimizer1 = tf.train.MomentumOptimizer(ca * learning_rate, 0.9).minimize(network.finalLoss, global_step=global_counter, var_list=L1)
optimizer2 = tf.train.MomentumOptimizer(cb * learning_rate, 0.9).minimize(network.finalLoss, global_step=global_counter, var_list=L2)
optimizer = tf.group(optimizer1, optimizer2)
feed_dict = {TRAIN_DATA_TENSOR: samples, TRAIN_LABEL_TENSOR: labels}
results = sess.run([optimizer], feed_dict=feed_dict)
You can find the optimized version of the above code here
Please note that if you assign the learning rate via tf.assign, it returns a reference to the learning rate, whereas the optimizer expects a float value, which will probably throw an error.

tensorflow cifar10 example Learning rate decay confusion when using multiple gpus

Hi community,
I have a small question about the learning rate decay in multi-GPUs training of the Tensorflow cifar10 example.
Here is the code:
# Create a variable to count the number of train() calls. This equals the
# number of batches processed * FLAGS.num_gpus.
global_step = tf.get_variable(
    'global_step', [],
    initializer=tf.constant_initializer(0), trainable=False)

# Calculate the learning rate schedule.
num_batches_per_epoch = (cifar10.NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN /
                         FLAGS.batch_size)
decay_steps = int(num_batches_per_epoch * cifar10.NUM_EPOCHS_PER_DECAY)

# Decay the learning rate exponentially based on the number of steps.
lr = tf.train.exponential_decay(cifar10.INITIAL_LEARNING_RATE,
                                global_step,
                                decay_steps,
                                cifar10.LEARNING_RATE_DECAY_FACTOR,
                                staircase=True)
In this code, the number of GPUs is not considered. For instance, if we increase FLAGS.num_gpus to 4, decay_steps does not change.
In the comments, global_step is supposed to equal the number of batches processed * FLAGS.num_gpus. However, global_step only increases when the opt.apply_gradients() function is called, and it only increases by 1 per iteration.
In my opinion, the code should be
decay_steps = int(num_batches_per_epoch * cifar10.NUM_EPOCHS_PER_DECAY/FLAGS.num_gpus)
Therefore, when utilizing multiple GPUs, the number of iterations required to go through 1 epoch is reduced.
Please correct me and help me understand if my logic is not correct.

Keras: how learning rate changes when Adadelta optimizer is used?

For example, I use Adadelta as the optimizer when compiling the network model; the learning rate will then change over time according to this rule (but what is iterations?), and how can I log the learning rate value to the console?
model.compile(loss=keras.losses.mean_squared_error,
              optimizer=keras.optimizers.Adadelta())
In the documentation, is lr just the starting learning rate?
The rule is related to updates with decay. Adadelta is an adaptive learning rate method which uses an exponentially decaying average of gradients.
Looking at the Keras source code, the learning rate is recalculated based on decay like this:
lr = self.lr
if self.initial_decay > 0:
    lr *= (1. / (1. + self.decay * K.cast(self.iterations, K.dtype(self.decay))))
So yes, lr is just starting learning rate.
To print it after every epoch, as @orabis mentioned, you can make a callback class:
from keras.callbacks import Callback
import keras.backend as K

class YourLearningRateTracker(Callback):
    def on_epoch_end(self, epoch, logs=None):
        lr = self.model.optimizer.lr
        decay = self.model.optimizer.decay
        iterations = self.model.optimizer.iterations
        lr_with_decay = lr / (1. + decay * K.cast(iterations, K.dtype(decay)))
        print(K.eval(lr_with_decay))
and then add its instance to the callbacks when calling model.fit() like:
model.fit(..., callbacks=[YourLearningRateTracker()])
However, note that, by default, the decay parameter for Adadelta is zero and it is not part of the “standard” arguments, so your learning rate would not change its value when using the default arguments.
I suspect that decay is not intended to be used with Adadelta.
On the other hand, rho parameter, which is nonzero by default, doesn’t describe the decay of the learning rate, but corresponds to the fraction of gradient to keep at each time step (according to the Keras documentation).
I found some relevant information on this Github issue, and by asking a similar question.
