I'm confused about how the Adam optimizer actually works in TensorFlow.
The way I read the docs, the learning rate is changed at every gradient descent iteration.
But when I call the function I give it a learning rate. And I don't call the function to, say, run one epoch (implicitly running however many iterations are needed to go through my training data). I call the function explicitly for each batch, like
for epoch in epochs:
    for batch in data:
        sess.run(train_adam_step, feed_dict={eta: 1e-3})
So my eta cannot be changing, and I'm not passing a time variable in. Or is this some sort of generator-type thing where, upon session creation, t is incremented each time I call the optimizer?
Assuming it is some generator-type thing and the learning rate is being invisibly reduced: how could I run the Adam optimizer without decaying the learning rate? It seems to me that RMSProp is basically the same; the only thing I'd have to do to make them equal (learning rate aside) is change the hyperparameters momentum and decay to match beta1 and beta2 respectively. Is that correct?
I find the documentation quite clear; I will paste the algorithm here in pseudo-code:
Your parameters:
learning_rate: between 1e-4 and 1e-2 is standard
beta1: 0.9 by default
beta2: 0.999 by default
epsilon: 1e-08 by default
The default value of 1e-8 for epsilon might not be a good default in general. For example, when training an Inception network on ImageNet a current good choice is 1.0 or 0.1.
Initialization:
m_0 <- 0 (Initialize initial 1st moment vector)
v_0 <- 0 (Initialize initial 2nd moment vector)
t <- 0 (Initialize timestep)
m_t and v_t will keep track of a moving average of the gradient and its square, for each parameter of the network. (So if you have 1M parameters, Adam will keep 2M more parameters in memory.)
At each iteration t, and for each parameter of the model:
t <- t + 1
lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
m_t <- beta1 * m_{t-1} + (1 - beta1) * gradient
v_t <- beta2 * v_{t-1} + (1 - beta2) * gradient ** 2
variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)
Here lr_t is a bit different from learning_rate because, for early iterations, the moving averages have not converged yet, so we have to normalize by multiplying by sqrt(1 - beta2^t) / (1 - beta1^t). When t is high (t > 1./(1.-beta2)), lr_t is almost equal to learning_rate.
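A quick way to see this is to print the correction factor for a few values of t (a tiny sketch of mine, just plugging in the default beta1/beta2 above):

import numpy as np

beta1, beta2 = 0.9, 0.999
for t in [1, 10, 100, 1000, 10000]:
    factor = np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)
    print(t, factor)
# the factor approaches 1.0 once t is well past 1 / (1 - beta2),
# so lr_t settles at the learning_rate you passed in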
To answer your question, you just need to pass a fixed learning rate, keep beta1 and beta2 default values, maybe modify epsilon, and Adam will do the magic :)
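Concretely, in (pre-2.0) TensorFlow that just means constructing the optimizer once with the values you want. A minimal sketch, where loss stands for whatever tensor you are minimizing:

import tensorflow as tf

# learning_rate stays fixed; only the internal bias correction varies with t
train_adam_step = tf.train.AdamOptimizer(learning_rate=1e-3,
                                         beta1=0.9,
                                         beta2=0.999,
                                         epsilon=1e-8).minimize(loss)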
Link with RMSProp
Adam with beta1=0 is essentially equivalent to RMSProp with momentum=0 (up to the bias correction). The argument beta2 of Adam and the argument decay of RMSProp play the same role.
However, RMSProp does not keep a moving average of the gradient, though it can maintain a momentum term, like MomentumOptimizer.
A detailed description of RMSProp:
maintain a moving (discounted) average of the square of gradients
divide gradient by the root of this average
(can maintain a momentum)
Here is the pseudo-code:
v_t <- decay * v_{t-1} + (1-decay) * gradient ** 2
mom_t <- momentum * mom_{t-1} + learning_rate * gradient / sqrt(v_t + epsilon)
variable <- variable - mom_t
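If it helps, here is the same update transcribed as plain NumPy (a minimal sketch; the function signature and default values are my own):

import numpy as np

def rmsprop_step(variable, gradient, v, mom,
                 learning_rate=1e-3, decay=0.9, momentum=0.0, epsilon=1e-10):
    # moving (discounted) average of the squared gradients
    v = decay * v + (1 - decay) * gradient ** 2
    # momentum on the scaled gradient
    mom = momentum * mom + learning_rate * gradient / np.sqrt(v + epsilon)
    # apply the update
    return variable - mom, v, mom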
RMSProp and Adam both have adaptive learning rates.
The basic RMSProp update:
cache = decay_rate * cache + (1 - decay_rate) * dx**2
x += - learning_rate * dx / (np.sqrt(cache) + eps)
You can see this originally has two parameters, decay_rate and eps.
Then we can add a momentum term to make our gradient more stable, and write
cache = decay_rate * cache + (1 - decay_rate) * dx**2
**m = beta1*m + (1-beta1)*dx**  [beta1 = the momentum parameter in the docs]
x += - learning_rate * m / (np.sqrt(cache) + eps)
Now you can see that if we keep beta1 = 0, then it's RMSProp without the momentum.
Then the basics of Adam:
In CS231n, Andrej Karpathy initially described Adam like this:
Adam is a recently proposed update that looks a bit like RMSProp with
momentum
So yes! Then what makes it different from RMSProp with momentum?
m = beta1*m + (1-beta1)*dx
v = beta2*v + (1-beta2)*(dx**2)
**x += - learning_rate * m / (np.sqrt(v) + eps)**
He also mentioned that in the update equation m and v are smoother.
So the difference from RMSProp is that the update is less noisy.
What causes this noise?
Well, in the initialization procedure we initialize m and v as zero:
m=v=0
In order to reduce this initialization effect, it's always good to have some warm-up. So the equations become:
m = beta1*m + (1-beta1)*dx   [beta1 = 0.9, beta2 = 0.999]
**mt = m / (1-beta1**t)**
v = beta2*v + (1-beta2)*(dx**2)
**vt = v / (1-beta2**t)**
x += - learning_rate * mt / (np.sqrt(vt) + eps)
Now run this for a few iterations, paying close attention to the bold lines: as t (the iteration number) increases, beta1**t goes to 0, so the bias correction fades and
mt = m
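You can check this numerically (a tiny sketch of mine, just using the default values quoted above):

beta1, beta2 = 0.9, 0.999
for t in [1, 10, 100, 1000]:
    print(t, 1 / (1 - beta1 ** t), 1 / (1 - beta2 ** t))
# the correction for m drops from 10.0 at t=1 to ~1.0 by t~100 (so mt = m),
# while the correction for v needs a few thousand steps to settle near 1.0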
I have a quick question regarding the KL divergence loss: while researching, I have seen numerous different implementations. The two most common are the following. However, looking at the mathematical equation, I'm not sure whether the mean should be included.
KL_loss = -0.5 * torch.sum(1 + torch.log(sigma**2) - mean**2 - sigma**2)
OR
KL_loss = -0.5 * torch.sum(1 + torch.log(sigma**2) - mean**2 - sigma**2)
KL_loss = torch.mean(KL_loss)
Thank you!
The equation being used here calculates the loss for a single example:
KL = -0.5 * sum(1 + log(sigma**2) - mean**2 - sigma**2)
For batches of data, we need to calculate the loss over multiple examples.
Using our per-example equation, we get multiple loss values, one per example. We need some way to reduce the per-example loss calculations to a single scalar value. Most commonly, you want to take the mean over the batch. You'll see that most of PyTorch's loss functions use reduction="mean". The advantage of taking the mean instead of the sum is that our loss becomes batch-size invariant (i.e. it doesn't scale with batch size).
From the Stack Overflow post you linked, you'll see that the first and second linked implementations take the mean over the batch (i.e. divide by the batch size):
KLD = -0.5 * torch.sum(1 + log_var - mean.pow(2) - log_var.exp())
...
(BCE + KLD) / x.size(0)
KL_loss = -0.5 * torch.sum(1 + logv - mean.pow(2) - logv.exp())
...
(NLL_loss + KL_weight * KL_loss) / batch_size
The third linked implementation takes the mean over not just the batch, but also the sigma/mu vectors themselves:
0.5 * torch.mean(mean_sq + stddev_sq - torch.log(stddev_sq) - 1)
So instead of scaling the sum by 1/N where N is the batch size, you're scaling by 1/(NM) where M is the dimensionality of the mu and sigma vectors. In this case, your loss is both batch size and latent dimension size invariant. It's important to note that scaling your loss doesn't change the "shape" of the loss landscape (i.e. optimal points stay fixed), it just scales it (which you can control how to step through via the learning rate).
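To make the three reductions concrete, here is a small PyTorch sketch; the tensor shapes are my own assumption (mean and log_var of shape [batch_size, latent_dim]):

import torch

batch_size, latent_dim = 32, 20                       # assumed shapes
mean = torch.randn(batch_size, latent_dim)
log_var = torch.randn(batch_size, latent_dim)

kl_per_element = -0.5 * (1 + log_var - mean.pow(2) - log_var.exp())

kl_sum = kl_per_element.sum()                         # scales with batch size and latent_dim
kl_mean_batch = kl_per_element.sum(dim=1).mean()      # batch-size invariant
kl_mean_all = kl_per_element.mean()                   # invariant to batch size and latent_dim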
I'm trying to write logistic regression in Python.
To find the parameters we need to minimize the loss/cost function with the help of gradient descent or gradient ascent:
Gradient descent is used to minimize a value.
Gradient ascent is used to maximize a value.
In my case I used gradient descent, in the code below.
def findOptimizeParameters(self, old_weight=None, old_bias=None):
    """
    We need to find the optimized parameters with gradient descent:
    new weight parameter = old weight parameter - learning_rate * partial derivative of loss fn wrt weight
    new bias parameter   = old bias parameter   - learning_rate * partial derivative of loss fn wrt bias
    partial derivative of loss fn wrt w = (1/m) * x * (yhat - y)
    partial derivative of loss fn wrt b = (1/m) * (yhat - y)
    :return:
    """
    dweight = self.getDerivativeOfWeight()
    dintercept = self.getDerivativeOfBias()
    new_weight = old_weight - (self.learning_rate * dweight)
    new_bias = old_bias - (self.learning_rate * dintercept)
    return new_weight, new_bias

def getDerivativeOfWeight(self):
    """Derivative of the loss function WRT the WEIGHTS: (1 / no. of samples) * sum(x_i * (ypredict - y))"""
    no_of_samples = self.m
    # z = b0 + b1*x1 + b2*x2 + ...
    z = self.bias + np.dot(self.x_data, self.weights)
    ypredict = self.getPredictValueOfY(z)
    return (1 / no_of_samples) * np.dot(self.x_data.T, (ypredict - self.y_data))

def getDerivativeOfBias(self):
    """Derivative of the loss function WRT the BIAS: (1 / no. of samples) * sum(ypredict - y)"""
    no_of_samples = self.m
    # z = b0 + b1*x1 + b2*x2 + ...
    z = self.bias + np.dot(self.x_data, self.weights)
    ypredict = self.getPredictValueOfY(z)
    return (1 / no_of_samples) * np.sum(ypredict - self.y_data)

@staticmethod
def getPredictValueOfY(Z):
    return 1 / (1 + np.exp(-1 * Z))
Initially the weights and bias are taken as zeros.
While looping multiple times over my data, the bias value comes out as (0, -3.2, -7.4, -4.3, -1.001), seemingly at random.
Sometimes it increases and sometimes it decreases, but there is no consistent increase or decrease.
First of all, is this correct (as per gradient descent, shouldn't there first be a steady decrease and only then an increase)?
If not, how do I know which one to choose between gradient descent and gradient ascent?
Is there anything wrong with my understanding?
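For reference, one way to check this is to track the loss after every update on a tiny toy set; with full-batch gradient descent and a small enough learning rate, the binary cross-entropy loss should decrease steadily. A minimal standalone sketch of mine (independent of the class above, all names and the toy data are hypothetical) using the same derivative formulas:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))
y = (x @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

w, b, lr = np.zeros(3), 0.0, 0.1
for step in range(500):
    yhat = sigmoid(x @ w + b)
    dw = x.T @ (yhat - y) / len(y)      # (1/m) * X^T (yhat - y)
    db = np.mean(yhat - y)              # (1/m) * sum(yhat - y)
    w, b = w - lr * dw, b - lr * db
    if step % 100 == 0:
        loss = -np.mean(y * np.log(yhat + 1e-12) + (1 - y) * np.log(1 - yhat + 1e-12))
        print(step, loss)               # should go steadily down for a small enough lr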
I have a question concerning learning rate decay in Keras. I need to understand how the option decay works inside optimizers in order to translate it to an equivalent PyTorch formulation.
From the source code of SGD I see that the update is done this way after every batch update:
lr = self.lr * (1. / (1. + self.decay * self.iterations))
Does this mean that after every batch update the lr is updated starting from its value from its previous update, or from its initial value? I mean, which of the two following interpretations is the correct one?
lr = lr_0 * (1. / (1. + self.decay * self.iterations))
or
lr = lr * (1. / (1. + self.decay * self.iterations)),
where lr is the lr updated after previous iteration and lr_0 is always the initial learning rate.
If the correct answer is the first one, this would mean that, in my case, the learning rate would decay from 0.001 to just 0.0002 after 100 epochs, whereas in the second case it would decay from 0.001 to around 1e-230 after 70 epochs.
Just to give you some context, I'm working with a CNN for a regression problem from images, and I just have to translate Keras code into PyTorch code. So far, with the second of the aforementioned interpretations, I only ever predict the same value, regardless of batch size and input at test time.
Thanks in advance for your help!
Based on the implementation in Keras I think your first formulation is the correct one, the one that contains the initial learning rate (note that self.lr is not being updated).
However, I think your calculation is probably not correct: since the denominator is the same, and lr_0 >= lr because you are doing decay, the first formulation has to result in a bigger number.
I'm not sure if this decay is available in PyTorch, but you can easily create something similar with torch.optim.lr_scheduler.LambdaLR.
from torch.optim.lr_scheduler import LambdaLR

decay = .001
fcn = lambda step: 1. / (1. + decay * step)
scheduler = LambdaLR(optimizer, lr_lambda=fcn)
Finally, don't forget that you will need to call .step() explicitly on the scheduler; it's not enough to step your optimizer. Also, learning rate scheduling is most often done only after a full epoch, not after every single batch, but I see that here you are just recreating the Keras behavior.
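A minimal usage sketch of this, where the model and optimizer are just placeholders of mine:

import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(10, 1)                              # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

decay = .001
scheduler = LambdaLR(optimizer, lr_lambda=lambda step: 1. / (1. + decay * step))

for batch in range(1000):            # mimic Keras' per-batch decay
    # forward / backward pass would go here
    optimizer.step()
    scheduler.step()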
Actually, the response from mkisantal might be incorrect, since the actual equation for the learning rate in Keras (at least it was; there is no default decay option now) was like this:
lr = lr * (1. / (1. + self.decay * self.iterations))
(see https://github.com/keras-team/keras/blob/2.2.0/keras/optimizers.py#L178)
And the solution presented by mkisantal is missing the recurrent/multiplicative term lr; therefore the more accurate version should be based on MultiplicativeLR:
from torch.optim.lr_scheduler import MultiplicativeLR

decay = .001
fcn = lambda step: 1. / (1. + decay * step)
scheduler = MultiplicativeLR(optimizer, lr_lambda=fcn)
When I have trained a model for several epochs and want to retrain it for more epochs, how does the Adam optimizer work? Will it initialize the time from t = 0, or will it keep the last time step?
a) The documentation in TensorFlow shows the following calculations. Is there a way I can add these metrics to TensorBoard?
t <- t + 1
lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
m_t <- beta1 * m_{t-1} + (1 - beta1) * g
v_t <- beta2 * v_{t-1} + (1 - beta2) * g * g
variable <- variable - lr_t * m_t / (sqrt(v_t) + epsilon)
There have been no answers to a few related questions for a long time: question1 and question2.
I am actually having a problem with the error rate when retraining the model from the last checkpoint, and I am not sure what exactly is happening with the Adam optimizer in this case.
The answer to your question is quite similar to this one, I think: Saving the state of the AdaGrad algorithm in Tensorflow
If you save and reload the state of the optimizer, it will continue; if you don't load the state of your optimizer after training, it will simply start again!
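In (pre-2.0) TensorFlow the Adam slots (m, v) and the beta-power accumulators are ordinary variables, so saving and restoring all variables with tf.train.Saver is enough to keep the timestep going. A rough sketch of mine (paths and the training steps are placeholders):

import tensorflow as tf

saver = tf.train.Saver()  # by default saves all variables, including Adam's m/v slots

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... train for some epochs ...
    saver.save(sess, "./model.ckpt")

# later: restore instead of re-initializing, so Adam continues where it left off
with tf.Session() as sess:
    saver.restore(sess, "./model.ckpt")
    # ... continue training ...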
I would like to create a custom loss function that has a weight term that's updated based on what epoch I'm in.
For example:
Let's say I have a loss function which has a beta weight, where beta increases over the first 20 epochs...
def custom_loss(x, x_pred):
loss1 = objectives.binary_crossentropy(x, x_pred)
loss2 = objectives.mse(x, x_pred)
return (beta*current_epoch/20) * loss1 + loss2
How could I implement something like this into a keras loss function?
Looking at their documentation, they mention that you can use Theano/TF symbolic functions that return a scalar for each data point.
So you could do something like this:
loss = (tf.contrib.losses.softmax_cross_entropy(x, x_pred) * (beta * current_epoch / 20)
        + tf.contrib.losses.mean_squared_error(x, x_pred))
You would have to pass x and x_pred as tf.placeholders.
I think for model creation you could use Keras, but then you would have to run the computational graph with sess.run().
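Alternatively, and this is just a common pattern I'm sketching rather than what the answer above describes, you can stay in plain Keras by keeping beta in a backend variable and updating it from a callback at the start of each epoch. BetaScheduler, model and x_train are hypothetical names of mine:

import keras.backend as K
from keras.callbacks import Callback
from keras import objectives

beta = K.variable(0.0)  # starts at 0, updated once per epoch

def custom_loss(x, x_pred):
    loss1 = objectives.binary_crossentropy(x, x_pred)
    loss2 = objectives.mse(x, x_pred)
    return beta * loss1 + loss2

class BetaScheduler(Callback):
    def on_epoch_begin(self, epoch, logs=None):
        # ramp beta up over the first 20 epochs, then hold it at 1.0
        K.set_value(beta, min(epoch / 20.0, 1.0))

# model.compile(optimizer='adam', loss=custom_loss)
# model.fit(x_train, x_train, epochs=50, callbacks=[BetaScheduler()])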
References:
https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html#using-keras-models-with-tensorflow