I was working on price prediction with the data set provided in this link, the imports-85.data.
With horsepower, curb-weight, engine-size, and highway-mpg as features, I tried to normalize them (because the cost values were very large) and run the gradient descent algorithm by implementing the following:
Initialization
data = df[attrs]
m = len(data) # m-training examples
f = len(attrs) # n-features
X = np.hstack((np.ones(shape=(m,1)),np.array(data)))
T = np.zeros(f + 1) # Coefficients of x(0),x(1),...x(n)
norm_price = df.price / 1000
Y = np.array(norm_price)
# Normalization
data['curb-weight'] = (data['curb-weight'] * 0.453592) / 1000 # To kg (e-1000)
data['highway-mpg'] = data['highway-mpg'] * 0.425144 # To km per litre (kml)
data['engine-size'] = data['engine-size'] / 100 # To e-100
data['horsepower'] = data['horsepower'] / 100 # To e-100
col_rename = {
'curb-weight':'curb-weight-kg(e-1000)',
'highway-mpg':'highway-kml',
'engine-size':'engine-size(e-100)',
'horsepower':'horsepower(e-100)'
}
data.rename(columns=col_rename,inplace=True)
Cost calculation
def calculateCost():
    global m,T,X
    hypot = (X.dot(T) - Y).transpose().dot(X.dot(T) - Y)
    return hypot / (2 * m)
Gradient descent
def gradDescent(threshold,iter = 10000,alpha = 3e-8):
    global T,X,Y,m
    i = 0
    cost = calculateCost()
    cost_hist = [cost]
    while i < iter:
        T = T - (alpha / m) * X.transpose().dot(X.dot(T) - Y)
        cost = calculateCost()
        cost_hist.append(cost)
        i += 1
        if cost <= threshold:
            return cost_hist
    return cost_hist  # also return the history if the threshold is never reached
I ran the gradient descent with this implementation:
Batch Gradient Descent
Without normalization, the final cost is 118634960.460199.
With normalization, the final cost is 118.634960460199.
As a result, I have a few questions:
Is my normalization technique correct?
After normalization, the cost is on a completely different scale. How do I set the threshold for the cost after normalization?
I think you may be misunderstanding 'normalization' in the context of machine learning. From my reading of your code, your 'normalization' section is doing unit conversions. Prior to gradient descent it is common to apply min-max scaling or standard scaling; see the scikit-learn user guide. These techniques put the features on a consistent scale, so that changes in a single feature do not completely dominate the loss function. This question and this blog post have a longer discussion.
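For illustration, here is a minimal sketch of standard scaling with scikit-learn, assuming df is the DataFrame from your code, that the four columns have already been converted to numeric values, and reusing your column names (swap in MinMaxScaler for min-max scaling):
import numpy as np
from sklearn.preprocessing import StandardScaler

attrs = ['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']

# Scale each feature to zero mean and unit variance
scaler = StandardScaler()
X_scaled = scaler.fit_transform(df[attrs].values)

# Add the bias column after scaling, then run gradient descent as before
X = np.hstack((np.ones((X_scaled.shape[0], 1)), X_scaled))
Y = df.price.values / 1000
With the features on comparable scales you can usually also get away with a much larger learning rate than 3e-8.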
I'm trying to implement the cyclical learning rate approach on top of the PyTorch reimplementation of StyleGAN by rosinality. To do so, I am building on what is suggested in this blog post.
To check how the loss changes as a function of the learning rate, I need to plot the (LR, loss) pairs. Here you can find my modified version of train.py. These are the main changes I made:
Define cyclical_lr, a function regulating the cyclical learning rate
def cyclical_lr(stepsize, min_lr, max_lr):
    # Scaler: we can adapt this if we do not want the triangular CLR
    scaler = lambda x: 1.

    # Lambda function to calculate the LR
    lr_lambda = lambda it: min_lr + (max_lr - min_lr) * relative(it, stepsize)

    # Additional function to see where on the cycle we are
    def relative(it, stepsize):
        cycle = math.floor(1 + it / (2 * stepsize))
        x = abs(it / stepsize - 2 * cycle + 1)
        return max(0, (1 - x)) * scaler(cycle)

    return lr_lambda
Implement the cyclical learning rate for both the discriminator and the generator
step_size = 5*256
end_lr = 10**-1
factor = 10**5
clr = cyclical_lr(step_size, min_lr=end_lr / factor, max_lr=end_lr)
scheduler_g = torch.optim.lr_scheduler.LambdaLR(g_optimizer, [clr, clr])
d_optimizer = optim.Adam(discriminator.parameters(), lr=args.lr, betas=(0.0, 0.99))
scheduler_d = torch.optim.lr_scheduler.LambdaLR(d_optimizer, [clr])
Do you have suggestions on how to plot how the loss changes as a function of the learning rate? Ideally, I would like to do it using TensorBoard, where for now I am plotting the generator loss, the discriminator loss and the size of the generated images as a function of the iteration number:
if (i + 1) % 100 == 0:
    writer.add_scalar('Loss/G', gen_loss_val, i * args.batch.get(resolution))
    writer.add_scalar('Loss/D', disc_loss_val, i * args.batch.get(resolution))
    writer.add_scalar('Step/pixel_size', (4 * 2 ** step), i * args.batch.get(resolution))
    print(args.batch.get(resolution))
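One option (a sketch of mine, not something from the original train.py) is to also log the learning rate the schedulers are currently using, so that the (LR, loss) pairs can be read off or exported from TensorBoard afterwards. This assumes a recent PyTorch where LambdaLR exposes get_last_lr(), and reuses the writer, scheduler_g and scheduler_d names from above:
if (i + 1) % 100 == 0:
    step_id = i * args.batch.get(resolution)
    # get_last_lr() returns one value per parameter group; the first one is logged here
    writer.add_scalar('LR/G', scheduler_g.get_last_lr()[0], step_id)
    writer.add_scalar('LR/D', scheduler_d.get_last_lr()[0], step_id)
With the learning rate and the losses logged against the same step, the loss-vs-LR curve can be assembled offline, for example from the CSV export of the TensorBoard scalars.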
I have data pairs (x,y) which are created by a cubic function
y = g(x) = ax^3 − bx^2 − cx + d
plus some random noise. Now, I want to fit a model (parameters a,b,c,d) to this data using gradient descent.
My implementation:
param={}
param["a"]=0.02
param["b"]=0.001
param["c"]=0.002
param["d"]=-0.04
def model(param,x,y,derivative=False):
    x2=np.power(x,2)
    x3=np.power(x,3)
    y_hat = param["a"]*x3+param["b"]*x2+param["c"]*x+param["d"]
    if derivative==False:
        return y_hat
    derv={} # derivatives of the cost function w.r.t. the parameters
    m = len(y_hat)
    derv["a"]=(2/m)*np.sum((y_hat-y)*x3)
    derv["b"]=(2/m)*np.sum((y_hat-y)*x2)
    derv["c"]=(2/m)*np.sum((y_hat-y)*x)
    derv["d"]=(2/m)*np.sum((y_hat-y))
    return derv
def cost(y_hat,y):
    assert(len(y)==len(y_hat))
    return (np.sum(np.power(y_hat-y,2)))/len(y)
def optimizer(param,x,y,lr=0.01,epochs = 100):
    for i in range(epochs):
        y_hat = model(param,x,y)
        derv = model(param,x,y,derivative=True)
        param["a"]=param["a"]-lr*derv["a"]
        param["b"]=param["b"]-lr*derv["b"]
        param["c"]=param["c"]-lr*derv["c"]
        param["d"]=param["d"]-lr*derv["d"]
        if i%10==0:
            #print (y,y_hat)
            #print(param,derv)
            print(cost(y_hat,y))
X = np.array(x)
Y = np.array(y)
optimizer(param,X,Y,0.01,100)
When run, the cost seems to be increasing:
36.140028646153525
181.88127675295928
2045.7925570171055
24964.787906199843
306448.81623701524
3763271.7837247783
46215271.5069297
567552820.2134454
6969909237.010273
85594914704.25394
Did I compute the gradients wrong? I don't know why the cost is exploding.
Here is the data: https://pastebin.com/raw/1VqKazUV.
If I run your code with e.g. lr=1e-4, the cost decreases.
Check your gradients (just print the result of model(..., derivative=True)); you will see that they are quite large. Since your learning rate is also not too small, you are likely oscillating away from the minimum (see any ML textbook for example plots of this; you should also be able to see it if you print your parameters after every iteration).
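For instance, a quick check along those lines, reusing the functions and arrays defined in the question (the exact numbers will depend on the pastebin data, and param should be re-initialised to the starting values above before re-running):
# Inspect the gradient magnitudes at the initial parameters
print(model(param, X, Y, derivative=True))

# With a much smaller learning rate the updates stop overshooting
optimizer(param, X, Y, lr=1e-4, epochs=100)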
The documentation for tf.train.MomentumOptimizer offers a use_nesterov parameter to utilise Nesterov's Accelerated Gradient (NAG) method.
However, NAG requires the gradient at a location other than that of the current variable to be calculated, and the apply_gradients interface only allows for the current gradient to be passed. So I don't quite understand how the NAG algorithm could be implemented with this interface.
The documentation says the following about the implementation:
use_nesterov: If True use Nesterov Momentum. See Sutskever et al., 2013. This implementation always computes gradients at the value of the variable(s) passed to the optimizer. Using Nesterov Momentum makes the variable(s) track the values called theta_t + mu*v_t in the paper.
Having read through the paper in the link, I'm a little unsure about whether this description answers my question or not. How can the NAG algorithm be implemented when the interface doesn't require a gradient function to be provided?
TL;DR
TF's implementation of Nesterov is indeed an approximation of the original formula, valid for high values of momentum.
Details
This is a great question. In the paper, the NAG update is defined as
v_{t+1} = μ·v_t − λ·∇f(θ_t + μ·v_t)
θ_{t+1} = θ_t + v_{t+1}
where f is our cost function, θ_t our parameters at time t, μ the momentum, and λ the learning rate; v_t is NAG's internal accumulator.
The main difference with standard momentum is the use of the gradient at θ_t + μ·v_t, not at θ_t. But as you said, tensorflow only uses the gradient at θ_t. So what is the trick?
Part of the trick is actually mentioned in the part of the documentation you cited: the algorithm is tracking θ_t + μ·v_t, not θ_t. The other part comes from an approximation that is valid for high values of momentum.
Let's make a slight change of notation from the paper for the accumulator to stick with tensorflow's definition. Let's define a_t = v_t / λ. The update rules are changed slightly as
a_{t+1} = μ·a_t − ∇f(θ_t + μ·λ·a_t)
θ_{t+1} = θ_t + λ·a_{t+1}
(The motivation for this change in TF is that now a is a pure gradient momentum, independent of the learning rate. This makes the update process robust to changes in λ, a possibility common in practice but that the paper does not consider.)
If we denote ψ_t = θ_t + μ·λ·a_t, then
a_{t+1} = μ·a_t − ∇f(ψ_t)
ψ_{t+1} = θ_{t+1} + μ·λ·a_{t+1}
        = θ_t + λ·a_{t+1} + μ·λ·a_{t+1}
        = ψ_t + λ·a_{t+1} + μ·λ·(a_{t+1} − a_t)
        = ψ_t + λ·a_{t+1} + μ·λ·[(μ − 1)·a_t − ∇f(ψ_t)]
        ≈ ψ_t + λ·a_{t+1}
This last approximation holds for strong values of momentum, where μ is close to 1, so that μ − 1 is close to zero and ∇f(ψ_t) is small compared with a. (This last point is actually more debatable, and the approximation is less valid for directions whose gradient changes sign frequently.)
We now have an update that uses the gradient at the current position, and the rules are pretty simple: they are in fact those of standard momentum.
However, we want θ_t, not ψ_t. This is the reason why μ·λ·a_{t+1} is subtracted from ψ_{t+1} just before it is returned, and to recover ψ it is added again first thing at the next call.
I couldn't see any info on this online, and the linked paper certainly wasn't helpful, so I had a look at the unit tests for tf.train.MomentumOptimizer, from which I can see tests for the implementation of both classic momentum and NAG modes.
Summary
var = var + accum * learning_rate * momentum
accum = accum * momentum + g
var = var - learning_rate * accum
var = var - accum * learning_rate * momentum
where accum starts at 0 and is updated at every step. The above is a modified version of the formulation in the unit test, and I find it a bit confusing. Here is the same set of equations arranged with my interpretation of what each of the parameters represent (I could be wrong though):
average_grad_0 = accum # previous rolling average
average_grad_1 = accum * momentum + g # updated rolling average
grad_diff = average_grad_1 - average_grad_0
adjustment = -learning_rate * (grad_diff * momentum + average_grad_1)
var += adjustment
accum = average_grad_1 # the updated rolling average becomes the new accumulator
In other words, it seems to me like tensorflow's implementation attempts to guess the "adjusted gradient" in NAG by assuming that the new gradient will be estimated by the current average gradient plus the product of momentum and the change in the average gradient. I'd love to see a proof for this!
What follows is more detail on how the classic and nesterov modes are implemented in tensorflow as per the tests.
Classic Momentum mode
For use_nesterov=False, based on the doTestBasic function, we have the following initial parameters:
learning_rate = 2.0
momentum = 0.9
var_0 = 1.0 # at time 0
grad = 0.1
Actually, the above are just the first element of the grads_0 and vars_0 arrays, but I'll just focus on a single value. For the subsequent timesteps, we have
var_1 = 1.0 - (0.1 * 2.0)
var_2 = 1.0 - (0.1 * 2.0) - ((0.9 * 0.1 + 0.1) * 2.0)
which I'm going to interpret as meaning:
var_1 = var_0 - (grad * learning_rate)
var_2 = var_1 - ((momentum * grad + grad) * learning_rate)
If we assume that for the purposes of the unit tests grad_0 == grad_1 == grad then this makes sense as a formulation of classic momentum.
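As a quick check of that reading (a throwaway snippet of mine, not taken from the tests), plugging those numbers into the classic momentum update reproduces the expected values:
learning_rate, momentum, grad = 2.0, 0.9, 0.1
accum, var = 0.0, 1.0
for _ in range(2):
    accum = accum * momentum + grad   # momentum accumulator
    var = var - learning_rate * accum # apply the accumulated step
    print(var)                        # 0.8 after the first step, ~0.42 after the second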
Nesterov's Accelerated Gradient (NAG) mode
For use_nesterov=True, I had a look at the _update_nesterov_momentum_numpy function and the testNesterovMomentum test case.
The _update_nesterov_momentum_numpy function has the following definition:
def _update_nesterov_momentum_numpy(self, var, accum, g, lr, momentum):
    var = var + accum * lr * momentum
    accum = accum * momentum + g
    var = var - lr * accum
    var = var - accum * lr * momentum
    return var, accum
and it is called in the unit tests like this:
for t in range(1, 5):
    opt_op.run()
    var0_np, accum0_np = self._update_nesterov_momentum_numpy(
        var0_np, accum0_np, var0_np * 10, 2.0, 0.9)
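As a numerical sanity check (my own snippet, not part of either answer), the tracking relationship quoted from the documentation can be verified on a toy problem: with a constant learning rate, the variable produced by the TF-style update stays equal, up to floating point, to theta_t + mu*v_t from the paper's recursion. A small NumPy sketch, assuming the 1-D objective f(x) = x^2 / 2:
import numpy as np

def grad(x):
    return x  # gradient of f(x) = x**2 / 2

lr, mu = 0.1, 0.9
theta, v = 1.0, 0.0    # paper's NAG: parameters theta and velocity v
var, accum = 1.0, 0.0  # TF-style update: stored variable and accumulator

for _ in range(50):
    # Paper's NAG: gradient evaluated at the look-ahead point theta + mu*v
    v = mu * v - lr * grad(theta + mu * v)
    theta = theta + v

    # TF-style Nesterov step (the Summary equations above, combined):
    # gradient evaluated at the stored variable itself
    g = grad(var)
    accum = accum * mu + g
    var = var - lr * (g + mu * accum)

    assert np.isclose(var, theta + mu * v)

print(theta, var)  # var tracks theta + mu*v, not theta itself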
I'm working on a comparison of popular gradient descent algorithms in Python. Here is a link to the notebook I've got going.
The Adagrad algorithm converges at a much slower rate than the plain vanilla batch, stochastic and mini-batch algorithms. I was expecting it to be an improvement from the basic methods. Is the difference attributable to one or more of the factors below or something else, or is this the expected result?
The test data set is small and Adagrad performs relatively better on larger data sets
Something having to do with the characteristics of the sample data
Something having to do with the parameters
An error in the code
Here is the code for Adagrad - it is also the last one in the notebook.
def gd_adagrad(data, alpha, num_iter, b=1):
    m, N = data.shape
    Xy = np.ones((m,N+1))
    Xy[:,1:] = data
    theta = np.ones(N)
    grad_hist = 0
    for i in range(num_iter):
        np.random.shuffle(Xy)
        batches = np.split(Xy, np.arange(b, m, b))
        for B_x, B_y in ((B[:,:-1],B[:,-1]) for B in batches):
            loss_B = B_x.dot(theta) - B_y
            gradient = B_x.T.dot(loss_B) / B_x.shape[0]
            grad_hist += np.square(gradient)
            theta = theta - alpha * gradient / (10**-6 + np.sqrt(grad_hist))
    return theta
theta = gd_adagrad(data_norm, alpha*10, 150, 50)
This is my second attempt at implementing gradient descent in one variable and it always diverges. Any ideas?
This is simple linear regression for minimizing the residual sum of squares in one variable.
def gradient_descent_wtf(xvalues, yvalues):
    tolerance = 0.1

    #y=mx+b
    #some line to predict y values from x values
    m=1.
    b=1.

    #a predicted y-value has value mx + b

    for i in range(0,10):
        #calculate y-value predictions for all x-values
        predicted_yvalues = list()
        for x in xvalues:
            predicted_yvalues.append(m*x + b)

        # predicted_yvalues holds the predicted y-values

        #now calculate the residuals = y-value - predicted y-value for each point
        residuals = list()
        number_of_points = len(yvalues)
        for n in range(0,number_of_points):
            residuals.append(yvalues[n] - predicted_yvalues[n])

        ## calculate the residual sum of squares from the residuals, that is,
        ## square each residual and add them all up. we will try to minimize
        ## the residual sum of squares later.
        residual_sum_of_squares = 0.
        for r in residuals:
            residual_sum_of_squares += r**2
        print("RSS = %s" % residual_sum_of_squares)

        #now make a version of the residuals which is multiplied by the x-values
        residuals_times_xvalues = list()
        for n in range(0,number_of_points):
            residuals_times_xvalues.append(residuals[n] * xvalues[n])

        #now create the sums for the residuals and for the residuals times the x-values
        residuals_sum = sum(residuals)
        residuals_times_xvalues_sum = sum(residuals_times_xvalues)

        # now multiply the sums by a positive scalar and add each to m and b.
        residuals_sum *= 0.1
        residuals_times_xvalues_sum *= 0.1
        b += residuals_sum
        m += residuals_times_xvalues_sum

        #and repeat until convergence.
        #convergence occurs when ||sum vector|| < some tolerance.
        # ||sum vector|| = sqrt( residuals_sum**2 + residuals_times_xvalues_sum**2 )

        #check for convergence
        magnitude_of_sum_vector = (residuals_sum**2 + residuals_times_xvalues_sum**2)**0.5
        if magnitude_of_sum_vector < tolerance:
            break

    return (b, m)
Result:
gradient_descent_wtf([1,2,3,4,5,6,7,8,9,10],[6,23,8,56,3,24,234,76,59,567])
RSS = 370433.0
RSS = 300170125.7
RSS = 4.86943013045e+11
RSS = 7.90447409339e+14
RSS = 1.28312217794e+18
RSS = 2.08287421094e+21
RSS = 3.38110045417e+24
RSS = 5.48849288217e+27
RSS = 8.90939341376e+30
RSS = 1.44624932026e+34
Out[108]:
(-3.475524066284303e+16, -2.4195981188763203e+17)
The gradients are huge -- hence you are following large vectors for long distances (0.1 times a large number is large). Find unit vectors in the appropriate direction. Something like this (with comprehensions replacing your loops):
def gradient_descent_wtf(xvalues, yvalues):
    tolerance = 0.1

    m=1.
    b=1.

    for i in range(0,10):
        predicted_yvalues = [m*x+b for x in xvalues]
        residuals = [y-y_hat for y,y_hat in zip(yvalues,predicted_yvalues)]
        residual_sum_of_squares = sum(r**2 for r in residuals) #only needed for debugging purposes
        print("RSS = %s" % residual_sum_of_squares)

        residuals_times_xvalues = [r*x for r,x in zip(residuals,xvalues)]
        residuals_sum = sum(residuals)
        residuals_times_xvalues_sum = sum(residuals_times_xvalues)

        # (residuals_sum, residuals_times_xvalues_sum) is a vector which points in the negative
        # gradient direction. *Find a unit vector which points in same direction*
        magnitude = (residuals_sum**2 + residuals_times_xvalues_sum**2)**0.5
        residuals_sum /= magnitude
        residuals_times_xvalues_sum /= magnitude

        b += residuals_sum * (0.1)
        m += residuals_times_xvalues_sum * (0.1)

        #check for convergence -- this needs work!
        magnitude_of_sum_vector = (residuals_sum**2 + residuals_times_xvalues_sum**2)**0.5
        if magnitude_of_sum_vector < tolerance:
            break

    return (b, m)
For example:
>>> gradient_descent_wtf([1,2,3,4,5,6,7,8,9,10],[6,23,8,56,3,24,234,76,59,567])
RSS = 370433.0
RSS = 368732.1655050716
RSS = 367039.18363896786
RSS = 365354.0543519137
RSS = 363676.7775934381
RSS = 362007.3533123621
RSS = 360345.7814567845
RSS = 358692.061974069
RSS = 357046.1948108295
RSS = 355408.17991291644
(1.1157111313023558, 1.9932828425473605)
which is certainly much more plausible.
It isn't a trivial matter to make a numerically stable gradient-descent algorithm. You might want to consult a decent textbook in numerical analysis.
First, your code is right.
But you should think about the math when you do linear regression.
For example, if the residual sum is -205.8 and your learning rate is 0.1, you get a huge descent step of -20.58.
A step that large means you cannot get back to the correct m and b. You have to make your step small enough.
There are two ways to make the gradient descent step reasonable (see the sketch after this list):
Initialize a small learning rate, such as 0.001 or 0.0003.
Divide your step by the total number of input values, i.e., update with the mean of the residuals rather than their sum.
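A minimal sketch of those two suggestions combined (a smaller learning rate plus averaging over the number of points), kept close to the structure of the original function; the learning rate of 0.001 and the iteration count are just illustrative choices:
def gradient_descent_small_steps(xvalues, yvalues, learning_rate=0.001, iterations=1000):
    m, b = 1.0, 1.0
    n = len(yvalues)
    for _ in range(iterations):
        predicted = [m * x + b for x in xvalues]
        residuals = [y - y_hat for y, y_hat in zip(yvalues, predicted)]
        # Average the negative-gradient components instead of summing them
        step_b = sum(residuals) / n
        step_m = sum(r * x for r, x in zip(residuals, xvalues)) / n
        b += learning_rate * step_b
        m += learning_rate * step_m
    return b, m

print(gradient_descent_small_steps([1,2,3,4,5,6,7,8,9,10],[6,23,8,56,3,24,234,76,59,567]))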