Slow Adagrad Convergence - python

I'm working on a comparison of popular gradient descent algorithms in Python. Here is a link to the notebook I've got going.
The Adagrad algorithm converges at a much slower rate than the plain vanilla batch, stochastic, and mini-batch algorithms. I was expecting it to be an improvement over the basic methods. Is the difference attributable to one or more of the factors below, to something else, or is this simply the expected result?
The test data set is small and Adagrad performs relatively better on larger data sets
Something having to do with the characteristics of the sample data
Something having to do with the parameters
An error in the code
Here is the code for Adagrad - it is also the last one in the notebook.
def gd_adagrad(data, alpha, num_iter, b=1):
    m, N = data.shape
    Xy = np.ones((m, N + 1))   # column 0 is the bias term, the last column is the target
    Xy[:, 1:] = data
    theta = np.ones(N)
    grad_hist = 0
    for i in range(num_iter):
        np.random.shuffle(Xy)
        batches = np.split(Xy, np.arange(b, m, b))
        for B_x, B_y in ((B[:, :-1], B[:, -1]) for B in batches):
            loss_B = B_x.dot(theta) - B_y
            gradient = B_x.T.dot(loss_B) / B_x.shape[0]
            grad_hist += np.square(gradient)   # accumulate per-parameter squared gradients
            theta = theta - alpha * gradient / (10**-6 + np.sqrt(grad_hist))
    return theta

theta = gd_adagrad(data_norm, alpha*10, 150, 50)
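For reference, here is a minimal sketch (not from the original notebook) of a small helper for tracking the full-data cost after each outer iteration, which makes the convergence comparison concrete; it assumes the same data layout as gd_adagrad above:
import numpy as np

def mse_cost(data, theta):
    # Helper added for illustration; not part of the notebook.
    # Mean squared error of the current theta over the whole bias-augmented data set.
    m, N = data.shape
    Xy = np.ones((m, N + 1))
    Xy[:, 1:] = data
    X, y = Xy[:, :-1], Xy[:, -1]
    return np.mean((X.dot(theta) - y) ** 2)
Calling mse_cost(data_norm, theta) inside the outer loop of each algorithm and plotting the values per iteration shows how quickly each method approaches the least-squares optimum.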

Related

Linear regression using Gradient Descent

I'm facing some issues trying to find the linear regression line using Gradient Descent, and I'm getting weird results.
Here is the function:
def gradient_descent(m_k, c_k, learning_rate, points):
    n = len(points)
    dm, dc = 0, 0
    for i in range(n):
        x = points.iloc[i]['alcohol']
        y = points.iloc[i]['total']
        dm += -(2/n) * x * (y - (m_k * x + c_k))  # Partial der in m
        dc += -(2/n) * (y - (m_k * x + c_k))      # Partial der in c
    m = m_k - dm * learning_rate
    c = c_k - dc * learning_rate
    return m, c
And it is combined with a for loop:
l_rate = 0.0001
m, c = 0, 0
epochs = 1000
for _ in range(epochs):
    m, c = gradient_descent(m, c, l_rate, dataset)
plt.scatter(dataset.alcohol, dataset.total)
plt.plot(list(range(2, 10)), [m * x + c for x in range(2,10)], color='red')
plt.show()
Gives this result:
Slope: 2.8061974241244196
Y intercept: 0.5712221080810446
The problem, though, is that when I use sklearn to compute the slope and intercept, i.e.
model = LinearRegression(fit_intercept=True).fit(np.array(dataset['alcohol']).copy().reshape(-1, 1),
                                                 np.array(dataset['total']).copy())
I get something completely different:
Slope: 2.0325063
Intercept: 5.8577761548263005
Any idea why? Looking on SO I found that a possible problem could be too high a learning rate, but as stated above I'm currently using 0.0001.
Sklearn's LinearRegression doesn't use gradient descent; it uses Ordinary Least Squares (OLS), which is a non-iterative, closed-form method.
For your model, you might consider randomly initialising m and c rather than starting with 0, 0. You could also consider adjusting the learning rate or using an adaptive learning rate.
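As an illustration of checking whether the gap is just slow convergence (assuming dataset is a pandas DataFrame with alcohol and total columns and gradient_descent is defined as above), a sketch that runs many more epochs and compares against the closed-form OLS fit:
from sklearn.linear_model import LinearRegression

m, c = 0.0, 0.0
l_rate = 0.0001
for epoch in range(20000):    # far more than the original 1000 epochs; slow, purely illustrative
    m, c = gradient_descent(m, c, l_rate, dataset)
    if epoch % 2000 == 0:
        mse = ((dataset['total'] - (m * dataset['alcohol'] + c)) ** 2).mean()
        print(epoch, mse, m, c)    # the intercept drifts toward the OLS value

ols = LinearRegression().fit(dataset[['alcohol']], dataset['total'])
print(ols.coef_[0], ols.intercept_)    # exact closed-form solution for comparison
With a learning rate this small the intercept in particular converges very slowly, which is the usual reason the two results still disagree after only 1000 epochs.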

Weights explode in polynomial regression with gradient descent

I'm just starting out learning machine learning and have been trying to fit a polynomial to data generated with a sine curve. I know how to do this in closed form, but I'm trying to get it to work with gradient descent too.
However, my weights explode to crazy heights, even with a very large penalty term. What am I doing wrong?
Here is the code:
import numpy as np
import matplotlib.pyplot as plt
from math import pi

N = 10
D = 5
X = np.linspace(0, 100, N)
Y = np.sin(0.1*X)*50
X = X.reshape(N, 1)
Xb = np.array([[1]*N]).T
for i in range(1, D):
    Xb = np.concatenate((Xb, X**i), axis=1)

# Randomly initialize the weights
w = np.random.randn(D)/np.sqrt(D)

# Solving in closed form works
#w = np.linalg.solve((Xb.T.dot(Xb)), Xb.T.dot(Y))
#Yhat = Xb.dot(w)

# Gradient descent
learning_rate = 0.0001
for i in range(500):
    Yhat = Xb.dot(w)
    delta = Yhat - Y
    w = w - learning_rate*(Xb.T.dot(delta) + 100*w)

print('Final w: ', w)
plt.scatter(X, Y)
plt.plot(X, Yhat)
plt.show()
Thanks!
When updating theta, you subtract the learning rate times the derivative of the cost with respect to theta, divided by the training set size. You also have to divide your penalty term by the training set size. But the main problem is that your learning rate is too large. For future debugging, it is helpful to print the cost to see whether gradient descent is working and whether the learning rate is too small or just right.
Below is the code for a 2nd-degree polynomial, which found the optimal thetas (as you can see, the learning rate is really small). I've also added the cost function.
N = 2
D = 2
# Gradient descent
learning_rate = 0.000000000001
for i in range(200):
    Yhat = Xb.dot(w)
    delta = Yhat - Y
    print((1/N) * np.sum(np.dot(delta, np.transpose(delta))))
    w = w - learning_rate*(np.dot(delta, Xb)) * (1/N)
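For completeness, a sketch (not from the original answer) of the same update with the question's L2 penalty included and, as described above, both the data gradient and the penalty divided by the training set size; it assumes the Xb, Y and w built earlier:
l2 = 100                          # penalty strength used in the question
N = Xb.shape[0]
learning_rate = 1e-16             # tiny because the unscaled x**4 column reaches 1e8
for i in range(200):
    Yhat = Xb.dot(w)
    delta = Yhat - Y
    cost = (1/N) * (delta.dot(delta) + l2 * w.dot(w))   # regularised mean squared error
    print(cost)                   # decreases once the step size is small enough
    grad = (1/N) * (Xb.T.dot(delta) + l2 * w)           # data gradient and penalty both scaled by 1/N
    w = w - learning_rate * grad
In practice, scaling the columns of Xb (for example dividing each power of x by its maximum) allows a much larger learning rate and far faster convergence.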

Multivariate Linear Regression Cost Too High

I was working on price prediction with the data set provided in this link, the imports-85.data.
With horsepower, curb-weight, engine-size and highway-mpg, I tried to normalize (due to the high cost) and run the gradient descent algorithm by implementing the following:
Initialization
data = df[attrs]
m = len(data)    # m - training examples
f = len(attrs)   # n - features
X = np.hstack((np.ones(shape=(m, 1)), np.array(data)))
T = np.zeros(f + 1)   # Coefficients of x(0), x(1), ..., x(n)
norm_price = df.price / 1000
Y = np.array(norm_price)

# Normalization
data['curb-weight'] = (data['curb-weight'] * 0.453592) / 1000  # To kg (e-1000)
data['highway-mpg'] = data['highway-mpg'] * 0.425144           # To km per litre (kml)
data['engine-size'] = data['engine-size'] / 100                # To e-100
data['horsepower'] = data['horsepower'] / 100                  # To e-100

col_rename = {
    'curb-weight': 'curb-weight-kg(e-1000)',
    'highway-mpg': 'highway-kml',
    'engine-size': 'engine-size(e-100)',
    'horsepower': 'horsepower(e-100)'
}
data.rename(columns=col_rename, inplace=True)
Cost calculation
def calculateCost():
    global m, T, X
    hypot = (X.dot(T) - Y).transpose().dot(X.dot(T) - Y)
    return hypot / (2 * m)
Gradient descent
def gradDescent(threshold, iter=10000, alpha=3e-8):
    global T, X, Y, m
    i = 0
    cost = calculateCost()
    cost_hist = [cost]
    while i < iter:
        T = T - (alpha / m) * X.transpose().dot(X.dot(T) - Y)
        cost = calculateCost()
        cost_hist.append(cost)
        i += 1
        if cost <= threshold:
            return cost_hist
I ran batch gradient descent with this implementation.
Without normalization, the cost is 118634960.460199.
With normalization, the cost is 118.634960460199.
As a result, I have a few questions:
Is my normalization technique correct?
After normalization, the cost would be different. How do I set the threshold for the cost after normalization?
I think you may be misunderstanding 'normalization' in the context of machine learning. From my reading of your code, your 'normalization' section is doing unit conversions. Prior to gradient descent it is common to apply min-max scaling or standard scaling; see the scikit-learn user guide. These techniques put the features on a consistent scale, so that changes in a single feature do not completely dominate the loss function. This question and this blog post have a longer discussion.
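As an illustration of the kind of scaling referred to above, a minimal sketch (not part of the original post) using scikit-learn's StandardScaler on the feature matrix, assuming df and attrs as defined in the question:
import numpy as np
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()                                 # zero mean, unit variance per feature
scaled = scaler.fit_transform(df[attrs])                  # shape (m, n_features)
X = np.hstack((np.ones((scaled.shape[0], 1)), scaled))    # re-attach the bias column
On the threshold question: note that the reported factor of 10^6 between the two costs is exactly 1000^2, i.e. it comes from dividing the price by 1000, not from the feature unit conversions, so any cost threshold has to be chosen in the units of the (scaled) target.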

Cost value doesn't decrease when using gradient descent

I have data pairs (x,y) which are created by a cubic function
y = g(x) = ax^3 − bx^2 − cx + d
plus some random noise. Now, I want to fit a model (parameters a,b,c,d) to this data using gradient descent.
My implementation:
param = {}
param["a"] = 0.02
param["b"] = 0.001
param["c"] = 0.002
param["d"] = -0.04

def model(param, x, y, derivative=False):
    x2 = np.power(x, 2)
    x3 = np.power(x, 3)
    y_hat = param["a"]*x3 + param["b"]*x2 + param["c"]*x + param["d"]
    if derivative == False:
        return y_hat
    derv = {}  # derivatives of the cost function w.r.t. the parameters
    m = len(y_hat)
    derv["a"] = (2/m)*np.sum((y_hat-y)*x3)
    derv["b"] = (2/m)*np.sum((y_hat-y)*x2)
    derv["c"] = (2/m)*np.sum((y_hat-y)*x)
    derv["d"] = (2/m)*np.sum((y_hat-y))
    return derv

def cost(y_hat, y):
    assert(len(y) == len(y_hat))
    return (np.sum(np.power(y_hat-y, 2)))/len(y)

def optimizer(param, x, y, lr=0.01, epochs=100):
    for i in range(epochs):
        y_hat = model(param, x, y)
        derv = model(param, x, y, derivative=True)
        param["a"] = param["a"] - lr*derv["a"]
        param["b"] = param["b"] - lr*derv["b"]
        param["c"] = param["c"] - lr*derv["c"]
        param["d"] = param["d"] - lr*derv["d"]
        if i % 10 == 0:
            #print(y, y_hat)
            #print(param, derv)
            print(cost(y_hat, y))

X = np.array(x)
Y = np.array(y)
optimizer(param, X, Y, 0.01, 100)
When run, the cost seems to be increasing:
36.140028646153525
181.88127675295928
2045.7925570171055
24964.787906199843
306448.81623701524
3763271.7837247783
46215271.5069297
567552820.2134454
6969909237.010273
85594914704.25394
Did I compute the gradients wrong? I don't know why the cost is exploding.
Here is the data: https://pastebin.com/raw/1VqKazUV.
If I run your code with e.g. lr=1e-4, the cost decreases.
Check your gradients (just print the result of model(..., True)); you will see that they are quite large. Since your learning rate is also not that small, you are likely oscillating away from the minimum (see any ML textbook for example plots of this; you should also be able to see it if you print your parameters after every iteration).
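A minimal sketch of both checks (assuming the param, model and optimizer definitions above, with X and Y built from the pastebin data):
# Reset to the initial guesses from the question
param = {"a": 0.02, "b": 0.001, "c": 0.002, "d": -0.04}

# 1) Inspect the gradient magnitudes once before training
print(model(param, X, Y, derivative=True))    # large values explain the blow-up at lr=0.01

# 2) Re-run with a smaller learning rate; the printed cost now decreases
optimizer(param, X, Y, lr=1e-4, epochs=100)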

Bad results from LMS stochastic gradient descent

I'm trying to adapt a batch gradient descent algorithm from a previous question to do stochastic gradient descent, but my cost seems to get stuck pretty far from the minimum value (in the example, around 1750 when the minimum is around 1450). It would seem like once it reaches that value, it just starts oscillating there. I also tried shuffling range(0, x.shape[0]-1) on every outer iteration l, but it didn't make any difference. I expect oscillations around the optimal value, but this just seems too far off, so I think there must be a mistake.
import numpy as np

y = np.asfarray([[400], [330], [369], [232], [540]])
x = np.asfarray([[2104, 3], [1600, 3], [2400, 3], [1416, 2], [3000, 4]])
x = np.concatenate((np.ones((5, 1)), x), axis=1)
theta = np.asfarray([[0], [.5], [.5]])
fscale = np.sum(x, axis=0)
x /= fscale
alpha = .1

for l in range(1, 100000):
    for i in range(0, x.shape[0]-1):
        h = np.dot(x, theta)
        gradient = ((h[i:i+1] - y[i:i+1]) * x[i:i+1]).T
        theta -= alpha * gradient
    print(((h - y)**2).sum(), theta.squeeze() / fscale)
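As an illustration of a common remedy (not from the original post), the sketch below decays the learning rate over the outer iterations to damp the oscillation; it also visits every row, since range(0, x.shape[0]-1) in the code above leaves out the last sample:
import numpy as np

y = np.asarray([[400], [330], [369], [232], [540]], dtype=float)
x = np.asarray([[2104, 3], [1600, 3], [2400, 3], [1416, 2], [3000, 4]], dtype=float)
x = np.concatenate((np.ones((5, 1)), x), axis=1)
theta = np.asarray([[0], [.5], [.5]], dtype=float)
fscale = np.sum(x, axis=0)
x /= fscale
alpha = .1

for l in range(1, 100000):
    lr = alpha / (1 + 0.001 * l)         # step size shrinks as training progresses
    for i in range(x.shape[0]):           # include every sample, not just the first four
        h_i = np.dot(x[i:i+1], theta)
        gradient = ((h_i - y[i:i+1]) * x[i:i+1]).T
        theta -= lr * gradient

h = np.dot(x, theta)
print(((h - y) ** 2).sum(), theta.squeeze() / fscale)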
