Gradually decay the weight of loss function - python

I am not sure if this is the right place to ask this question; feel free to tell me if I should remove the post.
I am quite new to PyTorch and am currently working with CycleGAN (the PyTorch implementation) as part of my project, and I understand most of the implementation of CycleGAN.
I read the paper 'CycleGAN with Better Cycles' and I am trying to apply the modifications mentioned in it. One of the modifications is cycle consistency weight decay, which I don't know how to apply.
optimizer_G.zero_grad()
# Identity loss
loss_id_A = criterion_identity(G_BA(real_A), real_A)
loss_id_B = criterion_identity(G_AB(real_B), real_B)
loss_identity = (loss_id_A + loss_id_B) / 2
# GAN loss
fake_B = G_AB(real_A)
loss_GAN_AB = criterion_GAN(D_B(fake_B), valid)
fake_A = G_BA(real_B)
loss_GAN_BA = criterion_GAN(D_A(fake_A), valid)
loss_GAN = (loss_GAN_AB + loss_GAN_BA) / 2
# Cycle consistency loss
recov_A = G_BA(fake_B)
loss_cycle_A = criterion_cycle(recov_A, real_A)
recov_B = G_AB(fake_A)
loss_cycle_B = criterion_cycle(recov_B, real_B)
loss_cycle = (loss_cycle_A + loss_cycle_B) / 2
# Total loss
loss_G = (loss_GAN
          + lambda_cyc * loss_cycle      # lambda_cyc is 10
          + lambda_id * loss_identity)   # lambda_id is 0.5 * lambda_cyc
loss_G.backward()
optimizer_G.step()
My question is: how can I gradually decay the weight of the cycle consistency loss?
Any help in implementing this modification would be appreciated.
This is from the paper:
Cycle consistency loss helps to stabilize training a lot in early stages but becomes an obstacle towards realistic images in later stages. We propose to gradually decay the weight of cycle consistency loss λ as training progresses. However, we should still make sure that λ is not decayed to 0 so that generators won't become unconstrained and go completely wild.
Thanks in advance.

Below is a prototype function you can use!
def loss(other_params, decay_params, step):
    # compute the base loss
    # compute the cyclic loss
    # compute_lambda returns the current lambda for the given step
    cur_lambda = compute_lambda(step, decay_params)
    final_loss = base_loss + cur_lambda * cyclic_loss
    return final_loss
Here is a compute_lambda function for linearly decaying lambda from 10 to 1e-5 over 50 steps:
def compute_lambda(step, decay_params):
    final_lambda = decay_params["final"]
    initial_lambda = decay_params["initial"]
    total_step = decay_params["total_step"]
    start_step = decay_params["start_step"]
    if start_step < step < start_step + total_step:
        return initial_lambda + (step - start_step) * (final_lambda - initial_lambda) / total_step
    elif step <= start_step:
        return initial_lambda
    else:
        return final_lambda
# Usage:
compute_lambda(i, {"final": 1e-5, "initial":10, "total_step":50, "start_step" : 50})
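To tie this back to the training loop in the question, a minimal sketch of the integration is below. The loss tensors here are toy stand-ins so the snippet runs on its own; in the real loop they would be the CycleGAN losses computed exactly as in the question, and the step counter, range and decay schedule are assumptions you would tune yourself.
import torch

# Toy stand-ins for the real losses, only so this sketch is self-contained.
loss_GAN = torch.tensor(1.0, requires_grad=True)
loss_cycle = torch.tensor(1.0, requires_grad=True)
loss_identity = torch.tensor(1.0, requires_grad=True)

decay_params = {"final": 1e-5, "initial": 10, "total_step": 50, "start_step": 50}

for step in range(120):
    lambda_cyc = compute_lambda(step, decay_params)  # decayed cycle weight for this step
    lambda_id = 0.5 * lambda_cyc                     # keep the 0.5 * lambda_cyc ratio
    loss_G = loss_GAN + lambda_cyc * loss_cycle + lambda_id * loss_identity
    if step % 25 == 0:
        print(step, lambda_cyc, loss_G.item())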

Related

Training with parametric partial derivatives in pytorch

Given a neural network with weights theta and inputs x, I am interested in calculating the partial derivatives of the neural network's output w.r.t. x, so that I can use the result when training the weights theta using a loss depending both on the output and the partial derivatives of the output. I figured out how to calculate the partial derivatives following this post. I also found this post that explains how to use sympy to achieve something similar; however, adapting it to a neural network context within pytorch seems like a huge amount of work and a recipe for very slow code.
Thus, I tried something different, which failed. As a minimal example, I created a function (substituting my neural network)
theta = torch.ones([3], requires_grad=True, dtype=torch.float32)
def trainable_function(time):
    return theta[0] * time**3 + theta[1] * time**2 + theta[2] * time
Then, I defined a second function to give me partial derivatives:
def trainable_derivative(time):
    deriv_time = torch.tensor(time, requires_grad=True)
    fun_value = trainable_function(deriv_time)
    gradient = torch.autograd.grad(fun_value, deriv_time, create_graph=True, retain_graph=True)
    deriv_time.requires_grad = False
    return gradient
Given some noisy observations of the derivatives, I now try to train theta. For simplicity, I create a loss that only depends on the derivatives. In this minimal example, the derivatives are used directly as observations, not as regularization, to avoid complicated loss functions that are beside the point.
def objective(train_times, observations):
    predictions = torch.squeeze(torch.tensor([trainable_derivative(a) for a in train_times]))
    return torch.sum((predictions - observations)**2)

optimizer = Adam([theta], lr=0.1)

for iteration in range(200):
    optimizer.zero_grad()
    loss = objective(data_times, noisy_targets)
    loss.backward()
    optimizer.step()
Unfortunately, when running this code, I get the error
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I suppose that when calculating the partial derivatives in the way I do, I do not really create a computational graph through which autodiff could differentiate. Thus, the connection to the parameters theta somehow gets lost, and it now looks to the optimizer as if the loss is completely independent of the parameters theta. However, I could be totally wrong.
Does anyone know how to fix this?
Is it possible to include this type of derivatives in the loss function in pytorch?
And if so, what would be the most pytorch-style way of doing this?
Many thanks for your help and advice, it is much appreciated.
For completeness:
To run the above code, some training data needs to be generated. I used the following code, which works perfectly and has been tested against the analytical derivatives:
true_a = 1
true_b = 1
true_c = 1
def true_function(time):
    return true_a*time**3 + true_b*time**2 + true_c*time

def true_derivative(time):
    deriv_time = torch.tensor(time, requires_grad=True)
    fun_value = true_function(deriv_time)
    return torch.autograd.grad(fun_value, deriv_time)
data_times = torch.linspace(0, 1, 500)
true_targets = torch.squeeze(torch.tensor([true_derivative(a) for a in data_times]))
noisy_targets = torch.tensor(true_targets) + torch.randn_like(true_targets)*0.1
Your approach to the problem appears overly complicated.
I believe that what you're trying to achieve is within reach in PyTorch.
I include here a simple code snippet that I believe showcases what you would like to do:
import torch
import torch.nn as nn
# Data and Function
torch.manual_seed(0)
input_dim = 1
output_dim = 2
n = 10 # batchsize
simple_function = nn.Sequential(nn.Linear(1, 2), nn.Sigmoid())
t = (torch.arange(n).float() / n).view(n, 1)
x = torch.randn(n, output_dim)
t.requires_grad = True
# Actual computation
xhat = simple_function(t)
jac = torch.autograd.functional.jacobian(simple_function, t, create_graph=True)
grad = jac[torch.arange(n), :, torch.arange(n), 0]  # per-sample derivative of xhat w.r.t. its own t
loss = (x - xhat).pow(2).sum() + grad.pow(2).sum()
loss.backward()
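If the goal is then to actually train the network weights with this derivative-dependent loss, a minimal continuation might look like the following; the optimizer choice and learning rate are assumptions, not part of the original snippet.
optimizer = torch.optim.Adam(simple_function.parameters(), lr=0.1)

for iteration in range(200):
    optimizer.zero_grad()
    xhat = simple_function(t)
    # create_graph=True keeps the jacobian differentiable w.r.t. the network weights
    jac = torch.autograd.functional.jacobian(simple_function, t, create_graph=True)
    grad = jac[torch.arange(n), :, torch.arange(n), 0]
    loss = (x - xhat).pow(2).sum() + grad.pow(2).sum()
    loss.backward()
    optimizer.step()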

What Loss Or Reward Is Backpropagated In Policy Gradients For Reinforcement Learning?

I have made a small script in Python to solve various Gym environments with policy gradients.
import gym, os
import numpy as np

#create environment
env = gym.make('CartPole-v0')
env.reset()
s_size = len(env.reset())
a_size = 2

#import my neural network code
os.chdir(r'C:\---\---\---\Python Code')
import RLPolicy
policy = RLPolicy.NeuralNetwork([s_size,a_size],learning_rate=0.000001,activations=['softmax']) #a 3-layer network might be ([s_size, 5, a_size],learning_rate=1,activations=['tanh','softmax'])
#it supports the sigmoid activation function also
print(policy.weights)

DISCOUNT = 0.95 #parameter for discounting future rewards

#first step
action = policy.feedforward(env.reset())
state,reward,done,info = env.step(action)

for t in range(3000):
    done = False
    states = [] #lists for recording episode
    probs2 = []
    rewards = []
    while not done:
        #env.render() #to visualize learning
        probs = policy.feedforward(state)[-1] #calculate probabilities of actions
        action = np.random.choice(a_size,p=probs) #choose action from probs
        #record and update state
        probs2.append(probs)
        states.append(state)
        state,reward,done,info = env.step(action)
        rewards.append(reward) #should reward be before updating state?
    #calculate gradients
    gradients_w = []
    gradients_b = []
    for i in range(len(rewards)):
        totalReward = sum([rewards[t]*DISCOUNT**t for t in range(len(rewards[i:]))]) #discounted reward
        ## !! this is the line that I need help with
        gradient = policy.backpropagation(states[i],totalReward*(probs2[i])) #what should be backpropagated through the network
        ## !!
        ##record gradients
        gradients_w.append(gradient[0])
        gradients_b.append(gradient[1])
    #combine gradients and update the weights and biases
    gradients_w = np.array(gradients_w,object)
    gradients_b = np.array(gradients_b,object)
    policy.weights += policy.learning_rate * np.flip(np.sum(gradients_w,0),0) #np.flip because the gradients are calculated backwards
    policy.biases += policy.learning_rate * np.flip(np.sum(gradients_b,0),0)
    #reset and record
    env.reset()
    if t%100==0:
        print('t'+str(t),'r',sum(rewards))
What should be passed backwards to calculate the gradients? I am using gradient ascent but I could switch it to descent. Some people have defined the reward function as totalReward*log(probabilities). Would that make the score derivative totalReward*(1/probs) or log(probs) or something else? Do you use a cost function like cross entropy?
I have tried
totalReward*np.log(probs)
totalReward*(1/probs)
totalReward*(probs**2)
totalReward*probs
probs = np.zeros(a_size)
probs[action] = 1
totalRewards*probs
and a couple others.
The last one is the only one that was able to solve any of them and it only worked on Cartpole. I have tested the various loss or score functions for thousands of episodes with gradient ascent and descent on Cartpole, Pendulum, and MountainCar. Sometimes it will improve a small amount but it will never solve it. What am I doing wrong?
And here is the RLPolicy code. It is not well written or pseudo coded but I don't think it is the problem because I checked it with gradient checking several times. But it would be helpful even if I could narrow it down to a problem with the neural network or somewhere else in my code.
#Neural Network
import numpy as np
import random, math, time, os
from matplotlib import pyplot as plt

def activation(x,function):
    if function=='sigmoid':
        return(1/(1+math.e**(-x))) #Sigmoid
    if function=='relu':
        x[x<0]=0
        return(x)
    if function=='tanh':
        return(np.tanh(x.astype(float))) #tanh
    if function=='softmax':
        z = np.exp(np.array((x-max(x)),float))
        y = np.sum(z)
        return(z/y)

def activationDerivative(x,function):
    if function=='sigmoid':
        return(x*(1-x))
    if function=='relu':
        x[x<0]=0
        x[x>0]=1
        return(x)
    if function=='tanh':
        return(1-x**2)
    if function=='softmax':
        s = x.reshape(-1,1)
        return(np.diagflat(s) - np.dot(s, s.T))

class NeuralNetwork():
    def __init__ (self,layers,learning_rate,momentum,regularization,activations):
        self.learning_rate = learning_rate
        if (isinstance(layers[1],list)):
            h = layers[1][:]
            del layers[1]
            for i in h:
                layers.insert(-1,i)
        self.layers = layers
        self.weights = [2*np.random.rand(self.layers[i]*self.layers[i+1])-1 for i in range(len(self.layers)-1)]
        self.biases = [2*np.random.rand(self.layers[i+1])-1 for i in range(len(self.layers)-1)]
        self.weights = np.array(self.weights,object)
        self.biases = np.array(self.biases,object)
        self.activations = activations

    def feedforward(self, input_array):
        layer = input_array
        neuron_outputs = [layer]
        for i in range(len(self.layers)-1):
            layer = np.tile(layer,self.layers[i+1])
            layer = np.reshape(layer,[self.layers[i+1],self.layers[i]])
            weights = np.reshape(self.weights[i],[self.layers[i+1],self.layers[i]])
            layer = weights*layer
            layer = np.sum(layer,1)#,self.layers[i+1]-1)
            layer = layer+self.biases[i]
            layer = activation(layer,self.activations[i])
            neuron_outputs.append(np.array(layer,float))
        return(neuron_outputs)

    def neuronErrors(self,l,neurons,layerError,n_os):
        if (l==len(self.layers)-2):
            return(layerError)
        totalErr = [] #total error
        for e in range(len(layerError)): #-layers
            e = e*self.layers[l+2]
            a_ws = self.weights[l+1][e:e+self.layers[l+1]]
            e = int(e/self.layers[l+2])
            err = layerError[e]*a_ws #error
            totalErr.append(err)
        return(sum(totalErr))

    def backpropagation(self,state,loss):
        weights_gradient = [np.zeros(self.layers[i]*self.layers[i+1]) for i in range(len(self.layers)-1)]
        biases_gradient = [np.zeros(self.layers[i+1]) for i in range(len(self.layers)-1)]
        neuron_outputs = self.feedforward(state)
        grad = self.individualBackpropagation(loss, neuron_outputs)
        return(grad)

    def individualBackpropagation(self, difference, neuron_outputs): #number of output
        lr = self.learning_rate
        n_os = neuron_outputs[:]
        w_o = self.weights[:]
        b_o = self.biases[:]
        w_n = self.weights[:]
        b_n = self.biases[:]
        gradient_w = []
        gradient_b = []
        error = difference[:] #error for neurons
        for l in range(len(self.layers)-2,-1,-1):
            p_n = np.tile(n_os[l],self.layers[l+1]) #previous neuron
            neurons = np.arange(self.layers[l+1])
            error = (self.neuronErrors(l,neurons,error,n_os))
            if not self.activations[l]=='softmax':
                error = error*activationDerivative(neuron_outputs[l+1],self.activations[l])
            else:
                error = error #activationDerivative(neuron_outputs[l+1],self.activations[l]) skipped because the softmax derivative returns different dimensions
            w_grad = np.repeat(error,self.layers[l]) #weights gradient
            b_grad = np.ravel(error) #biases gradient
            w_grad = w_grad*p_n
            b_grad = b_grad
            gradient_w.append(w_grad)
            gradient_b.append(b_grad)
        return(gradient_w,gradient_b)
Thanks for any answers, this is my first question here.
Using as a reference this post for the computation of the gradient (https://medium.com/@jonathan_hui/rl-policy-gradients-explained-9b13b688b146):
It seems to me that totalRewardOfEpisode*np.log(probability of sampled action) is the right computation. However, in order to have a good estimate of the gradient, I'd suggest using many episodes to compute it (30 for example; you'd just need to average your end gradient by dividing by 30).
The main difference from your test with totalReward*np.log(probs) is that for each step I think you should only backpropagate on the probability of the action you sampled, not the whole output. Initially the cited article uses the total reward, but in the end it suggests using the discounted sum of present and future rewards as you do, so that part doesn't seem theoretically problematic.
OLD answer:
To my knowledge, deep RL methods usually use some estimate of the value of the state in the game or the value of each action. From what I see in your code, you have a neural network that only outputs probabilities for each action.
Although what you want is definitely to maximize the total reward, you can't compute a gradient on the end reward because of the environment. I'd suggest you look into methods such as deep Q-learning or actor/critic based methods such as PPO.
Given the method you chose you'll get different answers on how to compute your gradient.
mprouveur's answer was half correct, but I felt that I needed to explain the right thing to backpropagate. The answer to my question on ai.stackexchange.com was how I came to understand this. The correct error to backpropagate is the log probability of taking the action, multiplied by the goal reward. This can also be calculated as the cross entropy loss between the outputted probabilities and an array of zeros with the action that was taken set to 1. Because of the derivative of cross entropy loss, this has the effect of pushing only the probability of the action that was taken closer to one. Then, multiplying by the total reward makes better actions get pushed more towards a higher probability. So, with the label being a one-hot encoded vector, the correct equation is label/probs * totalReward, because that is the derivative of cross entropy loss and the derivative of the log of probs. I got this working in other code, but even with this equation I think something else in my code is wrong. It probably has something to do with how I made the softmax derivative too complicated, instead of calculating it the usual way by combining the cross entropy derivative and the softmax derivative. I will update this answer soon with correct code and more information.
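For illustration, combining the cross entropy derivative with the softmax derivative the usual way gives an error signal of totalReward * (one_hot - probs) with respect to the pre-softmax logits (for gradient ascent). A minimal NumPy sketch of that, with a hypothetical helper name not taken from the code above, would be:
import numpy as np

def policy_gradient_error(probs, action, total_reward):
    # Derivative of total_reward * log(probs[action]) w.r.t. the softmax logits,
    # i.e. the cross entropy derivative combined with the softmax derivative.
    one_hot = np.zeros_like(probs)
    one_hot[action] = 1.0
    return total_reward * (one_hot - probs)

# Example: 2 actions, sampled action 0, discounted return 5.0
print(policy_gradient_error(np.array([0.7, 0.3]), action=0, total_reward=5.0))  # [ 1.5 -1.5]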
The loss here depends on the output of each problem. Generally, the loss you backpropagate should be a single number that summarizes everything you have processed. For policy gradients, it is the reward the policy thinks it will get compared with the reward actually obtained; the log is just a way to bring it back to a probability-based random variable, a single dimension. If you want to inspect the behavior behind the code, you should always check the shapes/dimensions between each step to fully understand it.

Why does the score (accumulated reward) go down during the exploitation phase in this Deep Q-Learning model?

I'm having a hard time trying to make a Deep Q-Learning agent find the optimal policy. This is what my current model looks like in TensorFlow:
model = Sequential()
model.add(Dense(units=32, activation="relu", input_dim=self.env.state.size)),
model.add(Dense(units=self.env.allActionsKeys.size, activation="softmax"))
model.compile(loss="mse", optimizer=Adam(lr=0.00075), metrics=['accuracy'])
For the problem I'm working on at the moment, 'self.env.state.size' is equal to 6, and the number of possible actions ('self.env.allActionsKeys.size') is 30.
The input vector consists of bits, each of them with a different range (not too different in this problem though). The range of 2 bits is [0,3], for another 2 it is [0,2], and for the remaining ones [0,1]. Please note that this is supposed to be a simple problem; I'm also aiming for more complicated ones where the input size would be 15 for instance and the ranges can differ a bit more than that ([0,15], [0,3], ...).
This is what my train method looks like:
def train(self, terminal_state):
    if len(self.replay_memory) < MIN_REPLAY_MEMORY_SIZE:
        return

    # Get MINIBATCH_SIZE random samples from replay_memory
    minibatch = random.sample(self.replay_memory, MINIBATCH_SIZE)

    # Transition: (current_state, action, reward, normalized_next_state, next_state, done)
    current_states = np.array([transition[0] for transition in minibatch])
    current_qs_minibatch = self.model.predict(current_states, batch_size=MINIBATCH_SIZE, use_multiprocessing=True)
    next_states = np.array([transition[3] for transition in minibatch])
    next_qs_minibatch = self.model.predict(next_states, batch_size=MINIBATCH_SIZE, use_multiprocessing=True)

    env_get_legal_actions = self.env.get_legal_actions
    np_max = np.max

    X = []
    y = []

    for index, (current_state, action, reward, normalized_next_state, next_state, done) in enumerate(minibatch):
        if not done:
            legalActionsIds = env_get_legal_actions(next_state)
            max_next_q = np_max(next_qs_minibatch[index][legalActionsIds])
            new_q = reward + DISCOUNT * max_next_q
        else:
            new_q = reward

        current_qs = current_qs_minibatch[index].copy()
        current_qs[action] = new_q

        X.append(current_state)
        y.append(current_qs)

    self.model.fit(np.array(X), np.array(y), batch_size=MINIBATCH_SIZE, verbose=0, shuffle=False)
where DISCOUNT = 0.99 and MINIBATCH_SIZE = 64
I read that it's recommended to normalize the input vector, so I tested 2 different attribute normalization methods: min-max normalization and z-score normalization. And, since the value ranges don't differ that much, I also tested without normalization. None of these methods proved to be better than the others.
What happens is that at the beginning, during the exploration phase, the score gets better over time, which means the model is learning something; but then, during the exploitation phase, when the epsilon value is low and the agent takes most of the actions greedily, the score decreases drastically, meaning that it actually didn't learn anything good.
This is my Deep Q-Learning algorithm:
epsilon = 1
for episode in range(1, EPISODES+1):
    episode_reward = 0
    step = 0
    done = False
    current_state = env_reset()
    while not done:
        normalized_current_state = env_normalize(current_state)
        if np_random_number() > epsilon:  # Take legal action greedily
            actionsQValues = agent_get_qs(normalized_current_state)
            legalActionsIds = env_get_legal_actions(current_state)
            # Make the argmax selection among the legal actions
            action = legalActionsIds[np_argmax(actionsQValues[legalActionsIds])]
        else:  # Take random legal action
            action = env_sample()
        new_state, reward, done = env_step(action)
        episode_reward += reward
        agent_update_replay_memory((normalized_current_state, action, reward, env_normalize(new_state), new_state, done))
        agent_train(done)
        current_state = new_state
        step += 1
    # Decay epsilon
    if epsilon > MIN_EPSILON:
        epsilon *= EPSILON_DECAY
where EPISODES = 4000 and EPSILON_DECAY = 0.9995.
I played around with all these hyper-parameters but the results are very similar. I don't know what else to try. Am I doing something wrong with normalization? Is there any other normalization method that is more recommended? Could the problem be that my neural network model is not good enough?
I think it shouldn't be this difficult to make it work for a problem as simple as this one, with an input size of 6, an output layer of 30 nodes and a hidden layer of 32.
Note that for the same problem I used a different representation of the state using a binary array of size 14 and it works fine with the same hyper-parameters. What might be the problem then when I use this other representation?
I discovered there was something wrong with the model implementation. The activation function should not be softmax but linear. At least, in my case it works much better this way.
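For reference, the model definition from the question with only that change applied would look roughly like this (everything else is kept as in the question, so it still assumes the same self.env attributes):
model = Sequential()
model.add(Dense(units=32, activation="relu", input_dim=self.env.state.size))
model.add(Dense(units=self.env.allActionsKeys.size, activation="linear"))  # linear output for Q-values
model.compile(loss="mse", optimizer=Adam(lr=0.00075), metrics=['accuracy'])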

How to calculate perplexity of RNN in tensorflow

I'm running the word RNN implementation of TensorFlow (Word RNN).
How do I calculate the perplexity of the RNN?
Following is the training code that shows the training loss and other things in each epoch:
for e in range(model.epoch_pointer.eval(), args.num_epochs):
    sess.run(tf.assign(model.lr, args.learning_rate * (args.decay_rate ** e)))
    data_loader.reset_batch_pointer()
    state = sess.run(model.initial_state)
    speed = 0
    if args.init_from is None:
        assign_op = model.batch_pointer.assign(0)
        sess.run(assign_op)
        assign_op = model.epoch_pointer.assign(e)
        sess.run(assign_op)
    if args.init_from is not None:
        data_loader.pointer = model.batch_pointer.eval()
        args.init_from = None
    for b in range(data_loader.pointer, data_loader.num_batches):
        start = time.time()
        x, y = data_loader.next_batch()
        feed = {model.input_data: x, model.targets: y, model.initial_state: state,
                model.batch_time: speed}
        summary, train_loss, state, _, _ = sess.run([merged, model.cost, model.final_state,
                                                     model.train_op, model.inc_batch_pointer_op], feed)
        train_writer.add_summary(summary, e * data_loader.num_batches + b)
        speed = time.time() - start
        if (e * data_loader.num_batches + b) % args.batch_size == 0:
            print("{}/{} (epoch {}), train_loss = {:.3f}, time/batch = {:.3f}"
                  .format(e * data_loader.num_batches + b,
                          args.num_epochs * data_loader.num_batches,
                          e, train_loss, speed))
        if (e * data_loader.num_batches + b) % args.save_every == 0 \
                or (e == args.num_epochs-1 and b == data_loader.num_batches-1):  # save for the last result
            checkpoint_path = os.path.join(args.save_dir, 'model.ckpt')
            saver.save(sess, checkpoint_path, global_step=e * data_loader.num_batches + b)
            print("model saved to {}".format(checkpoint_path))
train_writer.close()
The project you are referencing uses sequence_to_sequence_loss_by_example, which returns the cross-entropy loss. So for calculating the training perplexity, you just need to exponentiate the loss, as explained here.
train_perplexity = tf.exp(train_loss)
We have to use e instead of 2 as a base, because TensorFlow measures the cross-entropy loss with the natural logarithm (TF Documentation). Thank you, @Matthias Arro and @Colin Skow for the hint.
Detailed Explanation
The cross-entropy of two probability distributions P and Q tells us the minimum average number of bits we need to encode events of P, when we develop a coding scheme based on Q. So, P is the true distribution, which we usually don't know. We want to find a Q as close to P as possible, so that we can develop a nice coding scheme with as few bits per event as possible.
I shouldn't say bits, because we can only use bits as a measure if we use base 2 in the calculation of the cross-entropy. But TensorFlow uses the natural logarithm, so instead let's measure the cross-entropy in nats.
So let's say we have a bad language model that says every token (character / word) in the vocabulary is equally probable to be the next one. For a vocabulary of 1000 tokens, this model will have a cross-entropy of log(1000) = 6.9 nats. When predicting the next token, it has to choose uniformly between 1000 tokens at each step.
A better language model will determine a probability distribution Q that is closer to P. Thus, the cross-entropy is lower - we might get a cross-entropy of 3.9 nats. If we now want to measure the perplexity, we simply exponentiate the cross-entropy:
exp(3.9) = 49.4
So, on the samples, for which we calculated the loss, the good model was as perplex as if it had to choose uniformly and independently among roughly 50 tokens.
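As a quick sanity check of this relationship, here is a small, self-contained NumPy sketch (not part of the referenced project):
import numpy as np

vocab_size = 1000
# Uniform model: cross-entropy in nats is ln(vocab_size)
uniform_cross_entropy = -np.log(1.0 / vocab_size)  # ~6.9 nats
print(np.exp(uniform_cross_entropy))               # perplexity ~1000

# A better model with a cross-entropy of 3.9 nats
print(np.exp(3.9))                                 # perplexity ~49.4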
It depends whether your loss function gives you a log likelihood of the data in base 2 or base e. This model is using legacy_seq2seq.sequence_loss_by_example, which uses TensorFlow's binary crossentropy, which appears to use logs of base e. Therefore, even though we're dealing with a discrete probability distribution (text), we should exponentiate with e, i.e. use tf.exp(train_loss) as Colin Skow suggested.

Linear regression implementation always performs worse than sklearn

I implemented linear regression with gradient descent in python. To see how well it is doing I compared it with scikit-learn's LinearRegression() class. For some reason, sklearn always outperforms my program by an MSE of 3 on average (I am using the Boston Housing dataset for testing). I understand that I am currently not doing gradient checking to check for convergence, but I am allowing for many iterations and have set the learning rate low enough such that it SHOULD converge. Is there any clear bug in my learning algorithm implementation? Here is my code:
import numpy as np
from sklearn.linear_model import LinearRegression
def getWeights(x):
    lenWeights = len(x[1,:]);
    weights = np.random.rand(lenWeights)
    bias = np.random.random();
    return weights,bias

def train(x,y,weights,bias,maxIter):
    converged = False;
    iterations = 1;
    m = len(x);
    alpha = 0.001;
    while not converged:
        for i in range(len(x)):
            # Dot product of weights and training sample
            hypothesis = np.dot(x[i,:], weights) + bias;
            # Calculate gradient
            error = hypothesis - y[i];
            grad = (alpha * 1/m) * ( error * x[i,:] );
            # Update weights and bias
            weights = weights - grad;
            bias = bias - alpha * error;
            iterations = iterations + 1;
            if iterations > maxIter:
                converged = True;
                break
    return weights, bias

def predict(x, weights, bias):
    return np.dot(x,weights) + bias

if __name__ == '__main__':
    data = np.loadtxt('housing.txt');
    x = data[:,:-1];
    y = data[:,-1];
    for i in range(len(x[1,:])):
        x[:,i] = ( (x[:,i] - np.min(x[:,i])) / (np.max(x[:,i]) - np.min(x[:,i])) );
    initialWeights,initialBias = getWeights(x);
    weights,bias = train(x,y,initialWeights,initialBias,55000);
    pred = predict(x, weights,bias);
    MSE = np.mean(abs(pred - y));
    print "This Program MSE: " + str(MSE)
    sklearnModel = LinearRegression();
    sklearnModel = sklearnModel.fit(x,y);
    sklearnModel = sklearnModel.predict(x);
    skMSE = np.mean(abs(sklearnModel - y));
    print "Sklearn MSE: " + str(skMSE)
First, make sure that you are computing the correct objective function value. The linear regression objective should be .5*np.mean((pred-y)**2), rather than np.mean(abs(pred - y)).
You are actually running a stochastic gradient descent (SGD) algorithm (running a gradient iteration on individual examples), which should be distinguished from "gradient descent".
SGD is a good learning method, but a bad optimization method - it can take many iterations to converge to a minimum of the empirical error (http://leon.bottou.org/publications/pdf/nips-2007.pdf).
For SGD to converge, the learning rate must be restricted. Typically, the learning rate is set to the base learning rate divided by the number of iterations, something like alpha/(iterations+1), using the variables in your code.
You also include a multiple of 1/m in your gradient, which is typically not used in SGD updates.
To test your SGD implementation, rather than evaluating the error on the dataset that you trained with, split the dataset into a training set and a test set, and evaluate the error on this test set after training with both methods. The training/test set split will allow you to estimate the performance of your algorithm as a learning algorithm (estimate the expected error) rather than as an optimization algorithm (minimize the empirical error).
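A sketch of how the SGD update in train() might change following these suggestions is below; the base rate alpha0 is an assumed value, this is not a tested drop-in replacement for the original script, and it assumes numpy is imported as np as in the question.
def train_sgd(x, y, weights, bias, maxIter, alpha0=0.01):
    # SGD with a decaying learning rate: no 1/m factor,
    # and step size alpha0 / (iterations + 1) as suggested above.
    iterations = 0
    m = len(x)
    while iterations < maxIter:
        i = iterations % m                 # cycle through the samples
        alpha = alpha0 / (iterations + 1)  # decaying learning rate
        error = np.dot(x[i, :], weights) + bias - y[i]
        weights = weights - alpha * error * x[i, :]
        bias = bias - alpha * error
        iterations += 1
    return weights, bias

# Objective to report (the suggested way to measure fit):
# 0.5 * np.mean((predict(x, weights, bias) - y) ** 2)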
Try increasing your iteration value. This should allow your algorithm to, hopefully, converge on a value that is closer to the global minimum. Keep in mind you are not using l-bfgs, which can converge much faster than plain gradient descent or even SGD.
Also try using the normal equation as another way to do Linear Regression.
http://eli.thegreenplace.net/2014/derivation-of-the-normal-equation-for-linear-regression/.
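A minimal sketch of the normal-equation approach, using the same x and y arrays as the question (assuming numpy is imported as np, as in the question; the helper name is made up for illustration):
def normal_equation(x, y):
    # Closed-form least squares: add a bias column and solve (X^T X) w = X^T y.
    # Assumes X^T X is invertible.
    X = np.hstack([x, np.ones((x.shape[0], 1))])
    w = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
    return w[:-1], w[-1]  # weights, bias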
