Train a logistic regression model with regularization from scratch - python

I am trying to implement a logistic regression model with regularization. I am stuck at computing the gradient: when I run my gradient descent algorithm, the cost function actually increases rather than decreases.
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def Probability(theta, X):
    return sigmoid(np.dot(X, theta))

def cost_function_regression(theta, x, y, Lambda):
    # Computes the cost function for all the training samples
    m = x.shape[0]
    total_cost = (-(1 / m) * np.sum(
        np.dot(y.T, np.log(Probability(theta, x))) + np.dot((1 - y).T, np.log(
            1 - Probability(theta, x))))) + (Lambda / 2) * np.sum(np.dot(theta, theta.T))
    return total_cost

def Gradient_regression(theta, X, y, Lambda):
    m = X.shape[0]
    grad = ((1 / m) * np.dot(X.T, Probability(theta, X) - y)) + np.sum((Lambda / m) * theta)
    return grad

We will start by establishing the theory, follow with a working example, and end with some comments.
Problem statement
The steps in fitting/training a logistic regression model (as with any supervised ML model) using the gradient descent method are as below:
1. Identify a hypothesis function [h(X)] with parameters [w, b]
2. Identify a loss function [J(w, b)]
3. Forward propagation: make predictions using the hypothesis function [y_hat = h(X)]
4. Calculate the error between the actual label [y] and the predicted label [y_hat] using the loss function
5. Backward propagation: adjust the parameters in the hypothesis function based on the error (by calculating the gradients), using the update rule
6. Go back to step 3 if the gradients are still large, else end
Calculating gradients
Hypothesis function for logistic regression:
h(X) = sigmoid(w . X + b) = 1 / (1 + e^-(w_1*X^1 + w_2*X^2 + ... + w_k*X^k + b))
Where X is a vector and X^i is the ith element of the vector.
The commonly used loss function for logistic regression is log loss. The log loss with L2 regularization is:
J(w, b) = -(1/n) * Σ_i [ y_i * log(h(X_i)) + (1 - y_i) * log(1 - h(X_i)) ] + λ * Σ_j (w_j)^2
Let's calculate the gradients:
dJ/dw_j = -(1/n) * Σ_i (y_i - h(X_i)) * X_i^j + 2 * λ * w_j
Similarly,
dJ/db = -(1/n) * Σ_i (y_i - h(X_i))
Now that we know the gradients, let's code the gradient descent algorithm to fit the parameters of our logistic regression model.
Toy Example
import numpy as np
import math
from sklearn import datasets

# load data
iris = datasets.load_iris()
# Lets take only two classes
y = iris.target
X = iris.data[y != 2]
y = y[y != 2]
# Normalize data to 0 mean and 1 std
X[:, 0] = (X[:, 0] - np.mean(X[:, 0])) / np.std(X[:, 0])
X[:, 1] = (X[:, 1] - np.mean(X[:, 1])) / np.std(X[:, 1])
X[:, 2] = (X[:, 2] - np.mean(X[:, 2])) / np.std(X[:, 2])
X[:, 3] = (X[:, 3] - np.mean(X[:, 3])) / np.std(X[:, 3])

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# initialize weights
w0, w1, w2, w3, b = 0.01, 0.01, 0.01, 0.01, 0.01
n = len(X)
# Learning rate
alpha = 0.01
# The gradient descent loop (the regularization strength lambda is effectively 1 here, hence the 2*w terms)
while True:
    y_hat = [sigmoid(w0*x[0] + w1*x[1] + w2*x[2] + w3*x[3] + b) for x in X]
    delta_w0 = -np.sum([(y[j] - y_hat[j])*X[j, 0] for j in range(n)])/n + 2*w0
    delta_w1 = -np.sum([(y[j] - y_hat[j])*X[j, 1] for j in range(n)])/n + 2*w1
    delta_w2 = -np.sum([(y[j] - y_hat[j])*X[j, 2] for j in range(n)])/n + 2*w2
    delta_w3 = -np.sum([(y[j] - y_hat[j])*X[j, 3] for j in range(n)])/n + 2*w3
    delta_b = -np.sum([(y[j] - y_hat[j]) for j in range(n)])/n + 2*b
    w0 = w0 - alpha*delta_w0
    w1 = w1 - alpha*delta_w1
    w2 = w2 - alpha*delta_w2
    w3 = w3 - alpha*delta_w3
    b = b - alpha*delta_b
    if np.sum(np.abs([delta_w0, delta_w1, delta_w2, delta_w3, delta_b])) < 1e-5:
        break

# Make predictions
pred = [1 if i > 0.5 else 0 for i in y_hat]
# Find no: of correct predictions
correct = np.sum([1 if pred[i] == y[i] else 0 for i in range(n)])
print(correct)
Comments
The above toy example is coded in the most inefficient way possible; the intention was to show the steps clearly rather than to be efficient. That said, in practice the operations should be vectorized (using numpy arrays and matrix operations) for efficiency, as in the sketch below.
Data normalization is important.
Models are trained on training data and performance is measured on test/validation data.
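As an illustration of that vectorization comment, here is a minimal sketch of the same loop written with matrix operations. The column of ones for the bias and the lam default are assumptions for this sketch, not part of the original example:
import numpy as np

def fit_logistic_l2(X, y, alpha=0.01, lam=1.0, tol=1e-5, max_iter=100000):
    # append a column of ones so the bias is just another weight
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    n, d = Xb.shape
    w = np.full(d, 0.01)
    for _ in range(max_iter):
        y_hat = 1 / (1 + np.exp(-Xb.dot(w)))           # vectorized sigmoid
        grad = -Xb.T.dot(y - y_hat) / n + 2 * lam * w   # same gradient as above, all weights at once
        w -= alpha * grad                               # (like the toy example, this regularizes the bias too)
        if np.sum(np.abs(grad)) < tol:
            break
    return w
Calling w = fit_logistic_l2(X, y) reproduces the per-weight updates of the loop above in a few lines.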

Related

Logistic regression Python implementation

I tried to implement logistic regression with only numpy in Python, but the result is not satisfying. The predictions seem incorrect and the loss is not improving, so there is probably something wrong with the code. Does anyone know how to fix it? Thank you very much!
Here is the algorithm:
import numpy as np

# training data and labels
X = np.concatenate((np.random.normal(0.25, 0.1, 50), np.random.normal(0.75, 0.1, 50)), axis=None)
Y = np.concatenate((np.zeros((50,), dtype=np.int32), np.ones((50,), dtype=np.int32)), axis=None)

def logistic_sigmoid(a):
    return 1 / (1 + np.exp(-a))

# forward pass
def forward_pass(w, x):
    return logistic_sigmoid(w * x)

# gradient computation
def backward_pass(x, y, y_real):
    return np.sum((y - y_real) * x)

# computing loss
def loss(y, y_real):
    return -np.sum(y_real * np.log(y) + (1 - y_real) * np.log(1 - y))

# training
def train():
    w = 0.0
    learning_rate = 0.01
    i = 200
    test_number = 0.3
    for epoch in range(i):
        y = forward_pass(w, X)
        gradient = backward_pass(X, y, Y)
        w = w - learning_rate * gradient
        print(f'epoch {epoch + 1}, x = {test_number}, y = {forward_pass(w, test_number):.3f}, loss = {loss(y, Y):.3f}')

train()
At first glance, you are missing your intercept term (typically called b_0, or bias) and its gradient update. Also, in the backward_pass and loss calculations you are not dividing by the number of data samples.
You can see two examples of how to implement it from scratch here:
1: An example based on Andrew Ng's explanations in the Machine Learning course on Coursera
2: An implementation by Jason Brownlee from the Machine Learning Mastery website
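To illustrate those two points, here is a minimal sketch of the training loop with a bias term and 1/n scaling added (any names beyond those in the question are illustrative, not from the original code):
import numpy as np

def logistic_sigmoid(a):
    return 1 / (1 + np.exp(-a))

def train(X, Y, learning_rate=0.01, epochs=200):
    n = len(X)
    w, b = 0.0, 0.0                        # weight and intercept (bias)
    for epoch in range(epochs):
        y = logistic_sigmoid(w * X + b)    # forward pass including the bias
        grad_w = np.sum((y - Y) * X) / n   # average gradient w.r.t. w
        grad_b = np.sum(y - Y) / n         # average gradient w.r.t. b
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b
    return w, b
It can be called as w, b = train(X, Y) with the question's data.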

How to correctly calculate gradients in neural network with numpy

I am trying to build a simple neural network class from scratch using numpy, and test it using the XOR problem. But the backpropagation function (backprop) does not seem to be working correctly.
In the class, I construct instances by passing in the size of each layer and the activation functions to use at each layer. I assume that the final activation function is softmax, so that I can calculate the derivative of the cross-entropy loss with respect to Z of the last layer. I also do not have a separate set of bias matrices in my class; I just include them in the weight matrices as an extra column at the end.
I know that my backprop function is not working correctly, because the neural network never converges on a somewhat correct output. I also created a numerical gradient function, and when comparing the results of the two I get drastically different numbers.
My understanding from what I have read is that the delta values of each layer (with L being the last layer, and i representing any other layer) should be:
delta^L = (dC/da^L) * sigma'(z^L), which for a softmax output with cross-entropy loss reduces to a^L - y
delta^i = ((w^(i+1))^T . delta^(i+1)) * sigma'(z^i)
And the respective gradients/weight-updates of those layers should be:
dC/dw^L = delta^L . (a^(L-1))^T
dC/dw^i = delta^i . (a^(i-1))^T
Where * is the Hadamard product, a represents the activation of some layer, and z represents the pre-activation (non-activated) output of some layer.
The sample data that I am using to test this is at the bottom of the file.
This is my first time trying to implement the backpropagation algorithm from scratch. So I am a bit lost on where to go from here.
import numpy as np

def sigmoid(n, deriv=False):
    if deriv:
        return np.multiply(n, np.subtract(1, n))
    return 1 / (1 + np.exp(-n))

def softmax(X, deriv=False):
    if not deriv:
        exps = np.exp(X - np.max(X))
        return exps / np.sum(exps)
    else:
        raise Error('Unimplemented')

def cross_entropy(y, p, deriv=False):
    """
    when deriv = True, returns deriv of cost wrt z
    """
    if deriv:
        ret = p - y
        return ret
    else:
        p = np.clip(p, 1e-12, 1. - 1e-12)
        N = p.shape[0]
        return -np.sum(y*np.log(p))/(N)

class NN:
    def __init__(self, layers, activations):
        """random initialization of weights/biases
        NOTE - biases are built into the standard weight matrices by adding an extra column
        and multiplying it by one in every layer"""
        self.activate_fns = activations
        self.weights = [np.random.rand(layers[1], layers[0]+1)]
        for i in range(1, len(layers)):
            if i != len(layers)-1:
                self.weights.append(np.random.rand(layers[i+1], layers[i]+1))
                for j in range(layers[i+1]):
                    for k in range(layers[i]+1):
                        if np.random.rand(1,1)[0,0] > .5:
                            self.weights[-1][j,k] = -self.weights[-1][j,k]

    def ff(self, X, get_activations=False):
        """Feedforward"""
        activations, zs = [], []
        for activate, w in zip(self.activate_fns, self.weights):
            X = np.vstack([X, np.ones((1, 1))]) # adding bias
            z = w.dot(X)
            X = activate(z)
            if get_activations:
                zs.append(z)
                activations.append(X)
        return (activations, zs) if get_activations else X

    def grad_descent(self, data, epochs, learning_rate):
        """gradient descent
        data - list of 2 item tuples, the first item being an input, and the second being its label"""
        grad_w = [np.zeros_like(w) for w in self.weights]
        for _ in range(epochs):
            for x, y in data:
                grad_w = [n+o for n, o in zip(self.backprop(x, y), grad_w)]
            self.weights = [w-(learning_rate/len(data))*gw for w, gw in zip(self.weights, grad_w)]

    def backprop(self, X, y):
        """perfoms backprop for one layer of a NN with softmax/cross_entropy output layer"""
        (activations, zs) = self.ff(X, True)
        activations.insert(0, X)

        deltas = [0 for _ in range(len(self.weights))]
        grad_w = [0 for _ in range(len(self.weights))]
        deltas[-1] = cross_entropy(y, activations[-1], True) # assumes output activation is softmax
        grad_w[-1] = np.dot(deltas[-1], np.vstack([activations[-2], np.ones((1, 1))]).transpose())
        for i in range(len(self.weights)-2, -1, -1):
            deltas[i] = np.dot(self.weights[i+1][:, :-1].transpose(), deltas[i+1]) * self.activate_fns[i](zs[i], True)
            grad_w[i] = np.hstack((np.dot(deltas[i], activations[max(0, i-1)].transpose()), deltas[i]))

        # check gradient
        num_gw = self.gradient_check(X, y, i)
        print('numerical:', num_gw, '\nanalytic:', grad_w)

        return grad_w

    def gradient_check(self, x, y, i, epsilon=1e-4):
        """Numerically calculate the gradient in order to check analytical correctness"""
        grad_w = [np.zeros_like(w) for w in self.weights]
        for w, gw in zip(self.weights, grad_w):
            for j in range(w.shape[0]):
                for k in range(w.shape[1]):
                    w[j,k] += epsilon
                    out1 = cross_entropy(self.ff(x), y)
                    w[j,k] -= 2*epsilon
                    out2 = cross_entropy(self.ff(x), y)
                    gw[j,k] = np.float64(out1 - out2) / (2*epsilon)
                    w[j,k] += epsilon # return weight to original value
        return grad_w

##### TESTING #####
X = [np.array([[0],[0]]), np.array([[0],[1]]), np.array([[1],[0]]), np.array([[1],[1]])]
y = [np.array([[1], [0]]), np.array([[0], [1]]), np.array([[0], [1]]), np.array([[1], [0]])]
data = []
for x, t in zip(X, y):
    data.append((x, t))

def nn_test():
    c = NN([2, 2, 2], [sigmoid, sigmoid, softmax])
    c.grad_descent(data, 100, .01)
    for x in X:
        print(c.ff(x))

nn_test()
UPDATE: I found one small bug in the code, but it still does not converge correctly. I calculated/derived the gradients for both matrices by hand and found no errors in my implementation, so I still do not know what is wrong with it.
UPDATE #2: I created a procedural version of what I was using above with the following code. Upon testing I discovered that the NN was able to learn the correct weights for classifying each of the 4 cases in XOR separately, but when I try to train using all the training examples at once (as shown), the resultant weights almost always output something around .5 for both output nodes. Could someone please tell me why this is occurring?
X = [np.array([[0],[0]]), np.array([[0],[1]]), np.array([[1],[0]]), np.array([[1],[1]])]
y = [np.array([[1], [0]]), np.array([[0], [1]]), np.array([[0], [1]]), np.array([[1], [0]])]

weights = [np.random.rand(2, 3) for _ in range(2)]
for _ in range(1000):
    for i in range(4):
        # Feedforward
        a0 = X[i]
        z0 = weights[0].dot(np.vstack([a0, np.ones((1, 1))]))
        a1 = sigmoid(z0)
        z1 = weights[1].dot(np.vstack([a1, np.ones((1, 1))]))
        a2 = softmax(z1)
        # print('output:', a2, '\ncost:', cross_entropy(y[i], a2))

        # backprop
        del1 = cross_entropy(y[i], a2, True)
        dcdw1 = del1.dot(np.vstack([a1, np.ones((1, 1))]).T)
        del0 = weights[1][:, :-1].T.dot(del1)*sigmoid(z0, True)
        dcdw0 = del0.dot(np.vstack([a0, np.ones((1, 1))]).T)
        weights[0] -= .03*weights[0]*dcdw0
        weights[1] -= .03*weights[1]*dcdw1

i = 0
a0 = X[i]
z0 = weights[0].dot(np.vstack([a0, np.ones((1, 1))]))
a1 = sigmoid(z0)
z1 = weights[1].dot(np.vstack([a1, np.ones((1, 1))]))
a2 = softmax(z1)
print(a2)
Softmax doesn't look right
Using cross-entropy loss, the derivative for softmax is really nice (assuming you are using a one-hot vector, where "one-hot" essentially means an array of all 0's except for a single 1, i.e. [0,0,0,0,0,0,1,0,0]).
For node y_n it ends up being y_n-t_n. So for a softmax with output:
[0.2,0.2,0.3,0.3]
And desired output:
[0,1,0,0]
The gradient at each of the softmax nodes is:
[0.2,-0.8,0.3,0.3]
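A quick numpy check of that claim, using just the numbers above:
import numpy as np
softmax_out = np.array([0.2, 0.2, 0.3, 0.3])
target = np.array([0, 1, 0, 0])      # one-hot desired output
print(softmax_out - target)          # [ 0.2 -0.8  0.3  0.3]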
It looks as if you are subtracting 1 from the entire array. The variable names aren't very clear, so if you could possibly rename them from L to what L represents, such as output_layer, I'd be able to help more.
Also, for the other layers, just to clear things up: when you say a^(L-1), do you mean "a to the power of (L-1)" or "a xor (L-1)"? Because in Python, ^ means xor.
EDIT:
I ran this code and found some strange matrix dimensions (modified at line 69, in the backprop function):
deltas = [0 for _ in range(len(self.weights))]
grad_w = [0 for _ in range(len(self.weights))]
deltas[-1] = cross_entropy(y, activations[-1], True) # assumes output activation is softmax
print(deltas[-1].shape)
grad_w[-1] = np.dot(deltas[-1], np.vstack([activations[-2], np.ones((1, 1))]).transpose())
print(self.weights[-1].shape)
print(activations[-2].shape)
exit()

Vectorize for loop in python

I have the following loop where I am calculating a softmax transform for batches of different sizes, as below:
import numpy as np

def softmax(Z, arr):
    """
    :param Z: numpy array of any shape (output from hidden layer)
    :param arr: numpy array of any shape (start, end)
    :return A: output of multinum_logit(Z,arr), same shape as Z
    :return cache: returns Z as well, useful during back propagation
    """
    A = np.zeros(Z.shape)
    for i in range(len(arr)):
        shiftx = Z[:, arr[i, 1]:arr[i, 2]+1] - np.max(Z[:, int(arr[i, 1]):int(arr[i, 2])+1])
        A[:, arr[i, 1]:arr[i, 2]+1] = np.exp(shiftx)/np.exp(shiftx).sum()
    cache = Z
    return A, cache
Since this for loop is not vectorized, it is the bottleneck in my code. What is a possible solution to make it faster? I have tried using numba's @jit, which makes it a little faster but not enough. I was wondering if there is another way to make it faster or to vectorize/parallelize it.
Sample input data for the function
Z = np.random.random([1, 10000])
arr = np.zeros([100, 3])
arr[:, 0] = 1
temp = int(Z.shape[1]/arr.shape[0])
for i in range(arr.shape[0]):
    arr[i, 1] = i*temp
    arr[i, 2] = (i+1)*temp-1
arr = arr.astype(int)
EDIT:
I forgot to stress here that my number of classes varies. For example, batch 1 may have 10 classes and batch 2 may have 15 classes. Therefore I am passing an array arr which keeps track of which rows belong to batch 1 and so on. These batches are different from the batches in a traditional neural network framework.
In the above example, arr keeps track of the starting and ending indices of the rows. So the denominator in the softmax function will be the sum of only those observations whose index lies between the starting and ending index.
Here's a vectorized softmax function. It's the implementation of an assignment from Stanford's cs231n course on conv nets.
The function takes in optimizable parameters, input data, targets, and a regularizer. (You can ignore the regularizer as that references another class exclusive to some cs231n assignments).
It returns a loss and gradients of the parameters.
def softmax_loss_vectorized(W, X, y, reg):
    """
    Softmax loss function, vectorized version.
    Inputs and outputs are the same as softmax_loss_naive.
    """
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)
    num_train = X.shape[0]

    scores = X.dot(W)
    shift_scores = scores - np.amax(scores, axis=1).reshape(-1, 1)
    softmax = np.exp(shift_scores)/np.sum(np.exp(shift_scores), axis=1).reshape(-1, 1)
    loss = -np.sum(np.log(softmax[range(num_train), list(y)]))
    loss /= num_train
    loss += 0.5 * reg * np.sum(W * W)

    dSoftmax = softmax.copy()
    dSoftmax[range(num_train), list(y)] += -1
    dW = (X.T).dot(dSoftmax)
    dW = dW/num_train + reg * W
    return loss, dW
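A quick usage sketch; the shapes and values here are illustrative, not from the course assignment:
import numpy as np
N, D, C = 5, 4, 3                     # samples, features, classes
X = np.random.randn(N, D)
y = np.random.randint(0, C, size=N)
W = 0.01 * np.random.randn(D, C)
loss, dW = softmax_loss_vectorized(W, X, y, reg=0.1)
print(loss, dW.shape)                 # scalar loss, gradient of shape (D, C)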
For comparison's sake, here is a naive (non-vectorized) implementation of the same method.
def softmax_loss_naive(W, X, y, reg):
    """
    Softmax loss function, naive implementation (with loops)
    Inputs have dimension D, there are C classes, and we operate on minibatches
    of N examples.
    Inputs:
    - W: A numpy array of shape (D, C) containing weights.
    - X: A numpy array of shape (N, D) containing a minibatch of data.
    - y: A numpy array of shape (N,) containing training labels; y[i] = c means
      that X[i] has label c, where 0 <= c < C.
    - reg: (float) regularization strength
    Returns a tuple of:
    - loss as single float
    - gradient with respect to weights W; an array of same shape as W
    """
    loss = 0.0
    dW = np.zeros_like(W)
    num_train = X.shape[0]
    num_classes = W.shape[1]
    for i in range(num_train):
        scores = X[i].dot(W)
        shift_scores = scores - max(scores)
        loss_i = -shift_scores[y[i]] + np.log(sum(np.exp(shift_scores)))
        loss += loss_i
        for j in range(num_classes):
            softmax = np.exp(shift_scores[j])/sum(np.exp(shift_scores))
            if j == y[i]:
                dW[:, j] += (-1 + softmax) * X[i]
            else:
                dW[:, j] += softmax * X[i]
    loss /= num_train
    loss += 0.5 * reg * np.sum(W * W)
    dW = dW/num_train + reg * W   # average first, then add the regularization gradient (as in the vectorized version)
    return loss, dW
Source
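For the segment-wise softmax in the original question specifically, the loop can also be vectorized directly with numpy's reduceat operations. This is a minimal sketch assuming, as in the sample data, that the segments are contiguous, ordered, and cover all columns of a single-row Z:
import numpy as np

def softmax_segments(Z, arr):
    # arr[:, 1] holds the start index of each segment, arr[:, 2] the (inclusive) end index
    starts = arr[:, 1]
    counts = arr[:, 2] - arr[:, 1] + 1
    z = Z[0]                                        # Z has a single row in the sample data
    seg_max = np.maximum.reduceat(z, starts)        # per-segment max, for numerical stability
    shifted = np.exp(z - np.repeat(seg_max, counts))
    seg_sum = np.add.reduceat(shifted, starts)      # per-segment normalizer
    A = (shifted / np.repeat(seg_sum, counts)).reshape(Z.shape)
    return A, Z
For the sample Z and arr above, A, _ = softmax_segments(Z, arr) should match the output of the original loop while avoiding the Python-level iteration.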

Calculate optimal input of a neural network with theano, by using gradient descent w.r.t. inputs

I have implemented and trained a neural network in Theano with k binary inputs (0, 1), one hidden layer, and one unit in the output layer. Once it has been trained, I want to obtain the inputs that maximize the output (e.g. the x that brings the output unit closest to 1). So far I haven't found an implementation of this, so I am trying the following approach:
Train network => obtain trained weights (theta1, theta2)
Define the neural network function with x as input and trained theta1, theta2 as fixed parameters. That is: f(x) = sigmoid( theta1*(sigmoid (theta2*x ))). This function takes x and with given trained weights (theta1, theta2) gives output between 0 and 1.
Apply gradient descent w.r.t. x on the neural network function f(x) and obtain x that maximizes f(x) with theta1 and theta2 given.
For this I have implemented the following code with a toy example (k = 2). It is based on the tutorial at http://outlace.com/Beginner-Tutorial-Theano/ but with the vector y changed so that there is only one combination of inputs that gives f(x) ~ 1, which is x = [0, 1].
Edit 1: As suggested, the optimizer was set to None and the bias unit was fixed to 1.
Step 1: Train the neural network. This runs well and without error.
import os
os.environ["THEANO_FLAGS"] = "optimizer=None"
import theano
import theano.tensor as T
import theano.tensor.nnet as nnet
import numpy as np

x = T.dvector()
y = T.dscalar()

def layer(x, w):
    b = np.array([1], dtype=theano.config.floatX)
    new_x = T.concatenate([x, b])
    m = T.dot(w.T, new_x) #theta1: 3x3 * x: 3x1 = 3x1 ;;; theta2: 1x4 * 4x1
    h = nnet.sigmoid(m)
    return h

def grad_desc(cost, theta):
    alpha = 0.1 #learning rate
    return theta - (alpha * T.grad(cost, wrt=theta))

in_units = 2
hid_units = 3
out_units = 1

theta1 = theano.shared(np.array(np.random.rand(in_units + 1, hid_units), dtype=theano.config.floatX)) # randomly initialize
theta2 = theano.shared(np.array(np.random.rand(hid_units + 1, out_units), dtype=theano.config.floatX))

hid1 = layer(x, theta1) #hidden layer
out1 = T.sum(layer(hid1, theta2)) #output layer
fc = (out1 - y)**2 #cost expression

cost = theano.function(inputs=[x, y], outputs=fc, updates=[
    (theta1, grad_desc(fc, theta1)),
    (theta2, grad_desc(fc, theta2))])
run_forward = theano.function(inputs=[x], outputs=out1)

inputs = np.array([[0,1],[1,0],[1,1],[0,0]]).reshape(4,2) #training data X
exp_y = np.array([1, 0, 0, 0]) #training data Y

cur_cost = 0
for i in range(5000):
    for k in range(len(inputs)):
        cur_cost = cost(inputs[k], exp_y[k]) #call our Theano-compiled cost function, it will auto update weights

print(run_forward([0,1]))
Output of run forward for [0,1] is: 0.968905860574.
We can also get values of weights with theta1.get_value() and theta2.get_value()
Step 2: Define neural network function f(x). Trained weights (theta1, theta2) are constant parameters of this function.
Things get a little trickier here because of the bias unit, which is part of the vector of inputs x. To handle this, I concatenate b and x. The code now runs well.
b = np.array([[1]], dtype=theano.config.floatX)
#b_sh = theano.shared(np.array([[1]], dtype=theano.config.floatX))
rand_init = np.random.rand(in_units, 1)
rand_init[0] = 1
x_sh = theano.shared(np.array(rand_init, dtype=theano.config.floatX))
th1 = T.dmatrix()
th2 = T.dmatrix()
nn_hid = T.nnet.sigmoid( T.dot(th1, T.concatenate([x_sh, b])) )
nn_predict = T.sum( T.nnet.sigmoid( T.dot(th2, T.concatenate([nn_hid, b]))))
Step 3:
The problem is now in the gradient descent, as x is not limited to values between 0 and 1.
fc2 = (nn_predict - 1)**2

cost3 = theano.function(inputs=[th1, th2], outputs=fc2, updates=[
    (x_sh, grad_desc(fc2, x_sh))])
run_forward = theano.function(inputs=[th1, th2], outputs=nn_predict)

cur_cost = 0
for i in range(10000):
    cur_cost = cost3(theta1.get_value().T, theta2.get_value().T) #call our Theano-compiled cost function, it will auto update weights
    if i % 500 == 0: #only print the cost every 500 epochs/iterations (to save space)
        print('Cost: %s' % (cur_cost,))

print(x_sh.get_value())
The last iteration prints:
Cost: 0.000220317356533
[[-0.11492753]
[ 1.99729555]]
Furthermore, input 1 keeps becoming more negative and input 2 keeps increasing, while the optimal solution is [0, 1]. How can this be fixed?
You are adding b=[1] via broadcasting rules as opposed to concatenating it. Also, once you concatenate it, your x_sh has one dimension too many, which is why the error occurs at nn_predict and not at nn_hid.
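If it helps, here is a minimal sketch of the fix this implies: keep x_sh one-dimensional so the concatenation matches what the layer function did during training. The shapes are assumptions based on the code in the question, not a definitive rewrite:
# keep x_sh 1-D, like x during training, so T.concatenate([x_sh, b]) behaves as in layer()
b = np.array([1], dtype=theano.config.floatX)
rand_init = np.random.rand(in_units)
x_sh = theano.shared(np.array(rand_init, dtype=theano.config.floatX))
th1 = T.dmatrix()
th2 = T.dmatrix()
nn_hid = T.nnet.sigmoid(T.dot(th1, T.concatenate([x_sh, b])))
nn_predict = T.sum(T.nnet.sigmoid(T.dot(th2, T.concatenate([nn_hid, b]))))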

How to use mini-batch instead of SGD

Here is a quick implementation of a one-layer neural network in python:
import numpy as np

# simulate data
np.random.seed(94106)
X = np.random.random((200, 3)) # 200 3d vectors
# first col is set to 1
X[:, 0] = 1

def simu_out(x):
    return np.sum(np.power(x, 2))

y = np.apply_along_axis(simu_out, 1, X)
# code 1 if above average
y = (y > np.mean(y)).astype("float64")*2 - 1

# split into training and testing sets
Xtr = X[:100]
Xte = X[100:]
ytr = y[:100]
yte = y[100:]

w = np.random.random(3)
# 1 layer network. Final layer has one node
# initial weights
def epoch():
    err_sum = 0
    global w
    for i in range(len(ytr)):
        learn_rate = .1
        s_l1 = Xtr[i].T.dot(w) # signal at layer 1, pre-activation
        x_l1 = np.tanh(s_l1) # output at layer 1, activation
        err = x_l1 - ytr[i]
        err_sum += err
        # see here: https://youtu.be/Ih5Mr93E-2c?t=51m8s
        delta_l1 = 2 * err * (1 - x_l1**2)
        dw = Xtr[i] * delta_l1
        w -= learn_rate * dw
    print("Mean error: %f" % (err_sum / len(ytr)))

epoch()
for i in range(1000):
    epoch()

def predict(X):
    global w
    return np.sign(np.tanh(X.dot(w)))

# > 80% accuracy!!
np.mean(predict(Xte) == yte)
It uses stochastic gradient descent for optimization. How do I apply mini-batch gradient descent here?
The difference between "classical" SGD and mini-batch gradient descent is that you use multiple samples (a so-called mini-batch) to calculate the update for w. This has the advantage that the steps you take in the direction of the solution are less noisy, as you follow a smoothed gradient.
To do that, you need an inner loop to calculate the update dw, where you iterate over the mini batch. For example (quick-n-dirty code):
from math import ceil

def epoch():
    err_sum = 0
    learn_rate = 0.1
    batch_size = 10  # example value; see the remark below about passing it as a parameter
    global w
    for i in range(int(ceil(len(ytr) / batch_size))):
        batch = Xtr[i*batch_size:(i+1)*batch_size]
        target = ytr[i*batch_size:(i+1)*batch_size]
        dw = np.zeros_like(w)
        for j in range(len(batch)):
            s_l1 = batch[j].T.dot(w)
            x_l1 = np.tanh(s_l1)
            err = x_l1 - target[j]
            err_sum += err
            delta_l1 = 2 * err * (1 - x_l1**2)
            dw += batch[j] * delta_l1
        w -= learn_rate * (dw / len(batch))
    print("Mean error: %f" % (err_sum / len(ytr)))
This gave an accuracy of 87 percent in a test.
Now, one more thing: you always go through the training set from start to end. You should definitely shuffle the data in each iteration. Always going through in the same order can really affect your performance, especially if you e.g. first have all samples of class A, and then all of class B. This can also make your training go in cycles. So just go through the set in a random order, e.g. with
order = np.random.permutation(len(ytr))
and replace all occurrences of i by order[i] in the epoch() function.
And a more general remark: global variables are often considered bad design, as you don't have any control over which snippet modifies them. Rather, pass w as a parameter. The same goes for the learning rate and the batch size. A short sketch combining these suggestions follows.
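Here is a minimal sketch combining the suggestions above (shuffling each epoch, parameters instead of globals); the default batch_size and learn_rate values are illustrative:
import numpy as np

def epoch(w, Xtr, ytr, batch_size=10, learn_rate=0.1):
    order = np.random.permutation(len(ytr))      # shuffle once per epoch
    Xs, ys = Xtr[order], ytr[order]
    err_sum = 0.0
    for start in range(0, len(ys), batch_size):
        xb = Xs[start:start + batch_size]
        yb = ys[start:start + batch_size]
        x_l1 = np.tanh(xb.dot(w))                # activations for the whole mini-batch
        err = x_l1 - yb
        err_sum += err.sum()
        delta_l1 = 2 * err * (1 - x_l1**2)
        dw = xb.T.dot(delta_l1) / len(yb)        # averaged gradient over the mini-batch
        w = w - learn_rate * dw
    print("Mean error: %f" % (err_sum / len(ytr)))
    return w
Inside the training loop it would be called as w = epoch(w, Xtr, ytr).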
