Simple neural network backpropagation implementation - gradient problem (scipy fmin_cg optimizer) - Python

I am trying to implement a 3-layer neural network with feedforward and backpropagation.
I have tested my cost function and it is working fine. My gradient function also seems OK,
but when I try to optimize the variables using fmin_cg from scipy, I get this warning:
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 4.643489
Iterations: 1
Function evaluations: 123
Gradient evaluations: 110
I searched about this and someone said the problem is with the gradient. This is my code for the gradient:
theta_flatten = theta_flatten.reshape(1,-1)
# retrieve theta values from flattened theta
theta_hidden = theta_flatten[0,0:((input_layer_size+1)*hidden_layer_size)]
theta_hidden = theta_hidden.reshape((input_layer_size+1),hidden_layer_size)
theta_output = theta_flatten[0,((input_layer_size+1)*hidden_layer_size):]
theta_output = theta_output.reshape(hidden_layer_size+1,num_labels)
# start of section 1
a1 = x # 5000x401
z2 = np.dot(a1,theta_hidden) # 5000x25
a2 = sigmoid(z2)
a2 = np.append(np.ones(shape=(a1.shape[0],1)),a2,axis = 1) # 5000x26 # adding column of 1's to a2
z3 = np.dot(a2,theta_output) # 5000x10
a3 = sigmoid(z3) # a3 = h(x) w.r.t theta
a3 = rotate_column(a3) # mapping 0 to "0" instead of 0 to "10"
# end of section 1
# start of section 2
delta3 = a3 - y # 5000x10
# end of section 2
# start of section 3
delta2 = (np.dot(delta3,theta_output.transpose()))[:,1:] # 5000x25 # drop delta2(0)
delta2 = delta2*sigmoid_gradient(z2)
# end of section 3
# start of section 4
DELTA2 = np.dot(a2.transpose(),delta3) # 26x10
DELTA1 = np.dot(a1.transpose(),delta2) # 401x25
# end of section 4
# start of section 5
theta_hidden_ = np.append(np.ones(shape=(theta_hidden.shape[0],1)),theta_hidden[:,1:],axis = 1) # regularization
theta_output_ = np.append(np.ones(shape=(theta_output.shape[0],1)),theta_output[:,1:],axis = 1) # regularization
D1 = DELTA1/a1.shape[0] + (theta_hidden_*lambda_)/a1.shape[0]
D2 = DELTA2/a1.shape[0] + (theta_output_*lambda_)/a1.shape[0]
# end of section 5
Dvec = np.append(D1,D2)
return Dvec
I have looked at other people's implementations on GitHub, but nothing works, and they implemented it the same way I did.
Some comments:
Section 1: feedforward implementation
Sections 2 to 4: backpropagation from the output layer to the input layer
Section 5: aggregating gradients
Please help.
Thank you.
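One quick way to confirm whether the analytic gradient actually matches the cost function is a finite-difference check with scipy.optimize.check_grad. The sketch below is self-contained on a toy quadratic; the commented call at the end shows how the question's own cost and gradient functions (names assumed here, not taken from the post) would be plugged in, since both take the flattened theta as their first argument.

import numpy as np
from scipy.optimize import check_grad

# Toy illustration: a quadratic cost and its analytic gradient.
def toy_cost(theta):
    return 0.5 * np.sum(theta ** 2)

def toy_grad(theta):
    return theta

theta0 = np.random.randn(10)
# check_grad returns the norm of the difference between the analytic gradient
# and a finite-difference approximation; it should be tiny when the gradient is right.
print(check_grad(toy_cost, toy_grad, theta0))

# For the network above, the equivalent call would be something like
#   check_grad(cost_function, gradient_function, initial_theta_flat, *extra_args)
# where cost_function and gradient_function are the question's own functions
# (names assumed) and extra_args carries x, y, the layer sizes and lambda_.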

Related

Calculating Batch normalization

Part 1
I'm going through this article and wanted to try to calculate a forward and backward pass with batch normalization.
When doing the steps after the first layer, I get a batch norm output that is equal for all features.
Here is the code (I have on purpose done it in very small steps):
import numpy as np

w1 = np.array([[0.3, 0.4],[0.5,0.1],[0.2,0.3]])
X = np.array([[0.7,0.1],[0.3,0.8],[0.4,0.6]])

def mu(x, axis=0):
    return np.mean(x, axis=axis)

def sigma(z, mu):
    Ai = np.sum(z, axis=0)
    return np.sqrt((1/len(Ai)) * (Ai-mu)**2)

def Ai(z):
    return np.sum(z, axis=0)

def norm(Ai, mu, sigma):
    return (Ai-mu)/sigma

z1 = np.dot(w1, X.T)
mu1 = mu(z1)
A1 = Ai(z1)
sigma1 = sigma(z1, mu1)
gamma1 = np.ones(len(A1))
beta1 = np.zeros(len(A1))
Ahat = norm(A1, mu1, sigma1)  # since gamma is just ones it doesn't change anything here
The output I get from this is:
[1.73205081 1.73205081 1.73205081]
Part 2
In this image:
Should the sigma_mov and mu_mov be set to zero for the first layer?
EDIT: I think I found what I did wrong. In the normalization step I used A1 and not z1. Also, I think I found that it is normal to initialize the moving average with zeros for the mean and ones for the variance. It would be nice if anyone could confirm this.
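For reference, here is a minimal sketch of the forward normalization applied to z1 itself, per feature across the batch, which is what the EDIT describes. The epsilon term and the choice of axis=1 (samples along the columns of z1) are assumptions on my part, not something stated in the original post.

import numpy as np

w1 = np.array([[0.3, 0.4], [0.5, 0.1], [0.2, 0.3]])
X = np.array([[0.7, 0.1], [0.3, 0.8], [0.4, 0.6]])

z1 = np.dot(w1, X.T)                          # pre-activations, shape (3, 3)

eps = 1e-5                                    # small constant for numerical stability (assumed)
mu1 = z1.mean(axis=1, keepdims=True)          # per-feature mean over the batch
var1 = z1.var(axis=1, keepdims=True)          # per-feature variance over the batch
z1_hat = (z1 - mu1) / np.sqrt(var1 + eps)     # normalized pre-activations

gamma1 = np.ones((z1.shape[0], 1))
beta1 = np.zeros((z1.shape[0], 1))
out1 = gamma1 * z1_hat + beta1                # scale and shift (identity here)
print(out1)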

Calculate equation of hyperplane using LIBSVM support vectors and coefficients in Python

I am using the LIBSVM library in python and am trying to reconstruct the equation (w'x + b) of the hyperplane from the calculated support vectors.
The model appears to train correctly but I am unable to manually calculate prediction results that match the output of svm_predict for the test data.
I have used the below link from the FAQ to try and troubleshoot but I am still not able to calculate the correct results. https://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html#f804
My code is as follows:
from svmutil import *
import numpy as np

ytrain, xtrain = svm_read_problem('small_train.libsvm')

# Change labels from 0 to -1
for index in range(len(ytrain)):
    if ytrain[index] == 0:
        ytrain[index] = -1.0
print("Training set loaded...")

m = svm_train(ytrain, xtrain, '-q')
print("Model trained...")

sv = np.asarray(m.get_SV())
sv_coef = np.asarray(m.get_sv_coef())
sv_indices = np.asarray(m.get_sv_indices())
rho = m.rho[0]

w = np.zeros(len(xtrain[0]))
b = -rho

# weight vector w = sum over i ( coef_i * x_i )
for index, coef in zip(sv_indices, sv_coef):
    ai = coef[0]
    for key in xtrain[index-1]:
        w[key] = w[key] + (ai * xtrain[index-1][key])

# From LIBSVM FAQ - Doesn't seem to impact results
# if m.label[1] == -1:
#     w = np.negative(w)
#     b = -b

print(np.round(w, 2))

ytest, xtest = svm_read_problem('small_test.libsvm')

# Change labels from 0 to -1
for index in range(len(ytest)):
    if ytest[index] == 0:
        ytest[index] = -1.0
print("Test set loaded...")

print("Predict test set...")
p_label, p_acc, p_val = svm_predict(ytest, xtest, m)
print("p_label: ", p_label)
print("p_val: ", np.round(p_val, 3))

for i in range(len(ytest)):
    wx = 0
    for key in xtest[i]:
        wx = wx + (xtest[i][key] * w[key])
    print("Manual calc: ", np.round(wx + b, 3))
My understanding is that my manually calculated results, using w'x + b, should match those contained in p_val. I have tried negating both w and b and have still not been able to get the same results as those in p_val.
The data sets (LIBSVM format) I am using are:
small_train.libsvm
0 0:-0.36 1:-0.91 2:-0.99 3:-0.57 4:-1.38 5:-1.54
1 0:-1.4 1:-1.9 2:0.09 3:0.29 4:-0.3 5:-1.3
1 0:-0.43 1:1.45 2:-0.68 3:-1.58 4:0.32 5:-0.14
1 0:-0.76 1:0.3 2:-0.57 3:-0.33 4:-1.5 5:1.84
small_test.libsvm
1 0:-0.97 1:-0.69 2:-0.96 3:1.05 4:0.02 5:0.64
0 0:-0.82 1:-0.17 2:-0.36 3:-1.99 4:-1.54 5:-0.31
Are the values of w being calculated correctly, and are the p_val results the correct values to compare against?
Any help, as always, is greatly appreciated.
I managed to get the values to match by changing:
m = svm_train(ytrain, xtrain, '-q')
to
m = svm_train(ytrain, xtrain, '-q -t 0')
From looking at the documentation, the default kernel type is non-linear (radial basis function). After setting a linear kernel, the results now align.
Below are the available kernel types:
-t kernel_type : set type of kernel function (default 2)
0 -- linear: u'*v
1 -- polynomial: (gamma*u'*v + coef0)^degree
2 -- radial basis function: exp(-gamma*|u-v|^2)
3 -- sigmoid: tanh(gamma*u'*v + coef0)
4 -- precomputed kernel (kernel values in training_set_file)
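As a side note on why the kernel type mattered: the manual w'x + b calculation is only meaningful for the linear kernel. With the default RBF kernel there is no single weight vector in input space, and the decision value has to be computed from the support vectors directly as sum_i coef_i * exp(-gamma * ||sv_i - x||^2) - rho. Below is a rough sketch of that computation; it is not from the original post, and it assumes the trained model exposes gamma as m.param.gamma and that the features are the dense indices 0..5 used in the data above.

import numpy as np

def rbf_decision_value(model, x_dict, num_features=6):
    # Decision value for one sparse sample (dict of index -> value) under an RBF kernel.
    gamma = model.param.gamma              # assumed attribute path for the trained gamma
    rho = model.rho[0]
    x = np.zeros(num_features)
    for key, val in x_dict.items():
        x[key] = val
    value = 0.0
    for sv_dict, coef in zip(model.get_SV(), model.get_sv_coef()):
        sv = np.zeros(num_features)
        for key, val in sv_dict.items():
            if key >= 0:                   # skip LIBSVM's -1 end-of-vector marker if present
                sv[key] = val
        value += coef[0] * np.exp(-gamma * np.sum((sv - x) ** 2))
    return value - rho

# e.g. rbf_decision_value(m, xtest[0]) should track p_val for the RBF-trained model.

For the linear kernel the same sum collapses to w'x + b, which is why the '-t 0' change makes the manual calculation line up.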

"Hello World" CTC (Connectionist Temporal Classification) model

I have created the following Python program, which, as far as I understand CTC, should be a valid CTC-based model, as well as training data. The best documentation I can find is CNTK_208_Speech_CTC Tutorial, which is what I've based this on. The program is as simple as I could make it, and it relies only on numpy and CNTK, and generates data itself.
When I run this, I get the following error:
Validating --> ForwardBackward2850 = ForwardBackward (LabelsToGraph2847, StableSigmoid2703) : [5 x labelAxis1], [5 x inputAxis1] -> []
RuntimeError: The Matrix dimension in the ForwardBackwardNode operation does not match.
This seems to be the same issue from this ticket: https://github.com/Microsoft/CNTK/issues/2156
Here is the Python program:
# cntk_ctc_hello_world.py
#
# This is a "hello world" example of using CTC (Connectionist Temporal Classification) with CNTK.
#
# The input is a sequence of vectors of size 17. We use 17 because it's easy to spot that number in
# error messages. The output is a string of codes, each code being one of 4 possible characters from
# our alphabet that we'll refer to here as "ABCD", although they're actually just represented
# by the numbers 0..3, which is typical for classification systems. To make the setup of training data
# trivial, we assign the first four elements of our 17-dimension input vector to the four characters
# of our alphabet, so that the matching is:
# 10000000000000000 A
# 01000000000000000 B
# 00100000000000000 C
# 00010000000000000 D
# In our input sequences, we repeat each code three to five times, followed by three to five codes
# containing random noise. Whether it's repeated 3,4, or 5 times, is random for each code and each
# spacer. When we emit one of our codes, we fill the first 4 values with the code, and the remaining
# 13 values with random noise.
# For example:
# Input: AAA-----CCCC---DDDDD
# Output: ACD
import cntk as C
import numpy as np
import random
import sys
InputDim = 17
NumClasses = 4 # A,B,C,D
MinibatchSize = 100
MinibatchPerEpoch = 50
NumEpochs = 10
MaxOutputSeqLen = 10 # ABCDABCDAB
inputAxis = C.Axis.new_unique_dynamic_axis('inputAxis')
labelAxis = C.Axis.new_unique_dynamic_axis('labelAxis')
inputVar = C.sequence.input_variable((InputDim), sequence_axis=inputAxis, name="input")
labelVar = C.sequence.input_variable((NumClasses+1), sequence_axis=labelAxis, name="labels")
# Construct an LSTM-based model that will perform the classification
with C.default_options(activation=C.sigmoid):
    classifier = C.layers.Sequential([
        C.layers.For(range(3), lambda: C.layers.Recurrence(C.layers.LSTM(128))),
        C.layers.Dense(NumClasses + 1)
    ])(inputVar)
criteria = C.forward_backward(C.labels_to_graph(labelVar), classifier, blankTokenId=NumClasses, delayConstraint=3)
err = C.edit_distance_error(classifier, labelVar, squashInputs=True, tokensToIgnore=[NumClasses])
lr = C.learning_rate_schedule([(3, .01), (1,.001)], C.UnitType.sample)
mm = C.momentum_schedule([(1000, 0.9), (0, 0.99)], MinibatchSize)
learner = C.momentum_sgd(classifier.parameters, lr, mm)
trainer = C.Trainer(classifier, (criteria, err), learner)
# Return a numpy array of 17 elements, for this code
def make_code(code):
    a = np.zeros(NumClasses)                    # 0,0,0,0
    v = np.random.rand(InputDim - NumClasses)   # 13x random
    a = np.concatenate((a, v))
    a[code] = 1
    return a

def make_noise_code():
    return np.random.rand(InputDim)

def make_onehot(code):
    v = np.zeros(NumClasses+1)
    v[code] = 1
    return v
def gen_batch():
    x_batch = []
    y_batch = []
    for mb in range(MinibatchSize):
        yLen = random.randint(1, MaxOutputSeqLen)
        x = []
        y = []
        for i in range(yLen):
            code = random.randint(0,3)
            y.append(make_onehot(code))
            xLen = random.randint(3,5)       # Input is 3 to 5 repetitions of the code
            for j in range(xLen):
                x.append(make_code(code))
            spacerLen = random.randint(3,5)  # Spacer is 3 to 5 repetitions of noise
            for j in range(spacerLen):
                x.append(make_noise_code())
        x_batch.append(np.array(x, dtype='float32'))
        y_batch.append(np.array(y, dtype='float32'))
    return x_batch, y_batch
#######################################################################################
# Dump first X/Y training pair from minibatch
#x, y = gen_batch()
#print("\nx sequence of first sample of minibatch:\n", x[0])
#print("\ny sequence of first sample of minibatch:\n", y[0])
#######################################################################################
progress_printer = C.logging.progress_print.ProgressPrinter(tag='Training', num_epochs=NumEpochs)
for epoch in range(NumEpochs):
    for mb in range(MinibatchPerEpoch):
        x_batch, y_batch = gen_batch()
        trainer.train_minibatch({inputVar: x_batch, labelVar: y_batch})
    progress_printer.epoch_summary(with_metric=True)
For those who are facing this error, there are two points to take note of:
(1) The data provided to the labels sequence tensor passed to labels_to_graph must have the same sequence length as the data coming out of the network output at runtime.
(2) If during model building you change the dynamic sequence axis of the input sequence tensor (e.g. by striding along the sequential axis), then you must call reconcile_dynamic_axes on your labels sequence tensor, with the network output as the second argument to the function. This tells CNTK that the labels have the same dynamic axis as the network output.
Adhering to these two points will allow forward_backward to run, as sketched below.
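A minimal sketch of point (2), reusing the variable names from the program above; whether this plain model actually needs the call is an assumption here, since it only matters when the model changes the input's sequence axis.

network_output = classifier
labels_reconciled = C.reconcile_dynamic_axes(labelVar, network_output)
criteria = C.forward_backward(C.labels_to_graph(labels_reconciled), network_output,
                              blankTokenId=NumClasses, delayConstraint=3)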

Issue with gradient descent implementation of linear regression

I am taking this Coursera class on machine learning / linear regression. They describe the gradient descent algorithm for solving for the estimated OLS coefficients as repeatedly updating w := w - eta * grad(RSS(w)) until convergence.
They use w for the coefficients, H for the design matrix (or "features" as they call it), and y for the dependent variable. Their convergence criterion is the usual one, the norm of the gradient of the RSS falling below a tolerance epsilon; that is, their definition of "not converged" is ||grad(RSS(w))|| >= epsilon.
I am having trouble getting this algorithm to converge and was wondering whether I was overlooking something in my implementation. Below is the code. Please note that I also ran the sample dataset I use (df) through the statsmodels regression library, just to confirm that a regression could converge and to get coefficient values to compare against. It did, and they were:
Intercept 4.344435
x1 4.387702
x2 0.450958
Here is my implementation. At each iteration, it prints the norm of the gradient of RSS:
import numpy as np
import numpy.linalg as LA
import pandas as pd
from pandas import DataFrame
# First define the grad function: grad(RSS) = -2H'(y-Hw)
def grad_rss(df, var_name_y, var_names_h, w):
    # Set up feature matrix H
    H = DataFrame({"Intercept": [1 for i in range(0, len(df))]})
    for var_name_h in var_names_h:
        H[var_name_h] = df[var_name_h]
    # Set up y vector
    y = df[var_name_y]
    # Calculate the gradient of the RSS: -2H'(y - Hw)
    result = -2 * np.transpose(H.values) @ (y.values - H.values @ w)
    return result

def ols_gradient_descent(df, var_name_y, var_names_h, epsilon=0.0001, eta=0.05):
    # Set all initial w values to 0.0001 (not related to our choice of epsilon)
    w = np.array([0.0001 for i in range(0, len(var_names_h) + 1)])
    # Iteration counter
    t = 0
    # Basic algorithm: keep subtracting eta * grad(RSS) from w until
    # ||grad(RSS)|| < epsilon.
    while True:
        t = t + 1
        grad = grad_rss(df, var_name_y, var_names_h, w)
        norm_grad = LA.norm(grad)
        if norm_grad < epsilon:
            break
        else:
            print("{} : {}".format(t, norm_grad))
            w = w - eta * grad
            if t > 10:
                raise Exception("Failed to converge")
    return w
# ##########################################
df = DataFrame({
    "y"  : [20, 40, 60, 80, 100],
    "x1" : [1, 5, 7, 9, 11],
    "x2" : [23, 29, 60, 85, 99]
})
# Run
ols_gradient_descent(df, "y", ["x1", "x2"])
Unfortunately this does not converge, and in fact prints a norm that is exploding with each iteration:
1 : 44114.31506051333
2 : 98203544.03067812
3 : 218612547944.95386
4 : 486657040646682.9
5 : 1.083355358314664e+18
6 : 2.411675439503567e+21
7 : 5.368670935963926e+24
8 : 1.1951287949674022e+28
9 : 2.660496151835357e+31
10 : 5.922574875391406e+34
11 : 1.3184342751414824e+38
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
......
Exception: Failed to converge
If I increase the maximum number of iterations enough, it still doesn't converge; it just blows up toward infinity.
Is there an implementation error here, or am I misinterpreting the explanation in the class notes?
Updated w/ Answer
As @Kant suggested, the eta needs to be updated at each iteration. The course itself had some sample formulas for this, but none of them helped with convergence. This section of the Wikipedia page about gradient descent mentions the Barzilai-Borwein approach as a good way of updating the eta. I implemented it and altered my code to update the eta at each iteration, and the regression converged successfully. Below is my translation of the Wikipedia version of the formula into the variables used in the regression, as well as the code that implements it. Again, this code is called in the loop of my original ols_gradient_descent to update the eta.
def eta_t(w_t, w_t_minus_1, grad_t, grad_t_minus_1):
    delta_w = w_t - w_t_minus_1
    delta_grad = grad_t - grad_t_minus_1
    eta_t = (delta_w.T @ delta_grad) / (LA.norm(delta_grad))**2
    return eta_t
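For completeness, a minimal sketch (assumed, not shown in the original post) of how eta_t could be wired into the loop of ols_gradient_descent, reusing its local names df, var_name_y, var_names_h and epsilon: the previous w and gradient are kept around, and eta is recomputed from them starting with the second iteration.

# Sketch (assumed) of the modified loop inside ols_gradient_descent.
w = np.array([0.0001 for i in range(0, len(var_names_h) + 1)])
w_prev, grad_prev = None, None
eta = 0.05  # only used for the very first step
while True:
    grad = grad_rss(df, var_name_y, var_names_h, w)
    if LA.norm(grad) < epsilon:
        break
    if w_prev is not None:
        eta = eta_t(w, w_prev, grad, grad_prev)   # Barzilai-Borwein step size
    w_prev, grad_prev = w, grad
    w = w - eta * grad
return w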
Try decreasing the value of eta. Gradient descent can diverge if eta is too high.

Backpropagation for Neural Network - Python

I am writing a program to do a neural network in Python and am trying to set up the backpropagation algorithm. The basic idea is that I look through 5,000 training examples, collect the errors, find out in which direction I need to move the thetas, and then move them in that direction. There are the training examples, then one hidden layer, and then an output layer. However, I am getting the gradient/derivative/error wrong here, because the thetas are not moving the way they need to. I put 8 hours into this today and am not sure what I'm doing wrong. Thanks for your help!
x = 401x5000 matrix
y = 10x5000 matrix # 10 possible output classes, so one column will look like [0, 0, 0, 1, 0... 0] to indicate the output class was 4
theta_1 = 25x401
theta_2 = 10x26
alpha=.01
sigmoid= lambda theta, x: 1 / (1 + np.exp(-(theta*x)))
# move thetas in the right direction for each iteration
for iter in range(0, 1):
    all_delta_1, all_delta_2 = 0, 0
    # loop through each training example, 1...m
    for t in range(0, 5000):
        hidden_layer = np.matrix(np.concatenate((np.ones((1,1)), sigmoid(theta_1, x[:,t]))))
        output_layer = sigmoid(theta_2, hidden_layer)
        delta_3 = output_layer - y[:,t]
        delta_2 = np.multiply((theta_2.T*delta_3), (np.multiply(hidden_layer, (1-hidden_layer))))
        #print type(delta_3), delta_3.shape, type(hidden_layer.T), hidden_layer.T.shape
        all_delta_2 += delta_3*hidden_layer.T
        all_delta_1 += delta_2[1:]*x[:,t].T
    delta_gradient_2 = (all_delta_2 / m)
    delta_gradient_1 = (all_delta_1 / m)
    theta_1 = theta_1 - (alpha * delta_gradient_1)
    theta_2 = theta_2 - (alpha * delta_gradient_2)
It looks like your gradients are with respect to the unsquashed output layer.
Try changing output_layer = sigmoid(theta_2,hidden_layer) to output_layer = theta_2*hidden_layer.
Or recompute the gradients for squashed output.
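A minimal sketch of the second suggestion, assuming a squared-error loss (the original post does not say which loss is intended): with a sigmoid output layer, delta_3 picks up the sigmoid derivative, mirroring what delta_2 already does for the hidden layer. These lines would replace the delta_3 computation inside the training loop above.

output_layer = sigmoid(theta_2, hidden_layer)
delta_3 = np.multiply(output_layer - y[:, t],
                      np.multiply(output_layer, 1 - output_layer))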
