Gradient descent for ridge regression - python

I'm trying to write code that returns the parameters for ridge regression using gradient descent. Ridge regression is defined as

L(w) = Σ_i (y_i − w·x_i)² + λ‖w‖²

where L is the loss (or cost) function, w is the parameter vector of the loss function, the x_i are the data points, the y_i are the labels for each vector x_i, λ is a regularization constant, and b is the intercept parameter, which is assimilated into w by prepending a constant 1 feature to each x_i.
The gradient descent algorithm that I should implement looks like this:

w^(t+1) = w^t − η ∇L(w^t)

where ∇L is the gradient of L with respect to w, η is a step size, and t is the time or iteration counter.
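In vectorized NumPy the loss and its gradient look roughly like this (just a sketch of the math above, assuming the data matrix X already has the column of ones prepended; lam plays the role of λ, which my code calls C):

import numpy as np

def ridge_loss_and_grad(X, y, w, lam):
    residual = y - X @ w                        # y_i - w . x_i for every row
    loss = np.sum(residual**2) + lam * np.dot(w, w)
    grad = -2 * X.T @ residual + 2 * lam * w    # gradient of L with respect to w
    return loss, grad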
My code:
def ridge_regression_GD(x, y, C):
    x = np.insert(x, 0, 1, axis=1)  # add a constant 1 feature at the beginning of x: n x (d+1)
    w = np.zeros(len(x[0,:]))       # d+1
    t = 0
    eta = 1
    summ = np.zeros(1)
    grad = np.zeros(1)
    losses = np.array([0])
    loss_stry = 0
    while eta > 2**-30:
        for i in range(0, len(y)):  # accumulate the summation over all rows for loss and gradient
            summ = summ + ((y[i,] - np.dot(w, x[i,])) * x[i,])
            loss_stry = loss_stry + ((y[i,] - np.dot(w, x[i,]))**2)
        losses = np.insert(losses, len(losses), loss_stry + (C * np.dot(w, w)))
        grad = ((-2) * summ) + (np.dot((2 * C), w))
        eta = eta / 2
        w = w - (eta * grad)
        t += 1
        summ = np.zeros(1)
        loss_stry = 0
    b = w[0]
    w = w[1:]
    return w, b, losses
The output should be the intercept parameter b, the vector w and the loss in each iteration, losses.
My problem is that when I run the code I get increasing values for w and for the losses, both in the order of 10^13.
Would really appreciate if you could help me out. If you need any more information or clarification just ask for it.
NOTE: This post was deleted from Cross Validated forum. If there's a better forum to post it please let me know.

I checked your code, and it turns out your implementation of ridge regression is correct. The increasing values of w (and hence the increasing losses) come from extreme, unstable parameter updates (i.e. abs(eta*grad) is too big). I adjusted the learning rate and the weight decay rate to an appropriate range and changed the way you decay the learning rate, and then everything works as expected:
import numpy as np

sample_num = 100
x_dim = 10
x = np.random.rand(sample_num, x_dim)
w_tar = np.random.rand(x_dim)
b_tar = np.random.rand(1)[0]
y = np.matmul(x, np.transpose([w_tar])) + b_tar
C = 1e-6

def ridge_regression_GD(x, y, C):
    x = np.insert(x, 0, 1, axis=1)  # add a constant 1 feature at the beginning of x: n x (d+1)
    x_len = len(x[0,:])
    w = np.zeros(x_len)             # d+1
    t = 0
    eta = 3e-3
    summ = np.zeros(x_len)
    grad = np.zeros(x_len)
    losses = np.array([0])
    loss_stry = 0
    for i in range(50):
        for j in range(len(y)):     # accumulate the summation over all rows for loss and gradient
            summ = summ + (y[j,] - np.dot(w, x[j,])) * x[j,]
            loss_stry += (y[j,] - np.dot(w, x[j,]))**2
        losses = np.insert(losses, len(losses), loss_stry + C * np.dot(w, w))
        grad = -2 * summ + np.dot(2 * C, w)
        w -= eta * grad
        eta *= 0.9
        t += 1
        summ = np.zeros(x_len)
        loss_stry = 0
    return w[1:], w[0], losses

w, b, losses = ridge_regression_GD(x, y, C)
print("losses: ", losses)
print("b: ", b)
print("b_tar: ", b_tar)
print("w: ", w)
print("w_tar", w_tar)

x_pre = np.random.rand(3, x_dim)
y_tar = np.matmul(x_pre, np.transpose([w_tar])) + b_tar
y_pre = np.matmul(x_pre, np.transpose([w])) + b
print("y_pre: ", y_pre)
print("y_tar: ", y_tar)
Outputs:
losses: [ 0 1888 2450 2098 1128 354 59 5 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1]
b: 1.170527138363387
b_tar: 0.894306608050021
w: [0.7625987 0.6027163 0.58350218 0.49854847 0.52451963 0.59963663
0.65156702 0.61188389 0.74257133 0.67164963]
w_tar [0.82757802 0.76593551 0.74074476 0.37049698 0.40177269 0.60734677
0.72304859 0.65733725 0.91989305 0.79020028]
y_pre: [[3.44989377]
[4.77838804]
[3.53541958]]
y_tar: [[3.32865041]
[4.74528037]
[3.42093559]]
As you can see from how the losses change in the output, the learning rate eta = 3e-3 is still a bit too high, so the loss goes up during the first few training epochs, but it starts to drop once the learning rate decays to an appropriate value.
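If you want to double-check the fitted parameters, one option is to compare against the closed-form ridge solution w = (X'X + C·I)^(-1) X'y. A rough sketch, reusing the x, y and C defined above:

X1 = np.insert(x, 0, 1, axis=1)   # prepend the bias column, as in the function
w_closed = np.linalg.solve(X1.T @ X1 + C * np.eye(X1.shape[1]), X1.T @ y.ravel())
print("closed-form b:", w_closed[0])
print("closed-form w:", w_closed[1:])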


Error34, 'Result too large'

So I wanted to do gradient descent in Python to find the global minimum of f, where x = 10, the learning rate is 0.01, epsilon is 0.00001 and the maximum number of iterations is 10000.
# parameters to set
x = 10              # Starting value of x
alpha = 0.01        # Set learning rate
epsilon = 0.00001   # Stop algorithm when absolute difference between 2 consecutive x-values is less than epsilon
max_iter = 10000    # set maximum number of iterations

# Define function and derivative of function
f = lambda x: x**4 - 3*x**3 + 15
fprime = lambda x: 4*x**3 - 9*x**2

# Initialising
diff = 1            # initialise difference between 2 consecutive x-values
iter = 1            # iterations counter

# Now Gradient Descent
while diff > epsilon and iter < max_iter:   # 2 stopping criteria
    x_new = x - alpha * fprime(x)           # update rule
    print("Iteration ", iter, ": x-value is:", x_new, ", f(x) is: ", f(x_new))
    diff = abs(x_new - x)
    iter = iter + 1
    x = x_new

print("The local minimum occurs at: ", x)
But when I run the code, it only manages to print out 5 iterations and then I get an OverflowError.
Your learning rate is too high, which is causing the divergence you're observing. A value of alpha = 0.001 converges to a local minimum:
# parameters to set
x = 10              # Starting value of x
alpha = 0.001       # Set learning rate
epsilon = 0.00001   # Stop algorithm when absolute difference between 2 consecutive x-values is less than epsilon
max_iter = 10000    # set maximum number of iterations

# Define function and derivative of function
f = lambda x: x**4 - 3*x**3 + 15
fprime = lambda x: 4*x**3 - 9*x**2

# Initialising
diff = 1            # initialise difference between 2 consecutive x-values
iter = 1            # iterations counter

# Now Gradient Descent
while diff > epsilon and iter < max_iter:   # 2 stopping criteria
    x_new = x - alpha * fprime(x)           # update rule
    print("Iteration ", iter, ": x-value is:", x_new, ", f(x) is: ", f(x_new))
    diff = abs(x_new - x)
    iter = iter + 1
    x = x_new

print("The local minimum occurs at: ", x)

Cost function of logistic regression outputs NaN for some values of theta

While implementing logistic regression with only the numpy library, I wrote the following code for the cost function:
# sigmoid function
def sigmoid(z):
    sigma = 1/(1 + np.exp(-z))
    return sigma

# cost function
def cost(X, y, theta):
    m = y.shape[0]
    z = X @ theta
    h = sigmoid(z)
    J = np.sum((y*np.log(h)) + ((1-y)*np.log(1-h)))
    J = -J/m
    return J
Theta is a (3,1) array and X is the training data of shape (m,3). First column of X is ones.
For theta = [0,0,0], the cost function outputs 0.693, which is the correct cost, but for theta = [1,-1,1] it outputs:
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:5: RuntimeWarning: divide by zero encountered in log
"""
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:5: RuntimeWarning: invalid value encountered in multiply
"""
nan
My code for gradient descent is:
# gradient descent function
# alpha is the learning rate, iter is the number of iterations
def gradientDesc(X, y, theta, alpha, iter):
    m = y.shape[0]
    # d represents the derivative term
    d = np.zeros((3,1))
    for iter in range(iter):
        h = sigmoid(X @ theta) - y
        temp = h.T.dot(X)
        d = temp.T
        d /= m
        theta = theta - alpha*d
    return theta
But this does not give the correct value of theta. What should I do?
Are the values in X large? Large inputs can push the sigmoid output to exactly 0 or 1 after floating-point rounding, and np.log(0) then produces the warnings you are seeing. Have a look at this thread:
Divide-by-zero-in-log
Your gradient descent won't work properly unless you solve this issue of values exploding. I would also consider adding regularization in your cost function.
J += C * np.sum(theta**2)
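If you just want the warnings gone, one common workaround is to keep the sigmoid output strictly inside (0, 1) before taking the log. A rough sketch (eps is just an arbitrarily small constant, not something from your code):

import numpy as np

def stable_cost(X, y, theta, eps=1e-15):
    m = y.shape[0]
    h = 1 / (1 + np.exp(-(X @ theta)))   # sigmoid
    h = np.clip(h, eps, 1 - eps)         # keep log() away from exactly 0 or 1
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) / m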

How to calculate logistic regression accuracy

I am a complete beginner in machine learning and Python, and I have been tasked with coding logistic regression from scratch to understand what happens under the hood. So far I have coded the hypothesis function, the cost function and gradient descent, and then put them together into logistic regression. However, when I print the accuracy I get a low value (0.69) that doesn't change with more iterations or a different learning rate. My question is: is there a problem with my accuracy code below? Any help pointing me in the right direction would be appreciated.
X = data[['radius_mean', 'texture_mean', 'perimeter_mean',
          'area_mean', 'smoothness_mean', 'compactness_mean', 'concavity_mean',
          'concave points_mean', 'symmetry_mean', 'fractal_dimension_mean',
          'radius_se', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se',
          'compactness_se', 'concavity_se', 'concave points_se', 'symmetry_se',
          'fractal_dimension_se', 'radius_worst', 'texture_worst',
          'perimeter_worst', 'area_worst', 'smoothness_worst',
          'compactness_worst', 'concavity_worst', 'concave points_worst',
          'symmetry_worst', 'fractal_dimension_worst']]
X = np.array(X)
X = min_max_scaler.fit_transform(X)
Y = data["diagnosis"].map({'M':1,'B':0})
Y = np.array(Y)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)
X = data["diagnosis"].map(lambda x: float(x))

def Sigmoid(z):
    if z < 0:
        return 1 - 1/(1 + math.exp(z))
    else:
        return 1/(1 + math.exp(-z))

def Hypothesis(theta, x):
    z = 0
    for i in range(len(theta)):
        z += x[i]*theta[i]
    return Sigmoid(z)

def Cost_Function(X, Y, theta, m):
    sumOfErrors = 0
    for i in range(m):
        xi = X[i]
        hi = Hypothesis(theta, xi)
        error = Y[i] * math.log(hi if hi > 0 else 1)
        if Y[i] == 1:
            error = Y[i] * math.log(hi if hi > 0 else 1)
        elif Y[i] == 0:
            error = (1-Y[i]) * math.log(1-hi if 1-hi > 0 else 1)
        sumOfErrors += error
    constant = -1/m
    J = constant * sumOfErrors
    #print ('cost is: ', J )
    return J

def Cost_Function_Derivative(X, Y, theta, j, m, alpha):
    sumErrors = 0
    for i in range(m):
        xi = X[i]
        xij = xi[j]
        hi = Hypothesis(theta, X[i])
        error = (hi - Y[i])*xij
        sumErrors += error
    m = len(Y)
    constant = float(alpha)/float(m)
    J = constant * sumErrors
    return J

def Gradient_Descent(X, Y, theta, m, alpha):
    new_theta = []
    constant = alpha/m
    for j in range(len(theta)):
        CFDerivative = Cost_Function_Derivative(X, Y, theta, j, m, alpha)
        new_theta_value = theta[j] - CFDerivative
        new_theta.append(new_theta_value)
    return new_theta

def Accuracy(theta):
    correct = 0
    length = len(X_test, Hypothesis(X, theta))
    for i in range(length):
        prediction = round(Hypothesis(X[i], theta))
        answer = Y[i]
        if prediction == answer.all():
            correct += 1
    my_accuracy = (correct / length)*100
    print ('LR Accuracy %: ', my_accuracy)

def Logistic_Regression(X, Y, alpha, theta, num_iters):
    theta = np.zeros(X.shape[1])
    m = len(Y)
    for x in range(num_iters):
        new_theta = Gradient_Descent(X, Y, theta, m, alpha)
        theta = new_theta
        if x % 100 == 0:
            Cost_Function(X, Y, theta, m)
            print ('theta: ', theta)
            print ('cost: ', Cost_Function(X, Y, theta, m))
    Accuracy(theta)

initial_theta = [0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]
alpha = 0.0001
iterations = 1000
Logistic_Regression(X, Y, alpha, initial_theta, iterations)
This is using data from the wisconsin breast cancer dataset (https://www.kaggle.com/uciml/breast-cancer-wisconsin-data) where I am weighing in 30 features - although changing the features to ones which are known to correlate also doesn't change my accuracy.
Python gives us the scikit-learn library, which makes our work easier. This worked for me:

from sklearn.metrics import accuracy_score

y_pred = log.predict(x_test)
score = accuracy_score(y_test, y_pred)
Accuracy is one of the most intuitive performance measures: it is simply the ratio of correctly predicted observations to the total number of observations. Higher accuracy means the model is performing better.

Accuracy = (TP + TN) / (TP + TN + FP + FN)

TP = True positives
TN = True negatives
FP = False positives
FN = False negatives

Accuracy is only appropriate when your false positives and false negatives have similar cost. Otherwise, a better metric is the F1-score, which is given by

F1-score = 2 * (Precision * Recall) / (Precision + Recall), where
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)

Read more here:
https://en.wikipedia.org/wiki/Precision_and_recall
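If you want to compute these yourself without scikit-learn, here is a rough sketch (my own helper, assuming 0/1 NumPy label arrays and non-zero denominators):

import numpy as np

def f1_report(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1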
The beauty of machine learning in Python is that important modules like scikit-learn are open source, so you can always look at the actual code. The link below points to the scikit-learn metrics source code, which will give you an idea of how scikit-learn calculates the accuracy score when you do
from sklearn.metrics import accuracy_score
accuracy_score(y_true, y_pred)
https://github.com/scikit-learn/scikit-learn/tree/master/sklearn/metrics
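For plain 1-d label arrays, what accuracy_score returns boils down to the fraction of matching entries; a rough equivalent (just a sketch, ignoring the extra input handling the real implementation does):

import numpy as np

def simple_accuracy(y_true, y_pred):
    # fraction of positions where the prediction equals the true label
    return np.mean(np.asarray(y_true) == np.asarray(y_pred))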
I'm not sure how you arrived at a value of 0.0001 for alpha, but I think it's too low. Using your code with the cancer data shows that cost is decreasing with each iteration -- it's just going glacially.
When I raise this to 0.5, I still get decreasing costs, but at a more reasonable rate. After 1000 iterations it reports:
cost: 0.23668000993020666
And after fixing the Accuracy function I'm getting 92% on the test segment of the data.
You have NumPy installed, as shown by X = np.array(X), so you should really consider using it for your array operations. It will be orders of magnitude faster for jobs like this. Here is a vectorized version that gives results instantly rather than waiting:
import math
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

df = pd.read_csv("cancerdata.csv")
X = df.values[:,2:-1].astype('float64')
X = (X - np.mean(X, axis=0)) / np.std(X, axis=0)

## Add a bias column to the data
X = np.hstack([np.ones((X.shape[0], 1)), X])
X = MinMaxScaler().fit_transform(X)
Y = df["diagnosis"].map({'M':1,'B':0})
Y = np.array(Y)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)

def Sigmoid(z):
    return 1/(1 + np.exp(-z))

def Hypothesis(theta, x):
    return Sigmoid(x @ theta)

def Cost_Function(X, Y, theta, m):
    hi = Hypothesis(theta, X)
    _y = Y.reshape(-1, 1)
    J = 1/float(m) * np.sum(-_y * np.log(hi) - (1-_y) * np.log(1-hi))
    return J

def Cost_Function_Derivative(X, Y, theta, m, alpha):
    hi = Hypothesis(theta, X)
    _y = Y.reshape(-1, 1)
    J = alpha/float(m) * X.T @ (hi - _y)
    return J

def Gradient_Descent(X, Y, theta, m, alpha):
    new_theta = theta - Cost_Function_Derivative(X, Y, theta, m, alpha)
    return new_theta

def Accuracy(theta):
    correct = 0
    length = len(X_test)
    prediction = (Hypothesis(theta, X_test) > 0.5)
    _y = Y_test.reshape(-1, 1)
    correct = prediction == _y
    my_accuracy = (np.sum(correct) / length)*100
    print ('LR Accuracy %: ', my_accuracy)

def Logistic_Regression(X, Y, alpha, theta, num_iters):
    m = len(Y)
    for x in range(num_iters):
        new_theta = Gradient_Descent(X, Y, theta, m, alpha)
        theta = new_theta
        if x % 100 == 0:
            #print ('theta: ', theta)
            print ('cost: ', Cost_Function(X, Y, theta, m))
    Accuracy(theta)

ep = .012
initial_theta = np.random.rand(X_train.shape[1], 1) * 2 * ep - ep
alpha = 0.5
iterations = 2000
Logistic_Regression(X_train, Y_train, alpha, initial_theta, iterations)
I think I might have a different version of scikit-learn, because I had to change the MinMaxScaler line to make it work. The result is that I can do 10K iterations in the blink of an eye, and applying the model to the test set gives about 97% accuracy.
This also works, using vectorization to calculate the accuracy. But accuracy is not a recommended metric, as the answer above noted: if the data is not well balanced you should not use accuracy; use the F1-score instead.
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X.T, Y.T)
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y, LR_predictions) + np.dot(1-Y, 1-LR_predictions)) / float(Y.size) * 100) +
       '% ' + "(percentage of correctly labelled datapoints)")
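A note on that formula: for 0/1 labels, np.dot(Y, LR_predictions) counts the true positives and np.dot(1-Y, 1-LR_predictions) counts the true negatives, so the whole expression is just the number of correct predictions divided by Y.size; np.mean(Y == LR_predictions) * 100 would give the same percentage.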

Theano inner product 3d matrix

thanks for reading this.
I'm trying to implement a multi-label logistic regression using theano:
import numpy
import theano
import theano.tensor as T

rng = numpy.random

examples = 5
features = 10
labels = 2
D = (rng.randn(examples, labels, features), rng.randint(size=(labels, examples), low=0, high=2))
training_steps = 10000

# Declare Theano symbolic variables
x = T.matrix("x")
y = T.vector("y")
w = theano.shared(rng.randn(1, labels, features), name="w")
b = theano.shared(0., name="b")
print "Initial model:"
print w.get_value(), b.get_value()

# Construct Theano expression graph
p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b))         # Probability that target = 1
prediction = p_1 > 0.5                          # The prediction thresholded
xent = -y * T.log(p_1) - (1-y) * T.log(1-p_1)   # Cross-entropy loss function
cost = xent.mean() + 0.01 * (w ** 2).sum()      # The cost to minimize
gw, gb = T.grad(cost, [w, b])                   # Compute the gradient of the cost
                                                # (we shall return to this in a
                                                # following section of this tutorial)

# Compile
train = theano.function(
          inputs=[x, y],
          outputs=[prediction, xent],
          updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)),
          name='train')
predict = theano.function(inputs=[x], outputs=prediction, name='predict')

# Train
for i in range(training_steps):
    pred, err = train(D[0], D[1])

print "Final model:"
print w.get_value(), b.get_value()
print "target values for D:", D[1]
print "prediction on D:", predict(D[0])
but the -T.dot(x, w) product fails with this error:
TypeError: ('Bad input argument to theano function with name "train" at index 0(0-based)', 'Wrong number of dimensions: expected 2, got 3 with shape (5, 10, 2).')
x has shape (5, 2, 10) and w has shape (1, 2, 10). I would expect the dot product to have shape (5, 2).
My questions are:
Is there any way to do this inner product?
Do you think there is a better way to achieve a multi-label logistic regression?
thanks!
---- EDIT -----
So here is an implementation of what I would like to do using numpy.
x = rng.randn(examples, labels, features)
w = rng.randn(labels, features)
dot = numpy.zeros((examples, labels))
for example in range(examples):
    for label in range(labels):
        dot[example, label] = x[example, label, :].dot(w[label, :])
print dot
output:
[[-1.70321498 2.51088139]
[-5.73608956 0.1066286 ]
[ 2.31334531 3.31892284]
[ 1.56301872 -0.56150922]
[-1.98815855 -2.98866706]]
But I don't know how to do this symbolically using Theano.
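For what it's worth, the same batched product can be written in a single NumPy einsum call, which is handy for checking whatever symbolic version you end up with (this assumes x, w and dot are the arrays from the loop above):

import numpy as np

# for each example e and label l, take the dot product of x[e, l, :] with w[l, :]
dot_vectorized = np.einsum('elf,lf->el', x, w)
assert np.allclose(dot_vectorized, dot)   # matches the double-loop result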
After some hours of fighting, this seems to produce the right results.
I had an error: I was creating the input as rng.randn(examples, features, labels) instead of rng.randn(examples, features). That is, besides having more labels, the inputs should stay the same size.
The right way of computing the dot product was to use the theano.scan method, like:
results, updates = theano.scan(lambda label: T.dot(x, w[label,:]) - b[label], sequences=T.arange(labels))
thanks everybody for their help!
import numpy as np
import theano
import theano.tensor as T

rng = np.random

examples = 5
features = 10
labels = 2
D = (rng.randn(examples, features), rng.randint(size=(labels, examples), low=0, high=2))
training_steps = 10000

# Declare Theano symbolic variables
x = T.matrix("x")
y = T.matrix("y")
w = theano.shared(rng.randn(labels, features), name="w")
b = theano.shared(np.zeros(labels), name="b")
print "Initial model:"
print w.get_value(), b.get_value()

results, updates = theano.scan(lambda label: T.dot(x, w[label,:]) - b[label], sequences=T.arange(labels))

# Construct Theano expression graph
p_1 = 1 / (1 + T.exp(-results))                 # Probability that target = 1
prediction = p_1 > .5                           # The prediction thresholded
xent = -y * T.log(p_1) - (1-y) * T.log(1-p_1)   # Cross-entropy loss function
cost = xent.mean() + 0.01 * (w ** 2).sum()      # The cost to minimize
gw, gb = T.grad(cost, [w, b])                   # Compute the gradient of the cost
                                                # (we shall return to this in a
                                                # following section of this tutorial)

# Compile
train = theano.function(
          inputs=[x, y],
          outputs=[prediction, xent],
          updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)),
          name='train')
predict = theano.function(inputs=[x], outputs=prediction, name='predict')

# Train
for i in range(training_steps):
    pred, err = train(D[0], D[1])

print "Final model:"
print w.get_value(), b.get_value()
print "target values for D:", D[1]
print "prediction on D:", predict(D[0])

Gradient descent with random input implementation

I am trying to implement gradient descent on a dataset. Even though I tried everything, I couldn't make it work, so I created a test case: I run my code on random data and try to debug.
More specifically, I generate random vectors with components between 0 and 1 and random labels for these vectors, and then try to over-fit the training data.
However, my weight vector gets bigger and bigger in each iteration, and then I get infinities, so I do not actually learn anything. Here is my code:
import numpy as np
import random

def getRandomVector(n):
    return np.random.uniform(0, 1, n)

def getVectors(m, n):
    return [getRandomVector(n) for i in range(n)]

def getLabels(n):
    return [random.choice([-1, 1]) for i in range(n)]

def GDLearn(vectors, labels):
    maxIterations = 100
    stepSize = 0.01
    w = np.zeros(len(vectors[0]) + 1)
    for i in range(maxIterations):
        deltaw = np.zeros(len(vectors[0]) + 1)
        for i in range(len(vectors)):
            temp = np.append(vectors[i], -1)
            deltaw += (labels[i] - np.dot(w, temp)) * temp
        w = w + (stepSize * (-1 * deltaw))
    return w

vectors = getVectors(100, 30)
labels = getLabels(100)
w = GDLearn(vectors, labels)
print w
I am using LMS (least mean squares) as the loss function, so in every iteration my update is

w^(i+1) = w^i − R ∇E(w^i)

where w^i is the i-th weight vector, R is the stepSize and E(w^i) is the loss function. The loss function is

E(w) = (1/2) Σ_j (y_j − w·x_j)²

and differentiating it with respect to w gives the gradient

∇E(w) = −Σ_j (y_j − w·x_j) x_j
Now, my questions are:
Should I expect good results in this random scenario using gradient descent? (What are the theoretical bounds?)
If yes, what is the bug in my implementation?
PS: I tried several other maxIterations and stepSize parameters. Still not working.
PS2: This is the best way I can ask the question here. Sorry if the question is too specific. But it is driving me crazy. I really want to understand the problem.
Your code has a couple of faults:
In the getVectors() method, you do not actually use the input variable m;
In the GDLearn() method, you have a double loop but use the same variable i as the loop variable in both loops (the logic is still right, but it's confusing).
The prediction error (labels[i] - np.dot(w, temp)) has the wrong sign.
Step size matters: with 0.01 as the step size the cost increases in each iteration, and changing it to 0.001 solves the problem.
Here is my revised code based on your original code.
import numpy as np
import random

def getRandomVector(n):
    return np.random.uniform(0, 1, n)

def getVectors(m, n):
    return [getRandomVector(n) for i in range(m)]

def getLabels(n):
    return [random.choice([-1, 1]) for i in range(n)]

def GDLearn(vectors, labels):
    maxIterations = 100
    stepSize = 0.001
    w = np.zeros(len(vectors[0]) + 1)
    for iter in range(maxIterations):
        cost = 0
        deltaw = np.zeros(len(vectors[0]) + 1)
        for i in range(len(vectors)):
            temp = np.append(vectors[i], -1)
            prediction_error = np.dot(w, temp) - labels[i]
            deltaw += prediction_error * temp
            cost += prediction_error**2
        w = w - stepSize * deltaw
        print 'cost at', iter, '=', cost
    return w

vectors = getVectors(100, 30)
labels = getLabels(100)
w = GDLearn(vectors, labels)
print w
Running result -- you can see the cost is decreasing with each iteration but with a diminishing return.
cost at 0 = 100.0
cost at 1 = 99.4114482617
cost at 2 = 98.8476022685
cost at 3 = 98.2977744556
cost at 4 = 97.7612851154
cost at 5 = 97.2377571222
cost at 6 = 96.7268325883
cost at 7 = 96.2281642899
cost at 8 = 95.7414151147
cost at 9 = 95.2662577529
cost at 10 = 94.8023744037
......
cost at 90 = 77.367904046
cost at 91 = 77.2744249433
cost at 92 = 77.1823702888
cost at 93 = 77.0917090883
cost at 94 = 77.0024111475
cost at 95 = 76.9144470493
cost at 96 = 76.8277881325
cost at 97 = 76.7424064707
cost at 98 = 76.6582748518
cost at 99 = 76.5753667579
[ 0.16232142 -0.2425511 0.35740632 0.22548442 0.03963853 0.19595213
0.20080207 -0.3921798 -0.0238925 0.13097533 -0.1148932 -0.10077534
0.00307595 -0.30111942 -0.17924479 -0.03838637 -0.23938181 0.1384443
0.22929163 -0.0132466 0.03325976 -0.31489526 0.17468025 0.01351012
-0.25926117 0.09444201 0.07637793 -0.05940019 0.20961315 0.08491858
0.07438357]
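Since the labels here are random, gradient descent can at best approach the ordinary least-squares optimum on this training set. A quick baseline check, reusing the vectors and labels from above (np.linalg.lstsq with rcond=None needs a reasonably recent NumPy):

import numpy as np

A = np.array([np.append(v, -1) for v in vectors])   # 100 x 31 design matrix with the extra -1 feature
b = np.array(labels, dtype=float)
w_ls = np.linalg.lstsq(A, b, rcond=None)[0]          # least-squares solution
print('least-squares cost = %.4f' % np.sum((A.dot(w_ls) - b)**2))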
