How to get dimensions right using fmin_cg in scipy.optimize - python

I have been trying to use fmin_cg to minimize the cost function for logistic regression.
This is how I call fmin_cg:
xopt = fmin_cg(costFn, fprime=grad, x0=initial_theta,
               args=(X, y, m), maxiter=400, disp=True, full_output=True)
Here is my costFn:
def costFn(theta, X, y, m):
    h = sigmoid(X.dot(theta))
    J = 0
    J = 1 / m * np.sum((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    return J.flatten()
Here is my grad:
def grad(theta, X, y, m):
    h = sigmoid(X.dot(theta))
    J = 1 / m * np.sum((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    gg = 1 / m * (X.T.dot(h-y))
    return gg.flatten()
It seems to be throwing this error:
/Users/sugethakch/miniconda2/lib/python2.7/site-packages/scipy/optimize/linesearch.pyc in phi(s)
     85     def phi(s):
     86         fc[0] += 1
---> 87         return f(xk + s*pk, *args)
     88
     89     def derphi(s):
ValueError: operands could not be broadcast together with shapes (3,) (300,)
I know it has something to do with my dimensions, but I can't seem to figure it out.
I am a noob, so I might be making an obvious mistake.
I have read this link:
fmin_cg: Desired error not necessarily achieved due to precision loss
But it somehow doesn't seem to work for me.
Any help?
Update: the sizes of X, y, m, and theta are:
(100, 3) ----> X
(100, 1) ----> y
100      ----> m
(3, 1)   ----> theta
This is how I initialize X, y, and m:
data = pd.read_csv('ex2data1.txt', sep=",", header=None)
data.columns = ['x1', 'x2', 'y']
x1 = data.iloc[:, 0].values[:, None]
x2 = data.iloc[:, 1].values[:, None]
y = data.iloc[:, 2].values[:, None]
# join x1 and x2 to make one array of X
X = np.concatenate((x1, x2), axis=1)
m, n = X.shape
ex2data1.txt:
34.62365962451697,78.0246928153624,0
30.28671076822607,43.89499752400101,0
35.84740876993872,72.90219802708364,0
.....
If it helps, I am trying to re-code one of the homework assignments from Coursera's ML course by Andrew Ng in Python.

Finally, I figured out what the problem in my initial program was.
My 'y' was (100, 1) and fmin_cg expects (100,). Once I flattened my 'y' it no longer threw the initial error, but the optimization still wasn't working:
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.693147
Iterations: 0
Function evaluations: 43
Gradient evaluations: 41
This was the same as what I achieved without optimization.
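For reference, the shape fix amounts to something like this (a sketch; variable names as in my question):
y = y.flatten()  # (100, 1) -> (100,), so h - y no longer broadcasts to (100, 100) inside grad
xopt = fmin_cg(costFn, fprime=grad, x0=initial_theta,
               args=(X, y, m), maxiter=400, disp=True, full_output=True)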
I figured out that the way to get this to optimize was to use the 'Nelder-Mead' method. I followed this answer: scipy is not optimizing and returns "Desired error not necessarily achieved due to precision loss"
Result = op.minimize(fun=costFn,
                     x0=initial_theta,
                     args=(X, y, m),
                     method='Nelder-Mead',
                     options={'disp': True})
                     # jac=grad)
This method doesn't need a Jacobian.
I got the results I was looking for:
Optimization terminated successfully.
Current function value: 0.203498
Iterations: 157
Function evaluations: 287
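For completeness, the optimized parameters can then be read off the returned OptimizeResult object (a small usage sketch):
theta_opt = Result.x     # the optimized theta
final_cost = Result.fun  # final value of costFn, the 0.203498 printed above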

Well, since I don't know exactly how you're initializing m, X, y, and theta, I had to make some assumptions. Hopefully my answer is relevant:
import numpy as np
from scipy.optimize import fmin_cg
from scipy.special import expit

def costFn(theta, X, y, m):
    # expit is the same as sigmoid, but faster
    h = expit(X.dot(theta))
    # instead of 1/m, I take the mean
    J = np.mean((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    return J  # should be a scalar

def grad(theta, X, y, m):
    h = expit(X.dot(theta))
    J = np.mean((-(y * np.log(h))) - ((1-y) * np.log(1-h)))
    gg = (X.T.dot(h-y))
    return gg.flatten()

# initialize matrices
X = np.random.randn(100, 3)
y = np.random.randn(100,)  # this apparently needs to be a 1-d vector
m = np.ones((3,))          # not using m; used np.mean for a weighted sum (see ali_m's comment)
theta = np.ones((3, 1))

xopt = fmin_cg(costFn, fprime=grad, x0=theta, args=(X, y, m),
               maxiter=400, disp=True, full_output=True)
While the code runs, I don't know enough about your problem to know if this is what you're looking for. But hopefully this can help you understand the problem better. One way to check your answer is to call fmin_cg with fprime=None and see how the answers compare.
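That comparison might look like this (a sketch reusing the toy variables above):
# Let fmin_cg estimate the gradient numerically and compare with the
# analytic-gradient result.
xopt_numeric = fmin_cg(costFn, fprime=None, x0=theta, args=(X, y, m),
                       maxiter=400, disp=True, full_output=True)
print(xopt[0])          # parameter vector from the analytic gradient
print(xopt_numeric[0])  # parameter vector from the numerical gradient; should be close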

Related

Fitting N datapoints in 3D on a straight line

I have N datapoints in 3D that lie on a line. The y-direction is fixed, so I want to fit x, z against y.
Let's say we have 6 datapoints that align with the y axis:
x=[0,0,0,0,0,0]
y=[1,2,3,4,5,6]
z=[0,0,0,0,0,0]
What I want to do: get the best set of fitting parameters, the goodness of fit, and the fitting error.
So far, with a least-squares fit, I get a reduced chi2 of < 1, which means I might be overfitting (or misunderstanding something).
Questions:
1.) For the above example I receive a reduced chi2 of 0, which seems wrong to me?
2.) Also, I am wondering if a least-squares fit is adequate for this; maybe someone can shed some light on this? Would SVD be a better choice here?
import scipy.optimize
import numpy as np

# define a model (line)
def linear(params, y):
    a, b = params
    data = [a * y[i] + b for i in range(0, len(y))]
    return data

# define the residuals that need to be minimized
def fitting_cost(params, x, y, z):
    a_x, b_x, a_z, b_z = params
    x_pred = linear((a_x, b_x), y)
    z_pred = linear((a_z, b_z), y)
    res_x = [x_pred[i] - x[i] for i in range(0, 6)]
    res_z = [z_pred[i] - z[i] for i in range(0, 6)]
    return res_x + res_z

# do the fit and return parameters plus gof
def least_squares_fit(x, y, z):
    sp = [0, 0, 0, 0]
    result = scipy.optimize.leastsq(fitting_cost, sp,
                                    args=(x, y, z),
                                    full_output=True)
    s_sq = (result[2]['fvec'] ** 2).sum() / (
        len(result[2]['fvec']) - len(result[0]))
    return result[0], s_sq
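For reference, a small usage sketch on the example data above (an assumption about how the arrays are meant to be passed in):
x = [0, 0, 0, 0, 0, 0]
y = [1, 2, 3, 4, 5, 6]
z = [0, 0, 0, 0, 0, 0]
params, s_sq = least_squares_fit(x, y, z)
print(params)  # fitted (a_x, b_x, a_z, b_z)
print(s_sq)    # residual variance; 0 here because the example data lies exactly on the line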

How do I speed up my RBF kernel function in my k-means algorithm

I am trying to implement the RBF kernel function for my kernel k-means algorithm. Here is my formula, essentially the Gaussian RBF kernel K(x, y) = exp(-||x - y||^2 / (2*sigma^2)), with 2*sigma^2 taken as the mean squared distance over all pairs of points.
I implemented it with NumPy, but there's a two-layer for loop, and I'm wondering how to turn it into a matrix operation, because matrix operations would be a lot faster for my 784-dimensional data. Or maybe my implementation is not correct? Can someone help me?
import numpy as np

def get_gamma(X, Y):
    gamma = 0
    for x in X:
        for y in Y:
            tmp = x - y
            gamma += tmp**2
    gamma = gamma / (length**2)
    return gamma

def kernel(X, Y, gamma):
    up = np.sum(np.power(X-Y, 2))
    res = np.exp(-up/gamma)
    return res

def kernel_distance(X, Y):
    gamma = get_gamma(X, Y)
    a = kernel(X, X, gamma)
    b = kernel(Y, Y, gamma)
    c = kernel(X, Y, gamma)
    return np.sqrt(a+b-2*c)
That's odd; if I run your code it gives me a number for k, but shouldn't it be an array? Also, shouldn't X and Y be 2D, since those are basically lists of your points? Anyway, if I take my own X and Y
from scipy.spatial.distance import cdist
import numpy as np
n = 10
X = np.random.random((n,3))
Y = np.random.random((n,3))
I can solve your problem like this
norms_sq = cdist(X,Y,'sqeuclidean')
two_sigma_sq = 1/n**2*np.sum(norms_sq)
k = np.exp(-norms_sq/two_sigma_sq)
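A quick sanity check (a sketch): each entry of the vectorized k should match the scalar RBF formula applied to a single pair of points.
i, j = 0, 1
up = np.sum((X[i] - Y[j])**2)
assert np.isclose(k[i, j], np.exp(-up / two_sigma_sq))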

Mean Square Error in Python from scratch from equation

I am trying to find the MSE for a given Phi, with output y and calculated weights w. While trying to implement (y - w^T * Phi), I am getting a ValueError in the w^T * Phi part. I know this is a dimension error, but I've tried to change it and it's not working for me.
I've tried transpose (but it's not really transposing, it just stays as it is), and reshape.
X = [1, 2, 3]
d = 3
Phi = np.polynomial.polynomial.polyvander(X, d)
y = [2, 3, 4]

def train_model(Phi, y):
    pht = np.matrix.transpose(Phi)
    u = np.matmul(pht, Phi)
    q = np.linalg.inv(u)
    s = np.matmul(q, pht)
    w = np.matmul(s, y)
    return w

w = train_model(Phi, y)

def evaluate_model(Phi, y, w):
    sum = 0
    wt = np.matrix.transpose(w)
    for i in range(0, len(y)):
        g = np.matmul(wt, Phi[:, i])
        k = y[i] - g
        l = k ** 2
        sum += l
    avg = sum / len(y)
    return avg
Edit:
The error I get is
ValueError: shapes (4,) and (53,) not aligned: 4 (dim 0) != 53 (dim 0)
It looks like your indexing is wrong; try
g = np.matmul(wt, Phi[i, :])
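For reference, a vectorized version of the same mean-squared-error computation (a sketch; assumes numpy is imported as np and that each row of Phi is one sample, as polyvander produces):
def evaluate_model_vectorized(Phi, y, w):
    # residual for every sample at once: y[i] - w . Phi[i, :]
    residuals = np.asarray(y) - Phi.dot(w)
    return np.mean(residuals ** 2)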

Scipy `fmin_cg` args are not match with my functions args

I am trying to build a linear regression model and find the optimal values using the fmin_cg optimizer.
I have two functions for this job: linear_reg_cost, which is the cost function, and linear_reg_grad, which is the gradient of the cost function. Both functions take the same arguments.
def hypothesis(x, theta):
    return np.dot(x, theta)
Cost function:
def linear_reg_cost(x_flatten, y, theta_flatten, lambda_, num_of_features, num_of_samples):
    x = x_flatten.reshape(num_of_samples, num_of_features)
    theta = theta_flatten.reshape(n, 1)
    loss = hypothesis(x, theta) - y
    regularizer = lambda_*np.sum(theta[1:, :]**2)/(2*m)
    j = np.sum(loss ** 2)/(2*m)
    return j
Gradient function:
def linear_reg_grad(x_flatten, y, theta_flatten, lambda_, num_of_features, num_of_samples):
    x = x_flatten.reshape(num_of_samples, num_of_features)
    m, n = x.shape
    theta = theta_flatten.reshape(n, 1)
    new_theta = np.zeros(shape=(theta.shape))
    loss = hypothesis(x, theta) - y
    gradient = np.dot(x.T, loss)
    new_theta[0:, :] = gradient/m
    new_theta[1:, :] = gradient[1:, :]/m + lambda_*(theta[1:, ]/m)
    return new_theta
and fmin_cg:
theta = np.ones(n)
from scipy.optimize import fmin_cg
new_theta = fmin_cg(f=linear_reg_cost, x0=theta, fprime=linear_reg_grad,
                    args=(x.flatten(), y, lambda_, m, n))
Note: I flatten x as input and reshape it back into a matrix inside the cost and gradient functions.
the output error:
<ipython-input-98-b29c1b8f6e58> in linear_reg_grad(x_flatten, y, theta_flatten, lambda_, num_of_features, num_of_samples)
      1 def linear_reg_grad(x_flatten, y, theta_flatten, lambda_, num_of_features, num_of_samples):
----> 2     x = x_flatten.reshape(num_of_samples, num_of_features)
      3     m,n = x.shape
      4     theta = theta_flatten.reshape(n,1)
      5     new_theta = np.zeros(shape=(theta.shape))

ValueError: cannot reshape array of size 2 into shape (2,12)
Note: x.shape = (12, 2), y.shape = (12, 1), theta.shape = (2,). So num_of_features = 2 and num_of_samples = 12. But the error shows that theta is being passed where my input x should be. Why is this happening even when I explicitly assigned args in fmin_cg? And how should I solve this problem?
Thanks for any advice.
All of your implementations are correct, but you have a small mistake: be sure to pass the arguments in the right order to both of your functions.
Your problem is the order of num_of_features and num_of_samples. You can swap their positions with each other in linear_reg_grad or linear_reg_cost. Of course, you should also change this order in the args argument of scipy.optimize.fmin_cg.
The second important thing is that the first argument of the function you pass to fmin_cg is the variable you want to update at each step and find the optimum of. So in your solution, that first argument must be theta, not your input x.
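Putting both fixes together, a sketch of how the reordered signatures and call could look (the toy data, the exact shapes, and the inclusion of the regularization term in the cost are my assumptions, not the original code):
import numpy as np
from scipy.optimize import fmin_cg

def hypothesis(x, theta):
    return np.dot(x, theta)

# fmin_cg passes the optimization variable as the FIRST positional argument,
# and everything in `args` follows it, so theta_flatten comes first here.
def linear_reg_cost(theta_flatten, x_flatten, y, lambda_, num_of_samples, num_of_features):
    x = x_flatten.reshape(num_of_samples, num_of_features)
    theta = theta_flatten.reshape(num_of_features, 1)
    loss = hypothesis(x, theta) - y
    regularizer = lambda_ * np.sum(theta[1:, :] ** 2) / (2 * num_of_samples)
    j = np.sum(loss ** 2) / (2 * num_of_samples)
    return j + regularizer  # the question computed the regularizer but never added it

def linear_reg_grad(theta_flatten, x_flatten, y, lambda_, num_of_samples, num_of_features):
    x = x_flatten.reshape(num_of_samples, num_of_features)
    theta = theta_flatten.reshape(num_of_features, 1)
    loss = hypothesis(x, theta) - y
    gradient = np.dot(x.T, loss) / num_of_samples
    gradient[1:, :] += lambda_ * theta[1:, :] / num_of_samples
    return gradient.flatten()

# toy data with the shapes from the question: x (12, 2), y (12, 1), theta (2,)
x = np.hstack([np.ones((12, 1)), np.arange(12.0).reshape(12, 1)])
y = x.dot(np.array([[1.0], [2.0]]))
lambda_ = 1.0
m, n = x.shape  # m = num_of_samples = 12, n = num_of_features = 2
theta = np.ones(n)

new_theta = fmin_cg(f=linear_reg_cost, x0=theta, fprime=linear_reg_grad,
                    args=(x.flatten(), y, lambda_, m, n))  # samples first, then features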

Doing many iterations of curve_fit in one go for piecewise function

I'm trying to perform what would be many iterations of SciPy's curve_fit at once in order to avoid loops and thereby increase speed.
This is very similar to this problem, which was solved. However, the fact that the functions are piecewise (discontinuous) means that solution isn't applicable here.
Consider this example:
import numpy as np
from numpy import random as rng
from scipy.optimize import curve_fit

rng.seed(0)
N = 20

X = np.logspace(-1, 1, N)
Y = np.zeros((4, N))
for i in range(0, 4):
    b = i + 1
    a = b
    print(a, b)
    Y[i] = (X/b)**(-a)  # + 0.01 * rng.randn(6)
    Y[i, X > b] = 1
This yields arrays which, as you can see, are discontinuous at X == b. I can retrieve the original values of a and b by using curve_fit iteratively:
def plaw(r, a, b):
    """ Theoretical power law for the shape of the normalized conditional density """
    import numpy as np
    return np.piecewise(r, [r < b, r >= b], [lambda x: (x/b)**-a, lambda x: 1])

coeffs = []
for ix in range(Y.shape[0]):
    print(ix)
    c0, pcov = curve_fit(plaw, X, Y[ix])
    coeffs.append(c0)
But this process can be very slow depending on the size of X, Y and the loop, so I'm trying to speed things up by getting coeffs without the need for a loop. So far I haven't had any luck.
Things that might be important:
X and Y only contain positive values
a and b are always positive
Although the data to fit in this example is smooth (for the sake of simplicity), the real data has noise
EDIT
This is as far as I've gotten:
y = np.ma.masked_where(Y < 1.01, Y)
lX = np.log(X)
lY = np.log(y)
A = np.vstack([lX, np.ones(len(lX))]).T
m, c = np.linalg.lstsq(A, lY.T)[0]
print('a =', -m)
print('b =', np.exp(-c/m))
But even without any noise the output is:
a= [0.18978965578339158 1.1353633705997466 2.220234483915197 3.3324502660995714]
b= [339.4090881838179 7.95073481873057 6.296592007396107 6.402567167503574]
That is far worse than I was hoping to get.
Here are three approaches to speeding this up. You gave no desired speedup or accuracy, or even vector sizes, so buyer beware.
TL;DR
Timings:

len          1        2        3        4
1000       0.045    0.033    0.025    0.022
10000      0.290    0.097    0.029    0.023
100000     3.429    0.767    0.083    0.030
1000000                      0.546    0.046

1) Original Method
2) Pre-estimate with Subset
3) M Newville's linear log-log estimate (https://stackoverflow.com/a/44975066/7311767)
4) Subset Estimate (Use Less Data)
Pre-estimate with Subset (Method 2):
A decent speedup can be achieved by simply running the curve_fit twice, where the first time uses a short subset of the data to get a quick estimate. That estimate is then used to seed a curve_fit with the entire dataset.
x, y = current_data
stride = int(max(1, len(x) / 200))
c0 = curve_fit(power_law, x[0:len(x):stride], y[0:len(y):stride])[0]
return curve_fit(power_law, x, y, p0=c0)[0]
M Newville linear log-log estimate (Method 3):
Using the log estimate proposed by M Newville is also considerably faster. As the OP was concerned about the initial estimate method proposed by Newville, this method uses curve_fit on a subset to provide the estimate of the break point in the curve.
x, y = current_data
stride = int(max(1, len(x) / 200))
c0 = curve_fit(power_law, x[0:len(x):stride], y[0:len(y):stride])[0]
index_max = np.where(x > c0[1])[0][0]
log_x = np.log(x[:index_max])
log_y = np.log(y[:index_max])
result = linregress(log_x, log_y)
m, c = -result[0], np.exp(-result[1] / result[0])
return (m, c), result
Use Less Data (Method 4):
Finally, the seed mechanism used for the previous two methods provides pretty good estimates on its own for the sample data. Of course, it is sample data, so your mileage may vary.
stride = int(max(1, len(x) / 200))
c0 = curve_fit(power_law, x[0:len(x):stride], y[0:len(y):stride])[0]
Test Code:
import numpy as np
from numpy import random as rng
from scipy.optimize import curve_fit
from scipy.stats import linregress

fit_data = {}
current_data = None

def data_for_fit(a, b, n):
    key = a, b, n
    if key not in fit_data:
        rng.seed(0)
        x = np.logspace(-1, 1, n)
        y = np.clip((x / b) ** (-a) + 0.01 * rng.randn(n), 0.001, None)
        y[x > b] = 1
        fit_data[key] = x, y
    return fit_data[key]

def power_law(r, a, b):
    """ Power law for the shape of the normalized conditional density """
    import numpy as np
    return np.piecewise(
        r, [r < b, r >= b], [lambda x: (x/b)**-a, lambda x: 1])

def method1():
    x, y = current_data
    return curve_fit(power_law, x, y)[0]

def method2():
    x, y = current_data
    return curve_fit(power_law, x, y, p0=method4()[0])

def method3():
    x, y = current_data
    c0, pcov = method4()
    index_max = np.where(x > c0[1])[0][0]
    log_x = np.log(x[:index_max])
    log_y = np.log(y[:index_max])
    result = linregress(log_x, log_y)
    m, c = -result[0], np.exp(-result[1] / result[0])
    return (m, c), result

def method4():
    x, y = current_data
    stride = int(max(1, len(x) / 200))
    return curve_fit(power_law, x[0:len(x):stride], y[0:len(y):stride])

from timeit import timeit

def runit(stmt):
    print("%s: %.3f %s" % (
        stmt, timeit(stmt + '()', number=10,
                     setup='from __main__ import ' + stmt),
        eval(stmt + '()')[0]
    ))

def runit_size(size):
    print('Length: %d' % size)
    if size <= 100000:
        runit('method1')
        runit('method2')
    runit('method3')
    runit('method4')

for i in (1000, 10000, 100000, 1000000):
    current_data = data_for_fit(3, 3, i)
    runit_size(i)
Two suggestions:
Use numpy.where (and possibly argmin) to find the X value at which the Y data becomes 1, or perhaps just slightly larger than 1, and truncate the data to that point -- effectively ignoring the data where Y=1.
That might be something like:
index_max = numpy.where(y < 1.2)[0][0]
x = x[:index_max]
y = y[:index_max]
Use the hint shown in your log-log plot that the power law is linear in log-log space. You don't need curve_fit, but can use scipy.stats.linregress on log(Y) vs log(X). For your real work, that will at the very least give good starting values for a subsequent fit.
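To spell out the slope/intercept mapping: taking logs of y = (x/b)**(-a) gives log(y) = -a*log(x) + a*log(b), so the regression slope is -a and the intercept is a*log(b). Hence a = -slope and b = exp(intercept/a), which is what afit and bfit compute in the example below.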
Following up on this and trying to follow your question, you might try something like:
import numpy as np
from scipy.stats import linregress
np.random.seed(0)
npts = 51
x = np.logspace(-2, 2, npts)
YTHRESH = 1.02
for i in range(5):
b = i + 1.0 + np.random.normal(scale=0.1)
a = b + np.random.random()
y = (x/b)**(-a) + np.random.normal(scale=0.0030, size=npts)
y[x>b] = 1.0
# to model exponential decay, first remove the values
# where y ~= 1 where the data is known to not decay...
imax = np.where(y < YTHRESH)[0][0]
# take log of this truncated x and y
_x = np.log(x[:imax])
_y = np.log(y[:imax])
# use linear regression on the log-log data:
out = linregress(_x, _y)
# map slope/intercept to scale, exponent
afit = -out.slope
bfit = np.exp(out.intercept/afit)
print(""" === Fit Example {i:3d}
a expected {a:4f}, got {afit:4f}
b expected {b:4f}, got {bfit:4f}
""".format(i=i+1, a=a, b=b, afit=afit, bfit=bfit))
Hopefully that's enough to get you going.
