I have a function in 2 variables x1,x2
f = 3*x1^2 + 4*x2^2 + 5*x1 + 6*x2 + 10
Consider x to be a row vector, x = [x5,x6], where x5 and x6 are its components. If the notation is confusing, think of it as x = [x1,x2], where x1 and x2 can be any arbitrary components. The same argument holds for y.
Then I want to find the scalar a in (x + a*y) that minimizes f, where a is a real constant and x and y are the vectors described above.
If this does not make sense, consider x and y as one-dimensional arrays with two locations each, with components x(1), x(2), y(1), y(2). I then want to multiply the array y by a symbolic variable a.
For example, if x = [4,5] and y = [-2,3], then (x + a*y) = (4,5) + a*(-2,3) = (4-2a, 5+3a), where a is a symbolic variable that is unknown here.
Substituting into f (to be clear, the first argument in the definition of f becomes x1 = 4-2a and the second becomes x2 = 5+3a) gives
f1 = 3*(4-2a)^2 + 4*(5+3a)^2 + 5*(4-2a) + 6*(5+3a) + 10    (eq. 1)
The function f1 is then a function of the single unknown a and can be minimized using a 1D minimization algorithm, such as golden section search, given an interval [a_lower, a_upper].
My question is:
Given different x,y,
How do I evaluate (x + a*y) and pass (or substitute?) it into the function f to obtain eq. 1?
How do I create a 'dynamic' function f1, as in eq. 1, to pass to a 1D minimization algorithm? By dynamic I mean that f1 will change every time x and y change.
I am interested in a low-level implementation of this problem (sticking to the basic features of a language as much as possible, without language-specific or object-oriented features) in Python, MATLAB, C, or any other language, but again at a 'low level.' Can you suggest something?
UPDATE: I don't want to use symbolics from python, MATLAB or from any other language.
I'm rephrasing your question in my own words, because the question in its current form is confusing:
You have a function f(x1,x2) = 3*x1^2 + 4*x2^2 + 5*x1 + 6*x2 + 10. x1 and x2 are the components of a 2D vector obtained from summing x with the product of a and y, where x and y are given vectors, and a is a scalar. You want to obtain the function that results from substituting this relation into f.
Note that the notation is a bit confusing, so I will use instead x = z+a*y, where z (replacing the x you used) and y are the given vectors.
Let's define f as an anonymous function in Matlab (you could easily use a function file as well):
f = @(x) 3*x(1)^2 + 4*x(2)^2 + 5*x(1) + 6*x(2) + 10;
Note that I'm writing this differently than you did, i.e. x(1) and x(2) instead of x1 and x2. This means that I am using components of a vector instead of two unrelated variables.
Then, let's write your equation involving a as a function as well:
g = @(a) z + a*y;
The function g(a) returns a vector for each value a, obeying g(a) = z+a*y.
Now you can do the substitution:
h = @(a) f(g(a));
h is the desired function: it takes a as input and returns the value of f evaluated at the vector obtained from z+a*y.
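Since the question also asks about Python and explicitly rules out symbolics, here is a minimal Python sketch of the same composition using plain closures plus a basic golden-section search (the names make_f1 and golden_section and the bracketing interval [-10, 10] are my own illustrative choices, not part of the question):

import math

def f(x):                      # f(x1, x2) written in terms of a length-2 vector
    return 3*x[0]**2 + 4*x[1]**2 + 5*x[0] + 6*x[1] + 10

def make_f1(z, y):             # builds the "dynamic" 1D function f1(a) = f(z + a*y)
    return lambda a: f([z[0] + a*y[0], z[1] + a*y[1]])

def golden_section(g, lo, hi, tol=1e-8):
    # basic golden-section search for a minimum of a unimodal g on [lo, hi]
    invphi = (math.sqrt(5) - 1) / 2
    c, d = hi - invphi*(hi - lo), lo + invphi*(hi - lo)
    while hi - lo > tol:
        if g(c) < g(d):
            hi, d = d, c
            c = hi - invphi*(hi - lo)
        else:
            lo, c = c, d
            d = lo + invphi*(hi - lo)
    return (lo + hi) / 2

f1 = make_f1([4, 5], [-2, 3])          # the example from the question
a_min = golden_section(f1, -10, 10)    # the interval [-10, 10] is an arbitrary choice
print(a_min, f1(a_min))

Every time x (here z) and y change, make_f1 builds a fresh f1 without any symbolic machinery.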
You can use eval to evaluate a string as an expression in the current scope:
f = 'x + a*y'
x = 4
y = 3
for a in range(3):
    print(eval(f))
output:
4
7
10
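Applied to the original question, the same eval idea could look roughly like this (a sketch only; building the expression string by hand is more fragile than a plain closure):

x, y = [4, 5], [-2, 3]
expr = "3*(x[0]+a*y[0])**2 + 4*(x[1]+a*y[1])**2 + 5*(x[0]+a*y[0]) + 6*(x[1]+a*y[1]) + 10"
f1 = lambda a: eval(expr, {"x": x, "y": y, "a": a})
print(f1(0.0))   # f evaluated at a = 0, i.e. f(4, 5)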
Related
I've created a polynomial object using numpy.poly1d and some arbitrary coefficients (a,b,c) so that I can find the roots of the equation ax^2 + bx + c = y0 at a given y0. In principle, that can be done fairly easily by calling the method root of the poly1d object.
The only issue is that the actual equation I am trying to solve is the one written above, but taken modulo 2π: I need to find x such that the polynomial, reduced modulo 2π, equals y0 (i.e. find x with (ax^2 + bx + (c-y0)) ≡ 0 modulo 2π).
However, it seems that I can't apply this modulo operator to a poly1d object.
Is there a way of doing that using NumPy?
Here are some lines of code:
import numpy as np

def x_to_y(x, a, b, c):
    return (a*x**2 + b*x + c) % (2*np.pi)

def y_to_x(y0, a, b, c):
    eq = np.poly1d([a, b, c]) % (2*np.pi)  # throws an error, can't apply % operation on poly1d object
    return (eq - y0).roots
It seems that you can use np.mod instead of %.
The only problem being that np.mod returns an array in that case, not a poly1d object.
Actually managed to bodge it by simply adding 2kπ to the 0th-order term of the polynomial and sweeping over a few values of k until I get a root that lies within the correct bounds.
Not ideal but it works. Still open to clever ways of doing it though!
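For reference, the "bodge" described above could look something like this sketch (the bounds x_min, x_max and the range of k are arbitrary assumptions here, not from the question):

import numpy as np

def y_to_x(y0, a, b, c, x_min=0.0, x_max=10.0, k_max=20):
    # Solve (a*x**2 + b*x + c) % (2*pi) == y0 by trying a*x**2 + b*x + c == y0 + 2*k*pi
    for k in range(-k_max, k_max + 1):
        roots = np.poly1d([a, b, c - (y0 + 2*k*np.pi)]).roots
        for r in roots:
            if np.isreal(r) and x_min <= r.real <= x_max:
                return r.real
    return None   # no root found within the bounds

print(y_to_x(1.0, a=1.0, b=0.0, c=0.0))   # a solution of x**2 % (2*pi) == 1 in [0, 10], namely 1.0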
Initially, I have two arrays that correspond to the values of x and y of a function. I don't know that function; I just know that the values of y depend on x. Then I calculate a function that depends on both arrays.
I need to calculate in Python the integral of that last function, to obtain the total area under the curve between the first value of x and the last. Any idea of how to do that?
import math

x = [...]  # given array of x values
y = [...]  # given array of y values, one for each x
a = 2.839*10**25
b = 4*math.pi
alpha = 0.5
z = 0.003642

def L(x, y, a, b, alpha, z):
    return x*((y*b*a)/(1+z)**(1+alpha))
Your function is a function of x (in that, given a value of x, it spits out a value), so first you should repackage it as such (introduce a function yy which, given x, produces the requisite y), then write LL(x) = L(x, yy(x)), then use scipy.integrate to integrate it.
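A minimal sketch of that suggestion (the x and y arrays here are made-up placeholders; the constants are those from the question, and plain np.trapz over the raw samples would work just as well):

import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import quad

x = np.array([1.0, 2.0, 3.0, 4.0])   # placeholder data
y = np.array([0.5, 0.8, 1.1, 0.9])   # placeholder data, y depends on x

a, b, alpha, z = 2.839e25, 4*np.pi, 0.5, 0.003642

yy = interp1d(x, y)                  # y as a function of x
LL = lambda t: float(t * ((yy(t)*b*a) / (1+z)**(1+alpha)))   # L(x, y(x))

area, err = quad(LL, x[0], x[-1])    # integrate between the first and last x
print(area)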
I am having trouble understanding the output of my function to implement multiple ridge regression. I am doing this from scratch in Python for the closed form of the method, w = (X.T X + lambda*I)^(-1) X.T y.
I have a training set X that is 100 rows x 10 columns and a vector y that is 100x1.
My attempt is as follows:
import numpy as np

def ridgeRegression(xMatrix, yVector, lambdaRange):
    wList = []
    for i in range(1, lambdaRange+1):
        lambVal = i
        # compute the inner values (X.T X + lambda I)
        xTranspose = np.transpose(xMatrix)
        xTx = xTranspose @ xMatrix
        lamb_I = lambVal * np.eye(xTx.shape[0])
        # invert inner, e.g. (inner)**(-1)
        inner_matInv = np.linalg.inv(xTx + lamb_I)
        # compute outer (X.T y)
        outer_xTy = np.dot(xTranspose, yVector)
        # multiply together
        w = inner_matInv @ outer_xTy
        wList.append(w)
    print(wList)
For testing, I am running it with the first 5 lambda values.
wList becomes 5 numpy.arrays each of length 10 (I'm assuming for the 10 coefficients).
Here is the first of those 5 arrays:
array([ 0.29686755, 1.48420319, 0.36388528, 0.70324668, -0.51604451,
2.39045735, 1.45295857, 2.21437745, 0.98222546, 0.86124358])
My question, and clarification:
Shouldn't there be 11 coefficients, (1 for the y-intercept + 10 slopes)?
How do I get the Minimum Square Error from this computation?
What comes next if I wanted to plot this line?
I think I am just really confused as to what I'm looking at, since I'm still working on my linear-algebra.
Thanks!
First, I would modify your ridge regression to look like the following:
import numpy as np

def ridgeRegression(X, y, lambdaRange):
    wList = []
    # Get normal form of `X`
    A = X.T @ X
    # Get Identity matrix
    I = np.eye(A.shape[0])
    # Get right hand side
    c = X.T @ y
    for lambVal in range(1, lambdaRange+1):
        # Set up equations Bw = c
        lamb_I = lambVal * I
        B = A + lamb_I
        # Solve for w
        w = np.linalg.solve(B, c)
        wList.append(w)
    return wList
Notice that I replaced your inv call to compute the matrix inverse with an implicit solve. This is much more numerically stable, which is an important consideration for these types of problems especially.
I've also taken the A = X.T @ X computation, the identity matrix I generation, and the right-hand-side vector c = X.T @ y computation out of the loop; these don't change within the loop and are relatively expensive to compute.
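A quick usage sketch with made-up random data (purely illustrative; the shapes match the 100 x 10 setup from the question):

import numpy as np

np.random.seed(0)
X = np.random.randn(100, 10)
y = np.random.randn(100)

wList = ridgeRegression(X, y, 5)
print(len(wList), wList[0].shape)   # 5 weight vectors, each of length 10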
As was pointed out by @qwr, the number of columns of X will determine the number of coefficients you have. You have not described your model, so it's not clear how the underlying domain, x, is structured into X.
Traditionally, one might use polynomial regression, in which case X is the Vandermonde Matrix. In that case, the first coefficient would be associated with the y-intercept. However, based on the context of your question, you seem to be interested in multivariate linear regression. In any case, the model needs to be clearly defined. Once it is, then the returned weights may be used to further analyze your data.
Typically to make notation more compact, the matrix X contains a column of ones for an intercept, so if you have p predictors, the matrix is dimensions n by p+1. See Wikipedia article on linear regression for an example.
To compute in-sample MSE, use the definition for MSE: the average of squared residuals. To compute generalization error, you need cross-validation.
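For instance, a minimal in-sample MSE sketch for one of the returned weight vectors (names are illustrative):

import numpy as np

def mse(X, y, w):
    residuals = y - X @ w          # in-sample residuals
    return np.mean(residuals**2)   # average of squared residuals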
Also, you shouldn't take lambVal to be an integer. It can be small (close to 0) if the aim is just to avoid numerical error when xTx is ill-conditioned.
I would advise you to use a logarithmic range instead of a linear one, starting from 0.001 and going up to 100 or more if you want to. For instance, you can change your code to this:
powerMin = -3
powerMax = 3
for i in range(powerMin, powerMax):
    lambVal = 10**i
    print(lambVal)
And then you can try a smaller range or a linear range once you figure out what is the correct order of lambVal with your data from cross-validation.
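As an aside, the same grid can be generated in one call, if that is more convenient:

import numpy as np
lambdas = np.logspace(-3, 2, num=6)   # 0.001, 0.01, 0.1, 1, 10, 100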
Suppose I want to find the "intersection point" of 2 arbitrary high-dimensional lines. The two lines won't actually intersect, but I still want to find the most intersect point (i.e. a point that is as close to all lines as possible).
Suppose those lines have direction vectors A, B and initial points C, D,
I can find the most intersect point by simply set up a linear least square problem: converting the line-intersection equation
Ax + C = By + D
to least-square form
[A, -B] @ [x, y] = D - C
where @ stands for matrix-times-vector multiplication, and then I can use e.g. np.linalg.lstsq to solve it.
But how can I find the "most intersect point" of 3 or more arbitrary lines? If I follow the same rule, I now have
Ax + D = By + E = Cz + F
The only way I can think of is decomposing this into three equations:
Ax + D = By + E
Ax + D = Cz + F
By + E = Cz + F
and converting them to least-square form
[A, -B,  0]                 [E - D]
[A,  0, -C] @ [x, y, z]  =  [F - D]
[0,  B, -C]                 [F - E]
The problem is that the size of the least-squares problem grows quadratically with the number of lines. I'm wondering: is there a more efficient way to solve this n-way-equal least-squares linear problem?
I was also wondering whether the equation By + E = Cz + F is really necessary, given the other two. But since this problem has no exact solution (the lines don't actually intersect), I believe dropping it would put extra "weight" on some variables.
Thank you for your help!
EDIT
I just tested pairing the first term with all other terms in the n-way-equality (and no other pairs) using the following code
import numpy as np

def lineIntersect(k, b):
    "k, b: N-by-D matrices describing N D-dimensional lines: k[i] * x + b[i]"
    # Convert the problem to least-squares form `Ax = B`
    # A is temporarily defined 3-dimensional for convenience
    A = np.zeros((len(k)-1, k.shape[1], len(k)), k.dtype)
    A[:,:,0] = k[0]
    A[range(len(k)-1),:,range(1,len(k))] = -k[1:]
    # Convert to 2-dimensional matrix by flattening first two dimensions
    A = A.reshape(-1, len(k))
    # B should be 1-dimensional vector
    B = (b[1:] - b[0]).ravel()
    x = np.linalg.lstsq(A, B, None)[0]
    return (x[:,None] * k + b).mean(0)
The result below indicates that doing so is not correct, because the first term in the n-way equality gets "weighted differently".
The first output is the difference between the regular result and the result for a permuted input order (line order should not matter) in which the first term did not change.
The second output is the same comparison, but with an ordering in which the first term did change.
k = np.random.rand(10, 100)
b = np.random.rand(10, 100)
print(np.linalg.norm(lineIntersect(k, b) - lineIntersect(np.r_[k[:1],k[:0:-1]], np.r_[b[:1],b[:0:-1]])))
print(np.linalg.norm(lineIntersect(k, b) - lineIntersect(k[::-1], b[::-1])))
results in
7.889616961715915e-16
0.10702479853076755
Another criterion for the 'almost intersection point' would be a point x such that the sum of the squares of the distances of x to the lines is as small as possible. Like your criterion, if the lines actually do intersect then the almost intersection point will be the actual intersection point. However I think the sum of distances squared criterion makes it straightforward to compute the point in question:
Suppose we represent a line by a point and a unit vector along the line. So if a line is represented by p,t then the points on the line are of the form
p + l*t for scalar l
The distance-squared of a point x from a line p,t is
(x-p)'*(x-p) - square( t'*(x-p))
If we have N lines p[i],t[i] then the sum of the distances squared from a point x is
Sum { (x-p[i])'*(x-p[i]) - square( t[i]'*(x-p[i]) ) }
Expanding this out I get the above to be
x'*S*x - 2*x'*V + K
where
S = N*I - Sum{ t[i]*t[i]'}
V = Sum{ p[i] - (t[i]'*p[i])*t[i] }
and K does not depend on x
Unless all the lines are parallel, S will be (strictly) positive definite and hence invertible, and in that case our sum of distances squared is
(x-inv(S)*V)'*S*(x-inv(S)*V) + K - V'*inv(S)*V
Thus the minimising x is
inv(S)*V
So the drill is: normalise your direction vectors (the points p[i] stay as they are), form S and V as above, and solve
S*x = V for x
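A rough NumPy translation of this recipe (the function name closest_point and the row-per-line array layout are my own choices):

import numpy as np

def closest_point(points, dirs):
    """points, dirs: N-by-D arrays; line i is points[i] + l*dirs[i]."""
    t = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)        # unit directions
    N, D = t.shape
    S = N * np.eye(D) - t.T @ t                                   # N*I - Sum{ t_i*t_i' }
    V = (points - np.sum(t * points, axis=1, keepdims=True) * t).sum(axis=0)  # Sum{ p_i - (t_i'p_i)*t_i }
    return np.linalg.solve(S, V)                                  # fails if all lines are parallel

points = np.array([[0., 0.], [1., 0.]])
dirs   = np.array([[1., 0.], [0., 1.]])
print(closest_point(points, dirs))   # lines y=0 and x=1 intersect at (1, 0)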
This question might be better suited for the math stackexchange. Also, does anyone have a good way of formatting math here? Sorry that it's hard to read, I did my best with unicode.
EDIT: I misinterpreted what @ZisIsNotZis meant by the lines Ax+C, so disregard the next paragraph.
I'm not convinced that your method is stated correctly. Would you mind posting your code and a small example of the output (maybe in 2D with 3 or 4 lines so we can plot it)? When you're trying to find the intersection of two lines, shouldn't you do Ax+C = Bx+D? If you do Ax+C = By+D, you can pick some x on the first line and some y on the second line and satisfy both equations exactly, because here x and y should be the same size as A and B (the dimension of the space) rather than scalars.
There are many ways to understand the problem of finding a point that is as close to all lines as possible. I think the most natural one is that the sum of squares of the Euclidean distance to each line is minimized.
Suppose we have a line in R^n through a point d with unit direction c, and another point x. Then the shortest vector from x to the line is (I-cc^T)(x-d), so the square of the distance from x to the line is ║(I-cc^T)(x-d)║^2. We can find the closest point on the line by minimizing this distance. Note that this is a standard least squares problem of the form min_x ║b-Ax║_2.
Now, suppose we have lines through points d_i with unit directions c_i, for i=1,...,m. The squared distance r_i^2 from a point x to the i-th line is r_i^2 = ║(I-c_i c_i^T)(x-d_i)║_2^2. We now want to solve the problem min_x \sum_{i=1}^{m} r_i^2.
In matrix form we have:
      ║ ⎡ (I-c_1 c_1^T)(x-d_1) ⎤ ║
      ║ ⎢ (I-c_2 c_2^T)(x-d_2) ⎥ ║
min_x ║ ⎢          ...          ⎥ ║
      ║ ⎣ (I-c_m c_m^T)(x-d_m) ⎦ ║_2
This is again in the form min_x ║b - Ax║_2 so there are good solvers available.
Each block has size n (the dimension of the space) and there are m blocks (the number of lines), so the system is mn by n. In particular, it is linear in the number of lines and quadratic in the dimension of the space.
It also has the advantage that if you add a line you simply add another block to the least squares system. This also offers the possibility of updating solutions iteratively as you add lines.
I'm not sure if there are special solvers for this type of least squares system. Note that each block is the identity minus a rank one matrix, so that might give some additional structure which can be used to speed things up. That said, I think using existing solvers will almost always work better than writing your own, unless you have quite a bit of background in numerical analysis or have a very specialized class of systems to solve.
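For illustration, a sketch of building that stacked system directly with NumPy (names are mine; c holds the unit directions and d the points, one line per row):

import numpy as np

def almost_intersection(c, d):
    """c, d: m-by-n arrays; line i passes through d[i] with unit direction c[i]."""
    m, n = c.shape
    I = np.eye(n)
    # Stack the blocks (I - c_i c_i^T) and the right-hand sides (I - c_i c_i^T) d_i
    A = np.concatenate([I - np.outer(ci, ci) for ci in c], axis=0)              # (m*n) x n
    b = np.concatenate([(I - np.outer(ci, ci)) @ di for ci, di in zip(c, d)])   # (m*n,)
    return np.linalg.lstsq(A, b, rcond=None)[0]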
Not a solution, some thoughts:
If a line in nD space has the parametric equation (with unit vector Dir)
L(t) = Base + Dir * t
then the squared distance from a point P to this line is
W = P - Base
Dist^2 = (W - (W.dot.Dir) * Dir)^2
If it is possible to write Min(Sum(Dist[i]^2)) in a form suitable for the LSQ method (taking partial derivatives with respect to every coordinate of the point), then the resulting system might be solved for the coordinate vector (x1..xn).
(The situation resembles the usual LSQ setup of many points and a single line, with the roles reversed.)
You say that you have two "high-dimensional" lines. This implies that the matrix indicating the lines has many more columns than rows.
If this is the case and you can efficiently find a low-rank decomposition A = LRᵀ, then you can rewrite the solution of the least squares problem min ||Ax-y||₂ as x = R(RᵀR)⁻¹(LᵀL)⁻¹Lᵀy.
If m is the number of lines and n the dimension of the lines, then this reduces the least-squares time complexity from O(mn²+nʷ) to O(mr²+nr²), where r is the rank of the decomposition (r ≤ min(m,n)).
The problem then is to find such a decomposition.
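One way to obtain such a decomposition is a truncated SVD; a rough sketch (note that computing a full SVD is itself expensive, so in practice a cheaper randomized or structured factorization would be used, and the rank r is something you would have to choose):

import numpy as np

def lowrank_lstsq(A, y, r):
    # Rank-r decomposition A ~ (U*s) @ Vt via truncated SVD, then the
    # least-squares solution of that rank-r approximation
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    return Vt.T @ ((U.T @ y) / s)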
I've defined the following function as a method of approximating an integral using Boole's Rule:
import numpy as np

def integrate_boole(f,l,r,N):
    h = ((r-l)/N)
    xN = np.linspace(l,r,N+1)
    fN = f(xN)
    return ((2*h)/45)*(7*fN[0]+32*(np.sum(fN[1:-2:2]))+12*(np.sum(fN[2:-3:4]))+14*(np.sum(fN[4:-5]))+7*fN[-1])
I used the function to get the value of the integral for sin(x)dx between 0 and pi (where N=8) and assigned it to a variable sine_int.
The answer given was 1.3938101893248442
After working out the original equation (see here) by hand, I realised this answer was quite inaccurate.
The sums of fN are giving incorrect values, but I'm not sure why. For example, np.sum(fN[4:-5]) is going to 0.
Is there a better way of coding the sums involved, or is there an error in my parameters that's causing the calculations to be inaccurate?
Thanks in advance.
EDIT
I should have made it clearer that this is supposed to be a composite version of the rule, i.e. approximating over N points where N is divisible by 4. So the typical 5 points with 4 intervals isn't going to cut it here, unfortunately. I would copy the equation I'm using into here, but I don't have an image of it and LaTeX isn't an option. It should be clear from the code after the return.
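As an illustration of this composite reading, here is a minimal sketch of a composite Boole's rule, assuming N is divisible by 4 (this is not the formula from the missing image, just one standard way to write the composite rule):

import numpy as np

def integrate_boole_composite(f, l, r, N):
    # Composite Boole's rule over N sub-intervals, N divisible by 4:
    # weights 7, 32, 12, 32, 14, 32, 12, 32, 14, ..., 32, 12, 32, 7 times 2h/45
    if N % 4:
        raise ValueError("N must be divisible by 4")
    h = (r - l) / N
    fN = f(np.linspace(l, r, N + 1))
    total = 7*(fN[0] + fN[-1])        # end points
    total += 32*np.sum(fN[1:-1:2])    # odd indices
    total += 12*np.sum(fN[2:-1:4])    # indices equal to 2 mod 4
    total += 14*np.sum(fN[4:-1:4])    # interior indices equal to 0 mod 4
    return (2*h/45) * total

print(integrate_boole_composite(np.sin, 0, np.pi, 8))   # approximately 2.0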
From a quick inspection, it looks like the term multiplying f(x_4) should be 32, not 14:
def integrate_boole(f,l,r,N):
    h = ((r-l)/N)
    xN = np.linspace(l,r,N+1)
    fN = f(xN)
    return ((2*h)/45)*(7*fN[0]+32*(np.sum(fN[1:-2:2]))+
                       12*(np.sum(fN[2:-3:4]))+32*(np.sum(fN[4:-5]))+7*fN[-1])
First, one of your coefficients was wrong, as pointed out by @nixon. Second, I think you do not really understand how Boole's rule works: it approximates the integral of a function using only 5 points of the function. Hence, terms like np.sum(fN[1:-2:2]) make no sense. You only need five points, which you can obtain with xN = np.linspace(l,r,5). Your h is simply the distance between two of the contiguous points, h = xN[1] - xN[0]. And then, easy peasy:
import numpy as np

def integrate_boole(f,l,r):
    xN = np.linspace(l,r,5)
    h = xN[1] - xN[0]
    fN = f(xN)
    return ((2*h)/45)*(7*fN[0]+32*fN[1]+12*fN[2]+32*fN[3]+7*fN[4])

def f(x):
    return np.sin(x)

I = integrate_boole(f, 0, np.pi)
print(I)  # Outputs 1.99857...
I'm not sure what you're hoping your code does w.r.t. Boole's rule. Why are you summing over samples of the function (i.e. np.sum(fN[2:-3:4]))? I think your N parameter is also not well defined and I'm not sure what it's supposed to represent. Maybe you're using another rule I'm not familiar with: I'll let you decide.
Regardless, here's an implementation of Boole's rule as Wikipedia defines it. Variables map to the Wikipedia version you linked:
def integ_boole(func, left, right):
    h = (right - left) / 4
    x1 = left
    x2 = left + h
    x3 = left + 2*h
    x4 = left + 3*h
    x5 = right  # or left + 4*h
    result = (2*h / 45) * (7*func(x1) + 32*func(x2) + 12*func(x3) + 32*func(x4) + 7*func(x5))
    return result
then, to test:
import numpy as np
print(integ_boole(np.sin, 0, np.pi))
outputs 1.9985707318238357, which is extremely close to the correct answer of 2.
HTH.