I am looking to build a multivariate, multiple linear regression model with N dependent variables and M independent variables. I was looking around and cannot seem to find an implementation. I did some research and found some notes here: http://users.stat.umn.edu/~helwig/notes/mvlr-Notes.pdf on slide 51. This seems very simple to implement:
import numpy as np
M = 10
N = 3
p = 15
Y = np.random.rand(p,N)
X = np.random.rand(p,M)
A = np.dot(np.transpose(X),X)
B = np.dot(np.transpose(X),Y)
sol = np.linalg.solve(A,B)
where sol is the matrix of coefficients. I will eventually be scaling this up to extremely large datasets. My main concern is the accuracy of this method. To be quite honest, it seems all too simple. Can someone weigh in on whether this is sufficient for a multivariate, multiple regression, or is there some package or anything else I can use that is better?
Thank you
There's a lot more going on behind the scenes than you think. What you are looking at is the least squares solution to an overconstrained system of linear equations. Let me explain it like this. You have p equations and q unknowns.
Case 1: p < q. Infinitely many solutions. The p x q matrix A has a nontrivial null space, so infinitely many solutions exist. They can be found by finding a particular solution and a basis for the null space. np.linalg.solve can't be used to solve such a system, as it only accepts full-rank square matrices. You can use np.linalg.lstsq instead.
Case 2: p = q. Unique solution. This means that the p x q matrix is invertible (assuming it has full rank), and you can solve the system Ax = b using x = A^(-1) b. In fact, when you call np.linalg.solve, this is exactly what happens.
Case 3: p > q. No exact solution exists, but we can approximate by projecting the vector b orthogonally onto the column space of A. That is, we want a vector b_hat such that b_hat + w = b, where b_hat lies in the column space of A and w is perpendicular to that column space. Then there is some x_hat such that A * x_hat = b_hat, and A^(T) * w = 0 (because w lies in the left null space of A). Since w = b - b_hat, we get 0 = A^(T) * w = A^(T) * b - A^(T) * (A * x_hat). Hence we arrive at the solution x_hat = (A^(T) * A)^(-1) * A^(T) * b. This x_hat is the best approximate solution to the system with p > q. It can be proved mathematically, but I think this is a plausible explanation. Again, np.linalg.solve can't be used to solve this type of linear system, but the routine np.linalg.lstsq can.
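As a quick sanity check, here is a minimal sketch (using random data shaped like yours) comparing the normal-equations solution with np.linalg.lstsq, which solves the same problem through a more numerically robust SVD-based routine:
import numpy as np
M, N, p = 10, 3, 15
Y = np.random.rand(p, N)
X = np.random.rand(p, M)
# Normal equations, as in the question
sol_normal = np.linalg.solve(X.T @ X, X.T @ Y)
# SVD-based least squares; rcond=None silences a FutureWarning
sol_lstsq = np.linalg.lstsq(X, Y, rcond=None)[0]
print(np.allclose(sol_normal, sol_lstsq))  # True for well-conditioned X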
I am trying to find the top k leading eigenvalues of a NumPy matrix (written below using Python's @ matrix-product notation)
L@L + a*Y@Y.T, where L and Y are a symmetric n x n and an n x d matrix, respectively.
According to the text from this paper, I should be able to calculate these leading eigenvalues with L@(L@v) + a*Y@(Y.T@v), where I guess v is an arbitrary vector. The Lanczos paper they cite is here.
I'm not quite sure where to start. I know that scipy has scipy.sparse.linalg.eigsh here, and from the notes it looks like it uses the Lanczos algorithm - but I am at a loss as to whether it's possible to use sparse.linalg.eigsh for my specific use case. I googled around and didn't find a Python implementation for this very quickly -- does anybody know if I can use sparse.linalg.eigsh to calculate this somehow? I definitely don't want to write this algorithm myself.
I also wasn't sure whether to post this in math.stackexchange or here, since it's a question about the Python implementation of a very mathy thing.
You could check scipy.sparse.linalg.eigsh.
import numpy as np
from scipy.sparse.linalg import eigsh
from numpy.linalg import eigh
a = 1.4
n = 20
d = 7
# random symmetric n x n matrix
L = np.random.randn(n, n)
L = L + L.T
# random n x d matrix
Y = np.random.randn(n, d)
A = L @ L.T + a * Y @ Y.T  # your equation
A must be symmetric (Hermitian) to use eigsh, which holds here by construction; with a > 0 it is also guaranteed to be positive semi-definite.
You could check the four largest eigenvalues as follows
eigsh(A, 4)[0]
For reference you can compare against numpy.linalg.eigh, which computes all the eigenvalues. Sort them and take the last four elements of the sorted array; the results should be close.
np.sort(eigh(A)[0])[-4:]
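If forming A explicitly is too expensive, eigsh also accepts a scipy.sparse.linalg.LinearOperator, so you can supply the matrix-free product from the question directly. A minimal sketch, reusing L, Y, a and n from the snippet above:
from scipy.sparse.linalg import LinearOperator
# matrix-vector product for A = L @ L.T + a * Y @ Y.T without ever forming A
def matvec(v):
    return L @ (L.T @ v) + a * (Y @ (Y.T @ v))
op = LinearOperator((n, n), matvec=matvec)
print(eigsh(op, k=4)[0])  # four largest eigenvalues via Lanczos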
I'm trying to solve an overdetermined system with QR decomposition and linalg.solve but the error I get is
LinAlgError: Last 2 dimensions of the array must be square.
This happens when the R array is not square, right? The code looks like this
import numpy as np
import math as ma
A = np.random.rand(2,3)
b = np.random.rand(2,1)
Q, R = np.linalg.qr(A)
Qb = np.matmul(Q.T,b)
x_qr = np.linalg.solve(R,Qb)
Is there a way to write this in a more efficient way for arbitrary A dimensions? If not, how do I make this code snippet work?
The reason is indeed that the matrix R is not square, probably because the system is overdetermined. You can try np.linalg.lstsq instead, finding the solution which minimizes the squared error (which should yield the exact solution if it exists).
import numpy as np
A = np.random.rand(2, 3)
b = np.random.rand(2, 1)
x_qr = np.linalg.lstsq(A, b, rcond=None)[0]
Call QR with the flag mode='reduced' (which is the default in recent NumPy versions). With mode='complete', the Q and R matrices are returned as M x M and M x N, so if M is greater than N your matrix R will be non-square. In reduced (economic) mode the matrices are M x N and N x N, in which case the solve routine will work fine.
However, you also have equations/unknowns backwards for an overdetermined system. Your code snippet should be
import numpy as np
A = np.random.rand(3,2)
b = np.random.rand(3,1)
Q, R = np.linalg.qr(A, mode='reduced')
#print(Q.shape, R.shape)
Qb = np.matmul(Q.T,b)
x_qr = np.linalg.solve(R,Qb)
As noted by other contributors, you could also call lstsq directly, but sometimes it is more convenient to have Q and R available (e.g. if you are also planning on computing a projection matrix).
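For instance, a minimal sketch continuing from the reduced Q above: the orthogonal projector onto the column space of A is
P = Q @ Q.T    # projection matrix onto the column space of A
b_hat = P @ b  # the component of b the overdetermined system can actually match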
As shown in the documentation of numpy.linalg.solve:
Computes the “exact” solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b.
Your system of equations is underdetermined, not overdetermined. Notice that you have 3 variables in it and only 2 equations, thus fewer equations than unknowns.
Also notice that it mentions that in numpy.linalg.solve(a, b), a must be an MxM matrix. The reason behind this is that solving the system Ax = b involves computing the inverse of A, and only square matrices are invertible.
In these cases a common approach is to take the Moore-Penrose pseudoinverse, which will compute a best-fit (least squares) solution of the system. So instead of trying to solve for the exact solution, use numpy.linalg.lstsq:
x_qr = np.linalg.lstsq(R, Qb, rcond=None)[0]
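Equivalently, a minimal sketch of the pseudoinverse approach applied to the original system:
import numpy as np
A = np.random.rand(2, 3)
b = np.random.rand(2, 1)
# Moore-Penrose pseudoinverse: the minimum-norm least-squares solution of Ax = b
x = np.linalg.pinv(A) @ b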
Suppose I want to find the "intersection point" of 2 arbitrary high-dimensional lines. The two lines won't actually intersect, but I still want to find the most intersect point (i.e. a point that is as close to all lines as possible).
Suppose those lines have direction vectors A, B and initial points C, D,
I can find the most intersect point by simply set up a linear least square problem: converting the line-intersection equation
Ax + C = By + D
to least-square form
[A, -B] # [[x, y]] = D - C
where # stands for matrix-times-vector, and then I can use e.g. np.linalg.lstsq to solve it.
But how can I find the "most intersect point" of 3 or more arbitrary lines? If I follow the same rule, I now have
Ax + D = By + E = Cz + F
The only way I can think of is decomposing this into three equations:
Ax + D = By + E
Ax + D = Cz + F
By + E = Cz + F
and converting them to least-square form
[A, -B, 0] [E - D]
[A, 0, -C] # [[x, y, z]] = [F - D]
[0, B, -C] [F - E]
The problem is that the size of the least-squares problem increases quadratically with the number of lines. I'm wondering, is there a more efficient way to solve an n-way-equal least-squares linear problem?
I was thinking about whether By + E = Cz + F above is necessary given the other two equations. But since this problem does not have an exact solution (i.e. the lines don't actually intersect), I believe dropping it would put more "weight" on some variables?
Thank you for your help!
EDIT
I just tested pairing the first term with all other terms in the n-way-equality (and no other pairs) using the following code
def lineIntersect(k, b):
    "k, b: N-by-D matrices describing N D-dimensional lines: k[i] * x + b[i]"
    # Convert the problem to least-square form `Ax = B`
    # A is temporarily defined 3-dimensional for convenience
    A = np.zeros((len(k)-1, k.shape[1], len(k)), k.dtype)
    A[:,:,0] = k[0]
    A[range(len(k)-1),:,range(1,len(k))] = -k[1:]
    # Convert to 2-dimensional matrix by flattening first two dimensions
    A = A.reshape(-1, len(k))
    # B should be 1-dimensional vector
    B = (b[1:] - b[0]).ravel()
    x = np.linalg.lstsq(A, B, None)[0]
    return (x[:,None] * k + b).mean(0)
The result below indicates that doing so is not correct, because the first term in the n-way-equality is "weighted differently".
The first output is the difference between the regular result and the result with a different input order (line order should not matter) in which the first term did not change.
The second output is the same comparison, but with the first term changed.
k = np.random.rand(10, 100)
b = np.random.rand(10, 100)
print(np.linalg.norm(lineIntersect(k, b) - lineIntersect(np.r_[k[:1],k[:0:-1]], np.r_[b[:1],b[:0:-1]])))
print(np.linalg.norm(lineIntersect(k, b) - lineIntersect(k[::-1], b[::-1])))
results in
7.889616961715915e-16
0.10702479853076755
Another criterion for the 'almost intersection point' would be a point x such that the sum of the squares of the distances from x to the lines is as small as possible. Like your criterion, if the lines actually do intersect then the almost intersection point will be the actual intersection point. However, I think the sum-of-squared-distances criterion makes it straightforward to compute the point in question:
Suppose we represent a line by a point and a unit vector along the line. So if a line is represented by p,t then the points on the line are of the form
p + l*t for scalar l
The distance-squared of a point x from a line p,t is
(x-p)'*(x-p) - square( t'*(x-p))
If we have N lines p[i],t[i] then the sum of the distances squared from a point x is
Sum { (x-p[i])'*(x-p[i]) - square( t[i]'*(x-p[i]))}
Expanding this out I get the above to be
x'*S*x - 2*x'*V + K
where
S = N*I - Sum{ t[i]*t[i]'}
V = Sum{ p[i] - (t[i]'*p[i])*t[i] }
and K does not depend on x
Unless all the lines are parallel, S will be (strictly) positive definite and hence invertible, and in that case our sum of distances squared is
(x-inv(S)*V)'*S*(x-inv(S)*V) + K - V'*inv(S)*V
Thus the minimising x is
inv(S)*V
So the drill is: normalise your direction vectors, form S and V as above, and solve
S*x = V for x
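A minimal NumPy sketch of this recipe, assuming the lines come as rows of a point array p and a direction array t:
import numpy as np
def almost_intersection(p, t):
    "p, t: N-by-D arrays; line i passes through p[i] with direction t[i]"
    t = t / np.linalg.norm(t, axis=1, keepdims=True)  # normalise directions
    N, D = p.shape
    S = N * np.eye(D) - t.T @ t  # S = N*I - Sum{ t[i]*t[i]' }
    V = (p - np.sum(t * p, axis=1, keepdims=True) * t).sum(axis=0)  # Sum{ p[i] - (t[i]'*p[i])*t[i] }
    return np.linalg.solve(S, V)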
This question might be better suited for the math stackexchange. Also, does anyone have a good way of formatting math here? Sorry that it's hard to read, I did my best with unicode.
EDIT: I misinterpreted what @ZisIsNotZis meant by the lines Ax+C, so disregard the next paragraph.
I'm not convinced that your method is stated correctly. Would you mind posting your code and a small example of the output (maybe in 2D with 3 or 4 lines so we can plot it)? When you're trying to find the intersection of two lines, shouldn't you do Ax+C = Bx+D? If you do Ax+C = By+D, you can pick some x on the first line and some y on the second line and satisfy both equations exactly, because here x and y should be the same size as A and B, which is the dimension of the space, rather than scalars.
There are many ways to understand the problem of finding a point that is as close to all lines as possible. I think the most natural one is that the sum of squares of euclidian distance to each line is minimized.
Suppose we have a line in R^n through the point d with unit direction vector c, and another point x. Then the shortest vector from x to the line is (I-cc^T)(x-d), so the square of the distance from x to the line is ║(I-cc^T)(x-d)║^2. We can find the closest point on the line by minimizing this distance. Note that this is a standard least squares problem of the form min_x ║b-Ax║_2.
Now, suppose we have lines through points d_i with unit directions c_i for i=1,...,m. The squared distance r_i^2 from a point x to the i-th line is r_i^2 = ║(I-c_i c_i^T)(x-d_i)║_2^2. We now want to solve the problem min_x \sum_{i=1}^{m} r_i^2.
In matrix form we have:
      ║ ⎡ (I-c_1 c_1^T)(x-d_1) ⎤ ║
      ║ | (I-c_2 c_2^T)(x-d_2) | ║
min_x ║ |         ...          | ║
      ║ ⎣ (I-c_m c_m^T)(x-d_m) ⎦ ║_2
This is again in the form min_x ║b - Ax║_2 so there are good solvers available.
Each block has size n (the dimension of the space) and there are m blocks (the number of lines), so the system is mn by n. In particular, it is linear in the number of lines and quadratic in the dimension of the space.
It also has the advantage that if you add a line you simply add another block to the least squares system. This also offers the possibility of updating solutions iteratively as you add lines.
I'm not sure if there are special solvers for this type of least squares system. Note that each block is the identity minus a rank one matrix, so that might give some additional structure which can be used to speed things up. That said, I think using existing solvers will almost always work better than writing your own, unless you have quite a bit of background in numerical analysis or have a very specialized class of systems to solve.
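A minimal sketch of assembling and solving this block system with NumPy, assuming the unit directions c and points d are given as m-by-n arrays:
import numpy as np
def block_intersection(c, d):
    "c, d: m-by-n arrays; line i passes through d[i] with unit direction c[i]"
    m, n = c.shape
    blocks = [np.eye(n) - np.outer(ci, ci) for ci in c]  # I - c_i c_i^T
    A = np.vstack(blocks)  # the mn-by-n stacked system matrix
    b = np.concatenate([B @ di for B, di in zip(blocks, d)])
    return np.linalg.lstsq(A, b, rcond=None)[0]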
Not a solution, some thoughts:
If a line in nD space has the parametric equation (with a unit Dir vector)
L(t) = Base + Dir * t
then squared distance from point P to this line is
W = P - Base
Dist^2 = (W - (W · Dir) * Dir)^2
If it is possible to write Min(Sum(Dist[i]^2)) in a form suitable for the LSQ method (take partial derivatives with respect to every coordinate of the point), the resulting system can be solved for the coordinate vector (x1..xn).
(The situation resembles a reversal of the usual LSQ setting of many points and a single line.)
You say that you have two "high-dimensional" lines. This implies that the matrix indicating the lines has many more columns than rows.
If this is the case and you can efficiently find a low-rank decomposition such that A=LRᵀ, then you can rewrite the solution of the least squares problem min ║Ax-y║₂ as x = R(RᵀR)⁻¹(LᵀL)⁻¹Lᵀy, the minimum-norm least-squares solution obtained from the pseudoinverse of A.
If m is the number of lines and n the dimension of the lines, then this reduces the least-squares time complexity from O(mn²+nʷ) to O(mr²+nr²), where r ≤ min(m,n) is the rank of the decomposition.
The problem then is to find such a decomposition.
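A minimal sketch of that shortcut, assuming L and R both have full column rank r so the two small Gram systems are invertible:
import numpy as np
m, n, r = 50, 2000, 10  # hypothetical sizes: many more columns than rows
L = np.random.randn(m, r)
R = np.random.randn(n, r)
y = np.random.randn(m)
# x = R (R'R)^-1 (L'L)^-1 L'y via two r-by-r solves instead of one n-by-n solve
z = np.linalg.solve(L.T @ L, L.T @ y)
x = R @ np.linalg.solve(R.T @ R, z)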
I am trying to find a solution to a system of equations using scipy.optimize.fsolve in Python 2.7. The goal is to calculate equilibrium concentrations for a chemical system. Due to the nature of the problem, some of the constants are very small. For some combinations I do get a proper solution; for some parameters I don't find one. Either the solutions are negative, which is not reasonable from a physical point of view, or fsolve produces:
ier = 3, 'xtol=0.000000 is too small, no further improvement in the approximate\n solution is possible.')
ier = 4, 'The iteration is not making good progress, as measured by the \n improvement from the last five Jacobian evaluations.')
ier = 5, 'The iteration is not making good progress, as measured by the \n improvement from the last ten iterations.')
It seems to me, based on my research, that the failure to find proper solutions of the equation system is connected to the datatype float64 not being precise enough. As a friend pointed out, the system is not well conditioned, with parameters differing by several orders of magnitude.
So I tried to use fsolve with the mpfr type provided by the gmpy2 module, but that resulted in the following error:
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
Now here is a small example with parameters which lead to a solution if the randomized starting parameters happen to be good. However, if the constant C_HCL is chosen to be something like 1e-4 or bigger, then I never find a proper solution.
from numpy import *
from scipy.optimize import *
K_1 = 1e-8
K_2 = 1e-8
K_W = 1e-30
C_HCL = 1e-11
C_NAOH = K_W/C_HCL
C_HL = 1e-6
if C_HCL-C_NAOH > 0:
    Saeure_Base = C_HCL-C_NAOH+sqrt(K_W)
    OH_init = K_W/(Saeure_Base)
elif C_HCL-C_NAOH < 0:
    OH_init = C_NAOH-C_HCL+sqrt(K_W)
    Saeure_Base = K_W/OH_init
# some randomized start parameters
G1 = random.uniform(0, 2)*Saeure_Base
G2 = random.uniform(0, 2)*OH_init
G3 = random.uniform(1, 2)*C_HL*(sqrt(K_W))/(Saeure_Base+OH_init)
G4 = random.uniform(0.1, 1)*(C_HL - G3)/2
G5 = C_HL - G3 - G4
zGuess = array([G1,G2,G3,G4,G5])
#equation system / 5 variables --> H3O, OH, HL, H2L, L
def myFunction(z):
    H3O = z[0]
    OH = z[1]
    HL = z[2]
    H2L = z[3]
    L = z[4]
    F = empty((5))
    F[0] = H3O*L/HL - K_1
    F[1] = OH*H2L/HL - K_2
    F[2] = K_W - OH*H3O
    F[3] = C_HL - HL - H2L - L
    F[4] = OH+L+C_HCL-H2L-H3O-C_NAOH
    return F
z = fsolve(myFunction,zGuess, maxfev=10000, xtol=1e-15, full_output=1,factor=0.1)
print z
So the questions are: Is this problem caused by the precision of float64, and if yes, (how) can it be solved with Python? Is fsolve the way to go? Would I need to change the fsolve function so it accepts a different data type?
The root of your problem is either theoretical or numerical.
The scipy.optimize.fsolve function is based on the MINPACK Fortran solver (http://www.netlib.org/minpack/). This solver uses a Newton-Raphson optimisation algorithm to provide the solution.
There are underlying assumptions about the smoothness of the function when you use this algorithm. For example, the Jacobian matrix at the solution point x is supposed to be invertible. The one you should be more concerned about is the basins of attraction.
In order to converge, the starting point of the algorithm needs to be near the actual solution, i.e. in the basin of attraction. This condition is always met for convex functions; however, it is easy to find functions for which this algorithm behaves badly. Your function is one of these, as you have fractions of your input parameters.
To address this issue you should just change the starting point. The starting point also becomes very important for functions with multiple solutions: the picture from the Wikipedia article shows the solution found depending on the starting point (five colours for five solutions), so you should be careful with your solution and actually check the "physical" plausibility of your solution.
For the numerical aspects, the Newton-Raphson algorithm needs the value of the Jacobian matrix (the matrix of derivatives). If it is not provided to the MINPACK solver, the Jacobian is estimated with a finite-difference formula. The perturbation step for the finite-difference formula is controlled by the epsfcn argument; its default value None is only appropriate in the case where fprime is provided (there is no need for the Jacobian estimation in that case). So first you should incorporate that. You could also specify the Jacobian directly by differentiating your function by hand.
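As a small illustration of the fprime mechanism on a toy two-variable system (not the chemistry problem from the question):
from scipy.optimize import fsolve
def F(z):
    x, y = z
    return [x**2 + y - 2, x - y]
def J(z):
    # analytic Jacobian: row i holds the partial derivatives of F[i]
    x, y = z
    return [[2*x, 1], [1, -1]]
sol = fsolve(F, [1.5, 1.5], fprime=J)  # converges to x = y = 1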
However, the minimum value for the step size is the machine precision, also called machine epsilon. For your problem, you have very small input values, which can be a problem. I would suggest multiplying every one of them by the same value (like 10^6); it is equivalent to a change of units but will avoid rounding errors and problems with machine precision.
This problem also shows in the parameter xtol=1e-15 you provided. In your error message it appears as xtol=0.000000, as it is below machine precision and cannot be taken into account. Also, if you look at your line F[2] = K_W - OH*H3O: given the machine precision, it does not matter whether K_W is 1e-15 or 1e-30; compared to machine precision, 0 is a solution in both cases. To avoid this problem, just multiply everything by a bigger value.
So to sum up:
For the Newton-Raphson algorithm, the initialisation point matters!
For this algorithm, you should specify how you compute the Jacobian!
In numerical computation, never work with small values. You can easily change the dimensions to something different: it is a basic units conversion, like working in grams instead of kilograms.
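A minimal sketch of this rescaling applied to the residual function from the question, reusing the constants and zGuess from that script (the factor 1e6 is an arbitrary choice; a true unit conversion would also rescale the equilibrium constants consistently):
from numpy import empty
from scipy.optimize import fsolve
SCALE = 1e6  # pick a factor that brings the unknowns close to O(1)
def scaled_residual(u):
    # u holds the scaled concentrations: u = SCALE * [H3O, OH, HL, H2L, L]
    H3O, OH, HL, H2L, L = u / SCALE
    F = empty((5))
    F[0] = H3O*L/HL - K_1
    F[1] = OH*H2L/HL - K_2
    F[2] = K_W - OH*H3O
    F[3] = C_HL - HL - H2L - L
    F[4] = OH + L + C_HCL - H2L - H3O - C_NAOH
    return F * SCALE  # rescale the residuals too, so they are not all below xtol
z = fsolve(scaled_residual, zGuess * SCALE, full_output=1)[0] / SCALE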
I have a dynamic model set up as a (stiff) system of ODEs. I currently solve this with CVODE (from the SUNDIALS package in the Assimulo python package) and all is good.
I now want to add a new 3D heat sink (with temperature-dependent thermal parameters) to the problem. Instead of writing out all the equations from scratch for the 3D heat equation, my idea is to use an existing FEM or FVM framework that gives me an interface allowing me to easily pass the (t, y) for the 3D block to a routine and get back the residuals y'. The principle is to use the equations from the FEM system but not its solver. CVODE can exploit sparsity, but the combined system is expected to solve more slowly than the FEM system would on its own, being tailored for such problems.
# pseudocode of a residuals function for CVODE
def residual(t, y):
    # ODE system of n equations
    res[0] = <function of t,y>
    res[1] = <function of t,y>
    ...
    res[n-1] = <function of t,y>
    # Here we add the FEM/FVM residuals
    for i in range(FEMcount):
        res[n+i] = FEMequations[i](t, y)
    return res
My question is whether (a) this approach is sane, and (b) is there a FEM or FVM library that will easily let me treat it as a system of equations, such that I can "tack it on" to my existing set of ODE equations.
If I can't let the two systems share the same time axis, then I will have to run them in a stepping mode, where I run one model for a short time, update the boundary conditions for the other, run that one, update the first model's BCs, and so on.
I have some experience with the wonderful library FiPy, and I am expecting to eventually end up using that library in the manner described above. But I want to know about experience with other systems in problems of this nature, and also other approaches that I have missed.
Edit: I now have some example code that appears to be working, showing how the FiPy mesh diffusion residuals can be solved with CVODE. However, this is only one approach (using FiPy) and the remainder of my other questions and concerns still stand. Any suggestions welcome.
from fipy import *
from fipy.solvers.scipy import DefaultSolver
solverFIPY = DefaultSolver()
from assimulo.solvers import CVode as solverASSIMULO
from assimulo.problem import Explicit_Problem as Problem
# FiPy Setup - Using params from the Mesh1D example
###################################################
nx = 50; dx = 1.; D = 1.
mesh = Grid1D(nx = nx, dx = dx)
phi = CellVariable(name="solution variable", mesh=mesh, value=0.)
valueLeft, valueRight = 1., 0.
phi.constrain(valueRight, mesh.facesRight)
phi.constrain(valueLeft, mesh.facesLeft)
# Instead of eqX = TransientTerm() == ExplicitDiffusionTerm(coeff=D),
# Rather just operate on the diffusion term. CVODE will calculate the
# Transient side
edt = ExplicitDiffusionTerm(coeff=D)
timeStepDuration = 0.9 * dx**2 / (2 * D)
steps = 100
# For comparison with an analytical solution - again,
# taken from the Mesh1D.py example
phiAnalytical = CellVariable(name="analytical value", mesh=mesh)
x = mesh.cellCenters[0]
t = timeStepDuration * steps
from scipy.special import erf
phiAnalytical.setValue(1 - erf(x / (2 * numerix.sqrt(D * t))))
if __name__ == '__main__':
    viewer = Viewer(vars=(phi, phiAnalytical))#, datamin=0., datamax=1.)
    viewer.plot()
    raw_input('Press a key...')
# Now for the Assimulo/Sundials solver setup
############################################
def residual(t, X):
    # Pretty straightforward, phi is the unknown
    phi.value = X # This is a vector, 50 elements
    # Can immediately return the residuals, CVODE sees this vector
    # of 50 elements as X'(t), which is like TransientTerm() from FiPy
    return edt.justResidualVector(var=phi, solver=solverFIPY)
x0 = phi.value
t0 = 0.
model = Problem(residual, x0, t0)
simulation = solverASSIMULO(model)
tfinal = steps * timeStepDuration # s,
cell_tol = [1.0e-8]*50
simulation.atol = cell_tol
simulation.rtol = 1e-6
simulation.iter = 'Newton'
t, x = simulation.simulate(tfinal, 0)
print x[-1]
# Write back the answer to compare
phi.value = x[-1]
viewer.plot()
raw_input('Press a key...')
This will produce a graph showing a perfect match:
An ODE is a differential equation in one independent variable.
An FEM model is for problems that are spatial, i.e., problems in higher dimensions. You want a finite difference method. It's easier to solve and understand from the perspective of someone coming from the ODE world.
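For instance, a minimal method-of-lines sketch in 1D with hypothetical parameters: a finite-difference discretisation turns the heat equation into exactly the kind of ODE system a solver like CVODE consumes:
import numpy as np
# u_t = D * u_xx on [0, 1] with Dirichlet BCs u(0)=1, u(1)=0 and N interior nodes
N, D = 50, 1.0
dx = 1.0 / (N + 1)
def heat_rhs(t, u):
    # central second difference; the boundary values enter through the padding
    u_ext = np.concatenate(([1.0], u, [0.0]))
    return D * (u_ext[2:] - 2.0 * u_ext[1:-1] + u_ext[:-2]) / dx**2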
The question I think you should really be asking is how to take your ODE and transfer it to a 3D problem space.
Multidimensional partial differential equations are difficult to solve, but I'll refer you to a CFD resource for doing just that, although only in 2D: http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/
It should take you a solid afternoon to get through that! Good Luck!