Solving a large system of equations with 4 unknowns - Python

I have a dataset including the distance and bearing to ~1800 fused radar contacts as well as the actual distance and bearing to those contacts, and I need to develop a correction equation to get the perceived values to be as close to the actual values as possible.
There seems to be a trend in the error when visualizing, so it seems to me that there should be a somewhat simple equation to correct it.
This is the form of the ~1800 equations:
actual_distance = perceived_distance + X*(perceived_bearing) + Y*(speed_over_ground) + Z*(course_over_ground) + A*(heading)
What is the best way to solve for X, Y, Z, and A?
Also, I'm not convinced that all of these factors are necessary, so I'm completely willing to leave out one or two of the factors.
From the little linear algebra I understand, I've attempted something like this with no luck:
Ax = b --> x = b/A via numpy.linalg.solve(A, b)
where A is the 4 x ~1800 matrix and b is the 1 x ~1800 matrix
Is this on the right track?
To be clear, I'm expecting to generate coefficients for an equation that will correct perceived distance to a contact so that it is as close as possible to the actual distance to the contact.
I am also totally willing to abandon this method if there is a better one.
Thanks for your help in advance.

The best way to solve such a system of equations is to use the Incomplete Cholesky Conjugate Gradient technique (ICCG). It can be implemented in MATLAB, Numerical Recipes in C++, NAG Fortran, or many other languages, and it is very efficient. Basically you are inverting a large banded matrix. The book by Golub describes it in detail.
Looks like this is useful:
https://docs.scipy.org/doc/numpy-1.14.1/reference/generated/numpy.linalg.cholesky.html
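For a problem with only 4 unknowns, a full ICCG setup is arguably overkill; a plain Cholesky factorization of the 4 x 4 normal equations already works. A rough numpy-only sketch, with made-up stand-ins for the design matrix A and the error vector b:

import numpy as np

# Made-up stand-ins: A holds one row per contact (~1800 x 4), b the errors to explain
rng = np.random.default_rng(0)
A = rng.random((1800, 4))
b = A @ np.array([0.3, -0.1, 0.05, 0.2]) + 0.01 * rng.normal(size=1800)

AtA = A.T @ A                # 4 x 4 normal-equations matrix (symmetric positive definite)
Atb = A.T @ b
L = np.linalg.cholesky(AtA)  # AtA = L @ L.T
y = np.linalg.solve(L, Atb)  # forward solve
x = np.linalg.solve(L.T, y)  # back solve: x holds the 4 coefficients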

When you have more equations than unknowns, you might not have an exact solution. In such a case you can use the Moore-Penrose pseudoinverse of your matrix A: the pseudoinverse of A times b gives you the least-squares solution. In numpy you can use https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html#numpy.linalg.lstsq
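As a concrete illustration, a minimal least-squares sketch for the correction problem in the question; the column names and the synthetic data are assumptions, not the actual dataset:

import numpy as np

# Synthetic stand-in data; in practice these are the ~1800 measured values
rng = np.random.default_rng(0)
perceived, bearing, sog, cog, heading = rng.random((5, 1800))
actual = perceived + 0.3 * bearing - 0.1 * sog + rng.normal(0, 0.01, 1800)

A = np.column_stack([bearing, sog, cog, heading])  # ~1800 x 4 design matrix
b = actual - perceived                             # the error the coefficients must explain

coeffs, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
X, Y, Z, Acoef = coeffs
corrected = perceived + A @ coeffs                 # corrected distance estimates

Dropping a factor is then just a matter of leaving its column out of A and comparing the residuals.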

Related

Solution to nonlinear differential equation with non-constant mass matrix

If I have a system of nonlinear ordinary differential equations, M(t,y) y' = F(t,y), what is the best method of solution when my mass matrix M is sometimes singular?
I'm working with the following system of equations:
If t=0, this reduces to a differential algebraic equation. However, even if we restrict t>0, this becomes a differential algebraic equation whenever y_4 = 0, which I cannot avoid with a domain restriction (and which is an integral part of the system I am trying to model). My only previous exposure to DAEs is when an entire row is 0 -- but in this case my mass matrix is not always singular.
What is the best way to implement this numerically?
So far, I've tried using Python, where I add a small number (0.0001) to the main diagonal of M and invert it, solving the equations y' = M^{-1}(t,y) F(t,y). However, this seems prone to instabilities, and I'm unsure whether this is a universally appropriate means of regularization.
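For what it's worth, here is a minimal sketch of that regularization with scipy.integrate.solve_ivp; the 2 x 2 M and F below are placeholders, not the actual system:

import numpy as np
from scipy.integrate import solve_ivp

def M(t, y):
    # Placeholder mass matrix: singular whenever y[1] == 0
    return np.array([[1.0, 0.0],
                     [0.0, y[1]]])

def F(t, y):
    # Placeholder right-hand side
    return np.array([-y[0], 1.0 - y[1]])

eps = 1e-4  # small number added to the diagonal, as described above

def rhs(t, y):
    # Solve (M + eps*I) y' = F at each step instead of forming an explicit inverse
    return np.linalg.solve(M(t, y) + eps * np.eye(len(y)), F(t, y))

# An implicit (stiff) method is usually a better fit when M is close to singular
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5], method="Radau", rtol=1e-8, atol=1e-10)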
Python doesn't have any built-in functions to deal with mass matrices, so I've also tried coding this in Julia. However, DifferentialEquations.jl states explicitly that "Non-constant mass matrices are not directly supported: users are advised to transform their problem through substitution to a DAE with constant mass matrices."
I'm at a loss on how to accomplish this. Any insights on how to do this substitution or a better way to solve this type of problem would be greatly appreciated.
The following transformation leads to a constant mass matrix:
You need to handle the case of y_4 = 0 separately.

Computing the Jacobian matrix, using (at most) Numpy

I'm attempting to write a simple implementation of the Newton-Raphson method, in Python. I've already done so using the SymPy library, however what I'm working on now will (ultimately) end up running in an environment where only Numpy is available.
For those unfamiliar with the algorithm, it works (in my case) as follows:
I have a system of symbolic equations, which I "stack" to form a matrix F. The unknowns are X,Y,Z,T (which I wish to determine). Some additional values are initially unknown, until passed to my solver, which substitutes these known values for variables in the symbolic expressions.
Now, the Jacobian matrix (J) of F is computed. This, too, is a matrix of symbolic expressions.
Now, I iterate in some range (max_iter). With each iteration, I form a matrix A by substituting the current estimates (starting from some initial values) for the unknowns X,Y,Z,T. Similarly, I form a vector b by substituting the current estimates for X,Y,Z,T.
I then obtain new estimates by solving the matrix equation Ax = b for x. This vector x holds dT, dX, dY, dZ. I then add these to the current estimates for T,X,Y,Z and iterate again.
Thus far, I've found my largest issue to be computing the Jacobian matrix. I need only to do this once, however it will be different depending upon the coefficients fed to the solver (not unknowns, but only known once fed to the solver, so I can't simply hard-code the Jacobian).
While I'm not terribly familiar with Numpy, I know that it offers numpy.gradient. I'm not sure, however, that this is the same as SymPy's .jacobian.
How can the Jacobian matrix be found, either in "pure" Python, or with Numpy?
EDIT:
Should it be useful to you, more information on the problem can be found [here]. It can be formulated a few different ways, but (as of now) I'm writing it as 4 equations of the form:
\sqrt{(X - x_i)^2 + (Y - y_i)^2 + (Z - z_i)^2} = c \cdot (t_i - T)
Where X,Y,Z and T are unknown.
This describes the solution to a localization problem, where we know (a) the location of n >= 4 observers in a 3-dimensional space, (b) the time at which each observer "saw" some signal, and (c) the velocity of the signal. The goal is to determine the coordinates of the signal source X,Y,Z (and, as a side effect, the time of emission, T).
Notice that I've tried (many) other approaches to solving this problem, and all leads point toward a combination of Newton-Raphson with regression.
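In case it helps, here is a numpy-only sketch of the scheme described above, using a finite-difference Jacobian in place of SymPy's .jacobian; the observer data is made up and all names are illustrative:

import numpy as np

def residuals(p, obs, t, c):
    # F_i = sqrt((X-x_i)^2 + (Y-y_i)^2 + (Z-z_i)^2) - c*(t_i - T), with p = (X, Y, Z, T)
    return np.sqrt(((obs - p[:3]) ** 2).sum(axis=1)) - c * (t - p[3])

def jacobian(f, p, *args, h=1e-6):
    # Forward-difference Jacobian: one column per unknown
    f0 = f(p, *args)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (f(p + dp, *args) - f0) / h
    return J

# Toy data: 5 observers, signal emitted from (1, 2, 3) at T = 0, speed c = 300
rng = np.random.default_rng(0)
obs = rng.uniform(-10, 10, size=(5, 3))
c = 300.0
t = np.sqrt(((obs - [1.0, 2.0, 3.0]) ** 2).sum(axis=1)) / c

p = np.zeros(4)                  # initial estimates for X, Y, Z, T
for _ in range(20):              # Newton/Gauss-Newton iterations
    J = jacobian(residuals, p, obs, t, c)
    step, *_ = np.linalg.lstsq(J, -residuals(p, obs, t, c), rcond=None)
    p += step
print(p)                         # should approach [1, 2, 3, 0]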

Coupled non-linear equations in FiPy

I'm trying to set up a system for solving these 5 coupled PDEs in FiPy to study the dynamics of electrons and holes in semiconductors.
The system of coupled PDEs
I'm struggling with defining the terms highlighted in blue, as they are products of one variable with the gradient of another. For example, I'm able to define the third equation like this without error messages:
eq3 = ImplicitSourceTerm(coeff=1, var=J_n) == ImplicitSourceTerm(coeff=e*mu_n*PowerLawConvectionTerm(var=phi), var=n) + PowerLawConvectionTerm(coeff=mu_n*k*T, var=n)
But I'm not sure if this is a good way. Is there a better way to define this non-linear term, please?
Also, if I wanted to define a term that would be product of two variables (say p and n), would it be just:
ImplicitSourceTerm(p, var=n)
Or is there a different way?
I am amazed that you don't get an error from passing a PowerLawConvectionTerm as a coefficient of an ImplicitSourceTerm. It's certainly not intended to work. I suspect you would get an error if you attempted to solve().
You should substitute your flux equations into your continuity equations so that you end up with three second-order PDEs for electron drift-diffusion, hole drift-diffusion, and Poisson's equation. It will hopefully then be a bit clearer how to use FiPy Terms to represent the different elements of those equations.
That said, these equations are challenging. Please see this issue and this notebook for some pointers on how to set up and solve these equations, but realize that we provide no examples in our documentation because we haven't been able to come up with anything robust enough. Solving for pseudo-Fermi levels has worked a bit better for me than solving for electron and hole concentrations.
ImplicitSourceTerm(p, var=n) is a reasonable way to represent the n*p recombination term.
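To sketch what the substitution looks like in code, here is a rough FiPy fragment for just the electron continuity equation after the flux is substituted in; the constants, signs, and boundary conditions are placeholders rather than a validated model, and the coupled system still has to be swept to convergence:

from fipy import (CellVariable, Grid1D, TransientTerm, DiffusionTerm,
                  ImplicitSourceTerm, PowerLawConvectionTerm)

mu_n = k = T = k_r = 1.0  # placeholder constants

mesh = Grid1D(nx=100, dx=0.01)
n = CellVariable(mesh=mesh, name="electrons", value=1.0, hasOld=True)
p = CellVariable(mesh=mesh, name="holes", value=1.0, hasOld=True)
phi = CellVariable(mesh=mesh, name="potential", value=0.0, hasOld=True)

# dn/dt = div(mu_n * n * grad(phi)) + div(mu_n*k*T * grad(n)) - k_r * n * p
eq_n = (TransientTerm(var=n)
        == PowerLawConvectionTerm(coeff=mu_n * phi.faceGrad, var=n)
        + DiffusionTerm(coeff=mu_n * k * T, var=n)
        - ImplicitSourceTerm(coeff=k_r * p, var=n))

The drift term becomes a convection term whose coefficient is the face-valued gradient of phi, and the n*p recombination uses the ImplicitSourceTerm(coeff=p, var=n) pattern mentioned above.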

Is there a way to solve yB = c without computing the right inverse?

I would like to solve an equation of the form yB = c, where y is my unknown (possibly a matrix). However the B matrix is not well conditioned, and I would like to have a method similar to numpy.linalg.solve in order to maintain the numerical accuracy of the solution.
I have tried to simply use the inverse of B, with numpy.linalg.inv, to find the solution y = cB^-1, as well as using the pseudo-inverse (numpy.linalg.pinv), but they proved not to be accurate enough...
I have also looked into the QR decomposition, since numpy provides a method for it, hoping to adapt it to the right-inverse case, but I struggle with the algebra there.
Is there an accurate way to solve this equation? Or is there an equivalent to numpy.linalg.solve for the right inverse?
You can transpose the equation and then use linalg.solve: yB = c is equivalent to B^T y^T = c^T, which is exactly the form numpy.linalg.solve expects.
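In code, with a small made-up B and c just to show the shapes:

import numpy as np

B = np.array([[3.0, 1.0],
              [1.0, 2.0]])
c = np.array([[5.0, 4.0]])      # one row; y comes out with the same shape

# y B = c  <=>  B^T y^T = c^T, so solve for y^T and transpose back
y = np.linalg.solve(B.T, c.T).T

print(np.allclose(y @ B, c))    # True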

Inverse Matrix (Numpy) int too large to convert to float

I am trying to take the inverse of a 365x365 matrix. Some of the values get as large as 365**365, so they are converted to long numbers. I don't know if the linalg.matrix_power() function can handle long numbers. I know the problem comes from this (because of the error message, and because my program works just fine for smaller matrices), but I am not sure if there is a way around it. The code needs to work for an NxN matrix.
Here's my code:
item = 0
for i in xlist:
    xtotal.append(arrayit.arrayit(xlist[item], len(xlist)))
    item = item + 1
print xtotal
xinverted = numpy.linalg.matrix_power(xtotal, -1)
coeff = numpy.dot(xinverted, ylist)
arrayit.arrayit:
def arrayit(number, length):
    import decimal
    newarray = []
    i = 0
    while i != length:
        newarray.insert(0, decimal.Decimal(number**i))
        i = i + 1
    return newarray
The program takes x,y coordinates from two lists (a list of x's and a list of y's) and builds a function from them.
Thanks!
One thing you might try is the library mpmath, which can do simple matrix algebra and other such problems on arbitrary precision numbers.
A couple of caveats: It will almost certainly be slower than using numpy, and, as Lutzl points out in his answer to this question, the problem may well not be mathematically well defined. Also, you need to decide on the precision you want before you start.
Some brief example code,
from mpmath import mp, matrix
# set the precision - see http://mpmath.org/doc/current/basics.html#setting-the-precision
mp.prec = 5000 # set it to something big at the cost of speed.
# Ideally you'd precalculate what you need.
# a quick trial with 100*100 showed that 5000 works and 500 fails
# see the documentation at http://mpmath.org/doc/current/matrices.html
# where xtotal is the output from arrayit
my_matrix = matrix(xtotal) # I think this should work. If not you'll have to create it and copy
# do the inverse
xinverted = my_matrix**-1
coeff = xinverted*matrix(ylist)
# note that as Lutzl pointed out you really want to use solve instead of calculating the inverse.
# I think this is something like
from mpmath import lu_solve
coeff = lu_solve(my_matrix,matrix(ylist))
I suspect your real problem is with the maths rather than the software, so I doubt this will work fantastically well for you, but it's always possible!
Did you ever hear of Lagrange or Newton interpolation? This would avoid the whole construction of the Vandermonde matrix. But not the potentially large numbers in the coefficients.
As a general observation, you do not want the inverse matrix. You do not need to compute it. What you want is to solve a system of linear equations.
x = numpy.linalg.solve(A, b)
solves the system A*x=b.
You (really) might want to look up the Runge effect. Interpolation with equally spaced sample points is an increasingly ill-conditioned task. Useful results can be obtained for single-digit degrees; larger degrees tend to give wildly oscillating polynomials.
You can often use polynomial regression, i.e., approximating your data set by the best polynomial of some low degree.
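For example, a short sketch with numpy's polyfit; the data here is made up, and in the original code xlist and ylist would be the coordinate lists:

import numpy as np

# Made-up data standing in for xlist and ylist
xlist = np.linspace(0.0, 1.0, 365)
ylist = np.sin(2 * np.pi * xlist) + 0.05 * np.random.default_rng(0).normal(size=365)

# Fit a low-degree polynomial by least squares instead of inverting a 365x365 Vandermonde matrix
coeff = np.polyfit(xlist, ylist, deg=5)
fitted = np.polyval(coeff, xlist)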
