Problem.
Consider the following problem: An experiment consists of three receivers, i, j, and k, with known coordinates (xi, yi, zi), (xj, yj, zj), and (xk, yk, zk) in 3D space. A stationary transmitter, with unknown coordinates (x, y, z), emits a signal with known velocity v. The time of arrival of this signal at each receiver is recorded, giving ti, tj, tk. The signal emission time, t, is unknown.
Using only the coordinates of the receivers and the signal arrival times, I wish to determine the location of the transmitter.
In general, to solve for the coordinates of such a transmitter in N-dimensional space, N+1 receivers are required. Hence, in this case, a single, unique solution is unobtainable. A small, finite number of solutions should be obtainable via numerical methods, however.
We may write the following system of equations to model the problem:
Sqrt[(x-xi)^2 + (y-yi)^2 + (z-zi)^2] + v(tj-ti) = Sqrt[(x-xj)^2 + (y-yj)^2 + (z-zj)^2]
Sqrt[(x-xj)^2 + (y-yj)^2 + (z-zj)^2] + v(tk-tj) = Sqrt[(x-xk)^2 + (y-yk)^2 + (z-zk)^2]
Sqrt[(x-xi)^2 + (y-yi)^2 + (z-zi)^2] + v(tk-ti) = Sqrt[(x-xk)^2 + (y-yk)^2 + (z-zk)^2]
Each equation describes a hyperboloid. Under ideal conditions, these three hyperboloids will intersect at precisely two points: one being the "true" solution, and the other being a reflection of that solution about the plane defined by the three receivers. In practice, given sufficiently accurate measurements, numerical solvers should be able to approximate these points of intersection.
My goal is to determine both solutions. Although it is impossible to determine which is the "true" transmitter location, for my purposes this will be sufficient.
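Incidentally, once one candidate point is in hand, its mirror image across the receiver plane can be computed directly; a minimal sketch, assuming NumPy arrays for the candidate point and the three receiver positions (the function name is only illustrative):

import numpy as np

def mirror_about_receiver_plane(p, r1, r2, r3):
    # Reflect candidate point p across the plane through receivers r1, r2, r3.
    n = np.cross(r2 - r1, r3 - r1)        # normal of the receiver plane
    n = n / np.linalg.norm(n)             # unit normal
    return p - 2 * np.dot(p - r1, n) * n  # mirror image of p in that plane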
Implementation.
I wish to implement a solution in Python. I'm not terribly familiar with NumPy or SciPy, however I've done a fair bit of work in SymPy, so I began there.
SymPy offers a variety of solvers, most of which focus on obtaining solutions symbolically. Not surprisingly, solve() and the like failed to find a solution, even under simulated "ideal" conditions (picking a random point, calculating the time taken for a signal originating from that point to arrive at each receiver, and feeding this to the algorithm).
SymPy also offers a numerical solver, nsolve(). I gave this a go, using the approach given below, however (not surprisingly) I got the error ZeroDivisionError: Matrix is numerically singular.
import sympy as sym
x, y, z = sym.symbols('x y z')  # receiver coords and range differences D_ij, D_ik, D_jk assumed defined
f = sym.Eq(sym.sqrt((x - x_i)**2 + (y - y_i)**2 + (z - z_i)**2) - sym.sqrt((x - x_j)**2 + (y - y_j)**2 + (z - z_j)**2), D_ij)
g = sym.Eq(sym.sqrt((x - x_i)**2 + (y - y_i)**2 + (z - z_i)**2) - sym.sqrt((x - x_k)**2 + (y - y_k)**2 + (z - z_k)**2), D_ik)
h = sym.Eq(sym.sqrt((x - x_j)**2 + (y - y_j)**2 + (z - z_j)**2) - sym.sqrt((x - x_k)**2 + (y - y_k)**2 + (z - z_k)**2), D_jk)
print("Soln. ", sym.nsolve((f, g, h), (x, y, z), (1, 1, 1)))
As I understand it, nsolve() relies on "matrix" techniques. A singular matrix is one which may not be inverted (or, equivalently, which has determinant zero), hence SymPy is unable to solve the system in question.
My grasp of matrices and nonlinear system solving techniques is a bit lacking, but as I understand it, a singular matrix arises when the system has no solution, infinitely many solutions, or otherwise lacks a unique solution. Given that I know there to be exactly two solutions, I believe this is the issue.
What Python solvers are available that can be used to solve a nonlinear system with multiple solutions? Or, alternatively, is there a way to modify this system so that it is digestible by SymPy?
Browsing the NumPy and SciPy docs, it seems that their solvers are effectively identical to what SymPy offers.
The (crude) test code mentioned above is given below:
from random import randrange
import math

# SIMPLE CODE FOR SIMULATING A SIGNAL

# Set ranges for receiver and source coordinates
N = 10
P = 100

# Pick nodes to be at random locations
x_1 = randrange(N); y_1 = randrange(N); z_1 = randrange(N)
x_2 = randrange(N); y_2 = randrange(N); z_2 = randrange(N)
x_3 = randrange(N); y_3 = randrange(N); z_3 = randrange(N)

# Pick source to be at random location
x = randrange(P); y = randrange(P); z = randrange(P)

# Set velocity
c = 299792  # km/s

# Generate simulated arrival times
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c

# Normalize times to remove information about the 'true' emission time
earliest = min(t_1, t_2, t_3)
t_1 = t_1 - earliest; t_2 = t_2 - earliest; t_3 = t_3 - earliest
Background
I've been working for some time on the (notoriously painful) Time Difference of Arrival (TDoA) multilateration problem, in three dimensions and using 4 nodes. If you're unfamiliar with the problem, it is to determine the coordinates of some signal source (X, Y, Z), given the coordinates of n nodes, the time of arrival of the signal at each node, and the velocity of the signal, v.
My solution is as follows:
For each node, we write (X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2 = (v*(t_i - T))**2
Where (x_i, y_i, z_i) are the coordinates of the ith node, and T is the time of emission.
We now have 4 equations in 4 unknowns. Four nodes are obviously insufficient for a unique solution. We could try to solve this system directly, however that seems next to impossible given the highly nonlinear nature of the problem (and, indeed, I've tried many direct techniques... and failed). Instead, we simplify this to a linear problem by considering all i/j possibilities and subtracting equation i from equation j; the quadratic terms X**2, Y**2, Z**2, and T**2 cancel, leaving equations linear in the unknowns. We obtain n(n-1)/2 = 6 equations of the form:
2*(x_j - x_i)*X + 2*(y_j - y_i)*Y + 2*(z_j - z_i)*Z + 2*v**2*(t_i - t_j)*T = v**2*(t_i**2 - t_j**2) + (x_j**2 + y_j**2 + z_j**2) - (x_i**2 + y_i**2 + z_i**2)
These look like X*v_1 + Y*v_2 + Z*v_3 + T*v_4 = b. We now try to apply standard linear least squares, where the solution is the vector x satisfying A^T A x = A^T b. Unfortunately, if you were to try feeding this into any standard linear least squares algorithm, it'll choke up. So, what do we do now?
...
The time of arrival of the signal at node i is given (of course) by:
sqrt( (X-x_i)**2 + (Y-y_i)**2 + (Z-z_i)**2 ) / v
This equation implicitly assumes that the emission time, T, is 0. If we have that T = 0, we can drop the T column in matrix A and the problem is greatly simplified. Indeed, NumPy's linalg.lstsq() gives a surprisingly accurate and precise result.
...
So, what I do is normalize the input times by subtracting the earliest time from each. All I have to do then is determine the dt that I can add to each time such that the summed squared residual error for the point found by linear least squares is minimized.
I define the error for some dt to be the squared difference between the arrival time predicted for the point found by feeding the input times + dt to the least squares algorithm and the (normalized) input time, summed over all 4 nodes:
error = 0
for (x_i, y_i, z_i), time in zip(nodes, times):
    error += ((sqrt((X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2) / v) - time) ** 2
My problem:
I was able to do this somewhat satisfactorily using brute force. I started at dt = 0 and moved by some step, up to some maximum number of iterations OR until some minimum RSS error was reached, and that was the dt I added to the normalized times to obtain a solution. The resulting solutions were very accurate and precise, but quite slow.
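For illustration, the brute-force scan described above might look something like this (a sketch only; error(dt) stands for the summed squared error defined earlier, and the step size, iteration cap, and tolerance are placeholder values):

def brute_force_dt(error, step=1e-9, max_iters=100000, tol=1e-18):
    # Scan dt upward from 0, keeping the dt with the smallest error so far,
    # and stop early once the error drops below the tolerance.
    best_dt, best_err = 0.0, error(0.0)
    dt = 0.0
    for _ in range(max_iters):
        dt += step
        err = error(dt)
        if err < best_err:
            best_dt, best_err = dt, err
        if best_err < tol:
            break
    return best_dt

(With the Vertexer class in the edit below, this could be called as brute_force_dt(lambda dt: myVertexer.error(dt, times)).)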
In practice, I'd like to be able to solve this in real time, so a far faster solution is needed. I began with the assumption that the error function (that is, dt vs. error as defined above) would be highly nonlinear; offhand, this made sense to me.
Since I don't have an actual mathematical function, I can automatically rule out methods that require differentiation (e.g. Newton-Raphson). The error function will always be positive, so I can rule out bisection, etc. Instead, I tried a simple approximation search. Unfortunately, that failed miserably. I then tried tabu search, followed by a genetic algorithm, and several others. They all failed horribly.
So, I decided to do some investigating. As it turns out, the plot of the error function vs. dt looks a bit like a square root, only shifted right by an amount that depends on how far the signal source is from the nodes:
[Plot: dt on the horizontal axis, error on the vertical axis]
And, in hindsight, of course it does! I defined the error function to involve square roots so, at least to me, this seems reasonable.
What to do?
So, my issue now is, how do I determine the dt corresponding to the minimum of the error function?
My first (very crude) attempt was to get some points on the error graph (as above), fit them using numpy.polyfit, then feed the results to numpy.roots. That root corresponds to the dt. Unfortunately, this failed too. I tried fitting with various degrees, and also with various numbers of points, up to a ridiculous number of points such that I may as well just use brute force.
How can I determine the dt corresponding to the minimum of this error function?
Since we're dealing with high velocities (radio signals), it's important that the results be precise and accurate, as minor variances in dt can throw off the resulting point.
I'm sure there's some infinitely simpler approach buried in what I'm doing here. However, ignoring everything else, how do I find dt?
My requirements:
Speed is of utmost importance
I have access only to pure Python and NumPy in the environment where this will be run
EDIT:
Here's my code. Admittedly, it's a bit messy. Here, I'm using the polyfit technique. It will "simulate" a source for you and compare results:
from numpy import poly1d, linspace, set_printoptions, array, linalg, triu_indices, roots, polyfit
from dataclasses import dataclass
from random import randrange
import math

@dataclass
class Vertexer:
    receivers: list

    # Defaults
    c = 299792  # km/s

    # Receivers:
    # [x_1, y_1, z_1]
    # [x_2, y_2, z_2]
    # [x_3, y_3, z_3]
    # Solved:
    # [x, y, z]

    def error(self, dt, times):
        solved = self.linear([time + dt for time in times])
        error = 0
        for time, receiver in zip(times, self.receivers):
            error += ((math.sqrt((solved[0] - receiver[0])**2 +
                                 (solved[1] - receiver[1])**2 +
                                 (solved[2] - receiver[2])**2) / self.c) - time)**2
        return error

    def linear(self, times):
        X = array(self.receivers)
        t = array(times)
        x, y, z = X.T
        i, j = triu_indices(len(x), 1)
        A = 2 * (X[i] - X[j])
        b = self.c**2 * (t[j]**2 - t[i]**2) + (X[i]**2).sum(1) - (X[j]**2).sum(1)
        solved, residuals, rank, s = linalg.lstsq(A, b, rcond=None)
        return solved

    def find(self, times):
        # Normalize times
        times = [time - min(times) for time in times]
        # Fit the error function
        y = []
        x = []
        dt = 1E-10
        for i in range(50000):
            x.append(self.error(dt * i, times))
            y.append(dt * i)
        p = polyfit(array(x), array(y), 2)
        r = roots(p)
        return self.linear([time + r for time in times])
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792  # km/s
# Generate simulated arrival times
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
myVertexer = Vertexer([[x_1, y_1, z_1],[x_2, y_2, z_2],[x_3, y_3, z_3],[x_4, y_4, z_4]])
solution = myVertexer.find([t_1, t_2, t_3, t_4])
print(solution)
It seems like the Bancroft method applies to this problem? Here's a pure NumPy implementation.
# Implementation of the Bancroft method, following
# https://gssc.esa.int/navipedia/index.php/Bancroft_Method
# (x_1 ... t_4 and c are taken from the question's simulation code.)
import numpy as np

# Minkowski-style metric used for the Lorentz inner product
M = np.diag([1, 1, 1, -1])

def lorentz_inner(v, w):
    return np.sum(v * (w @ M), axis=-1)

B = np.array(
    [
        [x_1, y_1, z_1, c * t_1],
        [x_2, y_2, z_2, c * t_2],
        [x_3, y_3, z_3, c * t_3],
        [x_4, y_4, z_4, c * t_4],
    ]
)
one = np.ones(4)
a = 0.5 * lorentz_inner(B, B)
B_inv_one = np.linalg.solve(B, one)
B_inv_a = np.linalg.solve(B, a)
# Each root of the quadratic in Lambda yields one candidate solution.
for Lambda in np.roots(
    [
        lorentz_inner(B_inv_one, B_inv_one),
        2 * (lorentz_inner(B_inv_one, B_inv_a) - 1),
        lorentz_inner(B_inv_a, B_inv_a),
    ]
):
    x, y, z, c_t = M @ np.linalg.solve(B, Lambda * one + a)
    print("Candidate:", x, y, z, c_t / c)
My answer might have (glaring) mistakes, as I had not heard the term TDOA before this afternoon. Please double-check whether the method is right.
I could not find a solution to your original problem of finding the dt corresponding to the minimum error. My answer also deviates from the requirement that no third-party library other than NumPy be used (I used SymPy, and largely used the code from here). However, I am still posting this, thinking that somebody someday might find it useful if all one is interested in is finding the X, Y, Z of the source emitter. This method also does not account for real-life complications such as white noise, measurement errors, or the curvature of the earth.
Your initial test conditions are as below.
from random import randrange
import math
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792  # km/s
# Generate simulated arrival times
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
My solution is as below.
import sympy as sym
X,Y,Z = sym.symbols('X,Y,Z', real=True)
f = sym.Eq((x_1 - X)**2 +(y_1 - Y)**2 + (z_1 - Z)**2 , (c*t_1)**2)
g = sym.Eq((x_2 - X)**2 +(y_2 - Y)**2 + (z_2 - Z)**2 , (c*t_2)**2)
h = sym.Eq((x_3 - X)**2 +(y_3 - Y)**2 + (z_3 - Z)**2 , (c*t_3)**2)
i = sym.Eq((x_4 - X)**2 +(y_4 - Y)**2 + (z_4 - Z)**2 , (c*t_4)**2)
print("Solved coordinates are ", sym.solve([f,g,h,i],X,Y,Z))
The print statement from your initial conditions gave:
Actual: 111 553 110
and the solution that almost instantly came out was
Solved coordinates are [(111.000000000000, 553.000000000000, 110.000000000000)]
Sorry again if something is totally amiss.
I am trying to solve a problem that involves \sqrt{w^T \Sigma w} in the objective function. To compute w^T \Sigma w, I use the quad_form function. How do I take its square root?
When in the code I try to write
risk = sqrt(quad_form(w, E))
I get a DCP rule error, but I am pretty sure the expression is convex given the other constraints I have. So the question is not really about the maths but about the actual implementation of the convex program.
The problem I am trying to solve is
ret = mu.T*w
risk = sqrt(quad_form(w, E))
gamma.value = distr.pdf(distr.ppf(alpha)) / (1 - alpha)

minimizer = Minimize(-ret + risk * gamma)  # cvxpy.sqrt(risk) * gamma)
constraints = [w >= 0,
               b.T * log(w) >= k]

prob = Problem(minimizer, constraints)
prob.solve(solver='ECOS_BB', verbose=True)
In order to take the square root of the quadratic form, the matrix Sigma must be positive semidefinite. Compute a Cholesky factorization Sigma = Q.T * Q and then include the term norm(Q*w, 2) in your objective function.
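A minimal sketch of that suggestion, reusing the names from the question (E is Sigma, and mu, gamma, b, k, and the portfolio size n are assumed to be defined as in the original problem); note that np.linalg.cholesky requires E to be positive definite:

import numpy as np
import cvxpy as cp

L = np.linalg.cholesky(E)        # E = L @ L.T
Q = L.T                          # so E = Q.T @ Q

w = cp.Variable(n)
ret = mu.T @ w
risk = cp.norm(Q @ w, 2)         # equals sqrt(w^T E w) and is recognized as convex by DCP

objective = cp.Minimize(-ret + gamma * risk)
constraints = [w >= 0, b.T @ cp.log(w) >= k]
prob = cp.Problem(objective, constraints)
prob.solve(verbose=True)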
I'm attempting to implement Vincenty's inverse problem as described on wiki HERE
The problem is that lambda is simply not converging. The value stays the same if I try to iterate over the sequence of formulas, and I'm really not sure why. Perhaps I've just stared myself blind on an obvious problem.
It should be noted that I'm new to Python and still learning the language, so I'm not sure if it's misuse of the language that might cause the problem, or if I do have some mistakes in some of the calculations that I perform. I just can't seem to find any mistakes in the formulas.
Basically, I've written the code in as close a format to the wiki article as I could, and the result is this:
import math

# Length of radius at equator of the ellipsoid
a = 6378137.0
# Flattening of the ellipsoid
f = 1/298.257223563
# Length of radius at the poles of the ellipsoid
b = (1 - f) * a

# Latitude points
la1, la2 = 10, 60
# Longitude points
lo1, lo2 = 5, 150

# For the inverse problem, we calculate U1, U2 and L.
# We set the initial value of lamb = L
u1 = math.atan( (1 - f) * math.tan(la1) )
u2 = math.atan( (1 - f) * math.tan(la2) )
L = (lo2 - lo1) * 0.0174532925
lamb = L

while True:
    sinArc = math.sqrt( math.pow(math.cos(u2) * math.sin(lamb),2) + math.pow(math.cos(u1) * math.sin(u2) - math.sin(u1) * math.cos(u2) * math.cos(lamb),2) )
    cosArc = math.sin(u1) * math.sin(u2) + math.cos(u1) * math.cos(u2) * math.cos(lamb)
    arc = math.atan2(sinArc, cosArc)
    sinAzimuth = ( math.cos(u1) * math.cos(u2) * math.sin(lamb) ) // ( sinArc )
    cosAzimuthSqr = 1 - math.pow(sinAzimuth, 2)
    cosProduct = cosArc - ((2 * math.sin(u1) * math.sin(u2) ) // (cosAzimuthSqr))
    C = (f//16) * cosAzimuthSqr * (4 + f * (4 - 3 * cosAzimuthSqr))
    lamb = L + (1 - C) * f * sinAzimuth * ( arc + C * sinArc * ( cosProduct + C * cosArc * (-1 + 2 * math.pow(cosProduct, 2))))
    print(lamb)
As mentioned the problem is that the value "lamb" (lambda) will not become smaller. I've even tried to compare my code to other implementations, but they looked just about the same.
What am I doing wrong here? :-)
Thank you all!
First, you should convert your latitudes to radians too (you already do this for your longitudes):
u1 = math.atan( (1 - f) * math.tan(math.radians(la1)) )
u2 = math.atan( (1 - f) * math.tan(math.radians(la2)) )
L = math.radians((lo2 - lo1)) # better than * 0.0174532925
Once you do this and get rid of // (int divisions) and replace them by / (float divisions), lambda stops repeating the same value through your iterations and starts following this path (based on your example coordinates):
2.5325205864224847
2.5325167509030906
2.532516759118641
2.532516759101044
2.5325167591010813
2.5325167591010813
2.5325167591010813
As you seem to expect a convergence precision of 10^(-12), this seems to do the trick.
You can now exit the loop (lambda having converged) and keep going until you compute the desired geodesic distance s.
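For instance, a sketch of the loop with such an exit condition, using the corrected u1, u2, and L from above, float divisions, and an illustrative tolerance and iteration cap:

lamb = L
for _ in range(100):                        # safety cap on the iteration count
    lamb_prev = lamb
    sinArc = math.sqrt(math.pow(math.cos(u2) * math.sin(lamb), 2) +
                       math.pow(math.cos(u1) * math.sin(u2) -
                                math.sin(u1) * math.cos(u2) * math.cos(lamb), 2))
    cosArc = math.sin(u1) * math.sin(u2) + math.cos(u1) * math.cos(u2) * math.cos(lamb)
    arc = math.atan2(sinArc, cosArc)
    sinAzimuth = (math.cos(u1) * math.cos(u2) * math.sin(lamb)) / sinArc
    cosAzimuthSqr = 1 - math.pow(sinAzimuth, 2)
    cosProduct = cosArc - (2 * math.sin(u1) * math.sin(u2)) / cosAzimuthSqr
    C = (f / 16) * cosAzimuthSqr * (4 + f * (4 - 3 * cosAzimuthSqr))
    lamb = L + (1 - C) * f * sinAzimuth * (
        arc + C * sinArc * (cosProduct + C * cosArc * (-1 + 2 * math.pow(cosProduct, 2))))
    if abs(lamb - lamb_prev) < 1e-12:       # convergence reached
        break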
Note: you can test your final value s here.
Even if it is correctly implemented, Vincenty's algorithm will fail to converge for some points. (This problem was noted by Vincenty.) I give an algorithm which is guaranteed to converge in Algorithms for geodesics; there's a Python implementation available here. Finally, you can find more information on the problem at the Wikipedia page, Geodesics on an ellipsoid. (The talk page has examples of pairs of points for which Vincenty, as implemented by the NGS, fails to converge.)
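For reference, a minimal sketch using the geographiclib package, which implements the algorithm from Algorithms for geodesics, fed the coordinates from the question above:

from geographiclib.geodesic import Geodesic

# Inverse problem for the question's points, in degrees: (lat 10, lon 5) to (lat 60, lon 150)
result = Geodesic.WGS84.Inverse(10, 5, 60, 150)
print(result['s12'])                    # geodesic distance in metres
print(result['azi1'], result['azi2'])   # azimuths at the start and end points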