Background
I've been working for some time on attempting to solve the (notoriously painful) Time Difference of Arrival (TDoA) multilateration problem, in 3 dimensions and using 4 nodes. If you're unfamiliar with the problem, it is to determine the coordinates of some signal source (X, Y, Z), given the coordinates of n nodes, the time of arrival of the signal at each node, and the velocity of the signal, v.
My solution is as follows:
For each node, we write (X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2 = (v * (t_i - T))**2
Where (x_i, y_i, z_i) are the coordinates of the ith node, and T is the time of emission.
We now have 4 equations in 4 unknowns (X, Y, Z, and T); fewer than four nodes would clearly be insufficient. We could try to solve this system directly, however that seems next to impossible given the highly nonlinear nature of the problem (and, indeed, I've tried many direct techniques... and failed). Instead, we reduce this to a linear problem by considering all i/j pairs and subtracting equation i from equation j. We obtain n(n-1)/2 = 6 equations of the form:
2*(x_j - x_i)*X + 2*(y_j - y_i)*Y + 2*(z_j - z_i)*Z + 2*v**2*(t_i - t_j)*T = v**2*(t_i**2 - t_j**2) + (x_j**2 + y_j**2 + z_j**2) - (x_i**2 + y_i**2 + z_i**2)
Which looks like X*v_1 + Y*v_2 + Z*v_3 + T*v_4 = b. We now try to apply standard linear least squares, where the solution is the vector x satisfying A^T A x = A^T b. Unfortunately, if you were to try feeding this into any standard linear least squares routine, it'll choke up. So, what do we do now?
...
The time of arrival of the signal at node i is given (of course) by:
sqrt( (X-x_i)**2 + (Y-y_i)**2 + (Z-z_i)**2 ) / v
This equation assumes that the time of emission, T, is 0. If we have that T = 0, we can drop the T column from matrix A, and the problem is greatly simplified. Indeed, NumPy's linalg.lstsq() then gives a surprisingly accurate & precise result.
...
So, what I do is normalize the input times by subtracting the earliest time from each. All I have to do then is determine the dt that I can add to each time such that the summed squared residual for the point found by linear least squares is minimized.
I define the error for some dt as the squared difference between the arrival time predicted for the point found by feeding the input times + dt to the least squares algorithm, and the (normalized) input time, summed over all 4 nodes:
for (x_i, y_i, z_i), t_i in zip(nodes, times):
    error += ((math.sqrt((X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2) / v) - t_i) ** 2
My problem:
I was able to do this somewhat satisfactorily by brute force: I started at dt = 0 and moved by some step, up to some maximum number of iterations OR until some minimum RSS error was reached, and that was the dt I added to the normalized times to obtain a solution. The resulting solutions were very accurate and precise, but quite slow to compute.
In practice, I'd like to be able to solve this in real time, and therefore a far faster solution will be needed. I began with the assumption that the error function (that is, dt vs error as defined above) would be highly nonlinear-- offhand, this made sense to me.
Since I don't have an actual mathematical function, I can automatically rule out methods that require differentiation (e.g. Newton-Raphson). The error function is always positive, so I can rule out bisection, etc. Instead, I tried a simple approximation search. Unfortunately, that failed miserably. I then tried tabu search, followed by a genetic algorithm, and several others. They all failed horribly.
So, I decided to do some investigating. As it turns out, the plot of the error function vs. dt looks a bit like a square root, only shifted right by an amount that depends on the distance of the signal source from the nodes:
(Figure: dt on the horizontal axis, error on the vertical axis.)
And, in hindsight, of course it does! I defined the error function to involve square roots, so, at least to me, this seems reasonable.
What to do?
So, my issue now is, how do I determine the dt corresponding to the minimum of the error function?
My first (very crude) attempt was to sample some points on the error graph (as above), fit them using numpy.polyfit, then feed the result to numpy.roots. That root would correspond to the dt. Unfortunately, this failed, too. I tried fitting with various degrees, and also with various numbers of points, up to a ridiculous number of points such that I might as well just use brute force.
How can I determine the dt corresponding to the minimum of this error function?
Since we're dealing with high velocities (radio signals), it's important that the results be precise and accurate, as minor variances in dt can throw off the resulting point.
I'm sure that there's some infinitely simpler approach buried in what I'm doing here however, ignoring everything else, how do I find dt?
My requirements:
Speed is of utmost importance
I have access only to pure Python and NumPy in the environment where this will be run
EDIT:
Here's my code. Admittedly, a bit messy. Here, I'm using the polyfit technique. It will "simulate" a source for you, and compare results:
from numpy import array, linalg, triu_indices, roots, polyfit
from dataclasses import dataclass
from random import randrange
import math

@dataclass
class Vertexer:
    receivers: list

    # Defaults
    c = 299792

    # Receivers:
    #     [x_1, y_1, z_1]
    #     [x_2, y_2, z_2]
    #     [x_3, y_3, z_3]
    # Solved:
    #     [x, y, z]

    def error(self, dt, times):
        solved = self.linear([time + dt for time in times])

        error = 0
        for time, receiver in zip(times, self.receivers):
            error += ((math.sqrt((solved[0] - receiver[0])**2 +
                                 (solved[1] - receiver[1])**2 +
                                 (solved[2] - receiver[2])**2) / self.c) - time)**2
        return error

    def linear(self, times):
        X = array(self.receivers)
        t = array(times)

        x, y, z = X.T
        i, j = triu_indices(len(x), 1)

        A = 2 * (X[i] - X[j])
        b = self.c**2 * (t[j]**2 - t[i]**2) + (X[i]**2).sum(1) - (X[j]**2).sum(1)

        solved, residuals, rank, s = linalg.lstsq(A, b, rcond=None)
        return solved

    def find(self, times):
        # Normalize times
        times = [time - min(times) for time in times]

        # Sample the error function and fit a quadratic to it
        dts = []
        errs = []
        step = 1E-10
        for i in range(50000):
            dts.append(step * i)
            errs.append(self.error(step * i, times))

        p = polyfit(array(dts), array(errs), 2)
        # Take the larger real root of the fitted quadratic as the dt
        # (this is the part that fails; see the question text)
        r = max(roots(p).real)
        return self.linear([time + r for time in times])
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
myVertexer = Vertexer([[x_1, y_1, z_1],[x_2, y_2, z_2],[x_3, y_3, z_3],[x_4, y_4, z_4]])
solution = myVertexer.find([t_1, t_2, t_3, t_4])
print(solution)
It seems like the Bancroft method applies to this problem? Here's a pure NumPy implementation.
# Implementation of the Bancroft method, following
# https://gssc.esa.int/navipedia/index.php/Bancroft_Method
import numpy as np

M = np.diag([1, 1, 1, -1])

def lorentz_inner(v, w):
    return np.sum(v * (w @ M), axis=-1)

B = np.array(
    [
        [x_1, y_1, z_1, c * t_1],
        [x_2, y_2, z_2, c * t_2],
        [x_3, y_3, z_3, c * t_3],
        [x_4, y_4, z_4, c * t_4],
    ]
)
one = np.ones(4)
a = 0.5 * lorentz_inner(B, B)

B_inv_one = np.linalg.solve(B, one)
B_inv_a = np.linalg.solve(B, a)

for Lambda in np.roots(
    [
        lorentz_inner(B_inv_one, B_inv_one),
        2 * (lorentz_inner(B_inv_one, B_inv_a) - 1),
        lorentz_inner(B_inv_a, B_inv_a),
    ]
):
    x, y, z, c_t = M @ np.linalg.solve(B, Lambda * one + a)
    print("Candidate:", x, y, z, c_t / c)
My answer might have (glaring) mistakes, as I had not heard the term TDoA before this afternoon. Please double-check that the method is right.
I could not find a solution to your original problem of finding the dt corresponding to the minimum error. My answer also deviates from your requirement that no third-party library other than NumPy be used (I used SymPy, and largely used the code from here). However, I am still posting this, thinking that somebody someday might find it useful if all one is interested in is finding the X, Y, Z of the source emitter. This method also does not take into account real-life complications such as white noise, measurement errors, the curvature of the Earth, and so on.
Your initial test conditions are as below.
from random import randrange
import math
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
My solution is as below.
import sympy as sym
X,Y,Z = sym.symbols('X,Y,Z', real=True)
f = sym.Eq((x_1 - X)**2 +(y_1 - Y)**2 + (z_1 - Z)**2 , (c*t_1)**2)
g = sym.Eq((x_2 - X)**2 +(y_2 - Y)**2 + (z_2 - Z)**2 , (c*t_2)**2)
h = sym.Eq((x_3 - X)**2 +(y_3 - Y)**2 + (z_3 - Z)**2 , (c*t_3)**2)
i = sym.Eq((x_4 - X)**2 +(y_4 - Y)**2 + (z_4 - Z)**2 , (c*t_4)**2)
print("Solved coordinates are ", sym.solve([f,g,h,i],X,Y,Z))
The print statement from your initial conditions gave:
Actual: 111 553 110
and the solution that almost instantly came out was
Solved coordinates are [(111.000000000000, 553.000000000000, 110.000000000000)]
Sorry again if something is totally amiss.
Problem.
Consider the following problem: An experiment consists of three receivers, i, j and k, with known coordinates (xi, yi, zi), (xj, yj, zj) and (xk, yk, zk) in 3D space. A stationary transmitter, with unknown coordinates (x, y, z), emits a signal with known velocity v. The time of arrival of this signal at each receiver is recorded, giving ti, tj, tk. The signal emission time, t, is unknown.
Using only the coordinates of the receivers and the signal arrival times, I wish to determine the location of the transmitter.
In general, to solve for the coordinates of such a transmitter in N-dimensional space, N+1 receivers are required. Hence, in this case, a single, unique solution is unobtainable. A small, finite number of solutions should be obtainable via numerical methods, however.
We may write the following system of equations to model the problem:
Sqrt[(x-xi)^2 + (y-yi)^2 + (z-zi)^2] + v(tj-ti) = Sqrt[(x-xj)^2 + (y-yj)^2 + (z-zj)^2]
Sqrt[(x-xj)^2 + (y-yj)^2 + (z-zj)^2] + v(tk-tj) = Sqrt[(x-xk)^2 + (y-yk)^2 + (z-zk)^2]
Sqrt[(x-xi)^2 + (y-yi)^2 + (z-zi)^2] + v(tk-ti) = Sqrt[(x-xk)^2 + (y-yk)^2 + (z-zk)^2]
Each equation defines a hyperboloid. Under ideal conditions, these three hyperboloids will intersect at precisely two points: one being the "true" solution, and the other being a reflection of that solution about the plane defined by the three receivers. In practice, given sufficiently accurate measurements, numerical solvers should be able to approximate these points of intersection.
My goal is to determine both solutions. Although it is impossible to determine which is the "true" transmitter location, for my purposes this will be sufficient.
Implementation.
I wish to implement a solution in Python. I'm not terribly familiar with NumPy or SciPy, however I've done a fair bit of work in SymPy, so I began there.
SymPy offers a variety of solvers, most of which focus on obtaining solutions symbolically. Not surprisingly, solve() and the like failed to find a solution, even under simulated "ideal" conditions (picking a random point, calculating the time taken for a signal originating from that point to arrive at each receiver, and feeding this to the algorithm).
SymPy also offers a numerical solver, nsolve(). I gave this a go, using the approach given below; however (not surprisingly), I got the error ZeroDivisionError: Matrix is numerically singular.
import sympy as sym

x, y, z = sym.symbols('x, y, z', real=True)
# D_ij, D_ik, D_jk are the known range differences, e.g. D_ij = v*(t_i - t_j)
f = sym.Eq(sym.sqrt((x - x_i)**2 + (y - y_i)**2 + (z - z_i)**2) - sym.sqrt((x - x_j)**2 + (y - y_j)**2 + (z - z_j)**2), D_ij)
g = sym.Eq(sym.sqrt((x - x_i)**2 + (y - y_i)**2 + (z - z_i)**2) - sym.sqrt((x - x_k)**2 + (y - y_k)**2 + (z - z_k)**2), D_ik)
h = sym.Eq(sym.sqrt((x - x_j)**2 + (y - y_j)**2 + (z - z_j)**2) - sym.sqrt((x - x_k)**2 + (y - y_k)**2 + (z - z_k)**2), D_jk)
print("Soln. ", sym.nsolve((f, g, h), (x, y, z), (1, 1, 1)))
As I understand it, nsolve() relies on "matrix" techniques. A singular matrix is one which cannot be inverted (or, equivalently, which has a determinant of zero), hence SymPy is unable to solve the system in question.
My understanding of matrices and nonlinear system solving techniques is a bit lacking, however my understanding is that a singular matrix occurs when there are infinitely many solutions, no solutions, or more than one solution. Given that I know there to be exactly two solutions, I believe this is the issue.
What Python solvers are available that can be used to solve a nonlinear system with multiple solutions? Or, alternatively, is there a way to modify this such that it is digestible by SymPy?
Browsing the Numpy and SciPy docs, it seems that their solvers are effectively identical to what SymPy offers.
The (crude) test code mentioned below:
from random import randrange
import math
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Set range
N=10
P=100
# Pick nodes to be at random locations
x_1 = randrange(N); y_1 = randrange(N); z_1 = randrange(N)
x_2 = randrange(N); y_2 = randrange(N); z_2 = randrange(N)
x_3 = randrange(N); y_3 = randrange(N); z_3 = randrange(N)
# Pick source to be at random location
x = randrange(P); y = randrange(P); z = randrange(P)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
# Normalize times to remove information about 'true' emission time
earliest = min(t_1, t_2, t_3)
t_1 = t_1 - earliest; t_2 = t_2 - earliest; t_3 = t_3 - earliest
I am trying to write a program that uses an array in further calculations. I initialize a grid of equally spaced points with NumPy and assign a value at each point, as per the code snippet provided below. The function I am trying to describe with this array gives me a division-by-zero error at x = y, and it generally blows up around it. I know that the real part of said function is bounded by band_D/(2*math.pi) at x = y, and I tried manually assigning this value on the diagonal, but it seems that points around it are still ill-behaved, so I am not getting the right values. Is there a way to remedy this? This is how the function looks with matplotlib:
import math
import numpy as np

gamma = 5
band_D = 100
Dt = 1e-3
x = np.arange(0, 1/gamma, Dt)
y = np.arange(0, 1/gamma, Dt)
xx, yy = np.meshgrid(x, y)
N = x.shape[0]
di = np.diag_indices(N)
time_fourier = (1j/2*math.pi) * (1 - np.exp(1j*band_D*(xx - yy))) / (xx - yy)
time_fourier[di] = band_D/(2*math.pi)
You have a classic 0/0 problem. It's not really NumPy's job to figure out how to apply L'Hôpital's rule and solve this for you... I see, as others have commented, that you had the right idea in trying to set the limit value on the diagonal (where x ≈ y), but by the time you hit that line, the warning has already been emitted (just a warning, BTW, not an exception).
For a quick fix (but a bit of a fudge), in this case, you can try adding a small value to the difference:
xy = xx - yy + 1e-100
num = (1j / 2*np.pi) * (1 - np.exp(1j * band_D * xy))
time_fourier = num / xy
This also reveals that there is something wrong with your limit calculation: time_fourier[0,0] is approximately 157.0796, not 15.91549. That is, the limit is band_D * pi / 2 and not band_D / (2*math.pi).
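Here's a minimal numeric check of that limit, evaluated just off the diagonal (band_D = 100, as in the question):

import numpy as np

band_D = 100
u = 1e-8  # a tiny x - y offset

val = (1j / 2 * np.pi) * (1 - np.exp(1j * band_D * u)) / u
print(val.real)                # ~157.0796
print(band_D * np.pi / 2)      # 157.0796..., the actual limit
print(band_D / (2 * np.pi))    # 15.9154..., the value assumed in the question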
For a correct calculation:
def f(xy):
    mask = xy != 0
    limit = band_D * np.pi/2
    return np.where(mask,
                    np.divide((1j/2 * np.pi) * (1 - np.exp(1j * band_D * xy)), xy, where=mask),
                    limit)

time_fourier = f(xx - yy)
You are dividing by x - y, and that will definitely throw an error when x = y. The function being well behaved here means that its Taylor series doesn't diverge, but Python doesn't know or care about that; it just calculates one step at a time until it reaches a division by 0.
You had the right idea in defining a different value when x = y (i.e., the mathematically true limit), but your way of applying it doesn't work, because the correction comes AFTER the division by 0, so it is never reached. This, however, should work:
def make_time_fourier(x, y):
    if np.isclose(x, y):
        return band_D/(2*math.pi)
    else:
        return (1j/2*math.pi)*(1 - np.exp(1j*band_D*(x - y)))/(x - y)

time_fourier = np.vectorize(make_time_fourier)(xx, yy)
print(time_fourier)
You can use np.divide with the where option:

import math
import numpy as np

gamma = 5
band_D = 100
Dt = 1e-3
x = np.arange(0, 1/gamma, Dt)
y = np.arange(0, 1/gamma, Dt)
xx, yy = np.meshgrid(x, y)
N = x.shape[0]
di = np.diag_indices(N)
time_fourier = (1j / 2 * np.pi) * (1 - np.exp(1j * band_D * (xx - yy)))
time_fourier = np.divide(time_fourier,
                         (xx - yy),
                         where=(xx - yy) != 0)
time_fourier[di] = band_D / (2 * np.pi)
You can reformulate your function so that the division is inside the (numpy) sinc function, which handles it correctly.
To save typing, I'll use D for band_D and introduce the variable
z = D*(xx-yy)/2
so that xx - yy = 2*z/D. Then
T = (1j/2*pi)*(1 - np.exp(1j*D*(xx-yy)))/(xx-yy)
  = (D/2)*(1j/2*pi)*(1 - cos(2*z) - 1j*sin(2*z))/z
  = (1j*D*pi/4)*(2*sin(z)*sin(z) - 2j*sin(z)*cos(z))/z
  = (1j*D*pi/2)*(sin(z)/z)*(sin(z) - 1j*cos(z))
  = (1j*D*pi/2)*sinc(z/pi)*(sin(z) - 1j*cos(z))
NumPy defines sinc(x) to be sin(pi*x)/(pi*x).
I can't run Python here, so you should check my calculations.
The steps are
Substitute the definition of z and expand the complex exp
Apply the double angle formulae for sin and cos
Factor out sin(z)
Substitute the definition of sinc
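Here's a short NumPy sketch of that reformulation, with the array setup copied from the question; the diagonal entry confirms the band_D*pi/2 limit discussed in the other answer:

import numpy as np

gamma, band_D, Dt = 5, 100, 1e-3
x = np.arange(0, 1/gamma, Dt)
xx, yy = np.meshgrid(x, x)

# The division by (xx - yy) now lives inside np.sinc, which evaluates
# to 1 at zero instead of dividing by zero.
z = band_D * (xx - yy) / 2
time_fourier = (1j * band_D * np.pi / 2) * np.sinc(z / np.pi) * (np.sin(z) - 1j * np.cos(z))

print(time_fourier[0, 0])  # (157.0796...+0j), i.e. band_D * pi / 2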
I am attempting to plot the bifurcation diagram for the following one-dimensional spatially extended system with boundary conditions:
x[i,n+1] = (1-eps)*(r*x[i,n]*(1-x[i,n])) + 0.5*eps*( r*x[i-1,n]*(1-x[i-1,n]) + r*x[i+1,n]*(1-x[i+1,n])) + p
I am having trouble getting the desired output figure, possibly because of the number of transients I am using. Can someone help me out by cross-checking my code? What values of nTransients should I choose, or how many transients should I ignore?
My Python code is as follows:
import numpy as np
from numpy import *
from pylab import *

L = 60      # no. of lattice sites
eps = 0.6   # diffusive coupling strength
r = 4.0     # control parameter r
np.random.seed(1010)
ic = np.random.uniform(0.1, 0.9, L)  # random initial condition betn. (0,1)
nTransients = 900  # The iterates we'll throw away
nIterates = 1000   # This sets how much the attractor is filled in
nSteps = 400       # This sets how dense the bifurcation diagram will be
pLow = -0.4
pHigh = 0.0
pInc = (pHigh - pLow) / float(nSteps)

def LM(p, x):
    x_new = []
    for i in range(L):
        if i == 0:
            x_new.append((1-eps)*(r*x[i]*(1-x[i])) + 0.5*eps*(r*x[L-1]*(1-x[L-1]) + r*x[i+1]*(1-x[i+1])) + p)
        elif i == L-1:
            x_new.append((1-eps)*(r*x[i]*(1-x[i])) + 0.5*eps*(r*x[i-1]*(1-x[i-1]) + r*x[0]*(1-x[0])) + p)
        elif i > 0 and i < L-1:
            x_new.append((1-eps)*(r*x[i]*(1-x[i])) + 0.5*eps*(r*x[i-1]*(1-x[i-1]) + r*x[i+1]*(1-x[i+1])) + p)
    return x_new

for p in arange(pLow, pHigh, pInc):
    # set initial conditions
    state = ic
    # throw away the transient iterations
    for i in range(nTransients):
        state = LM(p, state)
    # now store the next batch of iterates
    psweep = []  # store p values
    x = []       # store iterates
    for i in range(nIterates):
        state = LM(p, state)
        psweep.append(p)
        x.append(state[L//2 - 1])  # integer index (L/2 would be a float in Python 3)
    plot(psweep, x, 'k,')  # Plot the list of (p, x) pairs as pixels

xlabel('Pinning Strength p')
ylabel('X(L/2)')
# Display plot in window
show()
Can someone also tell me whether the figure pylab displays at the end uses dots or lines as markers? If it is lines, how do I get a plot with dots?
This is my output image for reference, after using pixels:
It still isn't clear exactly what your desired output is, but I'm guessing you're aiming for something that looks like this image from Wikipedia:
Going with that assumption, I gave it my best shot, but I'm guessing your equations (with the boundary conditions and so on) give you something that simply doesn't look quite that pretty. Here's my result:
This plot by itself may not look like the best thing ever, however, if you zoom in, you can really see some beautiful detail (this is right from the center of the plot, where the two arms of the bifurcation come down, meet, and then branch away again):
Note that I have used horizontal lines with alpha=0.1 (originally you were using solid, vertical lines, which is why the result didn't look good).
The code!
I essentially modified your program a little to vectorize it: I removed the for loop over p, which makes the whole thing run almost instantaneously. This enabled me to use a much denser sampling for p, and allowed me to plot horizontal lines.
from __future__ import print_function, division
import numpy as np
import matplotlib.pyplot as plt

L = 60      # no. of lattice sites
eps = 0.6   # diffusive coupling strength
r = 4.0     # control parameter r
np.random.seed(1010)
ic = np.random.uniform(0.1, 0.9, L)  # random initial condition betn. (0,1)
nTransients = 100  # The iterates we'll throw away
nIterates = 100    # This sets how much the attractor is filled in
nSteps = 4000      # This sets how dense the bifurcation diagram will be
pLow = -0.4
pHigh = 0.0
pInc = (pHigh - pLow) / nSteps

def LM(p, x):
    x_new = np.empty(x.shape)
    for i in range(L):
        if i == 0:
            x_new[i] = ((1 - eps) * (r * x[i] * (1 - x[i])) + 0.5 * eps * (r * x[L - 1] * (1 - x[L - 1]) + r * x[i + 1] * (1 - x[i + 1])) + p)
        elif i == L - 1:
            x_new[i] = ((1 - eps) * (r * x[i] * (1 - x[i])) + 0.5 * eps * (r * x[i - 1] * (1 - x[i - 1]) + r * x[0] * (1 - x[0])) + p)
        elif i > 0 and i < L - 1:
            x_new[i] = ((1 - eps) * (r * x[i] * (1 - x[i])) + 0.5 * eps * (r * x[i - 1] * (1 - x[i - 1]) + r * x[i + 1] * (1 - x[i + 1])) + p)
    return x_new

p = np.arange(pLow, pHigh, pInc)
state = np.tile(ic[:, np.newaxis], (1, p.size))  # set initial conditions

# throw away the transient iterations
for i in range(nTransients):
    state = LM(p, state)

# now store the next batch of iterates
x = np.empty((p.size, nIterates))  # store iterates
for i in range(nIterates):
    state = LM(p, state)
    x[:, i] = state[L // 2 - 1]

# Plot the list of (p, x) pairs as pixels
plt.plot(p, x, c=(0, 0, 0, 0.1))
plt.xlabel('Pinning Strength p')
plt.ylabel('X(L/2)')
# Display plot in window
plt.show()
I don't want to try explaining the whole program to you: I've used a few standard numpy tricks, including broadcasting, but otherwise, I have not modified much. I've not modified your LM function at all.
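(One quick illustration of the broadcasting, in case it helps: state has shape (L, p.size), so the scalar arithmetic in LM applies elementwise, and adding the 1-D array p broadcasts it across every lattice row. Toy shapes below, not the real sizes:)

import numpy as np

state = np.zeros((3, 5))       # pretend L = 3 lattice sites, 5 p-values
p = np.linspace(-0.4, 0.0, 5)
print((state + p).shape)       # (3, 5): p is added to every row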
Please don't hesitate to ask me in the comments if you have any questions! I'm happy to explain specifics that you want explained.
A note on transients and iterates: hopefully, now that the program runs much faster, you can play with these parameters yourself. To me, the number of transients seemed to decide how long the plot remained "deterministic-looking". The number of iterates just increases the density of the plot lines, so increasing it beyond a point didn't seem to make sense to me.
I tried increasing the number of transients all the way up till 10,000. Here's my result from that experiment, for your reference:
I'm trying to implement a multiclass logistic regression classifier that distinguishes between k different classes.
This is my code.
import numpy as np
from scipy.special import expit

def cost(X, y, theta, regTerm):
    (m, n) = X.shape
    J = (np.dot(-(y.T), np.log(expit(np.dot(X, theta)))) - np.dot((np.ones((m, 1)) - y).T, np.log(np.ones((m, 1)) - (expit(np.dot(X, theta))).reshape((m, 1))))) / m + (regTerm / (2 * m)) * np.linalg.norm(theta[1:])
    return J

def gradient(X, y, theta, regTerm):
    (m, n) = X.shape
    grad = np.dot(((expit(np.dot(X, theta))).reshape(m, 1) - y).T, X) / m + (np.concatenate(([0], theta[1:].T), axis=0)).reshape(1, n)
    return np.asarray(grad)

def train(X, y, regTerm, learnRate, epsilon, k):
    (m, n) = X.shape
    theta = np.zeros((k, n))
    for i in range(0, k):
        previousCost = 0
        currentCost = cost(X, y, theta[i, :], regTerm)
        while np.abs(currentCost - previousCost) > epsilon:
            print(theta[i, :])
            theta[i, :] = theta[i, :] - learnRate * gradient(X, y, theta[i, :], regTerm)
            print(theta[i, :])
            previousCost = currentCost
            currentCost = cost(X, y, theta[i, :], regTerm)
    return theta

trX = np.load('trX.npy')
trY = np.load('trY.npy')
theta = train(trX, trY, 2, 0.1, 0.1, 4)
I can verify that cost and gradient are returning values of the right dimension (cost returns a scalar, and gradient returns a 1-by-n row vector), but I get the warning
RuntimeWarning: divide by zero encountered in log
J = (np.dot(-(y.T),np.log(expit(np.dot(X,theta))))-np.dot((np.ones((m,1))-y).T,np.log(np.ones((m,1)) - (expit(np.dot(X,theta))).reshape((m,1))))) / m + (regTerm / (2 * m)) * np.linalg.norm(theta[1:])
Why is this happening, and how can I avoid it?
The proper solution here is to add some small epsilon to the argument of the log function. What worked for me was:

import numpy as np
from scipy.special import expit

epsilon = 1e-5

def cost(X, y, theta):
    m = X.shape[0]
    yp = expit(X @ theta)
    cost = -np.average(y * np.log(yp + epsilon) + (1 - y) * np.log(1 - yp + epsilon))
    return cost
You can clean up the formula by appropriately using broadcasting, the operator * for elementwise products of vectors, and the operator @ for matrix multiplication, and by breaking it up as suggested in the comments.
Here is your cost function:
def cost(X, y, theta, regTerm):
    m = X.shape[0]  # or y.shape[0]: the number of training examples
    p = expit(X @ theta)
    log_loss = -np.average(y*np.log(p) + (1-y)*np.log(1-p))
    J = log_loss + regTerm * np.linalg.norm(theta[1:]) / (2*m)
    return J
You can clean up your gradient function along the same lines.
By the way, are you sure you want np.linalg.norm(theta[1:])? If you're trying to do L2 regularization, the term should be np.linalg.norm(theta[1:]) ** 2.
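In code, with the cleaned-up function above otherwise unchanged, that would be (a sketch of the suggested change, not tested against your data):

def cost(X, y, theta, regTerm):
    m = X.shape[0]
    p = expit(X @ theta)
    log_loss = -np.average(y*np.log(p) + (1-y)*np.log(1-p))
    return log_loss + regTerm * np.linalg.norm(theta[1:]) ** 2 / (2*m)  # squared L2 penalty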
Cause:
This is happening because, in some cases, whenever y[i] is equal to 1, the value of the sigmoid expit(np.dot(X, theta)) also becomes equal to 1.
Cost function:
J = (np.dot(-(y.T),np.log(expit(np.dot(X,theta))))-np.dot((np.ones((m,1))-y).T,np.log(np.ones((m,1)) - (expit(np.dot(X,theta))).reshape((m,1))))) / m + (regTerm / (2 * m)) * np.linalg.norm(theta[1:])
Now, consider the following part in the above code snippet:
np.log(np.ones((m,1)) - (expit(np.dot(X,theta))).reshape((m,1)))
Here, you are computing 1 minus that sigmoid value; when the sigmoid value is 1, this effectively becomes log(1 - 1) = log(0), which is undefined.
I'm guessing your data has negative values in it. You can't take the log of a negative number.
import numpy as np
np.log(2)
> 0.69314718055994529
np.log(-2)
> nan
There are a lot of different ways to transform your data that should help, if this is the case.
def cost(X, y, theta):
    yp = expit(X @ theta)
    cost = -np.average(y * np.log(yp) + (1 - y) * np.log(1 - yp))
    return cost

The warning originates from np.log(yp) when yp == 0 and from np.log(1 - yp) when yp == 1. One option is to filter out these values and not pass them into np.log. The other option is to add a small constant to prevent the value from being exactly 0 or 1 (as suggested in one of the comments above).
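For instance, here is a minimal sketch of that second option using np.clip (the eps value is my own choice):

import numpy as np
from scipy.special import expit

def cost(X, y, theta, eps=1e-15):
    # Clamp predictions away from exactly 0 and 1 before taking logs
    yp = np.clip(expit(X @ theta), eps, 1 - eps)
    return -np.average(y * np.log(yp) + (1 - y) * np.log(1 - yp))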
Add a small epsilon value to the argument of the log so that it won't be a problem at all. But I am not sure whether it will give accurate results or not.
I'd like to implement Euler's method (the explicit and the implicit one)
(https://en.wikipedia.org/wiki/Euler_method) for the following model:
x'(t) = q(x_M - x(t))x(t)
x(0) = x_0
where q, x_M and x_0 are real numbers.
I know already the (theoretical) implementation of the method. But I couldn't figure out where I can insert / change the model.
Could anybody help?
EDIT: You were right; I didn't understand the method correctly. Now, after a few hours, I think I've really got it! With the explicit method, I'm pretty sure it's right (nevertheless: could anybody please have a look at my code?).
With the implicit implementation, I'm not very sure it's correct. Could anyone please look at the implementation of the implicit method and give me feedback on what's correct / not good?
def explizit_euler():
    ''' x'(t) = q(xM - x(t))x(t)
        x(0) = x0 '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        x = x + dt * (q * (xM - x) * x)
        print('%6.3f %6.3f' % (t, x))

def implizit_euler():
    ''' x'(t) = q(xM - x(t))x(t)
        x(0) = x0 '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        x = (1.0 / (1.0 - q * (xM + x) * x))
        print('%6.3f %6.3f' % (t, x))
Pre-emptive note: although the general idea should be correct, I did all the algebra in place in the editor box, so there might be mistakes there. Please check it yourself before using it for anything really important.
I'm not sure how you arrived at the "implicit" formula
x = (1.0 / (1.0 - q *(xM + x) * x))
but this is wrong, and you can check it by comparing your "explicit" and "implicit" results: they should diverge only slightly, but with this formula they diverge drastically.
To understand the implicit Euler method, you should first get the idea behind the explicit one. And the idea is really simple and is explained in the Derivation section of the wiki: since the derivative y'(x) is a limit of (y(x+h) - y(x))/h, you can approximate y(x+h) as y(x) + h*y'(x) for small h, assuming our original differential equation is
y'(x) = F(x, y(x))
Note that the reason this is only an approximation rather than exact value is that even over small range [x, x+h] the derivative y'(x) changes slightly. It means that if you want to get a better approximation of y(x+h), you need a better approximation of "average" derivative y'(x) over the range [x, x+h]. Let's call that approximation just y'. One idea of such improvement is to find both y' and y(x+h) at the same time by saying that we want to find such y' and y(x+h) that y' would be actually y'(x+h) (i.e. the derivative at the end). This results in the following system of equations:
y'(x+h) = F(x+h, y(x+h))
y(x+h) = y(x) + h*y'(x+h)
which is equivalent to a single "implicit" equation:
y(x+h) - y(x) = h * F(x+h, y(x+h))
It is called "implicit" because here the target y(x+h) is also a part of F. And note that quite similar equation is mentioned in the Modifications and extensions section of the wiki article.
So now going to your case that equation becomes
x(t+dt) - x(t) = dt*q*(xM -x(t+dt))*x(t+dt)
or equivalently
dt*q*x(t+dt)^2 + (1 - dt*q*xM)*x(t+dt) - x(t) = 0
This is a quadratic equation with two solutions:
x(t+dt) = [(dt*q*xM - 1) ± sqrt((dt*q*xM - 1)^2 + 4*dt*q*x(t))]/(2*dt*q)
Obviously we want the solution that is "close" to the x(t) which is the + solution. So the code should be something like:
b = (q * xM * dt - 1)
x(t+h) = (b + (b ** 2 + 4 * q * x(t) * dt) ** 0.5) / 2 / q / dt
(editor note:) Applying the binomial complement, this formula has the numerically more stable form for small dt, where then b < 0,
x(t+h) = (2 * x(t)) / ((b ** 2 + 4 * q * x(t) * dt) ** 0.5 - b)
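For completeness, here is a minimal sketch of implizit_euler() using this stable step (parameter values copied from the question's code):

def implizit_euler():
    ''' x'(t) = q(xM - x(t))x(t), x(0) = x0, via the stable implicit step above '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    b = q * xM * dt - 1   # b < 0 for small dt
    x = x0
    t = 0.
    for i in range(int(T / dt)):
        t = t + dt
        x = 2 * x / ((b ** 2 + 4 * q * x * dt) ** 0.5 - b)
        print('%6.3f %6.3f' % (t, x))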