I am implementing the Verlet algorithm for a double-well potential V(x) = x^4 - 20x^2, so as to create a simple phase portrait. The generated phase portrait has an augmented oval shape and is clearly incorrect. I have a feeling that my problem is occurring in my definition of the x^3 term, but I am not sure. I have also included the algorithm for a classical harmonic oscillator to show that my code works correctly.
import numpy as np
import matplotlib.pyplot as plt
###Constants
w = 2
m=1
N=500
dt=0.05
t = np.linspace(0, N*dt, N+1)
np.shape(t)
x = np.zeros(N+1)
p = np.zeros(N+1)
p_0 = 0
x_0 = 1
x[0] = x_0
p[0] = p_0
#Velocity Verlet (Tuckerman)
#x(dt) = x(0) + p(0)/m*dt + dt^2/(2m) * F(x(0))
#p(dt) = p(0) + dt/2*[F(x(0)) + F(x(dt))]
#Harmonic Oscillator: F(x) = -kx = -m*w^2*x
for n in range(N):
    x[n+1] = x[n] + (p[n]/m)*dt - 0.5*w**2*x[n]*dt*dt
    p[n+1] = p[n] - m*0.5*w**2*x[n]*dt - m*0.5*w**2*x[n+1]*dt
plt.plot(x,p)
#Symmetric Double Well: F(x) = -4x^3 + 40x
#V(x) = x^4 -20x^2
for n in range(N):
    x[n+1] = x[n] + (p[n]/m)*dt + 1/(2*m)*(-4*(x[n]*x[n]*x[n])*dt*dt + 40*x[n]*dt*dt)
    p[n+1] = p[n] + 0.5*(-4*m*(x[n]*x[n]*x[n])*dt + 40*m*x[n]*dt - 4*m*(x[n+1]*x[n+1]*x[n+1])*dt + 40*m*x[n+1]*dt)
plt.plot(x,p)
Thanks!
To be more precise: V has minima at x = ±sqrt(10) with value -100, and the local maximum at x = 0 has value 0. The initial condition x0 = 1, v0 = 0 places the solution in the right valley, oscillating around sqrt(10).
To get a figure-8 shape you need an initial point with V(x0) slightly larger than zero. For instance, with x0 = 5 one gets V = 25*(25 - 20) = 125. Or take x0 = 4.5 ==> x0^2 = 20.25 ==> V ~ 5.
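A minimal sketch along these lines (my own re-run of the velocity-Verlet loop from the question, with the initial position moved to x0 = 4.5 so that V(x0) ~ 5 > 0):
import numpy as np
import matplotlib.pyplot as plt

m, N, dt = 1.0, 5000, 0.005
x = np.zeros(N + 1)
p = np.zeros(N + 1)
x[0], p[0] = 4.5, 0.0              # V(4.5) ~ 5 > 0, so the orbit crosses the barrier

def F(q):                          # F = -dV/dx for V(x) = x**4 - 20*x**2
    return -4.0 * q**3 + 40.0 * q

for n in range(N):
    x[n+1] = x[n] + (p[n] / m) * dt + F(x[n]) / (2 * m) * dt * dt
    p[n+1] = p[n] + 0.5 * (F(x[n]) + F(x[n+1])) * dt

plt.plot(x, p)                     # pinched, figure-8-like orbit spanning both wells
plt.show()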
Background
I've been working for some time on attempting to solve the (notoriously painful) Time Difference of Arrival (TDoA) multi-lateration problem, in 3-dimensions and using 4 nodes. If you're unfamiliar with the problem, it is to determine the coordinates of some signal source (X,Y,Z), given the coordinates of n nodes, the time of arrival of the signal at each node, and the velocity of the signal v.
My solution is as follows:
For each node, we write (X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2 = (v*(t_i - T))**2
Where (x_i, y_i, z_i) are the coordinates of the ith node, and T is the time of emission.
We now have 4 equations in 4 unknowns (X, Y, Z, and T). We could try to solve this system directly, however that seems next to impossible given the highly nonlinear nature of the problem (and, indeed, I've tried many direct techniques... and failed). Instead, we simplify this to a linear problem by considering all i/j possibilities, subtracting equation i from equation j. We obtain (n(n-1))/2 = 6 equations of the form:
2*(x_j - x_i)*X + 2*(y_j - y_i)*Y + 2*(z_j - z_i)*Z + 2*v**2*(t_i - t_j)*T = v**2*(t_i**2 - t_j**2) + (x_j**2 + y_j**2 + z_j**2) - (x_i**2 + y_i**2 + z_i**2)
These look like X*v_1 + Y*v_2 + Z*v_3 + T*v_4 = b. We now try to apply standard linear least squares, where the solution is the vector x satisfying A^T A x = A^T b. Unfortunately, if you try feeding this into any standard linear least squares algorithm, it chokes up. So, what do we do now?
...
The time of arrival of the signal at node i is given (of course) by:
sqrt( (X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2 ) / v
This expression implicitly assumes that the time of emission, T, is 0. If we have that T = 0, we can drop the T column in matrix A and the problem is greatly simplified. Indeed, NumPy's linalg.lstsq() then gives a surprisingly accurate and precise result.
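A minimal sketch of that simplification (synthetic nodes and source of my own choosing; the linear() method in the code further down builds the same system): with T = 0, the matrix A keeps only the three X, Y, Z columns and lstsq recovers the source directly.
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 10, (4, 3))            # 4 receiver positions (hypothetical)
src = np.array([111.0, 553.0, 110.0])         # known source, only for checking
v = 299792.0                                  # signal speed
t = np.linalg.norm(nodes - src, axis=1) / v   # arrival times with T = 0

i, j = np.triu_indices(len(nodes), 1)         # all 6 i/j pairs
A = 2 * (nodes[i] - nodes[j])                 # T column dropped: only X, Y, Z remain
b = v**2 * (t[j]**2 - t[i]**2) + (nodes[i]**2).sum(1) - (nodes[j]**2).sum(1)
print(np.linalg.lstsq(A, b, rcond=None)[0])   # ~ [111. 553. 110.]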
...
So, what I do is normalize the input times by subtracting from each equation the earliest time. All I have to do then is determine the dt that I can add to each time such that the residual of summed squared error for the point found by linear least squares is minimized.
I define the error for some dt as follows: feed the input times + dt to the least squares algorithm, compute the arrival time at each node predicted for the resulting point, subtract the (normalized) input time, square, and sum over all 4 nodes:
for (x_i, y_i, z_i), time in zip(nodes, times):
    error += ((sqrt((X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2) / v) - time)**2
My problem:
I was able to do this somewhat satisfactorily using brute force. I started at dt = 0 and moved by some step, up to some maximum number of iterations OR until some minimum RSS error was reached, and that final dt was what I added to the normalized times to obtain a solution. The resulting solutions were very accurate and precise, but quite slow.
In practice, I'd like to be able to solve this in real time, and therefore a far faster solution will be needed. I began with the assumption that the error function (that is, dt vs error as defined above) would be highly nonlinear-- offhand, this made sense to me.
Since I don't have an actual, mathematical function, I can automatically rule out methods that require differentiation (e.g. Newton-Raphson). The error function will always be positive, so I can rule out bisection, etc. Instead, I try a simple approximation search. Unfortunately, that failed miserably. I then tried Tabu search, followed by a genetic algorithm, and several others. They all failed horribly.
So, I decided to do some investigating. As it turns out, the plot of the error function vs. dt looks a bit like a square root curve, shifted right by an amount depending on the distance of the signal source from the nodes:
[Plot: dt on the horizontal axis, error on the vertical axis]
And, in hindsight, of course it does! I defined the error function in terms of square roots, so, at least to me, this seems reasonable.
What to do?
So, my issue now is, how do I determine the dt corresponding to the minimum of the error function?
My first (very crude) attempt was to get some points on the error graph (as above), fit it using numpy.polyfit, then feed the results to numpy.roots. That root corresponds to the dt. Unfortunately, this failed too. I tried fitting with various degrees and with various numbers of points, up to a ridiculous number of points, such that I may as well just use brute force.
How can I determine the dt corresponding to the minimum of this error function?
Since we're dealing with high velocities (radio signals), it's important that the results be precise and accurate, as minor variances in dt can throw off the resulting point.
I'm sure that there's some infinitely simpler approach buried in what I'm doing here; however, ignoring everything else, how do I find dt?
My requirements:
Speed is of utmost importance
I have access only to pure Python and NumPy in the environment where this will be run
EDIT:
Here's my code. Admittedly, a bit messy. Here, I'm using the polyfit technique. It will "simulate" a source for you, and compare results:
from numpy import poly1d, linspace, set_printoptions, array, linalg, triu_indices, roots, polyfit
from dataclasses import dataclass
from random import randrange
import math

@dataclass
class Vertexer:
    receivers: list

    # Defaults
    c = 299792

    # Receivers:
    # [x_1, y_1, z_1]
    # [x_2, y_2, z_2]
    # [x_3, y_3, z_3]
    # Solved:
    # [x, y, z]

    def error(self, dt, times):
        solved = self.linear([time + dt for time in times])
        error = 0
        for time, receiver in zip(times, self.receivers):
            error += ((math.sqrt((solved[0] - receiver[0])**2 +
                                 (solved[1] - receiver[1])**2 +
                                 (solved[2] - receiver[2])**2) / self.c) - time)**2
        return error

    def linear(self, times):
        X = array(self.receivers)
        t = array(times)
        x, y, z = X.T
        i, j = triu_indices(len(x), 1)
        A = 2 * (X[i] - X[j])
        b = self.c**2 * (t[j]**2 - t[i]**2) + (X[i]**2).sum(1) - (X[j]**2).sum(1)
        solved, residuals, rank, s = linalg.lstsq(A, b, rcond=None)
        return solved

    def find(self, times):
        # Normalize times
        times = [time - min(times) for time in times]
        # Sample the error function, then fit dt as a function of error
        # (the crude polyfit approach described above)
        y = []
        x = []
        dt = 1E-10
        for i in range(50000):
            x.append(self.error(dt * i, times))
            y.append(dt * i)
        p = polyfit(array(x), array(y), 2)
        r = roots(p)
        return self.linear([time + r for time in times])
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
myVertexer = Vertexer([[x_1, y_1, z_1],[x_2, y_2, z_2],[x_3, y_3, z_3],[x_4, y_4, z_4]])
solution = myVertexer.find([t_1, t_2, t_3, t_4])
print(solution)
It seems like the Bancroft method applies to this problem? Here's a pure NumPy implementation.
# Implementation of the Bancroft method, following
# https://gssc.esa.int/navipedia/index.php/Bancroft_Method
import numpy as np

M = np.diag([1, 1, 1, -1])

def lorentz_inner(v, w):
    return np.sum(v * (w @ M), axis=-1)

B = np.array(
    [
        [x_1, y_1, z_1, c * t_1],
        [x_2, y_2, z_2, c * t_2],
        [x_3, y_3, z_3, c * t_3],
        [x_4, y_4, z_4, c * t_4],
    ]
)
one = np.ones(4)
a = 0.5 * lorentz_inner(B, B)
B_inv_one = np.linalg.solve(B, one)
B_inv_a = np.linalg.solve(B, a)
for Lambda in np.roots(
    [
        lorentz_inner(B_inv_one, B_inv_one),
        2 * (lorentz_inner(B_inv_one, B_inv_a) - 1),
        lorentz_inner(B_inv_a, B_inv_a),
    ]
):
    x, y, z, c_t = M @ np.linalg.solve(B, Lambda * one + a)
    print("Candidate:", x, y, z, c_t / c)
My answer might have (glaring) mistakes, as I had not heard the term TDOA before this afternoon. Please double-check that the method is right.
I could not find a solution to your original problem of finding the dt corresponding to the minimum error. My answer also deviates from the requirement that no third-party library other than NumPy be used (I used SymPy, and largely used the code from here). However, I am still posting this, thinking that somebody someday might find it useful if all one is interested in ... is to find the X, Y, Z of the source emitter. This method also does not take into account real-life situations where white noise or measurement errors might be present, or complications such as the curvature of the earth.
Your initial test conditions are as below.
from random import randrange
import math
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
My solution is as below.
import sympy as sym
X,Y,Z = sym.symbols('X,Y,Z', real=True)
f = sym.Eq((x_1 - X)**2 +(y_1 - Y)**2 + (z_1 - Z)**2 , (c*t_1)**2)
g = sym.Eq((x_2 - X)**2 +(y_2 - Y)**2 + (z_2 - Z)**2 , (c*t_2)**2)
h = sym.Eq((x_3 - X)**2 +(y_3 - Y)**2 + (z_3 - Z)**2 , (c*t_3)**2)
i = sym.Eq((x_4 - X)**2 +(y_4 - Y)**2 + (z_4 - Z)**2 , (c*t_4)**2)
print("Solved coordinates are ", sym.solve([f,g,h,i],X,Y,Z))
The print statement from your initial conditions gave
Actual: 111 553 110
and the solution that almost instantly came out was:
Solved coordinates are [(111.000000000000, 553.000000000000, 110.000000000000)]
Sorry again if something is totally amiss.
I am trying to show the Gray-Scott model of diffusion. I keep getting a bunch of RuntimeWarnings even though I feel like my code is really close to correct. Is there something wrong with my discretization?
import numpy as np
import matplotlib.pyplot as plt
#parameters
N=128
F=.042
k=.062
Du=(2**-5)*(N**2/6.25)
Dv=(1**-5)*(N**2/6.25)
tend=1000
dt=tend/N
t=0
#start arrays
U=np.ones((N,N))
V=np.zeros((N,N))
#Initial Value Boxes (20x20 in middle)
low=int(((N/2)-10))
high=int(((N/2)+10))+1
U[low:high,low:high]=.5
V[low:high,low:high]=.25
#Random Noise
U+=.01*np.random.random((N,N))
V+=.01*np.random.random((N,N))
#Laplace
def Laplace(f):
    return np.roll(f,1)+np.roll(f,-1)+np.roll(f,1,axis=False)+np.roll(f,-1,axis=False)-4*f
#Solve
pstep=100
for t in range(tend):
    U+=((Du*Laplace(U))-(U*V*V)+(F*(1-U)))
    V+=((Dv*Laplace(V))+(U*V*V)-(V*(F+k)))
    if t%pstep ==0:
        print(t//pstep)
plt.imshow(U, interpolation='bicubic',cmap=plt.cm.jet)
OK, I got it to work by changing a few things in the calculation, but mostly by improving the numerical stability: massively decreasing the diffusion coefficients and decreasing the timestep. The net result is that the whole simulation changes less between each step, so each update is much smaller.
The errors you were getting were due to overflow of the floats in the calculation of dU and dV; by slowing the whole thing down (more timesteps) you don't need such massive numbers in dU and dV.
import numpy as np
import matplotlib.pyplot as plt
# parameters
N = 128
F = .042
k = .062
# Du=(2**-5)*(N**2/6.25) # These were way too high for the
# numeric stability given the timestep
Du = 0.1
# Dv=(1**-5)*(N**2/6.25)
Dv = 0.5
tend = 1000
dt = tend / N
t = 0
dt = 0.1 # Timestep -
# the smaller you go here, the faster you can let the diffusion coefficients be
# start arrays
U = np.ones((N, N))
V = np.zeros((N, N))
# Initial Value Boxes (20x20 in middle)
low = int(((N / 2) - 10))
high = int(((N / 2) + 10)) + 1
U[low:high, low:high] = .5
V[low:high, low:high] = .25
# Random Noise
U += .01 * np.random.random((N, N))
V += .01 * np.random.random((N, N))
# Laplace
def Laplace(f):
    return np.roll(f, 1) + np.roll(f, -1) + np.roll(f, 1, axis=False) + np.roll(f, -1, axis=False) - 4 * f

# Solve
pstep = 100
for t in range(tend):
    dU = ((Du * Laplace(U)) - (U * V * V) + (F * (1 - U))) * dt
    dV = ((Dv * Laplace(V)) + (U * V * V) - (V * (F + k))) * dt
    U += dU
    V += dV
    if t % pstep == 0:
        print(t // pstep)
plt.imshow(U, interpolation='bicubic', cmap=plt.cm.jet, vmin=0, vmax=1)
Of course the changes I made alter the physics a bit, and you will need to alter your t and pstep so those values make sense. Also check how you were calculating Du and Dv: if those values really are supposed to be that large (in the hundreds or thousands), then you need a much, much smaller timestep.
For anyone else's reference, the Gray-Scott model is explained here.
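As a rough rule of thumb (my addition; standard stability analysis for an explicit diffusion step, assuming unit grid spacing h = 1 as in the Laplace stencil above), the diffusion term alone requires dt <~ h**2 / (4 * D), which shows why the original coefficients blow up at dt ~ 0.1:
# Explicit-diffusion stability estimate: dt_max ~ h**2 / (4 * D)
h = 1.0
for label, D in [("Du (new)", 0.1), ("Dv (new)", 0.5),
                 ("Du (original)", (2**-5) * (128**2 / 6.25))]:
    print(f"{label:15s} D = {D:8.2f} -> dt_max ~ {h**2 / (4 * D):.1e}")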
ai = 1.0
# potentially edit the 0.7, as I believe this is specific to the de Sitter universe
phi0 = (mp * (1/(H_0_true**(2) * 0.7))**(1/alpha))
for i in del_t_range:
    # Friedmann equation, updated every iteration with new values of the scale factor
    H = (H_0_true * m.sqrt(((w_m/(ai**3)) + (w_r/(ai**4)) + (w_v) + (w_c/(ai**2)))))
    # H values needed for analysis purposes
    H_vals.append(H)
    time.append(i)
    # finite difference differentiation method for universe expansion
    del_a = ai * H * del_t
    a_val = ai + del_a
    a.append(a_val)
    ai = a_val
    ############################# ISSUE ###########################
    # field potential calculation
    V = (phi0 / mp)**(-alpha)
    # appended to use for graphing
    V_vals.append(V)
    #print(V)
    # differentiation of the potential wrt the field
    V_dash = alpha * (phi0 / mp)**(-alpha - 1.0)
    #print(V_dash)
    # finite difference of the time derivative of phi
    y_val = yi - (3 * H * yi - V_dash) * del_t
    #print(y_val)
    # needed for graphing
    y_list.append(y_val)
    # used for updating values of phi
    phi_val = phi0 + (yi * del_t)
    phi_list.append(phi_val)
    # Energy density calculations
    rho_phi = 0.5 * (y_val / mp)**(2) + (V / mp**(4)) * (1 / tp)**(2)
    # calculation required to sort out units
    rho_phi_true = rho_phi / mp**(2)
    # used for graphing
    rho_phi_vals.append(rho_phi_true)
    # update values for the next iteration of the loop
    phi0 = phi_val
    yi = y_val
The preceding code is looped so I can obtain updated values for the variables phi0, rho_phi, yi, V, and V_dash. Each of the updated values is then appended to a Python list and plotted. However, I seem to be having an issue updating my value of phi0, even though I have written that the new value of phi0 is equal to phi_val at the end of the loop. The values of phi0 over each iteration remain the same. For instance,
[1.0011701031373587e+30, 1.0011701031373587e+30, 1.0011701031373587e+30, 1.0011701031373587e+30, ...]
My apologies if I have been vague or confusing at all; please let me know if there are any issues. Thank you very much! :-)
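One thing worth checking (my assumption about the likely cause, not confirmed in the thread): at phi0 ~ 1e30, a float64 carries roughly 16 significant digits, so any increment below about phi0 * 1e-16 ~ 1e14 is absorbed by rounding, and phi0 + (yi * del_t) == phi0 exactly. A two-line check:
phi0 = 1.0011701031373587e+30
print(phi0 + 1e10 == phi0)   # True: the update is lost to float64 rounding
print(phi0 + 1e15 == phi0)   # False: large enough to change the stored value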
I am solving the dissipation equation using a finite-differencing scheme. The initial condition is a half sine wave with Dirichlet boundary conditions on both sides. I insert an extra point on each side of the domain to enforce the Dirichlet boundary condition while maintaining fourth-order accuracy, then use forward Euler to evolve it in time.
When I switch from the second-order-accurate stencil, (u[i+1] - 2*u[i] + u[i-1]) / dx**2, to the fourth-order-accurate stencil, (-u[i+2] + 16*u[i+1] - 30*u[i] + 16*u[i-1] - u[i-2]) / (12 * dx**2), I do not see an improvement in the rate of convergence when I plot dx against an estimate of the error.
I wrote and commented a code that shows my problem. When I use the 5-point strategy, my rate of convergence is the same.
Why is this happening? Why isn't the fourth-order-accurate stencil helping the convergence rate? I combed over this carefully and I think that there must be some issue in my understanding.
# Let's evolve the diffusion equation in time with Dirchlet BCs
# Load modules
import numpy as np
import matplotlib.pyplot as plt
# Domain size
XF = 1
# Viscosity
nu = 0.01
# Spatial differentiation function, approximates nu * d^2u/dx^2
def diffusive_dudt(un, nu, dx, strategy='5c'):
    undiff = np.zeros(un.size, dtype=np.float128)
    # O(h^2)
    if strategy == '3c':
        undiff[2:-2] = nu * (un[3:-1] - 2 * un[2:-2] + un[1:-3]) / dx**2
    # O(h^4)
    elif strategy == '5c':
        undiff[2:-2] = nu * (-1 * un[4:] + 16 * un[3:-1] - 30 * un[2:-2] + 16 * un[1:-3] - un[:-4]) / (12 * dx**2)
    else:
        raise IOError("Invalid diffusive strategy")
    return undiff
def geturec(x, nu=.05, evolution_time=1, u0=None, n_save_t=50, ubl=0., ubr=0., diffstrategy='5c', dt=None, returndt=False):
    dx = x[1] - x[0]
    # Prescribe cfl = 0.1 and ftcs = 0.2
    if dt is None:
        dt = min(.1 * dx / 1., .2 / nu * dx**2)
    if returndt:
        return dt
    nt = int(evolution_time / dt)
    divider = int(nt / float(n_save_t))
    if divider == 0:
        raise IOError("not enough time steps to save %i times" % n_save_t)
    # The default initial condition is a half sine wave.
    u_initial = ubl + np.sin(x * np.pi)
    if u0 is not None:
        u_initial = u0
    u = u_initial
    u[0] = ubl
    u[-1] = ubr
    # Insert ghost cells: extra cells on the left and right
    # for the edge cases of the finite difference scheme
    x = np.insert(x, 0, x[0] - dx)
    x = np.insert(x, -1, x[-1] + dx)
    u = np.insert(u, 0, ubl)
    u = np.insert(u, -1, ubr)
    # u_record holds all the snapshots. They are evenly spaced in time,
    # except the final and initial states
    u_record = np.zeros((x.size, int(nt / divider + 2)))
    # Evolve through time
    ii = 1
    u_record[:, 0] = u
    for _ in range(nt):
        un = u.copy()
        dudt = diffusive_dudt(un, nu, dx, strategy=diffstrategy)
        # forward Euler time step
        u = un + dt * dudt
        # Save every divider-th time step
        if _ % divider == 0:
            u_record[:, ii] = u.copy()
            ii += 1
    u_record[:, -1] = u
    return u_record[1:-1, :]
# define L-1 Norm
def ul1(u, dx): return np.sum(np.abs(u)) / u.size
# Now let's sweep through dxs to find convergence rate
# Define dxs to sweep
xrang = np.linspace(350, 400, 4)
# This function accepts a differentiation key name and returns lists of dx and L-1 points
def errf(strat):
    # Lists to record dx and L-1 points
    ypoints = []
    dxs = []
    # Establish the truth value with a more-resolved grid
    x = np.linspace(0, XF, 800) ; dx = x[1] - x[0]
    # Record the truth L-1 and the dt associated with the finest "truth" grid
    trueu = geturec(nu=nu, x=x, diffstrategy=strat, evolution_time=2, n_save_t=20, ubl=0, ubr=0)
    truedt = geturec(nu=nu, x=x, diffstrategy=strat, evolution_time=2, n_save_t=20, ubl=0, ubr=0, returndt=True)
    trueqoi = ul1(trueu[:, -1], dx)
    # Sweep dxs
    for nx in xrang:
        x = np.linspace(0, XF, int(nx)) ; dx = x[1] - x[0]
        dxs.append(dx)
        # Run the solver, holding dt fixed (pass strat here, not a hard-coded '5c')
        u = geturec(nu=nu, x=x, diffstrategy=strat, evolution_time=2, n_save_t=20, ubl=0, ubr=0, dt=truedt)
        # Record |L-1(dx) - L-1(truth)|
        qoi = ul1(u[:, -1], dx)
        ypoints.append(np.abs(trueqoi - qoi))
    return dxs, ypoints
# Plot results. The fourth order method should have a slope of 4 on the log-log plot.
from scipy.optimize import minimize as mini
strategy = '5c'
dxs, ypoints = errf(strategy)
def fit2(a): return 1000 * np.sum((a * np.array(dxs) ** 2 - ypoints) ** 2)
def fit4(a): return 1000 * np.sum((np.exp(a) * np.array(dxs) ** 4 - ypoints) ** 2)
a = mini(fit2, 500).x
b = mini(fit4, 11).x
plt.plot(dxs, a * np.array(dxs)**2, c='k', label=r"$\Delta x^2$", ls='--')
plt.plot(dxs, np.exp(b) * np.array(dxs)**4, c='k', label=r"$\Delta x^4$")
plt.plot(dxs, ypoints, label=r"Convergence", marker='x')
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r"$\Delta X$")
plt.ylabel("$L-L_{true}$")
plt.title(r"$\nu=%f, strategy=%s$"%(nu, strategy))
plt.legend()
plt.savefig('/Users/kilojoules/Downloads/%s.pdf'%strategy, bbox_inches='tight')
The error of the scheme is O(dt, dx²) resp. O(dt, dx⁴). Since you keep dt = O(dx²), the combined error is O(dx²) in both cases. You could try to scale dt = O(dx⁴) in the second case; however, the balance of truncation and floating-point error for the Euler method (or any first-order method) is reached around L·dt = 1e-8, where L is a Lipschitz constant for the right-hand side, so larger for more complex right-hand sides. Even in the best case, going below dx = 0.01 would be futile. Using a higher-order method in the time direction should help.
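Spelling out that balance (C_t and C_x are scheme-dependent constants; my notation):
total error ≈ C_t·dt + C_x·dx⁴
with dt = a·dx²:  error ≈ (C_t·a)·dx² + C_x·dx⁴ = O(dx²)   (what the convergence plot shows)
with dt = a·dx⁴:  error ≈ (C_t·a + C_x)·dx⁴ = O(dx⁴)       (but the number of steps nt = T/dt explodes)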
You used the wrong error metric. If you compare the fields on a point-by-point basis, you'll get the convergence rate you were after.
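A minimal sketch of such a pointwise comparison (my construction, reusing geturec, nu, XF, and strategy from the question): choose nested grids so every coarse point coincides with a fine-grid point, hold dt fixed as in errf(), and difference the fields directly.
nfine = 801
xf = np.linspace(0, XF, nfine)
truedt = geturec(nu=nu, x=xf, diffstrategy=strategy, evolution_time=2,
                 n_save_t=20, ubl=0, ubr=0, returndt=True)
trueu = geturec(nu=nu, x=xf, diffstrategy=strategy, evolution_time=2,
                n_save_t=20, ubl=0, ubr=0, dt=truedt)
for nx in (26, 51, 101):                     # (nx - 1) divides (nfine - 1)
    xc = np.linspace(0, XF, nx)
    u = geturec(nu=nu, x=xc, diffstrategy=strategy, evolution_time=2,
                n_save_t=20, ubl=0, ubr=0, dt=truedt)
    stride = (nfine - 1) // (nx - 1)         # coarse points sit on the fine grid
    err = np.max(np.abs(u[:, -1] - trueu[::stride, -1]))   # pointwise error
    print(nx, err)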