SymPy unable to solve a system of trigonometric equations - python

I'm trying to get SymPy to solve a system of equations but it gives me an error saying:
NotImplementedError: could not solve 3*sin(3*t0/2)*tan(t0) + 2*cos(3*t0/2) - 4
Is there another way for me to solve this system of equations?
sin(x) + (y - x)*cos(x) = 0
-1.5*(y - x)*sin(1.5*x) + cos(1.5*x) = 2
I used:
from sympy import *
x, y = symbols('x y')
solve([sin(x) + (y - x)*cos(x), -1.5*(y - x)*sin(1.5*x) + cos(1.5*x) - 2], [x, y])

SymPy could do better with this equation, but ultimately it's equivalent to a 10th-degree polynomial whose roots can only be represented abstractly. I'll describe the steps one can take and show how far SymPy can go. It's a semi-manual solution process that ought to be more automatic.
First of all, don't put 1.5, or other floating-point numbers, in the equations. Instead, introduce a coefficient a = Rational(3, 2) and use that:
a = Rational(3, 2)
eq = [sin(x) + (y-x)*cos(x), -a*(y-x)*sin(a*x) + cos(a*x) - 2]
Variable y can be eliminated using the first equation: y = x - tan(x). This is easy for us to see, but SymPy sometimes misses the opportunity, so let's help it:
eq1 = eq[1].subs(y, x-tan(x)) # 3*sin(3*x/2)*tan(x)/2 + cos(3*x/2) - 2
As is, solve and solveset (an alternative SymPy solver) give up on the equation because of this mix of trigonometric functions with different arguments. Some of us remember from school that trigonometric functions can be expressed as rational functions of the tangent of the half-argument, so let's do that: rewrite the equation in terms of tan.
eq2 = eq1.rewrite(tan) # (-tan(3*x/4)**2 + 1)/(tan(3*x/4)**2 + 1) - 2 + 3*tan(3*x/4)*tan(x)/(tan(3*x/4)**2 + 1)
As mentioned, this halves the argument. Having fractions like x/4 in trig functions is bad. Introduce a new symbol, var('u'), and make u = x/4:
eq3 = eq2.subs(x, 4*u) # (-tan(3*u)**2 + 1)/(tan(3*u)**2 + 1) - 2 + 3*tan(3*u)*tan(4*u)/(tan(3*u)**2 + 1)
Now we can expand all these tangents in terms of tan(u), using expand_trig. The equation gets longer:
eq4 = expand_trig(eq3) # (1 - (-tan(u)**3 + 3*tan(u))**2/(-3*tan(u)**2 + 1)**2)/(1 + (-tan(u)**3 + 3*tan(u))**2/(-3*tan(u)**2 + 1)**2) - 2 + 3*(-4*tan(u)**3 + 4*tan(u))*(-tan(u)**3 + 3*tan(u))/((1 + (-tan(u)**3 + 3*tan(u))**2/(-3*tan(u)**2 + 1)**2)*(-3*tan(u)**2 + 1)*(tan(u)**4 - 6*tan(u)**2 + 1))
But it's also simpler because tan(u) can be treated as another unknown, say v = var('v').
eq5 = eq4.subs(tan(u), v) # (1 - (-v**3 + 3*v)**2/(-3*v**2 + 1)**2)/(1 + (-v**3 + 3*v)**2/(-3*v**2 + 1)**2) - 2 + 3*(-4*v**3 + 4*v)*(-v**3 + 3*v)/((1 + (-v**3 + 3*v)**2/(-3*v**2 + 1)**2)*(-3*v**2 + 1)*(v**4 - 6*v**2 + 1))
Great, now we have a rational function of v. It can be handled with solveset(eq5, v). By default solveset returns all complex solutions and we need only the real roots among them, so let's specify the domain as S.Reals:
vsol = list(solveset(eq5, v, domain=S.Reals))
There is no algebraic formula for these roots, so they are recorded somewhat abstractly, but they are actual numbers we can work with:
[CRootOf(3*v**10 + 9*v**8 - 78*v**6 + 22*v**4 - 21*v**2 + 1, 0),
CRootOf(3*v**10 + 9*v**8 - 78*v**6 + 22*v**4 - 21*v**2 + 1, 1),
CRootOf(3*v**10 + 9*v**8 - 78*v**6 + 22*v**4 - 21*v**2 + 1, 2),
CRootOf(3*v**10 + 9*v**8 - 78*v**6 + 22*v**4 - 21*v**2 + 1, 3)]
For example, we can go back to x and y now, and evaluate the solutions:
xsol = [4*atan(v) for v in vsol]
ysol = [x - tan(x) for x in xsol]
numsol = [(N(x), N(y)) for x, y in zip(xsol, ysol)]
The numeric values are:
[(-4.35962510714700, -1.64344290066272),
(-0.877886785847899, 0.326585146723377),
(0.877886785847899, -0.326585146723377),
(4.35962510714700, 1.64344290066272)]
Of course there are infinitely many more, because the tangent is periodic. Finally, let's check that these actually work:
residuals = [[e.subs({x: xv, y: yv}) for e in eq] for xv, yv in numsol]
These are a bunch of numbers of order 1e-15 or smaller, so yes, the equations hold to machine precision.
Unlike a purely numeric solution we'd get from SciPy or other numeric solvers, these can be evaluated to any accuracy without repeating the process. For example, here are 50 digits of the first x-solution:
xsol[0].evalf(50) # -4.3596251071470021258397061103704574594477338857831

Just for the fun of it, here is a manual solution that only needs the solution of a polynomial of degree 5:
Write t = x/2, a = y - x, s = sin t, c = cos t, S = sin x and C = cos x.
Then the given equations can be rewritten as
(1) 2 sc + a (c^2 - s^2) = 0
(2) 3 a s^3 - 9 a c^2 s - 6 c s^2 + 2 c^3 = 4
Multiplying (1) by 3 s and adding to (2):
(3) -6 a c^2 s + 2 c^3 = 4
Next we substitute a = -S / C and use S = 2sc and s^2 = 1 - c^2:
(4) 12 c^3 (1 - c^2) / C + 2 c^3 = 4
Multiply with C = 2 c^2 - 1:
(5) c^3 (12 - 12 c^2 + 4 c^2 - 2) = 8 c^2 - 4
Finally,
(6) 4 c^5 - 5 c^3 + 4 c^2 - 2 = 0
This has a pair of complex solutions, one real solution outside the domain of the cosine and another two solutions which give the four principal solutions for x.
(7) c_1 = 0.90520121, c_2 = -0.57206084
(8) x_{1,2,3,4} = +/- 2 arccos(c_{1,2})
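As a quick cross-check of this manual route, here is a small SymPy sketch (my addition; it uses the polynomial from equation (6) and x = +/- 2 arccos(c)):
from sympy import symbols, real_roots, acos, N
c = symbols('c')
croots = real_roots(4*c**5 - 5*c**3 + 4*c**2 - 2)   # equation (6)
cvals = [r for r in croots if abs(N(r)) <= 1]       # drop the real root outside [-1, 1]
xvals = sorted(N(2*s*acos(r)) for r in cvals for s in (1, -1))
print(xvals)  # reproduces the four principal x-solutions found above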


Finding upper or lower limits given one of the limits and the solution (area under the curve) on a discrete point "curve"

I have x vs. y data. I made a non-linear estimation of that data (plot omitted), and the estimated function is: 25342*x^9 - 155900*x^8 + 409218*x^7 - 599317*x^6 + 537190*x^5 - 303116*x^4 + ... + 274
I used this answer's suggestion to find the upper limit of my integral by using the below code:
p_optimal = estimate_function_from_data_points()
from sympy import integrate, solve
from sympy.abc import x, u
f = 25342.695344882944*x**9 - 155900.56387247072*x**8 + 409218.9290579793*x**7 - 599317.5264117827*x**6 + 537190.6784517929*x**5 - 303116.0648042093*x**4 + 105493.81468203208*x**3 - 20422.11374996729*x**2 + 1263.9293528900394*x + 274.55521542679185
lower = 0.0925 # Initial
upper = u
eq = integrate(f, (x, lower, upper))
eq, solve(eq + 100, u)
Out: [-0.114399781774514 - 0.112912224139529*I,
-0.114399781774514 + 0.112912224139529*I,
0.145632609024802 - 0.532284754794354*I,
0.145632609024802 + 0.532284754794354*I,
0.646926125977188 - 0.679233975801008*I,
0.646926125977188 + 0.679233975801008*I,
1.20499184950745 - 0.534200757552949*I,
1.20499184950745 + 0.534200757552949*I,
1.53445822404458 - 0.201823934360761*I,
1.53445822404458 + 0.201823934360761*I]
I get 10 results (as expected, since integrating the 9th-degree function gives a 10th-degree polynomial), and all of them are complex numbers because I don't have a nice function like the one in that answer. What methods can I use to get a real-number solution?
Edit: When I run the below code, I get 100 as a result of the integral. So, the upper limit should be around 0.945
lower = 0.0925 # Initial
upper = 0.945
integrate(f, (x, lower, upper)).evalf()
Out: 100.016292426307
If you're looking for eq to be equal to 100 then you should ask to solve Eq(eq, 100) or eq - 100 rather than eq + 100 (since that's asking for eq to be equal to -100). With that:
In [14]: solve(eq - 100, u)
Out[14]:
[-0.212948010713551, 0.944530756107237,
 0.0760056875453759 - 0.410894283883493⋅ⅈ, 0.0760056875453759 + 0.410894283883493⋅ⅈ,
 0.530254482189976 - 0.556134814597481⋅ⅈ, 0.530254482189976 + 0.556134814597481⋅ⅈ,
 1.07772724094452 - 0.347706425047349⋅ⅈ, 1.07772724094452 + 0.347706425047349⋅ⅈ,
 1.3678302434028 - 0.115266894498628⋅ⅈ, 1.3678302434028 + 0.115266894498628⋅ⅈ]
Note that the first two roots are real (one negative and one positive).
You can ask solve to return only real or only positive roots by setting assumptions on u:
In [23]: u = symbols('u', positive=True)
In [24]: eq = integrate(f, (x, 0.0925, u))
In [25]: solve(eq - 100, u)
Out[25]: [0.944530756107237]
Here the equation you are solving is really numerical, but you are using SymPy's solve function, which is intended for finding exact analytic solutions. Here are some faster ways to solve this using SymPy's real_roots, nroots and nsolve functions:
In [15]: [r.n(3) for r in real_roots(eq - 100)]
Out[15]: [-0.213, 0.945]
In [16]: nroots(eq - 100)
Out[16]:
[-0.212948010713551, 0.944530756105865,
 0.0760056875453764 - 0.410894283883493⋅ⅈ, 0.0760056875453764 + 0.410894283883493⋅ⅈ,
 0.530254482189946 - 0.556134814597486⋅ⅈ, 0.530254482189946 + 0.556134814597486⋅ⅈ,
 1.07772724094385 - 0.34770642504758⋅ⅈ, 1.07772724094385 + 0.34770642504758⋅ⅈ,
 1.36783024340417 - 0.115266894502763⋅ⅈ, 1.36783024340417 + 0.115266894502763⋅ⅈ]
In [17]: nsolve(eq - 100, u, 0.9)
Out[17]: 0.944530756105865

Minimizing this error function, using NumPy

Background
I've been working for some time on attempting to solve the (notoriously painful) Time Difference of Arrival (TDoA) multi-lateration problem, in 3-dimensions and using 4 nodes. If you're unfamiliar with the problem, it is to determine the coordinates of some signal source (X,Y,Z), given the coordinates of n nodes, the time of arrival of the signal at each node, and the velocity of the signal v.
My solution is as follows:
For each node, we write (X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2 = (v*(t_i - T))**2
where (x_i, y_i, z_i) are the coordinates of the ith node, t_i is the arrival time at that node, and T is the time of emission.
We now have 4 equations in 4 unknowns. Four nodes are obviously insufficient. We could try to solve this system directly, but that seems next to impossible given the highly nonlinear nature of the problem (and, indeed, I've tried many direct techniques... and failed). Instead, we simplify this to a linear problem by considering all i/j pairs and subtracting equation i from equation j. We obtain n(n-1)/2 = 6 equations of the form:
2*(x_j - x_i)*X + 2*(y_j - y_i)*Y + 2*(z_j - z_i)*Z + 2*v**2*(t_i - t_j)*T = v**2*(t_i**2 - t_j**2) + (x_j**2 + y_j**2 + z_j**2) - (x_i**2 + y_i**2 + z_i**2)
These look like X*v_1 + Y*v_2 + Z*v_3 + T*v_4 = b. We now try to apply standard linear least squares, where the solution is the vector x satisfying the normal equations A^T*A*x = A^T*b. Unfortunately, if you feed this into any standard linear least-squares algorithm, it'll choke up. So, what do we do now?
...
The time of arrival of the signal at node i is given (of course) by:
sqrt( (X-x_i)**2 + (Y-y_i)**2 + (Z-z_i)**2 ) / v
This equation implies that the time of emission, T, is 0. If we have T = 0, we can drop the T column of matrix A and the problem is greatly simplified. Indeed, NumPy's linalg.lstsq() then gives a surprisingly accurate & precise result.
...
So, what I do is normalize the input times by subtracting from each equation the earliest time. All I have to do then is determine the dt that I can add to each time such that the residual of summed squared error for the point found by linear least squares is minimized.
I define the error for some dt to be the squared difference between the arrival time for the point predicted by feeding the input times + dt to the least squares algorithm, minus the input time (normalized), summed over all 4 nodes.
for (x_i, y_i, z_i), time in zip(nodes, times):
    error += ((math.sqrt((X - x_i)**2 + (Y - y_i)**2 + (Z - z_i)**2) / v) - time) ** 2
My problem:
I was able to do this somewhat satisfactorily using brute force: I started at dt = 0 and moved by some step, up to some maximum number of iterations or until some minimum RSS error was reached, and that was the dt I added to the normalized times to obtain a solution. The resulting solutions were very accurate and precise, but quite slow to compute.
In practice, I'd like to be able to solve this in real time, and therefore a far faster solution will be needed. I began with the assumption that the error function (that is, dt vs error as defined above) would be highly nonlinear-- offhand, this made sense to me.
Since I don't have an actual, mathematical function, I can automatically rule out methods that require differentiation (e.g. Newton-Raphson). The error function will always be positive, so I can rule out bisection, etc. Instead, I try a simple approximation search. Unfortunately, that failed miserably. I then tried Tabu search, followed by a genetic algorithm, and several others. They all failed horribly.
So, I decided to do some investigating. As it turns out the plot of the error function vs dt looks a bit like a square root, only shifted right depending upon the distance from the nodes that the signal source is:
(Plot omitted: dt on the horizontal axis, error on the vertical axis.)
And, in hindsight, of course it does! I defined the error function to involve square roots, so, at least to me, this seems reasonable.
What to do?
So, my issue now is, how do I determine the dt corresponding to the minimum of the error function?
My first (very crude) attempt was to get some points on the error graph (as above), fit them using numpy.polyfit, then feed the results to numpy.roots. That root corresponds to the dt. Unfortunately, this failed too. I tried fitting with various degrees, and also with various numbers of points, up to a ridiculous number such that I may as well just use brute force.
How can I determine the dt corresponding to the minimum of this error function?
Since we're dealing with high velocities (radio signals), it's important that the results be precise and accurate, as minor variances in dt can throw off the resulting point.
I'm sure that there's some infinitely simpler approach buried in what I'm doing here however, ignoring everything else, how do I find dt?
My requirements:
Speed is of utmost importance
I have access only to pure Python and NumPy in the environment where this will be run
EDIT:
Here's my code. Admittedly, a bit messy. Here, I'm using the polyfit technique. It will "simulate" a source for you, and compare results:
from numpy import array, linalg, triu_indices, roots, polyfit
from dataclasses import dataclass
from random import randrange
import math

@dataclass
class Vertexer:
    receivers: list

    # Defaults
    c = 299792

    # Receivers:
    #   [x_1, y_1, z_1]
    #   [x_2, y_2, z_2]
    #   [x_3, y_3, z_3]
    # Solved:
    #   [x, y, z]

    def error(self, dt, times):
        # Sum of squared differences between implied and supplied arrival times
        solved = self.linear([time + dt for time in times])
        error = 0
        for time, receiver in zip(times, self.receivers):
            error += ((math.sqrt((solved[0] - receiver[0])**2 +
                                 (solved[1] - receiver[1])**2 +
                                 (solved[2] - receiver[2])**2) / self.c) - time)**2
        return error

    def linear(self, times):
        # Linearized least-squares solve (T column dropped, see above)
        X = array(self.receivers)
        t = array(times)
        x, y, z = X.T
        i, j = triu_indices(len(x), 1)
        A = 2 * (X[i] - X[j])
        b = self.c**2 * (t[j]**2 - t[i]**2) + (X[i]**2).sum(1) - (X[j]**2).sum(1)
        solved, residuals, rank, s = linalg.lstsq(A, b, rcond=None)
        return solved

    def find(self, times):
        # Normalize times
        times = [time - min(times) for time in times]
        # Sample the error function, fit a polynomial, take its root as dt
        y = []
        x = []
        dt = 1E-10
        for i in range(50000):
            x.append(self.error(dt * i, times))
            y.append(dt * i)
        p = polyfit(array(x), array(y), 2)
        r = roots(p)
        return self.linear([time + r for time in times])
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
myVertexer = Vertexer([[x_1, y_1, z_1],[x_2, y_2, z_2],[x_3, y_3, z_3],[x_4, y_4, z_4]])
solution = myVertexer.find([t_1, t_2, t_3, t_4])
print(solution)
It seems like the Bancroft method applies to this problem? Here's a pure NumPy implementation.
# Implementation of the Bancroft method, following
# https://gssc.esa.int/navipedia/index.php/Bancroft_Method
import numpy as np

M = np.diag([1, 1, 1, -1])

def lorentz_inner(v, w):
    return np.sum(v * (w @ M), axis=-1)

B = np.array(
    [
        [x_1, y_1, z_1, c * t_1],
        [x_2, y_2, z_2, c * t_2],
        [x_3, y_3, z_3, c * t_3],
        [x_4, y_4, z_4, c * t_4],
    ]
)
one = np.ones(4)
a = 0.5 * lorentz_inner(B, B)
B_inv_one = np.linalg.solve(B, one)
B_inv_a = np.linalg.solve(B, a)
for Lambda in np.roots(
    [
        lorentz_inner(B_inv_one, B_inv_one),
        2 * (lorentz_inner(B_inv_one, B_inv_a) - 1),
        lorentz_inner(B_inv_a, B_inv_a),
    ]
):
    x, y, z, c_t = M @ np.linalg.solve(B, Lambda * one + a)
    print("Candidate:", x, y, z, c_t / c)
My answer might have (glaring) mistakes, as I had not heard the term TDOA before this afternoon; please double-check whether the method is right.
I could not find a solution to your original problem of finding the dt corresponding to the minimum error. My answer also deviates from the requirement that no third-party library other than NumPy be used (I used SymPy, and largely reused the code from here). However, I am still posting this, thinking that somebody someday might find it useful if all one is interested in ... is to find the X, Y, Z of the source emitter. This method also does not take into account real-life complications such as white noise or measurement errors, or the curvature of the earth.
Your initial test conditions are as below.
from random import randrange
import math
# SIMPLE CODE FOR SIMULATING A SIGNAL
# Pick nodes to be at random locations
x_1 = randrange(10); y_1 = randrange(10); z_1 = randrange(10)
x_2 = randrange(10); y_2 = randrange(10); z_2 = randrange(10)
x_3 = randrange(10); y_3 = randrange(10); z_3 = randrange(10)
x_4 = randrange(10); y_4 = randrange(10); z_4 = randrange(10)
# Pick source to be at random location
x = randrange(1000); y = randrange(1000); z = randrange(1000)
# Set velocity
c = 299792 # km/s
# Generate simulated source
t_1 = math.sqrt( (x - x_1)**2 + (y - y_1)**2 + (z - z_1)**2 ) / c
t_2 = math.sqrt( (x - x_2)**2 + (y - y_2)**2 + (z - z_2)**2 ) / c
t_3 = math.sqrt( (x - x_3)**2 + (y - y_3)**2 + (z - z_3)**2 ) / c
t_4 = math.sqrt( (x - x_4)**2 + (y - y_4)**2 + (z - z_4)**2 ) / c
print('Actual:', x, y, z)
My solution is as below.
import sympy as sym
X,Y,Z = sym.symbols('X,Y,Z', real=True)
f = sym.Eq((x_1 - X)**2 +(y_1 - Y)**2 + (z_1 - Z)**2 , (c*t_1)**2)
g = sym.Eq((x_2 - X)**2 +(y_2 - Y)**2 + (z_2 - Z)**2 , (c*t_2)**2)
h = sym.Eq((x_3 - X)**2 +(y_3 - Y)**2 + (z_3 - Z)**2 , (c*t_3)**2)
i = sym.Eq((x_4 - X)**2 +(y_4 - Y)**2 + (z_4 - Z)**2 , (c*t_4)**2)
print("Solved coordinates are ", sym.solve([f,g,h,i],X,Y,Z))
The print statement from your initial conditions gave:
Actual: 111 553 110
and the solution, which came out almost instantly, was
Solved coordinates are [(111.000000000000, 553.000000000000, 110.000000000000)]
Sorry again if something is totally amiss.

How do I simplify the sum of sine and cosine in SymPy?

How do I simplify a*sin(wt) + b*cos(wt) into c*sin(wt+theta) using SymPy? For example:
f = sin(t) + 2*cos(t) = 2.236*sin(t + 1.107)
I tried the following:
from sympy import *
t = symbols('t')
f=sin(t)+2*cos(t)
trigsimp(f) #Returns sin(t)+2*cos(t)
simplify(f) #Returns sin(t)+2*cos(t)
f.rewrite(sin) #Returns sin(t)+2*sin(t+Pi/2)
PS: I don't have direct access to a, b and w, only to f.
Any suggestion?
The general answer can be achieved by noting that you want to have
a * sin(t) + b * cos(t) = A * (cos(c)*sin(t) + sin(c)*cos(t))
This leads to the simultaneous equations a = A * cos(c) and b = A * sin(c).
Dividing the second equation by the first, we can solve for c. Substituting its solution into the first equation, you can solve for A.
I followed the same pattern, but solved for the cos form instead. If you want the result in terms of sin, you can use Rodrigo's formula.
The following code should be able to take any linear combination of terms of the form x * sin(t - w) or y * cos(t - z); there can be multiple sin and cos terms.
from sympy import *
t = symbols('t', real=True)
expr = sin(t)+2*cos(t) # unknown
d = collect(expr.expand(trig=True), [sin(t), cos(t)], evaluate=False)
a = d[sin(t)]
b = d[cos(t)]
cos_phase = atan(a/b)
amplitude = a / sin(cos_phase)
print(amplitude.evalf() * cos(t - cos_phase.evalf()))
Which gives
2.23606797749979*cos(t - 0.463647609000806)
This seems to be a satisfactory match after plotting both graphs.
You could even have something like
expr = 2*sin(t - 3) + cos(t) - 3*cos(t - 2)
and it should work fine.
a * sin(w*t) + b * cos(w*t) = sqrt(a**2 + b**2) * sin(w*t + acos(a / sqrt(a**2 + b**2)))
(valid as written for b >= 0; for b < 0, negate the arccosine). While the amplitude is the radical sqrt(a**2 + b**2), the phase is given by the arccosine of the ratio a / sqrt(a**2 + b**2), which may not be expressible in terms of arithmetic operations and radicals. Hence, you may be asking SymPy to do the impossible. Better to use floating-point values, but then you do not need SymPy for that.
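As a quick numeric check of this formula on the question's example (a = 1, b = 2, w = 1; plain floats, no SymPy required):
import math

a, b = 1.0, 2.0
amplitude = math.hypot(a, b)      # sqrt(a**2 + b**2) ≈ 2.236
phase = math.acos(a / amplitude)  # ≈ 1.107 (b >= 0, so no sign flip needed)
print(amplitude, phase)           # matches 2.236*sin(t + 1.107) from the question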

Solving an overdetermined non-linear system for 8 unknowns

I am wondering what my best approach is in the following scenario: I have 8 unknowns, but a virtually unlimited number of non-linear equations, which makes the system over-determined.
unknowns:
U
M
V
N
J
S
W
T
equations:
U*M + V*Catime1 - V*M - Mgtime1 = 0
J*M + W*Catime1 - W*M - Srtime1 = 0
U*N + V*Catime2 - V*N - Mgtime2 = 0
J*N + W*Catime2 - W*N - Srtime2 = 0
U*S + V*Catime3 - V*S - Mgtime3 = 0
J*S + W*Catime3 - W*S - Srtime3 = 0
U*T + V*Catime4 - V*T - Mgtime4 = 0
J*T + W*Catime4 - W*T - Srtime4 = 0
Here's what I need help with:
1) identifying which MATLAB (or Python) function will solve this set of equations;
2) generating the input (equations) in Python from a large Catime(i-1) and Srtime(i-1) data-set.
Write the problem as a non-linear least-squares problem and minimize it.
This correctly makes use of the additional data you have.
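For instance, a minimal sketch with SciPy's least_squares (MATLAB's counterpart would be lsqnonlin). The data arrays here are placeholders, and the pairing of the unknowns M, N, S, T with the samples follows the pattern of the equations above:
import numpy as np
from scipy.optimize import least_squares

# Placeholder measurements; substitute the real Catime/Mgtime/Srtime data.
Ca = np.array([1.0, 2.0, 3.0, 4.0])
Mg = np.array([0.5, 1.1, 1.4, 2.2])
Sr = np.array([0.2, 0.5, 0.7, 0.8])

def residuals(p):
    U, M, V, N, J, S, W, T = p
    r = []
    for theta, ca, mg, sr in zip([M, N, S, T], Ca, Mg, Sr):
        r.append(U*theta + V*ca - V*theta - mg)  # Mg equation for this sample
        r.append(J*theta + W*ca - W*theta - sr)  # Sr equation for this sample
    return r

sol = least_squares(residuals, x0=np.ones(8))
print(sol.x, sol.cost)
With more samples than unknowns, least_squares simply gets more residuals and returns the minimizer in the least-squares sense.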

Euler method (explicit and implicit)

I'd like to implement Euler's method (both the explicit and the implicit one)
(https://en.wikipedia.org/wiki/Euler_method) for the following model:
x'(t) = q*(x_M - x(t))*x(t)
x(0) = x_0
where q, x_M and x_0 are real numbers.
I know already the (theoretical) implementation of the method. But I couldn't figure out where I can insert / change the model.
Could anybody help?
EDIT: You were right, I hadn't understood the method correctly. Now, after a few hours, I think I've really got it! I'm fairly confident about the explicit method (nevertheless, could anybody please have a look at my code?).
I'm less sure the implicit implementation is correct. Could anyone please look at the implicit method below and give me feedback on what is correct / not good?
def explizit_euler():
    ''' x'(t) = q*(xM - x(t))*x(t)
        x(0) = x0 '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        x = x + dt * (q * (xM - x) * x)
        print('%6.3f %6.3f' % (t, x))

def implizit_euler():
    ''' x'(t) = q*(xM - x(t))*x(t)
        x(0) = x0 '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    N = T / dt
    x = x0
    t = 0.
    for i in range(0, int(N)):
        t = t + dt
        x = (1.0 / (1.0 - q * (xM + x) * x))
        print('%6.3f %6.3f' % (t, x))
Pre-emptive note: Although the general idea should be correct, I did all the algebra in place in the editor box so there might be mistakes there. Please, check it yourself before using for anything really important.
I'm not sure how you arrived at the "implicit" formula
x = (1.0 / (1.0 - q * (xM + x) * x))
but it is wrong, and you can check that by comparing your "explicit" and "implicit" results: they should diverge only slightly, but with this formula they diverge drastically.
To understand the implicit Euler method, you should first get the idea behind the explicit one. And the idea is really simple and is explained at the Derivation section in the wiki: since derivative y'(x) is a limit of (y(x+h) - y(x))/h, you can approximate y(x+h) as y(x) + h*y'(x) for small h, assuming our original differential equation is
y'(x) = F(x, y(x))
Note that the reason this is only an approximation rather than exact value is that even over small range [x, x+h] the derivative y'(x) changes slightly. It means that if you want to get a better approximation of y(x+h), you need a better approximation of "average" derivative y'(x) over the range [x, x+h]. Let's call that approximation just y'. One idea of such improvement is to find both y' and y(x+h) at the same time by saying that we want to find such y' and y(x+h) that y' would be actually y'(x+h) (i.e. the derivative at the end). This results in the following system of equations:
y'(x+h) = F(x+h, y(x+h))
y(x+h) = y(x) + h*y'(x+h)
which is equivalent to a single "implicit" equation:
y(x+h) - y(x) = h * F(x+h, y(x+h))
It is called "implicit" because here the target y(x+h) is also a part of F. And note that quite similar equation is mentioned in the Modifications and extensions section of the wiki article.
So now going to your case that equation becomes
x(t+dt) - x(t) = dt*q*(xM -x(t+dt))*x(t+dt)
or equivalently
dt*q*x(t+dt)^2 + (1 - dt*q*xM)*x(t+dt) - x(t) = 0
This is a quadratic equation with two solutions:
x(t+dt) = [(dt*q*xM - 1) ± sqrt((dt*q*xM - 1)^2 + 4*dt*q*x(t))]/(2*dt*q)
Obviously we want the solution that is "close" to x(t), which is the + solution. So the code should be something like:
b = q * xM * dt - 1
x = (b + (b**2 + 4 * q * x * dt) ** 0.5) / (2 * q * dt)
(Editor's note: multiplying numerator and denominator by the binomial complement gives a form that is numerically more stable for small dt, where b < 0:)
x = (2 * x) / ((b**2 + 4 * q * x * dt) ** 0.5 - b)
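Putting this together, a minimal sketch of a corrected implizit_euler(), using the stable form of the update and the same parameters as the original code:
def implizit_euler():
    ''' x'(t) = q*(xM - x(t))*x(t), x(0) = x0,
        implicit Euler with the quadratic solved in its stable form '''
    q = 2.
    xM = 2
    x0 = 0.5
    T = 5
    dt = 0.01
    x = x0
    t = 0.
    for i in range(int(T / dt)):
        t = t + dt
        b = q * xM * dt - 1
        x = 2 * x / ((b**2 + 4 * q * x * dt) ** 0.5 - b)
        print('%6.3f %6.3f' % (t, x))
Its output stays close to the explicit version's, as expected for a small dt.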
