Equating differential equations in Python

I want to equate these differential equations. I know I can solve them easily on paper, but I want to know how to do it in Python:
from sympy import symbols, Eq, solve, Function, Symbol, Derivative
P = Function("P")
Q = Symbol('Q')
Q_d = Symbol("Q_d")
Q_s = Symbol("Q_s")
t = Symbol("t")
dy2 = 3 * Derivative(P(t), t,2)
dy1 = Derivative(P(t), t)
eq1 = Eq(dy2 + dy1 - P(t) + 9,Q_d)
display(eq1)
dy2_ = 5 * Derivative(P(t), t,2)
dy1_ = -Derivative(P(t), t)
eq2 = Eq(dy2_ + dy1_ +4* P(t) -1 ,Q_s)
display(eq2)
-P(t) + d/dt P(t) + 3*d^2/dt^2 P(t) + 9 = Q_d
4*P(t) - d/dt P(t) + 5*d^2/dt^2 P(t) - 1 = Q_s
These are basically "supply and demand" equations. The result should basically be:
2*d^2/dt^2 P(t) = 2*d/dt P(t) - 5*P(t) + 10
How can I find this result? I know sympy's solve can do such a thing:
solve((eq1, eq2), (x, y))
But in this case, I don't know how to set it up.

I assume you get what you call the result by setting Qs = Qd and subtracting the equations? It can be rewritten as
(2*d/dt P(t) - 5*P(t) + 10) - 2*d^2/dt^2 P(t) = 0
which you can obtain in sympy doing
>>> eq1.lhs - eq2.lhs
-5*P(t) + 2*Derivative(P(t), t) - 2*Derivative(P(t), (t, 2)) + 10
where lhs returns the left-hand side of the equation.
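If you also want sympy to solve the resulting ODE for P(t), a minimal sketch (assuming Q_d = Q_s, so the difference of the two left-hand sides vanishes) could look like this:
from sympy import Function, Symbol, Derivative, Eq, dsolve

t = Symbol("t")
P = Function("P")
Q_d, Q_s = Symbol("Q_d"), Symbol("Q_s")

eq1 = Eq(3 * Derivative(P(t), t, 2) + Derivative(P(t), t) - P(t) + 9, Q_d)
eq2 = Eq(5 * Derivative(P(t), t, 2) - Derivative(P(t), t) + 4 * P(t) - 1, Q_s)

# With Q_d = Q_s, subtracting the equations eliminates Q and leaves one ODE in P(t)
combined = Eq(eq1.lhs - eq2.lhs, 0)
print(dsolve(combined, P(t)))  # general solution with constants C1, C2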

Related

Helmholtz decomposition of a vector V(x,z)=[f(x,z),0,0]

I wrote the program below in Python with the hope of conducting a Helmholtz decomposition on a vector V(x,z)=[f(x,z),0,0], where f(x,z) is a function defined earlier. The aim of this program is to get the solenoidal and harmonic parts of vector V as S(x,z)=[S1(x,z),S2(x,z),S3(x,z)] and H(x,z)=[H1(x,z),H2(x,z),H3(x,z)], with S and H satisfying the condition V=S+H, which translates to (S1+H1=f, S2+H2=0, S3+H3=0).
Please help, I can't get anywhere with this problem. The output of the code isn't what I wanted; it's the following:
Solenoidal:
[[-22.6179559436889 + 41.14742726254I, 33.243161684442 - 99.9416505604629I, -22.6179559436889 + 41.14742726254I], [0.000151144774536593 + 0.000222403457962539I, 0, -0.000151144774536593 - 0.000222403457962539I], [22.6210744289585 - 41.1540953247099I, -41.2442631673893 + 88.1909008014634I, 6.6295316668479 - 64.6849359328842I]]
Harmonic:
[[26.6155393446675 - 35.2651619174123I, -33.243161684442 + 99.9416505604629I, 18.6203725427103 - 47.0296926076676I], [-0.000151144774536593 - 0.000222403457962539I, 0, 0.000151144774536593 + 0.000222403457962539I], [-18.6231887384308 + 47.0368054767535I, 41.2442631673893 - 88.1909008014634I, -10.6274173573755 + 58.8022257808406I]]
import math
import numpy as np
from sympy import symbols, simplify, lambdify

# Define x and z as symbolic variables
x, z = symbols('x, z')

# Define the function f
def f(x, z):
    term1 = 171.05 * 10**(-18) * ((1.00 * x**4 + 2.00 * x**2 * z**2 + 1.00 * z**4) * math.atan(z*x) - 1.00 * x**3 * z - 1.00 * x * z**3)
    term2 = -3.17 * 10**6 * x**4 - 6.36 * 10**6 * x**2 * z**2 - 3.19 * 10**6 * z**4 + 1.00 * x**4 * z + 2.00 * x**2 * z**3 + 1.00 * z**5
    term3 = (z - 44.33 * 10**3)
    term4 = ((-2.00 * 10**3) / (576.30 * 10**3 + 13.00 * z))**2.69 * (x**2 + z**2)**7.00 / 2.00 * z
    return term1 * term2 * term3 / (term4 + 1e-15)  # Add a small value to term4 to avoid division by zero

# Define a 2D array with 3 elements
vector = np.array([[f(x, z) for x in range(-1, 2)] for z in range(-1, 2)])

def helmholtz_hodge_decomposition(vector):
    # Compute the gradient of the vector field
    gradient = np.gradient(vector)
    # Compute the curl of the vector field
    curl = np.cross(gradient[0], gradient[1])
    # Compute the divergence of the vector field
    divergence = np.sum(gradient, axis=0)
    # Compute the harmonic part of the vector field
    harmonic = -curl - divergence
    # Compute the solenoidal part of the vector field
    solenoidal = vector - harmonic
    return solenoidal, harmonic

# Print the solenoidal and harmonic parts as functions of x and z
solenoidal, harmonic = helmholtz_hodge_decomposition(vector)
print("Solenoidal:")
print(simplify(solenoidal))
print("Harmonic:")
print(simplify(harmonic))

# Create functions from the solenoidal and harmonic parts
solenoidal_part = lambdify((x, z), simplify(solenoidal), 'numpy')
harmonic_part = lambdify((x, z), simplify(harmonic), 'numpy')
Expecting: to conduct a Helmholtz decomposition on a vector V(x,z)=[f(x,z),0,0] where f(x,z) is a function defined earlier, i.e. to get the solenoidal and harmonic parts of vector V as S(x,z)=[S1(x,z),S2(x,z),S3(x,z)] and H(x,z)=[H1(x,z),H2(x,z),H3(x,z)], with S and H satisfying the condition V=S+H, which translates to (S1+H1=f, S2+H2=0, S3+H3=0).

python algorithm for solving systems of equations without

I need algorithm, that solve systems like this:
Example 1:
5x - 6y = 0 <--- line
(10- x)**2 + (10- y)**2 = 2 <--- circle
Solution:
find y:
(10- 6/5*y)**2 + (10- y)**2 = 2
100 - 24y + 1.44y**2 + 100 - 20y + y**2 = 2
2.44y**2 - 44y + 198 = 0
D = b**2 - 4ac
D = 44*44 - 4*2.44*198 = 3.52
y[1,2] = (-b+-sqrt(D))/2a
y[1,2] = (44+-1.8761)/4.88 = 9.4008 , 8.6319
find x:
(10- x)**2 + (10- 5/6*x)**2 = 2
100 - 20x + x**2 + 100 - 5/6*20*x + (5/6*x)**2 = 2
1.6944x**2 - 36.6666x + 198 = 0
D = b**2 - 4ac
D = 36.6666*36.6666 - 4*1.6944*198 = 2.4747
x[1,2] = (-b+-sqrt(D))/2a
x[1,2] = (36.6666+-1.5731)/3.3888 = 11.2841 , 10.3557
My skills are not enough to write this algorithm, please help.
I also need another algorithm that solves this system:
5x - 6y = 0 <--- line
|-10 - x| + |-10 - y| = 2 <--- rhombus
As the answer here I need two x values and two y values.
You can use sympy, Python's symbolic math library.
Solutions for fixed parameters
from sympy import symbols, Eq, solve
x, y = symbols('x y', real=True)
eq1 = Eq(5 * x - 6 * y, 0)
eq2 = Eq((10 - x) ** 2 + (10 - y) ** 2, 2)
solutions = solve([eq1, eq2], (x, y))
print(solutions)
for x, y in solutions:
    print(f'{x.evalf()}, {y.evalf()}')
This leads to two solutions:
[(660/61 - 6*sqrt(22)/61, 550/61 - 5*sqrt(22)/61),
(6*sqrt(22)/61 + 660/61, 5*sqrt(22)/61 + 550/61)]
10.3583197613288, 8.63193313444070
11.2810245009662, 9.40085375080520
The other equations work very similarly:
eq1 = Eq(5 * x - 6 * y, 0)
eq2 = Eq(Abs(-10 - x) + Abs(-10 - y), 2)
leading to:
[(-12, -10),
(-108/11, -90/11)]
-12.0000000000000, -10.0000000000000
-9.81818181818182, -8.18181818181818
Dealing with arbitrary parameters
For your new question of how to deal with arbitrary parameters, sympy can help find formulas, at least when the structure of the equations is fixed:
from sympy import symbols, Eq, Abs, solve
x, y = symbols('x y', real=True)
a, b, xc, yc = symbols('a b xc yc', real=True)
r = symbols('r', real=True, positive=True)
eq1 = Eq(a * x - b * y, 0)
eq2 = Eq((xc - x) ** 2 + (yc - y) ** 2, r ** 2)
solutions = solve([eq1, eq2], (x, y))
Studying the generated solutions, some complicated expressions are repeated. Those can be substituted by auxiliary variables. Note that this step isn't necessary, but it helps a lot in making sense of the solutions. Also note that substitution in sympy often only considers quite literal replacements. That's why the introduction of c below is done in two steps:
c, d = symbols('c d', real=True)
for xi, yi in solutions:
    print(xi.subs(a ** 2 + b ** 2, c)
            .subs(r ** 2 * a ** 2 + r ** 2 * b ** 2, c * r ** 2)
            .subs(-a ** 2 * xc ** 2 + 2 * a * b * xc * yc - b ** 2 * yc ** 2 + c * r ** 2, d)
            .simplify())
    print(yi.subs(a ** 2 + b ** 2, c)
            .subs(r ** 2 * a ** 2 + r ** 2 * b ** 2, c * r ** 2)
            .subs(-a ** 2 * xc ** 2 + 2 * a * b * xc * yc - b ** 2 * yc ** 2 + c * r ** 2, d)
            .simplify())
Which gives the formulas:
x1 = b*(a*yc + b*xc - sqrt(d))/c
y1 = a*(a*yc + b*xc - sqrt(d))/c
x2 = b*(a*yc + b*xc + sqrt(d))/c
y2 = a*(a*yc + b*xc + sqrt(d))/c
These formulas can then be converted to regular Python code without needing sympy. That code will only work for a line and a circle, but with arbitrary parameters. Some tests need to be added around it, such as whether c == 0 (meaning the line is just a dot), and whether d is zero, positive or negative.
The stand-alone code could look like:
import math

def give_solutions(a, b, xc, yc, r):
    # intersection between a line a*x - b*y == 0 and a circle with center (xc, yc) and radius r
    c = a ** 2 + b ** 2
    if c == 0:
        print("degenerate line equation given")
    else:
        d = -a**2 * xc**2 + 2*a*b * xc*yc - b**2 * yc**2 + c * r**2
        if d < 0:
            print("no solutions")
        elif d == 0:
            print("1 solution:")
            print(f"  x1 = {b*(a*yc + b*xc)/c}")
            print(f"  y1 = {a*(a*yc + b*xc)/c}")
        else:  # d > 0
            print("2 solutions:")
            sqrt_d = math.sqrt(d)
            print(f"  x1 = {b*(a*yc + b*xc - sqrt_d)/c}")
            print(f"  y1 = {a*(a*yc + b*xc - sqrt_d)/c}")
            print(f"  x2 = {b*(a*yc + b*xc + sqrt_d)/c}")
            print(f"  y2 = {a*(a*yc + b*xc + sqrt_d)/c}")
For the rhombus, sympy doesn't seem to be able to work well with abs in the equations. However, you could use equations for the 4 sides, and test whether the obtained intersections are inside the range of the rhombus. (The four sides would be obtained by replacing abs with either + or -, giving four combinations.)
Working this out further is far beyond the reach of a typical Stack Overflow answer, especially as you seem to ask for an even more general solution.
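Still, as a rough sketch of the four-sides idea for the concrete numbers in the question (not a general solution), you could solve the four sign combinations and keep only the roots that are consistent with the chosen signs:
from sympy import symbols, Eq, solve

x, y = symbols('x y', real=True)
line = Eq(5 * x - 6 * y, 0)

solutions = set()
# Replace |-10 - x| + |-10 - y| = 2 by its four sign combinations and
# keep only roots where both chosen signs match the actual signs.
for sx in (1, -1):
    for sy in (1, -1):
        side = Eq(sx * (-10 - x) + sy * (-10 - y), 2)
        for sol in solve([line, side], (x, y), dict=True):
            xv, yv = sol[x], sol[y]
            if sx * (-10 - xv) >= 0 and sy * (-10 - yv) >= 0:
                solutions.add((xv, yv))

print(solutions)  # expected: {(-12, -10), (-108/11, -90/11)}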

Lyapunov Spectrum for known ODEs - Python 3 [closed]

I want to numerically compute the Lyapunov spectrum of the Lorenz system by using the standard method which is described in this paper, p. 81.
One basically needs to integrate the Lorenz system and the tangential vectors (I used the Runge-Kutta method for this). The evolution equation of the tangential vectors is given by the Jacobian matrix of the Lorenz system. After each iteration one needs to apply the Gram-Schmidt scheme to the vectors and store their lengths. The three Lyapunov exponents are then given by the time averages of the logarithms of the stored lengths.
I implemented the above scheme in Python (version 3.7.4), but I don't get the correct results.
I think the bug lies in the RK4 method for the tangential vectors, but I cannot find any error... The RK4 method for the trajectories x, y, z works correctly (indicated by the plot) and the Gram-Schmidt scheme is also implemented correctly.
I hope that someone could look through my short code and maybe find my error.
Edit: Updated Code
from numpy import array, arange, zeros, dot, log
import matplotlib.pyplot as plt
from numpy.linalg import norm

# Evolution equation of trajectories and tangential vectors
def f(r):
    x = r[0]
    y = r[1]
    z = r[2]
    fx = sigma * (y - x)
    fy = x * (rho - z) - y
    fz = x * y - beta * z
    return array([fx, fy, fz], float)

def jacobian(r):
    M = zeros([3, 3])
    M[0, :] = [-sigma, sigma, 0]
    M[1, :] = [rho - r[2], -1, -r[0]]
    M[2, :] = [r[1], r[0], -beta]
    return M

def g(d, r):
    dx = d[0]
    dy = d[1]
    dz = d[2]
    M = jacobian(r)
    dfx = dot(M, dx)
    dfy = dot(M, dy)
    dfz = dot(M, dz)
    return array([dfx, dfy, dfz], float)

# Initial conditions
d = array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
r = array([19.0, 20.0, 50.0], float)
sigma, rho, beta = 10, 45.92, 4.0

T = 10**5    # time steps
dt = 0.01    # time increment
Teq = 10**4  # Transient time

l1, l2, l3 = 0, 0, 0  # Lengths
xpoints, ypoints, zpoints = [], [], []

# Transient
for t in range(Teq):
    # RK4 - Method
    k1 = dt * f(r)
    k11 = dt * g(d, r)
    k2 = dt * f(r + 0.5 * k1)
    k22 = dt * g(d + 0.5 * k11, r + 0.5 * k1)
    k3 = dt * f(r + 0.5 * k2)
    k33 = dt * g(d + 0.5 * k22, r + 0.5 * k2)
    k4 = dt * f(r + k3)
    k44 = dt * g(d + k33, r + k3)
    r += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    d += (k11 + 2 * k22 + 2 * k33 + k44) / 6

    # Gram-Schmidt-Scheme
    orth_1 = d[0]
    d[0] = orth_1 / norm(orth_1)
    orth_2 = d[1] - dot(d[1], d[0]) * d[0]
    d[1] = orth_2 / norm(orth_2)
    orth_3 = d[2] - (dot(d[2], d[1]) * d[1]) - (dot(d[2], d[0]) * d[0])
    d[2] = orth_3 / norm(orth_3)

for t in range(T):
    k1 = dt * f(r)
    k11 = dt * g(d, r)
    k2 = dt * f(r + 0.5 * k1)
    k22 = dt * g(d + 0.5 * k11, r + 0.5 * k1)
    k3 = dt * f(r + 0.5 * k2)
    k33 = dt * g(d + 0.5 * k22, r + 0.5 * k2)
    k4 = dt * f(r + k3)
    k44 = dt * g(d + k33, r + k3)
    r += (k1 + 2 * k2 + 2 * k3 + k4) / 6
    d += (k11 + 2 * k22 + 2 * k33 + k44) / 6

    orth_1 = d[0]  # Gram-Schmidt-Scheme
    l1 += log(norm(orth_1))
    d[0] = orth_1 / norm(orth_1)
    orth_2 = d[1] - dot(d[1], d[0]) * d[0]
    l2 += log(norm(orth_2))
    d[1] = orth_2 / norm(orth_2)
    orth_3 = d[2] - (dot(d[2], d[1]) * d[1]) - (dot(d[2], d[0]) * d[0])
    l3 += log(norm(orth_3))
    d[2] = orth_3 / norm(orth_3)

# Correct Solution (2.16, 0.0, -32.4)
lya1 = l1 / (dt * T)
lya2 = l2 / (dt * T) - lya1
lya3 = l3 / (dt * T) - lya1 - lya2
lya1, lya2, lya3
# my solution T = 10^5 : (1.3540301507934012, -0.0021967491623752448, -16.351653561383387)
The above code has been updated according to Lutz's suggestions.
The results look much better, but they are still not 100% accurate.
Correct solution: (2.16, 0.0, -32.4)
My solution: (1.3540301507934012, -0.0021967491623752448, -16.351653561383387)
The correct solutions are from Wolf's paper, p. 289. On pages 290-291 he describes his method, which looks exactly the same as in the paper that I mentioned at the beginning of this post (paper, p. 81).
So there must be another error in my code...
You need to solve the system of point and Jacobian as the (forward) coupled system that it is. In the original source exactly that is done: everything is updated in one RK4 call for the combined system.
So, for instance, in the second stage you would mix the operations to get a combined second stage:
k2 = dt * f(r + 0.5 * k1)
M = jacobian(r + 0.5 * k1)
k22 = dt * g(d + 0.5 * k11, r + 0.5 * k1)
You could also delegate the computation of M to the inside of the g function, as this is the only place where it is needed; that also increases the locality of the variables.
Note that I changed the update of d from k1 to k11, which should be the main source of the error in the numerical result.
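To make the coupling explicit, here is one possible way (a sketch, not the poster's original code) to package the combined step, reusing the f and g functions from the question:
def rhs(r, d):
    # combined right-hand side: trajectory and tangent vectors together
    return f(r), g(d, r)

def rk4_step(r, d, dt):
    # every stage of the tangent-vector update uses the matching intermediate point
    k1r, k1d = rhs(r, d)
    k2r, k2d = rhs(r + 0.5 * dt * k1r, d + 0.5 * dt * k1d)
    k3r, k3d = rhs(r + 0.5 * dt * k2r, d + 0.5 * dt * k2d)
    k4r, k4d = rhs(r + dt * k3r, d + dt * k3d)
    r_new = r + dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6
    d_new = d + dt * (k1d + 2 * k2d + 2 * k3d + k4d) / 6
    return r_new, d_new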
Additional remarks on the last code version (2/28/2021):
As said in the comments, the code looks like it will do what the mathematics of the algorithm prescribes. There are two misreadings that prevent the code from returning a result close to the reference:
The parameter in the paper is sigma=16.
The paper does not use the natural logarithm but the binary one; that is, the magnitude evolution is given as 2^(L_i*t). So you have to divide the computed exponents by log(2).
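In terms of the variables in the question's code, those two corrections amount to something like:
sigma, rho, beta = 16, 45.92, 4.0  # sigma = 16, as in the paper

# ... run the integration loops as before, then convert the exponents
# from natural log to the paper's base-2 log:
lya1_bits = lya1 / log(2)
lya2_bits = lya2 / log(2)
lya3_bits = lya3 / log(2)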
Using the method derived in https://scicomp.stackexchange.com/questions/36013/numerical-computation-of-lyapunov-exponent I get the exponents
[2.1531855610566595, -0.00847304754613621, -32.441308372177566]
which is sufficiently close to the reference (2.16, 0.0, -32.4).

Pythonic Way to Solve this Matrix

I've been thinking on this problem, but I can't seem to wrap my head around it.
I want to solve a matrix of three equations with unknowns x, y, z so that they all equal the same number.
Lets say my equations are:
x + 3 = A
y(2y - 2) = 2A
z(4z - 1) = A
So I can construct a matrix looking like:
[(x + 3),     0,        0    ] [0]   [ A]
[   0,     (2y - 2),    0    ] [y] = [2A]
[   0,        0,     (4z - 1)] [z]   [ A]
I know numpy has a linear algebra module, but that only works when the answer (A) is already known.
My question is, would I have to construct a loop to brute-force the answer for (A), or is there a more pythonic way of answering this series of equations?
Linear algebra can only solve for multiples of your variables, not powers (that is why it is called linear, i.e. the equation of a straight line, Ax + By + Cz = 0).
For this set of equations you can use the quadratic formula to solve in terms of a:
x + 3 = a => x = a - 3
y * (y - 1) = a => y**2 - y - a = 0
y = (1 +/- (1 + 4*a) ** 0.5) / 2
= 0.5 +/- (0.25 + a) ** 0.5
(a >= -0.25 for real roots)
z * (4*z - 1) = a => 4 * z**2 - z - a = 0
z = (1 +/- (1 + 16*a) ** 0.5) / 8
= 0.125 +/- (0.015625 + 0.25*a) ** 0.5
(a >= -0.0625 for real roots)
then
def solve(a):
    assert a >= -0.0625, "No real solution"
    x = a - 3

    yoffs = (0.25 + a) ** 0.5
    ylo = 0.5 - yoffs
    yhi = 0.5 + yoffs

    zoffs = (0.015625 + 0.25 * a) ** 0.5
    zlo = 0.125 - zoffs
    zhi = 0.125 + zoffs

    return [
        (x, ylo, zlo),
        (x, ylo, zhi),
        (x, yhi, zlo),
        (x, yhi, zhi)
    ]
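For example, calling it with an arbitrary value such as a = 2 lists the four real (x, y, z) combinations:
for combo in solve(2):
    print(combo)
# x = -1, y is -1.0 or 2.0, z is roughly -0.593 or 0.843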
You do not have a system of 3 equations with 3 unknowns. You have a system of 3 equations with 4 unknowns: x, y, z and A.
That means your answer will be parameterized on A, because you do not have enough equations to solve for all unknowns.
Solving a general system of polynomial equations can be done by the so-called Groebner basis approach, which is what sympy uses. Here is a snippet on how to use the library to solve this or similar problems:
from sympy.solvers.polysys import solve_poly_system
from sympy.abc import x, y, z, A
f1 = x + 3 - A
f2 = y * (2 * y - 2) - 2 * A
f3 = z * (4 * z - 1) - A
solve_poly_system([f1, f2, f3], x, y, z)
# Outputs:
# [(A - 3, -sqrt(4*A + 1)/2 + 1/2, -sqrt(16*A + 1)/8 + 1/8),
# (A - 3, -sqrt(4*A + 1)/2 + 1/2, sqrt(16*A + 1)/8 + 1/8),
# (A - 3, sqrt(4*A + 1)/2 + 1/2, -sqrt(16*A + 1)/8 + 1/8),
# (A - 3, sqrt(4*A + 1)/2 + 1/2, sqrt(16*A + 1)/8 + 1/8)]
As you can see, the result requires fixing the value of A to be fully determined.
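Continuing the snippet above, one way (a small sketch, not part of the original answer) to get numeric values is to substitute a concrete A into those tuples and evaluate:
# e.g. pin A to 2 and evaluate each solution tuple numerically
for sol in solve_poly_system([f1, f2, f3], x, y, z):
    print([expr.subs(A, 2).evalf() for expr in sol])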

Intersections between Geodesics (shortest distance paths) on the surface of a sphere

I've searched far and wide but have yet to find a suitable answer to this problem. Given two lines on a sphere, each defined by their start and end points, determine whether or not and where they intersect. I've found this site (http://mathforum.org/library/drmath/view/62205.html) which runs through a good algorithm for the intersections of two great circles, although I'm stuck on determining whether the given point lies along the finite section of the great circles.
I've found several sites which claim they've implemented this, including some questions here and on Stack Exchange, but they always seem to reduce back to the intersections of two great circles.
The python class I'm writing is as follows and seems to almost work:
class Geodesic(Boundary):
    def _SecondaryInitialization(self):
        self.theta_1 = self.point1.theta
        self.theta_2 = self.point2.theta
        self.phi_1 = self.point1.phi
        self.phi_2 = self.point2.phi
        sines = math.sin(self.phi_1) * math.sin(self.phi_2)
        cosines = math.cos(self.phi_1) * math.cos(self.phi_2)
        self.d = math.acos(sines - cosines * math.cos(self.theta_2 - self.theta_1))
        self.x_1 = math.cos(self.theta_1) * math.cos(self.phi_1)
        self.x_2 = math.cos(self.theta_2) * math.cos(self.phi_2)
        self.y_1 = math.sin(self.theta_1) * math.cos(self.phi_1)
        self.y_2 = math.sin(self.theta_2) * math.cos(self.phi_2)
        self.z_1 = math.sin(self.phi_1)
        self.z_2 = math.sin(self.phi_2)
        self.theta_wraps = (self.theta_2 - self.theta_1 > PI)
        self.phi_wraps = ((self.phi_1 < self.GetParametrizedCoords(0.01).phi and
            self.phi_2 < self.GetParametrizedCoords(0.99).phi) or (
            self.phi_1 > self.GetParametrizedCoords(0.01).phi) and
            self.phi_2 > self.GetParametrizedCoords(0.99))

    def Intersects(self, boundary):
        A = self.y_1 * self.z_2 - self.z_1 * self.y_2
        B = self.z_1 * self.x_2 - self.x_1 * self.z_2
        C = self.x_1 * self.y_2 - self.y_1 * self.x_2
        D = boundary.y_1 * boundary.z_2 - boundary.z_1 * boundary.y_2
        E = boundary.z_1 * boundary.x_2 - boundary.x_1 * boundary.z_2
        F = boundary.x_1 * boundary.y_2 - boundary.y_1 * boundary.x_2
        try:
            z = 1 / math.sqrt(((B * F - C * E) ** 2 / (A * E - B * D) ** 2)
                + ((A * F - C * D) ** 2 / (B * D - A * E) ** 2) + 1)
        except ZeroDivisionError:
            return self._DealWithZeroZ(A, B, C, D, E, F, boundary)
        x = ((B * F - C * E) / (A * E - B * D)) * z
        y = ((A * F - C * D) / (B * D - A * E)) * z
        theta = math.atan2(y, x)
        phi = math.atan2(z, math.sqrt(x ** 2 + y ** 2))
        if self._Contains(theta, phi):
            return point.SPoint(theta, phi)
        theta = (theta + 2 * PI) % (2 * PI) - PI
        phi = -phi
        if self._Contains(theta, phi):
            return spoint.SPoint(theta, phi)
        return None

    def _Contains(self, theta, phi):
        contains_theta = False
        contains_phi = False
        if self.theta_wraps:
            contains_theta = theta > self.theta_2 or theta < self.theta_1
        else:
            contains_theta = theta > self.theta_1 and theta < self.theta_2
        phi_wrap_param = self._PhiWrapParam()
        if phi_wrap_param <= 1.0 and phi_wrap_param >= 0.0:
            extreme_phi = self.GetParametrizedCoords(phi_wrap_param).phi
            if extreme_phi < self.phi_1:
                contains_phi = (phi < max(self.phi_1, self.phi_2) and
                    phi > extreme_phi)
            else:
                contains_phi = (phi > min(self.phi_1, self.phi_2) and
                    phi < extreme_phi)
        else:
            contains_phi = (phi > min(self.phi_1, self.phi_2) and
                phi < max(self.phi_1, self.phi_2))
        return contains_phi and contains_theta

    def _PhiWrapParam(self):
        a = math.sin(self.d)
        b = math.cos(self.d)
        c = math.sin(self.phi_2) / math.sin(self.phi_1)
        param = math.atan2(c - b, a) / self.d
        return param

    def _DealWithZeroZ(self, A, B, C, D, E, F, boundary):
        if (A - D) is 0:
            y = 0
            x = 1
        elif (E - B) is 0:
            y = 1
            x = 0
        else:
            y = 1 / math.sqrt(((E - B) / (A - D)) ** 2 + 1)
            x = ((E - B) / (A - D)) * y
        theta = (math.atan2(y, x) + PI) % (2 * PI) - PI
        return point.SPoint(theta, 0)

    def GetParametrizedCoords(self, param_value):
        A = math.sin((1 - param_value) * self.d) / math.sin(self.d)
        B = math.sin(param_value * self.d) / math.sin(self.d)
        x = A * math.cos(self.phi_1) * math.cos(self.theta_1) + (
            B * math.cos(self.phi_2) * math.cos(self.theta_2))
        y = A * math.cos(self.phi_1) * math.sin(self.theta_1) + (
            B * math.cos(self.phi_2) * math.sin(self.theta_2))
        z = A * math.sin(self.phi_1) + B * math.sin(self.phi_2)
        new_phi = math.atan2(z, math.sqrt(x**2 + y**2))
        new_theta = math.atan2(y, x)
        return point.SPoint(new_theta, new_phi)
EDIT: I forgot to specify that if two curves are determined to intersect, I then need to have the point of intersection.
A simpler approach is to express the problem in terms of geometric primitive operations like the dot product, the cross product, and the triple product. The sign of the determinant of three vectors u, v, and w tells you which side of the plane spanned by v and w contains u. This enables us to detect when two points are on opposite sides of a plane. That's equivalent to testing whether a great circle segment crosses another great circle. Performing this test twice tells us whether two great circle segments cross each other.
The implementation requires no trigonometric functions, no division, no comparisons with pi, and no special behavior around the poles!
class Vector:
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

def dot(v1, v2):
    return v1.x * v2.x + v1.y * v2.y + v1.z * v2.z

def cross(v1, v2):
    return Vector(v1.y * v2.z - v1.z * v2.y,
                  v1.z * v2.x - v1.x * v2.z,
                  v1.x * v2.y - v1.y * v2.x)

def det(v1, v2, v3):
    return dot(v1, cross(v2, v3))

class Pair:
    def __init__(self, v1, v2):
        self.v1 = v1
        self.v2 = v2

# Returns True if the great circle segment determined by s
# straddles the great circle determined by l
def straddles(s, l):
    return det(s.v1, l.v1, l.v2) * det(s.v2, l.v1, l.v2) < 0

# Returns True if the great circle segments determined by a and b
# cross each other
def intersects(a, b):
    return straddles(a, b) and straddles(b, a)

# Test. Note that we don't need to normalize the vectors.
print(intersects(Pair(Vector(1, 0, 1), Vector(-1, 0, 1)),
                 Pair(Vector(0, 1, 1), Vector(0, -1, 1))))
If you want to initialize unit vectors in terms of angles theta and phi, you can do that, but I recommend immediately converting to Cartesian (x, y, z) coordinates to perform all subsequent calculations.
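Since the edit to the question also asks for the point of intersection: once intersects(a, b) is True, the crossing point is, up to sign, the cross product of the two plane normals. A sketch building on the Vector helpers above (assuming the pair vectors are unit vectors):
import math

def intersection_point(a, b):
    # Assumes intersects(a, b) is True and a.v1, a.v2, b.v1, b.v2 are unit vectors.
    n1 = cross(a.v1, a.v2)   # normal of a's great-circle plane
    n2 = cross(b.v1, b.v2)   # normal of b's great-circle plane
    p = cross(n1, n2)        # lies on both great circles, up to sign
    length = math.sqrt(dot(p, p))
    if length == 0:
        return None          # segments lie on the same great circle
    p = Vector(p.x / length, p.y / length, p.z / length)
    # The crossing point is p or its antipode; pick the one on the same
    # side as the midpoint of segment a.
    mid = Vector(a.v1.x + a.v2.x, a.v1.y + a.v2.y, a.v1.z + a.v2.z)
    if dot(p, mid) < 0:
        p = Vector(-p.x, -p.y, -p.z)
    return p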
Intersection using plane trig can be calculated using the below code in UBasic.
5 'interx.ub adapted from code at
6 'https://rosettacode.org
7 '/wiki/Find_the_intersection_of_two_linesSinclair_ZX81_BASIC
8 'In U Basic by yuji kida https://en.wikipedia.org/wiki/UBASIC
10 XA=48.7815144526:'669595.708
20 YA=-117.2847245001:'2495736.332
30 XB=48.7815093807:'669533.412
40 YB=-117.2901673467:'2494425.458
50 XC=48.7824947147:'669595.708
60 YC=-117.28751374:'2495736.332
70 XD=48.77996737:'669331.214
80 YD=-117.2922957:'2494260.804
90 print "THE TWO LINES ARE:"
100 print "YAB=";YA-XA*((YB-YA)/(XB-XA));"+X*";((YB-YA)/(XB-XA))
110 print "YCD=";YC-XC*((YD-YC)/(XD-XC));"+X*";((YD-YC)/(XD-XC))
120 X=((YC-XC*((YD-YC)/(XD-XC)))-(YA-XA*((YB-YA)/(XB-XA))))/(((YB-YA)/(XB-XA))-((YD-YC)/(XD-XC)))
130 print "Lat = ";X
140 Y=YA-XA*((YB-YA)/(XB-XA))+X*((YB-YA)/(XB-XA))
150 print "Lon = ";Y
160 'print "YCD=";YC-XC*((YD-YC)/(XD-XC))+X*((YD-YC)/(XD-XC))
