I am trying to find a common tangent to two curves using Python, but I am not able to solve it.
The equations of the two curves are complicated and involve logarithms.
Is there a way in Python to compute, in general, the x-coordinates of a tangent that is common to both curves? If I have two curves f(x) and g(x), I want to find the x-coordinates x1 and x2 on a common tangent, where x1 lies on f(x) and x2 on g(x). I am trying f'(x1) = g'(x2) and f'(x1) = (f(x1) - f(x2)) / (x1 - x2) to get x1 and x2, but I am not able to get values using nonlinsolve because the equations are too complicated.
I just want to find the x-coordinates of the common tangent.
Can anyone suggest a better way?
import numpy as np
import sympy
from sympy import *
from matplotlib import pyplot as plt
x = symbols('x')
a, b, c, d, e, f = -99322.50019502985, -86864.87072433547, -96876.05627516498, -89703.35055202093, -3390.863799999999, -20942.518
def func(x):
    y1_1 = a - a*x + b*x
    y1_2 = c - c*x + d*x
    c1 = (1 - x) ** (1 - x)
    c2 = (x ** x)
    y2 = 12471 * (sympy.log((c1*c2)))
    y3 = 2*f*x**3 - x**2*(e + 3*f) + x*(e + f)
    eqn1 = y1_1 + y2 + y3
    eqn2 = y1_2 + y2 + y3
    return eqn1, eqn2
val = np.linspace(0, 1)
f1 = sympy.lambdify(x, func(x)[0])(val)
f2 = sympy.lambdify(x, func(x)[1])(val)
plt.plot(val, f1)
plt.plot(val, f2)
plt.show()
I am trying this:
x1, x2 = sympy.symbols('x1 x2')
fun1 = func(x1)[0]
fun2 = func(x2)[0]
diff1 = diff(fun1,x1)
diff2 = diff(fun2,x2)
eq1 = diff1 - diff2
eq2 = diff1 - ((fun1 - fun2) / (x1 - x2))
sol = nonlinsolve([eq1, eq2], [x1, x2])
The first thing that needs to be done is to reduce the formulas.
For example, the first formula is actually this:
formula = x*(1 - x)*(17551.6542 - 41885.036*x) + x*(1 - x)*(41885.036*x - 24333.3818) + 12457.6294706944*x + log((x/(1 - x))**(12000*x)*(1 - x)**12000) - 99322.5001950298
formula = (x-x^2)*(17551.6542 - 41885.036*x) + (x-x^2)*(41885.036*x - 24333.3818) + 12457.6294706944*x + log((x/(1 - x))**(12000*x)*(1 - x)**12000) - 99322.5001950298
# constants
a = 41885.036
b = 17551.6542
c = 24333.3818
d = 12457.6294706944
e = 99322.5001950298
f = 12000
formula = (x-x^2)*(b - a*x) + (x-x^2)*(a*x - c) + d*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
formula = (ax^3 -bx^2 + bx - ax^2) + (x-x^2)*(a*x - c) + d*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
formula = ax^3 -bx^2 + bx - ax^2 -ax^3 + ax^2 + cx^2 -cx + d*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
# collect x terms by power (note how the x^3 term drops out, so it's easier).
formula = (c-b)*x^2 + (b-c+d)*x + log((x/(1 - x))**(f*x)*(1 - x)**f) - e
which is much cleaner and is a quadratic with a log term.
I expect that you can do some work on the log term too, but this is an exercise for the original poster.
Likewise, the second formula can be reduced in the same way, which is again an exercise for the original poster.
From this, both equations need to be differentiated with respect to x to find the tangent. Then set both formulas to be equal to each other (for a common tangent).
This would completely solve the question.
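For reference, the reduction (and the derivative needed for the tangent condition) can be checked with sympy itself. This is a rough standalone sketch that redefines the constants locally; the exact form sympy prints may differ from the hand-worked version above.
import sympy

x = sympy.symbols('x')

# constants from the reduction above
a = 41885.036
b = 17551.6542
c = 24333.3818
d = 12457.6294706944
e = 99322.5001950298
f = 12000

poly = x*(1 - x)*(b - a*x) + x*(1 - x)*(a*x - c) + d*x
log_term = sympy.log((x/(1 - x))**(f*x) * (1 - x)**f)

reduced = sympy.expand(poly) + log_term - e   # polynomial part collapses to (c-b)*x**2 + (b-c+d)*x
print(reduced)
print(sympy.diff(reduced, x))                 # derivative with respect to x, for the tangent condition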
I actually wonder if this is a Python question at all or really a pure maths question...
The important point to note is that, since the derivatives are monotonic, for any value of the derivative of fun1 there is a corresponding point on fun2 with the same derivative. This can easily be seen if you plot both derivatives.
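A quick way to do that plot (a sketch reusing func and the symbol x from the question; df1 and df2 are the same derivatives defined again further below):
import numpy as np
import sympy
from matplotlib import pyplot as plt

df1 = sympy.diff(func(x)[0], x)
df2 = sympy.diff(func(x)[1], x)

xs = np.linspace(0.01, 0.99, 200)   # stay away from x = 0 and x = 1, where the log terms blow up
plt.plot(xs, sympy.lambdify(x, df1)(xs), label="f1'")
plt.plot(xs, sympy.lambdify(x, df2)(xs), label="f2'")
plt.legend(); plt.grid(); plt.show()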
Thus, we want a function that, given an x1, returns an x2 that matches it. I'll use a numerical solution because the system is too cumbersome for an analytical solution.
import scipy.optimize

def find_equal_value(f1, f2, x, x1):
    goal = f1.subs(x, x1)
    to_solve = sympy.lambdify(x, (f2 - goal)**2)  # Quadratic functions tend to be better behaved, and the result is the same
    sol = scipy.optimize.fmin(func=to_solve, x0=x1, ftol=1e-8, disp=False)  # The value for f1 is a good starting guess
    return sol[0]
I used fmin as the solver above because it worked and I knew how to use it by heart. Maybe root_scalar can give better results.
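If you want to try that instead, here is a possible root_scalar variant (a sketch only; it solves f2 - goal = 0 directly and assumes the bracket [lo, hi] contains a sign change, which the monotonicity argument above suggests for reasonable x1 values):
import scipy.optimize

def find_equal_value_root(f1, f2, x, x1, lo=0.01, hi=0.99):
    goal = f1.subs(x, x1)
    residual = sympy.lambdify(x, f2 - goal)   # the root of this is the matching x2
    sol = scipy.optimize.root_scalar(residual, bracket=[lo, hi], method='brentq')
    return sol.root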
Using the function above, let's get some pairs (x1, x2) where the derivatives are equal:
df1 = sympy.diff(func(x)[0])
df2 = sympy.diff(func(x)[1])
x1 = 0.25236537 # Close to the zero derivative
x2 = find_equal_value(df1, df2, x, x1)
print(f'Derivative of f1 in x1: {df1.subs(x, x1)}')
print(f'Derivative of f2 in x2: {df2.subs(x, x2)}')
print(f'Error: {df1.subs(x, x1) - df2.subs(x, x2)}')
This results in:
Derivative of f1 in x1: 0.0000768765858083498
Derivative of f2 in x2: 0.0000681969431752805
Error: 0.00000867964263306931
If you want an x2 for several x1s (beware that in some cases the solver hits a value where the logs are invalid; always check your result for validity):
x1s = np.linspace(0.2, 0.8, 50)
x2s = [find_equal_value(df1, df2, x, x1) for x1 in x1s]
plt.plot(x1s, x2s); plt.grid(); plt.show()
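Regarding that validity warning, a minimal sanity check might be to keep only the pairs with x2 strictly inside (0, 1) and a small derivative mismatch (a sketch using the arrays above; the 1e-4 tolerance is an arbitrary choice):
valid = [(u, v) for u, v in zip(x1s, x2s)
         if 0 < v < 1 and abs(df1.subs(x, u) - df2.subs(x, v)) < 1e-4]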
I am attempting to use the 4th order Yoshida integration technique to model the orbit of satellites in circular orbits around the Earth.
However, the orbits I get spiral away quite quickly. The code for a Moon-like satellite is below. Interestingly, the particles behaved when I used the Euler method, but I wanted to try a more accurate method, so the issue is probably in how I have implemented the algorithm itself.
I have tried using the gravitational parameter rather than computing G*M, but this did not help. I also reduced the time-step, messed around with units, printed and checked values at various integration steps, etc., but could not find anything.
Is this the correct use of this algorithm?
import numpy as np

G = 6.674e-20 # km^3 kg^-1 s^-2
day = 60.0 * 60.0 * 24.0 # length of day in seconds
dt = day / 10.0
M = 5.972e24 # kg
N = 1
delta = np.random.random(1) * 2.0 * np.pi / N
angles = np.linspace(0.0, 2.0 * np.pi, N) + delta
rad = np.random.uniform(low = 384e3, high = 384e3, size = (N))
x, y = rad * np.cos(angles), rad * np.sin(angles)
vx, vy = np.sqrt(G*M / rad) * -np.sin(angles), np.sqrt(G*M / rad) * np.cos(angles)
def update(frame):
    global x, y, vx, vy, dt, day
    positions.set_data(x, y)
    # coefficients
    q = 2**(1/3)
    w1 = 1 / (2 - q)
    w0 = -q * w1
    d1 = w1
    d3 = w1
    d2 = w0
    c1 = w1 / 2
    c2 = (w0 + w1) / 2
    c3 = c2
    c4 = c1
    # Step 1
    x1 = x + c1*vx*dt
    y1 = y + c1*vy*dt
    dist1 = np.hypot(x1, y1)
    acc1 = -(G*M) / (dist1**2.0)
    dx1 = x1 - x
    dy1 = y1 - y
    accx1 = (acc1*dx1)/(x1)
    accy1 = (acc1*dy1)/(y1)
    vx1 = vx + d1*accx1*dt
    vy1 = vy + d1*accy1*dt
    # Step 2
    x2 = x1 + c2*vx1*dt
    y2 = y1 + c2*vy1*dt
    dist2 = np.hypot(x2, y2)
    acc2 = -(G*M) / (dist2**2.0)
    dx2 = x2 - x1
    dy2 = y2 - y1
    accx2 = (acc2*dx2)/(x2)
    accy2 = (acc2*dy2)/(y2)
    vx2 = vx1 + d2*accx2*dt
    vy2 = vy1 + d2*accy2*dt
    # Step 3
    x3 = x2 + c3*vx2*dt
    y3 = y2 + c3*vy2*dt
    dist3 = np.hypot(x3, y3)
    acc3 = -(G*M) / (dist3**2.0)
    dx3 = x3 - x2
    dy3 = y3 - y2
    accx3 = (acc3*dx3)/(x3)
    accy3 = (acc3*dy3)/(y3)
    vx3 = vx2 + d3*accx3*dt
    vy3 = vy2 + d3*accy3*dt
    # Full step
    x = x3 + c4*vx3*dt
    y = y3 + c4*vy3*dt
    vx = vx3
    vy = vy3
    return positions
Can somebody please point me in the right direction...
I need to find the parameters a,b,c,d of two functions:
Y1 = ( (a * X1 + b) * p0 + (c * X2 + d) * p1 ) / (a * X1 + b + c * X2 + d)
Y2 = ( (a * X1 + b) * p2 + (c * X2 + d) * p3 ) / (a * X1 + b + c * X2 + d)
X1, X2 (independent variables) and Y1, Y2 (dependent variables) are observations, i.e. one-dimensional arrays with thousands of entries each.
p0, p1, p2, p3 are known constants (scalars).
I successfully solved the problem with the first function only using a curve fit (see below), but how do I solve the problem for Y1 and Y2 together?
Thank you.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

X = [X1, X2]

def fitFunc(X, a, b, c, d):
    X1, X2 = X
    return ((a * X1 + b) * p0 + (c * X2 + d) * p1) / (a * X1 + b + c * X2 + d)

fitPar, fitCov = curve_fit(fitFunc, X, Y1)
print(fitPar)
One way would be to minimize both of your functions together using scipy.optimize.minimize. In the example below, a function residual is passed a, b, c, and d as initial guesses. Using these guesses, Y1 and Y2 are evaluated, and the mean squared error is computed between the data and the predicted values of the respective functions. The error returned is the mean error of the two functions. The optimized set of parameters is stored in res as res.x.
import numpy as np
from scipy.optimize import minimize

#p0 = ... known
#p1 = ... known
#p2 = ... known
#p3 = ... known

def Y1(X, a, b, c, d):
    X1, X2 = X
    return ((a * X1 + b) * p0 + (c * X2 + d) * p1) / (a * X1 + b + c * X2 + d)

def Y2(X, a, b, c, d):
    X1, X2 = X
    return ((a * X1 + b) * p2 + (c * X2 + d) * p3) / (a * X1 + b + c * X2 + d)

X1 = np.array([X1]) # your X1 array
X2 = np.array([X2]) # your X2 array
X = np.array([X1, X2])
y1_data = np.array([y1_data]) # your y1 data
y2_data = np.array([y2_data]) # your y2 data

def residual(x):
    a = x[0]
    b = x[1]
    c = x[2]
    d = x[3]
    y1_pred = Y1(X, a, b, c, d)
    y2_pred = Y2(X, a, b, c, d)
    err1 = np.mean((y1_data - y1_pred)**2)
    err2 = np.mean((y2_data - y2_pred)**2)
    error = (err1 + err2) / 2
    return error

x0 = [1, 1, 1, 1] # Initial guess for a, b, c, and d respectively
res = minimize(residual, x0, method="Nelder-Mead")
print(res.x)
I want to plot the motion of a double pendulum with a spring in Python. I need to plot theta1, theta2, r, and their first derivatives. I have found my equations of motion, which are second-order ODEs, so I converted them to first-order ODEs with x1 = theta1, x2 = theta1-dot, y1 = theta2, y2 = theta2-dot, z1 = r, and z2 = r-dot. Here is a picture of the double pendulum problem:
Here is my code:
from scipy.integrate import solve_ivp
from numpy import pi, sin, cos, linspace
g = 9.806 #Gravitational acceleration
l0 = 1 #Natural length of spring is 1
k = 2 #K value for spring is 2
OA = 2 #Length OA is 2
m = 1 #Mass of the particles is 1
def pendulumDynamics1(t, x): # Function to solve for theta-1 double-dot
    x1 = x[0]
    x2 = x[1]
    y1 = y[0]
    y2 = y[1]
    z1 = z[0]
    z2 = z[1]
    Fs = -k*(z1-l0)
    T = m*(x2**2)*OA + m*g*cos(x1) + Fs*cos(y1-x1)
    x1dot = x2
    x2dot = (Fs*sin(y1-x1) - m*g*sin(x1))/(m*OA) # angles are in radians
    return [x1dot, x2dot]

def pendulumDynamics2(t, y): # Function to solve for theta-2 double-dot
    x1 = x[0]
    x2 = x[1]
    y1 = y[0]
    y2 = y[1]
    z1 = z[0]
    z2 = z[1]
    Fs = -k*(z1-l0)
    y1dot = y2
    y2dot = (-g*sin(y1) - (Fs*cos(y1-x1)*sin(x1))/m + g*cos(y1-x1)*sin(x1) - x2*z1*sin(x1))/z1
    return [y1dot, y2dot]

def pendulumDynamics3(t, z): # Function to solve for r double-dot (the length AB, which is the spring)
    x1 = x[0]
    x2 = x[1]
    y1 = y[0]
    y2 = y[1]
    z1 = z[0]
    z2 = z[1]
    Fs = -k*(z1-l0)
    z1dot = z2
    z2dot = g*cos(y1) - Fs/m + (y2**2)*z1 + x2*OA*cos(y1-x1) - (Fs*(sin(y1-x1))**2)/m + g*sin(x1)*sin(y1-x1)
    return [z1dot, z2dot]
# Define initial conditions, etc
d2r = pi/180
x0 = [30*d2r, 0] # start from 30 deg, with zero velocity
y0 = [60*d2r, 0] # start from 60 deg, with zero velocity
z0 = [1, 0] #Start from r=1
t0 = 0
tf = 10
#Integrate dynamics, initial value problem
sol1 = solve_ivp(pendulumDynamics1,[t0,tf],x0,dense_output=True) # Save as a continuous solution
sol2 = solve_ivp(pendulumDynamics2,[t0,tf],y0,dense_output=True) # Save as a continuous solution
sol3 = solve_ivp(pendulumDynamics3,[t0,tf],z0,dense_output=True) # Save as a continuous solution
t = linspace(t0,tf,200) # determine solution at these times
dt = t[1]-t[0]
x = sol1.sol(t)
y = sol2.sol(t)
z = sol3.sol(t)
I have 3 functions in my code, one each to solve for x, y, and z. I then use the solve_ivp function to solve for x, y, and z. The error in the code is:
`File "C:\Users\omora\OneDrive\Dokument\AERO 211\project.py", line 13, in pendulumDynamics1
y1 = y[0]
NameError: name 'y' is not defined`
I don't understand why it is saying that y is not defined, because I defined it in my functions.
Your system is closed and frictionless, so it can be captured by the Lagrangian or Hamiltonian formalism. You have 3 position variables and thus a 6-dimensional dynamical state, with the positions complemented either by the velocities or by the momenta (impulses).
Let q_k be theta_1, theta_2, r, let Dq_k be their time derivatives, and let p_k be the momentum variables conjugate to q_k; then the dynamics can be realized by
import numpy as np
from numpy import sin, cos

def DoublePendulumSpring(u, t, params):
    m_1, l_1, m_2, l_2, k, g = params
    q_1, q_2, q_3 = u[:3]
    p = u[3:]
    A = [[l_1**2*(m_1 + m_2), l_1*m_2*q_3*cos(q_1 - q_2), -l_1*m_2*sin(q_1 - q_2)],
         [l_1*m_2*q_3*cos(q_1 - q_2), m_2*q_3**2, 0],
         [-l_1*m_2*sin(q_1 - q_2), 0, m_2]]
    Dq = np.linalg.solve(A, p)
    Dq_1, Dq_2, Dq_3 = Dq
    T1 = Dq_2*q_3*sin(q_1 - q_2) + Dq_3*cos(q_1 - q_2)
    T3 = Dq_1*l_1*cos(q_1 - q_2) + Dq_2*q_3
    Dp = [-l_1*(m_2*Dq_1*T1 + g*(m_1 + m_2)*sin(q_1)),
          l_1*m_2*Dq_1*T1 - g*m_2*q_3*sin(q_2),
          m_2*Dq_2*T3 + g*m_2*cos(q_2) + k*(l_2 - q_3)]
    return [*Dq, *Dp]
For a derivation see the Euler-Lagrange equations and their connection to the Hamilton equations. You might get asked about such a derivation.
This, after suitably defining the parameter tuple and initial conditions, can be fed to odeint and produces a solution that can then be plotted, animated, or otherwise examined. The lower bob traces a path like the one below, not periodic and seemingly chaotic. (The fulcrum and the arc of the upper bob are also drawn, but are less interesting.)
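A minimal sketch of how such an odeint call might look (the parameter values and initial state are illustrative choices mirroring the question's constants, and the lower-bob coordinates assume both angles are measured from the downward vertical):
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

params = (1.0, 2.0, 1.0, 1.0, 2.0, 9.806)   # m_1, l_1, m_2, l_2 (natural spring length), k, g
u0 = [np.radians(30), np.radians(60), 1.0, 0.0, 0.0, 0.0]  # q_1, q_2, q_3 and zero initial momenta
t = np.linspace(0, 10, 2000)

sol = odeint(DoublePendulumSpring, u0, t, args=(params,))

q1, q2, q3 = sol[:, 0], sol[:, 1], sol[:, 2]
xb = 2.0*np.sin(q1) + q3*np.sin(q2)    # lower bob x, with l_1 = 2.0
yb = -2.0*np.cos(q1) - q3*np.cos(q2)   # lower bob y
plt.plot(xb, yb); plt.axis('equal'); plt.show()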
def pendulumDynamics1(t, x):
    x1 = x[0]
    x2 = x[1]
    y1 = y[0]
    y2 = y[1]
    z1 = z[0]
    z2 = z[1]
You only pass x as a parameter. The code inside the function has no idea what y and z refer to.
You will need to change the function signature (and how the function is called) to also include those variables.
def pendulumDynamics1(t, x, y, z):
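For example, solve_ivp can forward extra arguments through its args parameter (scipy >= 1.4). This is only a sketch of that mechanism: it silences the NameError by passing fixed arrays for y and z, but a physically correct solution needs all three pairs coupled in a single state vector, as in the other answer.
from scipy.integrate import solve_ivp

def pendulumDynamics1(t, x, y, z):
    # y and z are now explicit inputs instead of undefined globals
    x1, x2 = x
    y1, y2 = y
    z1, z2 = z
    Fs = -k*(z1 - l0)
    x1dot = x2
    x2dot = (Fs*sin(y1 - x1) - m*g*sin(x1)) / (m*OA)
    return [x1dot, x2dot]

sol1 = solve_ivp(pendulumDynamics1, [t0, tf], x0, args=(y0, z0), dense_output=True)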
So I saw this code for RK4 on Stack Overflow and found it very useful. However, I cannot figure out a way to plot each y value at each increment (h) of x.
def f(x, y):
    return 2*x**2 - 4*x + y

def RK4(x0, y0):
    while x0 < b:
        k1 = h*f(x0, y0)
        k2 = h*f(x0 + 0.5*h, y0 + 0.5*k1)
        k3 = h*f(x0 + 0.5*h, y0 + 0.5*k2)
        k4 = h*f(x0 + h, y0 + k3)
        y0 += (k1 + 2*k2 + 2*k3 + k4)/6
        x0 += h
    return y0
b=3
h=0.001
print(RK4(1,0.7182818))
You can append each point to a list as a tuple and then plot the list of tuples. You can find this in the commented code below.
import matplotlib.pyplot as plt

def f(x, y):
    return 2 * x ** 2 - 4 * x + y

def RK4(x0, y0):
    pts = []  # empty list
    while x0 < b:
        k1 = h * f(x0, y0)
        k2 = h * f(x0 + 0.5 * h, y0 + 0.5 * k1)
        k3 = h * f(x0 + 0.5 * h, y0 + 0.5 * k2)
        k4 = h * f(x0 + h, y0 + k3)
        y0 += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x0 += h
        pts.append((x0, y0))  # appending the tuple
    plt.plot(*zip(*pts))  # plotting the list of tuples
    plt.show()
    return y0

b = 3
h = 0.001
print(RK4(1, 0.7182818))
You can see the plot as follows
From a design perspective, it would be preferable if the RK4 code and the plotting code were separated; the numerical solver should not be concerned with how its results are used afterwards.
The next decision is about the construction of the array of x values: it could be passed to the RK4 method, or constructed inside and returned; both have advantages. If speed is a concern, the arrays should be constructed explicitly in their final form (see example on math.SE; a rough sketch of that variant is given after the code below), while for expediency one can also construct them incrementally. Thus the code could be changed to
def RK4(f, x0, y0, xb, dx):
    x, y = [x0], [y0]
    while x0 < xb:
        k1 = dx*f(x0, y0)
        k2 = dx*f(x0 + 0.5*dx, y0 + 0.5*k1)
        k3 = dx*f(x0 + 0.5*dx, y0 + 0.5*k2)
        k4 = dx*f(x0 + dx, y0 + k3)
        y0 += (k1 + 2*k2 + 2*k3 + k4)/6
        x0 += dx
        x.append(x0); y.append(y0)  # for vector y use y0.copy()
    return x, y
and then call as
x,y = RK4(f=f,x0=1.0,y0=0.7182818,xb=3.0,dx=1e-3)
plt.plot(x,y)
#title, axis labels
plt.grid(); plt.show()
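As for constructing the arrays in their final form, a preallocated numpy variant might look roughly like this (a sketch, not taken from the linked math.SE example; it assumes a fixed step dx that divides xb - x0 evenly):
import numpy as np
import matplotlib.pyplot as plt

def RK4_arrays(f, x0, y0, xb, dx):
    n = int(round((xb - x0)/dx))
    x = x0 + dx*np.arange(n + 1)     # all x values constructed up front
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        k1 = dx*f(x[i], y[i])
        k2 = dx*f(x[i] + 0.5*dx, y[i] + 0.5*k1)
        k3 = dx*f(x[i] + 0.5*dx, y[i] + 0.5*k2)
        k4 = dx*f(x[i] + dx, y[i] + k3)
        y[i+1] = y[i] + (k1 + 2*k2 + 2*k3 + k4)/6
    return x, y

x, y = RK4_arrays(f, 1.0, 0.7182818, 3.0, 1e-3)
plt.plot(x, y); plt.grid(); plt.show()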