In Python, we solve a differential equation OD_H with initial value y0 = od0 at a specific point z, with code similar to the following:
def OD_H(od, z, c, b):
    ....
    return ....

# solve_ivp expects fun(t, y), so the lambda takes z first and od second
od = solve_ivp(lambda z, od: OD_H(od, z, c, b), t_span=[z1, z], y0=[od0])['y'][-1][-1]
or
od = odeint(OD_H, od0, [0, z], args=(c, b))[-1]
So, for example, the solution of OD_H with y0 = 0.69 evaluated at z = 0.153 is 0.59.
My question is: if I have the answer of OD_H = 0.59 and y0 = 0.69, how can I obtain z? It should be 0.153, but suppose we don't know its value and cannot use trial and error to find it.
I appreciate your help.
In this case, you are posing a root-finding problem: define f(z) as the ODE solution evaluated at z minus the desired answer, and look for f(z) = 0.
Because the ODE solver returns arrays of points rather than a callable function, you first need to interpolate the solution points; the interpolant is then used in the root-finding problem.
from scipy.integrate import solve_ivp   # Recommended initial value problem solver
from scipy.interpolate import interp1d  # 1D interpolation
from scipy.optimize import brentq       # Root finding within an interval

exponential_decay = lambda t, y: -0.5 * y  # dy/dt = f(t, y)
t_span = [0, 10]  # Interval of integration
y0 = [2]          # Initial state: y(t=t_span[0]) = 2
desired_answer = 0.59

sol_ode = solve_ivp(exponential_decay, t_span, y0)  # IVP solution
f_sol_ode = interp1d(sol_ode.t, sol_ode.y[0])       # Interpolate the first (only) solution component
brentq(lambda x: f_sol_ode(x) - desired_answer, t_span[0], t_span[1])
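Alternatively (a minimal sketch under the same setup), solve_ivp can build the interpolant for you with dense_output=True, so the interp1d step can be skipped:

sol_ode = solve_ivp(exponential_decay, t_span, y0, dense_output=True)
# sol_ode.sol is a callable interpolant valid over t_span
brentq(lambda x: sol_ode.sol(x)[0] - desired_answer, t_span[0], t_span[1])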
I read in the scipy documentation (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html) that fsolve will "Find the rootS of a function."
But even with a simple example, I obtained only one root:
from scipy.optimize import fsolve

def my_eq(x):
    y = x*x - 1
    return y

roots = fsolve(my_eq, 0.1)
print(roots)

So, is it possible to obtain multiple roots with fsolve in one call?
For other solvers, the doc is clear (https://docs.scipy.org/doc/scipy/reference/optimize.html#id2): they "Find a root of a function".
(N.B. I know that multiple root finding is difficult.)
Apparently, the docs are a bit vague in that respect. The plural "roots" refers to the fact that both scipy.optimize.root and scipy.optimize.fsolve try to find one N-dimensional point x (the root) of a multivariate function F: R^N -> R^N with F(x) = 0.
However, if you want to find multiple roots of your scalar function, you can write it as a multivariate function and pass different initial guesses:
import numpy as np
from scipy.optimize import fsolve

def my_eq(x):
    y1 = x[0]*x[0] - 1
    y2 = x[1]*x[1] - 1
    return np.array([y1, y2])

x0 = [0.1, -0.1]
roots = fsolve(my_eq, x0)
print(roots)
This yields both roots, [1, -1].
scipy.optimize.root and scipy.optimize.fsolve return only a single root near the initial guess x0, unless func is a system of equations. So they cannot return multiple roots at once when given a single scalar equation.
A workaround:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
def func(x):
    f = x**2 - 1
    return f

# find the roots between -5 and 5
v1 = fsolve(func, np.array(5))   # solution closest to x = 5 is 1
v2 = fsolve(func, np.array(-5))  # solution closest to x = -5 is -1
fig, ax = plt.subplots()
t = np.linspace(-5, 5, 200)
ax.plot(t, func(t), label='$y = x^2 -1$')
ax.plot(t, np.zeros(t.shape), label='$y = 0$')
ax.plot(v1, func(v1[0]), "o", label=f'root1 = {v1[0]}')
ax.plot(v2, func(v2[0]), "o", label=f'root2 = {v2[0]}')
ax.legend()
print(f'root1 = {v1[0]}, root2 = {v2[0]}')
plt.show()
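If you want something closer to "multiple roots in one call", one option (a sketch, assuming the roots lie in a known interval) is to sweep several initial guesses and deduplicate the converged results:

guesses = np.linspace(-5, 5, 10)   # avoids x = 0, where f'(x) = 0 and fsolve can stall
candidates = [fsolve(func, g)[0] for g in guesses]
roots = np.unique(np.round(candidates, 6))
print(roots)  # [-1.  1.]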
I am trying to use scipy to numerically solve the following differential equation
x'' + x = \sum_{k=1}^{20} \delta(t - k\pi), \quad x(0) = x'(0) = 0.
Here is the code
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
from sympy import DiracDelta
def f(t):
    total = 0
    for i in range(20):
        total = total + 1.0*DiracDelta(t - (i+1)*np.pi)
    return total

def ode(X, t):
    x = X[0]
    y = X[1]
    dxdt = y
    dydt = -x + f(t)
    return [dxdt, dydt]
X0 = [0, 0]
t = np.linspace(0, 80, 500)
sol = odeint(ode, X0, t)
x = sol[:, 0]
y = sol[:, 1]
plt.plot(t,x, t, y)
plt.xlabel('t')
plt.legend(('x', 'y'))
# phase portrait
plt.figure()
plt.plot(x,y)
plt.plot(x[0], y[0], 'ro')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
However, what I get from Python is the zero solution, which is different from what I get from Mathematica. Here is the Mathematica code (plot omitted):
so = NDSolve[{x''[t] + x[t] == Sum[DiracDelta[t - i Pi], {i, 1, 20}], x[0] == 0, x'[0] == 0}, x[t], {t, 0, 80}]
It seems to me that scipy ignores the Dirac delta function. Where am I wrong? Any help is appreciated.
The Dirac delta is not a function. Writing it as a density in an integral is still only a symbolic representation. As a mathematical object, it is a functional on the space of continuous functions: delta(t0, f) = f(t0), no more, no less.
One can approximate the evaluation, or "sifting", effect of the delta operator by continuous functions. The usual good approximations have the form N*phi(N*t), where N is a large number and phi is a non-negative function, usually of somewhat compact shape, with integral one. Popular examples are box functions, tent functions, the Gauß bell curve, ... So you could take
def tentfunc(t): return max(0, 1 - abs(t))

N = 10.0
def rhs(t): return sum(N*tentfunc(N*(t - (i+1)*np.pi)) for i in range(20))

X0 = [0, 0]
t = np.linspace(0, 80, 1000)
sol = odeint(lambda x, t: [x[1], rhs(t) - x[0]], X0, t,
             tcrit=np.pi*np.arange(21), atol=1e-8, rtol=1e-10)
x, v = sol.T
plt.plot(t, x, t, v)
which gives a clearly nonzero solution (plot omitted).
Note that the density of the t array also influences the accuracy, while the tcrit critical points did not do much.
Another way is to remember that delta is the second derivative of max(0, t), so one can construct a function u that is a second antiderivative of the right-hand side,
def u(t): return sum(np.maximum(0,t-(i+1)*np.pi) for i in range(20))
so that the equation is equivalent to

(x(t) - u(t))'' + x(t) = 0.

Setting y = x - u then gives

y''(t) + y(t) = -u(t),

which now has a continuous right-hand side.
X0 = [0, 0]
t = np.linspace(0, 80, 1000)
sol = odeint(lambda y, t: [y[1], -u(t) - y[0]], X0, t, atol=1e-8, rtol=1e-10)
y, v = sol.T
x = y + u(t)
plt.plot(t, x)
odeint does not handle sympy symbolic objects, and it is unlikely it can ever handle Dirac delta terms.
The best bet is probably to turn the Dirac deltas into jump conditions: assume that the function is continuous at the location of each delta, but that the first derivative jumps there. Integrating over an infinitesimal interval around the location of a delta gives you the condition relating the derivative just left and just right of the delta.
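A minimal sketch of that idea for this problem (assuming unit-strength deltas at t = k*pi, so the velocity jumps by 1 at each of the 20 kicks):

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def free_ode(X, t):
    # homogeneous oscillator x'' + x = 0 between the impulses
    return [X[1], -X[0]]

X = [0.0, 0.0]                   # x(0) = x'(0) = 0
breaks = np.pi*np.arange(0, 26)  # 0, pi, 2*pi, ...; 25*pi ~ 78.5 covers most of [0, 80]
ts, xs = [], []
for t0, t1 in zip(breaks[:-1], breaks[1:]):
    t = np.linspace(t0, t1, 50)
    sol = odeint(free_ode, X, t)
    ts.append(t)
    xs.append(sol[:, 0])
    k = int(round(t1/np.pi))
    jump = 1.0 if k <= 20 else 0.0   # only the first 20 impulses kick the velocity
    X = [sol[-1, 0], sol[-1, 1] + jump]

plt.plot(np.concatenate(ts), np.concatenate(xs))
plt.xlabel('t')
plt.ylabel('x')
plt.show()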
I am trying to solve the following set of coupled boundary value problems:
U'' + a*B' + b*cosh(lambda*z)^{-2}*tanh(lambda*z) = 0,
B'' + c*U' = 0,
T'' = (gamma^{-1} - 1)*(d*(U')^2 + e*(B')^2),
subject to the boundary conditions U(+/- 1/2) = +/- U_0*tanh(lambda/2), B(+/- 1/2) = 0, T(-1/2) = 1, and T(1/2) = 4. I have decomposed this set of equations into a set of first-order differential equations, using the state array [U, U', B, B', T, T']. But solve_bvp is returning an error about a singular Jacobian. When I remove the last two equations, I get a solution for U and B that works fine. However, I am unsure why adding the other two equations causes this issue.
import numpy as np
from scipy.integrate import solve_bvp
import matplotlib.pyplot as plt
%matplotlib inline
alpha = 1E-7
zeta = 8E-3
C_k = 0.01
sigma = 0.005
Q = 30
U_0 = 0.1
gamma = 5/3
theta = 3
def fun(x, y):
    return y[1], -2*U_0*Q**2*(1/np.cosh(Q*x))**2*np.tanh(Q*x) - (alpha/(C_k*sigma))*y[3], y[3], \
           -(1/(C_k*zeta))*y[1], y[4], (1/gamma - 1)*(sigma*y[1]**2 + zeta*alpha*y[3]**2)

def bc(ya, yb):
    return ya[0] + U_0*np.tanh(Q*0.5), yb[0] - U_0*np.tanh(Q*0.5), ya[2], yb[2], ya[4] - 1, yb[4] - 4
x = np.linspace(-0.5, 0.5, 500)
y = np.zeros((6, x.size))
sol = solve_bvp(fun, bc, x, y)
print(sol)
However, the error that I am getting is 'setting an array element with a sequence'. The first function and boundary conditions solve two coupled equations, and I then use these results to evaluate the remaining equation. I have tried writing all of my equations in one function, but this seems to return trivial solutions, i.e. an array full of zeros.
Any help would be appreciated.
When the expressions become larger it is often more helpful to keep the computations human readable instead of compact.
def fun(x, y):
    U, dU, B, dB, T, dT = y
    d2U = -2*U_0*Q**2*(1/np.cosh(Q*x))**2*np.tanh(Q*x) - (alpha/(C_k*sigma))*dB
    d2B = -(1/(C_k*zeta))*dU
    d2T = (1/gamma - 1)*(sigma*dU**2 + zeta*alpha*dB**2)
    return dU, d2U, dB, d2B, dT, d2T
This avoids index errors, as there are no indices in this computation; everything has names close to the original formulas.
Then the solution components (using an initialization with only 5 points, resulting in a refinement to 65 points) plot as follows (figure omitted).
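For reference, a minimal sketch of the call with that coarse initialization (assuming the constants and the bc function from the question above):

x = np.linspace(-0.5, 0.5, 5)   # coarse initial mesh; solve_bvp refines it adaptively
y = np.zeros((6, x.size))
sol = solve_bvp(fun, bc, x, y)
print(sol.status, sol.message)
U = sol.sol(np.linspace(-0.5, 0.5, 200))[0]  # evaluate U on a fine grid for plotting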
I want to solve this differential equation with the given initial conditions:
(3x-1)y''-(3x+2)y'+(6x-8)y=0, y(0)=2, y'(0)=3
The answer should be

y = 2*exp(2*x) - x*exp(-x)

Here is my code:
import numpy as np
import scipy.integrate as spi
import matplotlib.pyplot as plt

def g(y, x):
    y0 = y[0]
    y1 = y[1]
    y2 = (6*x-8)*y0/(3*x-1) + (3*x+2)*y1/(3*x-1)
    return [y1, y2]

init = [2.0, 3.0]
x = np.linspace(-2, 2, 100)
sol = spi.odeint(g, init, x)
plt.plot(x, sol[:, 0])
plt.show()
But what I get is different from the answer.
What have I done wrong?
There are several things wrong here. Firstly, your equation is apparently
(3x-1)y''-(3x+2)y'-(6x-8)y=0; y(0)=2, y'(0)=3
(note the sign of the term in y). For this equation, your analytical solution and definition of y2 are correct.
Secondly, as @Warren Weckesser says, you must pass two components as y to g, namely y[0] (y) and y[1] (y'), and return their derivatives, y' and y''.
Thirdly, your initial conditions are given at x=0, but your x-grid for integration starts at -2. From the docs for odeint, regarding the parameter t in its call signature:

odeint(func, y0, t, args=(), ...)

t : array
A sequence of time points for which to solve for y. The initial
value point should be the first element of this sequence.

So you must either integrate starting at 0 or provide initial conditions at -2.
Finally, your range of integration covers a singularity at x=1/3. odeint may have a bad time here (but apparently doesn't).
Here's one approach that seems to work:
import numpy as np
import scipy as sp
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def g(y, x):
    y0 = y[0]
    y1 = y[1]
    y2 = ((3*x+2)*y1 + (6*x-8)*y0)/(3*x-1)
    return y1, y2
# Initial conditions on y, y' at x=0
init = 2.0, 3.0
# First integrate from 0 to 2
x = np.linspace(0,2,100)
sol=odeint(g, init, x)
# Then integrate from 0 to -2
plt.plot(x, sol[:,0], color='b')
x = np.linspace(0,-2,100)
sol=odeint(g, init, x)
plt.plot(x, sol[:,0], color='b')
# The analytical answer in red dots
exact_x = np.linspace(-2,2,10)
exact_y = 2*np.exp(2*exact_x)-exact_x*np.exp(-exact_x)
plt.plot(exact_x,exact_y, 'o', color='r', label='exact')
plt.legend()
plt.show()
I am studying the dynamics of a damped, driven pendulum with a second-order ODE defined as follows, which I am programming:

d^2y/dt^2 + c * dy/dt + sin(y) = a * cos(w*t)
import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate
def pendeq(t, y, a, c, w):
    y0 = -c*y[1] - np.sin(y[0]) + a*np.cos(w*t)
    y1 = y[1]
    z = np.array([y0, y1])
    return z
a = 2.1
c = 1e-4
w = 0.666667 # driving angular frequency
t = np.array([0, 5000]) # interval of integration
y = np.array([0, 0]) # initial conditions
yp = integrate.quad(pendeq, t[0], t[1], args=(y,a,c,w))
This problem looks quite similar to "Need help solving a second order non-linear ODE in python", but I am getting the error

Supplied function does not return a valid float.

What am I doing wrong?
integrate.quad requires that the supplied function (pendeq, in your case) return a single float: quad performs numerical quadrature of a scalar function, it does not solve ODEs. Your function returns an array.
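A minimal sketch of what was probably intended, using an ODE solver instead (this assumes scipy >= 1.4 for the args keyword; note also that, separately from the quad issue, the derivative must be returned in the order [dy/dt, d^2y/dt^2] to match the state [y, y']):

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def pendeq(t, y, a, c, w):
    # state y = [angle, angular velocity]; return [y', y'']
    return [y[1], -c*y[1] - np.sin(y[0]) + a*np.cos(w*t)]

a, c, w = 2.1, 1e-4, 0.666667
sol = solve_ivp(pendeq, [0, 5000], [0, 0], args=(a, c, w), max_step=0.1)
plt.plot(sol.t, sol.y[0])
plt.xlabel('t')
plt.ylabel('y')
plt.show()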