I read in the scipy documentation (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html) that fsolve will "Find the roots of a function" (note the plural "roots"). But even with a simple example, I obtain only one root:
from scipy.optimize import fsolve

def my_eq(x):
    y = x*x - 1
    return y

roots = fsolve(my_eq, 0.1)
print(roots)
So, is it possible to obtain multiple roots with fsolve in one call?
For other solvers, the documentation is clear (https://docs.scipy.org/doc/scipy/reference/optimize.html#id2): they find a root of a function.
(N.B. I know that multiple root finding is difficult)
Apparently, the docs are a bit vague in that respect. The plural "roots" refers to the fact that both scipy.optimize.root and scipy.optimize.fsolve try to find one N-dimensional point x (the root) of a multivariate function F: R^N -> R^N with F(x) = 0.
However, if you want to find multiple roots of your scalar function, you can write it as a multivariate function and pass different initial guesses:
import numpy as np
from scipy.optimize import fsolve

def my_eq(x):
    y1 = x[0]*x[0] - 1
    y2 = x[1]*x[1] - 1
    return np.array([y1, y2])

x0 = [0.1, -0.1]
roots = fsolve(my_eq, x0)
print(roots)
This yields both roots, [1, -1].
scipy.optimize.root and scipy.optimize.fsolve return only a single root, the one found starting from x0, unless func is a system of equations. So they cannot return multiple roots at once when given a single scalar equation.
A workaround:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
def func(x):
    f = x ** 2 - 1
    return f
# find the roots between 5 and -5
v1 = fsolve(func, np.array(5)) # solution closest to x= 5 is 1
v2 = fsolve(func, np.array(-5)) # solution closest to x= -5 is -1
fig, ax = plt.subplots()
t = np.linspace(-5, 5, 200)
ax.plot(t, func(t), label='$y = x^2 -1$')
ax.plot(t, np.zeros(t.shape), label='$y = 0$')
ax.plot(v1[0], func(v1[0]), "o", label=f'root1 = {v1[0]}')
ax.plot(v2[0], func(v2[0]), "o", label=f'root2 = {v2[0]}')
ax.legend()
print(f'root1 = {v1[0]}, root2 = {v2[0]}')
plt.show()
I am trying to use scipy to numerically solve the following differential equation:
x'' + x = \sum_{k=1}^{20} \delta(t - k\pi), \quad x(0) = x'(0) = 0.
Here is the code
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
from sympy import DiracDelta

def f(t):
    sum = 0
    for i in range(20):
        sum = sum + 1.0*DiracDelta(t - (i+1)*np.pi)
    return sum

def ode(X, t):
    x = X[0]
    y = X[1]
    dxdt = y
    dydt = -x + f(t)
    return [dxdt, dydt]

X0 = [0, 0]
t = np.linspace(0, 80, 500)
sol = odeint(ode, X0, t)
x = sol[:, 0]
y = sol[:, 1]
plt.plot(t,x, t, y)
plt.xlabel('t')
plt.legend(('x', 'y'))
# phase portrait
plt.figure()
plt.plot(x,y)
plt.plot(x[0], y[0], 'ro')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
However, what I get from Python is the zero solution, which is different from what I get from Mathematica. Here is the Mathematica code:
so = NDSolve[{x''[t] + x[t] == Sum[DiracDelta[t - i Pi], {i, 1, 20}], x[0] == 0, x'[0] == 0}, x[t], {t, 0, 80}]
It seems to me that scipy ignores the Dirac delta function. Where am I wrong? Any help is appreciated.
Dirac delta is not a function. Writing it as a density in an integral is still only a symbolic representation. As a mathematical object, it is a functional on the space of continuous functions: delta(t0, f) = f(t0), not more, not less.
One can approximate the evaluation, or "sifting" effect of the delta operator by continuous functions. The usual good approximations have the form N*phi(N*t) where N is a large number and phi a non-negative function, usually with a somewhat compact shape, that has integral one. Popular examples are box functions, tent functions, the Gauß bell curve, ... So you could take
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def tentfunc(t): return max(0, 1 - abs(t))

N = 10.0
def rhs(t): return sum(N*tentfunc(N*(t - (i+1)*np.pi)) for i in range(20))

X0 = [0, 0]
t = np.linspace(0, 80, 1000)
sol = odeint(lambda x, t: [x[1], rhs(t) - x[0]], X0, t,
             tcrit=np.pi*np.arange(21), atol=1e-8, rtol=1e-10)

x, v = sol.T
plt.plot(t, x, t, v)
plt.show()
Note that the density of the t array also influences the accuracy, while the tcrit critical points did not do much.
Another way is to remember that the delta distribution is the second derivative of max(0, t), so one can construct a function u that is the second antiderivative of the right-hand side,
def u(t): return sum(np.maximum(0,t-(i+1)*np.pi) for i in range(20))
so that the equation is equivalent to
(x(t) - u(t))'' + x(t) = 0.
Set y = x - u; then
y''(t) + y(t) = -u(t),
which now has a continuous right-hand side.
X0 = [0, 0]
t = np.linspace(0, 80, 1000)
sol = odeint(lambda y, t: [y[1], -u(t) - y[0]], X0, t, atol=1e-8, rtol=1e-10)

y, v = sol.T
x = y + u(t)
plt.plot(t, x)
plt.show()
odeint does not handle sympy symbolic objects, and it is unlikely it will ever handle Dirac delta terms.
The best bet is probably to turn the Dirac deltas into boundary conditions: assume that the function is continuous at the location of each Dirac delta, but that the first derivative jumps. Integrating over an infinitesimal interval around the location of the delta gives you the boundary condition relating the derivative just left and just right of the delta.
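A minimal sketch of that idea (my own illustration, not code from the original posts): between the impulses at t = kπ the system is just x'' + x = 0, and integrating the equation across each impulse shows that the velocity x' jumps by +1 there. So one can integrate segment by segment and apply the jump by hand:

import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def free_osc(t, X):               # x'' + x = 0 between the impulses
    return [X[1], -X[0]]

# segment boundaries: 0, pi, 2*pi, ..., 20*pi, then on to t = 80
breaks = np.concatenate([np.pi*np.arange(21), [80.0]])
X = [0.0, 0.0]                    # x(0) = x'(0) = 0
ts, xs = [], []
for k in range(len(breaks) - 1):
    seg = solve_ivp(free_osc, (breaks[k], breaks[k+1]), X,
                    dense_output=True, rtol=1e-10, atol=1e-10)
    tt = np.linspace(breaks[k], breaks[k+1], 60)
    ts.append(tt)
    xs.append(seg.sol(tt)[0])
    X = seg.y[:, -1].copy()
    if k < len(breaks) - 2:       # impulses sit at t = pi, 2*pi, ..., 20*pi
        X[1] += 1.0               # the velocity jumps by +1 across each delta

plt.plot(np.concatenate(ts), np.concatenate(xs))
plt.show()

This avoids approximating the delta at all; the price is bookkeeping the segments by hand.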
I'm trying to solve the differential equation R'(t)^2 = 1/R with the initial condition R(0) = 0 in Python. I should get the solution R(t) = (3t/2)^(2/3), as this is what I get from Mathematica.
I used the following code in Python:

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

sqrt = np.sqrt

# function that returns dy/dt
def model(y, t):
    #k = 1
    dydt = sqrt(1/y)
    return dydt

# initial condition
y0 = [0.0]

# time points
t = np.linspace(0, 5)

# solve ODE
y = odeint(model, y0, t)

# plot results
plt.plot(t, y)
plt.ylabel('$R/R_0$')
plt.xticks([])
plt.yticks([])
plt.show()
However, I get only 0, as I'm apparently dividing by zero at some point, which is not correct. Could someone point out what I could do to get the solution and plot I am expecting?
Thank you
Your model needs to be changed. From your equation, I gather the derivative should be something like this: https://www.wolframalpha.com/input/?i=R%27%28t%29%5E2+%3D+1%2FR
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

# function that returns dy/dt
def model(y, t):
    #k = 1
    dydt = y**(-1/2)   # R' = sqrt(1/R) = R^(-1/2)
    return dydt

# initial condition
y0 = 0.1

# time points
t = np.linspace(0, 20)

# solve ODE
y = odeint(model, y0, t)

# plot results
plt.plot(t, y)
plt.ylabel('$R$')
plt.xlabel('$t$')
plt.xticks([])
plt.yticks([])
plt.show()
Your starting value is 0, which results in the derivative being zero (y * -1, or simpler -y), which means 0 will get added to your current y-value, so it remains zero for the whole integration. Your code is correct, but your formulation is not.
I see a 1/R in your link, so use that, e.g. dydt = 1/y; this will fail, because it results in division by zero, so don't start at zero, because your derivative is not defined there. There also appears to be a square root somewhere, which you imported but never use.
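Following that advice, a small sketch (my own illustration; the starting value 1e-4 and the time range are my choices): use the right-hand side R' = sqrt(1/R) and start slightly above zero to avoid the singular initial point, then compare against the known solution (3t/2)^(2/3):

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def model(y, t):
    return 1.0/np.sqrt(y)        # R' = sqrt(1/R)

t = np.linspace(1e-6, 5, 200)
y = odeint(model, [1e-4], t)     # start at a small positive value, not at 0

plt.plot(t, y, label='numerical')
plt.plot(t, (1.5*t)**(2.0/3.0), '--', label='$(3t/2)^{2/3}$')
plt.legend()
plt.show()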
In Python, we solve a differential equation OD_H with an initial value y0 = od0 at a specific point z, similar to the following code:
def OD_H(od, z, c, b):
    ....
    return ....

od = solve_ivp(lambda od, z: OD_H(od, z, c, b),
               t_span=[z1, z], y0=[od0])['y'][-1][-1]
or
od = odeint(OD_H, od0, [0, z], args=(c, b))[-1]
So we have:
answer of ODE: OD_H(y0 = 0.69, z = 0.153) = 0.59
My question is: if I now have the answer OD_H = 0.59 and y0 = 0.69, how can I obtain z? It should be 0.153, but suppose we don't know its value and cannot find it by trial and error.
I appreciate your help.
In this case, you are posing a root-finding problem: the function f(x) whose zero you seek is the ODE solution evaluated at x minus the desired answer, so f(x) = 0.
Because the ODE solver returns arrays of points rather than a callable function, you first need to interpolate the solution points. The interpolant is then used in the root-finding problem.
from scipy.integrate import solve_ivp   # Recommended initial value problem solver
from scipy.interpolate import interp1d  # 1D interpolation
from scipy.optimize import brentq       # Root finding method in an interval

exponential_decay = lambda t, y: -0.5 * y  # dy/dt = f(t, y)
t_span = [0, 10]                           # Interval of integration
y0 = [2]                                   # Initial state: y(t=t_span[0])=2
desired_answer = 0.59

sol_ode = solve_ivp(exponential_decay, t_span, y0)   # IVP solution
f_sol_ode = interp1d(sol_ode.t, sol_ode.y[0])        # Build interpolated function
print(brentq(lambda x: f_sol_ode(x) - desired_answer, t_span[0], t_span[1]))
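Alternatively (a sketch reusing the same toy setup from the snippet above), solve_ivp can return a callable interpolant directly via dense_output=True, which skips the separate interp1d step:

# reusing exponential_decay, t_span, y0, desired_answer and the imports from above
sol = solve_ivp(exponential_decay, t_span, y0, dense_output=True)
z = brentq(lambda x: sol.sol(x)[0] - desired_answer, t_span[0], t_span[1])
print(z)   # the time at which the solution equals desired_answer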
I am trying to fit the two equations below using Python's leastsq method, but I am not sure whether this is the right approach. The first equation has an incomplete gamma function in it, while the second one is slightly more complex and, along with an exponential function, contains a term which is obtained using a separate fitting formula.
J_mg = T_incomplete(hw/T_mag)
J_nmg = e^(-hw/T)*g(w,T)
Here g is a function of w and T and is calculated using a given fitting formula.
I am following the steps outlined in this question.
Here is what I have done
import numpy as np
from scipy.optimize import leastsq
from scipy.special import gammaincc
from scipy.special import gamma
from matplotlib.pyplot import plot

# generating data
NPTS = 10
hw = np.linspace(0.5, 10, NPTS)
j1 = np.linspace(0.001, 10, NPTS)
j2 = np.linspace(0.003, 10, NPTS)
T_mag = np.linspace(0.3, 0.5, NPTS)

# defining functions
def calc_gaunt_factor(hw, T):
    fitting_coeff = np.loadtxt('fitting_coeff.txt', skiprows=1)
    # T is in keV
    # K_b = 8.6173303(50)e-5 eV/K
    g = 0
    gamma = 0.0136/T
    theta = hw/T
    A = (np.log10(gamma**2) + 0.5)*0.4
    B = (np.log10(theta) + 1.5)*0.4
    for i in range(11):
        for j in range(11):
            g_ij = fitting_coeff[i][j]*(A**i)*(B**j)
            g = g_ij + g
    return g

def j_w_mag(hw, T_mag):
    order = 0.001
    return np.sqrt(1/T_mag)*gamma(order)*gammaincc(order, hw/T_mag)

def j_w_nonmag(hw, T):
    gamma = 0.0136/T
    theta = hw/T
    return np.sqrt(1/T)*np.exp((-hw)/T)*calc_gaunt_factor(hw, T)

def residual_func(T, T_mag, hw, j1, j2):
    err_unmag = np.nan_to_num(j1 - j_w_nonmag(hw, T))
    err_mag = np.nan_to_num(j2 - j_w_mag(hw, T_mag))
    err = np.concatenate((err_unmag, err_mag))
    return err

par_init = np.array([.35])

best, cov, info, message, ler = leastsq(residual_func, par_init,
                                        args=(T_mag, hw, j1, j2), full_output=True)
print("Best-Fit Parameters:")
print("T=%s" % (best[0]))
I am getting a weird value for my fitting parameter T. Is this the right approach? Thanks.
I have a differential equation of the form
dy(x)/dx = f(y,x)
that I would like to solve for y.
I have an array xs containing all of the values of x for which I need ys.
For only those values of x, I can evaluate f(y,x) for any y.
How can I solve for ys, preferably in python?
MWE
import numpy as np

# these are the only x values that are legal
xs = np.array([0.15, 0.383, 0.99, 1.0001])

# some made up function --- I don't actually have an analytic form like this
def f(y, x):
    if not np.any(np.isclose(x, xs)):
        return np.nan
    return np.sin(y + x**2)

# now I want to know which array of ys satisfies dy(x)/dx = f(y,x)
Assuming you can use something simple like Forward Euler...
Numerical solutions will rely on approximate solutions at previous times. So if you want a solution at t = 1 it is likely you will need the approximate solution at t<1.
My advice is to figure out what step size will allow you to hit the times you need, and then find the approximate solution on an interval containing those times.
import numpy as np

# from your example, the smallest step size required to hit all points would be 0.0001
a = 0        # start point
b = 1.5      # possible end point
h = 0.0001
n = int((b - a)/h) + 1

y = np.zeros(n)
t = np.linspace(a, b, n)
y[0] = 0.1   # initial condition here

for i in range(1, n):
    y[i] = y[i-1] + h*f(y[i-1], t[i-1])   # note the (y, x) argument order of f
Alternatively, you could use an adaptive step method (which I am not prepared to explain right now) to take larger steps between the times you need; a sketch of this option follows below.
Or, you could find an approximate solution over an interval using a coarser mesh and interpolate the solution.
Any of these should work.
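For the adaptive option, a minimal sketch (my own illustration, using the made-up f from the question and the same initial condition 0.1 as the Euler sketch): scipy's adaptive solve_ivp can be told to report the solution exactly at the required times via t_eval. Like the Euler sketch above, this assumes f can also be evaluated at the intermediate points the solver chooses:

import numpy as np
from scipy.integrate import solve_ivp

xs = np.array([0.15, 0.383, 0.99, 1.0001])

# note: solve_ivp uses the (t, y) argument order, the reverse of odeint's (y, t)
sol = solve_ivp(lambda x, y: np.sin(y + x**2), (0.0, xs[-1]), [0.1], t_eval=xs)
ys = sol.y[0]    # solution values exactly at the points in xs
print(ys)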
I think you should first solve the ODE on a regular grid and then interpolate the solution onto your fixed grid. Approximate code for your problem:
import numpy as np
from scipy.integrate import odeint
from scipy import interpolate

xs = np.array([0.15, 0.383, 0.99, 1.0001])

# dy/dx = f(x, y)
def dy_dx(y, x):
    return np.sin(y + x ** 2)

y0 = 0.0                        # initial condition
x = np.linspace(0, 10, 200)     # here you can control the accuracy
sol = odeint(dy_dx, y0, x)

f = interpolate.interp1d(x, np.ravel(sol))
ys = f(xs)
But dy_dx(y, x) should always return something reasonable (not np.nan), since the solver also evaluates it between your grid points.