Bound solutions in scipy ode solver - python

I want to solve a large number of bilinear ODE systems in Python. The derivative is this:
def x_(x, t, growth, connections):
    return x * growth + np.dot(connections, x) * x
I am not interested in very accurate results but in the qualitative behavior, i.e. whether a component goes to zero or not.
Because I have to solve so many high-dimensional systems, I want to use a step size as big as possible.
With big step sizes it can happen that the solution drops below zero in one component. This should not be possible, since (because of the structure of the particular ODE) each component is bounded below by zero. Hence, to prevent wrong results, I would like to set each component manually to zero once it falls below.
Furthermore, in the systems that I want to solve the solutions can blow up. I want to prevent this by setting an upper bound as well, i.e. if a value exceeds the bound it is set back to the value of the bound.
I hope the following pseudo-code makes my goal clear:
for t in range(0, tEnd, dt):
    $ compute x(t) using x(t-dt) $
    x(t) = np.minimum(np.maximum(x(t), 0), upperBound)
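Concretely, a minimal sketch of such a loop (a fixed-step RK4 that clamps every component to [0, upperBound] after each step; dt, upperBound and the argument tuple stand in for my actual values) would look like this:

import numpy as np

def rk4_clamped(x_, x0, t0, tEnd, dt, args=(), upperBound=1.0E2):
    # classic fixed-step RK4; after every step each component is clamped
    # to [0, upperBound], as in the pseudo-code above
    ts = np.arange(t0, tEnd, dt)
    xs = np.empty((len(ts), len(x0)))
    x = np.asarray(x0, dtype=float)
    for i, t in enumerate(ts):
        k1 = x_(x, t, *args)
        k2 = x_(x + dt / 2 * k1, t + dt / 2, *args)
        k3 = x_(x + dt / 2 * k2, t + dt / 2, *args)
        k4 = x_(x + dt * k3, t + dt, *args)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x = np.minimum(np.maximum(x, 0), upperBound)  # the clamping step
        xs[i] = x
    return ts, xs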
I implemented this using a Runge-Kutta algorithm. Everything works fine. Just the performance is bad. Therefore, I would prefer using a pre-implemented method like scipy.integrate.odeint.
However, I have no idea how to set such bounds there. One option I tried was to manipulate the ODE in such a way that the derivative becomes 0 once x is above the bound, and (positive) 1 once x is below 0. In addition, to prevent too large jumps within one time step, I also bounded the derivative:
def x_(x, t, growth, connections, bound):
    return (x > 0) * np.minimum((x < bound) *
                                (x * growth + np.dot(connections, x) * x),
                                bound) + (x < 0)
Though this solution (especially for the zero bound) is very ugly, it would be sufficient if it worked. Unfortunately, it does not. Using odeint
x = scipy.integrate.odeint(x_, x0, timesteps, param)
I get very often one of these two errors:
Repeated convergence failures (perhaps bad Jacobian or tolerances).
Excess work done on this call (perhaps wrong Dfun type).
They may be due to the discontinuities of my manipulated ODE. There are plenty of threads about these error messages on the internet, but they did not help me. For example, increasing the number of allowed steps neither prevented the issue nor is it a good solution for me, since I need to use big step sizes. Passing the Jacobian did not help either.
Looking at the solutions, one can see two types of strange behavior when the errors occur:
The solution blows up to +-1e250 within a single time step (which should be impossible, since dx/dt is bounded).
The solution first reaches the bound but then goes down again (which should be impossible, because x is at the bound and therefore x_ is 0).
I would appreciate any hints on how to solve this issue - no matter whether it is help on
how to prevent the errors in odeint
how to manipulate the ODE properly or on
how to write a very fast ODE solver where I can directly implement my needs.
I thank you in advance!
Edit
I was asked for a minimal example:
import numpy as np
import random as rd
rd.seed()
import scipy.integrate
def simulate(simParam, dim=20, connectivity=.8, conRange=1, threshold=1E-3,
             constGrowing=None):
    """
    Creates the random system matrix and starts a simulation
    """
    x0 = np.zeros(dim, dtype='float') + 1
    connections = np.zeros(shape=(dim, dim), dtype='float')
    growth = np.zeros(dim, dtype='float') + (constGrowing if constGrowing is not None else 0)
    for i in range(dim):
        for j in range(dim):
            if i != j:
                connections[i][j] = rd.uniform(-conRange, conRange)
    tend, step = simParam
    return RK4NumPy(x_, (growth, connections), x0, 0, tend, step)
def x_(x, t, growth, connections, bound):
    """
    Derivative of the ODE
    """
    return (x > 0) * np.minimum((x < bound) *
                                (x * growth + np.dot(connections, x) * x),
                                bound) + (x < 0)
def RK4NumPy(x_, param, x0, t0, tend, step, maxV=1.0E2, silent=True):
    """
    solving method
    """
    param = param + (maxV,)
    timesteps = np.arange(t0 + step, tend, step)
    return scipy.integrate.odeint(x_, x0, timesteps, param)
simulate((300, 0.5))
To see the solution one would have to plot x. With the given parameters I very often get the above-mentioned error:
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.

Related

Performance issue with Scipy's solve_bvp and coupled differential equations

I'm facing a problem while trying to implement the coupled differential equation below (also known as single-mode coupling equation) in Python 3.8.3. As for the solver, I am using Scipy's function scipy.integrate.solve_bvp, whose documentation can be read here. I want to solve the equations in the complex domain, for different values of the propagation axis (z) and different values of beta (beta_analysis).
The problem is that it is extremely slow (not manageable) compared with an equivalent implementation in Matlab using the functions bvp4c, bvpinit and bvpset. Evaluating the first few iterations of both executions, they return the same result, except for the resulting mesh which is a lot greater in the case of Scipy. The mesh sometimes even saturates to the maximum value.
The equation to be solved is shown here below, along with the boundary conditions function.
import h5py
import numpy as np
from scipy import integrate

def coupling_equation(z_mesh, a):
    ka_z = k  # Global
    z_a = z   # Global
    a_p = np.empty_like(a).astype(complex)
    for idx, z_i in enumerate(z_mesh):
        beta_zf_i = np.interp(z_i, z_a, beta_zf)  # Get beta at the desired point of the mesh
        ka_z_i = np.interp(z_i, z_a, ka_z)        # Get ka at the desired point of the mesh
        coupling_matrix = np.empty((2, 2), complex)
        coupling_matrix[0] = [-1j * beta_zf_i, ka_z_i]
        coupling_matrix[1] = [ka_z_i, 1j * beta_zf_i]
        a_p[:, idx] = np.matmul(coupling_matrix, a[:, idx])  # Apply the coupling matrix
    return a_p

def boundary_conditions(a_a, a_b):
    return np.hstack(((a_a[0] - 1), a_b[1]))
Moreover, I couldn't find a way to pass k, z and beta_zf as arguments of the function coupling_equation, given that the fun argument of solve_bvp must be a callable with the parameters (x, y). My approach is to define some global variables, but I would appreciate any help on this too if there is a better solution.
The analysis function which I am trying to code is:
def analysis(k, z, beta_analysis, max_mesh):
    s11_analysis = np.empty_like(beta_analysis, dtype=complex)
    s21_analysis = np.empty_like(beta_analysis, dtype=complex)
    initial_mesh = np.linspace(z[0], z[-1], 10)  # Initial mesh of 10 samples along L
    mesh = initial_mesh
    # a_init must be complex in order to solve the problem in a complex domain
    a_init = np.vstack((np.ones(np.size(initial_mesh)).astype(complex),
                        np.zeros(np.size(initial_mesh)).astype(complex)))
    for idx, beta in enumerate(beta_analysis):
        print(f"Iteration {idx}: beta_analysis = {beta}")
        global beta_zf
        beta_zf = beta * np.ones(len(z))  # Global variable so as to use it in coupling_equation(x, y)
        a = integrate.solve_bvp(fun=coupling_equation,
                                bc=boundary_conditions,
                                x=mesh,
                                y=a_init,
                                max_nodes=max_mesh,
                                verbose=1)
        # mesh = a.x  # Mesh for the next iteration
        # a_init = a.y  # Initial guess for the next iteration, corresponding to the current solution
        s11_analysis[idx] = a.y[1][0]
        s21_analysis[idx] = a.y[0][-1]
    return s11_analysis, s21_analysis
I suspect that the problem has something to do with the initial guess that is passed to the different iterations (see the commented lines inside the loop of the analysis function). I tried to set the solution of one iteration as the initial guess for the following one (which should reduce the time needed by the solver), but it is even slower, which I don't understand. Maybe I missed something, because it is my first time trying to solve differential equations.
The parameters used for the execution are the following:
f2 = h5py.File(r'path/to/file', 'r')
k = np.array(f2['k']).squeeze()
z = np.array(f2['z']).squeeze()
f2.close()
analysis_points = 501
max_mesh = 1e6
beta_0 = 3e2
beta_low = 0  # Lower value of the frequency for the analysis
beta_up = beta_0  # Upper value of the frequency for the analysis
beta_analysis = np.linspace(beta_low, beta_up, analysis_points)
s11_analysis, s21_analysis = analysis(k, z, beta_analysis, max_mesh)
Any ideas on how to improve the performance of these functions? Thank you all in advance, and sorry if the question is not well-formulated, I accept any suggestions about this.
Edit: Added some information about performance and sizing of the problem.
In practice, I can't find a relation that determines the number of times coupling_equation is called; it must be a matter of the internal operation of the solver. I checked the number of calls in one iteration by printing a line, and it was called 133 times (this was one of the fastest iterations). This must be multiplied by the number of iterations over beta. For the analyzed one, the solver returned this:
Solved in 11 iterations, number of nodes 529.
Maximum relative residual: 9.99e-04
Maximum boundary residual: 0.00e+00
The shapes of a and z_mesh are correlated, since z_mesh is a vector whose length corresponds with the size of the mesh, recalculated by the solver each time it calls coupling_equation. Given that a contains the amplitudes of the progressive and regressive waves at each point of z_mesh, the shape of a is (2, len(z_mesh)).
In terms of computation time, I only managed to complete 19 iterations in about 2 hours with Python. The initial iterations were faster, but they start to take more time as the mesh grows, until the mesh saturates at the maximum allowed value. I think this is because of the values of the input coupling coefficients at that point, because it also happens when no loop over beta_analysis is executed (just the solve_bvp call for the intermediate value of beta). In contrast, Matlab managed to return a solution for the entire problem in approximately 6 minutes. If I pass the result of the last iteration as the initial guess (commented lines in the analysis function), the mesh overflows even faster and it is impossible to get more than a couple of iterations.
Based on semi-random inputs, we can see that max_mesh is sometimes reached. This means that coupling_equation can be called with quite big z_mesh and a arrays. The problem is that coupling_equation contains a slow pure-Python loop iterating over each column of the arrays. You can speed the computation up a lot using NumPy vectorization. Here is an implementation:
def coupling_equation_fast(z_mesh, a):
    ka_z = k  # Global
    z_a = z   # Global
    a_p = np.empty(a.shape, dtype=np.complex128)
    beta_zf_i = np.interp(z_mesh, z_a, beta_zf)  # Get beta at the desired points of the mesh
    ka_z_i = np.interp(z_mesh, z_a, ka_z)        # Get ka at the desired points of the mesh
    # Fast manual matrix multiplication
    a_p[0] = (-1j * beta_zf_i) * a[0] + ka_z_i * a[1]
    a_p[1] = ka_z_i * a[0] + (1j * beta_zf_i) * a[1]
    return a_p
This code provides a similar output with semi-random inputs compared to the original implementation but is roughly 20 times faster on my machine.
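If you also want to get rid of the global variables asked about in the question, a variant of this function can take the arrays as explicit parameters and be wrapped in a lambda so that solve_bvp still sees the required (x, y) signature. A sketch (the name coupling_equation_fast_p is just a placeholder):

def coupling_equation_fast_p(z_mesh, a, k_arr, z_arr, beta_zf_arr):
    # same computation as above, but with the parameter arrays passed in explicitly
    a_p = np.empty(a.shape, dtype=np.complex128)
    beta_zf_i = np.interp(z_mesh, z_arr, beta_zf_arr)
    ka_z_i = np.interp(z_mesh, z_arr, k_arr)
    a_p[0] = (-1j * beta_zf_i) * a[0] + ka_z_i * a[1]
    a_p[1] = ka_z_i * a[0] + (1j * beta_zf_i) * a[1]
    return a_p

a = integrate.solve_bvp(fun=lambda x, y: coupling_equation_fast_p(x, y, k, z, beta_zf),
                        bc=boundary_conditions, x=mesh, y=a_init,
                        max_nodes=max_mesh, verbose=1)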
Furthermore, I do not know if max_mesh happens to be big with your inputs too and even if this is normal/intended. It may make sense to decrease the value of max_mesh in order to reduce the execution time even more.

ODE with time-varying coefficients in scipy

I am evaluating a set of ODEs with time varying coefficients
def deriv(y, t, N, coefficients):
    S, I, R = y
    dSdt = -coefficients['beta'](t) * S * I / N
    dIdt = coefficients['beta'](t) * S * I / N - coefficients['gamma'] * I
    dRdt = coefficients['gamma'] * I
    return dSdt, dIdt, dRdt
In particular, I have 'beta' values in a pre-calculated array, of size equal to int(max(t)).
def beta_f(t):
    return mybetas.iloc[int(t)]

coefficients = {'beta': beta_f, 'gamma': 0.1}

# Initial conditions vector
y0 = (S0, I0, R0)

# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, coefficients))
When I run odeint, the derivative is also evaluated for values beyond max(t), raising an index-out-of-bounds error in beta_f.
How to limit the evaluation span for odeint?
Since len(mybetas) == int(max(t)), you can get an out-of-bounds error even for values of t which are not beyond max(t).
For example, mybetas.iloc[int(max(t))] will give you the out-of-bounds error, even though int(max(t)) <= max(t) for positive values of t.
But to your point, odeint does indeed check some values outside of the domain of integration. I had to deal with a problem similar to yours just a few weeks ago, and the following two discussions on stackoverflow were really helpful:
integrate.ode sets t0 values outside of my data range
Solve ODEs with discontinuous input/forcing data
The second link explains why it might be computationally faster to solve the ODE with odeint over each individual integer time step one after the other in a for loop, instead of letting odeint deal with the discontinuities in your derivative caused by jumps in the values of your betas.
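A minimal sketch of that piecewise approach, reusing the deriv function from the question (the helper name is a placeholder, and mybetas is assumed to hold one value per integer time step, as in beta_f):

import numpy as np
from scipy.integrate import odeint

def solve_piecewise(y0, t_max, N, gamma):
    # integrate one unit interval at a time, so each odeint call sees a constant beta
    ys = [np.asarray(y0, dtype=float)]
    for step in range(int(t_max)):
        beta_step = mybetas.iloc[step]
        coeffs = {'beta': (lambda t, b=beta_step: b), 'gamma': gamma}
        segment = odeint(deriv, ys[-1], [step, step + 1], args=(N, coeffs))
        ys.append(segment[-1])
    return np.array(ys)  # solution at t = 0, 1, ..., int(t_max)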
Otherwise, if this is appropriate for your study case, you can interpolate your betas, and let the function beta_f return interpolated values of beta. Of course, you will have to extend the interpolation domain slightly beyond your integration domain, since odeint might want to evaluate the derivative for some t larger than max(t).
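For the interpolation route, a minimal sketch (assuming mybetas holds one value per integer time step; np.interp clamps to the end values outside the known range, so odeint can safely probe slightly beyond max(t)):

import numpy as np

beta_times = np.arange(len(mybetas))
beta_values = np.asarray(mybetas, dtype=float)

def beta_f(t):
    return np.interp(t, beta_times, beta_values)

coefficients = {'beta': beta_f, 'gamma': 0.1}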

Infinite Summation in Python

I have a function for which I need to do an infinite summation (over all the integers) numerically. The summation doesn't always need to converge, as I can change internal parameters. In Pythonic pseudo-code, the function looks like this:
m(g, x, q0) = sum(abs(g(x - n*q0))**2 for n in Integers)
m(g, q0) = minimize(m(g, x, q0) for x in [0, q0])
Using Scipy's integration methods, I was simply flooring n and integrating, for a fixed x:
m(g, x, q0) = integrate.quad(lambda n: abs(g(x - int(n)*q0))**2, -inf, +inf)[0]
This works pretty well, but then I have to minimize over x, and then do another summation on the result, which yields an integral of an optimization of an integral. Pretty much, it takes a really long time.
Do you know of a better way to do the summation that is faster? Hand coding it seemed to go slower.
Currently, I am working with
g(x) = (2/sqrt(3))*pi**(-0.25)*(1 - x**2)*exp(-x**2/2)
but the solution should be general
The paper this comes from is "The Wavelet Transform, Time-Frequency Localization and Signal Analysis" by Daubechies (IEEE 1990)
Thank you
Thanks to all the useful comments, I wrote my own summator that seems to run pretty fast. If anyone has any recommendations to make it better, I will gladly take them.
I will test this on the problem I am working on and once it demonstrates success, I will claim it functional.
import numpy as np

def integers(blk_size=100):
    x = np.arange(0, blk_size)
    while True:
        yield x
        yield -x - 1
        x += blk_size

#
# For convergent summation
# on not necessarily finite sequences;
# processes in blocks which can be any size
# or shape that the function can handle
#
def converge_sum(f, x_strm, eps=1e-5, axis=0):
    total = np.sum(f(next(x_strm)), axis=axis)
    for x_blk in x_strm:
        diff = np.sum(f(x_blk), axis=axis)
        if abs(np.linalg.norm(diff)) <= eps:
            # Converged
            return total + diff
        else:
            total += diff
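A hypothetical usage on the g from above (the x and q0 values are chosen arbitrarily):

import numpy as np

def g(x):
    return (2 / np.sqrt(3)) * np.pi**(-0.25) * (1 - x**2) * np.exp(-x**2 / 2)

q0, x = 1.0, 0.3
m_value = converge_sum(lambda n: np.abs(g(x - n * q0))**2, integers())
print(m_value)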
g(x) is almost certainly your bottleneck. A very quick-and-dirty solution would be to vectorize it to operate on an array of integers, then use np.trapz to estimate the integral using the trapezoid rule:
import numpy as np
# appropriate range and step size depends on how accurate you need to be and how
# quickly the sum converges
xmin = -1000000
xmax = 1000000
dx = 1
x = np.arange(xmin, xmax + dx, dx)
gx = (2 / np.sqrt(3)) * np.pi**(-0.25)*(1 - x**2) * np.exp(-x**2 / 2)
sum_gx = np.trapz(gx, x, dx)
Aside from that, you could re-write g(x) using Cython or numba to speed it up.
There's a chance Numba improves speed significantly - http://numba.pydata.org
It's slightly painful to install but very easy to use. Have a look at:
https://jakevdp.github.io/blog/2015/02/24/optimizing-python-with-numpy-and-numba/
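For instance, a minimal sketch of a Numba-vectorized version of the g(x) from the question (assuming Numba is installed; the summation range and the x, q0 values are arbitrary):

import numpy as np
from numba import vectorize

@vectorize(["float64(float64)"])
def g_fast(x):
    # g(x) from the question, compiled elementwise
    return (2 / np.sqrt(3)) * np.pi**(-0.25) * (1 - x**2) * np.exp(-x**2 / 2)

n = np.arange(-1000000, 1000001, dtype=np.float64)
total = np.sum(np.abs(g_fast(0.3 - n * 1.0))**2)  # x = 0.3, q0 = 1.0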

Integral of Intensity function in python

There is a function which determines the intensity of the Fraunhofer diffraction pattern of a circular aperture... (more information)
The integral of the function over the distance x = [-3.8317, 3.8317] must be about 83.8% (if we assume that I0 is 100), and when you increase the distance to [-13.33, 13.33] it should be about 95%.
But when I compute the integral in Python, the answer is wrong. I don't know what's going wrong in my code :(
from scipy.integrate import quad
from scipy import special as sp
I0 = 100.0
dist = 3.8317
I = quad(lambda x: I0 * (2 * sp.j1(x) / x)**2, -dist, dist)[0]
print(I)
The result of the integral can't be bigger than 100 (I0), because this is the diffraction of I0... I don't know, maybe scaling, maybe the method! :(
The problem seems to be in the function's behaviour near zero. If the function is plotted, it looks smooth:
However, scipy.integrate.quad complains about round-off errors, which seems strange for such a smooth curve. The reason is that the function is not defined at 0 (you are dividing by zero!), so the integration does not go well.
You may use a simpler integration method or do something about your function. You may also be able to integrate it to very close to zero from both sides. However, with these numbers the integral does not look right when looking at your results.
I think I have a hunch what your problem actually is. As far as I remember, the integral you have shown is actually the intensity (power/area) of Fraunhofer diffraction as a function of distance from the center. If you want to integrate the total power within some radius, you will have to do it in two dimensions.
By simple area integration rules you should multiply your function by 2 pi r before integrating (or x instead of r in your case). Then it becomes:
f = lambda r: r * (sp.j1(r) / r)**2
or
f = lambda r: sp.j1(r)**2 / r
or even better:
f = lambda r: r * (sp.j0(r) + sp.jn(2, r))**2
The last form is best as it does not suffer from any singularities. It is based on Jaime's comment on the original answer.
(Note that I omitted a couple of constants.) Now you can integrate it from zero to infinity (no negative radii):
fullpower = quad(f, 1e-9, np.inf)[0]
Then you can integrate from some other radius and normalize by the full intensity:
pwr = quad(f, 1e-9, 3.8317)[0] / fullpower
And you get 0.839 (which is quite close to 84 %). If you try the farther radius (13.33):
pwr = quad(f, 1e-9, 13.33)[0] / fullpower
which gives 0.954.
It should be noted that we introduce a small error by starting the integration from 1e-9 instead of 0. The magnitude of the error can be estimated by trying different values for the starting point. The integration result changes very little between 1e-9 and 1e-12, so they seem to be safe. Of course, you could use, e.g., 1e-30, but then there may be numerical instability in the division. (In this case there isn't, but in general singularities are numerically evil.)
Let us do one more thing:
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0.01, 20, 1000)
intg = np.array([quad(f, 1e-9, xx)[0] for xx in x])
plt.plot(x, intg / fullpower)
plt.grid(True)
plt.show()
And this is what we get:
At least this looks right, the dark fringes of the Airy disk are clearly visible.
What comes to the last part of the question: I0 defines the maximum intensity (the units may be, e.g. W/m2), whereas the integral gives total power (if the intensity is in W/m2, the total power is in W). Setting the maximum intensity to 100 does not guarantee anything about the total power. That is why it is important to calculate the total power.
There actually exists a closed form equation for the total power radiated onto a circular area:
P(x) = P0 ( 1 - J0(x)^2 - J1(x)^2 ),
where P0 is the total power.
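As a quick numerical sanity check (a sketch; the constants were omitted above, so only the fraction matters), this closed form reproduces the percentages from the question:

import scipy.special as sp

def encircled_fraction(x):
    # fraction of the total power inside radius x, i.e. P(x)/P0 from the formula above
    return 1.0 - sp.j0(x)**2 - sp.j1(x)**2

print(encircled_fraction(3.8317))  # ~0.838
print(encircled_fraction(13.33))   # ~0.95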
Note that you also can get a closed form solution for your integration using Sympy:
import sympy as sy
sy.init_printing() # LaTeX like pretty printing in IPython
x,d = sy.symbols("x,d", real=True)
I0=100
dist=3.8317
f = I0*((2*sy.besselj(1,x)/x)**2) # the integrand
F = f.integrate((x, -d, d)) # symbolic integration
print(F.evalf(subs={d:dist})) # numeric evalution
F evaluates to:
1600*d*besselj(0, Abs(d))**2/3 + 1600*d*besselj(1, Abs(d))**2/3 - 800*besselj(1, Abs(d))**2/(3*d)
with besselj(0,r) corresponding to sp.j0(r).
There might be a singularity in the integration algorithm when evaluating the Jacobian at x = 0. You can exclude this point from the integration with the "points" argument:
f = lambda x:( I0*((2*sp.j1(x)/x)**2))
I = quad(f, -dist, dist, points = [0])
I then get the following result (is this your desired result?):
331.4990321315221

How to find all zeros of a function using numpy (and scipy)?

Suppose I have a function f(x) defined between a and b. This function can have many zeros, but also many asymptotes. I need to retrieve all the zeros of this function. What is the best way to do it?
Actually, my strategy is the following:
I evaluate my function on a given number of points
I detect whether there is a change of sign
I find the zero between the points that are changing sign
I verify if the zero found is really a zero, or if this is an asymptote
U = numpy.linspace(a, b, 100)  # evaluate function at 100 different points
c = f(U)
s = numpy.sign(c)
for i in range(100 - 1):
    if s[i] + s[i + 1] == 0:  # opposite signs
        u = scipy.optimize.brentq(f, U[i], U[i + 1])
        z = f(u)
        if numpy.isnan(z) or abs(z) > 1e-3:
            continue
        print('found zero at {}'.format(u))
This algorithm seems to work, except I see two potential problems:
It will not detect a zero that doesn't cross the x axis (for example, in a function like f(x) = x**2). However, I don't think this can occur with the function I'm evaluating.
If the discretization points are too far apart, there could be more than one zero between them, and the algorithm could fail to find them.
Do you have a better strategy (still efficient) to find all the zeros of a function?
I don't think it's important for the question, but for those who are curious, I'm dealing with characteristic equations of wave propagation in optical fiber. The function looks like this (where V and ell are previously defined, and ell is a positive integer):
def f(u):
    w = numpy.sqrt(V**2 - u**2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.jnjn(ell - 1, u)
    kl = scipy.special.jnkn(ell, w)
    kl1 = scipy.special.jnkn(ell - 1, w)
    return jl / (u * jl1) + kl / (w * kl1)
Why are you limited to numpy? Scipy has a package that does exactly what you want:
http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html
One lesson I've learned: numerical programming is hard, so don't do it :)
Anyway, if you're dead set on building the algorithm yourself, the doc page on scipy I linked (takes forever to load, btw) gives you a list of algorithms to start with. One method that I've used before is to discretize the function to the degree that is necessary for your problem. (That is, tune \delta x so that it is much smaller than the characteristic size in your problem.) This lets you look for features of the function (like changes in sign). AND, you can compute the derivative of a line segment (probably since kindergarten) pretty easily, so your discretized function has a well-defined first derivative. Because you've tuned the dx to be smaller than the characteristic size, you're guaranteed not to miss any features of the function that are important for your problem.
If you want to know what "characteristic size" means, look for some parameter of your function with units of length or 1/length. That is, for some function f(x), assume x has units of length and f has no units. Then look for the things that multiply x. For example, if you want to discretize cos(\pi x), the parameter that multiplies x (if x has units of length) must have units of 1/length. So the characteristic size of cos(\pi x) is 1/\pi. If you make your discretization much smaller than this, you won't have any issues. To be sure, this trick won't always work, so you may need to do some tinkering.
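A tiny sketch of that idea for cos(pi*x), whose characteristic size is 1/pi (the numbers are arbitrary):

import numpy as np

dx = 0.01  # much smaller than 1/pi ~ 0.32
x = np.arange(0.0, 10.0, dx)
y = np.cos(np.pi * x)
# indices i where the sign changes between x[i] and x[i+1], i.e. brackets around the zeros
brackets = np.where(np.diff(np.sign(y)) != 0)[0]
zeros_approx = x[brackets] + dx / 2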
I found out it's relatively easy to implement your own root finder using scipy.optimize.fsolve.
Idea: find any zeros in the interval (start, stop), with step size step, by calling fsolve repeatedly with a changing x0. Use a relatively small step size to find all the roots.
It can only search for zeros in one dimension (other dimensions must be fixed). If you have other needs, I would recommend using sympy to calculate the analytical solution.
Note: it may not always find all the zeros, but I saw it give relatively good results. I also put the code in a gist, which I will update if needed.
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt
# Defined below
r = RootFinder(1, 20, 0.01)
args = (90, 5)
roots = r.find(f, *args)
print("Roots: ", roots)
# plot results
u = np.linspace(1, 20, num=600)
fig, ax = plt.subplots()
ax.plot(u, f(u, *args))
ax.scatter(roots, f(np.array(roots), *args), color="r", s=10)
ax.grid(color="grey", ls="--", lw=0.5)
plt.show()
Example output:
Roots: [ 2.84599497 8.82720551 12.38857782 15.74736542 19.02545276]
zoom-in:
RootFinder definition
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt
class RootFinder:
def __init__(self, start, stop, step=0.01, root_dtype="float64", xtol=1e-9):
self.start = start
self.stop = stop
self.step = step
self.xtol = xtol
self.roots = np.array([], dtype=root_dtype)
def add_to_roots(self, x):
if (x < self.start) or (x > self.stop):
return # outside range
if any(abs(self.roots - x) < self.xtol):
return # root already found.
self.roots = np.append(self.roots, x)
def find(self, f, *args):
current = self.start
for x0 in np.arange(self.start, self.stop + self.step, self.step):
if x0 < current:
continue
x = self.find_root(f, x0, *args)
if x is None: # no root found.
continue
current = x
self.add_to_roots(x)
return self.roots
def find_root(self, f, x0, *args):
x, _, ier, _ = fsolve(f, x0=x0, args=args, full_output=True, xtol=self.xtol)
if ier == 1:
return x[0]
return None
Test function
The scipy.special.jnjn does not exist anymore, but I created a similar test function for this case.
def f(u, V=90, ell=5):
    w = np.sqrt(V**2 - u**2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.yn(ell - 1, u)
    kl = scipy.special.kn(ell, w)
    kl1 = scipy.special.kn(ell - 1, w)
    return jl / (u * jl1) + kl / (w * kl1)
The main problem I see with this is whether you can actually find all the roots; as has already been mentioned in the comments, this is not always possible. If you are sure that your function is not completely pathological (sin(1/x) was already mentioned), the next question is what your tolerance is for missing a root or several of them. Put differently, it's about what lengths you are prepared to go to in order to make sure you did not miss any. To the best of my knowledge, there is no general method to isolate all the roots for you, so you'll have to do it yourself. What you show is a reasonable first step already. A couple of comments:
Brent's method is indeed a good choice here.
First of all, deal with the divergences. Since your function has Bessel functions in the denominators, you can first solve for their roots; better yet, look them up in, e.g., Abramowitz and Stegun (Mathworld link). This will be better than the ad hoc grid you're using.
Once you've found two roots or divergences, x_1 and x_2, you can run the search again in the interval [x_1+epsilon, x_2-epsilon]. Continue until no more roots are found (Brent's method is guaranteed to converge to a root, provided there is one).
If you cannot enumerate all the divergences, you might want to be a little more careful when verifying that a candidate is indeed a divergence: given x, don't just check that f(x) is large; check that, e.g., |f(x-epsilon/2)| > |f(x-epsilon)| for several values of epsilon (1e-8, 1e-9, 1e-10, something like that). A small helper along these lines is sketched after this list.
If you want to make sure you don't have roots which simply touch zero, look for the extrema of the function, and for each extremum, x_e, check the value of f(x_e).
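Here is the sketch of that divergence check (the helper name and the epsilon values are just placeholders):

def looks_like_divergence(f, x, eps_values=(1e-8, 1e-9, 1e-10)):
    # near a pole, |f| keeps growing as we step closer to x
    return all(abs(f(x - eps / 2)) > abs(f(x - eps)) for eps in eps_values)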
I've also encountered this problem when solving equations like f(z) = 0 where f was a holomorphic function. I wanted to be sure not to miss any zeros and finally developed an algorithm based on the argument principle.
It helps to find the exact number of zeros lying in a complex domain. Once you know the number of zeros, it is easier to find them. There are, however, two concerns which must be taken into account:
Take care with multiplicity: when solving (z-1)^2 = 0, you'll get two zeros, as z=1 counts twice.
If the function is meromorphic (and thus contains poles), each pole reduces the zero count and breaks the attempt to count them.
