scipy optimize - fmin Nelder-Mead simplex - python

I'm trying to use the scipy Nelder-Mead simplex search function to find a minimum of a non-linear function. It appears my simplex gets stuck because it starts off with an initial simplex that is too small. Unfortunately, I don't see anywhere in scipy where you can change some of the simplex parameters (e.g. initial simplex size). Is there a way? Am I missing something? Or are there other implementations of the NM simplex?
Thanks

Two suggestions for Nelder-Mead:
1) snap all x to a grid, say .01, inside the function:
x = np.round( x / grid ) * grid
f = ...
This acts as a simple noise filter in high dimensions
(in 2d or 3d, don't bother).
2) start off with the best d+1 of 2d+1 nearby points,
instead of the usual d+1:
import numpy as np

def neard1( func, x, h, verbose=1 ):
    """ eval func at 2d+1 points x, x +- h
        sort
        -> f[ d+1 best values ], X[ d+1 ]
        to start or restart Nelder-Mead
    """
    dim = len(x)
    I = np.eye(dim)
    np.fill_diagonal( I, h )  # h: scalar or vec
    X = x + np.vstack(( np.zeros(dim), I, -I ))
    fnear = np.array([ func( xk ) for xk in X ])  # 2d+1 evaluations
    f0 = fnear[0]
    up = np.argsort( fnear )  # for a vector-valued func, sort |fnear|
    if verbose:
        print( "neard1: f %g +- %s around x %s" % (
            f0, fnear[up] - f0, x ))
    bestd1 = up[:dim+1]
    return fnear[bestd1], X[bestd1]
It's also not a bad idea to look at the neard1() values after Nelder-Mead,
to get an idea of what func() looks like there.
If any neighbors are better than the N-M "best", restart N-M from that new simplex.
(One can alternate neard1, N-M, neard1, N-M: easy but very problem-dependent.)
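A sketch of that alternation, assuming a scipy recent enough that minimize() accepts an initial_simplex option for Nelder-Mead (the driver below is hypothetical, just one way to wire it up):

from scipy.optimize import minimize

def nm_with_restarts( func, x0, h=.1, max_restarts=5 ):
    # alternate Nelder-Mead runs with neard1 probes,
    # restarting N-M from the probe simplex each time
    fnear, X = neard1( func, np.asarray( x0, dtype=float ), h, verbose=0 )
    for _ in range(max_restarts):
        res = minimize( func, X[0], method="Nelder-Mead",
                        options={ "initial_simplex": X } )
        fnear, X = neard1( func, res.x, h, verbose=0 )
        if fnear[0] >= res.fun:  # no neighbor beats the N-M best: done
            return res.x, res.fun
    return X[0], fnear[0]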
How many variables do you have, and how noisy is your function?
Hope this helps

From the reference at http://docs.scipy.org/doc/:
Method Nelder-Mead uses the Simplex algorithm [R123], [R124]. This algorithm has been successful in many applications but other algorithms using the first and/or second derivatives information might be preferred for their better performances and robustness in general.
It may be recommended to use a completely different algorithm, then. Note that:
Method BFGS uses the quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS) [R127] pp. 136. It uses the first derivatives only. BFGS has proven good performance even for non-smooth optimizations. This method also returns an approximation of the Hessian inverse, stored as hess_inv in the OptimizeResult object.
BFGS sounds more robust and faster overall.
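For example, a minimal call (f and x0 being your own objective and starting point):

from scipy.optimize import minimize

res = minimize(f, x0, method='BFGS')  # gradient is approximated numerically if not supplied
print(res.x, res.fun)
print(res.hess_inv)  # the approximate inverse Hessian mentioned above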
ParagonRG

Related

Double antiderivative computation in python

I have the following problem. I have a function f defined in python using numpy functions. The function is smooth and integrable on positive reals. I want to construct the double antiderivative of the function (assuming that both the value and the slope of the antiderivative at 0 are 0) so that I can evaluate it on any positive real smaller than 100.
Definition of antiderivative of f at x:
integrate f(s) with s from 0 to x
Definition of double antiderivative of f at x:
integrate (integrate f(t) with t from 0 to s) with s from 0 to x
The actual form of f is not important, so I will use a simple one for convenience. But please note that even though my example has a known closed form, my actual function does not.
import numpy as np
f = lambda x: np.exp(-x)*x
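For reference, this simple example does have a closed form (integrate by parts twice), which is handy for checking the accuracy of everything below:

# exact double antiderivative of f(x) = x*exp(-x) with value and slope 0 at x = 0:
#     FF(x) = x - 2 + (x + 2)*exp(-x)
aaf_exact = lambda x: x - 2 + (x + 2)*np.exp(-x)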
My solution is to construct the antiderivative as an array using naive numerical integration:
N = 10000
delta = 100/N
xs = np.linspace(0,100,N+1)
vs = f(xs)
avs = np.cumsum(vs)*delta
aavs = np.cumsum(avs)*delta
This of course works but it gives me arrays instead of functions. But this is not a big problem as I can interpolate aavs using a spline to get a function and get rid of the arrays.
from scipy.interpolate import UnivariateSpline
aaf = UnivariateSpline(xs, aavs)
The function aaf is approximately the double antiderivative of f.
The problem is that even though it works, there is quite a bit of overhead before I can get my function, and precision is expensive.
My other idea was to interpolate f by a spline and take the antiderivative of that, however this introduces numerical errors that are too big for what I want to use the function for.
Is there any better way to do that? By better I mean faster without sacrificing accuracy.
Edit: What I hope is possible is to use some kind of Fourier transform to avoid integrating twice. I hope that there is some convenient transform of vs that allows me to multiply the values component-wise with xs and transform back to get the double antiderivative. I played with this a bit, but I got lost.
Edit: I figured out that using the trapezoidal rule instead of a naive sum increases the accuracy quite a bit. Using Simpson's rule should increase the accuracy further, but it's somewhat fiddly to do with numpy arrays.
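In recent scipy the trapezoidal variant is a one-liner via cumulative_trapezoid (called cumtrapz in older versions); a sketch using the xs and vs arrays from above:

from scipy.integrate import cumulative_trapezoid

avs = cumulative_trapezoid(vs, xs, initial=0)    # trapezoidal antiderivative
aavs = cumulative_trapezoid(avs, xs, initial=0)  # trapezoidal double antiderivative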
Edit: As @user202729 rightfully complains, this seems off. The reason it seems off is that I have skipped some details. I explain here why what I say makes sense, but it does not affect my question.
My actual goal is not to find the double antiderivative of f, but to find a transformation of this. I have skipped that because I think it only confuses the matter.
The function f decays exponentially as x approaches 0 or infinity. I am minimizing the numerical error in the integration by starting the sum from 0 and going up to approximately the peak of f. This ensures that the relative error is approximately constant. Then I start from the opposite direction, from some very big x, and go back to the peak. Then I do the same for the antiderivative values.
Then I transform the aavs by another function which is sensitive to numerical errors. Then I find the region where the errors are big (the values oscillate violently) and drop these values. Finally I approximate what I believe are good values by a spline.
Now if I use spline to approximate f, it introduces an absolute error which is the dominant term in a rather large interval. This gets "integrated" twice and it ends up being a rather large relative error in aavs. Then once I transform aavs, I find that the 'good region' has shrunk considerably.
EDIT: The actual form of f is something I'm still looking into. However, it is going to be a generalisation of the lognormal distribution. Right now I am playing with the following family.
I start by defining a generalization of the normal distribution:
import numpy as np
from scipy import special

def pdf_n(params, center=0.0, slope=8):
    scale, min, diff = params
    if diff > 0:
        r = min
        l = min + diff
    else:
        r = min - diff
        l = min
    def retfun(m):
        x = (m - center)/scale
        E = special.expit(slope*x)*(r - l) + l
        return np.exp( -np.power(1 + x*x, E)/2 )
    return np.vectorize(retfun)
It may not be obvious what is happening here, but the result is quite simple. The function decays as exp(-x^(2l)) on the left and as exp(-x^(2r)) on the right. For min=1 and diff=0, this is the normal distribution. Note that this is not normalized. Then I define
g = pdf_n(params)
f = np.vectorize(lambda x: g(np.log(x))/x/area)
where area is the normalization constant.
Note that this is not the actual code I use. I stripped it down to the bare minimum.
You can compute the two np.cumsum passes (and the scalings by delta) at once more efficiently using Numba. This is significantly faster since there is no need for several temporary arrays to be allocated, filled, read again and freed. Here is a naive implementation:
import numpy as np
import numba as nb

@nb.njit('float64[::1](float64[::1], float64)')  # assume vs is contiguous
def doubleAntiderivative_naive(vs, delta):
    res = np.empty(vs.size, dtype=np.float64)
    sum1, sum2 = 0.0, 0.0
    for i in range(vs.size):
        sum1 += vs[i] * delta
        sum2 += sum1 * delta
        res[i] = sum2
    return res
However, a plain running sum is not very good in terms of numerical stability. A Kahan summation is needed to improve the accuracy (or possibly the alternative Kahan–Babuška-Klein algorithm if you are paranoid about accuracy and performance does not matter so much). Note that Numpy uses a pairwise algorithm which is quite good, but far from perfect in terms of accuracy (it is a good compromise between performance and accuracy).
Moreover, delta can be factored out of the summation (i.e. the running sums are kept unscaled and the result is just premultiplied by delta**2).
Here is an implementation using the more accurate Kahan summation:
@nb.njit('float64[::1](float64[::1], float64)')
def doubleAntiderivative_accurate(vs, delta):
    res = np.empty(vs.size, dtype=np.float64)
    delta2 = delta * delta
    sum1, sum2 = 0.0, 0.0
    c1, c2 = 0.0, 0.0
    for i in range(vs.size):
        # Kahan summation of the antiderivative of vs
        y1 = vs[i] - c1
        t1 = sum1 + y1
        c1 = (t1 - sum1) - y1
        sum1 = t1
        # Kahan summation of the double antiderivative of vs
        y2 = sum1 - c2
        t2 = sum2 + y2
        c2 = (t2 - sum2) - y2
        sum2 = t2
        res[i] = sum2 * delta2
    return res
Here is the performance of the approaches on my machine (with an i5-9600KF processor):
Numpy cumsum:   51.3 us
Naive Numba:    11.6 us
Accurate Numba: 37.2 us
Here is the relative error of the approaches (based on the provided input function):
Numpy cumsum:      1e-13
Naive Numba:       5e-14
Accurate Numba:    2e-16
Perfect precision: 1e-16 (assuming 64-bit floats are used)
If f can be easily computed with Numba (which is the case here), then vs[i] can be replaced by direct calls to f (inlined by Numba). This helps to reduce the memory consumption of the computation (N can be huge without saturating your RAM).
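A sketch of that variant (assuming f itself is Numba-compatible, as the example f is; the function names here are made up):

import numpy as np
import numba as nb

@nb.njit
def f_scalar(x):
    return np.exp(-x) * x

@nb.njit('float64[::1](int64, float64)')
def doubleAntiderivative_from_f(N, delta):
    res = np.empty(N + 1, dtype=np.float64)
    sum1, sum2 = 0.0, 0.0
    for i in range(N + 1):
        sum1 += f_scalar(i * delta)  # vs[i] computed on the fly: no big input array
        sum2 += sum1
        res[i] = sum2 * delta * delta
    return res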
As for the interpolation, splines often give good numerical results but they are quite expensive to compute and AFAIK they require the whole array to be computed (each item of the array impacts the whole spline, although some items may have a negligible impact alone). Regarding your needs, you could consider using Lagrange polynomials instead. You should be careful when using Lagrange polynomials on the edges, though. In your case, you can easily solve the numerical divergence issue on the edges by padding the array with the border values (since you know the derivative at each edge of vs is 0). With this method you can apply the interpolation on the fly, which can be good for both performance (typically if the computation is parallelized) and memory usage.
First, I created a version of the code I found more intuitive. Here I multiply cumulative sum values by bin widths. I believe there is a small error in the original version of the code related to the bin width issue.
import numpy as np
f = lambda x: np.exp(-x)*x
N = 1000
xs = np.linspace(0,100,N+1)
domainwidth = ( np.max(xs) - np.min(xs) )
binwidth = domainwidth / N
vs = f(xs)
avs = np.cumsum(vs)*binwidth
aavs = np.cumsum(avs)*binwidth
Next, for visualization here is some very simple plotting code:
import matplotlib
import matplotlib.pyplot as plt
plt.figure()
plt.scatter( xs, vs )
plt.figure()
plt.scatter( xs, avs )
plt.figure()
plt.scatter( xs, aavs )
plt.show()
The first integral matches the known result for the example expression and can be checked on Wolfram.
Below is a simple function that extracts an element from the double antiderivative. Note that int is a bad rounding function. I assume this is what you have implemented already.
def extract_double_antideriv_value(x):
    return aavs[int(x/binwidth)]

singleresult = extract_double_antideriv_value(50.24)
print('singleresult', singleresult)
Whatever full computation steps are required, we need to know them before we can start optimizing. Do you have a million different functions to integrate? If you only need to query a single double anti-derivative many times, your original solution should be fairly ideal.
Symbolic Approximation:
Have you considered approximations to the original function f which have closed-form integrals? You have a limited domain on which the function lives. Perhaps approximate f with a Taylor series (which can be constructed with a known maximum error) and then integrate exactly? (Consider Pade, Taylor, Fourier, Chebyshev, Lagrange (as suggested by another answer), etc.)
Log Tricks:
Another alternative for dealing with spiky errors would be to take the log of your original function. Is f always positive? Is the integration error caused because the neighborhood around the max is very small? If so, you can study ln(f) or even ln(ln(f)) instead. It would really help to understand more what f looks like.
Approximate Integration Tricks:
There exist countless integration tricks in general, which can produce approximate closed-form solutions to otherwise un-doable integrals. A very common one when exponential functions are involved (I think yours is exponential?) is to use Laplace's method. But which trick to pull out of the bag is highly dependent upon the conditions which f satisfies.

Manual implementation of a Support Vector Machine

I am wondering whether there is any article where an SVM (Support Vector Machine) is implemented manually in R or Python.
I do not want to use a built-in function or package.
The example could be very simple in terms of feature space and linearly separable.
I just want to go through the whole process to enhance my understanding.
The answer to this question is rather broad since there are several possible algorithms for training SVMs. Also, packages like LibSVM (available both for Python and R) are open-source, so you are free to check the code inside.
Hereinafter I will consider the Sequential Minimal Optimization (SMO) algorithm by J. Platt, which is implemented in LibSVM. Manually implementing an algorithm which solves the SVM optimization problem is rather tedious, but if this is your first approach to SVMs I'd suggest the following (albeit simplified) version of the SMO algorithm:
http://cs229.stanford.edu/materials/smo.pdf
This lecture is from Prof. Andrew Ng (Stanford) and he shows a simplified version of the SMO algorithm. I don't know what your theoretical background in SVMs is, but let's just say that the major difference is that the Lagrange multipliers pair (alpha_i and alpha_j) is randomly selected whereas in the original SMO algorithm there's a much harder heuristic involved.
In other words, this algorithm is not guaranteed to converge to the global optimum (which, for SVMs, a proper training algorithm always reaches for the dataset at hand), but it can give you a nice introduction to the optimization problem behind SVMs.
This paper does not show any code in R or Python, but the pseudo-code is rather straightforward and easy to implement (~100 lines of code in either Matlab or Python). Also keep in mind that this training algorithm returns alpha (the vector of Lagrange multipliers) and b (the intercept). You might want some additional quantities, such as the actual Support Vectors, but starting from the vector alpha they are rather easy to calculate.
At first, let's suppose you have a LearningSet in which there are as many rows as there are patterns and as many columns as there are features and let also suppose that you have LearningLabels which is a vector with as many elements as there are patterns and this vector (as its name suggests) contains the proper labels for the patterns in LearningSet.
Also recall that alpha has as many elements as there are patterns.
In order to evaluate the Support Vector indices, you can check whether element i in alpha is strictly greater than 0: if alpha[i] > 0 then the i-th pattern from LearningSet is a Support Vector. Similarly, the i-th element from LearningLabels is the related label.
Finally, you might want to evaluate vector w, the free parameters vector. Given that alpha is known, you can apply the following formula
w = ((alpha.*LearningLabels)'*LearningSet)'
where alpha and LearningLabels are column vectors and LearningSet is the matrix as described above. In the above formula .* is the element-wise product whereas ' is the transpose operator.
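In NumPy, with alpha and LearningLabels as 1-D arrays and LearningSet as the (patterns x features) matrix described above, the same formula is simply:

import numpy as np

# element-wise product of the two vectors, then a vector-matrix product;
# w gets one entry per feature
w = (alpha * LearningLabels) @ LearningSet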
I would like to add a small supplement to the previous answer.
If you feel confident with linear algebra, there is an alternative derivation of the SMO algorithm and a very simple implementation based on that formulas.
Basically, it takes about 30 lines of code in Python:
import numpy as np

class SVM:
    def __init__(self, kernel='linear', C=10000.0, max_iter=100000, degree=3, gamma=1):
        self.kernel = {'poly'  : lambda x,y: np.dot(x, y.T)**degree,
                       'rbf'   : lambda x,y: np.exp(-gamma*np.sum((y - x[:,np.newaxis])**2, axis=-1)),
                       'linear': lambda x,y: np.dot(x, y.T)}[kernel]
        self.C = C
        self.max_iter = max_iter

    def restrict_to_square(self, t, v0, u):
        t = (np.clip(v0 + t*u, 0, self.C) - v0)[1]/u[1]
        return (np.clip(v0 + t*u, 0, self.C) - v0)[0]/u[0]

    def fit(self, X, y):
        self.X = X.copy()
        self.y = y * 2 - 1
        self.lambdas = np.zeros_like(self.y, dtype=float)
        self.K = self.kernel(self.X, self.X) * self.y[:,np.newaxis] * self.y
        for _ in range(self.max_iter):
            for idxM in range(len(self.lambdas)):
                idxL = np.random.randint(0, len(self.lambdas))
                Q = self.K[[[idxM, idxM], [idxL, idxL]], [[idxM, idxL], [idxM, idxL]]]
                v0 = self.lambdas[[idxM, idxL]]
                k0 = 1 - np.sum(self.lambdas * self.K[[idxM, idxL]], axis=1)
                u = np.array([-self.y[idxL], self.y[idxM]])
                t_max = np.dot(k0, u) / (np.dot(np.dot(Q, u), u) + 1E-15)
                self.lambdas[[idxM, idxL]] = v0 + u * self.restrict_to_square(t_max, v0, u)
        idx, = np.nonzero(self.lambdas > 1E-15)
        self.b = np.sum((1.0 - np.sum(self.K[idx] * self.lambdas, axis=1)) * self.y[idx]) / len(idx)

    def decision_function(self, X):
        return np.sum(self.kernel(X, self.X) * self.y * self.lambdas, axis=1) + self.b
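A minimal usage sketch on a toy two-class problem (make_blobs from scikit-learn is used only to generate data; labels are 0/1, which fit maps to +-1):

import numpy as np
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=100, centers=2, random_state=0)
model = SVM(kernel='rbf', C=10.0, max_iter=1000, gamma=0.5)
model.fit(X, y)
pred = (np.sign(model.decision_function(X)) + 1) // 2  # map back to 0/1
print("training accuracy:", np.mean(pred == y))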
In simple cases it works not much worse than sklearn.svm.SVC (comparison plot omitted here).
For more elaborate explanation with formulas you may want to refer to this ResearchGate preprint.
Code for image generation can be found on GitHub.

Infinite Summation in Python

I have a function on which I need to do an infinite summation (over all the integers) numerically. The summation doesn't always converge, as I can change internal parameters. The function looks like
m(g, x, q0) = sum( abs( g(x - n*q0) )**2 for n in Integers )
m(g, q0)    = minimize( m(g, x, q0) for x in [0, q0] )
in Pythonic pseudo-code.
Using Scipy integration methods, I was just flooring n and integrating, for a fixed x:
m(g, x, q0) = integrate.quad(lambda n:
                  abs(g(x - int(n)*q0))**2,
                  -inf, +inf)[0]
This works pretty well, but then I have to optimize over x and then do another summation on that, which yields an integral of an optimization of an integral. It pretty much takes a really long time.
Do you know of a better way to do the summation that is faster? Hand coding it seemed to go slower.
Currently, I am working with
g(x) = (2/sqrt(3))*pi**(-0.25)*(1 - x**2)*exp(-x**2/2)
but the solution should be general
The paper this comes from is "The Wavelet Transform, Time-Frequency Localization and Signal Analysis" by Daubechies (IEEE 1990)
Thank you
Thanks to all the useful comments, I wrote my own summator that seems to run pretty fast. If anyone has any recommendations to make it better, I will gladly take them.
I will test this on the problem I am working on and once it demonstrates success, I will claim it functional.
import numpy as np

def integers(blk_size=100):
    x = np.arange(0, blk_size)
    while True:
        yield x
        yield -x - 1
        x += blk_size

#
# For convergent summation
# on not necessarily finite sequences,
# processed in blocks which can be any size
# or shape that the function can handle
#
def converge_sum(f, x_strm, eps=1e-5, axis=0):
    total = np.sum(f(next(x_strm)), axis=axis)
    for x_blk in x_strm:
        diff = np.sum(f(x_blk), axis=axis)
        if np.abs(np.linalg.norm(diff)) <= eps:
            # converged
            return total + diff
        else:
            total += diff
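A usage sketch with the g from above (the summand is vectorized over each block of integers; eps and the parameter values are arbitrary choices):

import numpy as np

def g(x):
    return (2/np.sqrt(3)) * np.pi**(-0.25) * (1 - x**2) * np.exp(-x**2/2)

q0, x = 1.0, 0.3
m = converge_sum(lambda n: np.abs(g(x - n*q0))**2, integers(), eps=1e-10)
print(m)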
g(x) is almost certainly your bottleneck. A very quick-and-dirty solution would be to vectorize it to operate on an array of integers, then use np.trapz to estimate the integral using the trapezoid rule:
import numpy as np
# appropriate range and step size depends on how accurate you need to be and how
# quickly the sum converges
xmin = -1000000
xmax = 1000000
dx = 1
x = np.arange(xmin, xmax + dx, dx)
gx = (2 / np.sqrt(3)) * np.pi**(-0.25)*(1 - x**2) * np.exp(-x**2 / 2)
sum_gx = np.trapz(gx, x, dx)
Aside from that, you could re-write g(x) using Cython or numba to speed it up.
There's a chance Numba improves speed significantly - http://numba.pydata.org
It's slightly painful to install but very easy to use. Have a look at:
https://jakevdp.github.io/blog/2015/02/24/optimizing-python-with-numpy-and-numba/

Bound solutions in scipy ode solver

I want to solve a large number of bilinear ODE systems in Python. The derivative is this:
def x_(x, t, growth, connections):
    return x * growth + np.dot(connections, x) * x
I am not interested in very accurate results but in the qualitative behavior, i.e. whether a component goes to zero or not.
Because I have to solve so many of these high-dimensional systems, I want to use a step size as big as possible.
Due to big step sizes it can happen that the ODE goes below zero in one component. This should not be possible since (because of the structure of this particular ODE) each component is bounded below by zero. Hence, to prevent wrong results, I would like to set each component manually to zero once it drops below.
Furthermore, in the systems that I want to solve it can happen that solutions blow up. I want to prevent this by setting an upper bound as well, i.e. if a value exceeds the bound it is set back to the value of the bound.
I hope I can make my goal understandable giving you the following pseudo-code of what I want to do:
for t in range(0, tEnd, dt):
    $ compute x(t) using x(t-dt) $
    x(t) = np.minimum(np.maximum(x(t), 0), upperBound)
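(Concretely, the clipped fixed-step scheme looks something like this minimal RK4 sketch; the names here are made up:)

import numpy as np

def rk4_clipped(f, x0, t_grid, upper_bound, *params):
    # classical RK4, with the state clipped to [0, upper_bound] after each step
    x = np.array(x0, dtype=float)
    out = [x.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        k1 = f(x, t0, *params)
        k2 = f(x + h/2*k1, t0 + h/2, *params)
        k3 = f(x + h/2*k2, t0 + h/2, *params)
        k4 = f(x + h*k3, t1, *params)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
        x = np.minimum(np.maximum(x, 0.0), upper_bound)  # enforce the bounds
        out.append(x.copy())
    return np.array(out)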
I implemented this using a Runge-Kutta algorithm. Everything works fine. Just the performance is bad. Therefore, I would prefer using a pre-implemented method like scipy.integrate.odeint.
However, I have no idea how to set such bounds with odeint. One option I tried was to manipulate the ODE so that the derivative becomes 0 once x is above the bound, and (positive) one once x is below 0. In addition, to prevent too-big jumps within one timestep, I also bounded the derivative:
def x_(x, t, growth, connections, bound):
    return (x > 0) * np.minimum((x < bound) *
        ( x * growth + np.dot(connections, x) * x ), bound) + (x < 0)
Though this solution (especially for the zero bound) is very ugly, it would be sufficient if it worked. Unfortunately, it does not. Using odeint
x = scipy.integrate.odeint(x_, x0, timesteps, param)
I very often get one of these two errors:
Repeated convergence failures (perhaps bad Jacobian or tolerances).
Excess work done on this call (perhaps wrong Dfun type).
They may be due to the discontinuities of my manipulated ODE. There are plenty of threads about these error messages on the internet, but they did not help me. E.g., increasing the number of allowed steps neither prevented this issue nor is it a good solution for me, since I need to use big step sizes. Furthermore, passing the Jacobian did not help either.
Looking at the solutions, one can see two types of strange behavior when the errors occur:
The solution blows up in one single time-step to +-1e250 (which should be impossible since dx/dt is bounded).
It first reaches the bound but then goes down again (which should be impossible because x is at the bound and therefore x_ is 0).
I would appreciate all hints on how to solve the issue, no matter whether it is help on
how to prevent the errors in odeint
how to manipulate the ODE properly or on
how to write a very fast ODE solver where I can directly implement my needs.
I thank you in advance!
Edit
I was asked for a minimal example:
import numpy as np
import random as rd
rd.seed()
import scipy.integrate
def simulate(simParam, dim=20, connectivity=.8, conRange=1, threshold=1E-3,
             constGrowing=None):
    """
    Creates the random system matrix and starts a simulation
    """
    x0 = np.zeros(dim, dtype='float') + 1
    connections = np.zeros(shape=(dim, dim), dtype='float')
    growth = np.zeros(dim, dtype='float') + \
             (constGrowing if constGrowing is not None else 0)
    for i in range(dim):
        for j in range(dim):
            if i != j:
                connections[i][j] = rd.uniform(-conRange, conRange)
    tend, step = simParam
    return RK4NumPy(x_, (growth, connections), x0, 0, tend, step)

def x_(x, t, growth, connections, bound):
    """
    Derivative of the ODE
    """
    return (x > 0) * np.minimum((x < bound) *
        (x * growth + np.dot(connections, x) * x), bound) + (x < 0)

def RK4NumPy(x_, param, x0, t0, tend, step, maxV=1.0E2, silent=True):
    """
    Solving method (despite its name, it calls scipy.integrate.odeint)
    """
    param = param + (maxV,)
    timesteps = np.arange(t0 + step, tend, step)
    return scipy.integrate.odeint(x_, x0, timesteps, param)
simulate((300, 0.5))
To see the solution one would have to plot x. With the given parameters I very often get the above-mentioned error:
Excess work done on this call (perhaps wrong Dfun type).
Run with full_output = 1 to get quantitative information.

How to find all zeros of a function using numpy (and scipy)?

Suppose I have a function f(x) defined between a and b. This function can have many zeros, but also many asymptotes. I need to retrieve all the zeros of this function. What is the best way to do it?
Actually, my strategy is the following:
I evaluate my function on a given number of points
I detect whether there is a change of sign
I find the zero between the points that are changing sign
I verify if the zero found is really a zero, or if this is an asymptote
U = numpy.linspace(a, b, 100)  # evaluate function at 100 different points
c = f(U)
s = numpy.sign(c)
for i in range(100-1):
    if s[i] + s[i+1] == 0:  # opposite signs
        u = scipy.optimize.brentq(f, U[i], U[i+1])
        z = f(u)
        if numpy.isnan(z) or abs(z) > 1e-3:
            continue
        print('found zero at {}'.format(u))
This algorithm seems to work, except I see two potential problems:
It will not detect a zero that doesn't cross the x axis (for example, in a function like f(x) = x**2). However, I don't think that can occur with the function I'm evaluating.
If the discretization points are too far apart, there could be more than one zero between them, and the algorithm could fail to find them.
Do you have a better strategy (still efficient) to find all the zeros of a function?
I don't think it's important for the question, but for those who are curious: I'm dealing with characteristic equations of wave propagation in optical fiber. The function looks like this (where V and ell are previously defined, and ell is a positive integer):
def f(u):
    w = numpy.sqrt(V**2 - u**2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.jnjn(ell-1, u)
    kl = scipy.special.jnkn(ell, w)
    kl1 = scipy.special.jnkn(ell-1, w)
    return jl / (u*jl1) + kl / (w*kl1)
Why are you limited to numpy? Scipy has a package that does exactly what you want:
http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html
One lesson I've learned: numerical programming is hard, so don't do it :)
Anyway, if you're dead set on building the algorithm yourself, the doc page on scipy I linked (takes forever to load, btw) gives you a list of algorithms to start with. One method that I've used before is to discretize the function to the degree that is necessary for your problem. (That is, tune \delta x so that it is much smaller than the characteristic size in your problem.) This lets you look for features of the function (like changes in sign). AND, you can compute the derivative of a line segment (probably since kindergarten) pretty easily, so your discretized function has a well-defined first derivative. Because you've tuned the dx to be smaller than the characteristic size, you're guaranteed not to miss any features of the function that are important for your problem.
If you want to know what "characteristic size" means, look for some parameter of your function with units of length or 1/length. That is, for some function f(x), assume x has units of length and f has no units. Then look for the things that multiply x. For example, if you want to discretize cos(\pi x), the parameter that multiplies x (if x has units of length) must have units of 1/length. So the characteristic size of cos(\pi x) is 1/\pi. If you make your discretization much smaller than this, you won't have any issues. To be sure, this trick won't always work, so you may need to do some tinkering.
I found out it's relatively easy to implement your own root finder using scipy.optimize.fsolve.
Idea: Find any zeroes in the interval (start, stop) with stepsize step by calling fsolve repeatedly with a changing x0. Use a relatively small stepsize to find all the roots.
It can only search for zeroes in one dimension (other dimensions must be fixed). If you have other needs, I would recommend using sympy for calculating the analytical solution.
Note: It may not always find all the zeroes, but I saw it giving relatively good results. I put the code also to a gist, which I will update if needed.
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt
# Defined below
r = RootFinder(1, 20, 0.01)
args = (90, 5)
roots = r.find(f, *args)
print("Roots: ", roots)
# plot results
u = np.linspace(1, 20, num=600)
fig, ax = plt.subplots()
ax.plot(u, f(u, *args))
ax.scatter(roots, f(np.array(roots), *args), color="r", s=10)
ax.grid(color="grey", ls="--", lw=0.5)
plt.show()
Example output:
Roots: [ 2.84599497 8.82720551 12.38857782 15.74736542 19.02545276]
RootFinder definition
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt
class RootFinder:
    def __init__(self, start, stop, step=0.01, root_dtype="float64", xtol=1e-9):
        self.start = start
        self.stop = stop
        self.step = step
        self.xtol = xtol
        self.roots = np.array([], dtype=root_dtype)

    def add_to_roots(self, x):
        if (x < self.start) or (x > self.stop):
            return  # outside range
        if any(abs(self.roots - x) < self.xtol):
            return  # root already found
        self.roots = np.append(self.roots, x)

    def find(self, f, *args):
        current = self.start
        for x0 in np.arange(self.start, self.stop + self.step, self.step):
            if x0 < current:
                continue
            x = self.find_root(f, x0, *args)
            if x is None:  # no root found
                continue
            current = x
            self.add_to_roots(x)
        return self.roots

    def find_root(self, f, x0, *args):
        x, _, ier, _ = fsolve(f, x0=x0, args=args, full_output=True, xtol=self.xtol)
        if ier == 1:
            return x[0]
        return None
Test function
The scipy.special.jnjn from the question does not exist, so I created a similar test function for this case.
def f(u, V=90, ell=5):
    w = np.sqrt(V ** 2 - u ** 2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.yn(ell - 1, u)
    kl = scipy.special.kn(ell, w)
    kl1 = scipy.special.kn(ell - 1, w)
    return jl / (u * jl1) + kl / (w * kl1)
The main problem I see with this is whether you can actually find all the roots; as has already been mentioned in the comments, this is not always possible. If you are sure that your function is not completely pathological (sin(1/x) was already mentioned), the next question is what your tolerance for missing a root (or several of them) is. Put differently, it's about what lengths you are prepared to go to to make sure you did not miss any. To the best of my knowledge, there is no general method to isolate all the roots for you, so you'll have to do it yourself. What you show is a reasonable first step already. A couple of comments:
Brent's method is indeed a good choice here.
First of all, deal with the divergences. Since your function has Bessel functions in the denominators, you can first solve for their roots; better yet, look them up in, e.g., Abramowitz and Stegun (Mathworld link). This will be better than the ad hoc grid you're using.
Once you've found two roots or divergences, x_1 and x_2, run the search again in the interval [x_1+epsilon, x_2-epsilon]. Continue until no more roots are found (Brent's method is guaranteed to converge to a root, provided there is one).
If you cannot enumerate all the divergences, you might want to be a little more careful in verifying that a candidate is indeed a divergence: given x, don't just check that f(x) is large; check that, e.g., |f(x-epsilon/2)| > |f(x-epsilon)| for several values of epsilon (1e-8, 1e-9, 1e-10, something like that).
If you want to make sure you don't miss roots which simply touch zero without crossing it, look for the extrema of the function, and for each extremum x_e check the value of f(x_e).
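A rough sketch of that last check (the helper below is hypothetical: candidate extrema come from sign changes of a finite-difference derivative, then each is polished with a bounded scalar minimization of |f|):

import numpy as np
from scipy.optimize import minimize_scalar

def touching_zeros(f, a, b, n=1000, tol=1e-8):
    # candidate extrema: sign changes of a finite-difference derivative on a grid
    x = np.linspace(a, b, n)
    df = np.diff(f(x))
    zeros = []
    for i in np.nonzero(np.sign(df[:-1]) * np.sign(df[1:]) < 0)[0]:
        res = minimize_scalar(lambda t: abs(f(t)),
                              bounds=(x[i], x[i+2]), method='bounded')
        if abs(f(res.x)) < tol:  # the extremum sits (numerically) on zero
            zeros.append(res.x)
    return zeros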
I've also encountered this problem, solving equations f(z) = 0 where f was a holomorphic function. I wanted to be sure not to miss any zero, and finally developed an algorithm based on the argument principle.
It helps to find the exact number of zeros lying in a complex domain. Once you know the number of zeros, it is easier to find them. There are however two concerns which must be taken into account:
Take care about multiplicity: when solving (z-1)^2 = 0, you'll get two zeros, as z=1 is counted twice.
If the function is meromorphic (thus contains poles), each pole reduces the computed count by one (the argument principle counts zeros minus poles), which breaks a naive attempt to count the zeros.
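For illustration, a crude sketch of the counting step (hypothetical helper; it assumes f is vectorized, has no zeros or poles on the contour itself, and that the boundary sampling is fine enough):

import numpy as np

def count_zeros_minus_poles(f, a, b, n=4000):
    # winding number of f along the rectangle with opposite corners a, b (complex);
    # by the argument principle this equals (#zeros - #poles) inside the contour
    corners = [a, complex(b.real, a.imag), b, complex(a.real, b.imag), a]
    z = np.concatenate([np.linspace(z0, z1, n, endpoint=False)
                        for z0, z1 in zip(corners[:-1], corners[1:])])
    dphase = np.diff(np.angle(f(np.append(z, z[0]))))
    dphase = (dphase + np.pi) % (2*np.pi) - np.pi  # wrap increments into (-pi, pi]
    return int(round(dphase.sum() / (2*np.pi)))

# e.g. count_zeros_minus_poles(lambda z: (z - 1)**2, 0 - 1j, 2 + 1j) returns 2,
# counting the double zero at z = 1 with multiplicity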
