Finding roots of an equation using a modulo-divided numpy.poly1d in Python

I've created a polynomial object using numpy.poly1d and some arbitrary coefficients (a, b, c) so that I can find the roots of the equation ax^2 + bx + c = y0 at a given y0. In principle, that can be done fairly easily by reading the roots attribute of the poly1d object.
The only issue is that the actual equation I am trying to solve is the one written above, but modulo-divided by 2π. That corresponds to finding x such that the polynomial taken modulo 2π equals y0, i.e. solving (ax^2 + bx + (c - y0)) mod 2π = 0.
However, it seems that I can't apply this modulo operator to a poly1d object.
Is there a way of doing that using NumPy?
Here are some lines of code:
import numpy as np
def x_to_y(x, a, b, c):
    return (a*x**2 + b*x + c) % (2*np.pi)

def y_to_x(y0, a, b, c):
    eq = np.poly1d([a, b, c]) % (2*np.pi)  # throws an error: can't apply % to a poly1d object
    return (eq - y0).roots

It seems that you can use np.mod instead of %.
The only problem being that np.mod returns an array in that case, not a poly1d object.

Actually I managed to bodge it by simply adding 2kπ to the zeroth-order term of the polynomial and sweeping over a few values of k until I get a root that lies within the correct bounds.
Not ideal, but it works. Still open to clever ways of doing it though!
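A minimal sketch of that workaround (the bounds x_min, x_max and the range of k swept are assumptions to adapt to your problem):

import numpy as np

def y_to_x(y0, a, b, c, x_min=0.0, x_max=10.0, k_max=10):
    # Solve (a*x**2 + b*x + c) mod 2*pi == y0 by shifting the constant
    # term by 2*k*pi and keeping the first real root inside the bounds.
    for k in range(-k_max, k_max + 1):
        for r in np.poly1d([a, b, c - y0 - 2*k*np.pi]).roots:
            if np.isreal(r) and x_min <= r.real <= x_max:
                return r.real
    return None  # no root found within the given bounds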

Related

Unknown error with self-defined function for approximation of an integral

I've defined the following function as a method of approximating an integral using Boole's Rule:
def integrate_boole(f, l, r, N):
    h = (r - l) / N
    xN = np.linspace(l, r, N+1)
    fN = f(xN)
    return ((2*h)/45)*(7*fN[0] + 32*np.sum(fN[1:-2:2]) + 12*np.sum(fN[2:-3:4]) + 14*np.sum(fN[4:-5]) + 7*fN[-1])
I used the function to get the value of the integral for sin(x)dx between 0 and pi (where N=8) and assigned it to a variable sine_int.
The answer given was 1.3938101893248442
After working the original integral out by hand, I realised this answer was quite inaccurate.
The sums of fN are giving incorrect values, but I'm not sure why. For example, np.sum(fN[4:-5]) is going to 0.
Is there a better way of coding the sums involved, or is there an error in my parameters that's causing the calculations to be inaccurate?
Thanks in advance.
EDIT
I should have made it clearer that this is supposed to be a composite version of the rule, i.e. approximating over N points where N is divisible by 4. So the typical 5 points with 4 intervals isn't going to cut it here, unfortunately. I would copy the equation I'm using into here, but I don't have an image of it and LaTeX isn't an option. It should be clear from the code I have after return.
From a quick inspection, it looks like the term multiplying f(x_4) should be 32, not 14:
def integrate_boole(f, l, r, N):
    h = (r - l) / N
    xN = np.linspace(l, r, N+1)
    fN = f(xN)
    return ((2*h)/45)*(7*fN[0] + 32*np.sum(fN[1:-2:2]) +
                       12*np.sum(fN[2:-3:4]) + 32*np.sum(fN[4:-5]) + 7*fN[-1])
First, one of your coefficients was wrong, as pointed out by @nixon. Second, I think you do not really understand how Boole's rule works: it approximates the integral of a function using only 5 points of the function. Hence, terms like np.sum(fN[1:-2:2]) make no sense. You only need five points, which you can obtain with xN = np.linspace(l,r,5). Your h is simply the distance between 2 contiguous points, h = xN[1] - xN[0]. And then, easy peasy:
import numpy as np

def integrate_boole(f, l, r):
    xN = np.linspace(l, r, 5)
    h = xN[1] - xN[0]
    fN = f(xN)
    return ((2*h)/45)*(7*fN[0] + 32*fN[1] + 12*fN[2] + 32*fN[3] + 7*fN[4])

def f(x):
    return np.sin(x)

I = integrate_boole(f, 0, np.pi)
print(I)  # Outputs 1.99857...
I'm not sure what you're hoping your code does w.r.t. Boole's rule. Why are you summing over samples of the function (i.e. np.sum(fN[2:-3:4]))? I think your N parameter is also not well defined and I'm not sure what it's supposed to represent. Maybe you're using another rule I'm not familiar with: I'll let you decide.
Regardless, here's an implementation of Boole's rule as Wikipedia defines it. Variables map to the Wikipedia version you linked:
def integ_boole(func, left, right):
    h = (right - left) / 4
    x1 = left
    x2 = left + h
    x3 = left + 2*h
    x4 = left + 3*h
    x5 = right  # or left + 4*h
    result = (2*h/45) * (7*func(x1) + 32*func(x2) + 12*func(x3) + 32*func(x4) + 7*func(x5))
    return result
then, to test:
import numpy as np
print(integ_boole(np.sin, 0, np.pi))
outputs 1.9985707318238357, which is extremely close to the correct answer of 2.
HTH.
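For the composite variant described in the question's EDIT (N divisible by 4, i.e. several four-interval Boole panels stitched together), the slice bounds also need fixing: odd indices get 32, indices equal to 2 mod 4 get 12, and interior panel joins (multiples of 4) get 14, not 32. A sketch of that version (my own reconstruction, so verify it against your formula):

import numpy as np

def integrate_boole_composite(f, l, r, N):
    # Composite Boole's rule; N must be a multiple of 4.
    assert N % 4 == 0, "N must be divisible by 4"
    h = (r - l) / N
    fN = f(np.linspace(l, r, N + 1))
    return ((2*h)/45) * (7*(fN[0] + fN[-1])
                         + 32*np.sum(fN[1:-1:2])   # odd indices
                         + 12*np.sum(fN[2:-1:4])   # indices = 2 mod 4
                         + 14*np.sum(fN[4:-1:4]))  # interior panel joins

print(integrate_boole_composite(np.sin, 0, np.pi, 8))  # ~2.0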

sympy.solve() doesn't give one of the solutions with LambertW

Background:
I am trying to implement a function that does inverse transform sampling. I use sympy to calculate the CDF and to get its inverse function. While for some simple PDFs I get correct results, for a PDF whose CDF's inverse involves the Lambert W function, the results are wrong.
Example:
Consider the following example CDF:
import sympy as sym
y = sym.Symbol('y')
cdf = (-y - 1) * sym.exp(-y) + 1 # derived from `pdf = x * sym.exp(-x)`
sym.plot(cdf, (y, -1, 5))
Now, calculating the inverse of this function:
x = sym.Symbol('x')
inverse = sym.solve(sym.Eq(x, cdf), y)
print(inverse)
Output:
[-LambertW((x - 1)*exp(-1)) - 1]
This, in fact, is only the left branch of the given CDF, covering negative y:
sym.plot(inverse[0], (x, -0.5, 1))
Question:
How can I get the right branch, for positive y, of the given CDF?
What I tried:
Specifying x and y to be only positive:
x = sym.Symbol('x', positive=True)
y = sym.Symbol('y', positive=True)
This doesn't have any effect, even for the first CDF plot.
Making CDF a Piecewise function:
cdf = sym.Piecewise((0, y < 0),
                    ((-y - 1) * sym.exp(-y) + 1, True))
Again, no effect. The strange thing here is that on another computer, plotting this function gave a proper graph with zero for negative y, but solving for the positive-y branch doesn't work anywhere. (Different versions? I also had to specify adaptive=False to sympy.plot to make it work there.)
Using sympy.solveset instead of sympy.solve:
This just gives a useless ConditionSet(y, Eq(x*exp(y) + y - exp(y) + 1, 0), Complexes(S.Reals x S.Reals, False)) as a result. Apparently, solveset still doesn't know how to deal with LambertW functions. From the docs:
When cases which are not solved or can only be solved incompletely, a
ConditionSet is used and acts as an unevaluated solveset object. <...>
There are still a few things solveset can’t do, which the old solve
can, such as solving non linear multivariate & LambertW type
equations.
Is it a bug or am I missing something? Is there any workaround to get the desired result?
The inverse produced by sympy is almost correct. The problem lies in the fact that the LambertW function has multiple branches over the domain (-1/e, 0). By default, sympy uses the upper branch; however, for your problem you require the lower branch. The lower branch can be accessed by passing a second argument to LambertW with a value of -1.
inverse = -sym.LambertW((x - 1)*sym.exp(-1), -1) - 1
sym.plot(inverse, (x, 0, 0.999))
This gives the desired right branch, covering positive y.
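As a quick round-trip check (a sketch; the sample point x = 0.5 is arbitrary), substituting the lower-branch inverse back into the CDF should recover x:

import sympy as sym

x = sym.Symbol('x')
y = sym.Symbol('y')
cdf = (-y - 1) * sym.exp(-y) + 1
inverse = -sym.LambertW((x - 1) * sym.exp(-1), -1) - 1

y_val = inverse.subs(x, 0.5).evalf()
print(y_val)                       # a positive y, as desired
print(cdf.subs(y, y_val).evalf())  # ~0.5, recovering the input x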

Need to improve accuracy in fsolve to find multiple roots

I'm using this code to get the zeros of a nonlinear function.
Most certainly, the function should have 1 or 3 zeros
import numpy as np
import matplotlib.pylab as plt
from scipy.optimize import fsolve
[a, b, c] = [5, 10, 0]
def func(x):
    return -(x + a) + b / (1 + np.exp(-(x + c)))
x = np.linspace(-10, 10, 1000)
print(fsolve(func, [-10, 0, 10]))
plt.plot(x, func(x))
plt.show()
In this case the code give the 3 expected roots without any problem.
But with c = -1.5 the code misses a root, and with c = -3 it finds a non-existent root.
I want to calculate the roots for many different parameter combinations, so changing the seeds manually is not a practical solution.
I appreciate any solution, trick or advice.
What you need is an automatic way to obtain good initial estimates of the roots of the function. This is in general a difficult task, however, for univariate, continuous functions, it is rather simple. The idea is to note that (a) this class of functions can be approximated to an arbitrary precision by a polynomial of appropriately large order, and (b) there are efficient algorithms for finding (all) the roots of a polynomial. Fortunately, Numpy provides functions for both performing polynomial approximation and finding polynomial roots.
Let's consider a specific function
[a, b, c] = [5, 10, -1.5]
def func(x):
    return -(x + a) + b / (1 + np.exp(-(x + c)))
The following code uses polyfit and poly1d to approximate func over the range of interest (-10<x<10) by a polynomial function f_poly of order 10.
x_range = np.linspace(-10, 10, 100)
y_range = func(x_range)
pfit = np.polyfit(x_range, y_range, 10)
f_poly = np.poly1d(pfit)
Plotting f_poly against func over this range shows that it is indeed a good approximation. Even greater accuracy can be obtained by increasing the order. However, there is no point in pursuing extreme accuracy in the polynomial approximation, since we are looking for approximate estimates of the roots that will later be refined by fsolve.
The roots of the polynomial approximation can be simply obtained as
roots = np.roots(pfit)
roots
array([-10.4551+1.4893j, -10.4551-1.4893j,  11.0027+0.j,
         8.6679+2.482j,    8.6679-2.482j,   -5.7568+3.2928j,
        -5.7568-3.2928j,  -4.9269+0.j,       4.7486+0.j,     2.9158+0.j])
As expected, Numpy returns 10 complex roots. However, we are only interested in real roots within the interval [-10, 10]. These can be extracted as follows:
x0 = roots[np.where(np.logical_and(np.logical_and(roots.imag==0, roots.real>-10), roots.real<10))].real
x0
array([-4.9269, 4.7486, 2.9158])
Array x0 can serve as the initialization for fsolve:
fsolve(func, x0)
array([-4.9848, 4.5462, 2.7192])
Remark: The pychebfun package provides a function that directly gives all the roots of a function within an interval. It is also based on the idea of performing polynomial approximation, however, it uses a more sophisticated (yet, more efficient) approach. It automatically chooses the best polynomial order of the approximation (no user input), with the polynomial roots being practically equal to the true ones (no need to refine them via fsolve).
This simple code gives the same roots as those by fsolve
import pychebfun
f_cheb = pychebfun.Chebfun.from_function(func, domain = (-10,10))
f_cheb.roots()
Between two stationary points (i.e., df/dx=0), you have one or zero roots. In your case it is possible to calculate the two stationary points analytically:
[-c + log(1/(b - sqrt(b*(b - 4)) - 2)) + log(2),
-c + log(1/(b + sqrt(b*(b - 4)) - 2)) + log(2)]
So you have three intervals in which you need to look for a zero. Using Sympy saves you from doing the calculations by hand. Its sy.nsolve() allows you to robustly find a zero within an interval:
import sympy as sy

a, b, c, x = sy.symbols("a, b, c, x", real=True)

# The function:
f = -(x + a) + b / (1 + sy.exp(-(x + c)))
df = f.diff(x)  # calculate f' = df/dx
xxs = sy.solve(df, x)  # solving for f' = 0 gives two solutions

# numerical values:
pp = {a: 5, b: 10, c: .5}  # values for a, b, c
fpp = f.subs(pp)
xxs_pp = [xpr.subs(pp).evalf() for xpr in xxs]  # numerical stationary points
xxs_pp.sort()  # in ascending order

# resulting intervals:
xx_low = [-1e9, xxs_pp[0], xxs_pp[1]]
xx_hig = [xxs_pp[0], xxs_pp[1], 1e9]

# calculate roots for each interval:
xx0 = []
for xl_, xh_ in zip(xx_low, xx_hig):
    try:
        x0 = sy.nsolve(fpp, (xl_, xh_), solver="bisect")  # calculate zero
    except ValueError:  # no solution found
        continue
    xx0.append(x0)

print("The zeros are:")
print(xx0)
sy.plot(fpp)  # plot function

Implementing minimization method

I have a function in 2 variables x1,x2
f = 3*x1^2 + 4*x2^2 + 5*x1 + 6*x2 + 10
Consider x to be a row vector with two components, x = [x5, x6]. If that notation is confusing, think of x = [x1, x2], where x1, x2 can be arbitrary components. The same holds for y.
Then I want to find the scalar a such that f evaluated at (x + a*y) is minimized; a is a real constant, and x and y are vectors.
If this does not make sense, consider x and y as 1-dimensional arrays with 2 locations each, with components x(1), x(2) and y(1), y(2). Then I want to multiply the array y by a symbolic variable a.
For example, x=[4,5], y=[-2,3] then, (x + ay) = (4,5) + a(-2,3) = (4-2a,5+3a). a is symbolic variable that is unknown here.
Substituting into f (to be more clear: the first argument in the definition of f becomes x1 = 4-2a, the second x2 = 5+3a) gives
f1 = 3*(4-2a)^2 + 4*(5+3a)^2 + 5*(4-2a) + 6*(5+3a) + 10 ............(eq. 1)
Then the function f1 depends on the single unknown a and can be minimized using a 1D minimization algorithm, such as golden section search, given an interval [x_lower, x_upper].
My question is:
Given different x,y,
How to evaluate (x+ay) and pass (or substitute ?) it into function f (eq1)?
How to create 'dynamic' function f1, as in eq. 1, to pass it to 1D minimization algorithm? By dynamic, I mean here is function f1 will change every time for x,y.
I am interested in a low-level implementation of this problem (sticking to the basic features of a language as much as possible and without using language specific features or object oriented features) in python, MATLAB, C or any other language, but again in 'low level.' Can you suggest something?
UPDATE: I don't want to use symbolics from python, MATLAB or from any other language.
I'm rephrasing your question in my own words, because the question in its current form is confusing:
You have a function f(x1,x2) = 3*x1^2 + 4*x2^2 + 5*x1 + 6*x2 + 10. x1 and x2 are the components of a 2D vector obtained from summing x with the product of a and y, where x and y are given vectors, and a is a scalar. You want to obtain the function that results from substituting this relation into f.
Note that the notation is a bit confusing, so I will use instead x = z+a*y, where z (replacing the x you used) and y are the given vectors.
Let's define f as an anonymous function in Matlab (you could easily use a function file as well):
f = @(x) 3*x(1)^2 + 4*x(2)^2 + 5*x(1) + 6*x(2) + 10;
Note that I'm writing this differently than you did, i.e. x(1) and x(2) instead of x1 and x2. This means that I am using components of a vector instead of two unrelated variables.
Then, let's write your equation involving a as a function as well:
g = @(a) z + a*y;
The function g(a) returns a vector for each value a, obeying g(a) = z+a*y.
Now you can do the substitution:
h = @(a) f(g(a));
h is the desired function that takes a as input and returns the value of f applied at the vector obtained from z+a*y.
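The same substitution works in Python with plain closures, and a golden section search needs nothing beyond basic language features, in line with the "low level" requirement. A sketch (the bracket [-10, 10] and the tolerance are assumptions):

import numpy as np

def f(x):
    return 3*x[0]**2 + 4*x[1]**2 + 5*x[0] + 6*x[1] + 10

z = np.array([4, 5])
y = np.array([-2, 3])
f1 = lambda a: f(z + a*y)  # the 1D function in a alone

def golden_section(g, lo, hi, tol=1e-8):
    # Golden section search for the minimum of a unimodal function g.
    invphi = (np.sqrt(5) - 1) / 2
    while abs(hi - lo) > tol:
        c = hi - invphi * (hi - lo)
        d = lo + invphi * (hi - lo)
        if g(c) < g(d):
            hi = d
        else:
            lo = c
    return (lo + hi) / 2

a_min = golden_section(f1, -10, 10)
print(a_min, f1(a_min))  # f1(a) = 48a^2 + 80a + 208, minimized at a = -5/6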
You can use eval to convert a string to an expression:
f = 'x + a*y'
x = 4
y = 3
for a in range(3):
    print(eval(f))
output:
4
7
10

How to find all zeros of a function using numpy (and scipy)?

Suppose I have a function f(x) defined between a and b. This function can have many zeros, but also many asymptotes. I need to retrieve all the zeros of this function. What is the best way to do it?
Actually, my strategy is the following:
I evaluate my function on a given number of points
I detect whether there is a change of sign
I find the zero between the points that are changing sign
I verify if the zero found is really a zero, or if this is an asymptote
U = numpy.linspace(a, b, 100)  # evaluate function at 100 different points
c = f(U)
s = numpy.sign(c)
for i in range(100 - 1):
    if s[i] + s[i+1] == 0:  # opposite signs
        u = scipy.optimize.brentq(f, U[i], U[i+1])
        z = f(u)
        if numpy.isnan(z) or abs(z) > 1e-3:
            continue
        print('found zero at {}'.format(u))
This algorithm seems to work, except I see two potential problems:
It will not detect a zero that doesn't cross the x axis (for example, a function like f(x) = x**2). However, I don't think that can occur with the function I'm evaluating.
If the discretization points are too far apart, there could be more than one zero between them, and the algorithm could fail to find them.
Do you have a better strategy (still efficient) to find all the zeros of a function?
I don't think it's important for the question, but for those who are curious, I'm dealing with characteristic equations of wave propagation in optical fiber. The function looks like this (where V and ell are previously defined, and ell is a positive integer):
def f(u):
    w = numpy.sqrt(V**2 - u**2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.jnjn(ell-1, u)
    kl = scipy.special.jnkn(ell, w)
    kl1 = scipy.special.jnkn(ell-1, w)
    return jl / (u*jl1) + kl / (w*kl1)
Why are you limited to numpy? Scipy has a package that does exactly what you want:
http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html
One lesson I've learned: numerical programming is hard, so don't do it :)
Anyway, if you're dead set on building the algorithm yourself, the doc page on scipy I linked (takes forever to load, btw) gives you a list of algorithms to start with. One method that I've used before is to discretize the function to the degree that is necessary for your problem. (That is, tune \delta x so that it is much smaller than the characteristic size in your problem.) This lets you look for features of the function (like changes in sign). AND, you can compute the derivative of a line segment (probably since kindergarten) pretty easily, so your discretized function has a well-defined first derivative. Because you've tuned the dx to be smaller than the characteristic size, you're guaranteed not to miss any features of the function that are important for your problem.
If you want to know what "characteristic size" means, look for some parameter of your function with units of length or 1/length. That is, for some function f(x), assume x has units of length and f has no units. Then look for the things that multiply x. For example, if you want to discretize cos(\pi x), the parameter that multiplies x (if x has units of length) must have units of 1/length. So the characteristic size of cos(\pi x) is 1/\pi. If you make your discretization much smaller than this, you won't have any issues. To be sure, this trick won't always work, so you may need to do some tinkering.
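A minimal sketch of that idea (the function, interval, and step are placeholders; pick dx well below the characteristic size you identified):

import numpy as np

def g(x):
    return np.cos(np.pi * x)  # characteristic size 1/pi

dx = 0.01  # much smaller than 1/pi
x = np.arange(0.0, 5.0, dx)
y = g(x)

# Sign changes mark candidate roots; the finite-difference slope of each
# segment is the discretized first derivative mentioned above.
slope = np.diff(y) / dx
for i in np.where(np.diff(np.sign(y)) != 0)[0]:
    print(f"root near x = {x[i]:.3f}, local slope {slope[i]:.3f}")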
I found out it's relatively easy to implement your own root finder using scipy.optimize.fsolve.
Idea: find any zeros in the interval (start, stop), with step size step, by calling fsolve repeatedly with a changing x0. Use a relatively small step size to find all the roots.
It can only search for zeros in one dimension (other dimensions must be fixed). If you have other needs, I would recommend using sympy to calculate the analytical solution.
Note: it may not always find all the zeros, but I have seen it give relatively good results. I also put the code in a gist, which I will update if needed.
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt
# Defined below
r = RootFinder(1, 20, 0.01)
args = (90, 5)
roots = r.find(f, *args)
print("Roots: ", roots)
# plot results
u = np.linspace(1, 20, num=600)
fig, ax = plt.subplots()
ax.plot(u, f(u, *args))
ax.scatter(roots, f(np.array(roots), *args), color="r", s=10)
ax.grid(color="grey", ls="--", lw=0.5)
plt.show()
Example output:
Roots: [ 2.84599497 8.82720551 12.38857782 15.74736542 19.02545276]
RootFinder definition
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt

class RootFinder:
    def __init__(self, start, stop, step=0.01, root_dtype="float64", xtol=1e-9):
        self.start = start
        self.stop = stop
        self.step = step
        self.xtol = xtol
        self.roots = np.array([], dtype=root_dtype)

    def add_to_roots(self, x):
        if (x < self.start) or (x > self.stop):
            return  # outside range
        if any(abs(self.roots - x) < self.xtol):
            return  # root already found
        self.roots = np.append(self.roots, x)

    def find(self, f, *args):
        current = self.start
        for x0 in np.arange(self.start, self.stop + self.step, self.step):
            if x0 < current:
                continue
            x = self.find_root(f, x0, *args)
            if x is None:  # no root found
                continue
            current = x
            self.add_to_roots(x)
        return self.roots

    def find_root(self, f, x0, *args):
        x, _, ier, _ = fsolve(f, x0=x0, args=args, full_output=True, xtol=self.xtol)
        if ier == 1:
            return x[0]
        return None
Test function
The scipy.special.jnjn from the question's code does not exist, so I created a similar test function for this case.
def f(u, V=90, ell=5):
    w = np.sqrt(V**2 - u**2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.yn(ell - 1, u)
    kl = scipy.special.kn(ell, w)
    kl1 = scipy.special.kn(ell - 1, w)
    return jl / (u * jl1) + kl / (w * kl1)
The main problem I see with this is whether you can actually find all the roots: as has already been mentioned in the comments, this is not always possible. If you are sure that your function is not completely pathological (sin(1/x) was already mentioned), the next question is your tolerance to missing a root or several of them. Put differently, it's about what lengths you are prepared to go to in order to make sure you did not miss any: to the best of my knowledge, there is no general method to isolate all the roots for you, so you'll have to do it yourself. What you show is a reasonable first step already. A couple of comments:
Brent's method is indeed a good choice here.
First of all, deal with the divergences. Since your function has Bessel functions in the denominators, you can first solve for their roots; better yet, look them up in, e.g., Abramowitz and Stegun (MathWorld link). This will be better than the ad hoc grid you're using.
What you can do, once you've found two roots or divergences, x_1 and x_2, is run the search again in the interval [x_1+epsilon, x_2-epsilon]. Continue until no more roots are found (Brent's method is guaranteed to converge to a root, provided there is one); a sketch of this interval-subdivision idea follows this list.
If you cannot enumerate all the divergences, you might want to be a little more careful when verifying that a candidate is indeed a divergence: given x, don't just check that f(x) is large; check that, e.g., |f(x-epsilon/2)| > |f(x-epsilon)| for several values of epsilon (1e-8, 1e-9, 1e-10, something like that).
If you want to make sure you don't have roots which simply touch zero, look for the extrema of the function, and for each extremum, x_e, check the value of f(x_e).
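A minimal sketch of the interval-subdivision idea (the test function and the list of known divergences are placeholders; for the Bessel case, the poles come from the denominator roots discussed above):

import numpy as np
from scipy.optimize import brentq

def roots_between_poles(f, poles, eps=1e-6, n_scan=100):
    # Search each open interval (p_k + eps, p_{k+1} - eps) for sign
    # changes and refine each bracket with Brent's method.
    roots = []
    for p_lo, p_hi in zip(poles[:-1], poles[1:]):
        xs = np.linspace(p_lo + eps, p_hi - eps, n_scan)
        ys = f(xs)
        for i in range(n_scan - 1):
            if np.sign(ys[i]) != np.sign(ys[i + 1]):
                roots.append(brentq(f, xs[i], xs[i + 1]))
    return roots

# Example: tan has poles at (k + 1/2)*pi and zeros at k*pi.
poles = [(k + 0.5) * np.pi for k in range(4)]
print(roots_between_poles(np.tan, poles))  # ~[pi, 2*pi, 3*pi]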
I've also encountered this problem when solving equations of the form f(z)=0 where f is a holomorphic function. I wanted to be sure not to miss any zeros, and finally developed an algorithm based on the argument principle.
It helps to find the exact number of zeros lying in a complex domain; once you know the number of zeros, it is easier to find them. There are, however, two concerns which must be taken into account:
Take care about multiplicity: when solving (z-1)^2 = 0, you'll get two zeros, as z=1 is counted twice.
If the function is meromorphic (and thus contains poles), each pole reduces the computed count (the argument principle counts zeros minus poles) and breaks a naive attempt to count the zeros.
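A minimal sketch of the counting step (my own illustration, not the author's full algorithm): by the argument principle, 1/(2*pi*i) times the contour integral of f'(z)/f(z) around a closed curve equals the number of zeros minus the number of poles inside, counted with multiplicity. On a circle it can be approximated as:

import numpy as np

def count_zeros(f, df, z0=0.0, R=1.0, n=2000):
    # Approximate (1/(2*pi*i)) * integral of f'(z)/f(z) dz over |z - z0| = R.
    theta = np.linspace(0, 2*np.pi, n, endpoint=False)
    z = z0 + R * np.exp(1j * theta)
    dz = 1j * R * np.exp(1j * theta) * (2*np.pi / n)
    return np.sum(df(z) / f(z) * dz) / (2j * np.pi)

# Example: (z - 1)**2 has a double zero at z = 1.
f = lambda z: (z - 1)**2
df = lambda z: 2*(z - 1)
print(count_zeros(f, df, z0=1.0, R=0.5).real)  # ~2.0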
