Root finding for an equation with one unknown, involving an integral - Python

I am new to Python. I have looked into the scipy modules for this but have been unsuccessful. Here is my problem: I have an equation in which t is the unknown and 0.3 is the initial guess. The area, c, and x (from 0 to 1) are given. For a given area, c, and x, I need to find the value of t. I am not sure how to do this; any ideas?
from scipy import integrate
import numpy as np

c = 1
t = 0.3  # initial guess; t is the unknown
## x = 0 to 1

def evalfunction(x, c, t):
    return 5*t*c*(0.2969*np.sqrt(x/c) - 0.1260*(x/c) - 0.3516*(x/c)**2
                  + 0.2843*(x/c)**3 - 0.1015*(x/c)**4)

x3 = lambda x: evalfunction(x, c, t)
area, err = integrate.quad(x3, 0, 1)  # quad returns (value, error estimate)

Try something like golden-section search (http://en.wikipedia.org/wiki/Golden_section_search): guess a lower and an upper bound for t, then minimize the difference between the area returned by integrate.quad(x3, 0, 1) and your expected area. When this difference is at its minimum, up to some tolerance that you choose, you have found your t.
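A minimal sketch of that approach, assuming a hypothetical target area of 0.04 (substitute your known value); scipy.optimize.minimize_scalar with method='bounded' minimizes the area mismatch between guessed lower and upper bounds on t:

from scipy import integrate, optimize
import numpy as np

c = 1.0
target_area = 0.04  # hypothetical known area; substitute your given value

def evalfunction(x, c, t):
    return 5*t*c*(0.2969*np.sqrt(x/c) - 0.1260*(x/c) - 0.3516*(x/c)**2
                  + 0.2843*(x/c)**3 - 0.1015*(x/c)**4)

def area_mismatch(t):
    # distance between the integral for this t and the desired area
    area, _ = integrate.quad(lambda x: evalfunction(x, c, t), 0, 1)
    return abs(area - target_area)

res = optimize.minimize_scalar(area_mismatch, bounds=(0.0, 1.0), method='bounded')
print(res.x)  # the t whose area best matches target_area

Since the integrand here happens to be linear in t, area(t) = t * area(1), so t = target_area / area(1) would also give the answer directly; the minimization approach generalizes to cases where t enters nonlinearly.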

Related

What is the best way to find the inverse image of a function in python?

So I have a function that takes some constants and the value of my variable c and returns the values of my variables x and y, like this:
def fun(*constants, c):
    # Calculates some stuff to get x and y
    return x, y

(x, y) = fun(*constants, c=c)
All variables are real numbers
c lies between 0 and a positive value cmax
The x,y points are ordered with respect to c
The function produces a curve that is continuous in the x-y plane
What is the best way to approximate the value of c given a specific value of y?
[Edited]
Tim Roberts suggests using scipy.optimize.fsolve, and this almost works for me. Is there a way to tell fsolve to look only for roots within a specified range of c, in my case between 0 and cmax?
from scipy.optimize import fsolve

def fun(*constants, c):
    # Calculates some stuff to get x and y
    return x, y

def func(c):
    return fun(*constants, c=c)[1] - y_objective

y_objective = 10
guess0 = cmax / 2
c_wanted = fsolve(func, [guess0])
print(c_wanted)
The question as stated is quite broad and can delve into some deep mathematical results. I will attempt to answer your question as reasonably as possible below.
The set of assumptions you listed is, AFAICT, not strong enough to guarantee that an inverse exists, even in a neighborhood around some region of interest.
However, let us instead assume that the conditions of the inverse function theorem hold (see https://en.wikipedia.org/wiki/Inverse_function_theorem). The IFT gives a formula for the derivative of the inverse within a region where the conditions hold. You can then use the fundamental theorem of calculus to compute the inverse function in this region. See https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus.
The integration will either need to be done symbolically (very advanced) or approximated using quadrature. See https://en.wikipedia.org/wiki/Numerical_integration
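A minimal sketch of that recipe, under stated assumptions: the forward map y(c) = c**3 + c below is a hypothetical stand-in for fun(*constants, c)[1], chosen so that its derivative never vanishes (the IFT condition); dc/dy = 1/y'(c) is then integrated from an anchor point where the inverse is known.

import numpy as np
from scipy.integrate import solve_ivp

def y_of_c(c):
    # hypothetical forward map standing in for fun(*constants, c)[1]
    return c**3 + c

def dy_dc(c):
    # its derivative; a finite-difference estimate would also work
    return 3*c**2 + 1

c0, y0 = 0.0, y_of_c(0.0)   # anchor: we know the inverse maps y0 -> c0
y_target = 10.0

# Inverse function theorem: c as a function of y satisfies dc/dy = 1/y'(c).
# The fundamental theorem of calculus recovers c(y_target) by integration.
sol = solve_ivp(lambda y, c: 1.0/dy_dc(c[0]), (y0, y_target), [c0], rtol=1e-10)
c_inv = sol.y[0, -1]
print(c_inv, y_of_c(c_inv))  # y_of_c(c_inv) should be close to y_target

Driving the integration with solve_ivp keeps the sketch short; cumulative quadrature over a grid of y values would work just as well.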

solve_ivp from scipy does not integrate the whole range of tspan

I'm trying to use solve_ivp from scipy in Python to solve an IVP. I specified the tspan argument of solve_ivp to be (0,10), as shown below. However, for some reason, the solutions I get always stop around t=2.5.
from scipy.integrate import solve_ivp
import numpy as np
import matplotlib.pyplot as plt

def dudt(t, u):
    return u*(1 - u/12) - 4*np.heaviside(-(t - 5), 1)

ic = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
sol = solve_ivp(dudt, (0, 10), ic, t_eval=np.linspace(0, 10, 10000))

for solution in sol.y:
    y = [y for y in solution if y >= 0]
    t = sol.t[:len(y)]
    plt.plot(t, y)
What is going wrong
You should always look at what the solver returns. In this case it gives
message: 'Required step size is less than spacing between numbers.'
Think of the process of solving your initial value problem with scipy.integrate.solve_ivp as repeatedly estimating a direction and then taking a small step in that direction. The above error means that the solution to your equation changes so fast that even the smallest possible step overshoots. But your equation is simple enough that, at least for t <= 5, where 4*np.heaviside(-(t-5), 1) always equals 4, it can be solved exactly/symbolically. I will say more about t > 5 later.
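As a minimal check (using just one of the initial values from the question), the object returned by solve_ivp carries a status flag alongside the message quoted above:

from scipy.integrate import solve_ivp
import numpy as np

def dudt(t, u):
    return u*(1 - u/12) - 4*np.heaviside(-(t - 5), 1)

sol = solve_ivp(dudt, (0, 10), [2])
print(sol.status)   # -1 means an integration step failed
print(sol.message)  # 'Required step size is less than spacing between numbers.'
print(sol.t[-1])    # how far the solver actually got before giving up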
Symbolic Solution
Sympy can solve your differential equation. While you can provide it with an initial value, it would have taken much longer to solve the equation once for each of your initial values. So instead I told it to give me all solutions, and then I calculated the parameter C1 for each initial value separately.
import numpy as np
import matplotlib.pyplot as plt
from sympy import *

ics = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

f = symbols("f", cls=Function)
t = symbols("t")
eq = Eq(f(t).diff(t), f(t)*(1 - f(t)/12) - 4)
base_sol = dsolve(eq)
c1s = [solve(base_sol.args[1].subs({t: 0}) - ic) for ic in ics]

# Apparently sympy is unhappy that numpy does not supply a cotangent,
# so I provide one manually.
sols = [lambdify(t, base_sol.args[1].subs({symbols('C1'): C1[0]}),
                 modules=['numpy', {'cot': lambda x: 1/np.tan(x)}])
        for C1 in c1s]

t = np.linspace(0, 5, 10000)
for sol in sols:
    y = sol(t)
    mask = (y > -5) & (y < 20)
    plt.plot(t[mask], y[mask])
At first glance the picture looks odd, especially the straight blue and orange line segments. These are just due to values lying outside the masked range, so matplotlib connects the remaining points directly. What is actually happening is a sudden jump. That jump is what tripped up the numerical ODE solver earlier. You can see it even more clearly if you make sympy print the first solution.
The cotangent in the solution has a pole wherever its argument hits an integer multiple of pi; solving for where that happens gives t = 2.47241377386575, which is probably where your plotting stopped.
Now what about t>5?
Unfortunately your equation is not continuous at t=5. One approach would be to solve the equation for t>5 separately, with initial values given by following the solutions of the first equation up to t=5 (sketched below). But that is another question for another day.
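For what it's worth, a rough sketch of that approach, with the caveat that the value of u at t=5 below is a hypothetical placeholder for whatever the first solve produced:

from scipy.integrate import solve_ivp
import numpy as np

def dudt_late(t, u):
    # for t > 5 the term 4*np.heaviside(-(t-5), 1) is zero
    return u*(1 - u/12)

u_at_5 = [11.0]  # hypothetical value of a surviving solution at t = 5
late = solve_ivp(dudt_late, (5, 10), u_at_5, t_eval=np.linspace(5, 10, 500))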

Python solve nonlinear (transcendental) equations

I have an equation a*x + log(x) - b = 0 (a and b are constants), and I want to solve for x. The problem is that I have numerous values of a (and correspondingly numerous values of b). How do I solve this equation using Python?
You could check out something like
http://docs.scipy.org/doc/scipy-0.13.0/reference/optimize.nonlin.html
which has tools specifically designed for these kinds of equations.
Cool - today I learned about Python's numerical solver.
from math import log
from scipy.optimize import brentq

def f(x, a, b):
    return a * x + log(x) - b

for a in range(1, 5):
    for b in range(1, 5):
        result = brentq(lambda x: f(x, a, b), 1e-10, 20)
        print(a, b, result)
brentq estimates where the function crosses the x-axis. You need to give it two points, one where the function is definitely negative and one where it is definitely positive. For the negative point, choose a number smaller than exp(-B), where B is the maximum value of b. For the positive point, choose a number bigger than B.
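A small sketch of that bracket choice, applied to the loop above (here B = 4, the largest b used):

from math import exp, log
from scipy.optimize import brentq

def f(x, a, b):
    return a * x + log(x) - b

B = 4                # maximum value of b in the loops above
lo = exp(-B) / 2     # log(lo) < -B, so f(lo) < 0 for these a, b
hi = 2 * B           # a*hi > B >= b and log(hi) > 0, so f(hi) > 0
print(brentq(lambda x: f(x, 1, 4), lo, hi))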
If you cannot predict the range of b values, you can use a generic solver such as fsolve instead. This will probably produce a solution, but it is not guaranteed.
from scipy.optimize import fsolve

for a in range(1, 5):
    for b in range(1, 5):
        result = fsolve(f, 1, (a, b))
        print(a, b, result)

using python to solve a nonlinear equation

I have never used Python, but Mathematica can't handle the equation I am trying to solve. I am trying to solve for the variable a in the following equations, where s, c, mu, and delta-t are known parameters.
I tried NSolve, Solve, etc. in Mathematica, but it has been running for an hour with no luck. Since I am not familiar with Python, is there a way I can use it to solve this equation for a?
You're not going to find an analytic solution to these equations because they're transcendental, containing a both inside and outside of a trigonometric function.
I think the trouble you're having with numerical solutions is that the range of acceptable values for a is constrained by the arcsin. Since arcsin is only defined for arguments between -1 and 1 (assuming you want a to be real), your formulas for alpha and beta require that a > s/2 and a > (s-c)/2.
In Python, you can find a zero of your third equation (rewritten in the form f(a) = 0) using the brentq function:
import numpy as np
from scipy.optimize import brentq

s = 10014.6
c = 6339.06
mu = 398600.0
dt = 780.0

def f(a):
    alpha = 2*np.arcsin(np.sqrt(s/(2*a)))
    beta = 2*np.arcsin(np.sqrt((s-c)/(2*a)))
    return alpha - beta - (np.sin(alpha) - np.sin(beta)) - np.sqrt(mu/a**3)*dt

a0 = max(s/2, (s-c)/2)
a = brentq(f, a0, 10*a0)
Edit:
To clarify, brentq(f, a, b) searches for a zero of f on the interval [a, b]. Here we know that a is at least max(s/2, (s-c)/2). I just guessed that ten times that value was a plausible upper bound, and that worked for the given parameters. More generally, you need to make sure that f changes sign between a and b (see the sketch below). You can read more about how the function works in the SciPy docs.
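Continuing the snippet above, a small guard of this kind (an illustrative addition, not part of the original code) verifies the sign change before calling brentq:

b0 = 10*a0
if f(a0)*f(b0) < 0:   # endpoints must straddle a root
    a = brentq(f, a0, b0)
else:
    raise ValueError("f(a0) and f(b0) have the same sign; widen the bracket")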
I think it's worth examining the behaviour of the function before attempting to solve it. Without doing that, you don't know whether there is a unique solution, many solutions, or no solution. (The biggest problem is many solutions, where numerical methods may not give you the solution you require or expect; if you blindly use one, "bad things" might happen.) You can examine the behaviour nicely using scipy and IPython. This is an example notebook that does that:
# -*- coding: utf-8 -*-
# <nbformat>3.0</nbformat>

# <codecell>
import numpy
import pylab

s = 10014.6
c = 6339.06
mu = 398600.0
dt = 780.0

# <codecell>
def sin_alpha_2(x):
    return numpy.sqrt(s/(2*x))

def sin_beta_2(x):
    return numpy.sqrt((s-c)/(2*x))

def alpha(x):
    return 2*numpy.arcsin(numpy.clip(sin_alpha_2(x), -0.99, 0.99))

def beta(x):
    return 2*numpy.arcsin(numpy.clip(sin_beta_2(x), -0.99, 0.99))

# <codecell>
def fn(x):
    return alpha(x) - beta(x) - numpy.sin(alpha(x)) + numpy.sin(beta(x)) - dt*numpy.sqrt(mu/numpy.power(x, 3))

# <codecell>
xx = numpy.arange(1, 20000)
pylab.plot(xx, numpy.clip(fn(xx), -2, 2))

# <codecell>
xx = numpy.arange(4000, 10000)
pylab.plot(xx, fn(xx))

# <codecell>
xx = numpy.arange(8000, 9000)
pylab.plot(xx, fn(xx))
This shows that we expect to find a solution with a between 8000 and 9000.
The odd kink in the curve at about 5000, and the spurious earlier solution at about 4000, are due to the clipping required to make arcsin behave. The equation really does not make sense below about a=5000 (the exact cutoff is the a0 given in Ray's solution). This gives a nice range that can be used with the techniques in Ray's solution.

How to find all zeros of a function using numpy (and scipy)?

Suppose I have a function f(x) defined between a and b. This function can have many zeros, but also many asymptotes. I need to retrieve all the zeros of this function. What is the best way to do it?
Actually, my strategy is the following:
I evaluate my function on a given number of points
I detect whether there is a change of sign
I find the zero between the points that are changing sign
I verify if the zero found is really a zero, or if this is an asymptote
import numpy
import scipy.optimize

# f, a, b are assumed defined as in the question above
U = numpy.linspace(a, b, 100)  # evaluate function at 100 different points
c = f(U)
s = numpy.sign(c)
for i in range(100 - 1):
    if s[i] + s[i+1] == 0:  # opposite signs
        u = scipy.optimize.brentq(f, U[i], U[i+1])
        z = f(u)
        if numpy.isnan(z) or abs(z) > 1e-3:
            continue
        print('found zero at {}'.format(u))
This algorithm seems to work, except I see two potential problems:
It will not detect a zero that touches but does not cross the x axis (for example, a function like f(x) = x**2). However, I don't think that can occur with the function I'm evaluating.
If the discretization points are too far apart, there could be more than one zero between them, and the algorithm could fail to find them.
Do you have a better strategy (still efficient) to find all the zeros of a function?
I don't think it is important for the question, but for those who are curious: I am dealing with characteristic equations of wave propagation in an optical fiber. The function looks like this (where V and ell are previously defined, and ell is a positive integer):
def f(u):
    w = numpy.sqrt(V**2 - u**2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.jnjn(ell-1, u)
    kl = scipy.special.jnkn(ell, w)
    kl1 = scipy.special.jnkn(ell-1, w)
    return jl / (u*jl1) + kl / (w*kl1)
Why are you limited to numpy? Scipy has a package that does exactly what you want:
http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html
One lesson I've learned: numerical programming is hard, so don't do it :)
Anyway, if you're dead set on building the algorithm yourself, the doc page on scipy I linked (takes forever to load, btw) gives you a list of algorithms to start with. One method that I've used before is to discretize the function to the degree that is necessary for your problem. (That is, tune \delta x so that it is much smaller than the characteristic size in your problem.) This lets you look for features of the function (like changes in sign). AND, you can compute the derivative of a line segment (probably since kindergarten) pretty easily, so your discretized function has a well-defined first derivative. Because you've tuned the dx to be smaller than the characteristic size, you're guaranteed not to miss any features of the function that are important for your problem.
If you want to know what "characteristic size" means, look for some parameter of your function with units of length or 1/length. That is, for some function f(x), assume x has units of length and f has no units. Then look for the things that multiply x. For example, if you want to discretize cos(\pi x), the parameter that multiplies x (if x has units of length) must have units of 1/length. So the characteristic size of cos(\pi x) is 1/\pi. If you make your discretization much smaller than this, you won't have any issues. To be sure, this trick won't always work, so you may need to do some tinkering.
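A rough sketch of this discretization idea; cos(pi*x) and the grid step are assumptions chosen for illustration, with dx well below the characteristic size 1/pi:

import numpy as np

f = lambda x: np.cos(np.pi * x)   # characteristic size 1/pi
dx = 0.01                         # much smaller than 1/pi
x = np.arange(0.0, 3.0, dx)
y = f(x)

# sign changes mark the grid cells containing a zero crossing
crossings = np.where(np.diff(np.sign(y)) != 0)[0]
print(x[crossings])               # near 0.5, 1.5, 2.5

# the discretized function also has a well-defined first derivative
slopes = np.diff(y) / dx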
I found out it's relatively easy to implement your own root finder using scipy.optimize.fsolve.
Idea: find any zeros in the interval (start, stop) with step size step by calling fsolve repeatedly with a changing x0. Use a relatively small step size to find all the roots.
It can only search for zeros in one dimension (other dimensions must be fixed). If you have other needs, I would recommend using sympy to calculate an analytical solution.
Note: it may not always find all the zeros, but I have seen it give relatively good results. I also put the code in a gist, which I will update if needed.
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt
# Defined below
r = RootFinder(1, 20, 0.01)
args = (90, 5)
roots = r.find(f, *args)
print("Roots: ", roots)
# plot results
u = np.linspace(1, 20, num=600)
fig, ax = plt.subplots()
ax.plot(u, f(u, *args))
ax.scatter(roots, f(np.array(roots), *args), color="r", s=10)
ax.grid(color="grey", ls="--", lw=0.5)
plt.show()
Example output:
Roots: [ 2.84599497 8.82720551 12.38857782 15.74736542 19.02545276]
RootFinder definition
import numpy as np
from scipy.optimize import fsolve


class RootFinder:
    def __init__(self, start, stop, step=0.01, root_dtype="float64", xtol=1e-9):
        self.start = start
        self.stop = stop
        self.step = step
        self.xtol = xtol
        self.roots = np.array([], dtype=root_dtype)

    def add_to_roots(self, x):
        if (x < self.start) or (x > self.stop):
            return  # outside range
        if any(abs(self.roots - x) < self.xtol):
            return  # root already found
        self.roots = np.append(self.roots, x)

    def find(self, f, *args):
        current = self.start
        for x0 in np.arange(self.start, self.stop + self.step, self.step):
            if x0 < current:
                continue
            x = self.find_root(f, x0, *args)
            if x is None:  # no root found
                continue
            current = x
            self.add_to_roots(x)
        return self.roots

    def find_root(self, f, x0, *args):
        x, _, ier, _ = fsolve(f, x0=x0, args=args, full_output=True, xtol=self.xtol)
        if ier == 1:
            return x[0]
        return None
Test function
The scipy.special.jnjn from the question does not exist, but I created a similar test function for this case.
import numpy as np
import scipy.special

def f(u, V=90, ell=5):
    w = np.sqrt(V**2 - u**2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.yn(ell - 1, u)
    kl = scipy.special.kn(ell, w)
    kl1 = scipy.special.kn(ell - 1, w)
    return jl / (u * jl1) + kl / (w * kl1)
The main problem I see with this is whether you can actually find all the roots; as has already been mentioned in the comments, this is not always possible. If you are sure that your function is not completely pathological (sin(1/x) was already mentioned), the next question is what your tolerance is for missing a root or several of them. Put differently, it is about what lengths you are prepared to go to in order to make sure you did not miss any; to the best of my knowledge, there is no general method that isolates all the roots for you, so you will have to do it yourself. What you show is a reasonable first step already. A couple of comments:
Brent's method is indeed a good choice here.
First of all, deal with the divergences. Since your function has Bessel functions in the denominators, you can first solve for their roots; better yet, look them up in, e.g., Abramowitz and Stegun (Mathworld link). This will be better than the ad hoc grid you are using.
Once you have found two roots or divergences, x_1 and x_2, run the search again in the interval [x_1+epsilon, x_2-epsilon]. Continue until no more roots are found (Brent's method is guaranteed to converge to a root, provided there is one).
If you cannot enumerate all the divergences, you might want to be a little more careful when verifying that a candidate is indeed a divergence: given x, do not just check that f(x) is large; check that, e.g., |f(x-epsilon/2)| > |f(x-epsilon)| for several values of epsilon (1e-8, 1e-9, 1e-10, something like that).
If you want to make sure you do not have roots which merely touch zero, look for the extrema of the function, and for each extremum x_e check the value of f(x_e); a sketch follows.
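A sketch of that last check; the grazing function g and the finite-difference derivative are assumptions for illustration:

import numpy as np
from scipy.optimize import brentq

g = lambda x: (x - 1.0)**2 - 1e-12          # hypothetical root that barely crosses
dg = lambda x, h=1e-6: (g(x + h) - g(x - h)) / (2*h)

x_e = brentq(dg, 0.0, 2.0)                  # extremum: zero of the derivative
if abs(g(x_e)) < 1e-9:
    print("near-touching zero around x =", x_e)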
I have also encountered this problem, solving equations f(z) = 0 where f is a holomorphic function. I wanted to be sure not to miss any zeros, and I finally developed an algorithm based on the argument principle.
It helps to find the exact number of zeros lying in a complex domain. Once you know the number of zeros, it is easier to find them. There are, however, two concerns which must be taken into account:
Take care about multiplicity: when solving (z-1)^2 = 0, you will get two zeros, as z=1 is counted twice.
If the function is meromorphic (and thus contains poles), each pole reduces the zero count and breaks the attempt to count them. A numeric sketch of the counting step follows.
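A minimal numeric sketch of the argument principle; the test function and the contour (a circle of radius 2) are assumptions for illustration. For f holomorphic inside a closed contour, the contour integral of f'(z)/f(z), divided by 2*pi*i, counts the zeros with multiplicity:

import numpy as np

f = lambda z: (z - 1)**2       # double zero at z = 1
df = lambda z: 2*(z - 1)

# Riemann-sum approximation of the contour integral around |z| = 2
theta = np.linspace(0, 2*np.pi, 2000, endpoint=False)
z = 2.0*np.exp(1j*theta)
dz = 2.0j*np.exp(1j*theta) * (2*np.pi/len(theta))

count = np.sum(df(z)/f(z) * dz) / (2j*np.pi)
print(count.real)              # ~2.0: the zero at z=1 counted with multiplicity 2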
