I have never used Python, but Mathematica can't handle the equation I am trying to solve. I am trying to solve the following equations for the variable "a", where s, c, mu, and delta t are known parameters.
I tried NSolve, Solve, etc. in Mathematica, but it has been running for an hour with no luck. Since I am not familiar with Python, is there a way I can use Python to solve this equation for a?
You're not going to find an analytic solution to these equations because they're transcendental, containing a both inside and outside of a trigonometric function.
I think the trouble you're having with numerical solutions is that the range of acceptable values for a is constrained by the arcsin. Since arcsin is only defined for arguments between -1 and 1 (assuming you want a to be real), your formulas for alpha and beta require that a > s/2 and a > (s-c)/2.
In Python, you can find a zero of your third equation (rewritten in the form f(a) = 0) using the brentq function:
import numpy as np
from scipy.optimize import brentq
s = 10014.6
c = 6339.06
mu = 398600.0
dt = 780.0
def f(a):
    alpha = 2*np.arcsin(np.sqrt(s/(2*a)))
    beta = 2*np.arcsin(np.sqrt((s-c)/(2*a)))
    return alpha - beta - (np.sin(alpha) - np.sin(beta)) - np.sqrt(mu/a**3)*dt
a0 = max(s/2, (s-c)/2)
a = brentq(f, a0, 10*a0)
Edit:
To clarify: brentq(f, a, b) searches for a zero of f on the interval [a, b]. Here, we know that a is at least max(s/2, (s-c)/2). I just guessed that 10 times that was a plausible upper bound, and that worked for the given parameters. More generally, you need to make sure that f changes sign between a and b. You can read more about how the function works in the SciPy docs.
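A minimal sketch of that sign-change check, reusing f and a0 from the code above:

# Verify the bracket straddles a sign change before handing it to brentq
lo, hi = a0, 10*a0
assert f(lo) * f(hi) < 0, "f must change sign on [lo, hi] for brentq"
a = brentq(f, lo, hi)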
I think it's worth examining the behaviour of the function before attempting to solve it. Without doing that you don't know if there is a unique solution, many solutions, or no solution. (The biggest problem is many solutions, where numerical methods may not give you the solution you require/expect - and if you blindly use it "bad things" might happen.) You can examine the behaviour nicely using scipy and IPython. This is an example notebook that does that
# -*- coding: utf-8 -*-
# <nbformat>3.0</nbformat>
# <codecell>
import numpy
import pylab
s = 10014.6
c = 6339.06
mu = 398600.0
dt = 780.0
# <codecell>
def sin_alpha_2(x):
    return numpy.sqrt(s/(2*x))
def sin_beta_2(x):
    return numpy.sqrt((s-c)/(2*x))
def alpha(x):
    return 2*numpy.arcsin( numpy.clip(sin_alpha_2(x),-0.99,0.99) )
def beta(x):
    return 2*numpy.arcsin( numpy.clip(sin_beta_2(x),-0.99,0.99) )
# <codecell>
def fn(x):
    return alpha(x) - beta(x) - numpy.sin(alpha(x)) + numpy.sin(beta(x)) - dt * numpy.sqrt( mu / numpy.power(x,3) )
# <codecell>
xx = numpy.arange(1,20000)
pylab.plot(xx, numpy.clip(fn(xx),-2,2) )
# <codecell>
xx=numpy.arange(4000,10000)
pylab.plot(xx,fn(xx))
# <codecell>
xx=numpy.arange(8000,9000)
pylab.plot(xx,fn(xx))
This shows that we expect to find a solution with a between 8000 and 9000.
The odd kink in the curve at about 5000, and the earlier "solution" at about 4000, are due to the clipping required to make arcsin behave. The equation does not really make sense below about a = 5000 (the exact cut-off is the a0 given in Ray's solution). This gives a nice bracket that can be used with the techniques in Ray's solution.
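For instance, a minimal sketch that feeds the bracket found above to brentq (assuming fn and the parameters from the notebook cells are in scope):

from scipy.optimize import brentq

# The plots suggest fn changes sign between 8000 and 9000
a_root = brentq(fn, 8000, 9000)
print(a_root)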
Related
I am trying to find a solution to the following system where f and g are R^2 -> R^2 functions:
f(x1,x2) = (y1,y2)
g(y1,y2) = (x1,x2)
I tried solving it using scipy.optimize.fsolve as follows:
def eqm(vars):
    x1, x2, y1, y2 = vars
    eq1 = f([x1, x2])[0] - y1
    eq2 = f([x1, x2])[1] - y2
    eq3 = g([y1, y2])[0] - x1
    eq4 = g([y1, y2])[1] - x2
    return [eq1, eq2, eq3, eq4]
fsolve(eqm, x0 = [1,0.5,1,0.5])
Although it returns an output, it does not seem to be correct: it does not satisfy the two conditions, and it varies a lot with the x0 specified. I am also getting a warning:
'The iteration is not making good progress, as measured by the improvement from the last ten iterations.' I do know for a fact that a unique solution exists, which I have obtained algebraically.
I'm not sure what is going on, or whether there is a simpler way of solving it, especially using just two equations instead of splitting them up into 4. Something like:
def equations(vars):
    X, Y = vars
    eq1 = f(X) - Y
    eq2 = g(Y) - X
    return [eq1, eq2]
fsolve(equations, x0 =[[1,0.5],[1,0.5]])
Suggestions on other modules e.g. sympy are also welcome!
First, I recommend working with numpy arrays since manipulating these is simpler than lists.
I've slightly rewritten your code:
import numpy as np
import scipy.optimize as opt

def f(x):
    return x

def g(x):
    return x

def func(vars):
    input = np.array(vars)
    eq1 = f(input[:2]) - input[2:]
    eq2 = g(input[2:]) - input[:2]
    return np.concatenate([eq1, eq2])
root = opt.fsolve(func, [1, 1, 0., 1.2])
print(root)
print(func(root)) # should be close to zeros
What you have should work correctly, so I believe there is something wrong with the equations you're using. If you provide those, I can try to see what may be wrong.
This seems to be more of a problem of numerical mathematics than Python coding. Your functions may have "ugly" behavior around the solution, may be strongly non-linear or contain singularities. We cannot help further without seeing the functions. One thing you might try is to instead solve a system
g(f(x)) - x = 0
and simplify g(f(x)) as much as possible analytically. Then calculate y = f(x) after solving the equation.
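A minimal sketch of that reduction, with placeholder linear maps standing in for your real f and g (solve for x first, then recover y):

import numpy as np
from scipy.optimize import fsolve

def f(x):   # placeholder for your real R^2 -> R^2 function
    return np.array([0.5*x[0] + 0.1*x[1], 0.2*x[0] + 0.4*x[1]])

def g(y):   # placeholder for your real R^2 -> R^2 function
    return np.array([y[0] + 0.3, y[1] - 0.1])

def reduced(x):
    return g(f(x)) - x      # solve g(f(x)) = x

x = fsolve(reduced, x0=[1.0, 0.5])
y = f(x)                    # recover y afterwards
print(x, y, reduced(x))     # residual should be near zero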
I have just read Using adaptive time step for scipy.integrate.ode when solving ODE systems.
My code below works fine, but when I solve more complicated equations than the one in the example below, the results seem inaccurate. Is there a way to change this code so that it automatically adapts the time step according to specified absolute and relative error tolerances, e.g. 10^-8?
from scipy.integrate import ode
initials = [0.5,0.2]
integration_range = (0, 30)
def f(t, X):
    x, y = X[0], X[1]
    dxdt = x**2 + y
    dydt = y**2 + x
    return [dxdt, dydt]
X_solutions = []
t_solutions = []
def solution_getter(t, X):
    t_solutions.append(t)
    X_solutions.append(X.copy())
backend = "dopri5"
ode_solver = ode(f).set_integrator(backend)
ode_solver.set_solout(solution_getter)
ode_solver.set_initial_value(y=initials, t=0)
ode_solver.integrate(integration_range[1])
You could set the values of rtol and atol in the set_integrator call, see https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html.
The default values provide medium accuracy that is good enough for graphics, but may not be enough for other purposes.
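For example, a minimal sketch against the code in the question (atol and rtol are the keyword names from the linked docs):

# Request tighter tolerances when constructing the dopri5 integrator
ode_solver = ode(f).set_integrator("dopri5", atol=1e-8, rtol=1e-8)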
Here is the thing.
I am trying to use fsolve function in Python to find the root of a cubic function. This cubic function has a parameter, deltaW. What I do is change this parameter deltaW from -50 to 50, and find the root of the cubic function at the same time. Below is my script:
from scipy.optimize import fsolve
import matplotlib.pyplot as plt
import numpy as np
import pylab
g = 5.61
gamma = 6.45
kappa = 6.45
J = 6.45
rs = 1.0 # These are just parameters
m = 5.0*10**(-11)
wm = 2*3.14*23.4
X = []
X1 = []
def func(x): # Define the cubic function I need to solve
    A = 1j*g**2*(kappa + 1j*deltaW)*x*x/(m*wm**2)
    B = J**2 + (1j*deltaW - gamma)*(1j*deltaW + kappa)
    C = A + B
    D = abs(C)*x - J*np.sqrt(2*kappa)*rs
    return D
for deltaW in np.linspace(-50, 50, 1000):
    x0 = fsolve(func, 0.0001)
    X.append(x0)
deltaW = np.linspace(-50, 50, 1000)
plt.plot(deltaW, X)
plt.show()
When I run this script, I get these two messages:
RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last five Jacobian evaluations.
warnings.warn(msg, RuntimeWarning)
/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py:152: RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
I am sorry I do not have enough reputation to put the plot of this script here. My question is why do I get this message, and why does my plot look so weird in the left part?
Is it because my code is wrong?
As in almost all cases of finding roots, a good initial guess is imperative. Sometimes the best initial guess is, in fact, known to be wrong. That is the case here. The behavior of your script, which shows unexpected 'spikes' in the answer, can be looked at more deeply by both plotting up the function, and plotting up the found roots around those spikes (hey, you've got a Python console - this is really easy).
What you find is that the solution returned by the solver is jumping around, even though the function really doesn't look that different. The problem is that your initial guess of 0.0001 lies close to a tiny minimum of the function, and the solver can't figure out how to get out of there. Setting the initial guess to 1.0 (way far away, but on a nice, easy descending portion of the function that will head directly to the root), results instead in:
So, three things:
1. Solvers need loving care and attention - they are rarely automagic.
2. Sometimes the 'right' initial guess can be well away from what you know is the right answer, but in such a way that the solver has an easy time of it.
3. The interactive Python console lets you look quickly at what is going on. Use the power of it!
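As a minimal sketch of the suggested change, reusing func and the deltaW loop from the question:

X = []
for deltaW in np.linspace(-50, 50, 1000):
    x0 = fsolve(func, 1.0)   # initial guess moved from 0.0001 to 1.0
    X.append(x0)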
I have a dynamic model set up as a (stiff) system of ODEs. I currently solve this with CVODE (from the SUNDIALS package in the Assimulo python package) and all is good.
I now want to add a new 3D heat sink (with temperature-dependent thermal parameters) to the problem. Instead of writing out all the equations from scratch for the 3D heat equation, my idea is to use an existing FEM or FVM framework to provide to me an interface that will allow me to easily provide the (t, y) for the 3D block to a routine, and get back the residuals y'. The principle is to use the equations from the FEM system but not the solver. CVODE can exploit sparsity, but the combined system is expected to solve slower than the FEM system would solve on its own, being tailored for such.
# pseudocode of a residuals function for CVODE
def residual(t, y):
    # ODE system of n equations
    res[0] = <function of t,y>;
    res[1] = <function of t,y>;
    ...
    res[n] = <function of t,y>;
    # Here we add the FEM/FVM residuals
    for i in range(FEMcount):
        res[n+1+i] = FEMequations[i](t, y)
    return res
My question is whether (a) this approach is sane, and (b) is there a FEM or FVM library that will easily let me treat it as a system of equations, such that I can "tack it on" to my existing set of ODE equations.
If I can't let the two systems share the same time axis, then I will have to run them in a stepping mode, where I run the one model for a short time, update the boundary conditions for the other, run that one, update the first model's BCs, and so on.
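A rough sketch of that stepping loop; the two step_* functions are hypothetical stand-ins for the real ODE and FEM/FVM solvers, and only the hand-off structure matters:

def step_ode(t, dt, bc):
    return bc + dt          # placeholder for one CVODE step

def step_fem(t, dt, bc):
    return 0.5 * bc         # placeholder for one FEM/FVM step

t, t_end, dt = 0.0, 1.0, 0.1
ode_state, fem_state = 1.0, 0.0
while t < t_end:
    ode_state = step_ode(t, dt, fem_state)   # advance ODE model, FEM result as BC
    fem_state = step_fem(t, dt, ode_state)   # advance FEM model, ODE result as BC
    t += dt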
I have some experience with the wonderful library FiPy, and I am expecting to eventually end up using that library in the manner described above. But I want to know about experience with other systems in problems of this nature, and also other approaches that I have missed.
Edit: I now have some example code that appears to be working, showing how the FiPy mesh diffusion residuals can be solved with CVODE. However, this is only one approach (using FiPy) and the remainder of my other questions and concerns still stand. Any suggestions welcome.
from fipy import *
from fipy.solvers.scipy import DefaultSolver
solverFIPY = DefaultSolver()
from assimulo.solvers import CVode as solverASSIMULO
from assimulo.problem import Explicit_Problem as Problem
# FiPy Setup - Using params from the Mesh1D example
###################################################
nx = 50; dx = 1.; D = 1.
mesh = Grid1D(nx = nx, dx = dx)
phi = CellVariable(name="solution variable", mesh=mesh, value=0.)
valueLeft, valueRight = 1., 0.
phi.constrain(valueRight, mesh.facesRight)
phi.constrain(valueLeft, mesh.facesLeft)
# Instead of eqX = TransientTerm() == ExplicitDiffusionTerm(coeff=D),
# Rather just operate on the diffusion term. CVODE will calculate the
# Transient side
edt = ExplicitDiffusionTerm(coeff=D)
timeStepDuration = 0.9 * dx**2 / (2 * D)
steps = 100
# For comparison with an analytical solution - again,
# taken from the Mesh1D.py example
phiAnalytical = CellVariable(name="analytical value", mesh=mesh)
x = mesh.cellCenters[0]
t = timeStepDuration * steps
from scipy.special import erf
phiAnalytical.setValue(1 - erf(x / (2 * numerix.sqrt(D * t))))
if __name__ == '__main__':
    viewer = Viewer(vars=(phi, phiAnalytical))#, datamin=0., datamax=1.)
    viewer.plot()
    raw_input('Press a key...')
# Now for the Assimulo/Sundials solver setup
############################################
def residual(t, X):
    # Pretty straightforward, phi is the unknown
    phi.value = X # This is a vector, 50 elements
    # Can immediately return the residuals, CVODE sees this vector
    # of 50 elements as X'(t), which is like TransientTerm() from FiPy
    return edt.justResidualVector(var=phi, solver=solverFIPY)
x0 = phi.value
t0 = 0.
model = Problem(residual, x0, t0)
simulation = solverASSIMULO(model)
tfinal = steps * timeStepDuration # s,
cell_tol = [1.0e-8]*50
simulation.atol = cell_tol
simulation.rtol = 1e-6
simulation.iter = 'Newton'
t, x = simulation.simulate(tfinal, 0)
print x[-1]
# Write back the answer to compare
phi.value = x[-1]
viewer.plot()
raw_input('Press a key...')
This will produce a graph showing a perfect match between the numerical and analytical solutions.
An ODE is a differential equation in one dimension.
An FEM model is for problems that are spatial, i.e., problems in higher dimensions. You want a finite difference method. It's easier to solve and understand from the perspective of someone coming from the ODE world.
The question I think you should really be asking is how do you take your ODE, and transfer it to a 3D problem space.
Multidimensional partial differential equations are difficult to solve, but I'll refer you to a CFD course that does just that, although only in 2D: http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/
It should take you a solid afternoon to get through that! Good Luck!
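As a flavour of what that course builds up from, here is a minimal sketch (my own toy example, not taken from the link) of an explicit finite-difference step for 1D diffusion u_t = D u_xx:

import numpy as np

nx, D, dx, dt = 50, 1.0, 1.0, 0.4      # dt chosen so D*dt/dx**2 <= 0.5 (stability)
u = np.zeros(nx)
u[0] = 1.0                             # fixed boundary value on the left
for _ in range(100):
    u[1:-1] += D*dt/dx**2 * (u[2:] - 2*u[1:-1] + u[:-2])
print(u[:10])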
Suppose I have a function f(x) defined between a and b. This function can have many zeros, but also many asymptotes. I need to retrieve all the zeros of this function. What is the best way to do it?
Actually, my strategy is the following:
I evaluate my function on a given number of points
I detect whether there is a change of sign
I find the zero between the points that are changing sign
I verify if the zero found is really a zero, or if this is an asymptote
U = numpy.linspace(a, b, 100) # evaluate function at 100 different points
c = f(U)
s = numpy.sign(c)
for i in range(100-1):
    if s[i] + s[i+1] == 0: # opposite signs
        u = scipy.optimize.brentq(f, U[i], U[i+1])
        z = f(u)
        if numpy.isnan(z) or abs(z) > 1e-3:
            continue
        print('found zero at {}'.format(u))
This algorithm seems to work, except I see two potential problems:
It will not detect a zero that doesn't cross the x axis (for example, in a function like f(x) = x**2). However, I don't think that can occur with the function I'm evaluating.
If the discretization points are too far apart, there could be more than one zero between them, and the algorithm could fail to find them.
Do you have a better strategy (still efficient) to find all the zeros of a function?
I don't think it's important for the question, but for those who are curious, I'm dealing with characteristic equations of wave propagation in optical fiber. The function looks like this (where V and ell are previously defined, and ell is a positive integer):
def f(u):
    w = numpy.sqrt(V**2 - u**2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.jnjn(ell-1, u)
    kl = scipy.special.jnkn(ell, w)
    kl1 = scipy.special.jnkn(ell-1, w)
    return jl / (u*jl1) + kl / (w*kl1)
Why are you limited to numpy? Scipy has a package that does exactly what you want:
http://docs.scipy.org/doc/scipy/reference/optimize.nonlin.html
One lesson I've learned: numerical programming is hard, so don't do it :)
Anyway, if you're dead set on building the algorithm yourself, the doc page on scipy I linked (takes forever to load, btw) gives you a list of algorithms to start with. One method that I've used before is to discretize the function to the degree that is necessary for your problem. (That is, tune \delta x so that it is much smaller than the characteristic size in your problem.) This lets you look for features of the function (like changes in sign). AND, you can compute the derivative of a line segment (probably since kindergarten) pretty easily, so your discretized function has a well-defined first derivative. Because you've tuned the dx to be smaller than the characteristic size, you're guaranteed not to miss any features of the function that are important for your problem.
If you want to know what "characteristic size" means, look for some parameter of your function with units of length or 1/length. That is, for some function f(x), assume x has units of length and f has no units. Then look for the things that multiply x. For example, if you want to discretize cos(\pi x), the parameter that multiplies x (if x has units of length) must have units of 1/length. So the characteristic size of cos(\pi x) is 1/\pi. If you make your discretization much smaller than this, you won't have any issues. To be sure, this trick won't always work, so you may need to do some tinkering.
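A tiny sketch of that idea, using cos(pi x) from the paragraph above (the characteristic size is 1/pi, so dx is chosen much smaller):

import numpy as np

dx = 0.01                                   # much smaller than 1/pi
x = np.arange(0.0, 3.0, dx)
y = np.cos(np.pi * x)
brackets = np.where(np.diff(np.sign(y)) != 0)[0]
print(x[brackets])                          # sign changes near 0.5, 1.5, 2.5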
I found it's relatively easy to implement your own root finder using scipy.optimize.fsolve.
Idea: find any zeroes in the interval (start, stop), with step size step, by calling fsolve repeatedly with a changing x0. Use a relatively small step size to find all the roots.
It can only search for zeroes in one dimension (other dimensions must be fixed). If you have other needs, I would recommend using sympy to calculate the analytical solution.
Note: it may not always find all the zeroes, but I saw it giving relatively good results. I also put the code in a gist, which I will update if needed.
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt
# Defined below
r = RootFinder(1, 20, 0.01)
args = (90, 5)
roots = r.find(f, *args)
print("Roots: ", roots)
# plot results
u = np.linspace(1, 20, num=600)
fig, ax = plt.subplots()
ax.plot(u, f(u, *args))
ax.scatter(roots, f(np.array(roots), *args), color="r", s=10)
ax.grid(color="grey", ls="--", lw=0.5)
plt.show()
Example output:
Roots: [ 2.84599497 8.82720551 12.38857782 15.74736542 19.02545276]
zoom-in:
RootFinder definition
import numpy as np
import scipy
from scipy.optimize import fsolve
from matplotlib import pyplot as plt
class RootFinder:
    def __init__(self, start, stop, step=0.01, root_dtype="float64", xtol=1e-9):
        self.start = start
        self.stop = stop
        self.step = step
        self.xtol = xtol
        self.roots = np.array([], dtype=root_dtype)

    def add_to_roots(self, x):
        if (x < self.start) or (x > self.stop):
            return  # outside range
        if any(abs(self.roots - x) < self.xtol):
            return  # root already found
        self.roots = np.append(self.roots, x)

    def find(self, f, *args):
        current = self.start
        for x0 in np.arange(self.start, self.stop + self.step, self.step):
            if x0 < current:
                continue
            x = self.find_root(f, x0, *args)
            if x is None:  # no root found
                continue
            current = x
            self.add_to_roots(x)
        return self.roots

    def find_root(self, f, x0, *args):
        x, _, ier, _ = fsolve(f, x0=x0, args=args, full_output=True, xtol=self.xtol)
        if ier == 1:
            return x[0]
        return None
Test function
The scipy.special.jnjn does not exist anymore, but I created a similar test function for this case.
def f(u, V=90, ell=5):
    w = np.sqrt(V ** 2 - u ** 2)
    jl = scipy.special.jn(ell, u)
    jl1 = scipy.special.yn(ell - 1, u)
    kl = scipy.special.kn(ell, w)
    kl1 = scipy.special.kn(ell - 1, w)
    return jl / (u * jl1) + kl / (w * kl1)
The main problem I see with this is whether you can actually find all roots --- as has already been mentioned in the comments, this is not always possible. If you are sure that your function is not completely pathological (sin(1/x) was already mentioned), the next question is what your tolerance is for missing a root or several of them. Put differently, it's about what lengths you are prepared to go to to make sure you did not miss any --- to the best of my knowledge, there is no general method to isolate all the roots for you, so you'll have to do it yourself. What you show is a reasonable first step already. A couple of comments:
Brent's method is indeed a good choice here.
First of all, deal with the divergencies. Since your function has Bessel functions in the denominators, you can first solve for their roots -- better yet, look them up in, e.g., Abramowitz and Stegun (Mathworld link). This will be better than the ad hoc grid you're using.
What you can do, once you've found two roots or divergencies, x_1 and x_2, is run the search again in the interval [x_1+epsilon, x_2-epsilon]. Continue until no more roots are found (Brent's method is guaranteed to converge to a root, provided there is one); see the sketch after this list.
If you cannot enumerate all the divergencies, you might want to be a little more careful in verifying a candidate is indeed a divergency: given x don't just check that f(x) is large, check that, e.g. |f(x-epsilon/2)| > |f(x-epsilon)| for several values of epsilon (1e-8, 1e-9, 1e-10, something like that).
If you want to make sure you don't have roots which simply touch zero, look for the extrema of the function, and for each extremum, x_e, check the value of f(x_e).
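A minimal sketch of the interval-splitting idea mentioned above (a hypothetical helper; points is assumed to be the sorted list of roots and divergencies already found, and f the scalar function):

import numpy as np
from scipy.optimize import brentq

def roots_between(f, points, eps=1e-6):
    found = []
    for x1, x2 in zip(points[:-1], points[1:]):
        lo, hi = x1 + eps, x2 - eps
        # only call Brent's method where the sign actually changes
        if lo < hi and np.sign(f(lo)) != np.sign(f(hi)):
            found.append(brentq(f, lo, hi))
    return found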
I've also encountered this problem when solving equations like f(z)=0 where f was a holomorphic function. I wanted to be sure not to miss any zero and finally developed an algorithm based on the argument principle.
It helps to find the exact number of zeros lying in a complex domain. Once you know the number of zeros, it is easier to find them. There are however two concerns which must be taken into account :
Take care about multiplicity: when solving (z-1)^2 = 0, you'll get two zeros, as z=1 is counted twice.
If the function is meromorphic (thus contains poles), each pole reduces the zero count and breaks the attempt to count them.
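A minimal numerical illustration of the argument principle (my own toy example, not the author's algorithm): the number of zeros minus the number of poles inside a contour is 1/(2*pi*i) times the contour integral of f'(z)/f(z).

import numpy as np

def f(z):
    return (z - 0.5) * (z + 0.3j)       # two zeros inside |z| = 1, no poles

def fprime(z):
    return 2*z - 0.5 + 0.3j             # derivative of f

theta = np.linspace(0.0, 2*np.pi, 2001)
z = np.exp(1j*theta)                    # unit-circle contour
dz = 1j*np.exp(1j*theta)                # dz/dtheta
count = np.trapz(fprime(z)/f(z)*dz, theta) / (2j*np.pi)
print(round(count.real))                # -> 2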