I need to find the root of a multidimensional function F(x). I'm using the scipy function scipy.optimize.root(..., method=''), which allows me to select different methods for the solution. However, for some problems it becomes slow and fails to converge; maybe it would be useful to try an alternative package. Do you know of any?
Generally, the more you know about the problem, the better. For example, you may know an approximate range in which the root occurs. Then you can first run a brute search (using np.linspace, for example) to find a good starting point for the method you want to use. Example:
Let's say you have a function like
import numpy as np

def f(x):
    return np.exp(-x)*(x - 1)**4
scipy will fail to find a root if you start at x0=5, because of the exponential factor: for large x the function decays towards zero without ever crossing it, so the solver can wander off in the wrong direction.
However, if you know that the solution is somewhere in (-10,10), you can do something like
from scipy.optimize import root

X = np.linspace(-10, 10, 10)        # coarse grid over the expected range
x0 = X[np.argmin(np.abs(f(X)))]     # grid point where |f| is smallest
y = root(f, x0)
print(y.x)
and you get a nice result (fast!), because np.argmin(np.abs(f(X))) gives you the index of the grid point where f is closest to 0.
You have to keep in mind that such "tricks" are also dangerous if you use them without triple-checking, and you should always have some intuition (or, even better, an analytical approximation) about what to expect.
I have a very complicated function of two variables, let's call them x and y. I want to create a Python program where the user can input two values, a and b, where a is the value of that complicated function of x and y, and b = math.atan(y/x). This program should then output the values of x and y.
I am clueless as to where to start. I have tried reducing the function to one of just one variable, then generating many random values for x and picking the closest one, but I have learnt that this is horribly inefficient and only gives a result accurate to about 2 significant figures. Is there a better way to do this? Many thanks!
(P.S. I did not reveal the function here due to copyright issues. For the sake of example, you can consider the function
a = 4*math.atan(math.sqrt(math.tan(x)*math.tan(y)/math.tan(x+y)))
where y = x * math.tan(b).)
Edit: After trying the sympy approach suggested below, it appears that the program ignores my second equation (the complicated one). I suspect it is too complicated for sympy to handle. Thus, I am asking for another approach which does not use sympy.
You could use sympy and import the trigonometric functions from sympy.
from sympy import symbols, sqrt, tan, atan
from sympy.solvers.solveset import nonlinsolve

y = symbols('y', real=True)
a, b = 4, 5  # user-given values
# substitute x = y/tan(b) (from y = x*tan(b)) into the first equation
eq2 = a - 4*atan(sqrt(tan(y/tan(b))*tan(y)/tan((y/tan(b)) + y)))
S = nonlinsolve([eq2], [y])
print(S)
It will return a ConditionSet object, i.e. a set of conditions that possible solutions have to satisfy.
If that wasn't clear enough, you can read the docs for nonlinsolve.
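Since the edit asks for a non-sympy approach, here is a minimal numerical sketch using scipy.optimize.brentq on the example function, with y eliminated via y = x*tan(b). The input values a, b and the bracket [1e-6, 0.7] are purely illustrative and depend on your real function:
import math
from scipy.optimize import brentq

a, b = 1.0, 0.5   # illustrative user inputs; your values will differ

def residual(x):
    # example function from the question, with y eliminated via y = x*tan(b)
    y = x * math.tan(b)
    return a - 4*math.atan(math.sqrt(math.tan(x)*math.tan(y)/math.tan(x + y)))

x = brentq(residual, 1e-6, 0.7)   # the bracket must contain a sign change
y = x * math.tan(b)
print(x, y)
brentq only needs the residual to change sign over the bracket, so a quick plot or coarse grid evaluation of residual(x) is a good way to pick that bracket for your actual function.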
I want to solve my other question here, so I need sympy to return an error whenever there is no analytical/symbolic solution for an integral.
For example, if I try:
from sympy import *
init_printing(use_unicode=False, wrap_line=False, no_global=True)
x = Symbol('x')
integrate(1/cos(x**2), x)
It just pretty-prints the integral itself, without solving it or raising an error about not being able to solve it!
P.S. I have also asked this question here on Reddit.
A "symbolic" solution always exists: I just invented a new function intcos(x), which by definition is the antiderivative of 1/cos(x**2). Now this integral has a symbolic solution!
For the question to be rigorously answerable, one has to restrict the class of functions allowed in the answer. Typically one considers elementary functions. As the SymPy integral reference explains, the Risch algorithm it employs can prove that some functions do not have elementary antiderivatives. Use the option risch=True and check whether the return value is an instance of sympy.integrals.risch.NonElementaryIntegral:
from sympy.integrals.risch import NonElementaryIntegral
isinstance(integrate(1/exp(x**2), x, risch=True), NonElementaryIntegral) # True
However, since the Risch algorithm implementation is incomplete, in many cases like 1/cos(x**2) it returns an ordinary Integral object. This means it was not able either to find an elementary antiderivative or to prove that one does not exist.
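For completeness, this is what the inconclusive case looks like (exact behavior may vary across SymPy versions):
from sympy import Symbol, cos, integrate
from sympy.integrals.risch import NonElementaryIntegral

x = Symbol('x')
res = integrate(1/cos(x**2), x, risch=True)
print(type(res), isinstance(res, NonElementaryIntegral))   # unevaluated Integral, False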
For this example, it helps to rewrite the trigonometric function in terms of exponential, with rewrite(cos, exp):
isinstance(integrate((1/cos(x**2)).rewrite(cos, exp), x, risch=True), NonElementaryIntegral)
returns True, so we know the integral is nonelementary.
Non-elementary antiderivatives
But often we don't really need an elementary function; something like Gamma, erf, or Bessel functions may be okay, as long as it's some "known" function (which, of course, is a fuzzy term). The question becomes: how do we tell whether SymPy was able to integrate a specific expression or not? Use the .has(Integral) check for that:
integrate(2/cos(x**2), x).has(Integral) # True
(Not an isinstance(..., Integral) check, because the return value can be, like here, 2*Integral(1/cos(x**2), x).) This does not prove anything other than SymPy's failure to find the antiderivative. The antiderivative may well be a known function, even an elementary one.
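For contrast, here is a case where SymPy succeeds and the antiderivative is a non-elementary but well-known function:
from sympy import Symbol, exp, integrate, Integral

x = Symbol('x')
res = integrate(exp(-x**2), x)
print(res)                  # sqrt(pi)*erf(x)/2: non-elementary, but a "known" function
print(res.has(Integral))    # False, so SymPy did find an antiderivative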
I did this little test program in python to see how solve and nsolve work.
from sympy import *
theta = Symbol('theta')
phi = Symbol('phi')
def F(theta, phi):
    return sin(theta)*cos(phi) + cos(phi)**2

def G(phi):
    return (1 + sqrt(3))*sin(phi) - 4*pi*sin(2*phi)*cos(2*phi)
solution1 = solve(F(pi/2,phi),phi)
solution2 = solve(G(phi),phi)
solution3 = nsolve(G(phi),0)
solution4 = nsolve(G(phi),1)
solution5 = nsolve(G(phi),2)
solution6 = nsolve(G(phi),3)
print(solution1, solution2, solution3, solution4, solution5, solution6)
And I get this output:
[pi/2, pi] [] 0.0 -0.713274788952698 2.27148961717279 3.14159265358979
The first call of solve gave me two solutions of the corresponding function. But not the second one. I wonder why? nsolve seems to work with an initial test value, but depending on that value, it gives different numerical solutions. Is there a way to get the list of all numerical solutions with nsolve or with another function, in just one line?
The first call of solve gave me two solutions of the corresponding function. But not the second one. I wonder why?
In general, you cannot solve an equation symbolically, and that is exactly what solve attempts. In other words: consider yourself lucky if solve can handle your equation; typical technical applications don't have analytic solutions, that is, they cannot be solved symbolically.
So the fall-back option is to solve the equation numerically, starting from an initial point. In the general case, there is no guarantee that nsolve will find a solution even if one exists.
Is there a way to get the list of all numerical solutions with nsolve or with another function, in just one line?
In general, no. Nevertheless, you can start nsolve from a number of initial guesses and keep track of the solutions found. You might want to distribute your initial guesses uniformly over the interval of interest. This is called a multi-start method.
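A minimal multi-start sketch for the G(phi) above; the grid, the interval [-4, 4], and the rounding tolerance are arbitrary choices:
from sympy import Symbol, sin, cos, sqrt, pi, nsolve

phi = Symbol('phi')
G = (1 + sqrt(3))*sin(phi) - 4*pi*sin(2*phi)*cos(2*phi)

roots = set()
for guess in [i/10 for i in range(-40, 41)]:   # uniform grid of starting points in [-4, 4]
    try:
        r = nsolve(G, phi, guess)
        roots.add(round(float(r), 6))          # round to merge duplicates found from different guesses
    except Exception:
        pass                                   # nsolve did not converge from this guess
print(sorted(roots))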
The following example is stated just for the purpose of defining the query precisely. Consider a recursive equation x[k+1] = a*x[k], where a is some constant. Now, is there an easier way, or an existing method within sympy/numpy, that does the following (i.e., gives an expression over a horizon for a given recursive equation):
from sympy import Symbol

def get_expr(init, num):
    a = Symbol('a')
    expr = init
    for i in range(num):
        expr = a*expr
    return expr

x0 = Symbol('x0')
get_expr(x0, 3)   # -> a**3*x0
The horizon above is 3.
I was going to suggest using SymPy's rsolve to try to find a closed-form solution to your equation, but it seems that, at least for this specific one, there is a bug that prevents it from working. See http://code.google.com/p/sympy/issues/detail?id=2943. Maybe for a more complicated expression you could still try it. For this one, the closed-form solution is just a**n*x0.
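For reference, the rsolve call would look like the sketch below; whether it actually succeeds will depend on your SymPy version, given the bug linked above:
from sympy import Function, Symbol, rsolve, symbols

k = Symbol('k', integer=True)
a, x0 = symbols('a x0')
x = Function('x')

# recurrence x[k+1] = a*x[k], written as an expression equal to zero
sol = rsolve(x(k + 1) - a*x(k), x(k), {x(0): x0})
print(sol)   # expected closed form: a**k*x0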
Aside from that, SymPy doesn't have any functions that would do this evaluation directly, but it does have some things that can help. There are memoization decorators in sympy.utilities.memoization that are meant for internal use, but they should work just fine for external uses. They can make your evaluation more efficient by caching the results of previous evaluations. You'll need to write get_expr recursively for it to work effectively. Or you could just write your own cacher; it's not that complicated.
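A small sketch of the recursive, cached variant; here the initial symbol is fixed as x0 rather than passed in, and functools.lru_cache from the standard library is used instead of SymPy's internal decorators:
from functools import lru_cache
from sympy import Symbol

a = Symbol('a')
x0 = Symbol('x0')

@lru_cache(maxsize=None)
def get_expr(num):
    # x[k+1] = a*x[k]; num is the horizon, x0 the initial value
    if num == 0:
        return x0
    return a*get_expr(num - 1)

print(get_expr(3))   # a**3*x0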
I face a problem with scipy's 'leastsq' optimisation routine; if I execute the following program, it says
raise errors[info][1], errors[info][0]
TypeError: Improper input parameters.
and sometimes index out of range for an array...
import numpy
from scipy import optimize

def func(apar):
    apar = numpy.asarray(apar)
    x = apar[0]
    y = apar[1]
    eqn = abs(x - y)
    return eqn

Init = numpy.asarray([20.0, 10.0])
x = optimize.leastsq(func, Init, full_output=0, col_deriv=0, factor=100, diag=None, warning=True)
print('optimized parameters: ', x)
print('******* The End ******')
I don't know what the problem is with my optimize.leastsq() call; please help me.
leastsq works with vectors, so the residual function func needs to return a vector of length at least two. If you replace return eqn with return [eqn, 0.], your example will work. Running it gives:
optimized parameters: (array([10., 10.]), 2)
which is one of the many correct answers for the minimum of the absolute difference.
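For clarity, the modified residual would look like this; padding with a constant zero is just one way to give leastsq a second component:
def func(apar):
    x, y = apar
    # leastsq expects a vector of residuals, so pad with a constant zero component
    return [abs(x - y), 0.0]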
If you want to minimize a scalar function, fmin is the way to go, optimize.fmin(func, Init).
The issue here is that these two functions, although they look the same for scalars, are aimed at different goals. leastsq finds the least squared error, generally from a set of idealized curves, and is just one way of doing a "best fit". On the other hand, fmin finds the minimum value of a scalar function.
Obviously yours is a toy example, for which neither of these really makes sense, so which way you go will depend on what your final goal is.
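For contrast, here is a toy sketch of what leastsq is normally used for, fitting a parametrized model to data; the data values below are made up purely for illustration:
import numpy as np
from scipy import optimize

xdata = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ydata = np.array([1.1, 2.9, 5.2, 7.1, 8.8])        # roughly y = 2*x + 1

def residuals(params, x, y):
    slope, intercept = params
    return y - (slope*x + intercept)               # one residual per data point

params, flag = optimize.leastsq(residuals, [1.0, 0.0], args=(xdata, ydata))
print(params)                                      # roughly [2., 1.]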
Since you want to minimize a simple scalar function (func() returns a single value, not a list of values), scipy.optimize.leastsq() should be replaced by a call to one of the fmin functions (with the appropriate arguments):
x = optimize.fmin(func, Init)
works correctly!
In fact, leastsq() minimizes the sum of squares of a list of values. It does not appear to work on a (list containing a) single value, as in your example (even though it could, in theory).
Just looking at the least squares docs, it might be that your function func is defined incorrectly. You're assuming that you always receive an array of at least length 2, but the optimize function is insanely vague about the length of the array you will receive. You might try writing to screen whatever apar is, to see what you're actually getting.
If you're using something like ipython or the python shell, you ought to be getting stack traces that show you exactly which line the error is occurring on, so start there. If you can't figure it out from there, posting the stack trace would probably help us.