I wrote this little test program in Python to see how solve and nsolve work.
from sympy import *
theta = Symbol('theta')
phi = Symbol('phi')
def F(theta, phi):
    return sin(theta)*cos(phi) + cos(phi)**2
def G(phi):
    return (1 + sqrt(3))*sin(phi) - 4*pi*sin(2*phi)*cos(2*phi)
solution1 = solve(F(pi/2,phi),phi)
solution2 = solve(G(phi),phi)
solution3 = nsolve(G(phi),0)
solution4 = nsolve(G(phi),1)
solution5 = nsolve(G(phi),2)
solution6 = nsolve(G(phi),3)
print(solution1, solution2, solution3, solution4, solution5, solution6)
And I get this output:
[pi/2, pi] [] 0.0 -0.713274788952698 2.27148961717279 3.14159265358979
The first call of solve gave me the two solutions of the corresponding function, but the second call did not. I wonder why? nsolve seems to work from an initial test value, but depending on that value it gives different numerical solutions. Is there a way to get the list of all numerical solutions, with nsolve or with another function, in just one line?
The first call of solve gave me the two solutions of the corresponding function, but the second call did not. I wonder why?
In general, you cannot solve an equation symbolically, and that is exactly what solve attempts to do. In other words: consider yourself lucky if solve can handle your equation; typical technical applications don't have analytic solutions, that is, they cannot be solved symbolically.
So the fall-back option is to solve the equation numerically, starting from an initial point. In the general case, there is no guarantee that nsolve will find a solution even if one exists.
Is there a way to get the list of all numerical solutions, with nsolve or with another function, in just one line?
In general, no. Nevertheless, you can start nsolve from a number of initial guesses and keep track of the solutions found. You might want to distribute your initial guesses uniformly over the interval of interest. This is called the multi-start method.
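A minimal sketch of such a multi-start search for the G above (the interval [0, pi], the number of starting points, and the deduplication tolerance are arbitrary choices, not part of the original question):
from sympy import Symbol, sin, cos, sqrt, pi, nsolve
import numpy as np

phi = Symbol('phi')
G = (1 + sqrt(3))*sin(phi) - 4*pi*sin(2*phi)*cos(2*phi)

roots = []
for guess in np.linspace(0, float(pi), 25):       # uniformly spaced starting points
    try:
        r = float(nsolve(G, phi, float(guess)))   # may converge to a root outside [0, pi]
    except (ValueError, ZeroDivisionError):
        continue                                  # nsolve did not converge from this guess
    if 0 <= r <= float(pi) and all(abs(r - s) > 1e-6 for s in roots):
        roots.append(r)                           # keep only new roots inside the interval

print(sorted(roots))
Each starting point either converges to some root or raises an exception when the underlying mpmath findroot fails, so the loop simply skips failed guesses.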
I am looking into using NLopt for solving optimisation problems in Python.
I have a series of simultaneous equations of the form
Ax = b
where A is an N x M matrix and x is the solution vector. Another way to think about this is that I have N simultaneous equations of the form x_1*c_1m + x_2*c_2m + ... + x_N*c_Nm = k_m, where the x_i are the variables to solve for, c_im is the constant multiplying x_i in equation m, and k_m is a constant in equation m. The c_im and k_m are all known.
What confuses me is how to even approach this in NLopt. NLopt requires you to have actual callable functions, which I don't have. I suppose I could generalise each of the equations in that matrix equation above to something like:
def fn(x, c_m, k_m):
    val = 0
    for x_i, c_im in zip(x, c_m):
        val += x_i * c_im
    return val - k_m
where c_m and k_m would already be known, with the variables to solve for in x. All the examples I've seen have only been looking at single-variable problems, which has thrown me a little. Would I then have to somehow define M copies of this function, and set each copy of fn as an equality constraint in the NLopt optimisation object? It's all rather confusing. I'm looking to solve for x, which itself has multiple solutions, and I want to find the minimum values of x (or at least an approximate solution if an exact one cannot be found). Would I then have to set multiple objective functions, i.e. obj_fn_i = min(x_i) or something like that? It's all a little confusing in terms of what needs to be presented to the solver. I already have an analytical solution to the above problem, so I can check my results reliably. Any help appreciated.
Cheers!
I have been using NLopt for a couple of problems, and what I have come to understand is that the solver requires an objective function that returns a single float value to be minimized, so you have to set the function up as, say, a sum of squared errors, or in any case as one scalar value. It can solve for an array of variables x, on which both the objective function and the constraints must depend. All the equations involved in the system can be inserted either directly into the objective function or as constraints.
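As a rough illustration, a sum-of-squared-residuals objective for Ax = b might look like the sketch below; the small A and b are made-up placeholder data, and the choice of LD_LBFGS and the tolerance are arbitrary:
import numpy as np
import nlopt

# Placeholder data: in practice A (N x M) and b (length N) are already known.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([5.0, 11.0, 17.0])

def objective(x, grad):
    r = A @ x - b                   # residual vector of the N equations
    if grad.size > 0:
        grad[:] = 2.0 * A.T @ r     # gradient of the sum of squared residuals
    return float(r @ r)             # single scalar value for NLopt to minimize

opt = nlopt.opt(nlopt.LD_LBFGS, A.shape[1])   # one variable per column of A
opt.set_min_objective(objective)
opt.set_xtol_rel(1e-10)
x = opt.optimize(np.zeros(A.shape[1]))        # initial guess of zeros
print(x)                                      # least-squares solution of Ax = b
Equality constraints could instead be added one by one with opt.add_equality_constraint, but folding everything into a single scalar objective is usually the simpler starting point.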
Hope this was helpful somehow!
I need to find the root of a multidimensional function F(x). I'm using the scipy function scipy.optimize.root(..., method=''), which lets me select different solution methods. However, for some problems it becomes slow and fails to converge; maybe it would be useful to try an alternative package. Do you know of any?
Generally, the more you know about the problem, the better. For example, you may know an approximate range in which the root occurs. Then you may first run a brute search (using np.linspace, for example) to find a good starting point for the method you want to use. Example:
Let's say you have a function like
import numpy as np

def f(x):
    return np.exp(-x)*(x - 1)**4
scipy will fail to find a root if you start at x0=5, because of the exponential.
However, if you know that the solution is somewhere in (-10,10), you can do something like
from scipy.optimize import root

X = np.linspace(-10, 10, 10)
x0 = X[np.argmin(np.abs(f(X)))]   # grid point where |f| is smallest
y = root(f, x0)
print(y.x)
and you get a nice result (fast!), because np.argmin(np.abs(f(X))) gives you the index of X at which f is closest to 0.
You have to keep in mind that such "tricks" are also dangerous if you use them without triple-checking, and you should always have some intuition (or, even better, an analytical approximation) of what to expect.
I have a very complicated function of two variables, let's call them x and y. I want to create a Python program where the user can input two values, a and b, where a is the value of that complicated function of x and y, and b = math.atan(y/x). This program should then output the values of x and y.
I am clueless as to where to start. I have tried to make the function into that of just one variable, then generate many random values for x and pick the closest one, but I have learnt that this is horribly inefficient and produces a result which is only accurate to about 2 significant figures, which is pretty horrible. Is there a better way to do this? Many thanks!
(P.S. I did not reveal the function here due to copyright issues. For the sake of example, you can consider the function
a = 4*math.atan(math.sqrt(math.tan(x)*math.tan(y)/math.tan(x+y)))
where y = x * math.tan(b).)
Edit: After using the approach of the sympy library, it appears as though the program ignores my second equation (the complicated one). I suspect it is too complicated for sympy to handle. Thus, I am asking for another approach which does not utilise sympy.
You could use sympy and import the trigonometric functions from sympy instead of math.
from sympy.core.symbol import symbols
from sympy.solvers.solveset import nonlinsolve
from sympy import sqrt, tan, atan

y = symbols('y', real=True)
a, b = 4, 5  # user-given values
# substitute x = y/tan(b) (from b = atan(y/x)) into the example equation for a
eq2 = a - 4*atan(sqrt(tan(y/tan(b))*tan(y)/tan((y/tan(b)) + y)))
S = nonlinsolve([eq2], [y])
print(S)
It will return a set of conditions (a ConditionSet object) describing the possible results.
If that wasn't clear enough, you can read the docs for nonlinsolve.
The following example is given just to define the query precisely. Consider a recursive equation x[k+1] = a*x[k], where a is some constant. Now, is there an easier way, or an existing method within sympy/numpy, that does the following (i.e., gives an expression over a horizon for a given recursive equation):
from sympy import Symbol

def get_expr(init, num):
    a = Symbol('a')
    expr = init
    for i in range(num):
        expr = a*expr
    return expr

x0 = Symbol('x0')
get_expr(x0, 3)
The horizon above is 3.
I was going to suggest using SymPy's rsolve to try to find a closed-form solution to your equation, but it seems that, at least for this specific one, there is a bug that prevents it from working. See http://code.google.com/p/sympy/issues/detail?id=2943. If you need this for a more complicated expression, you could still try it. For this one, the closed-form solution is just a**n*x0.
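For reference, and assuming a SymPy version in which that issue no longer applies, the rsolve call for this recursion would look roughly like this:
from sympy import Function, Symbol, rsolve

n = Symbol('n', integer=True)
a, x0 = Symbol('a'), Symbol('x0')
x = Function('x')

# closed form of the recursion x[k+1] = a*x[k] with x[0] = x0
sol = rsolve(x(n + 1) - a*x(n), x(n), {x(0): x0})
print(sol)   # expected: a**n*x0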
Aside from that, SymPy doesn't have any functions that would do this evaluation directly, but it does have some things that can help. There are memoization decorators in sympy.utilities.memoization that are intended for internal use but should work just fine externally. They can make your evaluation more efficient by caching the results of previous evaluations. You'll need to write get_expr recursively for this to work effectively. Or you could just write your own cacher; it's not that complicated.
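As an illustration of the do-it-yourself cacher (using the standard library's functools.lru_cache rather than SymPy's internal decorators), a recursive, memoized version of the evaluation could look like this:
from functools import lru_cache
from sympy import Symbol

a = Symbol('a')
x0 = Symbol('x0')

@lru_cache(maxsize=None)
def x(k):
    # x[k+1] = a*x[k] with x[0] = x0; each index is computed at most once
    if k == 0:
        return x0
    return a*x(k - 1)

print(x(3))   # a**3*x0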
I am facing a problem with the scipy 'leastsq' optimisation routine. If I execute the following program, it says:
raise errors[info][1], errors[info][0]
TypeError: Improper input parameters.
and sometimes index out of range for an array...
from scipy import *
import numpy
from scipy import optimize
from numpy import asarray
from math import *
def func(apar):
    apar = numpy.asarray(apar)
    x = apar[0]
    y = apar[1]
    eqn = abs(x - y)
    return eqn
Init = numpy.asarray([20.0, 10.0])
x = optimize.leastsq(func, Init, full_output=0, col_deriv=0, factor=100, diag=None, warning=True)
print('optimized parameters:', x)
print('******* The End ******')
I don't know what the problem is with my func or the optimize.leastsq() call; please help me.
leastsq works with vectors so the residual function, func, needs to return a vector of length at least two. So if you replace return eqn with return [eqn, 0.], your example will work. Running it gives:
optimized parameters: (array([10., 10.]), 2)
which is one of the many correct answers for the minimum of the absolute difference.
If you want to minimize a scalar function, fmin is the way to go, optimize.fmin(func, Init).
The issue here is that these two functions, although they look the same for scalars, are aimed at different goals. leastsq finds the least squared error, generally from a set of idealized curves, and is just one way of doing a "best fit". On the other hand, fmin finds the minimum value of a scalar function.
Obviously yours is a toy example, for which neither of these really makes sense, so which way you go will depend on what your final goal is.
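Putting the two variants side by side, a minimal sketch using the same toy residual as the question might look like this:
import numpy as np
from scipy import optimize

def func_vector(apar):
    x, y = apar
    return [abs(x - y), 0.0]   # vector of residuals: leastsq squares and sums these

def func_scalar(apar):
    x, y = apar
    return abs(x - y)          # single scalar value, as fmin expects

init = np.asarray([20.0, 10.0])

params, ier = optimize.leastsq(func_vector, init)
print('leastsq:', params)      # e.g. [10. 10.]

best = optimize.fmin(func_scalar, init)
print('fmin:', best)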
Since you want to minimize a simple scalar function (func() returns a single value, not a list of values), scipy.optimize.leastsq() should be replaced by a call to one of the fmin functions (with the appropriate arguments):
x = optimize.fmin(func, Init)
works correctly!
In fact, leastsq() minimizes the sum of squares of a list of values. It does not appear to work on a (list containing a) single value, as in your example (even though it could, in theory).
Just looking at the least squares docs, it might be that your function func is defined incorrectly. You're assuming that you always receive an array of at least length 2, but the optimize function is insanely vague about the length of the array you will receive. You might try writing to screen whatever apar is, to see what you're actually getting.
If you're using something like ipython or the python shell, you ought to be getting stack traces that show you exactly which line the error is occurring on, so start there. If you can't figure it out from there, posting the stack trace would probably help us.