I'm having trouble optimizing a very simple function I'm using as a test case before moving on to something more complex. I've tried different optimization methods, giving the method a bound and even giving the exact solution as the initial guess.
Function I'm trying to optimize: f(x) = 1 / x - x
Here is my code:
import scipy.optimize
def testfun(x): return (1 / x - x)
sol = scipy.optimize.minimize(testfun, 1).x
It returns a very large number (around 3.2e+08) as the solution.
Am I using the optimization function incorrectly?
As Victor mentioned, the optimization function is working correctly.
I was looking to solve f(x) = 0, which requires a root-finding method rather than an optimization routine,
for example:
scipy.optimize.root(testfun, 1) or scipy.optimize.newton(testfun, 1)
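For completeness, a minimal self-contained sketch of the root-finding route (testfun is repeated here, and the 0.5 starting point is only for illustration; both calls should recover the root at x = 1, f also having a root at x = -1):

import scipy.optimize

def testfun(x):
    return 1 / x - x

# general-purpose root finder starting from an initial guess
print(scipy.optimize.root(testfun, 0.5).x)

# Newton/secant iteration from the same starting point
print(scipy.optimize.newton(testfun, 0.5))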
I am looking into using Nlopt for solving optimisation problems in Python.
I have a series of simultaneous equations of the form
Ax = b
where A is an N x M matrix and x is the solution. Another way to think about this is that I have N simultaneous equations of the form x_1*c_1m + x_2*c_2m + ... + x_N*c_Nm = k_m, where the x_i are the variables to solve for, c_im is the constant associated with x_i in equation m, and k_m is a constant in equation m. All of the c_im and k_m are known.
What confuses me is how to even approach this in NLopt. NLopt requires actual callable functions, which I don't have. I suppose I could generalise each of the equations in the matrix equation above to something like:
def fn(x, c_m, k_m):
    # residual of one equation: sum_i x_i * c_im minus the constant k_m
    val = 0
    for x_i, c_im in zip(x, c_m):
        val += x_i * c_im
    return val - k_m
where c_m and k_m are already known, and the variables to solve for are in x. All the examples I've seen only look at a single-variable problem, which has thrown me a little. Would I then have to define M copies of this function and set each copy of fn as an equality constraint in the NLopt optimisation object? I'm looking to solve for x, which itself has multiple solutions, and I want to find the minimum values of x (or at least an approximate solution if an exact one cannot be found). Would I then have to set multiple objective functions, i.e. obj_fn_i = min(x_i) or something like that? It's all a little confusing in terms of what needs to be presented to the solver. I've already got an analytical solution to the problem, so I can check my results reliably. Any help appreciated.
Cheers!
I have been using NLopt for a couple of problems, and what I have come to understand is that the solver requires an objective function that returns a single float value to be minimized, so you must collapse your system into one scalar, for example a sum of squared errors (MSE). The solver optimizes an array of variables x, on which both the objective function and the constraints depend. Every equation in your system can go either into the objective function directly or in as a constraint.
Hope this was helpful somehow!
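To make that concrete, here is a minimal sketch that minimizes the sum of squared residuals ||Ax - b||^2 as a single float-valued objective. The 2x2 matrix A, the vector b and the choice of the LD_LBFGS algorithm are only illustrative assumptions, not your actual system:

import numpy as np
import nlopt

A = np.array([[2.0, 1.0], [1.0, 3.0]])  # placeholder c_im coefficients
b = np.array([3.0, 5.0])                # placeholder k_m constants

def sse(x, grad):
    # single float objective: sum of squared residuals of A x = b
    r = A @ x - b
    if grad.size > 0:
        grad[:] = 2.0 * A.T @ r         # analytic gradient for the gradient-based solver
    return float(r @ r)

opt = nlopt.opt(nlopt.LD_LBFGS, A.shape[1])
opt.set_min_objective(sse)
opt.set_xtol_rel(1e-10)
x = opt.optimize(np.zeros(A.shape[1]))
print(x, opt.last_optimum_value())      # x should approach the exact solution [0.8, 1.4]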
I have a very complicated function of two variables, let's call them x and y. I want to create a Python program where the user can input two values, a and b, where a is the value of that complicated function of x and y, and b = math.atan(y/x). This program should then output the values of x and y.
I am clueless as to where to start. I have tried reducing the function to one of a single variable, generating many random values for x and picking the closest one, but that is horribly inefficient and only gives a result accurate to about 2 significant figures. Is there a better way to do this? Many thanks!
(P.S. I did not reveal the function here due to copyright issues. For the sake of example, you can consider the function
a = 4*math.atan(math.sqrt(math.tan(x)*math.tan(y)/math.tan(x+y)))
where y = x * math.tan(b).)
Edit: After using the approach of the sympy library, it appears as though the program ignores my second equation (the complicated one). I suspect it is too complicated for sympy to handle. Thus, I am asking for another approach which does not utilise sympy.
You could use sympy and import the trigonometric functions from sympy.
from sympy import symbols, nonlinsolve, sqrt, tan, atan

y = symbols('y', real=True)
a, b = 4, 5  # user-given values

# substitute x = y/tan(b) into a = 4*atan(sqrt(tan(x)*tan(y)/tan(x+y)))
eq2 = a - 4*atan(sqrt(tan(y/tan(b))*tan(y)/tan((y/tan(b)) + y)))
S = nonlinsolve([eq2], [y])
print(S)
It will return a ConditionSet object, i.e. a set of conditions that the admissible solutions must satisfy.
If that wasn't clear enough, you can read the docs for nonlinsolve.
I'm currently trying to solve numerically a minimization problem and I tried to use the optimization library available in SciPy.
My function and derivative are a bit too complicated to be presented here, but they are based on the following functions, whose minimization does not work either:
import numpy as np

def func(x):
    return np.log(1 + np.abs(x))

def grad(x):
    return np.sign(x) / (1.0 + np.abs(x))
When calling the fmin_bfgs function (and initializing the descent method to x=10), I get the following message:
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 2.397895
Iterations: 0
Function evaluations: 24
Gradient evaluations: 22
and the output is equal to 10 (i.e. the initial point). I suppose that this error may be caused by two problems:
The objective function is not convex: however I checked with other non-convex functions and the method gave me the right result.
The objective function is "very flat" when far from the minimum because of the log.
Are my suppositions correct, or does the problem come from something else?
Whatever the cause may be, what can I do to correct this? In particular, is there any other available minimization method that I could use?
Thanks in advance.
abs(x) is always somewhat dangerous, as it is non-differentiable at 0 and most solvers expect problems to be smooth. Note that since log is monotonic we can drop the log from your objective function, and then drop the 1, so we are left with minimizing abs(x). Often this can be done better by the following reformulation.
Instead of min abs(x) use
min t
-t <= x <= t
Of course this requires a solver that can solve (linearly) constrained NLPs.
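A minimal sketch of that reformulation with scipy's SLSQP (the starting point and variable names are illustrative; the decision vector stacks x and the auxiliary bound t):

from scipy.optimize import minimize

# decision vector z = [x, t]; minimizing t subject to -t <= x <= t minimizes |x|
objective = lambda z: z[1]

constraints = [
    {"type": "ineq", "fun": lambda z: z[1] - z[0]},  # t - x >= 0, i.e. x <= t
    {"type": "ineq", "fun": lambda z: z[1] + z[0]},  # t + x >= 0, i.e. x >= -t
]

res = minimize(objective, x0=[10.0, 10.0], method="SLSQP", constraints=constraints)
print(res.x)  # x should be driven to 0, the minimizer of log(1 + |x|)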
Abstract problem to be solved:
we have n d-dimensional design variables, say {k_0, k_1, ..., k_n}
maximize the minimum of [f(k_0), f(k_1), ... f(k_n)], where f() a nonlinear function, i.e. maximin
constraint: mean([k_0, k_1, ...,k_n])==m, m known constant
Can someone provide an example of how this can be solved (maximin, d-dim variables) via pyOpt?
EDIT: I tried this:
import scipy as sp
from pyOpt.pyOpt_optimization import Optimization
from pyOpt.pyALPSO.pyALPSO import ALPSO

def __objfunc(x, **kwargs):
    # objective: minimum of the two f(k_i) values, here with f(k) = k[0] + k[1]
    f = min([x[0] + x[1], x[2] + x[3]])
    # constraint value: mean of the two f(k_i) values minus 5
    g = [0.0]
    g[0] = (((x[0] + x[1]) + (x[2] + x[3])) / 2.0) - 5
    fail = 0
    return f, g, fail

if __name__ == '__main__':
    op = Optimization('test', __objfunc)
    op.addVarGroup('p0', 4, type='c')
    op.addObj('f')
    op.addCon('ineq', 'i')
    o = ALPSO()
    o(op)
    print(op._solutions[0])
(assume 2-dimensional design variables)
Is there any better way?
I would probably reformulate this as: maximize t, subject to t <= f(k_i) for all i and mean([k_0, k_1, ..., k_n]) == m.
The min() function you used is non-differentiable (and thus dangerous), whereas the reformulation above stays smooth wherever f is. Also, the mean() function can be replaced by a linear constraint (which is easier).
I am not familiar with the ALPSO solver, but this reformulation would usually be helpful for more traditional solvers like SNOPT, NLPQL and FSQP.
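To illustrate the reformulation itself, here is a sketch using scipy's SLSQP rather than pyOpt/ALPSO, with the toy f(k) = k[0] + k[1] from the example above; the starting point and names are only illustrative:

import numpy as np
from scipy.optimize import minimize

m = 5.0  # required mean

def f(k):
    return k[0] + k[1]  # toy stand-in for the nonlinear f()

# decision vector z = [k_0 (2 entries), k_1 (2 entries), t]; maximize t by minimizing -t
objective = lambda z: -z[-1]

cons = [
    {"type": "ineq", "fun": lambda z: f(z[0:2]) - z[-1]},                  # t <= f(k_0)
    {"type": "ineq", "fun": lambda z: f(z[2:4]) - z[-1]},                  # t <= f(k_1)
    {"type": "eq",   "fun": lambda z: (f(z[0:2]) + f(z[2:4])) / 2.0 - m},  # mean constraint, as in the code above
]

res = minimize(objective, x0=np.zeros(5), method="SLSQP", constraints=cons)
print(res.x)  # at the optimum t should reach 5, with both f(k_i) equal to 5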
I am converting some Matlab code into Python using NumPy. Everything worked pretty smoothly, but recently I encountered the fminsearch function.
So, to cut it short: is there an easy way to make in python something like this:
banana = @(x)100*(x(2)-x(1)^2)^2+(1-x(1))^2;
[x,fval] = fminsearch(banana,[-1.2, 1])
which will return
x = 1.0000 1.0000
fval = 8.1777e-010
Up till now I have not found anything that looks similar in NumPy. The only similar thing I found is scipy.optimize.fmin. According to its documentation, it will
Minimize a function using the downhill simplex algorithm.
But right now I cannot figure out how to write the above-mentioned Matlab code using this function.
It's just a straightforward conversion from Matlab syntax to Python syntax:
import scipy.optimize
banana = lambda x: 100*(x[1]-x[0]**2)**2+(1-x[0])**2
xopt = scipy.optimize.fmin(func=banana, x0=[-1.2,1])
with output:
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 85
Function evaluations: 159
array([ 1.00002202, 1.00004222])
fminsearch implements the Nelder-Mead method; see the References section of the Matlab documentation: http://www.mathworks.com/help/matlab/ref/fminsearch.html
To find its equivalent in scipy, you just need to check the doc strings of the methods provided in scipy.optimize; see http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin.html#scipy.optimize.fmin. fmin also implements the Nelder-Mead method.
The names do not always translate directly from Matlab to scipy and are sometimes even misleading. For example, Brent's method for scalar minimization sits behind Matlab's fminbnd and scipy's optimize.fminbound, while the similarly named scipy.optimize.brentq is a root finder, not a minimizer. So checking the doc strings is always a good idea.
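For what it's worth, the same conversion can also be written against the newer scipy.optimize.minimize interface, which returns the analogues of Matlab's x and fval in a single result object:

import scipy.optimize

banana = lambda x: 100*(x[1] - x[0]**2)**2 + (1 - x[0])**2
res = scipy.optimize.minimize(banana, x0=[-1.2, 1], method='Nelder-Mead')
print(res.x)    # corresponds to Matlab's x
print(res.fun)  # corresponds to Matlab's fval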