pyOpt multi-objective minimax example - python

Abstract problem to be solved:
we have n d-dimensional design variables, say {k_0, k_1, ..., k_n}
maximize the minimum of [f(k_0), f(k_1), ..., f(k_n)], where f() is a nonlinear function, i.e. maximin
constraint: mean([k_0, k_1, ...,k_n])==m, m known constant
Can someone provide an example of how this can be solved (maximin, d-dim variables) via pyOpt?
EDIT: i tried this:
import scipy as sp
from pyOpt.pyOpt_optimization import Optimization
from pyOpt.pyALPSO.pyALPSO import ALPSO

def __objfunc(x, **kwargs):
    f = min([x[0] + x[1], x[2] + x[3]])
    g = [0.0]
    g[0] = (((x[0] + x[1]) + (x[2] + x[3])) / 2.0) - 5
    fail = 0
    return f, g, fail

if __name__ == '__main__':
    op = Optimization('test', __objfunc)
    op.addVarGroup('p0', 4, type='c')
    op.addObj('f')
    op.addCon('ineq', 'i')
    o = ALPSO()
    o(op)
    print(op._solutions[0])
suppose 2-dimensional design variables
is there any better way?

I probably would reformulate this as:

maximize t
subject to: f(k_i) >= t for all i
            mean([k_0, k_1, ..., k_n]) == m

The min() function you used is non-differentiable (and thus dangerous). Also, the mean() function can be replaced by a linear constraint (which is easier).
I am not familiar with the ALPSO solver, but this reformulation is usually helpful for more traditional solvers like SNOPT, NLPQL and FSQP.
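Here is a minimal sketch of that reformulation applied to the toy problem from the question (two 2-dimensional design variables, f(k) = k[0] + k[1], target mean m = 5), using scipy.optimize's SLSQP instead of pyOpt, since the reformulated problem is smooth and constrained:

import numpy as np
from scipy.optimize import minimize

m = 5.0  # target mean, taken from the code above

def neg_t(z):
    # z = [x0, x1, x2, x3, t]; maximizing t is the same as minimizing -t
    return -z[4]

constraints = [
    # t <= f(k_0) and t <= f(k_1), written as g(z) >= 0
    {'type': 'ineq', 'fun': lambda z: z[0] + z[1] - z[4]},
    {'type': 'ineq', 'fun': lambda z: z[2] + z[3] - z[4]},
    # linear equality replacing the mean() constraint
    {'type': 'eq',   'fun': lambda z: (z[0] + z[1] + z[2] + z[3]) / 2.0 - m},
]

z0 = np.array([1.0, 1.0, 1.0, 1.0, 0.0])
res = minimize(neg_t, z0, method='SLSQP', constraints=constraints)
print(res.x, -res.fun)  # for this toy instance the optimal minimum value is t = 5

The same auxiliary-variable idea carries over to pyOpt by adding t as an extra design variable and the two inequalities as constraints.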

Related

Formulating an optimisation problem with Nlopt in Python

I am looking into using Nlopt for solving optimisation problems in Python.
I have a series of simultaneous equations of the form
Ax = b
where A is an NxM matrix, with x the solution. Another way to think about this is that I have N simultaneous equations of the form x_1*c_1m + x_2*c_2m + ... + x_N*c_Nm = k_m, where the x_i are the variables to solve for, c_im is the constant associated with x_i in equation m, and k_m is some constant in equation m. The c_im and k_m are all known.
What confuses me is how to even approach this in Nlopt. Nlopt requires you to have actual callable functions, which I don't have? I suppose I could generalise each of the equations in that matrix equation above to something like:
def fn(x, c_m, k_m):
    val = 0
    for x_i, c_im in zip(x, c_m):
        val += x_i * c_im
    return val - k_m
where c_m and k_m would already be known, with the variables to solve for in x. All the examples I've seen have only been looking at a single-variable problem, which has thrown me a little. Would I then have to somehow define M copies of this function, and set each copy of fn as an equality constraint in the Nlopt optimisation object? It's all rather confusing.
I'm looking to solve for x, which itself has multiple solutions, and I want to find the minimum values of x (or at least an approximate solution if an exact solution cannot be found). Would I have to set multiple objective functions, i.e. obj_fn_i = min(x_i), or something like that? It's all a little confusing in terms of what needs to be presented to the solver. I've already got an analytical solution to the above problem, so I can check my results reliably. Any help appreciated.
Cheers!
I have been using NLopt for a couple of problems, and what I have come to understand is that the solver requires an objective function that returns a single float value, so you must express the system as something like an MSE sum, i.e. one scalar value to be minimized. It can solve for an array of variables x, on which both the objective function and the constraints must depend. Every equation in the system can go either directly into the objective function or in as a constraint.
Hope this was helpful somehow!
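As a concrete illustration of that advice, here is a minimal sketch that folds all N equations into one scalar least-squares objective and hands it to NLopt's Python interface; the small system A and b below is made up and stands in for the question's known c_im and k_m:

import numpy as np
import nlopt

# Hypothetical system: A is N x M, b has length N; the exact solution is x = [1, 2].
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([5.0, 11.0, 17.0])
M = A.shape[1]

def objective(x, grad):
    # One scalar value: the sum of squared residuals of all N equations.
    residual = A @ x - b
    if grad.size > 0:
        grad[:] = 2.0 * A.T @ residual  # analytic gradient for gradient-based algorithms
    return float(residual @ residual)

opt = nlopt.opt(nlopt.LD_LBFGS, M)  # local, gradient-based algorithm
opt.set_min_objective(objective)
opt.set_xtol_rel(1e-10)
x = opt.optimize(np.zeros(M))
print(x, opt.last_optimum_value())

If an exact solution exists, the optimum value goes to zero; otherwise this returns the least-squares compromise.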

How to check if a SymPy expression has analytical integral

I want to solve my other question here, so I need SymPy to return an error whenever there is no analytical/symbolic solution for an integral.
For example if I try :
from sympy import *
init_printing(use_unicode=False, wrap_line=False, no_global=True)
x = Symbol('x')
integrate(1/cos(x**2), x)
It just [pretty] prints the integral itself
without solving and/or giving an error about not being able to solve it!
P.S. I have also asked this question here on Reddit.
A "symbolic" solution always exists: I just invented a new function intcos(x), which by definition is the antiderivative of 1/cos(x**2). Now this integral has a symbolic solution!
For the question to be rigorously answerable, one has to restrict the class of functions allowed in the answer. Typically one considers elementary functions. As the SymPy integral reference explains, the Risch algorithm it employs can prove that some functions do not have elementary antiderivatives. Use the option risch=True and check whether the return value is an instance of sympy.integrals.risch.NonElementaryIntegral:
from sympy.integrals.risch import NonElementaryIntegral
isinstance(integrate(1/exp(x**2), x, risch=True), NonElementaryIntegral) # True
However, since Risch algorithm implementation is incomplete, in many cases like 1/cos(x**2) it returns an ordinary Integral object. This means it was not able to either find an elementary antiderivative or prove that one does not exist.
For this example, it helps to rewrite the trigonometric function in terms of exponential, with rewrite(cos, exp):
isinstance(integrate((1/cos(x**2)).rewrite(cos, exp), x, risch=True), NonElementaryIntegral)
returns True, so we know the integral is nonelementary.
Non-elementary antiderivatives
But often we don't really need an elementary function; something like Gamma or erf or Bessel functions may be okay, as long as it's some "known" function (which of course is a fuzzy term). The question becomes: how to tell whether SymPy was able to integrate a specific expression or not? Use .has(Integral) to check for that:
integrate(2/cos(x**2), x).has(Integral) # True
(Not isinstance(..., Integral), because the return value can be, as here, 2*Integral(1/cos(x**2), x).) This does not prove anything other than SymPy's failure to find the antiderivative; the antiderivative may well be a known function, even an elementary one.
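To combine both checks in one place, here is a small sketch; the helper name describe_integral is made up for illustration:

from sympy import Integral, Symbol, integrate, exp, cos
from sympy.integrals.risch import NonElementaryIntegral

x = Symbol('x')

def describe_integral(expr):
    # First check: did the Risch algorithm prove there is no elementary antiderivative?
    proved_nonelementary = isinstance(integrate(expr, x, risch=True), NonElementaryIntegral)
    # Second check: did ordinary integrate() produce a result with no leftover Integral?
    result = integrate(expr, x)
    found = not result.has(Integral)
    return proved_nonelementary, found, result

print(describe_integral(1/exp(x**2)))  # (True, True, sqrt(pi)*erf(x)/2): non-elementary but known
print(describe_integral(1/cos(x**2)))  # (False, False, Integral(...)): Risch inconclusive, nothing found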

Using scipy minimize with constraint on one parameter

I am using scipy's minimize function, and I'd like to have one parameter search only over values with two decimals.
from scipy.optimize import minimize

def cost(parameters, input, target):
    from sklearn.metrics import mean_squared_error
    output = self.model(parameters=parameters, input=input)
    cost = mean_squared_error(target.flatten(), output.flatten())
    return cost

parameters = [1, 1]  # initial parameters
res = minimize(fun=cost, x0=parameters, args=(input, target))
model_parameters = res.x
Here self.model is a function that performs some matrix manipulation based on the parameters. Input and target are two matrices. The function works the way I want it to, except that I would like parameters[1] to have a constraint. Ideally I'd just like to give it a numpy array, like np.arange(0, 10, 0.01). Is this possible?
In general this is very hard to do, as smoothness is one of the core assumptions of those optimizers.
Problems where some variables are discrete and some are not are hard, and are usually tackled either by mixed-integer optimization (which works well for MI linear programming and reasonably well for MI convex programming, although there are fewer good solvers) or by global optimization (usually derivative-free).
Depending on your task details, I recommend decomposing the problem:
outer-loop for np.arange(0,10,0.01)-like fixing of variable
inner-loop for optimizing, where this variable is fixed
return the model with the best objective (with status=success)
This results in N inner optimizations, where N is the size of the state space of the variable you fix.
Depending on your task/data, it might be a good idea to traverse the fixing space monotonically (like using np's arange) and use the solution of iteration i as the initial point for problem i+1 (potentially fewer iterations needed if the guess is good). But this is probably not relevant here; see the next part.
If you really have only 2 parameters, as indicated, this decomposition leads to an inner problem with only 1 variable. Then don't use minimize; use minimize_scalar (faster and more robust; does not need an initial point).
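A minimal sketch of that decomposition, with a hypothetical stand-in for the question's self.model and data (the real model, input and target would go in their place):

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical stand-in data and model for illustration only.
inp = np.array([1.0, 2.0, 3.0])
target = np.array([2.3, 4.1, 6.2])

def model(p0, p1, inp):
    return p0 * inp + p1

def inner_cost(p0, p1):
    out = model(p0, p1, inp)
    return np.mean((target - out) ** 2)

best = None
for p1 in np.arange(0, 10, 0.01):  # outer loop: fix the two-decimal parameter
    res = minimize_scalar(lambda p0: inner_cost(p0, p1))  # inner 1-D optimization over the free parameter
    if best is None or res.fun < best[0]:
        best = (res.fun, res.x, p1)

print('best cost %.4f with parameters [%.4f, %.2f]' % best)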

Adding constraints to iminuit fitter

I am trying to add a constraint to a very complicated minimization problem I have but I am not sure how to implement it, even after reading the docs.
I have a simple example that if answered will help me with my original problem. Here is the code:
from iminuit import Minuit
from iminuit import Minuit

def f(x, y, z):
    return (x - 1.)**2 + (y - 2.*x)**2 + (z - 3.*x)**2 - 1.

m = Minuit(f, x=.5, error_x=0.2, limit_x=(0., 1.),
           y=0., limit_y=(0., 1.), print_level=1)
m.migrad()
I would love to add a constraint, say x+y=1.
Thanks
The answer to my own question is: don't bother using Minuit. Use scipy.optimize with method SLSQP. It has equality and inequality constraint handling built in.
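For reference, a minimal sketch of the same toy problem with the x + y = 1 constraint in scipy's SLSQP; the bounds mirror the limits in the Minuit call above:

from scipy.optimize import minimize

def f(p):
    x, y, z = p
    return (x - 1.)**2 + (y - 2.*x)**2 + (z - 3.*x)**2 - 1.

constraints = [{'type': 'eq', 'fun': lambda p: p[0] + p[1] - 1.0}]  # x + y = 1
bounds = [(0., 1.), (0., 1.), (None, None)]  # same limits as limit_x and limit_y

res = minimize(f, x0=[0.5, 0.5, 0.0], method='SLSQP',
               bounds=bounds, constraints=constraints)
print(res.x, res.fun)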

sympy - Composition of functions and operators

Let's assume that we have a function f and an operator L. In this case, it can be something simple, like,
L[f](x)=\sum_{k=1}^{4}f(x+k)
My main objective is to compute compositions of operators like L above using SymPy. SymPy has no problem handling compositions of functions, but we can quickly see that there is going to be a problem with the operator above.
For example, I can define it as,
from sympy import Function, Symbol, summation

class L(Function):
    @classmethod
    def eval(cls, f, x):
        k = Symbol('k')
        return summation(f(k + x), (k, 1, 4))
And this indeed computes L[f] but returns an evaluated object that is no longer a function of x, so computing L[L[f]] no longer makes sense.
Is there a way in sympy to convert what L returns to be a function of x? I think that would solve the problem, since then I would be able to re-apply L on the new object.
Thanks for your time.
This question had a simple answer after all. SymPy's Lambda does the trick in this case: wrapping the evaluated result in Lambda makes it a function of x again, so I can re-apply L after the evaluation is done.
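A minimal sketch of that idea, written with L as a plain Python function that returns a Lambda (slightly different from the Function subclass in the question):

from sympy import Function, Lambda, Symbol, summation

x = Symbol('x')
k = Symbol('k')

def L(f):
    # Return a function of x (a Lambda) instead of an evaluated expression,
    # so the operator can be applied to its own output.
    return Lambda(x, summation(f(x + k), (k, 1, 4)))

f = Function('f')
Lf = L(f)    # Lambda(x, f(x + 1) + f(x + 2) + f(x + 3) + f(x + 4))
LLf = L(Lf)  # the composition L[L[f]] now makes sense
print(LLf(x))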
