How to set objective/constraint violation tolerance in Scipy SLSQP?

My understanding of SLSQP is that, as it iterates towards a solution, it simultaneously works to reduce constraint violations and minimize the given function. Since these are two side-by-side processes, I would expect there to be some way to set the tolerance for constraint violation and the tolerance for function minimization separately. Yet the SLSQP documentation doesn't indicate any way to set these two tolerances separately.
For example, in one minimization I may be OK with letting the constraints be violated on the order of 1e-2 while minimizing, yet in another minimization I would want the constraints to be satisfied to within 1e-15. Is there a way to set this?

Found a solution. Instead of using an equality constraint, you can change it to an inequality constraint: rather than requiring the constraint function to equal 0, you require its absolute value to be less than the desired tolerance.
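For example, here is a minimal sketch with a made-up objective and the hypothetical equality x0 + x1 - 1 = 0. Since SLSQP expects smooth constraint functions, the relaxed condition |x0 + x1 - 1| <= tol is split into two inequalities of the form fun(x) >= 0:

import numpy as np
from scipy.optimize import minimize

tol = 1e-2  # acceptable constraint violation for this run

def objective(x):
    # made-up objective for illustration
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# SLSQP inequality constraints have the form fun(x) >= 0, so
# |x0 + x1 - 1| <= tol is split into two smooth inequalities.
constraints = [
    {"type": "ineq", "fun": lambda x: tol - (x[0] + x[1] - 1.0)},
    {"type": "ineq", "fun": lambda x: tol + (x[0] + x[1] - 1.0)},
]

res = minimize(objective, x0=np.array([0.0, 0.0]),
               method="SLSQP", constraints=constraints)
print(res.x, res.fun)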

Related

Optimizing a problem in OpenMDAO so that objective takes specific value

I want to know if it is possible to optimize a problem in OpenMDAO in such a way that the objective approaches a specified value, rather than minimizing or maximizing the objective.
For example:
prob.model.add_objective("objective1", equals=10)
i.e. specifying a target value the way one does for constraints, is not possible.
You cannot specify an equality for the objective like that. You could specify a given objective, then separately add an equality constraint on that same value. This is technically valid, but it would be a very strange way to pose an optimization problem.
If you have a specific design variable you hope to vary to satisfy the equality constraint, then you probably don't want to do an optimization at all. Instead, you likely want to use a solver. You can use solvers to vary just one variable, or potentially more than one (as long as you have one equality constraint per variable). A generic example of using a solver can be found here, setting up a basic nonlinear circuit analysis.
However, in your case you more likely want to use a BalanceComp. You can set a specific fixed value into the right-hand side of the balance via the rhs_val argument, like this:
from openmdao.api import BalanceComp

bal = BalanceComp()
bal.add_balance('x', val=1.0, rhs_val=3.0)
Then you can connect the variable you want driven to that fixed value to the left-hand side of the balance.
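For completeness, here is a minimal sketch of the full wiring. The ExecComp, the target value of 10, and the Newton solver are assumptions for illustration, not something from your model:

import openmdao.api as om

prob = om.Problem()
model = prob.model

# Hypothetical component whose output 'objective1' should be driven to 10.
model.add_subsystem('comp', om.ExecComp('objective1 = x**2'), promotes=['*'])

# The balance varies 'x' until its left-hand side equals rhs_val.
bal = om.BalanceComp()
bal.add_balance('x', val=1.0, rhs_val=10.0)
model.add_subsystem('balance', bal)

model.connect('balance.x', 'x')
model.connect('objective1', 'balance.lhs:x')

# A Newton solver drives the balance residual to zero.
model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
model.linear_solver = om.DirectSolver()

prob.setup()
prob.run_model()
print(prob.get_val('balance.x'))  # should approach sqrt(10)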

How to define Var with an open interval in Pyomo?

Pyomo provides some features to add constraints to variables, like the code below from the documentation.
model.LumberJack = Var(within=NonNegativeReals, bounds=(0,6), initialize=1.5)
But I want to define a variable with an open-interval constraint such as (0, 1]. In my understanding, the bounds argument means a closed interval, so if I set bounds=(0, 1), it means [0, 1].
I think open-interval constraints are a common need and that Pyomo would provide this kind of feature, but I couldn't find it. Is this an implementation issue, or a theoretical issue in optimization?
An open interval would mean a "strictly less" constraint in the model, i.e.
variable < upper bound
instead of
variable <= upper bound
Depending on your solution algorithm this may not be supported by the underlying theory. For example, in linear and mixed-integer programming theory there is no support for strict inequalities. The only things you can have are <= and >=.
So even if Pyomo supported (half-)open intervals, the algorithms used to solve the problem may not.
The usual approach to work around this is to use a small epsilon and write
variable <= upper bound - epsilon
to "emulate" a strict inequality. This may of course introduce numerical difficulties.
Finally, given that most algorithms operate with finite precision and numerical tolerances, there is the question of what a strict inequality on a variable bound should even mean. As soon as the tolerance is bigger than 0, the variable would be allowed to attain the value at the upper bound, and that would be considered feasible within tolerances.

is it possible to force cp-sat to meet all the constraints for a feasible solution?

Dears,
I read that CP-SAT, for a feasible solution, does not ensure that all constraints are met. Am I right? Is there a way to force it to meet all of them even if the solution is only "feasible"?
Does it report which constraints are met and which are not?
Thank you
This is wrong.
Every constraint must be met. And the solver checks all solutions produced to make sure they are valid.
I believe you are confused by the notion of enforcement literals on constraints. These are the equivalent of indicator constraints in the MIP world.
Given a Boolean variable b, and a constraint (currently limited to bool_or, bool_and and linear_constraints), you can write
b => constraint
or
negation(b) => constraint
meaning that if the enforcement literal (the left-hand side) is true, then the constraint must be satisfied.
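A minimal sketch with the OR-Tools Python API (the variables and bounds are made up for illustration):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
b = model.NewBoolVar('b')
x = model.NewIntVar(0, 10, 'x')

# b => (x >= 5): this constraint only has to hold when b is true.
model.Add(x >= 5).OnlyEnforceIf(b)
# not(b) => (x <= 2)
model.Add(x <= 2).OnlyEnforceIf(b.Not())

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(b), solver.Value(x))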
Where did you read that? The CP approach means finding a good feasible solution rather than a provably optimal one, which obviously speeds up the computation.
The definition of a feasible solution is: a set of values for the decision variables that satisfies all of the constraints in an optimization problem.
You can run a CP-SAT example and check whether a constraint is violated, but there should be no violation.
Edit: Or are we talking about relaxation?

Alternatives to fmincon in python for constrained non-linear optimisation problems

I am having trouble solving an optimisation problem in Python involving ~20,000 decision variables. The problem is non-linear, and I wish to apply both bounds and constraints. In addition, the gradient with respect to each of the decision variables can be calculated.
The bounds are simply that each decision variable must lie in the interval [0, 1], and there is a monotonicity constraint placed upon the variables, i.e. each decision variable must be greater than the previous one.
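To make the setup concrete, here is a sketch (not from the paper) of how I would express these bounds and the monotonicity constraint for scipy, assuming a method such as trust-constr that accepts a sparse LinearConstraint:

import numpy as np
from scipy.sparse import diags
from scipy.optimize import LinearConstraint

n = 20000  # number of decision variables

# Monotonicity x[i+1] >= x[i] as A @ x >= 0, where each row of A
# is [..., -1, +1, ...].
A = diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1],
          shape=(n - 1, n))
monotone = LinearConstraint(A, lb=0.0, ub=np.inf)

bounds = [(0.0, 1.0)] * n  # each variable in [0, 1]
# These would then be passed to scipy.optimize.minimize(...,
# method='trust-constr', bounds=bounds, constraints=[monotone]).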
I initially intended to use the L-BFGS-B method provided by the scipy.optimize package; however, I found out that, while it supports bounds, it does not support constraints.
I then tried the SLSQP method, which does support both constraints and bounds. However, because it requires more memory than L-BFGS-B and I have a large number of decision variables, I ran into memory errors fairly quickly.
The paper this problem comes from used the fmincon solver in Matlab to optimise the function, which, to my knowledge, supports both bounds and constraints in addition to being more memory efficient than the SLSQP method provided by scipy. I do not have access to Matlab, however.
Does anyone know of an alternative I could use to solve this problem?
Any help would be much appreciated.

Fixed-point bounds for optimisation using scipy linprog

I want to run an optimisation problem repeatedly to further refine the end result.
Essentially, the objective is to minimise the maximum of a set of variables (subject to inequality and equality constraints), and then minimise the maximum of the set excluding the maximum, and then minimise the maximum of the set excluding the two largest numbers and so on...
The algorithm I have in mind is:
1. Run scipy.optimize.linprog(..., bounds=[(-numpy.inf, numpy.inf), (-numpy.inf, numpy.inf), (-numpy.inf, numpy.inf), ...]) with all variables unbounded, to minimise the maximum of the set of numbers.
2. Assuming the optimisation problem is feasible and successfully solved, fix the maximum to opt_val by setting bounds=[..., (opt_val, opt_val), ...], where all other variables keep the bounds (-numpy.inf, numpy.inf).
3. Make the inequality constraints corresponding to that variable ineffective by changing the corresponding entries of b_ub to numpy.inf.
4. Rerun the optimisation with the modified bounds and inequality vector.
This runs without error, but it seems like scipy/numpy simply ignores the bounds I place on the variables: I get results for the variables I have 'fixed' that are not the corresponding opt_val.
Can scipy handle bounds that restrict a variable to a single floating point number?
Is this the best way to be solving my problem?
The code I have developed is really quite extensive, which is why I have not posted it here, so of course I don't expect a code-based solution. What I am looking for here is a yes/no answer as to whether scipy can handle bounding intervals restricted to a single float, and, on a higher level, whether I have the correct approach.
The documentation at https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.linprog.html does not explicitly say whether or not it is possible to specify fixed-point bounds.
It turns out it was a problem with relaxing the inequality constraints. I had mistakenly relaxed all constraints involving the fixed variables, when instead I needed to relax only some of them.
@ErwinKalvelagen's comment is still worth noting, however.
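As to the yes/no part of the question: scipy's linprog does accept a degenerate bound (v, v) that pins a variable to a single value, as this minimal sketch with made-up data shows:

import numpy as np
from scipy.optimize import linprog

# minimise x0 + x1 subject to x0 + x1 >= 1, with x1 pinned to 0.25
res = linprog(c=[1.0, 1.0],
              A_ub=[[-1.0, -1.0]], b_ub=[-1.0],
              bounds=[(-np.inf, np.inf), (0.25, 0.25)])
print(res.x)  # roughly [0.75, 0.25]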
