Pyomo provides features for putting bound constraints on variables, like the code below from the documentation.
model.LumberJack = Var(within=NonNegativeReals, bounds=(0,6), initialize=1.5)
But I want to define a variable with an open-ended bound, such as the half-open interval (0, 1]. In my understanding, the bounds argument denotes a closed interval, so setting bounds=(0, 1) means [0, 1].
I would think this kind of constraint is common and that Pyomo provides a feature for it, but I couldn't find one. Is this an implementation issue, or a theoretical issue in optimization?
An open interval would mean a "strictly less" constraint in the model, i.e.
variable < upper bound
instead of
variable <= upper bound
Depending on your solution algorithm, this may not be supported by the underlying theory. For example, linear and mixed-integer programming theory has no support for strict inequalities; the only relations you can have are <= and >=.
So even if Pyomo supported (half-)open intervals, the algorithms used to solve the problem might not.
The usual approach to work around this is to use a small epsilon and write
variable <= upper bound - epsilon
to "emulate" a strict inequality. This may of course introduce numerical difficulties.
Finally, given that most algorithms operate with finite precision and numerical tolerances, there is the question of what a strict inequality on a variable bound should even mean. As soon as the tolerance is greater than 0, the variable would be allowed to attain the value at the upper bound, and that would still be considered feasible within tolerances.
I want to know whether it is possible to optimize a problem in OpenMDAO in such a way that the objective approaches a specified value, rather than minimizing or maximizing the objective.
For example:
prob.model.add_objective("objective1", equals=10)
the way it is done when specifying a constraint, is not possible.
You cannot specify an equality for the objective like that. You could specify a given objective and then, separately, add an equality constraint for that same value. This is technically valid, but it would be a very strange way to pose an optimization problem.
If you have a specific design variable you hope to vary to satisfy the equality constraint, then you probably don't want to do an optimization at all. Instead, you likely want to use a solver. You can use solvers to vary just one variable, or potentially more than one (as long as you have one equality constraint per variable). A generic example of using a solver can be found here, setting up a basic nonlinear circuit analysis.
However, in your case you more likely want to use a BalanceComp. You can set a specific fixed value for the right-hand side of the balance, using an initialization argument like this:
from openmdao.api import BalanceComp

bal = BalanceComp()
bal.add_balance('x', val=1.0, rhs_val=3.0)  # right-hand side fixed at 3.0
Then you can connect the variable you want to hold at that value to the left-hand side of the balance.
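A rough, self-contained sketch of that wiring is below. The Analysis component is a toy stand-in (objective1 = design_var**2) and all names are placeholders; a Newton solver on the group drives the balance residual to zero so that objective1 ends up at the fixed right-hand-side value of 3.0:

import openmdao.api as om

class Analysis(om.ExplicitComponent):
    # toy stand-in for the real model: objective1 = design_var ** 2
    def setup(self):
        self.add_input('design_var', val=1.0)
        self.add_output('objective1', val=1.0)
        self.declare_partials('objective1', 'design_var')

    def compute(self, inputs, outputs):
        outputs['objective1'] = inputs['design_var'] ** 2

    def compute_partials(self, inputs, partials):
        partials['objective1', 'design_var'] = 2.0 * inputs['design_var']

prob = om.Problem()

bal = om.BalanceComp()
bal.add_balance('x', val=1.0, rhs_val=3.0)  # residual is (lhs:x - 3.0)

prob.model.add_subsystem('analysis', Analysis())
prob.model.add_subsystem('balance', bal)

# feed the quantity to be held at 3.0 into the left-hand side of the balance,
# and feed the balance output back as the unknown the solver varies
prob.model.connect('analysis.objective1', 'balance.lhs:x')
prob.model.connect('balance.x', 'analysis.design_var')

prob.model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
prob.model.linear_solver = om.DirectSolver()

prob.setup()
prob.run_model()
print(prob.get_val('analysis.objective1'))  # converges to ~3.0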
I am trying to minimize a complex objective function that has 2 decision variables.
The variables have bounds as mentioned below:
0 <= var1, var2 < Some_upper_bound
As per my understanding of the bounds argument of the optimize.minimize() function, both the upper and lower values are inclusive.
How do I create a bound such that one value is inclusive (0) and the other is exclusive (Some_upper_bound)?
Any help with this is much appreciated. Thanks in advance!
No tool will do what you want, and for a reason: with a strict inequality the feasible region is no longer compact and the optimization problem is no longer well defined (the optimum may not be attained). Also, most solvers use a feasibility tolerance, so the constraint x <= a effectively becomes x <= a + feastol. This is done because computations suffer from limited floating-point precision. See: https://yetanothermathprogrammingconsultant.blogspot.com/2017/03/strict-inequalities-in-optimization.html.
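If the epsilon workaround is acceptable, a minimal sketch with scipy.optimize.minimize might look like this; the objective, the upper bound, and the epsilon value below are placeholders:

import numpy as np
from scipy.optimize import minimize

UPPER = 10.0   # stand-in for Some_upper_bound
EPS = 1e-8     # assumed shift used to emulate the strict inequality

def objective(x):
    # placeholder for the real objective in the two decision variables
    return (x[0] - 3.0) ** 2 + (x[1] - 7.0) ** 2

# [0, UPPER - EPS] emulates the half-open interval [0, UPPER)
bounds = [(0.0, UPPER - EPS), (0.0, UPPER - EPS)]
res = minimize(objective, x0=np.array([1.0, 1.0]), bounds=bounds, method="L-BFGS-B")
print(res.x, res.fun)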
I'm trying to optimize a set of equations with the L-BFGS-B optimizer in SciPy where I know the lower bound is zero (not inclusive) but do not know the upper bound.
I'm wondering if there is a way to tell SciPy to set the upper bound just below the lowest input that causes an error. In other words, can SciPy automatically "feel its way" toward what the upper bound should be in constrained optimization? If not, are there any standard ways to do this? The naive approach I am considering would be to evaluate values inside a try/except loop to find an acceptably accurate upper bound by brute force.
From a relatively dramatic discussion on SciPy's issues page I know that SciPy's current implementation of L-BFGS-B is written in Fortran (see the original paper here), so I'm not incredibly hopeful for an automated way of doing this. If no ways have been thought of before me and brute-forcing the upper bound proves impractical, I suppose I may have to start looking for approximations to it :)
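For concreteness, the brute-force idea I have in mind would look roughly like the sketch below, where f stands in for my real objective and is assumed to raise an exception when evaluated above the unknown bound:

def find_upper_bound(f, start=1.0, factor=2.0, tol=1e-6, max_doublings=60):
    # bracket the largest input at which f still evaluates without raising
    lo, hi = 0.0, start
    for _ in range(max_doublings):
        try:
            f(hi)
            lo, hi = hi, hi * factor   # still fine, keep growing
        except Exception:
            break                      # first failing point found
    else:
        return hi                      # never failed; treat as effectively unbounded
    # bisect between the last good point (lo) and the first failing point (hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        try:
            f(mid)
            lo = mid
        except Exception:
            hi = mid
    return lo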
I'm having the following numerical issue while using the SCIP solver as a callable library via PySCIPOpt:
two equivalent and almost identical models for the same problem yield different optimal values and optimal solutions, with optimal values having a relative difference of order 1e-6.
An independent piece of software verified that the solutions yielded by both models are feasible for the original problem and that their true objective values agree with the optimal values reported by SCIP for each model. Before this instance appeared, a number of instances of the same problem had been solved, with both models always agreeing on their optimal solutions and values.
Is it possible to modify the numerical precision SCIP uses when comparing the values of primal solutions among themselves and against dual bounds? Which parameters should be modified to overcome this numerical difficulty?
What I tried
I've tried the following things and the problem persisted:
Turning presolving off with model.setPresolve(SCIP_PARAMSETTING.OFF).
Setting model.setParam('numerics/feastol', epsilon) with different values of epsilon.
Since feasible sets and objective functions agree (see description below), I've checked that the actual coefficients of the objective functions agree by calling model.getObjective() and comparing coefficients for equality for each variable appearing in the objective function.
The only thing that seemed to help was adding some noise (multiplying by numbers of the form 1+eps with small eps) to the coefficients in the objective function of the model yielding the worse solution. This makes SCIP yield the same (and better) solution for both models if eps is within a certain range.
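For reference, the parameter changes above were applied roughly as in the sketch below (the epsilon value shown is just an example, not one of the actual values tried):

from pyscipopt import Model, SCIP_PARAMSETTING

model = Model()
# ... build the variables, constraints and objective here ...

model.setPresolve(SCIP_PARAMSETTING.OFF)   # turn presolving off
model.setParam('numerics/feastol', 1e-9)   # example epsilon
model.optimize()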
Just in case, this is what I get with scip -v in the terminal:
SCIP version 6.0.2 [precision: 8 byte] [memory: block] [mode: optimized] [LP solver: SoPlex 4.0.2] [GitHash: e639a0059d]
Description of the models
Model (I) has approximately 30K binary variables, say X[i] for i in some index set I. It's feasible and not a hard MIP.
Model (II) has the same variables as model (I), plus ~100 continuous variables, say Y[j] for j in some index set J. Model (II) also has some constraints of the form X[i_1] + X[i_2] + ... + X[i_n] <= Y[j].
Both objective functions agree and depend only on the X[i] variables; the sense is minimization. Note that the Y[j] variables are essentially free in model (II), since they are continuous and do not appear in the objective function. Obviously, there is no point in including the Y[j] variables, but the optimal values shouldn't differ.
Model (II) is the one yielding the better (i.e. smaller) value.
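Schematically, the extra pieces in model (II) look like the following PySCIPOpt fragment (toy index sets and groupings; the real model has ~30K X's and ~100 Y's, and the objective shown is only a placeholder over the X's):

from pyscipopt import Model, quicksum

m = Model()
I = range(5)                              # toy stand-in for the real index set I
groups = {0: [0, 1, 2], 1: [2, 3, 4]}     # which X's appear in each Y-constraint (placeholder)

X = {i: m.addVar(vtype='B', name='X_%d' % i) for i in I}
Y = {j: m.addVar(vtype='C', lb=None, name='Y_%d' % j) for j in groups}   # free continuous vars

for j, idxs in groups.items():
    m.addCons(quicksum(X[i] for i in idxs) <= Y[j])

m.setObjective(quicksum(X[i] for i in I), 'minimize')   # objective uses only the X's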
Sorry for the late answer.
So, in general, it can definitely happen that a MIP solver reports different optimal solutions for formulations that are slightly perturbed (even if they are mathematically equivalent), due to the use of floating-point arithmetic.
Your problem does seem a bit weird though. You say the Y-variables are free, i.e. they do not have any variable bounds attached to them? If this is the case, I would be very surprised if they don't get presolved away.
If you are still interested in this problem, could you provide your instance files to me and I will look at them?
I want to run an optimisation problem repeatedly to further refine the end result.
Essentially, the objective is to minimise the maximum of a set of variables (subject to inequality and equality constraints), then minimise the maximum of the set excluding that maximum, then minimise the maximum of the set excluding the two largest values, and so on...
The algorithm I have in mind is:
Run scipy.linprog(..., bounds=[(-numpy.inf, numpy.inf), (-numpy.inf, numpy.inf), (-numpy.inf, numpy.inf), ...]) with all variables unbounded, to minimise the maximum of the set of numbers.
Assuming the optimisation problem is feasible and successfully solved, fix the maximum to opt_val by setting bounds=[..., (opt_val, opt_val), ...], where all other variables keep the bounds (-numpy.inf, numpy.inf).
Make the inequality constraints corresponding to that variable ineffective by changing the corresponding entries of b_ub to numpy.inf.
Rerun the optimisation with the modified bounds and inequality vector.
This runs without error, but it seems like scipy/numpy explicitly ignores the bounds I place on the variables - I get results for the variables that I have 'fixed' that are not equal to the corresponding opt_val.
Can scipy handle bounds that restrict a variable to a single floating point number?
Is this the best way to be solving my problem?
The code I have developed is really quite extensive, which is why I have not posted it here, so of course I don't expect a code-based solution. What I am looking for here is a yes/no answer as to whether scipy can handle bounding intervals restricted to a single float, and, on a higher level, whether I have the correct approach.
The documentation at https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.linprog.html does not explicitly say whether it is possible to specify degenerate bounds (lower bound equal to upper bound).
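As a minimal check of the fixed-bound part of the question, something like the toy problem below (placeholder data, not my actual model) would confirm whether linprog honours a degenerate bound:

import numpy as np
from scipy.optimize import linprog

# toy LP: minimise x0 + x1 subject to x0 + x1 >= 1, with x0 pinned to 0.25
c = np.array([1.0, 1.0])
A_ub = np.array([[-1.0, -1.0]])    # -(x0 + x1) <= -1
b_ub = np.array([-1.0])

opt_val = 0.25                      # stand-in for the value being fixed
bounds = [(opt_val, opt_val), (-np.inf, np.inf)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.status, res.x)            # x[0] should come back as exactly 0.25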
It turns out it was a problem with how I relaxed the inequality constraints. I had mistakenly relaxed all constraints involving the fixed variables, when I only needed to relax some of them.
@ErwinKalvelagen's comment is still worth noting, however.