I want to run an optimisation problem repeatedly to further refine the end result.
Essentially, the objective is to minimise the maximum of a set of variables (subject to inequality and equality constraints), then minimise the maximum of the set excluding the largest value, then minimise the maximum of the set excluding the two largest values, and so on...
The algorithm I have in mind is:
Run scipy.linprog(..., bounds=[(-numpy.inf, numpy.inf), (-numpy.inf, numpy.inf), (-numpy.inf, numpy.inf), ...]) with all variables unbounded, to minimise the maximum of the set of numbers.
Assuming the optimisation problem is feasible and successfully solved, fix the maximum to the optimal value opt_val by setting bounds=[..., (opt_val, opt_val), ...], where all other variables keep the bounds (-numpy.inf, numpy.inf).
Make the inequality constraints involving that variable ineffective by setting the corresponding entries of b_ub to numpy.inf.
Rerun the optimisation with the modified bounds and inequality vector (a minimal sketch of these two passes follows below).
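As a sketch of those mechanics on made-up toy data (the variable index chosen to be fixed is arbitrary), this is the kind of call sequence I mean:

import numpy as np
from scipy.optimize import linprog

# Toy LP: minimise x0 + x1 subject to x0 >= 1 and x1 >= 2, written in the
# A_ub @ x <= b_ub form that linprog expects.
c = np.array([1.0, 1.0])
A_ub = np.array([[-1.0, 0.0],
                 [0.0, -1.0]])
b_ub = np.array([-1.0, -2.0])

# First pass: all variables unbounded.
bounds = [(-np.inf, np.inf), (-np.inf, np.inf)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

# Second pass: pin variable 1 to the optimum found above via a degenerate
# interval, and make its inequality row ineffective by setting the
# corresponding entry of b_ub to +inf (a very large finite value also works
# if a particular method rejects inf here).
opt_val = res.x[1]
bounds[1] = (opt_val, opt_val)
b_ub_relaxed = b_ub.copy()
b_ub_relaxed[1] = np.inf
res2 = linprog(c, A_ub=A_ub, b_ub=b_ub_relaxed, bounds=bounds)
print(res2.x)  # variable 1 should come back exactly at opt_val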
This runs without error, but scipy/numpy appears to ignore the bounds I place on the variables: the variables I have 'fixed' come back with values other than the corresponding opt_val.
Can scipy handle bounds that restrict a variable to a single floating point number?
Is this the best way to be solving my problem?
The code I have developed is quite extensive, which is why I have not posted it here, so of course I don't expect a code-based solution. What I am looking for is a yes/no answer as to whether scipy can handle bounding intervals restricted to a single float and, on a higher level, whether I have the correct approach.
The documentation at https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.optimize.linprog.html does not explicitly say whether or not it is possible to specify fixed-point bounds.
It turns out it was a problem with relaxing the inequality constraints. I had mistakenly relaxed all constraints involving the fixed variables, when I should only have relaxed some of them.
@ErwinKalvelagen's comment is still worth noting, however.
My understanding of SLSQP is that, when it is iterating towards a solution, it simultaneously works to reduce constraint violations and minimize the given function. Since these are two side-by-side processes, I would expect there to be some way to set the tolerance for constraint violation and the tolerance for function minimization separately. Yet the SLSQP documentation doesn't indicate any way to set these two tolerances separately.
For example, in one minimization I may be fine with letting the constraints be violated on the order of 1e-2, while in another minimization I would want the constraints to be violated by less than 1e-15. Is there a way to set this?
Found a solution. Instead of using an equality constraint, you can change it to an inequality constraint where the constraint expression, instead of being required to equal 0, only has to stay below the desired tolerance (in absolute value).
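As a minimal sketch of that idea (the objective, constraint function and tolerance value below are placeholders, not from my actual problem), the equality h(x) = 0 is replaced by the pair of inequalities tol - h(x) >= 0 and tol + h(x) >= 0, i.e. |h(x)| <= tol:

import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

def h(x):  # original equality constraint: h(x) == 0
    return x[0] + x[1] - 3.0

tol = 1e-2  # how much constraint violation is acceptable

constraints = [
    # SLSQP inequality constraints must be non-negative, so |h(x)| <= tol
    # becomes tol - h(x) >= 0 and tol + h(x) >= 0.
    {"type": "ineq", "fun": lambda x: tol - h(x)},
    {"type": "ineq", "fun": lambda x: tol + h(x)},
]

res = minimize(objective, x0=np.array([0.0, 0.0]),
               method="SLSQP", constraints=constraints)
print(res.x, h(res.x))  # h(res.x) should satisfy |h| <= tol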
Is it possible to optimize a problem in OpenMDAO in such a way that the objective is driven towards a specified value, rather than being minimized or maximized?
For example:
prob.model.add_objective("objective1", equals=10)
that is, specifying equals= as one does for constraints is not possible.
You cannot specify an equality for the objective like that. You could specify a given objective and then secondarily add an equality constraint on that same value. This is technically valid, but it would be a very strange way to pose an optimization problem.
If you have a specific design variable you hope to vary to satisfy the equality constraint, then you probably don't want to do an optimization at all. Instead, you likely want to use a solver. You can use solvers to vary just one variable, or potentially more than one (as long as you have one equality constraint per variable). A generic example of using a solver can be found here, setting up a basic nonlinear circuit analysis.
However, in your case you more likely want to use a BalanceComp. You can set a specific fixed value as the right-hand side of the balance, using the rhs_val argument like this:
from openmdao.api import BalanceComp

bal = BalanceComp()
bal.add_balance('x', val=1.0, rhs_val=3.0)
Then you can connect the variable you want to drive to that value to the left-hand side of the balance.
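As a hypothetical, self-contained sketch of that wiring (the component names, the ExecComp expression and the target value 3.0 are placeholders I made up, not your model):

import openmdao.api as om

prob = om.Problem()
model = prob.model

# Some analysis whose output y we want driven to the value 3.0.
model.add_subsystem('comp', om.ExecComp('y = 2.0 * x'))

bal = om.BalanceComp()
bal.add_balance('x', val=1.0, rhs_val=3.0)  # residual is lhs:x - rhs:x, with rhs fixed at 3.0
model.add_subsystem('balance', bal)

# Feed the balance's state into the analysis, and the analysis output back
# into the left-hand side of the balance.
model.connect('balance.x', 'comp.x')
model.connect('comp.y', 'balance.lhs:x')

# A Newton solver varies the state until the residual is driven to zero.
model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
model.linear_solver = om.DirectSolver()

prob.setup()
prob.run_model()
print(prob.get_val('balance.x'), prob.get_val('comp.y'))  # y should end up near 3.0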
Pyomo provides features for putting bound constraints on variables, as in the code below from the documentation.
model.LumberJack = Var(within=NonNegativeReals, bounds=(0,6), initialize=1.5)
But I want to define a variable with an open (or half-open) interval constraint such as (0, 1]. In my understanding, the bounds argument means a closed interval, so if I set bounds=(0, 1), it means [0, 1].
I think open interval constraints are a common requirement and that Pyomo would provide this kind of feature, but I couldn't find it. Is this an implementation issue, or a theoretical issue in optimization?
An open interval would mean a "strictly less" constraint in the model, i.e.
variable < upper bound
instead of
variable <= upper bound
Depending on your solution algorithm this may not be supported by the underlying theory. For example, in linear and mixed integer programming theory there is no support for strict inequalities. The only thing you can have is <= and >=.
So even if Pyomo supported (half-)open intervals, the algorithms used to solve the problem might not.
The usual approach to work around this is to use a small epsilon and write
variable <= upper bound - epsilon
to "emulate" a strict inequality. This may of course introduce numerical difficulties.
Finally, given that most algorithms operate with finite precision and numerical tolerances, there is the question of what a strict inequality on a variable bound should even mean. As soon as the tolerance is greater than 0, the variable would be allowed to attain the value at the bound, and that would be considered feasible within tolerances.
I'm having the following numerical issue while using SCIP solver as a callable library via PySCIPOpt:
two equivalent and almost identical models for the same problem yield different optimal values and optimal solutions, with the optimal values having a relative difference of order 1e-6
An independent piece of software verified that the solutions yielded by both models are feasible for the original problem and that their true values agree with the optimal values reported by SCIP for each model. Until the appearance of this instance a bunch of instances of the same problem had been solved, with both models always agreeing on their optimal solutions and values.
Is it possible to modify the numerical precision SCIP uses when comparing the values of primal solutions among themselves and against dual bounds? Which parameters should be modified to overcome this numerical difficulty?
What I tried
I've tried the following things and the problem persisted:
Turning presolving off with model.setPresolve(SCIP_PARAMSETTING.OFF).
Setting model.setParam('numerics/feastol', epsilon) with different values of epsilon (see the sketch after this list).
Since feasible sets and objective functions agree (see description below), I've checked that the actual coefficients of the objective functions agree by calling model.getObjective() and comparing coefficients for equality for each variable appearing in the objective function.
The only thing that seemed to help was to add some noise (multiplying by numbers of the form 1 + eps with small eps) to the coefficients in the objective function of the model yielding the worse solution. This makes SCIP yield the same (and better) solution for both models if eps is within a certain range.
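For reference, this is roughly how those settings were applied through PySCIPOpt (the parameter values shown here are placeholders, and I am not certain these are the right knobs for this issue):

from pyscipopt import Model, SCIP_PARAMSETTING

model = Model()
model.setPresolve(SCIP_PARAMSETTING.OFF)      # disable presolving
model.setParam('numerics/feastol', 1e-9)      # primal feasibility tolerance
model.setParam('numerics/dualfeastol', 1e-9)  # dual feasibility tolerance
model.setParam('numerics/epsilon', 1e-12)     # absolute comparison tolerance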
Just in case, this is what I get with scip -v in the terminal:
SCIP version 6.0.2 [precision: 8 byte] [memory: block] [mode: optimized] [LP solver: SoPlex 4.0.2] [GitHash: e639a0059d]
Description of the models
Model (I) has approximately 30K binary variables, say X[i] for i in some index set I. It's feasible and not a hard MIP.
Model (II) has the same variables as model (I) plus ~100 continuous variables, say Y[j] for j in some index set J. Model (II) also has some constraints of the form X[i_1] + X[i_2] + ... + X[i_n] <= Y[j].
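In PySCIPOpt terms, a constraint of that shape is built roughly like this (the index sets and names below are placeholders, not the real instance):

from pyscipopt import Model, quicksum

m = Model()
I = range(5)  # placeholder for the subset i_1, ..., i_n
X = {i: m.addVar(vtype='B', name='X_%d' % i) for i in I}
Y = m.addVar(vtype='C', lb=None, name='Y_0')  # lb=None leaves the variable free below as well
m.addCons(quicksum(X[i] for i in I) <= Y)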
Both objective functions agree and depend only on X[i] variables, the sense is minimization. Note that variables Y[j] are essentially free in model (II), since they are continuous and they do not appear in the objective function. Obviously, there is no point in including the Y[j] variables, but the optimal values shouldn't be different.
Model (II) is the one yielding the better (i.e. smaller) value.
Sorry for the late answer.
So, in general, it can definitely happen that any MIP solver reports different optimal solutions for formulations that are slightly perturbed (even if they are mathematically the same), due to the use of floating-point arithmetic.
Your problem does seem a bit weird though. You say the Y-variables are free, i.e. they do not have any variable bounds attached to them? If this is the case, I would be very surprised if they don't get presolved away.
If you are still interested in this problem, could you provide your instance files to me and I will look at them?
I am having trouble solving an optimisation problem in Python involving ~20,000 decision variables. The problem is non-linear, and I wish to apply both bounds and constraints. In addition, the gradient with respect to each of the decision variables can be calculated.
The bounds are simply that each decision variable must lie in the interval [0, 1], and there is a monotonicity constraint on the variables, i.e. each decision variable must be greater than the previous one.
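For concreteness, the structure described above looks roughly like this in scipy.optimize terms (the objective below is a placeholder; the real one, and its gradient, come from the paper):

import numpy as np

n = 20000
bounds = [(0.0, 1.0)] * n  # each decision variable must lie in [0, 1]

def objective(x):
    return np.sum(x ** 2)  # placeholder objective, not the real one

# Monotonicity: x[i+1] >= x[i] for all i, i.e. np.diff(x) >= 0, written as an
# inequality constraint in the form scipy.optimize.minimize expects.
constraints = {"type": "ineq", "fun": lambda x: np.diff(x)}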
I initially intended to use the L-BFGS-B method provided by the scipy.optimize package; however, I found out that, while it supports bounds, it does not support constraints.
I then tried the SLSQP method, which does support both constraints and bounds. However, because it requires more memory than L-BFGS-B and I have a large number of decision variables, I ran into memory errors fairly quickly.
The paper this problem comes from used the fmincon solver in MATLAB to optimise the function, which, to my knowledge, supports both bounds and constraints and is more memory efficient than the SLSQP method provided by scipy. I do not have access to MATLAB, however.
Does anyone know of an alternative I could use to solve this problem?
Any help would be much appreciated.