objective function with binary variables - python

I am working on a complex model in Pyomo. Unfortunately, I have to change the formula of the objective function based on its previous value.
In particular, my objective function is composed of two terms, call them A and B, that have different orders of magnitude (A is usually 2 or 3 orders of magnitude higher than B, but this may vary).
In order to guarantee that A and B have the same weight in the formula, I need to write my objective function as below:
objective = A + B*K
where K is the value that brings the second term to the same scale/magnitude as A.
Example:
A = 4e10
B = 2e3
K = 1e(10-3) = 1e7, so that B*K = 2e10, the same order of magnitude as A
The problem is that, in order to know K, I must know the values of A and B, but Pyomo doesn't give values; it just passes an expression to the solver.
I have read that, with a smart use of binary variables, it is possible to overcome this issue. Could anyone suggest a useful methodology?
Kind regards

It seems like you are dealing with a multi-objective optimization problem. Since the values of variables involved in A and B are not known before solving the model, you can't define the value of K based on A and B.
There are different ways to solve multi-objective optimization problems which you can consider for your specific problem (e.g., the ε-constraint method). In these problems, usually you are not interested in finding a single solution, but in finding a set of Pareto-optimal solutions which are not dominated by any other solution in the feasible region.
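To make the ε-constraint idea concrete, here is a minimal Pyomo sketch (not from the original answer): it assumes a hypothetical build_model() that returns a fresh model exposing the two terms as expressions m.A_expr and m.B_expr, and it sweeps an upper bound on B while optimizing A.
import pyomo.environ as pyo

def pareto_sweep(build_model, eps_values, solver_name="glpk"):
    # Epsilon-constraint sweep: optimize A while bounding B by eps.
    frontier = []
    for eps in eps_values:
        m = build_model()  # hypothetical: fresh ConcreteModel with m.A_expr and m.B_expr
        m.obj = pyo.Objective(expr=m.A_expr, sense=pyo.minimize)
        m.eps_con = pyo.Constraint(expr=m.B_expr <= eps)
        pyo.SolverFactory(solver_name).solve(m)
        frontier.append((pyo.value(m.A_expr), pyo.value(m.B_expr)))
    return frontier
Each value of ε gives one point; collecting them traces (part of) the Pareto front, which sidesteps the need to pick K up front.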

Related

How to obtain multiple solutions of a binary LP problem using Google's OR-Tools in Python?

I am new to integer optimization. I am trying to solve the following large (although not that large) binary linear optimization problem:
max_{x} x_1+x_2+...+x_n
subject to: A*x <= b ; x_i is binary for all i=1,...,n
As you can see,
- the control variable is a vector x of length, say, n=150; x_i is binary for all i=1,...,n
- I want to maximize the sum of the x_i's
- in the constraint, A is an n x n matrix and b is an n x 1 vector, so I have n=150 linear inequality constraints.
I want to obtain a certain number of solutions, NS. Say, NS=100. (I know there is more than one solution, and there are potentially millions of them.)
I am using Google's OR-Tools for Python. I was able to write the problem and to obtain one solution. I have tried many different ways to obtain more solutions after that, but I just couldn't. For example:
I tried using the SCIP solver, and then I used the value of the objective function at the optimum, call it V, to add another constraint, x_1+x_2+...+x_n >= V, on top of the original "Ax<=b", and then used the CP-SAT solver to find NS feasible vectors (I followed the instructions in this guide). There is no optimization in this second step, just a quest for feasibility. This didn't work: the solver produced NS replicas of the same vector. Still, when asked for the number of solutions found, it misleadingly replies that solution_printer.solution_count() is equal to NS. Here's a snippet of the code that I used:
# Define the constraints (A and b are lists)
for j in range(n):
    constraint_expr = [int(A[j][l]) * x[l] for l in range(n)]
    model.Add(sum(constraint_expr) <= int(b[j][0]))

# Keep the objective value at the known optimum V (i.e., sum of the x_l >= V)
V = 112
constraint_obj_val = [-x[l] for l in range(n)]
model.Add(sum(constraint_obj_val) <= -V)

# Call the solver:
solver = cp_model.CpSolver()
solution_printer = VarArraySolutionPrinterWithLimit(x, NS)
solver.parameters.enumerate_all_solutions = True
status = solver.Solve(model, solution_printer)
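For reference, the snippet above assumes a solution-callback class along the lines of the one in the linked OR-Tools guide; a sketch of what VarArraySolutionPrinterWithLimit typically looks like (adapted from that guide, not necessarily the poster's exact code):
from ortools.sat.python import cp_model

class VarArraySolutionPrinterWithLimit(cp_model.CpSolverSolutionCallback):
    # Prints each solution found and stops the search after a given number.
    def __init__(self, variables, limit):
        cp_model.CpSolverSolutionCallback.__init__(self)
        self._variables = variables
        self._solution_count = 0
        self._solution_limit = limit

    def on_solution_callback(self):
        self._solution_count += 1
        print([self.Value(v) for v in self._variables])
        if self._solution_count >= self._solution_limit:
            self.StopSearch()

    def solution_count(self):
        return self._solution_count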
I tried using the SCIP solver and then calling solver.NextSolution(), but each call produced a vector that was less and less optimal: the first one corresponded to a value of, say, V=112 (the optimal one!); the second vector corresponded to a value of 111; the third one, to 108; the fourth to sixth, to 103; etc.
My question is, unfortunately, a bit vague, but here it goes: what's the best way to obtain more than one solution to my optimization problem?
Please let me know if I'm not being clear enough or if you need more/other chunks of the code, etc. This is my first time posting a question here :)
Thanks in advance.
Is your matrix A integral? If not, you are not solving the same problem with SCIP and CP-SAT.
Furthermore, why use SCIP? You should solve both parts with the same solver.
Furthermore, I believe the default solution pool implementation in SCIP will return all solutions found, in reverse order, thus in decreasing quality order.
In Gurobi, you can do something like this to get more than one optimal solution:
solver->SetSolverSpecificParametersAsString("PoolSearchMode=2"); // or-tools [Gurobi]
From the Gurobi Reference [Section 20.1]:
By default, the Gurobi MIP solver will try to find one proven optimal solution to your model. You can use the PoolSearchMode parameter to control the approach used to find solutions. In its default setting (0), the MIP search simply aims to find one optimal solution. Setting the parameter to 1 causes the MIP search to expend additional effort to find more solutions, but in a non-systematic way. You will get more solutions, but not necessarily the best solutions. Setting the parameter to 2 causes the MIP to do a systematic search for the n best solutions. For both non-default settings, the PoolSolutions parameter sets the target for the number of solutions to find.
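If you can talk to Gurobi directly through gurobipy rather than through or-tools, the same idea looks roughly like this (a minimal sketch with a toy stand-in model; the parameters used are Gurobi's PoolSearchMode, PoolSolutions and SolutionNumber):
from gurobipy import GRB, Model

m = Model("pool_demo")                      # toy stand-in for the real model
x = m.addVars(5, vtype=GRB.BINARY, name="x")
m.addConstr(x.sum() <= 3)
m.setObjective(x.sum(), GRB.MAXIMIZE)

m.setParam(GRB.Param.PoolSearchMode, 2)     # systematic search for the n best solutions
m.setParam(GRB.Param.PoolSolutions, 100)    # keep up to 100 solutions in the pool
m.optimize()

for i in range(m.SolCount):                 # iterate over the stored pool solutions
    m.setParam(GRB.Param.SolutionNumber, i)
    print(m.PoolObjVal, [v.Xn for v in m.getVars()])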
Another way to find multiple optimal solutions could be to first solve the original problem to optimality and then add the objective function as a constraint with lower and upper bound as the optimal objective value.

Numerical issue with SCIP: same problem, different optimal values

I'm having the following numerical issue while using the SCIP solver as a callable library via PySCIPOpt:
two equivalent and almost identical models for the same problem yield different optimal values and optimal solutions, with the optimal values having a relative difference of order 1e-6.
An independent piece of software verified that the solutions yielded by both models are feasible for the original problem and that their true values agree with the optimal values reported by SCIP for each model. Until the appearance of this instance a bunch of instances of the same problem had been solved, with both models always agreeing on their optimal solutions and values.
Is it possible to modify the numerical precision of SCIP for comparing the values of primal solutions among them and against dual bounds? Which parameters should be modified to overcome this numerical difficulty?
What I tried
I've tried the following things and the problem persisted:
Turning presolving off with model.setPresolve(SCIP_PARAMSETTING.OFF).
Setting model.setParam('numerics/feastol', epsilon) with different values of epsilon.
Since feasible sets and objective functions agree (see description below), I've checked that the actual coefficients of the objective functions agree by calling model.getObjective() and comparing coefficients for equality for each variable appearing in the objective function.
The only thing that seemed to help was to add some noise (multiplying by numbers of the form 1+eps with small eps) to the coefficients in the objective function of the model yielding the worse solution. This makes SCIP yield the same (and the better) solution for both models if eps is within a certain range.
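For anyone reproducing the settings above, a minimal PySCIPOpt sketch combining them on a toy model (the extra numerics parameters are my assumption about which tolerances are relevant; check the names against your SCIP version):
from pyscipopt import Model, SCIP_PARAMSETTING

model = Model("example")                      # toy stand-in for the real ~30K-variable model
x = model.addVar(vtype="B")
model.setObjective(x, "minimize")

model.setPresolve(SCIP_PARAMSETTING.OFF)      # presolving off, as tried above
model.setParam("numerics/feastol", 1e-9)      # primal feasibility tolerance, as tried above
model.setParam("numerics/dualfeastol", 1e-9)  # assumed relevant: dual feasibility tolerance
model.setParam("numerics/epsilon", 1e-12)     # assumed relevant: absolute zero tolerance
model.optimize()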
Just in case, this is what I get with scip -v in the terminal:
SCIP version 6.0.2 [precision: 8 byte] [memory: block] [mode:
optimized] [LP solver: SoPlex 4.0.2] [GitHash: e639a0059d]
Description of the models
Model (I) has approximately 30K binary variables, say X[i] for i in some index set I. It's feasible and not a hard MIP.
Model (II) has the same variables as model (I) plus ~100 continuous variables, say Y[j] for j in some index set J. Also, model (II) has some constraints like this: X[i_1] + X[i_2] + ... + X[i_n] <= Y[j].
Both objective functions agree and depend only on X[i] variables, the sense is minimization. Note that variables Y[j] are essentially free in model (II), since they are continuous and they do not appear in the objective function. Obviously, there is no point in including the Y[j] variables, but the optimal values shouldn't be different.
Model (II) is the one yielding the better (i.e. smaller) value.
Sorry for the late answer.
So, in general, it can definitely happen that any MIP solver reports different optimal solutions for formulations that are slightly perturbed (even if they are mathematically the same), due to the use of floating-point arithmetic.
Your problem does seem a bit weird though. You say the Y-variables are free, i.e. they do not have any variable bounds attached to them? If this is the case, I would be very surprised if they don't get presolved away.
If you are still interested in this problem, could you provide your instance files to me and I will look at them?

Is it possible to model a min-max-problem using pyomo

Is it possible to formulate a min-max optimization problem of the following form in Pyomo:
min_x max_m g_m(x)   s.t.   L
where the g_m are nonlinear functions (actually constraints of another model) and L is a set of linear constraints?
How would I create the expression for the objective function of the model?
The problem is that using max() on a list of constraint objects returns only the constraint possessing the maximum value at a given point.
I think yes, but unless you find a clever way to reformulate your model, it might not be very efficient.
You could solve once for every possibility of max(g_m(x)), then select the solution with the lowest objective function value.
I fear that the max operation is not something you can add to a minimization model, since it is not a mathematical operation but a solver operation; it lives at the problem level. Keep in mind that when solving a model, Pyomo requires as argument only one sense of optimization (min or max), which makes it unable to understand a min-max sense. Even if it did, how could it know what to maximize or minimize? This is why I suggest you break your problem in two, unless you rework its formulation.
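For what it's worth, the "clever reformulation" hinted at above is usually the standard epigraph trick (not spelled out in either comment): introduce an auxiliary variable t, require t >= g_m(x) for every m, and minimize t, which keeps a single optimization sense. A minimal Pyomo sketch with placeholder g_m:
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var(range(3))                  # hypothetical decision variables
m.t = pyo.Var()                          # auxiliary variable standing in for max_m g_m(x)
m.M = pyo.RangeSet(0, 2)                 # index set of the g_m

def g(model, i):                         # placeholder nonlinear terms g_m(x)
    return (model.x[i] - i) ** 2

# Epigraph constraints: t >= g_m(x) for every m, so minimizing t minimizes the max.
m.epi = pyo.Constraint(m.M, rule=lambda model, i: model.t >= g(model, i))
m.obj = pyo.Objective(expr=m.t, sense=pyo.minimize)
# The linear constraint set L would be added here as ordinary Pyomo constraints.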

parameter within an interval while optimizing

Usually I use Mathematica, but I am now trying to shift to Python, so this question might be a trivial one; I am sorry about that.
Anyway, is there any built-in function in Python similar to Mathematica's Interval[{min,max}]? The link is: http://reference.wolfram.com/language/ref/Interval.html
What I am trying to do is this: I have a function and I am trying to minimize it, but it is a constrained minimization; by that I mean, the parameters of the function are only allowed within a particular interval.
For a very simple example, let's say f(x) is a function with parameter x and I am looking for the value of x which minimizes the function, but x is constrained within an interval (min, max). [Obviously the actual problem is not one-dimensional but multi-dimensional optimization, so different parameters may have different intervals.]
Since it is an optimization problem, of course I do not want to pick the parameter randomly from an interval.
Any help will be highly appreciated, thanks!
If it's a highly non-linear problem, you'll need to use an algorithm such as the Generalized Reduced Gradient (GRG) Method.
The idea of the generalized reduced gradient algorithm (GRG) is to solve a sequence of subproblems, each of which uses a linear approximation of the constraints. (Ref)
You'll need to ensure that certain conditions known as the KKT conditions are met, etc. but for most continuous problems with reasonable constraints, you'll be able to apply this algorithm.
This is a good reference for such problems with a few examples provided. Ref. pg. 104.
Regarding implementation:
While I am not familiar with Python, I have built solver libraries in C++ using templates as well as function pointers, so you can pass functions (for the objective as well as the constraints) as arguments to the solver and get your result back - hopefully in polynomial time for convex problems or in cases where the initial values are reasonable.
If an ability to do that exists in Python, it shouldn't be difficult to build a generalized GRG solver.
The Python Solution:
Edit: here is the Python solution to your problem: Python constrained non-linear optimization
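Not part of the original answers, but in practice the linked approach usually boils down to something like this with SciPy, where each parameter gets its own (min, max) bound (the objective here is a made-up placeholder):
import numpy as np
from scipy.optimize import minimize

def f(params):
    # Placeholder objective; replace with the real function to minimize.
    x, y = params
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

bounds = [(0.0, 5.0),    # x is only allowed in [0, 5]
          (-3.0, 3.0)]   # y is only allowed in [-3, 3]

x0 = np.array([2.0, 0.0])                           # starting point inside the bounds
result = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
print(result.x, result.fun)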

Solve multi-objectives optimization of a graph in Python

I'm trying to solve what seems to be a complicated and time-consuming multi-objective optimization problem on a large-ish graph.
Here's the problem: I want to find a graph of n vertices (n is constant at, say, 100) and m edges (m can change) where a set of metrics are optimized:
Metric A needs to be as high as possible
Metric B needs to be as low as possible
Metric C needs to be as high as possible
Metric D needs to be as low as possible
My best guess is to go with a GA. I am not very familiar with genetic algorithms, but I can spend a little time learning the basics. From what I've read so far, I need to proceed as follows:
Generate a population of graphs of n nodes randomly connected to each other by m = random[1,2000] (for instance) edges
Run the metrics A, B, C, D on each graph
Is an optimal solution found (as defined in the problem)?
If yes, perfect. If not:
Select the best graphs
Crossover
Mutate (add or remove edges randomly?)
Go to 3.
Now, I usually use Python for my little experiments. Could DEAP (https://code.google.com/p/deap/) help me with this problem?
If so, I have many more questions (especially on the crossover and mutate steps), but in short: are the steps (in Python, using DEAP) easy enough to be explained or summarized here?
I can try and elaborate if needed. Cheers.
Disclaimer: I am one of DEAP's lead developers.
Your individual could be represented by a binary string. Each bit would indicate whether there is an edge between two vertices. Therefore, your individuals would be composed of n * (n - 1) / 2 bits, where n is the number of vertices. To evaluate your individual, you would simply need to build an adjacency matrix from the individual genotype. For an evaluation function example, see the following gist https://gist.github.com/cmd-ntrf/7816665.
Your fitness would be composed of 4 objectives, and based on what you said regarding minimization and maximization of each objective, the fitness class would be created like this:
creator.create("Fitness", base.Fitness, weights=(1.0, -1.0, 1.0, -1.0))
The crossover and mutation operators could be the same as in the OneMax example.
http://deap.gel.ulaval.ca/doc/default/examples/ga_onemax_short.html
However, since you want to do multi-objective, you would need a multi-objective selection operator, either NSGA2 or SPEA2. Finally, the algorithm would have to be mu + lambda. For both multi-objective selection and mu + lambda algorithm usage, see the GA Knapsack example.
http://deap.gel.ulaval.ca/doc/default/examples/ga_knapsack.html
So essentially, to get up and running, you only have to merge a part of the OneMax example with the knapsack example while using the proposed evaluation function.
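Putting the pieces of that answer together, a rough sketch of the setup might look like the following (the evaluation function is a placeholder; a real one would rebuild the adjacency matrix as in the linked gist and compute metrics A through D):
import random
from deap import algorithms, base, creator, tools

N_VERTICES = 100
N_BITS = N_VERTICES * (N_VERTICES - 1) // 2     # one bit per possible edge

creator.create("Fitness", base.Fitness, weights=(1.0, -1.0, 1.0, -1.0))
creator.create("Individual", list, fitness=creator.Fitness)

def evaluate(individual):
    # Placeholder: real code would build the adjacency matrix from the bit string
    # (see the gist above) and compute metrics A, B, C and D on the graph.
    edges = sum(individual)
    return float(edges), float(edges), float(edges), float(edges)

toolbox = base.Toolbox()
toolbox.register("attr_bit", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_bit, N_BITS)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxTwoPoint)                 # as in the OneMax example
toolbox.register("mutate", tools.mutFlipBit, indpb=0.01)   # bit flips add/remove edges
toolbox.register("select", tools.selNSGA2)                 # multi-objective selection

pop = toolbox.population(n=100)
pop, _ = algorithms.eaMuPlusLambda(pop, toolbox, mu=100, lambda_=200,
                                   cxpb=0.6, mutpb=0.3, ngen=50)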
I suggest the excellent pyevolve library https://github.com/perone/Pyevolve. This will do most of the work for you, you will only have to define the fitness function and your representation nodes/functions. You can specify the crossover and mutation rate as well.
