I am using Pyomo to model my optimization problem (MILP) and solve it using Gurobi.
What would be the best, fastest, or easiest way to find a heuristic solution using the Pyomo model, given that I do not care about the optimality gap?
Note: I know that Gurobi has built-in heuristics, but it doesn't say which heuristic algorithms it uses!
Finding a feasible solution to a general MILP is, complexity-wise, as hard as optimizing it!
There is no best, fastest, or easiest way in general; you always want to exploit some problem characteristics.
As a start, just use any MIP solver and tune its parameters to reflect your needs. If you want just any feasible solution, tune the solver for feasibility, which typically means a higher frequency of heuristic steps and an early stop at the first feasible solution.
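For instance, with Pyomo and Gurobi that tuning might look like the sketch below. The parameter names (MIPFocus, SolutionLimit, Heuristics) are Gurobi's own; `model` stands for your Pyomo model:

from pyomo.environ import SolverFactory

solver = SolverFactory('gurobi')
solver.options['MIPFocus'] = 1        # 1 = focus on finding feasible solutions
solver.options['SolutionLimit'] = 1   # stop at the first feasible solution
solver.options['Heuristics'] = 0.5    # spend more time in heuristics (default is 0.05)
results = solver.solve(model, tee=True)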
Yes, you won't know what Gurobi is using internally. But knowing all of the code would not help much either; it's surely not something you could look up on Wikipedia (except for classics like the feasibility pump or relaxation-induced neighborhood search, RINS).
If you want to know more about these methods, check out papers on MIP heuristics in general! You will see that most heuristics are tightly coupled to the MIP nature of the problem (although I expect some internal SAT-solver usage in commercial solvers too).
Related
I want to try to solve my optimization problem using the particle swarm optimization (PSO) algorithm. As I am comfortable with Python, I was looking into the PySwarms toolkit. The issue is that I am not really experienced in this field and don't really know how to account for the integrality constraints of my problem. I am looking for advice on approaches to handling integer variables in PSO, and maybe some examples with PySwarms or any good alternative packages.
You can try the pymoo module, which is an excellent multi-objective optimization tool that can also handle mixed-variable problems. Although pymoo is primarily designed to solve such problems with genetic algorithms, it also includes an implementation of PSO (single-objective, continuous variables only). You may find it useful to try your mixed-variable problem with a genetic algorithm or one of its variants (e.g. NSGA-II).
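A minimal sketch of what that could look like with pymoo's mixed-variable support; this assumes a recent pymoo (0.6+), and the toy objective is made up for illustration:

from pymoo.core.mixed import MixedVariableGA
from pymoo.core.problem import ElementwiseProblem
from pymoo.core.variable import Integer, Real
from pymoo.optimize import minimize

class ToyMixedProblem(ElementwiseProblem):
    def __init__(self):
        # One continuous and one integer decision variable.
        variables = {
            "x": Real(bounds=(0.0, 10.0)),
            "n": Integer(bounds=(0, 5)),
        }
        super().__init__(vars=variables, n_obj=1)

    def _evaluate(self, X, out, *args, **kwargs):
        # X is a dict; integrality of "n" is handled by the algorithm.
        out["F"] = (X["x"] - 3.0) ** 2 + (X["n"] - 2) ** 2

res = minimize(ToyMixedProblem(), MixedVariableGA(pop_size=50),
               ("n_gen", 50), seed=1, verbose=False)
print(res.X, res.F)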
I am trying to use GLPK to solve an LP problem: the routing problem in a computer network. Given the network topology, each link's capacity, and the traffic demand matrix for each source-destination pair, I want to minimize the maximum link utilization in the network. This is an LP problem, and I know how to use GLPK to get the optimal solution.
My problem is that I also want sub-optimal solutions. Is there any way to get, say, the top 10 suboptimal solutions with GLPK?
For a pure LP (with only continuous variables), the concept of finding "next best" solutions is very difficult (just move an epsilon away and you have another solution). We can define this differently: find "next best" corner points (a.k.a. basic solutions). This is not so easy to do, but there is a somewhat complex way of encoding bases using binary variables (link).
If the problem is actually a MIP (with binary variables), it is easier to find "next best" solutions. Some advanced solvers have built-in facilities for this (called a solution pool); note that GLPK does not have this option. Alternatively, we can add a cut that forbids the best-found solution and then re-solve (link). In that case we exploit some structure. A general cut for 0-1 variables is derived here. This can also be done for general integer variables, but then things get a bit messy.
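For illustration, the standard "no-good" cut for 0-1 variables could be added like this (a sketch in Pyomo; the model `m`, its binary variables `m.x`, and the index set `m.I` are assumptions for the example):

from pyomo.environ import Constraint, value

def add_no_good_cut(m, name):
    # Which binaries are at 1 / at 0 in the incumbent solution?
    ones = [i for i in m.I if value(m.x[i]) > 0.5]
    zeros = [i for i in m.I if value(m.x[i]) < 0.5]
    # At least one binary variable must flip relative to the incumbent.
    m.add_component(name, Constraint(
        expr=sum(1 - m.x[i] for i in ones) + sum(m.x[i] for i in zeros) >= 1))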
I have a simple solver which I am using to solve a knapsack-like problem: I am looking to maximize a value while keeping some constraints satisfied.
# Create a CBC-backed mixed-integer solver and maximize the objective.
self.solver = pywraplp.Solver(
    'FD',
    pywraplp.Solver.CBC_MIXED_INTEGER_PROGRAMMING
)
self.objective = self.solver.Objective()
self.objective.SetMaximization()
self.solver.Solve()  # note: pywraplp's method is Solve(), not solve()
I left out the code that defines the variables and constraints, but my question is: running this code gets me the optimal lineup. Is there a way to find the 2nd, 3rd, etc. best solutions?
With CBC, no.
CPLEX and Gurobi do support keeping multiple solutions (a solution pool), but in OR-Tools this is exposed only for Gurobi, through the NextSolution() method.
If your model is purely integral, you can have a look at the CP-SAT solver.
The catch is that unless you explore all solutions, any "second-best" solution you get is heuristic at best.
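If you go the CP-SAT route, enumerating all solutions of a feasibility model is directly supported in recent OR-Tools versions. A toy sketch (the model here is made up):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = [model.NewBoolVar(f'x{i}') for i in range(4)]
model.Add(sum(x) <= 2)  # toy constraint

class SolutionPrinter(cp_model.CpSolverSolutionCallback):
    def __init__(self, variables):
        cp_model.CpSolverSolutionCallback.__init__(self)
        self._variables = variables

    def on_solution_callback(self):
        # Called once per feasible solution found.
        print([self.Value(v) for v in self._variables])

solver = cp_model.CpSolver()
solver.parameters.enumerate_all_solutions = True
solver.Solve(model, SolutionPrinter(x))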
In a knapsack-like problem it is straightforward to obtain the next-best solution with an iterative procedure.
After the problem has been solved for the first time, you can add a constraint whose left-hand side sums over all items included in the optimal solution and whose right-hand side limits this sum to one less than the number of those items.
This is essentially a cut that excludes the first optimal solution from the solution space, so solving the problem again after adding the constraint yields a different solution.
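Put together with the pywraplp setup from the question, the iteration could look roughly like this (a sketch; the knapsack data is illustrative):

from ortools.linear_solver import pywraplp

def k_best_knapsack(values, weights, capacity, k=3):
    solver = pywraplp.Solver.CreateSolver('CBC')
    x = [solver.BoolVar(f'x{i}') for i in range(len(values))]
    solver.Add(sum(w * xi for w, xi in zip(weights, x)) <= capacity)
    solver.Maximize(sum(v * xi for v, xi in zip(values, x)))
    solutions = []
    for _ in range(k):
        if solver.Solve() != pywraplp.Solver.OPTIMAL:
            break
        chosen = [xi for xi in x if xi.solution_value() > 0.5]
        if not chosen:
            break  # empty selection is optimal; cutting it off would be infeasible
        solutions.append(([v.name() for v in chosen], solver.Objective().Value()))
        # Cut off the incumbent: at least one chosen item must be dropped.
        solver.Add(sum(chosen) <= len(chosen) - 1)
    return solutions

print(k_best_knapsack(values=[10, 7, 5, 3], weights=[4, 3, 2, 1], capacity=6))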
I am curious about the algorithm in PuLP.
Is this LP solver using the simplex method?
PuLP provides a convenient frontend to a number of solvers. Some of these solvers may use simplex; others may not. You can specify the solver in order to better control this, but you'd need to look at the details of the individual solvers to figure out whether any meet your criteria.
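For example, you can list the backends PuLP finds on your machine and pass one explicitly to solve(); a small sketch with a made-up model:

import pulp

prob = pulp.LpProblem('demo', pulp.LpMaximize)
x = pulp.LpVariable('x', lowBound=0)
y = pulp.LpVariable('y', lowBound=0)
prob += 3 * x + 2 * y   # objective
prob += x + y <= 4      # constraints
prob += x + 3 * y <= 6

print(pulp.listSolvers(onlyAvailable=True))  # backends installed locally
prob.solve(pulp.PULP_CBC_CMD(msg=False))     # pick the bundled CBC explicitly
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))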
I am using z3py to solve a set of equations. How would I determine its runtime complexity?
The equations are over bitvector (BitVec) variables that must satisfy a set of linear constraints. The documentation and the guide do not give a way to calculate the runtime.
Are you asking for the (worst-case) time complexity of the solvers used? If so, I don't think you'll get a good answer: it depends on the (combination of) logic(s) into which your problem falls, e.g. QF_BV or UFNIA, and then on the (semi-)decision procedures that the solver implements for those logics.
Have a look at papers from the Z3 authors (https://github.com/Z3Prover/z3/wiki/Publications) - they might provide some details.
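If what you actually need is the empirical runtime rather than a complexity bound, you can simply time check() and inspect the solver's statistics; a small z3py sketch (the equations are made up):

import time
from z3 import BitVec, Solver

x = BitVec('x', 32)
y = BitVec('y', 32)

s = Solver()
s.add(x + y == 10, x - y == 4)  # toy linear bitvector equations

t0 = time.perf_counter()
result = s.check()
print(result, f'({time.perf_counter() - t0:.4f}s)')
print(s.statistics())  # solver-reported counters; contents vary by tactic/logic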