Particle swarm optimization with both continuous and discrete variables - python

I want to try to solve my optimization problem using the particle swarm optimization (PSO) algorithm. As I am comfortable with Python, I was looking into the PySwarms toolkit. The issue is that I am not very experienced in this field and don't really know how to account for the integrality constraints of my problem. What are some approaches to dealing with integer variables in PSO? And are there any examples with PySwarms, or any good alternative packages?

You can try the pymoo module, which is an excellent multi-objective optimization tool. It can also solve mixed-variable problems. Although pymoo is primarily designed to solve such problems with genetic algorithms, it does include an implementation of PSO (single-objective, continuous variables only). You may find it useful to solve your mixed-variable problem with a genetic algorithm or one of its variants (e.g. NSGA-II), as sketched below.
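A minimal sketch of what a mixed-variable setup could look like in pymoo; the variable names, bounds, and objective are made-up placeholders, and the mixed-variable API shown here assumes a recent pymoo release (roughly 0.6 or later):

```python
from pymoo.core.problem import ElementwiseProblem
from pymoo.core.variable import Real, Integer
from pymoo.core.mixed import MixedVariableGA
from pymoo.optimize import minimize


class MyMixedProblem(ElementwiseProblem):
    def __init__(self):
        # one continuous and one integer decision variable (placeholder bounds)
        variables = {
            "x": Real(bounds=(-5.0, 5.0)),
            "n": Integer(bounds=(0, 10)),
        }
        super().__init__(vars=variables, n_obj=1)

    def _evaluate(self, X, out, *args, **kwargs):
        # X is a dict of variable values; replace this with your own objective
        out["F"] = X["x"] ** 2 + (X["n"] - 3) ** 2


res = minimize(MyMixedProblem(), MixedVariableGA(pop_size=50),
               ("n_gen", 100), seed=1, verbose=False)
print(res.X, res.F)
```

MixedVariableGA takes care of sampling, crossover, and mutation operators appropriate for each variable type, which is what makes the genetic-algorithm route convenient for mixed problems.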

Related

Solve optimization problem with python library which has a logarithmic objective function

How can I solve an optimization problem with a logarithmic objective function, subject to linear constraints?
(I am looking for a library whose objective function can accept logarithms.)
I found glpk and gurobipy, but they don't seem to be able to handle it.
Based on your comments under the question, I am just going to refer you to one of the more standard libraries for this kind of problem. Note that your objective is concave and it is a maximization problem, so it is straightforward to rewrite it as a convex minimization problem, and your constraints are linear. For such problems you can use CVXOPT (https://cvxopt.org/index.html). In particular, look at some of the examples for how to use the library: https://cvxopt.org/examples/index.html#book-examples
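As a rough illustration of that reformulation, here is a sketch following the analytic-centering pattern from the CVXOPT documentation: maximizing sum(log(x_i)) is rewritten as minimizing -sum(log(x_i)), and a single placeholder constraint sum(x) <= 1 stands in for whatever linear constraints the actual problem has:

```python
from cvxopt import matrix, log, spdiag, solvers

n = 3  # number of variables (placeholder)

def F(x=None, z=None):
    # Objective: minimize -sum(log(x_i)), i.e. maximize sum(log(x_i))
    if x is None:
        # no nonlinear constraints; start strictly inside the domain of log
        return 0, matrix(0.1, (n, 1))
    if min(x) <= 0.0:
        return None                  # outside the domain of the objective
    f = -sum(log(x))
    Df = -(x**-1).T                  # gradient, a 1 x n matrix
    if z is None:
        return f, Df
    H = spdiag(z[0] * x**-2)         # Hessian of z[0] * f
    return f, Df, H

# Placeholder linear constraint G x <= h, here sum(x) <= 1
G = matrix(1.0, (1, n))
h = matrix(1.0)

sol = solvers.cp(F, G=G, h=h)
print(sol['x'])
```

The solution for the actual problem would substitute its own G, h (and A, b for equalities) while keeping the same objective callback structure.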

Algorithm behind standard pulp solver

I'm currently working on an LP optimization problem and looked into PuLP.
I know that PuLP's default solver is PULP_CBC_CMD. I solved a test problem with it and I'm wondering what kind of algorithm this solver actually uses... it doesn't seem to be a simplex, as my problem was interpreted quite differently from what a simplex interpretation would look like.
Also: every other solver for PuLP has to be added manually, right?
Also: what solvers are you working with in Python?
Thanks in advance!
CBC is based on simplex, yes. But, like most solvers, it combines simplex with many other algorithms such as branch-and-bound and cut-generation.
In particular, to solve linear programs it uses Clp: https://github.com/coin-or/Clp
More information on the CBC solver can be found on its site: https://github.com/coin-or/Cbc
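For context, a minimal sketch of solving a small model with PuLP's default bundled CBC solver (the variables and constraint are placeholders); with msg=1 the CBC log shows the presolve, branch-and-bound, and cut-generation phases mentioned above:

```python
import pulp

# toy MILP with placeholder data
prob = pulp.LpProblem("toy", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0, cat="Integer")
prob += 3 * x + 2 * y            # objective
prob += 2 * x + y <= 10          # constraint

# default solver: CBC, shipped with PuLP; msg=1 prints the CBC log
prob.solve(pulp.PULP_CBC_CMD(msg=1))

print(pulp.LpStatus[prob.status], x.value(), y.value())
```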

Getting top 10 sub-optimal solutions computed by GLPK solver for LP in python

I am trying to use GLPK to solve an LP problem. My problem is the routing problem in a computer network. Given the network topology, the capacity of each link, and the traffic demand matrix for each source-destination pair in the network, I want to minimize the maximum link utilization in the network. This is an LP problem and I know how to use GLPK to get the optimal solution.
My problem is that I also want the sub-optimal solutions. Is there any way to get, say, the top 10 suboptimal solutions with GLPK?
Best
For a pure LP (with only continuous variables), the concept of a "next best" solution is problematic (just move an epsilon away and you have another solution). We can define it differently: find the "next best" corner points (a.k.a. bases). This is not so easy to do, but there is a somewhat complex way of doing it by encoding bases using binary variables (link).
If the problem is actually a MIP (with binary variables) it is easier to find "next best" solutions. Some advanced solvers have built-in facilities for this (called a solution pool). Note: glpk does not have this option. Alternatively, we can do this by adding a cut that forbids the best-found solution and then re-solving (link); in that case some problem structure was exploited. A general cut for 0-1 variables is derived here (a sketch of the idea follows below). This can also be done for general integer variables, but then things get a bit messy.
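A rough sketch of that cut-and-resolve idea for 0-1 variables, written with PuLP as the modeling layer and a small made-up knapsack model (swap in pulp.GLPK_CMD() if you want GLPK to do the solving): after each solve, a no-good cut sum_{j: x*_j = 1} (1 - x_j) + sum_{j: x*_j = 0} x_j >= 1 excludes the incumbent, so repeated solves enumerate the best, second best, and so on:

```python
import pulp

# placeholder 0-1 knapsack-style model
values, weights, cap = [6, 5, 4, 3], [4, 3, 3, 2], 6
x = [pulp.LpVariable(f"x{j}", cat="Binary") for j in range(len(values))]
prob = pulp.LpProblem("topk", pulp.LpMaximize)
prob += pulp.lpSum(v * xi for v, xi in zip(values, x))
prob += pulp.lpSum(w * xi for w, xi in zip(weights, x)) <= cap

for k in range(3):                       # enumerate the top 3 solutions
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    if pulp.LpStatus[prob.status] != "Optimal":
        break
    sol = [round(xi.value()) for xi in x]
    print(f"solution {k + 1}: {sol}, objective {pulp.value(prob.objective)}")
    # no-good cut: forbid the solution just found
    prob += (pulp.lpSum(1 - xi for xi, v in zip(x, sol) if v == 1)
             + pulp.lpSum(xi for xi, v in zip(x, sol) if v == 0)) >= 1
```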

How does PuLP linear programming solver work?

I am curious about the algorithm used in PuLP.
Is this LP solver using the simplex method?
PuLP provides a convenient frontend for a number of solvers. Some of these solvers may use simplex, others may not. You can specify the solver in order to better control this, but you'd need to look at the details for the individual solvers to figure out if any meet your criteria.
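For reference, a small sketch of how the solver backend can be inspected and specified in PuLP (GLPK is used here only as an example of an alternative backend and has to be installed separately):

```python
import pulp

# solvers PuLP knows about vs. those actually usable on this machine
print(pulp.listSolvers())
print(pulp.listSolvers(onlyAvailable=True))

prob = pulp.LpProblem("example", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
prob += x          # objective
prob += x >= 1     # constraint

# default: the bundled CBC solver
prob.solve(pulp.PULP_CBC_CMD(msg=0))

# explicitly choosing a different backend, e.g. GLPK's command-line solver
# prob.solve(pulp.GLPK_CMD(msg=0))
```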

Using Pyomo with heuristic solvers

I am using Pyomo to model my optimization problem (MILP) and solve it using Gurobi.
What would be the best, fastest, or easiest way to find a heuristic solution using the Pyomo model, given that I do not care about the gap bounds?
Note: I know that Gurobi has a heuristic solver, but it doesn't say which heuristic algorithms it uses!
Finding a heuristic solution to some MILP problem is complexity-wise as hard as optimizing it!
There is no best, fastest, easiest way in general. You always want to exploit some problem-characteristics.
As a start, just use any MIP solver and tune its parameters to reflect your needs. If you just want any heuristic solution, tune the solver for feasibility: that probably means a higher frequency of heuristic steps and an early stop at the first feasible solution (see the sketch below).
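A rough sketch of what such tuning could look like with Pyomo and Gurobi; the model is a placeholder and the parameter values are only illustrative, but MIPFocus, Heuristics, and SolutionLimit are documented Gurobi parameters for emphasizing feasibility and stopping after the first incumbent:

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeIntegers, SolverFactory, minimize)

# placeholder MILP model
m = ConcreteModel()
m.x = Var(domain=NonNegativeIntegers, bounds=(0, 10))
m.y = Var(bounds=(0, 10))
m.obj = Objective(expr=3 * m.x + 2 * m.y, sense=minimize)
m.c1 = Constraint(expr=m.x + m.y >= 4)

opt = SolverFactory("gurobi")
opt.options["MIPFocus"] = 1        # emphasize finding feasible solutions
opt.options["Heuristics"] = 0.5    # spend more time in heuristics
opt.options["SolutionLimit"] = 1   # stop at the first feasible solution

results = opt.solve(m, tee=True)
print(m.x.value, m.y.value)
```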
Yes, you won't know what Gurobi is using internally. But knowing all of the code would not help much either; it's surely not something you can simply look up on Wikipedia (except for classic techniques like the feasibility pump or relaxation-induced neighborhood search).
If you want to know more about these methods, check out papers on MIP heuristics in general! You will see that most heuristics are tightly coupled with the MIP nature of the problem (although I expect some SAT-solver usage internally in commercial solvers too).
