Solving a minimization problem over discrete matrices with constraints - python

I'm trying to solve an order-assignment minimization problem with Python, in which I distribute M orders over N workers. Every worker has a basic energy level X_i, gathered in the vector X. Also, every order has a specific energy consumption E_j, gathered in E. With that being said, I'm trying to solve the following problem:

min_A || X + A*E - Y ||_2

where A is an N x M binary assignment matrix (A_ij = 1 if worker i takes order j) and Y is some optimal energy level, the norm being the 2-norm. The constraint is that every column of A adds up to exactly one, since each order must be done, and can only be done by one worker. I looked at scipy.optimize but, as far as I can tell, it doesn't support this sort of optimization.
Does anyone know of any tools in Python for this sort of discrete optimization problem?

The answer depends on the norm. If you want the 2-norm, this is a MIQP (Mixed Integer Quadratic Programming) problem. It is convex, so there are quite a number of solvers around (e.g. Cplex, Gurobi, Xpress -- these are commercial solvers). It can also be handled by an MINLP solver such as BonMin (open source). Some modeling tools that can help are Pyomo and CVXPY.
If you want the 1-norm, this can be formulated as a linear MIP (Mixed Integer Programming) model. There are quite a few MIP solvers such as Cplex, Gurobi, Xpress (commercial) and CBC, GLPK (open source). Some modeling tools are Pyomo, CVXPY, and PuLP.
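For instance, here is a minimal CVXPY sketch of the 2-norm version. The sizes, the random data, and the scalar target Y are illustrative assumptions, and a MIQP-capable backend (e.g. Gurobi, Cplex, Mosek) has to be installed:

import cvxpy as cp
import numpy as np

# Toy data -- sizes and values are illustrative assumptions
N, M = 5, 12
X = np.random.rand(N)         # basic energy level per worker
E = np.random.rand(M)         # energy consumption per order
Y = (X.sum() + E.sum()) / N   # assumed scalar target level

A = cp.Variable((N, M), boolean=True)   # A[i, j] = 1 if worker i takes order j
objective = cp.Minimize(cp.sum_squares(X + A @ E - Y))
constraints = [cp.sum(A, axis=0) == 1]  # each order done by exactly one worker

prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.GUROBI)  # any MIQP-capable backend works here
print(A.value)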

Related

Result inconsistencies between solvers in CVXPY

I am running an LP optimization using CVXPY and testing a number of solvers. Previously, when running the same mathematical formulation of my problem, I received consistent results regardless of which solver was used (Gurobi, CBC, or ECOS). Since the middle of 2021, the results with ECOS have remained consistent, but the other solvers show discrepancies.
For example, I am solving a power distribution problem where the results provide the dispatch of power generators throughout an operating horizon.
Previous Result (and with current ECOS)
G1 = [0,0,0,140,140,140,140,140,140,140,140,140,140,0,0,0,0,0,0,0,0,0,0,0,0]
Current Result
G1 = [0,0,0,140,140,140,140,0,140,140,140,140,140,0,0,0,0,0,0,0,0,0,0,0,0]
When I add a penalty to my objective function to avoid this behaviour, solve times go up quite substantially (about 5x).
My question is whether anyone has seen similar behaviour and found a solution.
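For what it's worth, dispatch problems like this often have several solutions with exactly the same objective value, and different solvers (or solver versions) may legitimately return different ones; a small tie-breaking penalty is the usual fix. A minimal CVXPY sketch of that idea, with all names and data hypothetical:

import cvxpy as cp

# Hypothetical toy: allocate a fixed total energy over 25 periods with a
# per-period cap. Many allocations are equally cheap, so solvers may return
# any of them; the small switching penalty breaks the ties deterministically.
T, cap = 25, 140.0
G = cp.Variable(T, nonneg=True)
switching = cp.sum(cp.abs(cp.diff(G)))   # on/off churn between periods
prob = cp.Problem(cp.Minimize(cp.sum(G) + 1e-3 * switching),
                  [G <= cap, cp.sum(G) == 10 * cap])
prob.solve(solver=cp.ECOS)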

Algorithm behind standard pulp solver

I'm currently working on an LP optimization problem and looked into PuLP.
I know that PuLP's default solver is PULP_CBC_CMD. I solved a test problem with it, and I'm wondering what kind of algorithm this solver actually uses... it doesn't seem to be plain simplex, as my problem was handled quite differently from what a textbook simplex run would look like.
Also: every other solver for PuLP has to be added to PuLP manually, right?
Also: what solvers are you guys working with in Python?
Thanks in advance!
CBC is based on simplex, yes. But, like most MIP solvers, it combines simplex with many other algorithms such as branch-and-bound and cut generation.
In particular, to solve linear programs it uses Clp: https://github.com/coin-or/Clp
More information on the CBC solver in their site: https://github.com/coin-or/Cbc
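For context, here is a minimal PuLP run against the bundled CBC backend (the toy LP itself is made up); other backends have to be installed separately and passed to solve() explicitly:

import pulp

# Toy LP, purely illustrative
prob = pulp.LpProblem("example", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0)
prob += x + 2 * y      # objective
prob += x + y >= 4     # constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))   # the CBC binary shipped with PuLP
print(pulp.LpStatus[prob.status], pulp.value(x), pulp.value(y))
# e.g. prob.solve(pulp.GLPK_CMD()) would need a separate GLPK install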

Regularizing viscosity with scipy's ode solvers

Consider for the sake of simplicity the following equation (Burgers' equation):

d/dt u = -u*dx(u) + nu*d2x(u)
Let's solve it using scipy (in my case scipy.integrate.ode.set_integrator("zvode", ..).integrate(T)) with a variable time-step solver.
The issue is the following: if we use the naïve implementation in Fourier space,

u[t+dt] = u[t] + dt * ( -u[t]*dx(u[t]) + nu*d2x(u[t]) )

then the viscosity term nu * d2x(u[t]) can cause an overshoot if the time step is too big. This can lead to a fair amount of noise in the solutions, or even to (fake) diverging solutions (even with stiff solvers, on slightly more complex versions of this equation).
One way to regularize this is to evaluate the viscosity term at step t+dt; in Fourier space, where d2x becomes -k^2, the update step becomes

uhat[t+dt] = ( uhat[t] + dt * fft(-u[t]*dx(u[t])) ) / (1 + dt*nu*k^2)
This solution works well when programmed explicitly. How can I use scipy's variable-step ODE solvers to implement it? To my surprise, I haven't found any documentation on this fairly elementary but thorny issue...
You actually can't; or, at the other extreme, odeint and ode->zvode already do that for any given problem.
To the first: you would need to pass the two parts of the equation separately, and that is not part of the solver interface. Look at DDE and SDE solvers, where such a partition of the equation is actually required.
To the second: odeint and ode->zvode use implicit multi-step methods, which means that the value of u(t+dt) and the right-hand side there already enter the computation of the underlying local approximation.
You could still try to hack your original approach into the solver by providing a Jacobian function that only contains the second derivative term, but quite probably you will not achieve an improvement.
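If you do want to try that hack, here is a sketch of how it would look with the ode/zvode interface; the grid setup and parameters are assumptions, and the Jacobian carries only the diagonal viscosity part:

import numpy as np
from scipy.integrate import ode

# Illustrative periodic grid (n, nu, L are assumptions)
n, nu, L = 128, 0.05, 2 * np.pi
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

def f(t, uhat):
    # full right-hand side in Fourier space: -u*dx(u) + nu*d2x(u)
    u = np.fft.ifft(uhat)
    return -np.fft.fft(u * np.fft.ifft(1j * k * uhat)) - nu * k**2 * uhat

def jac(t, uhat):
    # only the (diagonal) viscosity term, as suggested above
    return np.diag(-nu * k**2).astype(complex)

solver = ode(f, jac).set_integrator("zvode", method="bdf")
u0 = np.sin(np.linspace(0, L, n, endpoint=False))
solver.set_initial_value(np.fft.fft(u0), 0.0)
solver.integrate(0.1)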
You could operator-partition the ODE and solve the linear part separately by introducing
vhat(k,t) = exp(nu*k^2*t)*uhat(k,t)
so that
d/dt vhat(k,t) = -i*k*exp(nu*k^2*t)*conv(uhat(.,t),uhat(.,t))(k)
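A minimal sketch of that transform with numpy's FFT and scipy.integrate.solve_ivp; grid size, viscosity, horizon, and initial condition are all made up, and solve_ivp's explicit RK methods accept a complex state, playing the role of zvode here:

import numpy as np
from scipy.integrate import solve_ivp

n, nu, L = 128, 0.05, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

def rhs(t, vhat):
    # undo the integrating factor: uhat = exp(-nu*k^2*t) * vhat
    uhat = np.exp(-nu * k**2 * t) * vhat
    u = np.fft.ifft(uhat).real
    ux = np.fft.ifft(1j * k * uhat).real
    # only the non-stiff nonlinear term remains on the right-hand side
    return np.exp(nu * k**2 * t) * np.fft.fft(-u * ux)

vhat0 = np.fft.fft(np.sin(x)).astype(complex)   # factor is 1 at t=0
sol = solve_ivp(rhs, (0.0, 0.1), vhat0, method="RK45", rtol=1e-8)
u_T = np.fft.ifft(np.exp(-nu * k**2 * sol.t[-1]) * sol.y[:, -1]).real
# the factor exp(nu*k^2*t) grows quickly, so for longer horizons one
# restarts the transform (resets t to 0) every few steps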

Alternatives to fmincon in python for constrained non-linear optimisation problems

I am having trouble solving an optimisation problem in python, involving ~20,000 decision variables. The problem is non-linear and I wish to apply both bounds and constraints to the problem. In addition to this, the gradient with respect to each of the decision variables may be calculated.
The bounds are simply that each decision variable must lie in the interval [0, 1], and there is a monotonicity constraint placed upon the variables, i.e. each decision variable must be greater than the previous one.
I initially intended to use the L-BFGS-B method provided by the scipy.optimize package however I found out that, while it supports bounds, it does not support constraints.
I then tried using the SLSQP method, which does support both constraints and bounds. However, because it requires more memory than L-BFGS-B and I have a large number of decision variables, I ran into memory errors fairly quickly.
The paper this problem comes from used the fmincon solver in Matlab, which, to my knowledge, supports both bounds and constraints and is more memory-efficient than the SLSQP method provided by scipy. I do not have access to Matlab, however.
Does anyone know of an alternative I could use to solve this problem?
Any help would be much appreciated.
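One option worth trying is scipy's trust-constr method, which accepts bounds together with sparse linear constraints and avoids SLSQP's dense quadratic subproblems; IPOPT (via the cyipopt package) is another commonly used large-scale alternative. A sketch with a stand-in objective, where the real objective and gradient would be substituted:

import numpy as np
import scipy.sparse as sp
from scipy.optimize import minimize, LinearConstraint

n = 20_000
target = np.linspace(0.0, 1.0, n)

def objective(x):
    # stand-in objective; returns value and gradient together (jac=True below)
    return np.sum((x - target) ** 2), 2 * (x - target)

# monotonicity x[i+1] >= x[i] as a sparse first-difference constraint D @ x >= 0
D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
monotone = LinearConstraint(D, lb=0.0, ub=np.inf)

res = minimize(objective, x0=np.linspace(0.0, 1.0, n), jac=True,
               method="trust-constr", bounds=[(0.0, 1.0)] * n,
               constraints=[monotone])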

How does PuLP linear programming solver work?

I am curious about the algorithm in PuLP.
Is this LP solver using the simplex method?
PuLP provides a convenient frontend for a number of solvers. Some of these solvers may use simplex, others may not. You can specify the solver in order to better control this, but you'd need to look at the details for the individual solvers to figure out if any meet your criteria.
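For example, you can list the backends PuLP can actually find on your machine and then pass one explicitly (a minimal sketch):

import pulp

# list the solver backends PuLP can find on this machine
print(pulp.listSolvers(onlyAvailable=True))    # e.g. ['PULP_CBC_CMD']

# then pass one explicitly instead of relying on the default:
# prob.solve(pulp.GLPK_CMD())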
