I want to find a center point of a dataset, subject to some given inequality constraints, using cvxpy.
I have tried to add a constraint such that the distance from the optimal point (opt_point) to the origin is greater than a constant, e.g., 300. There are two ways to do this.
First,
constraints = [opt_point[0]**2 + opt_point[1]**2 >= 300]
This did not work: I realized that the problem can be solved with a <= inequality, but not with >=. After reading some pages, I realized that the quad_form function is convex, so I tried:
constraints = [cp.quad_form(opt_point, P)>=300]
However, neither version can solve the problem. Here is the full code:
import cvxpy as cp
import numpy as np

opt_point = cp.Variable(2)
cost = 0
for i in range(len(data)):
    cost += cp.sum_squares(opt_point - data[i, :])
assert cost.is_convex()
P = np.eye(2)  # identity, so quad_form is just the squared norm
constraints = [cp.quad_form(opt_point, P) >= 300]  # this raises the DCP error below
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
opt_point = opt_point.value
Could you please explain how I can impose this kind of inequality constraint without triggering the following error?
Problem does not follow DCP rules. However, the problem does follow DGP rules. Consider calling this function with gp=True.
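For contrast, a minimal sketch (with synthetic data, purely illustrative) of the <= version, which is convex and passes DCP:

import cvxpy as cp
import numpy as np

data = np.random.randn(10, 2)  # synthetic stand-in for the dataset
opt_point = cp.Variable(2)
cost = sum(cp.sum_squares(opt_point - data[i, :]) for i in range(len(data)))

# squared distance *at most* 300 keeps the feasible set a (convex) ball;
# requiring it to be >= 300 describes the exterior of a ball, which is
# non-convex, hence the DCP failure
constraints = [cp.sum_squares(opt_point) <= 300]
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(opt_point.value)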
I'm trying to use the Gurobi solver in PuLP to solve a big linear programming problem. The status Gurobi returned is 2, which means an optimal solution is available, but the solution doesn't meet my expectations. Here is the part where the problem occurs:
import pulp
from pulp import LpVariable, LpMinimize, LpInteger, LpBinary

MyProblem = pulp.LpProblem("MyProblem", LpMinimize)  # define the problem
# define variables
var_shengmu = {i: LpVariable(name=f"var_shengmu{i}", lowBound=0, upBound=100, cat=LpInteger)
               for i in range(N)}
var_qiuyi_shengmu = {i: {j: LpVariable(name=f"var_qiuyi_shengmu{i}{j}", cat=LpBinary)
                         for j in range(i+1, N)}
                     for i in range(N)}
# add constraints
inf = 10**6
eps = 10**(-5)
for i in range(N):
    for j in range(i+1, N):
        if some_condition:  # if var_shengmu[i] and var_shengmu[j] should be different
            # constraint (a)
            MyProblem += (var_shengmu[i] - var_shengmu[j]) <= -eps + inf*var_qiuyi_shengmu[i][j]
            # constraint (b)
            MyProblem += (var_shengmu[j] - var_shengmu[i]) >= eps - inf*(1 - var_qiuyi_shengmu[i][j])
The last two lines above are inequality constraints; I want to make var_shengmu[i] and var_shengmu[j] different. The idea is that if var_shengmu[i] == var_shengmu[j], then whatever value var_qiuyi_shengmu[i][j] takes, constraints (a) and (b) cannot both be satisfied.
However, in the solution the variables var_shengmu are all 0 (from var_shengmu[0] to var_shengmu[N-1]).
I followed this answer to print the constraints, and to my surprise I found that for all i and j, constraint (b) listed above is not satisfied. Some of my outputs are here:
-1000000*var_qiuyi_shengmu25472548 - var_shengmu17040 + var_shengmu17046 <= -1e-05
is satisfied
-1000000*var_qiuyi_shengmu25472548 - var_shengmu17040 + var_shengmu17046 >= -999999.99999
not satisfied
I'm extremely bewildered why the status is optimal but some constraints are ignored. Did I do something wrong? Thanks in advance for your help!
By the way, you may wonder why I don't have an objective function. That's because the code I put here is only a small part of my problem; the objective function is defined in other parts.
It is solved here: https://github.com/coin-or/pulp/issues/592
PuLP's solvers work with floating-point arithmetic subject to certain tolerances. I just increased eps, and it worked.
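A sketch of that fix (the exact eps value here is my illustration, not from the issue): since the var_shengmu variables are integers, two of them that differ must differ by at least 1, so eps can sit far above the solver's feasibility tolerance:

eps = 1  # two distinct integers differ by at least 1; 1e-5 drowned in the tolerances
inf = 10**6
for i in range(N):
    for j in range(i + 1, N):
        if some_condition:
            MyProblem += var_shengmu[i] - var_shengmu[j] <= -eps + inf * var_qiuyi_shengmu[i][j]
            MyProblem += var_shengmu[j] - var_shengmu[i] >= eps - inf * (1 - var_qiuyi_shengmu[i][j])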
I am trying to build a MIP model in Pyomo and am having trouble creating an OR constraint. An OR constraint r = or{x1, ..., xn} states that the binary resultant variable r should be 1 if and only if at least one of the operand variables x1, ..., xn is equal to 1. I found no function in Pyomo that can create an OR constraint, so I used my own code:
from pyomo.environ import ConcreteModel, Var, Constraint, Binary

m = ConcreteModel()
m.r = Var(within=Binary)
m.x1 = Var(within=Binary)
m.x2 = Var(within=Binary)
# Python's min() and sum() are evaluated immediately on the variables'
# current values, so this does not build a real OR constraint
m.or_constraint = Constraint(expr=m.r == min(m.x1 + m.x2, 1))
Then I ran the code and got an error message saying that variables m.x1 and m.x2 should be initialized. I initialized them with 1 and found that m.or_constraint degenerated into forcing m.r to equal 1. In other words, m.or_constraint just used the initial values of m.x1 and m.x2 to build the constraint and never updated them in the process of solving the MIP problem.
I tried different expressions in Pyomo to create this constraint, like calling a rule function in the constraint definition. However, every time I got the same result.
Could you direct me on how to create an OR constraint in Pyomo?
The relation
y = x(1) or x(2) or ... or x(n) ( same as y = max{x(i)} )
y, x(i) ∈ {0,1} ( all binary variables )
can be formulated as a set of n+1 linear inequalities
y <= sum(i, x(i))
y >= x(i) for all i
It is also possible to write this as just two constraints:
y <= sum(i,x(i))
y >= sum(i,x(i))/n
The first version is tighter, however. That is the one I typically use.
Note: the first version is so tight that we can even relax y to be continuous between 0 and 1. Whether this is advantageous for the solver requires a bit of experimentation. For the second version, it is required that y is binary.
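A minimal Pyomo sketch of the tighter formulation (the model and variable names here are illustrative, not from the question):

from pyomo.environ import ConcreteModel, Var, Constraint, RangeSet, Binary

n = 3  # illustrative size
m = ConcreteModel()
m.I = RangeSet(n)
m.x = Var(m.I, within=Binary)
m.y = Var(bounds=(0, 1))  # may stay continuous with the tight formulation

# y <= sum(i, x(i))
m.upper = Constraint(expr=m.y <= sum(m.x[i] for i in m.I))

# y >= x(i) for all i
def lower_rule(mdl, i):
    return mdl.y >= mdl.x[i]
m.lower = Constraint(m.I, rule=lower_rule)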
Actually, there is another way to create an 'or' statement with a binary variable (only for two constraints). Suppose the model is
Obj: Min(OF) = f(x)
s.t. x <= A or x <= B
We can add a binary variable Y and a sufficiently large number M (e.g., 100000) to reformulate this model as
Obj: Min(OF) = f(x, Y)
s.t. x <= A + M*Y
x <= B + M*(1 - Y)
Y ∈ {0, 1}
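A minimal Pyomo sketch of this big-M trick (A, B, M, and the objective are illustrative placeholders):

from pyomo.environ import ConcreteModel, Var, Constraint, Objective, Binary, maximize

A, B, M = 10.0, 20.0, 100000.0  # placeholder data; M just needs to be large enough
mdl = ConcreteModel()
mdl.x = Var()
mdl.y = Var(within=Binary)

# x <= A or x <= B: depending on Y, one constraint binds and the other
# is relaxed by the big-M term
mdl.c1 = Constraint(expr=mdl.x <= A + M * mdl.y)
mdl.c2 = Constraint(expr=mdl.x <= B + M * (1 - mdl.y))
mdl.obj = Objective(expr=mdl.x, sense=maximize)  # placeholder objective; optimum is max(A, B)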
I'm dealing with a mathematical optimization problem; in more detail, it is a semi-definite program (see the code snippet below), which is used to solve another problem iteratively.
It is required that the equality constraints are met to within ~10^(-10) or better. Even if I start my optimization with a matrix M that meets the constraints to within 10^(-12) or better, the optimization result X doesn't satisfy the requirements for X+M very closely (at least two or three of them are only met to about 10^(-7)).
Is there a way to improve the accuracy with which cvxpy (with MOSEK) meets the constraints?
Sidenote: I got the initial value of my optimization as the solution of exactly the same problem, so it seems possible to achieve higher accuracy, but I guess I was just lucky. Unfortunately, this matrix isn't close to the minimum, so I need to do another iteration.
import cvxpy as cp

# defining variable
X = cp.Variable((m, m), hermitian=True)
# pos. semi-definite constraint
constraints = [M + X >> 0]
# all the other constraints
for i in range(0, len(b)):
    constraints += [cp.trace(A[i] @ (M + X)) == b[i]]
# problem formulation
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(C @ X))), constraints)
Result = prob.solve(solver=cp.MOSEK, verbose=False, parallel=True)
Here M and C are known matrices, and A and b are lists of matrices and scalars, respectively.
I've already tried to find an answer in the documentation and on the internet, but I couldn't find a solution. Therefore, I'd be grateful for any help!
Thanks in advance!
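For reference, a sketch of the kind of tweak I'm looking for: cvxpy can forward solver-specific options to MOSEK through the mosek_params argument, and tightening the conic interior-point tolerances might help (the parameter values below are illustrative, and I don't know whether the solver can actually reach them on this problem):

# Hedged sketch: ask MOSEK for tighter interior-point tolerances via cvxpy.
tight = {
    "MSK_DPAR_INTPNT_CO_TOL_PFEAS": 1e-12,    # primal feasibility
    "MSK_DPAR_INTPNT_CO_TOL_DFEAS": 1e-12,    # dual feasibility
    "MSK_DPAR_INTPNT_CO_TOL_REL_GAP": 1e-12,  # relative optimality gap
}
Result = prob.solve(solver=cp.MOSEK, verbose=False, mosek_params=tight)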
I have been using lpsolve in Python for a very long time, with generally good results. I have even written my own Cython wrapper to it to overcome the mess that the original Python wrapper is.
My Cython wrapper is much faster in the setup of the problem, but of course the solution time only depends on the lpsolve C code and has got nothing to do with Python.
I am only solving real-valued linear problems, no MIPs. The latest (and one of the biggest) LPs I had to solve had a constraint matrix of size about 5,000 x 3,000, and the setup plus solve takes about 150 ms. The problem is, I have to solve a slightly modified version of the same problem many times in my simulation (constraints, RHS, bounds and so on are time-dependent across many timesteps). The constraint matrix is usually very sparse, with about 0.1%-0.5% NNZ or less.
Using lpsolve, the setup and solve of the problem is as easy as the following:
import numpy
from lp_solve.lp_solve import PRESOLVE_COLS, PRESOLVE_ROWS, PRESOLVE_LINDEP, PRESOLVE_NONE, SCALE_DYNUPDATE, lpsolve
# The constraint_matrix is a 2D NumPy array and it contains
# both equality and inequality constraints
# And I need it to be single precision floating point, like all
# other NumPy arrays from here onwards
m, n = constraint_matrix.shape
n_le = len(inequality_constraints)
n_e = len(equality_constraints)
# Setup RHS vector
b = numpy.zeros((m, ), dtype=numpy.float32)
# Assign equality and inequality constraints (= and <=)
b[0:n_e] = equality_constraints
b[-n_le:] = inequality_constraints
# Tell lpsolve which rows are equalities and which are inequalities
e = numpy.asarray(['LE']*m)
e[0:-n_le] = 'EQ'
# Make the LP
lp = lpsolve('make_lp', m, n)
# Some options for scaling the problem
lpsolve('set_scaling', lp, SCALE_DYNUPDATE)
lpsolve('set_verbose', lp, 'IMPORTANT')
# Use presolve as it is much faster
lpsolve('set_presolve', lp, PRESOLVE_COLS | PRESOLVE_ROWS | PRESOLVE_LINDEP)
# I only care about maximization
lpsolve('set_sense', lp, True)
# Set the objective function of the problem
lpsolve('set_obj_fn', lp, objective_function)
lpsolve('set_mat', lp, constraint_matrix)
# Tell lpsolve about the RHS
lpsolve('set_rh_vec', lp, b)
# Set the constraint type (equality or inequality)
lpsolve('set_constr_type', lp, e)
# Set upper bounds for variables - lower bounds are automatically 0
lpsolve('set_upbo', lp, ub_values)
# Solve the problem
out = lpsolve('solve', lp)
# Retrieve the solution for all the variables
vars_sol = numpy.asarray([lpsolve('get_var_primalresult', lp, i) for i in range(m + 1, m + n + 1)], dtype=numpy.float32)
# Delete the problem, timestep done
lpsolve('delete_lp', lp)
For reasons that are too long to explain, my NumPy arrays are all single precision floating point arrays, and I'd like them to stay that way.
Now, after all this painful introduction, I would like to ask: does anyone know of another library (with reasonable Python wrappers) that allows me to setup and solve a problem of this size as fast (or potentially faster) than lpsolve? Most of the libraries I have looked at (PuLP, CyLP, PyGLPK and so on) do not seem to have a straightforward way to say "this is my entire constraint matrix, set it in one go". They mostly seem to be oriented towards being "modelling languages" that allow fancy things like this (CyLP example):
import numpy as np
from cylp.cy import CyClpSimplex
from cylp.py.modeling.CyLPModel import CyLPArray

s = CyClpSimplex()
# Add variables
x = s.addVariable('x', 3)
y = s.addVariable('y', 2)
# Create coefficients and bounds
A = np.matrix([[1., 2., 0],[1., 0, 1.]])
B = np.matrix([[1., 0, 0], [0, 0, 1.]])
D = np.matrix([[1., 2.],[0, 1]])
a = CyLPArray([5, 2.5])
b = CyLPArray([4.2, 3])
x_u= CyLPArray([2., 3.5])
# Add constraints
s += A * x <= a
s += 2 <= B * x + D * y <= b
s += y >= 0
s += 1.1 <= x[1:3] <= x_u
I honestly don't care about the flexibility; I just need raw speed in the problem setup and solve. Creating NumPy matrices plus doing all those fancy operations above is definitely going to be a performance killer.
I'd rather stay on Open Source solvers if possible, but any suggestion is most welcome. My apologies for the long post.
Andrea.
@infinity77,
I am looking at using lpsolve now. It is fairly straightforward to generate a .lp file for input, and it did solve several smaller problems. BUT... I am now attempting to solve a node-coloring problem. This is a MIP. My first attempt at a 500-node problem ran about 15 minutes as an LP relaxation. lpsolve has been grinding on the true MIP since last night. Still grinding. These coloring problems are difficult, but 14 hours with no end in sight is too much.
The best alternative I am aware of is from coin-or.org (the COIN-OR solvers). Try their clp and cbc solvers, depending on which type of problem you are solving. Benchmarks I have seen say they are the best choice outside of CPLEX and Gurobi. They are free, but you will need to be sure they are "legal" for your purposes. A good benchmark paper by Bernhard Meindl and Matthias Templ is available at Open Source Solver Benchmarks.
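On the original question of setting the entire constraint matrix in one go: if SciPy is acceptable, scipy.optimize.linprog with the HiGHS backend accepts the full (sparse) equality and inequality blocks in a single call. A minimal sketch with made-up data (not from the original post):

import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix

# made-up toy problem: maximize c@x by minimizing -c@x
c = np.array([1.0, 2.0])
A_ub = csr_matrix([[1.0, 1.0]])   # whole inequality block, set in one go
b_ub = np.array([10.0])
A_eq = csr_matrix([[1.0, -1.0]])  # whole equality block
b_eq = np.array([0.0])

res = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)], method="highs")
print(res.status, res.x)  # status 0 means optimal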
I will try to keep my question short and simple. If you need any further information, please let me know.
I have an MIP, implemented in Python with the package PuLP (roughly 100 variables and constraints). The mathematical formulation of the problem is from a research paper. This paper also includes a numerical study. However, my results differ from the results of the authors.
My problem variable is called prob
prob = LpProblem("Replenishment_Policy", LpMinimize)
I solve the problem with prob.solve()
LpStatus returns Optimal
When I add some of the optimal (paper) results as constraints, I get a slightly better objective value. The same goes for constraining the objective function to a slightly lower value. The LpStatus remains Optimal.
original objective value: total = 1704.20
decision variable: stock[1] = 370
adding constraints: prob += stock[1] == 379
new objective value: 1704.09
adding constraints: prob += prob.objective <= 1704
new objective value: 1702.81
My assumption is that PuLP's solver approximates the solution. The calculation is very fast, but apparently not very accurate. Is there a way I can improve the accuracy of the solver PuLP is using? I am looking for something like prob.solve(accuracy=100%). I had a look at the documentation but couldn't figure out what to do. Do you have any thoughts on what the problem could be?
Any help is appreciated. Thanks.
The answer to my question was given by ayhan: to specify the accuracy of the solver, you can use the fracGap argument of the selected solver:
prob.solve(solvers.PULP_CBC_CMD(fracGap=0.01))
However, the question I asked was not aligned with the problem I had. The deviation in the results was indeed not a matter of solver accuracy (as sascha pointed out in the comments).
The cause of my problem:
The algorithm I implemented optimizes the order policy parameters for an (Rn, Sn) policy under non-stationary, stochastic demand. The above-mentioned paper is:
Tarim, S. A., & Kingsman, B. G. (2006). Modelling and computing (R n, S n) policies for inventory systems with non-stationary stochastic demand. European Journal of Operational Research, 174(1), 581-599.
The algorithm has two binary variables, delta[t] and P[t][j]. The following two constraints only allow values of 0 and 1 for P[t][j], as long as delta[t] is defined as binary.
for t in range(1, T+1):
    prob += sum([P[t][j] for j in range(1, t+1)]) == 1
    for j in range(1, t+1):
        prob += P[t][j] >= delta[t-j+1] - sum([delta[k] for k in range(t-j+2, t+1)])
Since P[t][j] can only take values of 0 or 1, making it effectively a binary variable, I declared it as follows:
for t in range(1, T+1):
    for j in range(1, T+1):
        P[t][j] = LpVariable(name="P_"+str(t)+"_"+str(j), lowBound=0, upBound=1, cat="Integer")
The minimization returns an objective value of 1704.20.
After researching a solution for quite a while, I noticed a part of the paper that says:
... it follows that P_tj must still take a binary value even if it is declared as a continuous variable. Therefore, the total number of binary variables reduces to the total number of periods, N.
Therefore I changed the cat argument of the P[t][j] variable to cat="Continuous". Without changing anything else, I got the lower objective value of 1702.81. The status of the result shows Optimal in both cases.
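In code, the single change was:

for t in range(1, T+1):
    for j in range(1, T+1):
        P[t][j] = LpVariable(name="P_"+str(t)+"_"+str(j), lowBound=0, upBound=1, cat="Continuous")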
I am still not sure how all these aspects are interrelated, but this tweak worked for me. Everyone else who is directed to this question will probably find the necessary help in the answer given at the top of this post.