PuLP - How to specify the solver's accuracy - python

I will try to keep my question short and simple. If you need any further information, please let me know.
I have an MIP, implemented in Python with the package PuLP. (Roughly 100 variables and constraints) The mathematical formulation of the problem is from a research paper. This paper also includes a numerical study. However, my results differ from the results of the authors.
My problem variable is called prob
prob = LpProblem("Replenishment_Policy", LpMinimize)
I solve the problem with prob.solve()
LpStatus returns Optimal
When I add some of the optimal (paper) results as constraints, I get a slightly better objective value. The same goes for constraining the objective function to a slightly lower value. The LpStatus remains Optimal.
original objective value: total = 1704.20
decision variable: stock[1] = 370
adding constraints: prob += stock[1] == 379
new objective value: 1704.09
adding constraints: prob += prob.objective <= 1704
new objective value: 1702.81
My assumption is that PuLP's solver approximates the solution. The calculation is very fast, but apparently not very accurate. Is there a way I can improve the accuracy of the solver PuLP is using? I am looking for something like: prob.solve(accuracy=100%). I had a look at the documentation but couldn't figure out what to do. Does anyone have thoughts on what the problem could be?
Any help is appreciated. Thanks.

The answer to my question was given by ayhan: To specify the accuracy of the solver, you can use the fracGap argument of the selected solver.
prob.solve(solvers.PULP_CBC_CMD(fracGap=0.01))
However, the question I asked was not aligned with the problem I had. The deviation of the results was indeed not a matter of the solver's accuracy (as sascha pointed out in the comments).
The cause to my problem:
The algorithm I implemented was the optimization of the order policy parameters for a (Rn, Sn) policy under non-stationary, stochastic demand. The above mentioned paper is:
Tarim, S. A., & Kingsman, B. G. (2006). Modelling and computing (R n, S n) policies for inventory systems with non-stationary stochastic demand. European Journal of Operational Research, 174(1), 581-599.
The algorithm has two binary variables delta[t] and P[t][j]. The following two constraints only allow values of 0 and 1 for P[t][j], as long as delta[t] is defined as a binary.
for t in range(1, T+1):
    prob += sum([P[t][j] for j in range(1, t+1)]) == 1
    for j in range(1, t+1):
        prob += P[t][j] >= delta[t-j+1] - sum([delta[k] for k in range(t-j+2, t+1)])
Since P[t][j] can only take values of 0 or 1, hence being a binary variable, I declared it as follows:
for t in range(1, T+1):
    for j in range(1, T+1):
        P[t][j] = LpVariable(name="P_"+str(t)+"_"+str(j), lowBound=0, upBound=1, cat="Integer")
The objective value for the minimization returns: 1704.20
After researching for a solution for quite a while, I noticed a part of the paper that says:
... it follows that P_tj must still take a binary value even if it is
declared as a continuous variable. Therefore, the total number of
binary variables reduces to the total number of periods, N.
Therefore I changed the cat argument of the P[t][j] variable to cat="Continuous". Without changing anything else, I got the lower objective value of 1702.81. The status of the result shows Optimal in both cases.
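For reference, the relaxed declaration looks like this (a minimal sketch; T and the P dict are toy stand-ins for the model's own setup, only the cat argument differs from the declaration above):

```python
from pulp import LpVariable

# Same bounds as before, only cat changed from "Integer" to "Continuous".
# T and P are toy stand-ins for the model's own horizon and variable dict.
T = 3
P = {t: {} for t in range(1, T+1)}
for t in range(1, T+1):
    for j in range(1, T+1):
        P[t][j] = LpVariable(name="P_"+str(t)+"_"+str(j), lowBound=0, upBound=1, cat="Continuous")
```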
I am still not sure how all these aspects are interrelated, but this tweak worked for me. Everyone else who is directed to this question will probably find the necessary help in the answer given at the top of this post.

Related

In pulp, problem status is 2(optimal) but seems to ignore some constraints

I'm trying to use the Gurobi solver in PuLP to solve a big linear programming problem. The status Gurobi returned is 2, which means that an optimal solution is available, but the solution doesn't meet my expectations. Here is the part where the problem occurs:
MyProblem = pulp.LpProblem("MyProblem", LpMinimize)  # define the problem
# define variables
var_shengmu = {i: LpVariable(name=f"var_shengmu{i}", lowBound=0, upBound=100, cat=LpInteger) for i in range(N)}
var_qiuyi_shengmu = {i: {j: LpVariable(name=f"var_qiuyi_shengmu{i}{j}", cat=LpBinary) for j in range(i+1, N)} for i in range(N)}
# add constraints
inf = 10**6
eps = 10**(-5)
for i in range(N):
    for j in range(i+1, N):
        if some_condition:  # if var_shengmu[i] and var_shengmu[j] should be different
            # constraint (a)
            MyProblem += (var_shengmu[i] - var_shengmu[j]) <= -eps + inf*var_qiuyi_shengmu[i][j]
            # constraint (b)
            MyProblem += (var_shengmu[j] - var_shengmu[i]) >= eps - inf*(1 - var_qiuyi_shengmu[i][j])
The last two lines above are an inequality constraint pair; I want to make var_shengmu[i] and var_shengmu[j] different. The idea is that if var_shengmu[j] == var_shengmu[i], then whatever var_qiuyi_shengmu[i][j] is, constraints (a) and (b) cannot both be satisfied.
However, in the solution the variables var_shengmu are all 0 (from var_shengmu[0] to var_shengmu[N-1]).
I followed this answer to print the constraints, and I surprisingly found that for all i and j, the constraints (b) listed above are not satisfied. Some of my outputs are here:
-1000000*var_qiuyi_shengmu25472548 - var_shengmu17040 + var_shengmu17046 <= -1e-05
is satisfied
-1000000*var_qiuyi_shengmu25472548 - var_shengmu17040 + var_shengmu17046 >= -999999.99999
not satisfied
I'm extremely bewildered why the status is optimal but some constraints are ignored. Did I do something wrong? Thanks in advance for your help!
By the way, you may wonder why don't I have an objective function. It's because the code I put here is only a small part of my problem, and in other parts, the objective function is defined.
It is solved here: https://github.com/coin-or/pulp/issues/592
PuLP solves the underlying matrix equations with floating-point arithmetic subject to certain tolerances. I just increased eps, and it worked.
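Since the var_shengmu variables are integers, one way to stay well clear of those tolerances is eps = 1 instead of 10**(-5). A minimal sketch of the "must differ" pair with an integer-safe eps (bounds, names, and the toy objective are stand-ins, not the question's full model):

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpInteger, LpBinary, value

# Big-M "x and y must differ" construction with eps = 1.
# eps = 1 is exact for integer variables; 1e-5 sinks below solver tolerances.
prob = LpProblem("all_different_pair", LpMinimize)
x = LpVariable("x", lowBound=0, upBound=100, cat=LpInteger)
y = LpVariable("y", lowBound=0, upBound=100, cat=LpInteger)
z = LpVariable("z", cat=LpBinary)

M, eps = 200, 1
prob += x - y <= -eps + M * z        # (a): z = 0 forces x <= y - 1
prob += x - y >= eps - M * (1 - z)   # (b): z = 1 forces x >= y + 1
prob += x + y                        # toy objective so the model has a target
prob.solve()
```

With this model the optimum is x + y = 1 with the two variables differing by exactly one.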

Implement variational approach for budget closure with 2 constraints in python

I'm new to Python and am quite helpless with a problem I have to solve:
I have two budget equations, let's say a+b+c+d=Res1 and a+c+e+f=Res2. Every term has a specific standard deviation a_std, b_std, ..., and I want to distribute the budget residuals Res1 and Res2 onto the individual terms relative to their uncertainty (see the equation below), to get a_new+b_new+c_new+d_new=0 and a_new+c_new+e_new+f_new=0.
With only one budget equation I'm able to solve the problem and get the terms a_new, b_new, c_new and d_new. But how can I add the second constraint to also get e_new and f_new?
e.g. I calculate a_new = a - (a_std^2/(a_std^2+b_std^2+c_std^2+d_std^2))*Res1; however this depends only on the first equation, and I want a to be modified in a way that also satisfies the second equation.
I appreciate any help/any ideas on how to approach this problem.
Thanks in advance,
Sue
Edit:
What I have so far:
def var_close(a,a_std,b,b_std,c,c_std,d,d_std,e,e_std,f,f_std,g,g_std):
    x = [a,b,c,d,e]
    Res1 = np.sum([x])
    std_ges1 = a_std*a_std+b_std*b_std+c_std*c_std+d_std*d_std+e_std*e_std
    y = [a,c,f,g]
    Res2 = np.sum([y])
    std_ges2 = a_std*a_std+c_std*c_std+f_std*f_std+g_std*g_std
    a_new = a-((a_std*a_std)/std_ges1)*Res1
    b_new = b-((b_std*b_std)/std_ges1)*Res1
    c_new = c-((c_std*c_std)/std_ges1)*Res1
    d_new = d-((d_std*d_std)/std_ges1)*Res1
    e_new = e-((e_std*e_std)/std_ges1)*Res1
    a_new2 = a-((a_std*a_std)/std_ges2)*Res2
    c_new2 = c-((c_std*c_std)/std_ges2)*Res2
    f_new = f-((f_std*f_std)/std_ges2)*Res2
    g_new = g-((g_std*g_std)/std_ges2)*Res2
    return a_new,b_new,c_new,d_new,e_new,a_new2,c_new2,f_new,g_new
But like this, e.g. a_new and a_new2 are slightly different, whereas I want them to be equal and the other terms modified corresponding to their uncertainty.
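One way to read this goal (an interpretation, not part of the question): choose corrections delta_i that minimize sum(delta_i^2 / std_i^2) subject to both closures holding simultaneously. That is a variance-weighted least-squares adjustment with linear constraints, and it has a closed-form solution via Lagrange multipliers; a sketch with made-up example values:

```python
import numpy as np

# Terms of the two budgets from the question: a+b+c+d = Res1, a+c+e+f = Res2
# (a and c appear in both). Values and stds here are illustrative only.
terms = np.array([10.0, 4.0, 3.0, 2.0, 5.0, 1.0])   # a, b, c, d, e, f
stds  = np.array([2.0, 1.0, 1.5, 0.5, 1.0, 0.8])    # their standard deviations
A = np.array([[1, 1, 1, 1, 0, 0],                   # row 1 picks a, b, c, d
              [1, 0, 1, 0, 1, 1]], dtype=float)     # row 2 picks a, c, e, f
res = A @ terms                                     # residuals Res1, Res2

# Minimize sum(delta_i^2 / std_i^2) subject to A @ (terms + delta) = 0.
# Closed form: delta = -Sigma A^T (A Sigma A^T)^{-1} res, Sigma = diag(std^2).
Sigma = np.diag(stds**2)
delta = -Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T, res)
new_terms = terms + delta
print(A @ new_terms)   # both residuals are now ~0
```

Because both constraints enter one minimization, each term gets a single correction (no separate a_new and a_new2), weighted by its variance.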

Reducing constraint-minimization at CVXPY

I'm dealing with a mathematical optimization problem; in more detail, it is a semidefinite program (see the code snippet below), which is used to solve another problem iteratively.
It is required that the equality constraints are met up to ~10^(-10) or better. Even if I start my optimization with a matrix M that meets the constraints up to 10^(-12) or better, the optimization result X doesn't meet the requirements for X+M very closely (at least two or three of them are only met up to 10^(-7)).
Is there a way to improve the accuracy of how close cvx (mosek) meets the constraints?
Sidenote: I got the initial value of my optimization as the solution of exactly the same problem, so it seems possible to achieve a higher accuracy, but I guess this was just luck. Unfortunately, this matrix isn't close to the minimum, so I need to do another iteration.
# defining variable
X = cp.Variable((m, m), hermitian=True)
# pos. semi-definite constraint
constraints = [M + X >> 0]
# all the other constraints
for i in range(0, len(b)):
    constraints += [cp.trace(A[i] @ (M + X)) == b[i]]
# problem formulation
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(C @ X))), constraints)
Result = prob.solve(solver=cp.MOSEK, verbose=False, parallel=True)
Here M and C are known matrices, A and b is a list of matrices and scalars respectively.
I've already tried to find an answer in the documentation and on the internet, but I couldn't find a solution. Therefore, I'd be grateful for any help!
Thanks in advance!
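One option worth trying (an assumption, not verified against this exact model): MOSEK's interior-point tolerances can be tightened through CVXPY's mosek_params argument. The parameter names below are MOSEK's documented conic-solver feasibility and gap tolerances; whether 1e-12 is attainable depends on the conditioning of the problem. A config fragment, reusing the prob defined above:

```python
import cvxpy as cp

# Tighten MOSEK's primal/dual feasibility and relative-gap tolerances
# (defaults are around 1e-8). prob is the cp.Problem built above.
mosek_params = {
    "MSK_DPAR_INTPNT_CO_TOL_PFEAS": 1e-12,
    "MSK_DPAR_INTPNT_CO_TOL_DFEAS": 1e-12,
    "MSK_DPAR_INTPNT_CO_TOL_REL_GAP": 1e-12,
}
Result = prob.solve(solver=cp.MOSEK, verbose=False, mosek_params=mosek_params)
```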

Minimize the number of outputs

For a linear optimization problem, I would like to include a penalty. The penalty of every option (penalties[(i)]) should be 1 if the sum is larger than 0 and 0 if the sum is zero. Is there a way to do this?
The penalty is defined as:
penalties = {}
for i in A:
    penalties[(i)] = (lpSum(choices[i][k] for k in B))/len(C)
prob += Objective Function + sum(penalties)
For example:
penalties[(0)]=0
penalties[(1)]=2
penalties[(3)]=6
penalties[(4)]=0
The sum of the penalties should then be:
sum(penalties)=0+1+1+0= 2
Yes. What you need to do is to create binary variables: use_ith_row. The interpretation of this variable will be == 1 if any of the choices[i][k] are > 0 for row i (and 0 otherwise).
The penalty term in your objective function simply needs to be sum(use_ith_row[i] for i in A).
The last thing you need is the set of constraints which enforce the rule described above:
for i in A:
    prob += lpSum(choices[i][k] for k in B) <= use_ith_row[i]*M
Finally, you need to choose M large enough so that the constraint above has no limiting effect when use_ith_row is 1 (you can normally work out this bound quite easily). Choosing an M which is way too large will also work, but will tend to make your problem solve slower.
p.s. I don't know what C is or why you divide by its length - but typically if this penalty is secondary to your other/primary objective you would weight it so that improvement in your primary objective is always given greater weight.
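A minimal end-to-end sketch of this construction (the sets A and B, the extra requirement, and the variable names other than use_ith_row are toy stand-ins; since the choices variables are assumed binary, M = len(B) is a valid bound):

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

# Toy sets standing in for the question's A and B; choices[i][k] is binary,
# so the row sum is at most len(B) and M = len(B) is a safe big-M.
A, B = range(3), range(4)
M = len(B)

prob = LpProblem("row_penalty_demo", LpMinimize)
choices = {i: {k: LpVariable(f"choice_{i}_{k}", cat=LpBinary) for k in B} for i in A}
use_ith_row = {i: LpVariable(f"use_row_{i}", cat=LpBinary) for i in A}

# use_ith_row[i] must be 1 whenever any choices[i][k] is 1
for i in A:
    prob += lpSum(choices[i][k] for k in B) <= use_ith_row[i] * M

# toy requirement so the model is not trivially all-zero: row 1 picks two items
prob += lpSum(choices[1][k] for k in B) >= 2

# penalty objective: minimize the number of rows used
prob += lpSum(use_ith_row[i] for i in A)
prob.solve()
```

Here the optimum uses only row 1, so the penalty sum is 1, matching the counting behavior the question asks for.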

How to calculate the shadow price in Gurobi

I want to analyze whether the bound of a constraint should be increased or reduced in a programming problem:
The following is a simplified problem. V[(i,t)] is the decision variable and S[i] is an input. I want to know whether the objective increases or decreases when S[i] is increased by one unit.
I know the shadow price and marginal cost may be for decision variables, not inputs. In Gurobi, the dual value (also known as the shadow price) can be obtained via the Pi attribute.
for t in range(T):
    for i in range(I):
        m.addConstr(V[(i,t)] <= Lambda*S[i])
m.addConstr(other constraints without S[i])
obj = cf*quicksum(V[(i,0)] for i in range(I)) + cs*quicksum(S[i] for i in range(I)) + ...
m.setObjective(obj, GRB.MAXIMIZE)
m.optimize()
There are two ways to get the shadow prices (Python + Gurobi):
shadow_price = model.getAttr('Pi', model.getConstrs())
or
shadow_price = model.getAttr(GRB.Attr.Pi)
Either call returns the shadow prices of all constraints, in order, as a list.
