How to calculate the shadow price in Gurobi - python

I want to analyze whether a constraint's bound should be increased or reduced in a programming problem:
The following is a simplified version of the problem. V[(i,t)] is a decision variable and S[i] is an input. I want to know whether the objective increases or decreases when S[i] is increased by one unit.
I know the shadow price and marginal cost may be defined for constraints rather than for inputs. In Gurobi, the dual value (also known as the shadow price) of a constraint is available through the Pi attribute.
for t in range(T):
    for i in range(I):
        m.addConstr(V[(i, t)] <= Lambda * S[i])
m.addConstr(...)  # other constraints without S[i]
obj = cf * quicksum(V[(i, 0)] for i in range(I)) + cs * quicksum(S[i] for i in range(I)) + ...
m.setObjective(obj, GRB.MAXIMIZE)
m.optimize()

There are two ways to get the shadow prices (Python + Gurobi):
shadow_price = model.getAttr('Pi', model.getConstrs())
or
shadow_price = model.getAttr(GRB.Attr.Pi)
Either call returns the shadow prices of all constraints, in order, as a list.
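The interpretation of Pi is easy to check by hand on a toy version of the model above. The sketch below is solver-free, and cf and Lambda are made-up stand-in values, not taken from the original model: when the constraint V <= Lambda*S is binding, increasing S by one unit raises the right-hand side by Lambda, so the objective changes by Pi * Lambda.

```python
# Solver-free toy: maximize cf * V subject to V <= Lambda * S.
# The dual value (Pi) of that constraint is cf here, so bumping S by one
# unit changes the objective by Pi * Lambda. All numbers are made up.
Lambda, cf = 0.5, 3.0

def optimal_obj(S):
    # With cf > 0 the constraint is binding at the optimum: V* = Lambda * S.
    return cf * Lambda * S

base = optimal_obj(10.0)
bumped = optimal_obj(11.0)   # S increased by one unit
delta = bumped - base        # equals Pi * Lambda = cf * Lambda
print(delta)                 # 1.5
```

This is exactly the sensitivity question asked above: the sign of Pi tells you whether raising S[i] helps or hurts the objective.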


PuLP Conditional Sum based on key of the loop

I am trying to use this conditional sum in PuLP's objective function. For the second lpSum, I am trying to calculate the cost of the days when we don't have enough dedicated chassis to cover the demand and will need pool chassis at a higher cost. Of course, I only want to calculate this cost when the dedicated chassis (dedicated_chassis_needed) don't cover the demand (chassis_needed) for a given day.
The problem is a cost-minimization one. The last "if" part doesn't seem to be working: the lpSum sums up every date's pool cost and ignores the if condition, the solver just sets the decision variable dedicated_chassis_needed to 0 (its lower bound), and the objective value is a negative number, which should not be allowed.
prob += lpSum(dedicated_chassis_needed * dedicated_rate for date in chassis_needed.keys()) + \
        lpSum((chassis_needed[date] - dedicated_chassis_needed) * pool_rate_day
              for date in chassis_needed.keys()
              if (chassis_needed[date] - dedicated_chassis_needed) >= 0)
In general, in LP, you cannot use a conditional statement that depends on the value of a variable in any of the constraints or the objective function, because the value of the variable is unknown when the model is built, before solving, so you will have to reformulate.
You haven't given much information about what the variables and constants are, so it isn't possible to give specific suggestions. However, a well-designed objective function should be able to handle extra cost for excess demand without a condition, as the model will select the cheaper items first.
For example, if:
demand[day 5] = 20
cheap_units[day 5] = 15    # $100 (availability)
reserve_units = 100        # $150 (availability from some pool of reserves)
and you have some constraint to meet demand via both of those sources and an objective function like:
min(cost) s.t. cost[day] = cheap_units[day] * 100 + reserve_units * 150
it should work out fine...
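One standard reformulation for the chassis question above is to replace the conditional with a nonnegative "excess" variable: in PuLP, an LpVariable with lowBound=0 plus a constraint excess[date] >= chassis_needed[date] - dedicated_chassis_needed. Minimization then drives excess down to exactly max(0, shortfall). A solver-free sketch with made-up numbers (the names reuse the question's identifiers, but the values are hypothetical):

```python
# Solver-free sketch of the reformulation. In the real model, `excess`
# would be a PuLP LpVariable(..., lowBound=0); here we just evaluate the
# value it takes at the optimum.
chassis_needed = {"mon": 12, "tue": 7, "wed": 15}   # hypothetical demand
dedicated_chassis_needed = 10                        # hypothetical fleet size
dedicated_rate, pool_rate_day = 100, 150

def excess(date):
    # At the optimum, the constraint excess >= demand - dedicated is tight
    # when the shortfall is positive, and excess is 0 otherwise.
    return max(0, chassis_needed[date] - dedicated_chassis_needed)

cost = sum(dedicated_chassis_needed * dedicated_rate + excess(d) * pool_rate_day
           for d in chassis_needed)
print(cost)  # 4050: pool chassis are only paid for on short days
```

Because pool chassis cost more, the minimizer never inflates excess above the true shortfall, which is what makes the conditional unnecessary.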

How to constrain optimization based on number of negative values of variable in pyomo

I'm working with a pyomo model (mostly written by someone else, to be updated by me) that optimizes electric vehicle charging (i.e., how much power a vehicle will import or export at a given timestep). The optimization variable (u) is power, and the objective is to minimize total charging cost given the charging cost at each timestep.
I'm trying to write a new constraint to limit the number of times that the model will allow each vehicle to export power (i.e., to set u < 0). I've written a constraint called max_call_rule that counts the number of times u < 0 and constrains it to be less than a given value (max_calls) for each vehicle. (max_calls is a dictionary mapping a label for each vehicle to an integer number of allowed calls.)
The code is very long, but I've put the core pieces below:
model.u = Var(model.t, model.v, domain=Integers, doc='Power used')
model.max_calls = Param(model.v, initialize=max_calls)

def max_call_rule(model, v):
    return len([x for x in [model.u[t, v] for t in model.t] if x < 0]) <= model.max_calls[v]

model.max_call_rule = Constraint(model.v, rule=max_call_rule, doc='Max call rule')
This approach doesn't work; I get the following error when I try to run the code.
ERROR: Rule failed when generating expression for constraint max_call_rule
with index 16: ValueError: Cannot create an InequalityExpression with more
than 3 terms.
ERROR: Constructing component 'max_call_rule' from data=None failed:
ValueError: Cannot create an InequalityExpression with more than 3 terms.
I'm new to working with pyomo and suspect that this error means that I'm trying to do something that fundamentally won't work with an optimization program. So: is there a better way for me to constrain the number of times that my variable u can be less than 0?
If what you're trying to do is minimize the number of times vehicles are exporting power, you can introduce a binary variable that allows/disallows vehicles discharging. You want this variable to be indexed over time and vehicles.
Note that if the rest of your model is LP (linear, without any integer variables), this will turn it into a MIP/MILP. There's a significant difference in the computational effort required to solve and in the types of solvers you can use; the larger the problem, the bigger the difference this will make. I'm not sure why u is currently declared as Integers; that seems quite strange given that it represents power.
model.allowed_to_discharge = Var(model.t, model.v, within=Boolean)

def enforce_vehicle_discharging_logic_rule(model, t, v):
    """
    When `allowed_to_discharge[t,v]` is 1,
    this constraint doesn't have any effect.
    When `allowed_to_discharge[t,v]` is 0, u[t,v] >= 0.
    Note that 1e9 is just a "big M", i.e. any big number
    that you're sure exceeds the maximum value of `model.u`.
    """
    return model.u[t, v] >= 0 - model.allowed_to_discharge[t, v] * 1e9

model.enforce_vehicle_discharging_logic = Constraint(
    model.t, model.v, rule=enforce_vehicle_discharging_logic_rule
)
Now that you have the binary variable, you can count the events, and in particular you can assign a cost to such events and add it to your objective function. (Just in case: you can only have one objective function, so you're adding a "component" to it, not adding a second objective function.)
def objective_rule(model):
    return (
        ...  # the same objective function as before
        + sum(model.allowed_to_discharge[t, v] for t in model.t for v in model.v)
        * model.cost_of_discharge_event
    )

model.objective = Objective(rule=objective_rule)
If what you want to add to your objective function is instead a cost associated with the total energy discharged by the vehicles (rather than the number of events), you want to introduce two separate variables for charging and discharging, both non-negative, and then define the "net discharge" (which you currently call u) as an Expression equal to discharge minus charge.
You can then add a cost component to your objective function that is the sum of all the discharge power, multiplied by the cost associated with it.
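The hard cap the question originally asked for also falls out of the same binary: sum the discharge indicators over time and bound them by max_calls. A hedged sketch follows; the Pyomo lines are shown as comments (they assume the allowed_to_discharge variable defined above), followed by a quick solver-free check of the big-M link.

```python
# Hypothetical Pyomo counting constraint (assumes the names used above):
#
#     def max_call_rule(model, v):
#         return sum(model.allowed_to_discharge[t, v]
#                    for t in model.t) <= model.max_calls[v]
#     model.max_call_rule = Constraint(model.v, rule=max_call_rule)
#
# Solver-free check of the big-M inequality u >= -M * b that links the
# binary b to the power variable u:
M = 1e9

def feasible(u, b):
    # True if (u, b) satisfies u >= -M * b.
    return u >= -M * b

print(feasible(-5.0, 0))  # False: exporting power forces the binary to 1
print(feasible(-5.0, 1))  # True
print(feasible(3.0, 0))   # True: charging never needs the binary
```

Because discharging is only feasible when the binary is 1, bounding the sum of the binaries bounds the number of export events.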

Minimize the number of outputs

For a linear optimization problem, I would like to include a penalty. The penalty of every option (penalties[(i)]) should be 1 if the sum is larger than 0 and 0 if the sum is zero. Is there a way to do this?
The penalty is defined as:
penalties = {}
for i in A:
    penalties[(i)] = lpSum(choices[i][k] for k in B) / len(C)
prob += Objective Function + sum(penalties)
For example:
penalties[0] = 0
penalties[1] = 2
penalties[3] = 6
penalties[4] = 0
The sum of the penalties should then be:
sum(penalties) = 0 + 1 + 1 + 0 = 2
Yes. What you need to do is to create binary variables: use_ith_row. The interpretation of this variable will be ==1 if any of the choices[i][k] are > 0 for row i (and 0 otherwise).
The penalty term in your objective function simply needs to be sum(use_ith_row[i] for i in A).
The last thing you need is the set of constraints which enforce the rule described above:
for i in A:
    prob += lpSum(choices[i][k] for k in B) <= use_ith_row[i] * M
Finally, you need to choose M large enough so that the constraint above has no limiting effect when use_ith_row is 1 (you can normally work out this bound quite easily). Choosing an M which is way too large will also work, but will tend to make your problem solve slower.
p.s. I don't know what C is or why you divide by its length, but typically if this penalty is secondary to your other/primary objective, you would weight it so that improvement in your primary objective is always given greater weight.
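A solver-free check of the indicator logic above (the names are assumptions for illustration, not the poster's actual model): with the constraint sum_k choices[i][k] <= use_ith_row[i] * M, any row with positive activity forces its binary to 1, and the penalty term then counts exactly the active rows.

```python
# Verify the big-M indicator logic numerically (hypothetical row sums,
# reusing the values from the example above).
M = 1000  # must exceed the largest possible row sum

row_sums = {0: 0, 1: 2, 3: 6, 4: 0}               # sum of choices[i][k] per row
use_ith_row = {i: int(s > 0) for i, s in row_sums.items()}

# Every row satisfies the constraint sum <= use_ith_row[i] * M:
assert all(s <= use_ith_row[i] * M for i, s in row_sums.items())

penalty = sum(use_ith_row.values())
print(penalty)  # 2, matching the desired 0 + 1 + 1 + 0
```

In the actual LP the solver sets each binary to 0 whenever it can (to avoid the penalty), so the constraint is what forces active rows to be counted.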

PuLP - How to specify the solver's accuracy

I will try to keep my question short and simple. If you need any further information, please let me know.
I have an MIP implemented in Python with the package PuLP (roughly 100 variables and constraints). The mathematical formulation of the problem is from a research paper, which also includes a numerical study. However, my results differ from the authors' results.
My problem variable is called prob
prob = LpProblem("Replenishment_Policy", LpMinimize)
I solve the problem with prob.solve()
LpStatus returns Optimal
When I add some of the optimal (paper) results as constraints, I get a slightly better objective value. The same goes for constraining the objective function to a slightly lower value. The LpStatus remains Optimal.
original objective value: total = 1704.20
decision variable: stock[1] = 370
adding constraints: prob += stock[1] == 379
new objective value: 1704.09
adding constraints: prob += prob.objective <= 1704
new objective value: 1702.81
My assumption is that PuLP's solver approximates the solution. The calculation is very fast, but apparently not very accurate. Is there a way I can improve the accuracy of the solver PuLP is using? I am looking for something like prob.solve(accuracy=100%). I had a look at the documentation but couldn't figure out what to do. Do you have any thoughts on what the problem could be?
Any help is appreciated. Thanks.
The answer to my question was given by ayhan: To specify the accuracy of the solver, you can use the fracGap argument of the selected solver.
prob.solve(solvers.PULP_CBC_CMD(fracGap=0.01))
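What fracGap controls can be illustrated without a solver: CBC stops as soon as the incumbent integer solution is provably within the given relative gap of the best bound, so fracGap=0.01 may return a solution up to about 1% away from the true optimum. The sketch below is illustrative, not PuLP internals:

```python
# Relative MIP gap check (minimization): the true optimum lies between
# the best bound and the incumbent, so the solver may stop once their
# relative difference is within frac_gap.
def within_gap(incumbent, best_bound, frac_gap):
    return abs(incumbent - best_bound) / abs(incumbent) <= frac_gap

# The two objective values from this very question are ~0.08% apart:
print(within_gap(1704.20, 1702.81, 0.01))    # True: solver may stop here
print(within_gap(1704.20, 1702.81, 0.0005))  # False: keep searching
```

So a nonzero fracGap could in principle explain a small deviation like this one, although (as explained below) that turned out not to be the cause here.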
However, the question I asked, was not aligned with the problem I had. The deviation of the results was indeed not a matter of accuracy of the solver (as sascha pointed out in the comments).
The cause to my problem:
The algorithm I implemented was the optimization of the order policy parameters for a (Rn, Sn) policy under non-stationary, stochastic demand. The above mentioned paper is:
Tarim, S. A., & Kingsman, B. G. (2006). Modelling and computing (R n, S n) policies for inventory systems with non-stationary stochastic demand. European Journal of Operational Research, 174(1), 581-599.
The algorithm has two binary variables delta[t] and P[t][j]. The following two constraints only allow values of 0 and 1 for P[t][j], as long as delta[t] is defined as a binary.
for t in range(1, T+1):
    prob += sum([P[t][j] for j in range(1, t+1)]) == 1
    for j in range(1, t+1):
        prob += P[t][j] >= delta[t-j+1] - sum([delta[k] for k in range(t-j+2, t+1)])
Since P[t][j] can only take values of 0 or 1, hence being a binary variable, I declared it as follows:
for t in range(1, T+1):
    for j in range(1, T+1):
        P[t][j] = LpVariable(name="P_"+str(t)+"_"+str(j), lowBound=0, upBound=1, cat="Integer")
The objective value for the minimization returns: 1704.20
After researching for a solution for quite a while, I noticed a part of the paper that says:
... it follows that P_tj must still take a binary value even if it is
declared as a continuous variable. Therefore, the total number of
binary variables reduces to the total number of periods, N.
Therefore I changed the cat argument of the P[t][j] variable to cat="Continuous". Without changing anything else, I got the lower objective value of 1702.81. In both cases the status of the result is Optimal.
I am still not sure how all these aspects are interrelated, but this tweak worked for me. Everyone else directed to this question will probably find the necessary help in the answer given at the top of this post.

Using fitness sharing on a minimization function

I'm trying to use fitness/function sharing on a minimization function. I'm using the standard definition of the sharing function found here, which then divides the fitness by the niche count. This lowers the fitness in proportion to the number of individuals in its niche. However, in my case the lower the fitness, the fitter the individual. How can I make my fitness sharing function increase the fitness in proportion to the number of individuals in its niche?
Here's the code:
def evalTSPsharing(individual, radius, pop):
    individualFitness = evalTSP(individual)[0]
    nicheCount = 0
    for ind in pop:
        distance = abs(individualFitness - evalTSP(ind)[0])
        if distance < radius:
            nicheCount += (1 - (distance / radius))
    return (individualFitness / nicheCount,)
I couldn't find a non-pdf of the paper, but here's a picture of the relevant parts. Again, this is from the link above.
Question is two years old now, but I'll give it a try:
You can try replacing the niche_count division penalty with a multiplication, i.e.:
individualFitness * nicheCount
instead of:
individualFitness / nicheCount
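A self-contained sketch of that multiplicative variant (evalTSP is abstracted away; fitness values are passed in directly, and the population includes the individual itself, so the niche count is always at least 1):

```python
# Multiplicative fitness sharing for a minimization problem: crowded
# individuals get a *larger* (worse) shared fitness.
def shared_fitness_min(individual_fitness, population_fitnesses, radius):
    niche_count = 0.0
    for f in population_fitnesses:
        distance = abs(individual_fitness - f)
        if distance < radius:
            niche_count += 1 - distance / radius  # triangular sharing kernel
    # niche_count >= 1 because the individual is its own neighbour.
    return individual_fitness * niche_count

# An individual with no nearby neighbours keeps its fitness; a crowded
# one (two identical neighbours) is penalised:
lonely = shared_fitness_min(10.0, [10.0, 50.0], radius=5.0)
crowded = shared_fitness_min(10.0, [10.0, 10.0, 10.0], radius=5.0)
print(lonely, crowded)  # 10.0 30.0
```

Note this shares on fitness distance, as in the question's code; the classic formulation shares on genotypic or phenotypic distance instead, which would require a tour-distance metric for TSP.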
