I have a system that simplifies to the following: a generation unit and a storage unit are being used to meet demand. The objective function is the cost to produce the power times the power produced. However, production is stratified into bins of different costs, and the "clearing price" is the cost of the highest bin used in each hour:
import functools

import numpy as np
import pyomo.environ as pyo

T = np.arange(5, dtype=int)
produce_cap = 90 # MW
store_cap = 100 # MWh
store_init = 0 # MWh
m = pyo.ConcreteModel()
m.T = pyo.Set(initialize=T) # time, hourly
m.produce = pyo.Var(m.T, within=pyo.NonNegativeReals, initialize=0) # generation
m.store = pyo.Var(m.T, within=pyo.Reals, initialize=0) # storage
stack = np.arange(10, 91, 20) # cumulative sum of generation subdivisions
price = np.arange(0.9, 0.01, -0.2) # marginal cost for subdivision of generation
demand = np.asarray([35, 5, 75, 110, 15]) # load to meet
m.produce_cap = pyo.Constraint(m.T, rule=lambda m, t: m.produce[t] <= produce_cap)
m.store_max = pyo.Constraint(m.T, rule=lambda m, t: m.store[t] <= store_cap)
m.store_min = pyo.Constraint(m.T, rule=lambda m, t: m.store[t] >= -store_cap)
rule = lambda m, t: m.produce[t] + m.store[t] == demand[t] # conservation rule
m.consv = pyo.Constraint(m.T, rule=rule)
# objective
def obj(stack, price, demand, m):
    cost = 0
    for t in m.T:
        load = m.produce[t]
        idx = np.searchsorted(stack, m.produce[t])
        p = price[idx] if idx < len(price) else 1000  # penalty for exceeding production capability
        cost += m.produce[t] * p
    return cost
rule = functools.partial(obj, stack, price, demand)
m.objective = pyo.Objective(rule=rule, sense=pyo.minimize)
# more constraints added below ...
The problem seems to be in the objective function definition, using the np.searchsorted algorithm. The specific error is
Cannot create a compound inequality with identical upper and lower
bounds using strict inequalities: constraint infeasible:
produce[0] < produce[0] and 50.0 < produce[0]
If I try to implement my own searchsorted-like algorithm, I get a similar error. I gather that the objective expression Pyomo is trying to create can't deal with this kind of table lookup, at least not the way I've implemented it. Is there another approach or reformulation I can consider?
There's a lot going on here.
The root cause is a conceptual misunderstanding of how Pyomo works: the rules for forming constraints and objectives are not callback functions that are called during optimization. Instead, they are functions that Pyomo calls to generate the model, and they are expected to return expression objects. Pyomo then passes those expressions to the underlying solver(s) through one of several standard intermediate formats (e.g., the LP, NL, BAR, or GMS formats). As a result, as a general rule, you should not write rules whose logic is conditioned on the value of a variable: the rule may run, but the result will be a function of the variable's initial value and will not be updated or changed during the optimization process.
For your specific example, the challenge is that searchsorted iterates over the m.produce variable and compares it to the cutpoints. That causes Pyomo to start generating expression objects (through operator overloading). You are then running afoul of a (deprecated) feature where Pyomo allowed generating compound (range) inequality expressions with syntax like "lower <= m.x <= upper".
The solution is to reformulate your objective so that it returns an expression for the objective cost. There are several approaches, and the "best" one depends on the balance of the model and the actual shape of the cost curve. From your example, it looks like the cost curve is intended to be piecewise linear, so I would consider either directly reformulating the expression (using an intermediate variable and a set of constraints) or using Pyomo's "Piecewise" component for generating piecewise linear expressions.
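For illustration only, here is a rough sketch of the Piecewise route, building breakpoints from your stack/price arrays. The new m.cost variable, the explicit bounds, and the breakpoint handling are assumptions you would adapt; in particular, a clearing price that jumps at each bin boundary is only approximated by linear interpolation between these breakpoints.

# sketch: piecewise linear cost via Pyomo's Piecewise component
m.cost = pyo.Var(m.T, within=pyo.NonNegativeReals)  # hourly production cost

# Piecewise needs a bounded domain variable
for t in m.T:
    m.produce[t].setub(produce_cap)

breakpoints = [0.0] + [float(s) for s in stack]  # [0, 10, 30, 50, 70, 90] MW

def total_cost(m, t, x):
    # x is a plain numeric breakpoint here, so searchsorted never sees a Var
    idx = int(np.searchsorted(stack, x, side="left"))
    return float(price[min(idx, len(price) - 1)]) * x

m.pw_cost = pyo.Piecewise(
    m.T, m.cost, m.produce,
    pw_pts=breakpoints,
    pw_constr_type="EQ",
    f_rule=total_cost,
    pw_repn="SOS2",  # handles the non-convex shape; needs a MIP-capable solver
)

# replaces the searchsorted-based objective
m.objective = pyo.Objective(expr=sum(m.cost[t] for t in m.T), sense=pyo.minimize)

The key difference from your version is that the table lookup happens while the model is being built, on numeric breakpoints, and the solver only ever sees linear relationships between m.cost and m.produce.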
Related
I am trying to use this conditional sum in PuLP's objective function. With the second lpSum, I am trying to calculate the cost for the days when we don't have enough dedicated chassis to cover the demand and need pool chassis at a higher cost. Of course, I only want to include this cost when the dedicated chassis (dedicated_chassis_needed) are not enough to cover the demand (chassis_needed) for a given day.
The problem is a cost-minimization one. The last "if" part doesn't seem to be working: the lpSum sums up every date's pool cost and ignores the if condition, and the solver just sets the decision variable dedicated_chassis_needed to 0 (its lower bound), giving a negative objective value, which should not be possible.
prob += lpSum(dedicated_chassis_needed * dedicated_rate for date in chassis_needed.keys()) + \
        lpSum((chassis_needed[date] - dedicated_chassis_needed) * pool_rate_day
              for date in chassis_needed.keys()
              if (chassis_needed[date] - dedicated_chassis_needed) >= 0)
In general, in LP you cannot use a conditional statement that depends on the value of a variable in any constraint or in the objective function, because the value of the variable is unknown when the model is built, before solving, so you will have to reformulate.
You haven't given much information about what the variables and constants are, so it isn't possible to make specific suggestions. However, a well-designed objective function should be able to handle the extra cost of excess demand without a conditional, because the model will select the cheaper items first.
For example, if:
demand[day 5] = 20
cheap_units[day 5] = 15   # $100 each (availability)
reserve_units = 100       # $150 each (availability from some pool of reserves)
and you have some constraint to meet demand from both of those sources and an objective function like:
min(cost)  s.t.  cost[day] = cheap_units[day] * 100 + reserve_units * 150
it should work out fine...
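A minimal PuLP sketch of that idea (the names and numbers are illustrative, not taken from your model):

from pulp import LpMinimize, LpProblem, LpVariable, lpSum

days = [5]
demand = {5: 20}
cheap_avail = {5: 15}                 # up to 15 cheap units at $100 each
cheap_cost, reserve_cost = 100, 150   # reserve pool assumed effectively unlimited

prob = LpProblem("meet_demand", LpMinimize)
cheap = LpVariable.dicts("cheap", days, lowBound=0)
reserve = LpVariable.dicts("reserve", days, lowBound=0)

for d in days:
    prob += cheap[d] + reserve[d] >= demand[d]   # meet demand from both sources
    prob += cheap[d] <= cheap_avail[d]           # limited cheap availability

# no conditional needed: the solver fills demand with cheap units first
prob += lpSum(cheap_cost * cheap[d] + reserve_cost * reserve[d] for d in days)
prob.solve()

Solving this picks 15 cheap units and 5 reserve units on day 5 without any if statement: the reserve units only enter once the cheap ones are exhausted, because they cost more.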
I'm working with a pyomo model (mostly written by someone else, to be updated by me) that optimizes electric vehicle charging (i.e., how much power a vehicle will import or export at a given timestep). The optimization variable (u) is power, and the objective is to minimize the total charging cost given the charging cost at each timestep.
I'm trying to write a new constraint to limit the number of times the model allows each vehicle to export power (i.e., to set u < 0). I've written a constraint called max_call_rule that counts the number of times u < 0 and constrains it to be less than a given value (max_calls) for each vehicle. (max_calls is a dictionary with a label for each vehicle paired with an integer value for the number of calls allowed.)
The code is very long, but I've put the core pieces below:
model.u = Var(model.t, model.v, domain=Integers, doc='Power used')
model.max_calls = Param(model.v, initialize = max_calls)
def max_call_rule(model, v):
    return len([x for x in [model.u[t, v] for t in model.t] if x < 0]) <= model.max_calls[v]
model.max_call_rule = Constraint(model.v, rule=max_call_rule, doc='Max call rule')
This approach doesn't work--I get the following error when I try to run the code.
ERROR: Rule failed when generating expression for constraint max_call_rule
with index 16: ValueError: Cannot create an InequalityExpression with more
than 3 terms.
ERROR: Constructing component 'max_call_rule' from data=None failed:
ValueError: Cannot create an InequalityExpression with more than 3 terms.
I'm new to working with pyomo and suspect that this error means that I'm trying to do something that fundamentally won't work with an optimization program. So--is there a better way for me to constrain the number of times that my variable u can be less than 0?
If what you're trying to do is minimize the number of times vehicles are exporting power, you can introduce a binary variable that allows/disallows vehicles discharging. You want this variable to be indexed over time and vehicles.
Note that if the rest of your model is an LP (linear, without any integer variables), this will turn it into a MIP/MILP. There is a significant difference in the computational effort required to solve it and in the types of solvers you can use; the larger the problem, the bigger the difference this will make. I'm not sure why u is currently declared as Integers; that seems quite strange given that it represents power.
model.allowed_to_discharge = Var(model.t, model.v, within=Boolean)
def enforce_vehicle_discharging_logic_rule(model, t, v):
    """
    When `allowed_to_discharge[t,v]` is 1,
    this constraint doesn't have any effect.
    When `allowed_to_discharge[t,v]` is 0, it forces u[t,v] >= 0.
    Note that 1e9 is just a "big M", i.e. any big number
    that you're sure exceeds the maximum value of `model.u`.
    """
    return model.u[t, v] >= 0 - model.allowed_to_discharge[t, v] * 1e9

model.enforce_vehicle_discharging_logic = Constraint(
    model.t, model.v, rule=enforce_vehicle_discharging_logic_rule
)
Now that you have the binary variable, you can count the discharge events; in particular, you can assign a cost to such events and add it to your objective function. (Note that you can only have one objective function, so you are adding a "component" to it, not a second objective.)
def objective_rule(model):
    return (
        ...  # the same objective function as before
        + sum(model.allowed_to_discharge[t, v] for t in model.t for v in model.v)
        * model.cost_of_discharge_event
    )
model.objective = Objective(rule=objective_rule)
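If you also want to keep the hard cap from your original max_call_rule (at most max_calls[v] discharge events per vehicle) rather than only penalizing events in the objective, the same binary variable can simply be summed in a constraint, something like:

def max_discharge_events_rule(model, v):
    # limits the number of timesteps in which vehicle v is allowed to discharge
    return sum(model.allowed_to_discharge[t, v] for t in model.t) <= model.max_calls[v]

model.max_discharge_events = Constraint(model.v, rule=max_discharge_events_rule)

Strictly this caps the number of timesteps in which discharging is allowed rather than the number of timesteps with u < 0, but combined with the big-M constraint above it gives the limit you were after.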
If instead what you want to add to your objective function is a cost associated with the total energy discharged by the vehicles (rather than the number of discharge events), you want to introduce two separate variables for charging and discharging, both non-negative, and then define the quantity you currently call u as an Expression equal to the difference between the two.
You can then add a cost component to your objective function that is the sum of all the discharge power, multiplied by the cost associated with it.
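A sketch of that charge/discharge split (the component names and the discharge_price parameter are placeholders, not part of your existing model):

model.charge = Var(model.t, model.v, within=NonNegativeReals)
model.discharge = Var(model.t, model.v, within=NonNegativeReals)

# net power drawn by the vehicle; plays the role of the current signed u
# (u < 0, i.e. exporting, corresponds to discharge > charge)
def net_power_rule(model, t, v):
    return model.charge[t, v] - model.discharge[t, v]
model.net_power = Expression(model.t, model.v, rule=net_power_rule)

# cost component for the total energy discharged (discharge_price is assumed
# to be a Param holding the per-unit cost you want to apply)
def discharge_cost_rule(model):
    return model.discharge_price * sum(
        model.discharge[t, v] for t in model.t for v in model.v
    )
model.discharge_cost = Expression(rule=discharge_cost_rule)

You would then add model.discharge_cost to the objective expression, in the same way as the event cost above.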
I have the following first-order differential equation (example):
dn/dt = A*n,  n(0) = 28
When A is constant, it is solved perfectly well with Python's odeint.
But I have an array of different values of A from a .txt file [not a function, just an array of values]:
A = [0.1,0.2,0.3,-0.4,0.7,...,0.0028]
I want A to take a new value from the array at each iteration (or at each moment of time t) of the ODE solve. I mean:
First iteration (or t=0): A = 0.1
Second iteration (or t=1): A = 0.2, and so on through the array.
How can I do this using Python's odeint?
Yes, you can do that, but not directly in odeint, as it has no event mechanism, and what you propose needs an event-action mechanism.
But you can split your problem into steps, use odeint inside each step with the then-constant A parameter, and join the steps at the end.
import numpy as np
from scipy.integrate import odeint

n0 = 28          # initial condition n(0); A is your array of values
T = [[0]]
N = [[n0]]
for k in range(len(A)):
    t = np.linspace(k, k + 1, 11)
    n = odeint(lambda u, t: A[k] * u, [n0], t)
    n0 = n[-1, 0]        # carry the last value into the next interval
    T.append(t[1:])
    N.append(n[1:, 0])
T = np.concatenate(T)
N = np.concatenate(N)
If you are satisfied with less efficiency, both in the evaluation of the ODE and in the number of internal steps, you can also implement the parameter as a piecewise constant function.
from scipy.interpolate import interp1d

tA = np.arange(len(A))
A_func = interp1d(tA, A, kind="zero", fill_value="extrapolate")  # piecewise constant A(t)

T = np.linspace(0, len(A) + 1, 10 * len(A) + 11)
N = odeint(lambda u, t: A_func(t) * u, [n0], T)  # n0 = 28 as before
The internal step-size controller works on the assumption that the ODE function is differentiable to 5th or higher order. The jumps in A are then seen, via the implicit numerical differentiation inherent in the step-error estimate, as highly oscillatory events requiring a very small step size. There is some mitigation inside the code that usually allows the solver to eventually step over such a jump, but it will require many more internal steps, and thus function evaluations, than the first variant above.
For a linear optimization problem, I would like to include a penalty. The penalty for each option (penalties[(i)]) should be 1 if the sum is larger than 0 and 0 if the sum is zero. Is there a way to do this?
The penalty is defined as:
penalties = {}
for i in A:
    penalties[(i)] = (lpSum(choices[i][k] for k in B)) / len(C)
prob += Objective Function + sum(penalties)
For example:
penalties[(0)]=0
penalties[(1)]=2
penalties[(3)]=6
penalties[(4)]=0
The sum of the penalties should then be:
sum(penalties)=0+1+1+0= 2
Yes. What you need to do is create binary variables: use_ith_row. The interpretation of this variable is that it will be 1 if any of the choices[i][k] are > 0 for row i (and 0 otherwise).
The penalty term in your objective function simply needs to be sum(use_ith_row[i] for i in A).
The last thing you need is the set of constraints which enforce the rule described above:
for i in A:
    prob += lpSum(choices[i][k] for k in B) <= use_ith_row[i] * M
Finally, you need to choose M large enough that the constraint above has no limiting effect when use_ith_row is 1 (you can normally work out this bound quite easily). Choosing an M that is far too large will also work, but it will tend to make your problem solve more slowly.
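Putting the pieces together, a rough PuLP sketch (A, B, choices, and prob are assumed to exist as in your snippet; M is the bound discussed above; existing_objective is a placeholder for your current objective expression):

from pulp import LpVariable, lpSum

M = 1000  # anything safely larger than the largest possible row sum

use_ith_row = LpVariable.dicts("use_ith_row", A, cat="Binary")

# force use_ith_row[i] to 1 whenever row i has any positive choices
for i in A:
    prob += lpSum(choices[i][k] for k in B) <= use_ith_row[i] * M

# penalty = number of rows used, added to your existing objective expression
prob += existing_objective + lpSum(use_ith_row[i] for i in A)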
p.s. I don't know what C is or why you divide by its length - but typically, if this penalty is secondary to your other/primary objective, you would weight it so that improvement in the primary objective is always given greater weight.
I want to analyze whether the bound in a constraint should be increased or reduced in a programming problem:
The following is a simplified problem. V[(i,t)] is a decision variable and S[i] is an input. I want to know whether the objective increases or decreases when S[i] is increased by one unit.
I know the shadow price and marginal cost may be for decision variables, not inputs. In Gurobi, the dual value (also known as the shadow price) can be obtained via the Pi attribute.
for t in range(T):
    for i in range(I):
        m.addConstr(V[(i, t)] <= Lambda * S[i])
        # m.addConstr(...)   other constraints without S[i]

obj = cf * quicksum(V[(i, 0)] for i in range(I)) + cs * quicksum(S[i] for i in range(I)) + ...
m.setObjective(obj, GRB.MAXIMIZE)
m.optimize()
There are two ways to get the shadow prices (Python + Gurobi):
shadow_price = model.getAttr('Pi', model.getConstrs())
or
shadow_price = model.getAttr(GRB.Attr.Pi)
Either way, you get the shadow prices of all constraints, in the order the constraints were added, as a list.
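If you specifically want the sensitivity of the objective to S[i], you can keep a handle on the constraints that contain S[i] when you add them and read their Pi attribute after solving. A sketch (note that duals are only available for continuous LP models, not MIPs):

# keep references to the constraints that involve S[i]
s_constrs = {}
for t in range(T):
    for i in range(I):
        s_constrs[i, t] = m.addConstr(V[(i, t)] <= Lambda * S[i])

m.optimize()

# dual value of each kept constraint
for (i, t), constr in s_constrs.items():
    print(f"dual of constraint (i={i}, t={t}):", constr.Pi)

Since S[i] enters the right-hand side scaled by Lambda and also appears directly in the objective with coefficient cs, a one-unit increase in S[i] changes the objective locally by roughly cs + Lambda * (sum over t of the duals of these constraints), valid only as long as the optimal basis does not change.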