How to add a flag constraint in Pyomo? - python

I am trying to simulate a battery dispatch model with charging and discharging constraints. The BESS charges from a solar PV system. When I run the model currently, there are some time periods when the BESS is charging and discharging at the same time. How can I add a flag so that when Charge > 0, Discharge = 0, and vice versa?
def market_constraintx0(model, t):
    return (model.Charge[t] <= df.loc[t,'PVGeneration']*stripeff)
model.market_rulex0 = Constraint(model.T, rule=market_constraintx0)

def market_constraintx1(model, t):
    return (model.Charge[t] + model.RegDown[t] <= model.ChargeMax)
model.market_rulex1 = Constraint(model.T, rule=market_constraintx1)

def market_constraintx2(model, t):
    return (model.Discharge[t] + model.RegUp[t] <= model.DischargeMax)
model.market_rulex2 = Constraint(model.T, rule=market_constraintx2)

def charge_soc(model, t):
    # Battery discharge and RegUp capacity are limited by the state of charge
    return model.RegUp[t] + model.Discharge[t] <= model.SoC[t] * stripeff
model.charge_soc = Constraint(model.T, rule=charge_soc)

def discharge_soc(model, t):
    # Battery can be charged by the amount of capacity left to fill
    return model.RegDown[t] + model.Charge[t] <= (model.SoCmax - model.SoC[t]) / stripeff
model.discharge_soc = Constraint(model.T, rule=discharge_soc)

The condition
x = 0 or y = 0
is sometimes called a complementarity condition. It can also be written as:
x * y = 0
(I assume x and y are non-negative variables). There are different ways to handle this:
Complementarity solver. Some solvers support this kind of constraint directly. A math programming model with complementarity constraints inside is known as an MPEC (Mathematical Program with Equilibrium Constraints), so these solvers are sometimes called MPEC solvers.
Nonlinear formulation. The constraint x*y = 0 is not very easy, but a global solver should be able to handle it reliably. However, these solvers only handle relatively small models (compared to local solvers).
Discrete formulation. Formulate the OR condition using binary variables or a SOS1 construct. This is especially useful if the rest of the model is linear.
You may want to look into pyomo.mpec; for further information see the pyomo.mpec documentation.
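For illustration, here is a minimal sketch of what a pyomo.mpec formulation could look like, assuming the Charge and Discharge variables from the question (the rest of the model is omitted):

from pyomo.environ import ConcreteModel, Var, RangeSet, NonNegativeReals
from pyomo.mpec import Complementarity, complements

model = ConcreteModel()
model.T = RangeSet(1, 24)
model.Charge = Var(model.T, within=NonNegativeReals)
model.Discharge = Var(model.T, within=NonNegativeReals)

# complements(a >= 0, b >= 0) requires both inequalities to hold with at
# least one of them tight, i.e. Charge[t] * Discharge[t] == 0 at every t
def no_simultaneous_rule(m, t):
    return complements(m.Charge[t] >= 0, m.Discharge[t] >= 0)
model.no_simultaneous = Complementarity(model.T, rule=no_simultaneous_rule)

Pyomo also provides transformations (e.g. TransformationFactory('mpec.simple_disjunction')) that rewrite Complementarity blocks so an ordinary MIP or NLP solver can be used.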

If you want to stick to (mixed-integer) linear formulations, you can also look for indicator constraints, which are discussed generally in this question. Some solvers like CPLEX and Gurobi seem to have specific constraint types for indicator constraints, but I'm not familiar with how to use those within Pyomo.
In general, you can get similar functionality by using a "Big M" formulation. In your case, something like:
model.Indicator = Var(model.T, within=Binary)
model.M = Param(initialize=1000)

def charge_indicator_constraint(model, t):
    return model.M * model.Indicator[t] >= model.Charge[t]
...

def discharge_indicator_constraint(model, t):
    return model.M * (1 - model.Indicator[t]) >= model.Discharge[t]
...
As discussed in the question I linked to, picking the right value of model.M is important to keep your model formulation "tight", and in your case you would probably tie it directly to the power rating of your BESS (see the sketch below).
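A fuller sketch of that pairing, with M tied directly to the ChargeMax and DischargeMax parameters already in your model (so no separate model.M is needed):

model.Indicator = Var(model.T, within=Binary)

def charge_indicator_constraint(model, t):
    # Indicator[t] = 0 forces Charge[t] to zero
    return model.Charge[t] <= model.ChargeMax * model.Indicator[t]
model.charge_indicator = Constraint(model.T, rule=charge_indicator_constraint)

def discharge_indicator_constraint(model, t):
    # Indicator[t] = 1 forces Discharge[t] to zero
    return model.Discharge[t] <= model.DischargeMax * (1 - model.Indicator[t])
model.discharge_indicator = Constraint(model.T, rule=discharge_indicator_constraint)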

Related

Creating a conditional constraint for a specific Python Pulp Maximization problem

I can't manage to find a way to create a constraint that sounds like this: I have 2 variables, one a regular product and the other a super rare product. In order to have a super rare product, you already need to have 25 of the regular version of that product. This is stackable (e.g. if the algorithm selects 75 of the regular product, it can have 3 super rares). The reason for this is that the super rare is more profitable, so if I place it without any constraints, the model will select only super rare ones. Any ideas on how to write such a constraint?
Thanks in advance!
Part of the code:
hwProblem = LpProblem("HotWheels", LpMaximize)
# Variables
jImportsW_blister = LpVariable("HW J-Imports w/ blister", lowBound=20, cat=LpInteger) # regular product
jImportsTH = LpVariable("HW J-Imports treasure hunt", lowBound=None, cat=LpInteger) # super rare product
# Objective Function
hwProblem += 19 * jImportsW_blister + 350 * jImportsTH # profit for each type of product
# Constraints
hwProblem += jImportsW_blister <= 50, "HW J-Imports maximum no. of products"
hwProblem += jImportsTH <= jImportsW_blister / 25
# ^this is where the error is happening
There are a few "missing pieces" here regarding the structure of your model, but in general, you can limit the "super rare" (SR) relative to the regular product (R) by doing something like:
prob += SR <= R / 25
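A self-contained sketch of that constraint, reusing the profit numbers from the question (writing it as 25 * SR <= R keeps all coefficients integral and sidesteps dividing an LpVariable, which appears to be the line that errored):

from pulp import LpProblem, LpMaximize, LpVariable, LpInteger

prob = LpProblem("HotWheels", LpMaximize)
R = LpVariable("regular", lowBound=0, cat=LpInteger)       # regular product
SR = LpVariable("super_rare", lowBound=0, cat=LpInteger)   # super rare product

prob += 19 * R + 350 * SR    # objective: total profit
prob += R <= 50              # maximum no. of regular products
prob += 25 * SR <= R         # each super rare requires 25 regulars

prob.solve()
print(R.value(), SR.value())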

Google OR Tools Constraint SetCoefficient, setting linear inequalities

This block of code is taken from Google's OR-Tools Linear Solver tutorial.
foods = [solver.NumVar(0.0, solver.infinity(), item[0]) for item in data]

# Create the constraints, one per nutrient.
constraints = []
for i, nutrient in enumerate(nutrients):
    constraints.append(solver.Constraint(nutrient[1], solver.infinity()))
    for j, item in enumerate(data):
        constraints[i].SetCoefficient(foods[j], item[i + 3])

print('Number of constraints =', solver.NumConstraints())
Solver.Constraint appears to accept lower-bound and upper-bound arguments. When this constraint is later referenced in the nested for loop, the method .SetCoefficient is called: foods[j] retrieves a NumVar object from the list foods, and item[i + 3] retrieves nutrient i of food item j. But beyond that, I can't make sense of what SetCoefficient is doing and how this functions as a linear constraint.
The source code provides the following guidance:
class Constraint(object):
    r"""
    The class for constraints of a Mathematical Programming (MP) model.
    A constraint is represented as a linear equation or inequality.
    """
    thisown = property(lambda x: x.this.own(), lambda x, v: x.this.own(v), doc="The membership flag")

    def __init__(self, *args, **kwargs):
        raise AttributeError("No constructor defined")
    __repr__ = _swig_repr
    ...

    def SetCoefficient(self, var: "Variable", coeff: "double") -> "void":
        r"""
        Sets the coefficient of the variable on the constraint.
        If the variable does not belong to the solver, the function just returns,
        or crashes in non-opt mode.
        """
        return _pywraplp.Constraint_SetCoefficient(self, var, coeff)
Define the constraints
The constraints for Stigler diet require the total amount of the nutrients provided by all foods to be at least the minimum requirement for each nutrient. Next, we write these constraints as inequalities involving the arrays data and nutrients, and the variables food[i].
First, the amount of nutrient i provided by food j per dollar is data[j][i+3] (we add 3 to the column index because the nutrient data begins in the fourth column of data). Since the amount of money to be spent on food j is food[j], the amount of nutrient i provided by food j is data[j][i+3] * food[j]. Finally, since the minimum requirement for nutrient i is nutrients[i], we can write constraint i as follows:

    sum_j data[j][i+3] * food[j] >= nutrients[i]    (1)

The following code defines these constraints.
I don't understand the syntax of these constraints. A typical linear constraint might take the form of x + 2*y <= 3; however, the syntax here doesn't function that way. How should I use OR-Tools library to define constraints in this more conventional way?
Likewise, what is the usage of SetCoefficient doing and how does it establish a constraint in the form of an inequality?
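To connect the two views: a Constraint(lb, ub) object represents lb <= sum_i coeff_i * var_i <= ub, and each SetCoefficient(var, c) call fills in one coefficient of that sum, so the loop above builds inequality (1) one term at a time. pywraplp also accepts the conventional expression syntax through solver.Add. A minimal sketch showing both forms of x + 2*y <= 3:

from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver('GLOP')
x = solver.NumVar(0.0, solver.infinity(), 'x')
y = solver.NumVar(0.0, solver.infinity(), 'y')

# Conventional expression style:
solver.Add(x + 2 * y <= 3)

# The same constraint built coefficient-by-coefficient, as in the diet code:
ct = solver.Constraint(-solver.infinity(), 3)
ct.SetCoefficient(x, 1)
ct.SetCoefficient(y, 2)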

Find maximum value of a variable given constraints in z3py

For example, given the following 4 constraints, where a and x are ints and b is an array mapping int to int:
a >= 0
b[0] == 10
x == 0
b[x] >= a
find_max(a) => 10
find_min(a) => 0
Can z3py do something like this?
Yeah, sure.
You can either do it incrementally, via multiple single-objective optimization searches, or use the more efficient boxed (a.k.a. Multi-Independent) combination offered by z3 for dealing with multi-objective optimization.
Definition 4.6.3. (Multiple-Independent OMT [LAK+14, BP14, BPF15, ST15b, ST15c]).
Let <φ,O> be a multi-objective OMT problem, where φ
is a ground SMT formula and O = {obj_1 , ..., obj_N},
is a sorted list of N objective functions.
We call Multiple-Independent OMT problem,
a.k.a Boxed OMT problem [BP14, BPF15],
the problem of finding in one single run a set of
models {M_1, ...,M_N} such that each M_i makes
obj_i minimum on the common formula φ.
Remark 4.6.3. Solving a Multiple-Independent
OMT problem <φ, {obj_1, ..., obj_N }> is akin to
independently solving N single-objective OMT
problems <φ, obj_1>, ..., <φ, obj_N>.
However, the former allows for factorizing the search
and thus obtaining a significant performance boost
when compared to the latter approach [LAK+14, BP14, ST15c].
[source, p. 104]
Example:
from z3 import *
a = Int('a')
x = Int('x')
b = Array('I', IntSort(), IntSort())
opt = Optimize()
opt.add(a >= 0)
opt.add(x == 0)
opt.add(Select(b, 0) == 10)
opt.add(Select(b, x) >= a)
obj1 = opt.maximize(a)
obj2 = opt.minimize(a)
opt.set('priority', 'box') # Setting Boxed Multi-Objective Optimization
is_sat = opt.check()
assert is_sat
print("Max(a): " + str(obj1.value()))
print("Min(a): " + str(obj2.value()))
Output:
~$ python test.py
Max(a): 10
Min(a): 0
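For comparison, the incremental route mentioned above runs one single-objective search at a time; a sketch of just the maximization, reusing the same constraints:

opt = Optimize()
opt.add(a >= 0, x == 0, Select(b, 0) == 10, Select(b, x) >= a)
h = opt.maximize(a)
if opt.check() == sat:
    print("Max(a): " + str(opt.upper(h)))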
See publications on the topic, e.g.:
1. Nikolaj Bjørner and Anh-Dung Phan. νZ - Maximal Satisfaction with Z3. In Proc. International Symposium on Symbolic Computation in Software Science, Gammarth, Tunisia, December 2014. EasyChair Proceedings in Computing (EPiC). [PDF]
2. Nikolaj Bjørner, Anh-Dung Phan, and Lars Fleckenstein. Z3 - An Optimizing SMT Solver. In Proc. TACAS, volume 9035 of LNCS. Springer, 2015. [Springer] [PDF]

scipy minimize not always solving function

Sorry, the topic is a little finance-related, but it's also a scipy/python question. For context, what I am trying to do is literally the same as these two blog posts.
https://quantdare.com/risk-parity-in-python/
https://thequantmba.wordpress.com/2016/12/14/risk-parityrisk-budgeting-portfolio-in-python/
So I have a bunch of returns on stocks, and I want to equalize the risk contribution of each stock. To do this I need to solve for the weights that give an equal risk contribution from each, using the scipy minimize optimizer.
I pass my target risk contributions and my initial guess into the optimizer. For example, with 6 stocks, my initial guess is simply 1/6 of the total 100% portfolio weight for each.
initial_weight = [0.16666666666667, 0.16666666666667, 0.16666666666667,
0.16666666666667, 0.16666666666667, 0.16666666666667]
risk_contrib_target =[0.16666666666667, 0.16666666666667, 0.16666666666667,
0.16666666666667, 0.16666666666667, 0.16666666666667]
This was taken from the quantmba link, so all credit to that guy. It looks right to me.
# risk budgeting optimization
def calculate_portfolio_var(w, V):
    # function that calculates portfolio risk
    w = np.matrix(w)
    return (w*V*w.T)[0,0]

def calculate_risk_contribution(w, V):
    # function that calculates asset contribution to total risk
    w = np.matrix(w, dtype=object)
    sigma = np.sqrt(calculate_portfolio_var(w,V))
    # Marginal Risk Contribution
    MRC = V*w.T
    # Risk Contribution
    RC = np.multiply(MRC,w.T)/sigma
    RC = RC / sum(RC)
    return RC

def risk_budget_objective(x, pars):
    # calculate portfolio risk
    V = pars[0]    # covariance table
    x_t = pars[1]  # risk target in percent of portfolio risk
    sig_p = np.sqrt(calculate_portfolio_var(x,V))  # portfolio sigma
    risk_target = np.asmatrix(x_t, dtype=object)
    asset_RC = calculate_risk_contribution(x,V)
    J = sum(np.square(asset_RC-risk_target.T))[0,0] * 1000  # sum of squared error
    return J
I also have a list of dates that I am running through to solve this many times over a time period.
rebalance_dates = my_list_of_dates
I noticed that sometimes it doesn't solve this correctly. This is easy to check because, the way it is set up, the objective should reach 0 at a solution. I can also check the risk contributions afterwards to see whether they reached my target. To get around this, I kick it to basin hopping if it does not find this 0 solution. I think it is finding a local minimum rather than the global minimum, and I read this is one solution to that problem.
The get_returns_matrix function is just getting the data that I want from one of my files. This part is not important.
returns_matrix = get_returns_matrix(asset_returns, 60, date, components)
This is the optimization.
for date in rebalance_dates:
    print(date)
    returns_matrix = get_returns_matrix(asset_returns, 60, date, components)
    covariance = np.cov(returns_matrix)
    annual_covar = [map(lambda x: x * 260, group) for group in covariance]
    annual_covar = [list(x) for x in annual_covar]
    cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},
            {'type': 'ineq', 'fun': lambda x: x})
    res = minimize(risk_budget_objective, initial_weight,
                   args=[annual_covar, risk_contrib_target],
                   method='SLSQP', constraints=cons,
                   options={'disp': False, 'ftol': .00000000001,
                            'eps': .0000000000000005, 'maxiter': 1000})
    if res.fun > .00000000001:
        print("Kick to basin hopping")
        minimizer_kwargs = dict(method="SLSQP", constraints=cons,
                                args=[annual_covar, risk_contrib_target],
                                options={'ftol': .000000000000000000001,
                                         'eps': .0000000000000005, 'maxiter': 100})
        res = basinhopping(risk_budget_objective, initial_weight, niter=50,
                           minimizer_kwargs=minimizer_kwargs)
I have two constraints, one being the sum of weights needs to equal 100% and the other being all weights should be positive.
This solves correctly about 75% of the time, the other times it gets stuck at a local minimum I believe. So a correct result from this would look like:
|--------|-----------|-----------|-----------|-----------|-----------|-----------|
|Category|Stock 1 |Stock 2 |Stock 3 |Stock 4 |Stock 5 |Stock 6 |
|--------|-----------|-----------|-----------|-----------|-----------|-----------|
|Weights |0.121465654|0.17829418 |0.091558469|0.105659033|0.156959021|0.346063642|
|--------|-----------|-----------|-----------|-----------|-----------|-----------|
|Risk Con|0.166666667|0.166666667|0.166666667|0.166666667|0.166666667|0.166666667|
Function return val 0.0000000000
But occasionally (about 25% of the time) I will get a result that does not solve the function, like this:
|--------|-----------|-----------|-----------|-----------|-----------|-----------|
|Category|Stock 1 |Stock 2 |Stock 3 |Stock 4 |Stock 5 |Stock 6 |
|--------|-----------|-----------|-----------|-----------|-----------|-----------|
|Weights |0.159442825|0.166949713|0.235404372|0.175430619|0.262772472|0.000000000|
|--------|-----------|-----------|-----------|-----------|-----------|-----------|
|Risk Con|0.199661774|0.199803048|0.200448716|0.199943667|0.200142796|0.000000000|
Function return val 33.33371143
The times that it is wrong, it seems to completely disregard stock 6, giving it both a 0 weight and a 0 risk contribution.
Is there any parameter I am not using correctly in the solver? Sorry, this might be a little difficult to solve without the data that I'm using. But just wondering if there is anything obviously wrong with my approach.
I also happen to know there is a solution to the ones scipy doesn't solve correctly because I can do the same thing correctly in an excel spreadsheet with the GRG-nonlinear constraint solver.
Thanks so much!
Basinhopping is a stochastic global optimizer; there is no guarantee that it will find the global optimum within the specified number of iterations.
It sounds from your description like you have a way of checking whether a solution is the global optimum. In that case you can use the callback parameter to stop the search early:
callback : callable, callback(x, f, accept), optional
A callback function which will be called for all minima found. x and f are the coordinates and function value of the trial minimum, and accept is whether or not that minimum was accepted. This can be used, for example, to save the lowest N minima found. Also, callback can be used to specify a user defined stop criterion by optionally returning True to stop the basinhopping routine.
def my_callback(x, f, accept):
    return minimum_is_global_minimum(x)
Then you can set niter to some large number, and it will stop as soon as it finds the global minimum.
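A sketch of how that could plug into the loop above, reusing the objects already defined there; the tolerance mirrors the res.fun check from the question:

from scipy.optimize import basinhopping

def stop_when_solved(x, f, accept):
    # the objective is ~0 at the true solution, so stop once we reach it
    return f < .00000000001

minimizer_kwargs = dict(method="SLSQP", constraints=cons,
                        args=[annual_covar, risk_contrib_target])
res = basinhopping(risk_budget_objective, initial_weight, niter=1000,
                   minimizer_kwargs=minimizer_kwargs,
                   callback=stop_when_solved)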

Python curve fit with change point

As I'm really struggling to get from R code to Python code, I would like to ask for some help. The code I want to use was provided to me in the mathematics forum of Stack Exchange.
https://math.stackexchange.com/questions/2205573/curve-fitting-on-dataset
I do understand what is going on, but I'm really having a hard time with the R code, as I have never seen any of it before. I have written the function to return the sum of squares, but I'm stuck on how to use something similar to the optim function. I also don't really like the guesswork in the initial values; I would rather run and re-run a type of optim function until I get the wanted result, because my needs for a nearly perfect curve fit are really high.
def model (par,x):
    n = len(x)
    res = []
    for i in range(1,n):
        A0 = par[3] + (par[4]-par[1])*par[6] + (par[5]-par[2])*par[6]**2
        if(x[i] == par[6]):
            res[i] = A0 + par[1]*x[i] + par[2]*x[i]**2
        else:
            res[i] = par[3] + par[4]*x[i] + par[5]*x[i]**2
    return res
This is my model function...
def sum_squares (par, x, y):
    ss = sum((y-model(par,x))^2)
    return ss
And this is the sum of squares
But I have no idea on how to convert this:
#I found these initial values with a few minutes of guess and check.
par0 <- c(7,-1,-395,70,-2.3,10)
sol <- optim(par= par0, fn=sqerror, x=x, y=y)$par
To Python code...
I wrote an open source Python package (BSD license) that has a genetic algorithm (Differential Evolution) front end to the scipy Levenberg-Marquardt solver; it functions similarly to what you describe in your question. The github URL is:
https://github.com/zunzun/pyeq3
It comes with a "user-defined function" example that's fairly easy to use:
https://github.com/zunzun/pyeq3/blob/master/Examples/Simple/FitUserDefinedFunction_2D.py
along with command-line, GUI, cluster, parallel, and web-based examples. You can install the package with "pip3 install pyeq3" to see if it might suit your needs.
Seems like I have been able to fix the problem.
def model (par,x):
    n = len(x)
    res = np.array([])
    for i in range(0,n):
        A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
        if(x[i] <= par[5]):
            res = np.append(res, A0 + par[0]*x[i] + par[1]*x[i]**2)
        else:
            res = np.append(res, par[2] + par[3]*x[i] + par[4]*x[i]**2)
    return res
def sum_squares (par, x, y):
    ss = sum((y-model(par,x))**2)
    print('Sum of squares = {0}'.format(ss))
    return ss
And then I used the functions as follows:
parameter = sy.array([0.0,-8.0,0.0018,0.0018,0,200])
res = least_squares(sum_squares, parameter, bounds=(-360,360), args=(x1,y1),verbose = 1)
The only problem is that it doesn't produce the results I'm looking for, mainly because my x values are in [0, 360] while the y values only vary by about 0.2, so it's a hard nut to crack for this function, and it produces a poor fit (result image omitted).
I think that the range of x values [0, 360] and y values (which you say is ~0.2) is probably not the problem. Getting good initial values for the parameters is probably much more important.
In Python with numpy / scipy, you would definitely want to avoid looping over the values of x and instead do something more like:
def model(par, x):
    # quadratic for x > par[5]
    res = par[2] + par[3]*x + par[4]*x**2
    # switch to the other quadratic where x <= par[5]
    A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
    below = x <= par[5]
    res[below] = A0 + par[0]*x[below] + par[1]*x[below]**2
    return res
It's not clear to me that that form is really what you want: why should A0 (a value independent of x added to a portion of the model) be so complicated and interdependent on the other parameters?
More importantly, your sum_of_squares() function is actually not what least_squares() wants: you should return the residual array, not do the sum of squares yourself. So that should be:
def sum_of_squares(par, x, y):
    return (y - model(par, x))
But most importantly, there is a conceptual problem that is probably going to plague this model: your par[5] is meant to represent a breakpoint where the model changes form. This is going to be very hard for these optimization routines to find. These routines generally make a very small change to each parameter value to estimate the derivative of the residual array with respect to that variable, in order to figure out how to change that variable. With a parameter that is essentially used as an integer, the small change in the initial value will have no effect at all, and the algorithm will not be able to determine its value.

With some of the scipy.optimize algorithms (notably, leastsq) you can specify a scale for the relative change to make; with leastsq that is called epsfcn. You may need to set this as high as 0.3 or 1.0 for fitting the breakpoint to work. Unfortunately, it cannot be set per variable, only per fit. You might need to experiment with this and other options to least_squares or leastsq.
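With least_squares, the analogous knob is diff_step, which (unlike epsfcn) does accept a per-parameter array. A sketch on top of the call from earlier in the thread; the 0.3 step for the breakpoint parameter is only a starting point to experiment with:

from scipy.optimize import least_squares

# relative finite-difference steps: default-sized for the smooth parameters,
# large for par[5] (the breakpoint) so it actually moves between evaluations
steps = [1e-8, 1e-8, 1e-8, 1e-8, 1e-8, 0.3]
res = least_squares(sum_of_squares, parameter, diff_step=steps,
                    bounds=(-360, 360), args=(x1, y1), verbose=1)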
