PYOMO: Constraint does not have a proper value - python

I'm pretty new to pyomo and python, so this might be a pretty dumb mistake.
The gist of what I'm trying to do: I have a demand array, one demand value for each time step. The power bought plus the power provided by a CHP should equal the demand in each time step. (That's what I'm trying to do with the constraint). Running it leads to the following error:
ValueError: Constraint 'ElPowerBalanceEq' does not have a proper value. Found '<generator object ElPowerBalance.<locals>.<genexpr> at 0x000001BBF81DC040>'
Expecting a tuple or equation. Examples:
sum(model.costs) == model.income
(0, model.price[item], 50)
Here's the relevant code.
Thanks in advance :-)
from pyomo.environ import *
import numpy as np

model = ConcreteModel()

t = np.linspace(0, 24, 97)                   # time variable, one day in 0.25 h steps
model.i = range(t.size)                      # index
model.Pel_buy = Var(within=PositiveReals)    # electrical power bought
model.Pel_chp = Var(within=PositiveReals)    # electrical power of the CHP
Del = 2 + 2*np.exp(-(t-12)**2/8**2)          # electrical demand

# Define constraints
# Power balance
def ElPowerBalance(model):
    return (model.Pel_chp[i] + model.Pel_buy[i] == Del[i] for i in model.i)

model.ElPowerBalanceEq = Constraint(rule=ElPowerBalance)

Your ElPowerBalance() function is returning a generator object because the return value is wrapped in parentheses, which Python interprets as a generator expression. A constraint rule has to return a single relational expression (or a tuple), not one generator covering every time step. The usual fix is to index the constraint over model.i and let the rule build one equation per index:

def ElPowerBalance(model, i):
    return model.Pel_chp[i] + model.Pel_buy[i] == Del[i]

model.ElPowerBalanceEq = Constraint(model.i, rule=ElPowerBalance)

For the indexing to work, Pel_buy and Pel_chp also need to be declared over the index set, e.g. Var(model.i, within=PositiveReals).
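If you prefer to build all the equations in one place instead, a ConstraintList works as well; a minimal sketch, again assuming the two Vars are declared over model.i:

model.ElPowerBalanceEq = ConstraintList()
for i in model.i:
    # one power-balance equation per time step
    model.ElPowerBalanceEq.add(model.Pel_chp[i] + model.Pel_buy[i] == Del[i])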

Related

How to iterate over external input list in pyomo objective function?

I am trying to run a simple LP pyomo ConcreteModel with the Gurobi solver:
import pyomo.environ as pyo
from pyomo.opt import SolverFactory

model = pyo.ConcreteModel()

nb_years = 3
nb_mins = 2
step = 8760*1.5
delta = 10000

# Range of hours
model.h = pyo.RangeSet(0, 8760*nb_years-1)
# Individual minimums
model.min = pyo.RangeSet(0, nb_mins-1)
model.mins = pyo.Var(model.min, within=model.h, initialize=[i for i in model.min])

def maximal_step_between_mins_constraint_rule(model, min):
    next_min = min + 1 if min < nb_mins-1 else 0
    if next_min == 0:  # We need to take circularity into account
        return 8760*nb_years - model.mins[min] + model.mins[next_min] <= step + delta
    return model.mins[next_min] - model.mins[min] <= step + delta

def minimal_step_between_mins_constraint_rule(model, min):
    next_min = min + 1 if min < nb_mins-1 else 0
    if next_min == 0:  # We need to take circularity into account
        return 8760*nb_years - model.mins[min] + model.mins[next_min] >= step - delta
    return model.mins[next_min] - model.mins[min] >= step - delta

model.input_list = pyo.Param(model.h, initialize=my_input_list, within=pyo.Reals, mutable=False)

def objective_rule(model):
    return sum([model.input_list[model.mins[min]] for min in model.min])

model.maximal_step_between_mins_constraint = pyo.Constraint(model.min, rule=maximal_step_between_mins_constraint_rule)
model.minimal_step_between_mins_constraint = pyo.Constraint(model.min, rule=minimal_step_between_mins_constraint_rule)
model.objective = pyo.Objective(rule=objective_rule, sense=pyo.minimize)

opt = SolverFactory('gurobi')
results = opt.solve(model, options={'Presolve': 2})
Basically I am trying to find two hours in my input list (which looks like this), spanning 3 years of data, with constraints on the distance separating them, and where the sum of both values is minimized by the model.
I implemented my list as a parameter of fixed value; however, even with mutable set to False, running my model produces this error:
ERROR: Rule failed when generating expression for Objective objective with
index None: RuntimeError: Error retrieving the value of an indexed item
input_list: index 0 is not a constant value. This is likely not what you
meant to do, as if you later change the fixed value of the object this
lookup will not change. If you understand the implications of using non-
constant values, you can get the current value of the object using the
value() function.
ERROR: Constructing component 'objective' from data=None failed: RuntimeError:
Error retrieving the value of an indexed item input_list: index 0 is not a
constant value. This is likely not what you meant to do, as if you later
change the fixed value of the object this lookup will not change. If you
understand the implications of using non-constant values, you can get the
current value of the object using the value() function.
Any idea why I get this error and how to fix it?
Obviously, changing the objective function to sum([pyo.value(model.input_list[model.mins[min]]) for min in model.min]) is not a solution to my problem.
I also tried not to use a Pyomo parameter (with something like sum([input_list[model.mins[min]] for min in model.min])), but Pyomo can't iterate over it and raises the following error:
ERROR: Constructing component 'objective' from data=None failed: TypeError:
list indices must be integers or slices, not _GeneralVarData
You have a couple of serious syntax and structure problems in your model. Not all of the elements are included in the code you provide, but you (minimally) need to fix these:
In this snippet, you are initializing the value of each variable to a list, which is invalid. Start with no variable initializations:
model.mins = pyo.Var(model.min, within=model.h, initialize=[i for i in model.min])
In this summation, you appear to be using a variable as the index into some data. This is an invalid construct: the value of the variable is unknown when the model is built, so you need to reformulate (see the sketch after the snippet below):
return sum([model.input_list[model.mins[min]] for min in model.min])
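One common reformulation (my sketch, not part of the original answer; the names pick, pick_count and linear_objective are made up here, and it builds on the model above) is to select hours with binary variables, so the data is only ever indexed by a constant:

# Hypothetical reformulation: pick[h] == 1 means hour h is one of the selected minimums.
model.pick = pyo.Var(model.h, within=pyo.Binary)

# Select exactly nb_mins hours.
model.pick_count = pyo.Constraint(expr=sum(model.pick[h] for h in model.h) == nb_mins)

# The data is now indexed by the plain index h, never by a Var.
model.linear_objective = pyo.Objective(
    expr=sum(model.input_list[h] * model.pick[h] for h in model.h),
    sense=pyo.minimize,
)

The spacing constraints between the selected hours would then have to be rewritten in terms of pick as well (e.g. with big-M constraints), which is the price of keeping the model linear.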
My suggestion: Start with a very small chunk of your data and pprint() your model and read it carefully for quality before you attempt to solve.
model.pprint()

PYOMO Constraints - setting constraints over indexed variables

I have been trying to get into python optimization, and I have found that pyomo is probably the way to go; I had some experience with GUROBI as a student, but of course that is no longer possible, so I have to look into the open source options.
I basically want to solve a non-linear mixed-integer problem in which I will minimize a certain ratio. The problem itself is setting up a power purchase agreement (PPA) in a renewable energy scenario. Depending on the electricity generated, you will have to either buy or sell electricity according to the PPA.
The only starting data is the generation; the PPA is the main decision variable, but I will need others. "buy", "sell", "b1" and "b2" are unknown without the PPA value. These are the equations:
(Image: the equations that define the problem, written out by hand.)
Using pyomo, I was trying to set up the problem as:
# Dataframe with my Generation information:
January = Data['Full_Data'][(Data['Full_Data']['Month'] == 1) & (Data['Full_Data']['Year'] == 2011)]
Gen = January['Producible (MWh)']
Time = len(Gen)
M = 100

# Model variables and definition:
m = ConcreteModel()
m.IDX = range(Time)
m.PPA = Var(initialize=2.0, bounds=(1, 7))
m.compra = Var(m.IDX, bounds=(0, None))   # power bought
m.venta = Var(m.IDX, bounds=(0, None))    # power sold
m.b1 = Var(m.IDX, within=Binary)
m.b2 = Var(m.IDX, within=Binary)
And then, the constraint; only the first one, as I was already getting errors:
m.b1_rule = Constraint(
    expr = (((Gen[i] - PPA)/M for i in m.IDX) <= m.b1[i])
)
which gives me the error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-5d5f5584ebca> in <module>
1 m.b1_rule = Constraint(
----> 2 expr = (((Generacion[i] - PPA)/M for i in m.IDX) <= m.b1[i])
3 )
pyomo\core\expr\numvalue.pyx in pyomo.core.expr.numvalue.NumericValue.__ge__()
pyomo\core\expr\logical_expr.pyx in pyomo.core.expr.logical_expr._generate_relational_expression()
AttributeError: 'generator' object has no attribute 'is_expression_type'
I honestly have no idea what this means. I feel like this should be a simple problem, but I am struggling with the syntax. I basically have to apply a constraint to each individual data point from "Generation"; there is no sum involved, and all constraints are 1-to-1 constraints set so that the physical energy requirements make sense.
How do I set up the constraints like this?
Thank you very much
You have a couple of things to fix. First, the error you are getting is because you have "extra parentheses" around an expression that Python is trying to convert to a generator. So, step 1 is to remove the outer parentheses, but that alone will not solve your issue.
You said you want to generate this constraint "for each" value of your index. Any time you want to generate copies of a constraint "for each" you will need to either do that by making a constraint list and adding to it with some kind of loop, or use a function-rule combination. There are examples of each in the pyomo documentation and plenty on this site (I have posted a ton if you look at some of my posts.) I would suggest the function-rule combo and you should end up with something like:
def my_constr(m, i):
    return m.Gen[i] - m.PPA <= m.b1[i] * M

m.C1 = Constraint(m.IDX, rule=my_constr)
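Note that my_constr above assumes the generation data is attached to the model as m.Gen; one way to get it there (a sketch under that assumption, since the question keeps Gen as a pandas Series) is to load it into an indexed Param:

# Hypothetical: attach the generation data to the model so the rule can use m.Gen[i].
m.Gen = Param(m.IDX, initialize={i: float(Gen.iloc[i]) for i in m.IDX})

Alternatively, you can keep Gen as plain Python data and write Gen.iloc[i] - m.PPA <= m.b1[i] * M inside the rule; numeric constants are fine inside Pyomo expressions.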

How to find the discount rate value in Python if the total present value and the future time series of cash flows are known

While going through an article, I came across the polynomial equation below.
For reference, this is the equation:
15446 = 537.06/(1+r) + 612.25/(1+r)**2 + 697.86/(1+r)**3 + 795.67/(1+r)**4 + 907.07/(1+r)**5
This is a discounted cash flow series of the kind used in finance to get the present value of future cash flows after applying the appropriate discount rate.
So, from the above equation, I need to calculate the variable r in Python. I hope there is some library that can be used to solve such equations.
To solve this, I thought of using the numpy.npv API.
import numpy as np

presentValue = 15446
futureValueList = [537.06, 612.25, 697.86, 795.67, 907.07]

# I know it is not possible to get r from the call below. I just put
# it like this to describe my intention.
presentValue = np.npv(r, futureValueList)
print(r)
You can multiply your NPV formula by the highest power of (1+r) and then find the roots of the resulting polynomial with polyroots (just take the only real root and disregard the complex ones):
import numpy as np

presentValue = 15446
futureValueList = [537.06, 612.25, 697.86, 795.67, 907.07]

roots = np.polynomial.polynomial.polyroots(futureValueList[::-1] + [-presentValue])
r = roots[np.argwhere(roots.imag == 0)].real[0, 0] - 1
print(r)
#-0.3332398877886278
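If the numpy_financial package is available (the successor to the deprecated np.npv / np.irr routines), the same rate can also be read off as an internal rate of return; a sketch, assuming the cash flow list from above, which should give the same r of about -0.333:

import numpy_financial as npf

# IRR is the rate r that makes -PV + sum(CF_k / (1+r)**k) equal to zero,
# i.e. the same r as the polynomial root above.
r = npf.irr([-presentValue] + futureValueList)
print(r)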
As it turns out the formula given is incomplete, see p. 14 of the linked article. The correct equation can be solved with standard optimization procedures, e.g. optimize.root providing a sensible initial guess:
from scipy import optimize

def fun(r):
    r1 = 1 + r
    return 537.06/r1 + 612.25/r1**2 + 697.86/r1**3 + 795.67/r1**4 + 907.07/r1**5 * (1 + 1.0676/(r-.0676)) - 15446

roots = optimize.root(fun, [.1])
print(roots.x if roots.success else roots.message)
#[0.11177762]

Python curve fit with change point

As I'm really struggling to get from R code to Python code, I would like to ask for some help. The code I want to use was provided to me on the Mathematics Stack Exchange forum:
https://math.stackexchange.com/questions/2205573/curve-fitting-on-dataset
I do understand what is going on, but I'm really having a hard time translating the R code, as I have never seen any of it before. I have written the function that returns the sum of squares, but I'm stuck on how to use something similar to the optim function. I also don't really like the guesswork for the initial values; I would prefer to run and re-run a type of optim function until I get the wanted result, because my requirements for a nearly perfect curve fit are really high.
def model(par, x):
    n = len(x)
    res = []
    for i in range(1, n):
        A0 = par[3] + (par[4]-par[1])*par[6] + (par[5]-par[2])*par[6]**2
        if(x[i] == par[6]):
            res[i] = A0 + par[1]*x[i] + par[2]*x[i]**2
        else:
            res[i] = par[3] + par[4]*x[i] + par[5]*x[i]**2
    return res
This is my model function...
def sum_squares(par, x, y):
    ss = sum((y - model(par, x))^2)
    return ss
And this is the sum of squares
But I have no idea how to convert this:
#I found these initial values with a few minutes of guess and check.
par0 <- c(7,-1,-395,70,-2.3,10)
sol <- optim(par= par0, fn=sqerror, x=x, y=y)$par
To Python code...
I wrote an open source Python package (BSD license) that has a genetic algorithm (Differential Evolution) front end to the scipy Levenberg-Marquardt solver; it functions similarly to what you describe in your question. The GitHub URL is:
https://github.com/zunzun/pyeq3
It comes with a "user-defined function" example that's fairly easy to use:
https://github.com/zunzun/pyeq3/blob/master/Examples/Simple/FitUserDefinedFunction_2D.py
along with command-line, GUI, cluster, parallel, and web-based examples. You can install the package with "pip3 install pyeq3" to see if it might suit your needs.
It seems like I have been able to fix the problem:
def model(par, x):
    n = len(x)
    res = np.array([])
    for i in range(0, n):
        A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
        if(x[i] <= par[5]):
            res = np.append(res, A0 + par[0]*x[i] + par[1]*x[i]**2)
        else:
            res = np.append(res, par[2] + par[3]*x[i] + par[4]*x[i]**2)
    return res
def sum_squares(par, x, y):
    ss = sum((y - model(par, x))**2)
    print('Sum of squares = {0}'.format(ss))
    return ss
And then I used the functions as follows:
parameter = sy.array([0.0, -8.0, 0.0018, 0.0018, 0, 200])
res = least_squares(sum_squares, parameter, bounds=(-360, 360), args=(x1, y1), verbose=1)
The only problem is that it doesn't produce the results I'm looking for... And that is mainly because my x values are [0,360] and the Y values only vary by about 0.2, so it's a hard nut to crack for this function, and it produces this (poor) result:
(Image: the resulting, poor fit.)
I think that the range of x values [0, 360] and y values (which you say is ~0.2) is probably not the problem. Getting good initial values for the parameters is probably much more important.
In Python with numpy / scipy, you would definitely want to not loop over values of x but do something more like
def model(par, x):
    res = par[2] + par[3]*x + par[4]*x**2
    A0 = par[2] + (par[3]-par[0])*par[5] + (par[4]-par[1])*par[5]**2
    mask = x <= par[5]            # points to the left of the breakpoint
    res[mask] = A0 + par[0]*x[mask] + par[1]*x[mask]**2
    return res
It's not clear to me that that form is really what you want: why should A0 (a value independent of x added to a portion of the model) be so complicated and interdependent on the other parameters?
More importantly, your sum_of_squares() function is actually not what least_squares() wants: it should return the residual array; you should not do the sum of squares yourself. So it should be:
def sum_of_squares(par, x, y):
    return (y - model(par, x))
But most importantly, there is a conceptual problem that is probably going to plague this model: your par[5] is meant to represent a breakpoint where the model changes form, and that is going to be very hard for these optimization routines to find. These routines generally make a very small change to each parameter value to estimate the derivative of the residual array with respect to that variable, in order to figure out how to change that variable. With a parameter that effectively acts as a discrete breakpoint, the small change in the initial value may have no effect at all, and the algorithm will not be able to determine a value for it.
With some of the scipy.optimize algorithms (notably leastsq) you can specify a scale for the relative change to make; with leastsq that is called epsfcn. You may need to set this as high as 0.3 or 1.0 for fitting the breakpoint to work. Unfortunately, it cannot be set per variable, only per fit. You might need to experiment with this and other options to least_squares or leastsq.
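For example, a minimal sketch (reusing the vectorized model() above, and assuming x1 and y1 are the data arrays from the question; the diff_step value is only a starting guess that may need tuning):

import numpy as np
from scipy.optimize import least_squares

def residuals(par, x, y):
    # least_squares wants the residual vector, not its sum of squares
    return y - model(par, x)

par0 = np.array([0.0, -8.0, 0.0018, 0.0018, 0.0, 200.0])
# A relatively large finite-difference step helps the solver "see" the effect
# of moving the breakpoint parameter par[5] past data points.
fit = least_squares(residuals, par0, args=(x1, y1), diff_step=0.3, verbose=1)
print(fit.x)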

How do I vectorize the following loop in Numpy?

"""Some simulations to predict the future portfolio value based on past distribution. x is
a numpy array that contains past returns.The interpolated_returns are the returns
generated from the cdf of the past returns to simulate future returns. The portfolio
starts with a value of 100. portfolio_value is filled up progressively as
the program goes through every loop. The value is multiplied by the returns in that
period and a dollar is removed."""
portfolio_final = []
for i in range(10000):
portfolio_value = [100]
rand_values = np.random.rand(600)
interpolated_returns = np.interp(rand_values,cdf_values,x)
interpolated_returns = np.add(interpolated_returns,1)
for j in range(1,len(interpolated_returns)+1):
portfolio_value.append(interpolated_returns[j-1]*portfolio_value[j-1])
portfolio_value[j] = portfolio_value[j]-1
portfolio_final.append(portfolio_value[-1])
print (np.mean(portfolio_final))
I couldn't find a way to write this code using numpy. I was having a look at iterations using nditer but I was unable to move ahead with that.
I guess the easiest way to figure out how you can vectorize your stuff would be to look at the equations that govern your evolution and see how your portfolio actually iterates, finding patterns that could be vectorized instead of trying to vectorize the code you already have. You would have noticed that the cumprod actually appears quite often in your iterations.
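To make the cumprod pattern explicit (a short derivation added here for clarity, not part of the original answer): writing the gross returns as $r_j$ and the portfolio value as $v_j$ with $v_0 = 100$, the inner loop computes $v_j = r_j v_{j-1} - 1$. Dividing by the cumulative product $P_j = \prod_{k=1}^{j} r_k$ telescopes the recursion:

$$\frac{v_j}{P_j} = \frac{v_{j-1}}{P_{j-1}} - \frac{1}{P_j}
\qquad\Longrightarrow\qquad
v_n = P_n\left(v_0 - \sum_{k=1}^{n}\frac{1}{P_k}\right),$$

which is exactly what the line rcp * (portfolio_values - np.cumsum(1.0/rcp)) in the vectorized version below computes.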
Nevertheless, you can find the semi-vectorized code below. I included your code as well so that you can compare the results. I also included a simple loop version of your code, which is much easier to read and to translate into mathematical equations. So if you share this code with somebody else, I would definitely use the simple loop option. If you want some fancy-pants vectorizing, you can use the vector version. In case you need to keep track of your single steps, you can also add an array to the simple loop option and append the pv at every step.
Hope that helps.
Edit: I have not tested anything for speed. That's something you can easily do yourself with timeit.
import numpy as np
from scipy.special import erf

# Prepare simple return model - normally distributed with mu & sigma = 0.01
x = np.linspace(-10, 10, 100)
cdf_values = 0.5*(1 + erf((x - 0.01)/(0.01*np.sqrt(2))))

# Prepare the setup such that every code snippet uses the same number of steps
# and the same random numbers
nSteps = 600
nIterations = 1
rnd = np.random.rand(nSteps)

# Your code - gives the (supposedly) correct results
portfolio_final = []
for i in range(nIterations):
    portfolio_value = [100]
    rand_values = rnd
    interpolated_returns = np.interp(rand_values, cdf_values, x)
    interpolated_returns = np.add(interpolated_returns, 1)
    for j in range(1, len(interpolated_returns)+1):
        portfolio_value.append(interpolated_returns[j-1]*portfolio_value[j-1])
        portfolio_value[j] = portfolio_value[j]-1
    portfolio_final.append(portfolio_value[-1])
print(np.mean(portfolio_final))

# Using vectors
portfolio_final = []
for i in range(nIterations):
    portfolio_values = np.ones(nSteps)*100.0
    rcp = np.cumprod(np.interp(rnd, cdf_values, x) + 1)
    portfolio_values = rcp * (portfolio_values - np.cumsum(1.0/rcp))
    portfolio_final.append(portfolio_values[-1])
print(np.mean(portfolio_final))

# Simple loop
portfolio_final = []
for i in range(nIterations):
    pv = 100
    rets = np.interp(rnd, cdf_values, x) + 1
    for i in range(nSteps):
        pv = pv * rets[i] - 1
    portfolio_final.append(pv)
print(np.mean(portfolio_final))
Forget about np.nditer. It does not improve the speed of iteration; only use it if you intend to go on and write the C version (via cython).
I'm puzzled about that inner loop. What is it supposed to be doing that's special? Why the loop?
In tests with simulated values these 2 blocks of code produce the same thing:
interpolated_returns = np.add(interpolated_returns, 1)
for j in range(1, len(interpolated_returns)+1):
    portfolio_value.append(interpolated_returns[j-1]*portfolio[j-1])
    portfolio_value[j] = portfolio_value[j]-1

interpolated_returns = (interpolated_returns+1)*portfolio - 1
portfolio_value = portfolio_value + interpolated_returns.tolist()
I'm assuming that interpolated_returns and portfolio are 1d arrays of the same length.
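A quick sanity check of that equivalence on random data (my own test, under the same assumption of a fixed 1d portfolio array):

import numpy as np

interpolated_returns = np.random.rand(10)
portfolio = np.random.rand(10) * 100

# Loop version (as in the question, but starting from an empty list)
ir1 = np.add(interpolated_returns, 1)
loop_result = []
for j in range(1, len(ir1) + 1):
    loop_result.append(ir1[j-1] * portfolio[j-1])
    loop_result[j-1] = loop_result[j-1] - 1

# Vectorized version
vector_result = (interpolated_returns + 1) * portfolio - 1

print(np.allclose(loop_result, vector_result))   # True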
