Pyomo time-dependent model? - python

I'm quite new to Pyomo and I'm having a hard time figuring out how to create a time-dependent model and plot it on a graph. By time-dependent I just mean a variable that takes a different value at each time step (from 1 to T in this case).
I used this very simple model, but when I run the script I get only one value in the output. How can I change that?
I also have errors related to the constraint function, but I'm not sure what's wrong:
(ValueError: Constraint 'constraint[1]' does not have a proper value. Found '<generator object at 0x7f202b540850>'. Expecting a tuple or equation.)
I'd like to show how the value of x(t) varies over all timesteps.
Any help is appreciated.
from __future__ import division
from pyomo.environ import *
from pyomo.opt import SolverFactory
import sys
model = AbstractModel()
model.n = Param()
model.T = RangeSet(1, model.n)
model.a = Param(model.T)
model.b = Param(model.T)
model.x = Var(model.T, domain= NonNegativeReals)
data = DataPortal()
data.load(filename='N.csv', range='N', param=model.n)
data.load(filename='A.csv', range= 'A', param=model.a)
data.load(filename='B.csv', range= 'B', param=model.b)
def objective(model):
    return model.x

model.OBJ = Objective(rule=objective)

def somma(model, t):
    return (model.a[t]*model.x[t] >= model.b[t] for t in model.T)

model.constraint = Constraint(model.T, rule=somma)
instance = model.create_instance(data)
opt = SolverFactory('glpk')
results = opt.solve(instance)
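The ValueError comes from the constraint rule: a rule for an indexed Constraint must return a single expression for each index, not a generator. A minimal corrected sketch (assuming you want one inequality per time step and to minimize the total of x; the Objective must also be a scalar expression, not the indexed Var itself):
def objective(model):
    # sum over time so the objective is a single scalar expression
    return sum(model.x[t] for t in model.T)
model.OBJ = Objective(rule=objective)

def somma(model, t):
    # one inequality per time step t
    return model.a[t] * model.x[t] >= model.b[t]
model.constraint = Constraint(model.T, rule=somma)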

You can build up lists of the values you would like to plot like this:
T_plot = list(instance.T)
x_plot = [value(instance.x[t]) for t in T_plot]
and then use your favorite Python plotting package to make the plots. I usually use Matplotlib.
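For example, with Matplotlib (a minimal sketch, assuming the instance has been solved as above):
import matplotlib.pyplot as plt
from pyomo.environ import value

T_plot = list(instance.T)
x_plot = [value(instance.x[t]) for t in T_plot]

plt.step(T_plot, x_plot, where='post')  # a step plot suits per-period values
plt.xlabel('t')
plt.ylabel('x[t]')
plt.show()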

Related

Pyomo with glpk solver doesn't solve anything

Shouldn't the following result in a number different from zero?
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
m = pyo.ConcreteModel()
m.x = pyo.Var([1, 2], domain=pyo.Reals, initialize=0)
m.obj = pyo.Objective(expr=2*m.x[1] + 3*m.x[2], sense=pyo.minimize)
m.c1 = pyo.Constraint(expr=3*m.x[1] + 4*m.x[2] >= 3)
SolverFactory('glpk', executable='/usr/bin/glpsol').solve(m)
pyo.value(m.x[1])
I have tried following the documentation, but it's quite limited for simple examples. When I execute this code it just prints zero...
The problem you have written is unbounded: x is free over the Reals, so the objective can be driven to negative infinity. Try changing the domain of x to NonNegativeReals, or add constraints to the same effect.
You should always check the solver status, which you seem to have skipped over; for this model it will report "unbounded".
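A minimal sketch of checking the status (these attributes live on the results object Pyomo returns):
results = SolverFactory('glpk', executable='/usr/bin/glpsol').solve(m)
print(results.solver.status)                 # e.g. ok / warning
print(results.solver.termination_condition)  # e.g. optimal / unbounded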

How to declare an objective function (Minimize) and its constraints with upper and lower limits in Python?

I have a linear programming problem with 8 variables. How can I generate a set of constraints (equalities and/or inequalities) with upper and lower bounds in Python in order to minimize an objective function? I am specifically asking to do it with Pyomo if possible; if not, any other solver available from Python (e.g., Gurobi, CPLEX, etc.) is fine. I just want to get an idea of how to tackle these problems in Python.
Very simple bus and zoo example:
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
opt = pyo.SolverFactory('cplex')
model = pyo.ConcreteModel()
model.nbBus = pyo.Var([40,30], domain=pyo.PositiveIntegers)
model.OBJ = pyo.Objective(expr = 500*model.nbBus[40] + 400*model.nbBus[30])
model.Constraint1 = pyo.Constraint(expr = 40*model.nbBus[40] + 30*model.nbBus[30] >= 300)
results = opt.solve(model)
print("nbBus40=",model.nbBus[40].value)
print("nbBus30=",model.nbBus[30].value)

Tensorflow Probability Error: OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed

I am trying to estimate a model in tensorflow using NUTS by providing it a likelihood function. I have checked the likelihood function is returning reasonable values. I am following the setup here for setting up NUTS:
https://rlhick.people.wm.edu/posts/custom-likes-tensorflow.html
and some of the examples here for setting up priors, etc.:
https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Multilevel_Modeling_Primer.ipynb
My code is in a colab notebook here:
https://drive.google.com/file/d/1L9JQPLO57g3OhxaRCB29do2m808ZUeex/view?usp=sharing
I get the error: OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function. This is my first time using tensorflow and I am quite lost interpreting this error. It would also be ideal if I could pass the starting parameter values as a single input (the example I am working off doesn't do it, but I assume it is possible).
Update
It looks like I had to change the position of the @tf.function decorator. The sampler now runs, but it gives me the same value for all samples for each of the parameters. Is it a requirement that I pass a joint distribution through the log_prob() function? I am clearly missing something. I can run the likelihood through BFGS optimization and get reasonable results (I've estimated the model via maximum likelihood with fixed parameters in other software). It looks like I need to define the function to return a joint distribution and call log_prob(). I can do this if I set it up as a logistic regression (the logit choice model is logistically distributed in differences). However, I lose the standard closed form.
My function is as follows:
@tf.function
def mmnl_log_prob(init_mu_b_time, init_sigma_b_time, init_a_car,
                  init_a_train, init_b_cost, init_scale):
    # Create priors for hyperparameters
    mu_b_time = tfd.Sample(tfd.Normal(loc=init_mu_b_time, scale=init_scale), sample_shape=1).sample()
    # HalfCauchy distributions are too wide for logit discrete choice
    sigma_b_time = tfd.Sample(tfd.Normal(loc=init_sigma_b_time, scale=init_scale), sample_shape=1).sample()

    # Create priors for parameters
    a_car = tfd.Sample(tfd.Normal(loc=init_a_car, scale=init_scale), sample_shape=1).sample()
    a_train = tfd.Sample(tfd.Normal(loc=init_a_train, scale=init_scale), sample_shape=1).sample()
    # a_sm = tfd.Sample(tfd.Normal(loc=init_a_sm, scale=init_scale), sample_shape=1).sample()
    b_cost = tfd.Sample(tfd.Normal(loc=init_b_cost, scale=init_scale), sample_shape=1).sample()

    # Define a heterogeneous random parameter model with MultivariateNormalDiag()
    # Use MultivariateNormalDiagPlusLowRank() to define nests, etc.
    b_time = tfd.Sample(tfd.MultivariateNormalDiag(
        loc=mu_b_time,
        scale_diag=sigma_b_time), sample_shape=num_idx).sample()

    # Definition of the utility functions
    V1 = a_train + tfm.multiply(b_time, TRAIN_TT_SCALED) + b_cost * TRAIN_COST_SCALED
    V2 = tfm.multiply(b_time, SM_TT_SCALED) + b_cost * SM_COST_SCALED
    V3 = a_car + tfm.multiply(b_time, CAR_TT_SCALED) + b_cost * CAR_CO_SCALED
    print("Vs", V1, V2, V3)

    # Definition of loglikelihood
    eV1 = tfm.multiply(tfm.exp(V1), TRAIN_AV_SP)
    eV2 = tfm.multiply(tfm.exp(V2), SM_AV_SP)
    eV3 = tfm.multiply(tfm.exp(V3), CAR_AV_SP)
    eVD = eV1 + eV2 + eV3
    print("eVs", eV1, eV2, eV3, eVD)

    l1 = tfm.multiply(tfm.truediv(eV1, eVD), tf.cast(tfm.equal(CHOICE, 1), tf.float32))
    l2 = tfm.multiply(tfm.truediv(eV2, eVD), tf.cast(tfm.equal(CHOICE, 2), tf.float32))
    l3 = tfm.multiply(tfm.truediv(eV3, eVD), tf.cast(tfm.equal(CHOICE, 3), tf.float32))
    ll = tfm.reduce_sum(tfm.log(l1 + l2 + l3))
    print("ll", ll)
    return ll
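One side note (my addition, not from the thread): inside a @tf.function, plain Python print only fires while the function is being traced, so the debug output above appears once at most. tf.print executes on every call:
tf.print("ll", ll)  # runs at execution time, unlike Python print inside tf.function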
The function is called as follows:
nuts_samples = 1000
nuts_burnin = 500
chains = 4

## Initial step size
init_step_size = .3
init = [0., 0., 0., 0., 0., .5]

##
## NUTS (using inner step size averaging step)
##
@tf.function
def nuts_sampler(init):
    nuts_kernel = tfp.mcmc.NoUTurnSampler(
        target_log_prob_fn=mmnl_log_prob,
        step_size=init_step_size,
    )
    adapt_nuts_kernel = tfp.mcmc.DualAveragingStepSizeAdaptation(
        inner_kernel=nuts_kernel,
        num_adaptation_steps=nuts_burnin,
        step_size_getter_fn=lambda pkr: pkr.step_size,
        log_accept_prob_getter_fn=lambda pkr: pkr.log_accept_ratio,
        step_size_setter_fn=lambda pkr, new_step_size: pkr._replace(step_size=new_step_size)
    )
    samples_nuts_, stats_nuts_ = tfp.mcmc.sample_chain(
        num_results=nuts_samples,
        current_state=init,
        kernel=adapt_nuts_kernel,
        num_burnin_steps=100,
        parallel_iterations=5)
    return samples_nuts_, stats_nuts_

samples_nuts, stats_nuts = nuts_sampler(init)
I have an answer to my question! It is simply a matter of different nomenclature. I need to define my model as a softmax function, which I knew as a "logit model", but it just wasn't clicking for me. The following blog post gave me the epiphany:
http://khakieconomics.github.io/2019/03/17/Putting-it-all-together.html
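In case it helps others, a minimal sketch of what "softmax" amounts to here (my addition; it assumes the utilities V1, V2, V3 and an integer CHOICE tensor from the question, and ignores the availability weights):
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

# Stack utilities as per-alternative logits; Categorical's log_prob applies the
# log-softmax internally, replacing the hand-rolled eV1/eVD arithmetic above.
logits = tf.stack([V1, V2, V3], axis=-1)
choice_dist = tfd.Categorical(logits=logits)
ll = tf.reduce_sum(choice_dist.log_prob(CHOICE - 1))  # CHOICE coded 1..3 -> 0..2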

Integrate function of random variables in pymc3

I am trying to construct a model in pymc3 which requires me to integrate a function of random variables. The basic idea using actual numbers is this:
from scipy.integrate import quad

def PowerLaw(x, N0, alpha):
    """A PowerLaw distribution"""
    return N0 * x**-alpha

print(quad(PowerLaw, 0.1, 10, args=(1e-4, 2)))  # (0.0009900000000000002, 3.008094474110274e-12)
I can also do this in theano:
from theano import function, tensor as tt

xt = tt.dscalar('x')
N0 = tt.dscalar('N0')
alpha = tt.dscalar('alpha')
y = PowerLaw(xt, N0, alpha)
func = function([xt, N0, alpha], y)
print(quad(func, 0.1, 10, args=(1e-4, 2)))  # Same answer as before
Here is an example of what I want to do:
with pm.Model() as myModel:
    N0 = pm.Uniform("N0", 1e-5, 1e-1)
    alpha = pm.Uniform("alpha", 1, 5)
    yval = quad(PowerLaw, 0.1, 10, args=(N0, alpha))
But of course when I try this I get a TypeError, because N0 and alpha are not floats. In this simple case I know the analytical solution of the integral, but my actual model requires more complicated integrals for which I do not know a closed form. Is there any way to accomplish this in pymc3?
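One workaround (my addition, not from the thread): approximate the integral with a fixed quadrature rule, so it stays a symbolic theano expression of the random variables rather than a call into scipy. A sketch using Gauss-Legendre nodes:
import numpy as np
import pymc3 as pm
import theano.tensor as tt

# fixed Gauss-Legendre nodes/weights, mapped from [-1, 1] to [0.1, 10]
nodes, weights = np.polynomial.legendre.leggauss(50)
a, b = 0.1, 10.0
xk = 0.5*(b - a)*nodes + 0.5*(b + a)
wk = 0.5*(b - a)*weights

with pm.Model() as myModel:
    N0 = pm.Uniform("N0", 1e-5, 1e-1)
    alpha = pm.Uniform("alpha", 1, 5)
    yval = tt.sum(wk * N0 * xk**(-alpha))  # symbolic quadrature of the power law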

PyMC, deterministic nodes in loops

I'm a bit new to Python and PyMC, and making rapid progress, but I'm confused about setting deterministic values in a 2D matrix. I have a model below that I cannot get to parse correctly. The problem relates to setting the value of theta in the model.
import numpy as np
import pymc

# define known variables
N = 2
T = 10
tau = 1
Now define the model, which I cannot get to parse correctly. It's the allocation of theta that I'm having trouble with. The aim is to get samples of D and x. Theta is just an intermediate variable, but I need to keep it, as it's used in more complex variations of the model.
def NAFCgenerator():
    D = np.empty(T, dtype=object)
    theta = np.empty([N, T], dtype=object)
    x = np.empty([N, T], dtype=object)
    # true location of signal
    for t in range(T):
        D[t] = pymc.DiscreteUniform('D_%i' % t, lower=0, upper=N-1)
    for t in range(T):
        for n in range(N):
            @pymc.deterministic(plot=False)
            def temp_theta(dt=D[t], n=n):
                return dt == n
            theta[n, t] = temp_theta

            x[n, t] = pymc.Normal('x_%i,%i' % (n, t),
                                  mu=theta[n, t], tau=tau)
    return locals()
EDIT
Explicit indexing is useful for me as I'm learning both PyMC and Python, but it seems that extracting MCMC samples is a bit clunky, e.g.
D0values = pymc_generator.trace('D_0')[:]
I am probably missing something, but I did manage to get a vectorised version working:
# Approach 1b - actually quite promising
def NAFCgenerator():
    # NOTE TO SELF. It's important to declare these as objects
    D = np.empty(T, dtype=object)
    theta = np.empty([N, T], dtype=object)
    x = np.empty([N, T], dtype=object)

    # true location of signal
    D = pymc.Categorical('D', spatial_prior, size=T)

    # displayed stimuli
    @pymc.deterministic(plot=False)
    def theta(D=D):
        theta = np.zeros([N, T])
        theta[0, D == 0] = 1
        theta[1, D == 1] = 1
        return theta

    # for n in range(N):
    x = pymc.Normal('x', mu=theta, tau=tau)
    return locals()
It seems easier to get at the MCMC samples this way, for example:
Dvalues = pymc_generator.trace('D')[:]
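For completeness, a hedged sketch of how the pymc_generator object above might be produced and run (assuming PyMC2's standard MCMC interface; the iteration counts are arbitrary):
pymc_generator = pymc.MCMC(NAFCgenerator())
pymc_generator.sample(iter=10000, burn=2000)  # draw samples, discard burn-in
Dvalues = pymc_generator.trace('D')[:]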
In PyMC2, when creating deterministic nodes with decorators, the default is to take the node name from the function name; because the loop reuses the name temp_theta, every node ends up with the same name. The solution is simple: specify the node name as a parameter to the decorator.
@pymc.deterministic(name='temp_theta_%d_%d' % (t, n), plot=False)
def temp_theta(dt=D[t], n=n):
    return dt == n

theta[n, t] = temp_theta
Here is a notebook that puts this in context.
