I know that ROmodel works with pyomo.environ, but I haven’t been able to get it to work with pyomo.kernel. Admittedly, it says here that pyomo.kernel is not compatible with extension modules. Here’s my attempt at getting it to work with ROmodel:
import romodel as ro
import pyomo.kernel as pmo
import numpy as np
# Create Pyomo model using pyomo.kernel
m = pmo.block()
m.x = pmo.variable(value = 1, lb = 0, ub = 4)
# Add some regular constraints (not uncertain)
m.c1 = pmo.constraint(m.x <= 0)
m.c2 = pmo.constraint(0 <= m.x)
# Create Objective function
m.o = pmo.objective(-m.x)
# solve deterministic model
solver = pmo.SolverFactory('ipopt')  # other options: gurobi, cplex, glpk
solver.solve(m)
print('decision variable value is', m.x.value)
print('objective value is', m.o.expr())
# create Robust model
from romodel.uncset import PolyhedralSet
# Define polyhedral set
m.uncset_d = PolyhedralSet(mat=[[1]], rhs=[100]) # can't add this to block object because no attribute "parent"
upper_bound = 500
m.uncset_b = PolyhedralSet(mat=[[1]], rhs=[upper_bound])
m.d = ro.UncParam([0], nominal=[0.1], uncset=m.uncset_d)
m.b = ro.UncParam([0], nominal=[1], uncset=m.uncset_b)
m.unc_const_d = pmo.constraint(expr=m.x + m.d[0] <= 1)
m.unc_const_b = pmo.constraint(expr=m.x + m.b[0] <= 1)
I was excited to find pyomo.kernel because I'm writing a robust optimization problem, and it would be nice to take advantage of pyomo.kernel's vectorized constraints. However, if pyomo.kernel isn't compatible with ROmodel, then this won't work. For concreteness, the kind of vectorized constraint I mean is sketched below.
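A minimal sketch of pmo.matrix_constraint, the pyomo.kernel feature I'd like to keep using (the matrix and bounds here are made up):
import numpy as np
import pyomo.kernel as pmo
blk = pmo.block()
blk.x = pmo.variable_list(pmo.variable(lb=0) for _ in range(3))
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
# encodes both rows of A @ x <= [5, 7] in a single constraint object
blk.c = pmo.matrix_constraint(A, ub=np.array([5.0, 7.0]), x=list(blk.x))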
Is there either:
A way to get pyomo.kernel to integrate with ROmodel?
OR
Another robust optimization framework that extends Pyomo and offers vectorized constraints? I saw PyROS and RSOME, but I couldn't tell whether either of those offers vectorized constraints.
Shouldn't the following result in a number different than zero?
import pyomo.environ as pyo
from pyomo.opt import SolverFactory
m = pyo.ConcreteModel()
m.x = pyo.Var([1, 2], domain=pyo.Reals, initialize=0)
m.obj = pyo.Objective(expr=2*m.x[1] + 3*m.x[2], sense=pyo.minimize)
m.c1 = pyo.Constraint(expr=3*m.x[1] + 4*m.x[2] >= 3)
SolverFactory('glpk', executable='/usr/bin/glpsol').solve(m)
print(pyo.value(m.x[1]))
I have tried following the documentation, but it's quite limited on simple examples. When I execute this code, it just prints zero...
The problem you have written is unbounded: over the Reals domain, the objective can be pushed to minus infinity while the constraint stays satisfied. Try changing the domain of x to NonNegativeReals, or add constraints that impose the same bounds.
You should also always check the solver status, which you seem to have skipped over; for this model it will report "unbounded".
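A minimal sketch of both fixes, checking the termination condition before reading any values:
import pyomo.environ as pyo
from pyomo.opt import SolverFactory, TerminationCondition
m = pyo.ConcreteModel()
m.x = pyo.Var([1, 2], domain=pyo.NonNegativeReals, initialize=0)
m.obj = pyo.Objective(expr=2*m.x[1] + 3*m.x[2], sense=pyo.minimize)
m.c1 = pyo.Constraint(expr=3*m.x[1] + 4*m.x[2] >= 3)
results = SolverFactory('glpk').solve(m)
# inspect the solver status before trusting any variable values
print(results.solver.termination_condition)
if results.solver.termination_condition == TerminationCondition.optimal:
    print(pyo.value(m.x[1]))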
I have a linear programming problem with 8 variables. How can I generate a set of constraints (equalities and/or inequalities) with upper and lower bounds in Python in order to minimize an objective function? I am specifically asking to do it with Pyomo if possible; if not, any other solver in Python (e.g., Gurobi, CPLEX, etc.) is fine. I just want to get an idea of how to tackle these problems in Python.
Very simple bus and zoo example:
import pyomo.environ as pyo
opt = pyo.SolverFactory('cplex')
model = pyo.ConcreteModel()
model.nbBus = pyo.Var([40,30], domain=pyo.PositiveIntegers)
model.OBJ = pyo.Objective(expr = 500*model.nbBus[40] + 400*model.nbBus[30])
model.Constraint1 = pyo.Constraint(expr = 40*model.nbBus[40] + 30*model.nbBus[30] >= 300)
results = opt.solve(model)
print("nbBus40=",model.nbBus[40].value)
print("nbBus30=",model.nbBus[30].value)
I'm trying to solve an optimization problem in Python. At this point I'm already familiar with SciPy, but I didn't manage to use it properly with a constraint requiring unique integer values.
The example below might fit the mlrose tag better.
At a high level, I'm trying to create a Swiss pairing. I have seen a few articles, and one of those suggested building a "penalty matrix" and minimizing the penalty. Here is what I have done:
import mlrose
import numpy as np
# Create Penalty Matrix
x = np.round(np.random.rand(8,8)*50)
z = np.eye(8, dtype=int)*100 + x
print(z)
# fitness function: total penalty of a pairing, given a penalty matrix and an order
def pairing_fittness(order, panalty):
    print(order)
    order = np.array(order)
    a = np.bincount(order)        # how often each player appears
    order = order.reshape(-1, 2)  # consecutive entries form a pair
    PF = 0
    for pair in order:
        print("{}, {}: {}".format(pair[0], pair[1], panalty[pair[0], pair[1]]))
        PF = PF + panalty[pair[0], pair[1]]
    print("Real PF: ", PF)
    print("order penalty: {}".format((np.max(a) - 1) * 1000))
    # add a heavy penalty if any player appears more than once
    return (PF + (np.max(a) - 1) * 1000)
One of the things this tries to enforce is an array with unique values (the same player cannot play twice in the same round), which is why the penalty for duplicated values is high (1000). A quick check of this is shown below.
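A quick sanity check of how the duplicate penalty fires, using the z matrix above with hand-picked orders:
order_ok = [0, 1, 2, 3, 4, 5, 6, 7]    # every player appears exactly once
order_dup = [0, 0, 2, 3, 4, 5, 6, 7]   # player 0 appears twice
print(pairing_fittness(order_ok, z))   # pairing penalties only
print(pairing_fittness(order_dup, z))  # pairing penalties + 1000 duplicate penalty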
kwargs = {'panalty': z}
fitness_cust_problem_fun = mlrose.CustomFitness(pairing_fittness, **kwargs)
problem = mlrose.DiscreteOpt(length=8,
                             fitness_fn=fitness_cust_problem_fun,
                             maximize=False,
                             max_val=8)
best_state, best_fitness = mlrose.simulated_annealing(
    problem,
    max_attempts=300,
    max_iters=100000,
    random_state=1)
print(best_state)
print(best_fitness)
No matter what I do, with more than 6 variables it cannot find an array of unique values to optimize, while I can do it in Excel (Solver > Evolutionary).
I'm looking for a better tool (I used scipy.optimize, but I'm not sure it works well for integer problems, as I got better results with mlrose), or for suggestions on how to improve my optimization problem so that it is solvable.
I am using PyMC3 to calculate something which I won't get into here but you can get the idea from this link if interested.
The '2-lambdas' case is basically a switch function, which needs to be compiled to a Theano function to avoid dtype errors and looks like this:
import theano
import numpy as np
from theano.tensor import lscalar, dscalar, lvector, dvector, argsort

@theano.compile.ops.as_op(itypes=[lscalar, dscalar, dscalar], otypes=[dvector])
def lambda_2_distributions(tau, lambda_1, lambda_2):
    """
    Return values of `lambda_` for each observation based on the
    transition value `tau`.
    """
    out = np.zeros(num_observations)
    out[:tau] = lambda_1  # lambda before tau is lambda_1
    out[tau:] = lambda_2  # lambda after (and including) tau is lambda_2
    return out
I am trying to generalize this to apply to 'n-lambdas', where taus.shape[0] = lambdas.shape[0] - 1, but I can only come up with this horribly slow numpy implementation.
@theano.compile.ops.as_op(itypes=[lvector, dvector], otypes=[dvector])
def lambda_n_distributions(taus, lambdas):
    out = np.zeros(num_observations)
    np_tau_indices = argsort(taus).eval()  # this .eval() is the slow part
    num_taus = taus.shape[0]
    for t in range(num_taus):
        if t == 0:
            # before the first switch point
            out[: taus[np_tau_indices[t]]] = lambdas[t]
        if t == num_taus - 1:
            # after (and including) the last switch point
            out[taus[np_tau_indices[t]]:] = lambdas[t + 1]
        else:
            # between switch points t and t + 1
            out[taus[np_tau_indices[t]]: taus[np_tau_indices[t + 1]]] = lambdas[t + 1]
    return out
Any ideas on how to speed this up using pure Theano (avoiding the call to .eval())? It's been a few years since I've used it and so don't know the right approach.
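For reference, the slicing loop itself can be vectorized with searchsorted; a numpy sketch of the idea (num_observations as above), though it does not by itself remove the .eval() call:
def lambda_n_distributions_vec(taus, lambdas):
    # for each observation index t, the number of switch points <= t
    # is exactly the index of the lambda that governs t
    sorted_taus = np.sort(taus)
    idx = np.searchsorted(sorted_taus, np.arange(num_observations), side='right')
    return lambdas[idx]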
Using a switch function is not recommended, as it breaks the nice geometry of the parameter space and makes sampling with modern samplers like NUTS difficult.
Instead, you can try modelling it using a continuous relaxation of the switch function. The main idea is to model the rate before the first switch point as a baseline, and to add the prediction from a logistic function after each switch point:
def logistic(L, x0, k=500, t=np.linspace(0., 1., 1000)):
    return L / (1 + tt.exp(-k * (t - x0)))

with pm.Model() as m2:
    lambda0 = pm.Normal('lambda0', mu, sd=sd)
    lambdad = pm.Normal('lambdad', 0, sd=sd, shape=nbreak - 1)
    trafo = Composed(pm.distributions.transforms.LogOdds(), Ordered())
    b = pm.Beta('b', 1., 1., shape=nbreak - 1, transform=trafo,
                testval=[0.3, 0.5])
    theta_ = pm.Deterministic('theta', tt.exp(lambda0 +
                                              logistic(lambdad[0], b[0]) +
                                              logistic(lambdad[1], b[1])))
    obs = pm.Poisson('obs', theta_, observed=y)
    trace = pm.sample(1000, tune=1000)
There are a few tricks I used here as well, for example the composed transformation, which is not in the PyMC3 code base yet. You can have a look at the full code here: https://gist.github.com/junpenglao/f7098c8e0d6eadc61b3e1bc8525dd90d
If you have more questions, please post to https://discourse.pymc.io with your model and (simulated) data. I check and answer on the PyMC3 discourse much more regularly.
I am playing around with this code, which is for univariate linear mixed effects modelling. The data set denotes:
students as s
instructors as d
departments as dept
service as service
In the syntax of R's lme4 package (Bates et al., 2015), the model implemented can be summarized as:
y ~ 1 + (1|students) + (1|instructor) + (1|dept) + service
where 1 denotes an intercept term, (1|x) denotes a random effect for x, and x denotes a fixed effect.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import edward as ed
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from edward.models import Normal
from observations import insteval
data, metadata = insteval("~/data")  # load the InstEval data set
data = pd.DataFrame(data, columns=metadata['columns'])
train = data.sample(frac=0.8)
test = data.drop(train.index)
train.head()
s_train = train['s'].values
d_train = train['dcodes'].values
dept_train = train['deptcodes'].values
y_train = train['y'].values
service_train = train['service'].values
n_obs_train = train.shape[0]
s_test = test['s'].values
d_test = test['dcodes'].values
dept_test = test['deptcodes'].values
y_test = test['y'].values
service_test = test['service'].values
n_obs_test = test.shape[0]
n_s = max(s_train) + 1 # number of students
n_d = max(d_train) + 1 # number of instructors
n_dept = max(dept_train) + 1 # number of departments
n_obs = train.shape[0] # number of observations
# Set up placeholders for the data inputs.
s_ph = tf.placeholder(tf.int32, [None])
d_ph = tf.placeholder(tf.int32, [None])
dept_ph = tf.placeholder(tf.int32, [None])
service_ph = tf.placeholder(tf.float32, [None])
# Set up fixed effects.
mu = tf.get_variable("mu", [])
service = tf.get_variable("service", [])
sigma_s = tf.sqrt(tf.exp(tf.get_variable("sigma_s", [])))
sigma_d = tf.sqrt(tf.exp(tf.get_variable("sigma_d", [])))
sigma_dept = tf.sqrt(tf.exp(tf.get_variable("sigma_dept", [])))
# Set up random effects.
eta_s = Normal(loc=tf.zeros(n_s), scale=sigma_s * tf.ones(n_s))
eta_d = Normal(loc=tf.zeros(n_d), scale=sigma_d * tf.ones(n_d))
eta_dept = Normal(loc=tf.zeros(n_dept), scale=sigma_dept * tf.ones(n_dept))
yhat = (tf.gather(eta_s, s_ph) +
tf.gather(eta_d, d_ph) +
tf.gather(eta_dept, dept_ph) +
mu + service * service_ph)
y = Normal(loc=yhat, scale=tf.ones(n_obs))
#Inference
q_eta_s = Normal(
loc=tf.get_variable("q_eta_s/loc", [n_s]),
scale=tf.nn.softplus(tf.get_variable("q_eta_s/scale", [n_s])))
q_eta_d = Normal(
loc=tf.get_variable("q_eta_d/loc", [n_d]),
scale=tf.nn.softplus(tf.get_variable("q_eta_d/scale", [n_d])))
q_eta_dept = Normal(
loc=tf.get_variable("q_eta_dept/loc", [n_dept]),
scale=tf.nn.softplus(tf.get_variable("q_eta_dept/scale", [n_dept])))
latent_vars = {
eta_s: q_eta_s,
eta_d: q_eta_d,
eta_dept: q_eta_dept}
data = {
y: y_train,
s_ph: s_train,
d_ph: d_train,
dept_ph: dept_train,
service_ph: service_train}
inference = ed.KLqp(latent_vars, data)
This works fine in the univariate case for Linear mixed effects modelling. I am trying to extend this approach to the multivariate case. Any ideas are more than welcome.
There are a number of ways to fit linear mixed effects models in Python. It looks like you've adapted the TensorFlow approach, but if that is not a hard requirement, there are several other, potentially more convenient, options.
You can use the Statsmodels implementation of LMER (MixedLM), which is conveniently contained in Python, although its syntax differs from the traditional formula expressions of R's LMER. Since you are already using Python to split your data into training and test sets, you can also write a loop that calls the model on each split.
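A minimal statsmodels sketch of the crossed-random-intercepts model from your question (assuming the train DataFrame above; crossed effects are expressed through a single all-encompassing group plus vc_formula, and fitting can be slow):
import statsmodels.formula.api as smf
train = train.copy()
train["group"] = 1  # one dummy group, so the crossed effects become variance components
vcf = {"s": "0 + C(s)", "dcodes": "0 + C(dcodes)", "deptcodes": "0 + C(deptcodes)"}
md = smf.mixedlm("y ~ service", train, groups="group", vc_formula=vcf)
mdf = md.fit()
print(mdf.summary())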
You can also install R and rpy2 on your local machine and call LMER from your Python environment. This lets you keep your familiarity with R while doing everything else in Python: all you have to do is use the rmagic %%R (or %R for inline) in your Jupyter Notebook cell blocks to pass variables and models between Python and R. That is useful if you want to pass the train/test data you split in Python to R, run lmer, and retrieve the parameters back in a loop.
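If you are in a plain Python script rather than a notebook, the same round trip can be done with rpy2 directly; a hedged sketch, assuming R and lme4 are installed and using the column names from your question:
import rpy2.robjects as ro
from rpy2.robjects import pandas2ri
pandas2ri.activate()           # automatic pandas <-> R data.frame conversion
ro.globalenv["train"] = train  # push the training DataFrame into R
ro.r("library(lme4)")
fit = ro.r("lmer(y ~ 1 + service + (1|s) + (1|dcodes) + (1|deptcodes), data=train)")
print(ro.r["fixef"](fit))      # fixed-effect estimates back in Python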
Lastly, another option is Pymer4, which is a wrapper around rpy2 that lets you call LMER in R directly without having to deal with rmagic.
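Pymer4 keeps the lme4 formula syntax; a minimal sketch, again using your question's column names:
from pymer4.models import Lmer
model = Lmer("y ~ service + (1|s) + (1|dcodes) + (1|deptcodes)", data=train)
model.fit()         # fits via lme4 under the hood
print(model.coefs)  # fixed-effect estimates as a pandas DataFrame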
I wrote a tutorial on how to use LMER with each of these methods; it also works on cloud setups like Google Colab. All of these methods will let you run the multivariate approach you asked about, using LMER in R, from a Python environment.