Why doesn't this Boolean Variable in or-tools work? - python

I'm getting back into Google's OR-Tools and trying a (relatively) simple optimization. I'm using the CP-SAT solver, and I'm probably missing something elementary here.
I have variables x and y and some constant c. I'd like x to be equal to 1 if y is smaller than c, and 0 otherwise.
from ortools.sat.python import cp_model

solver = cp_model.CpSolver()
model = cp_model.CpModel()

c = 50
x = model.NewBoolVar(name='x')
y = model.NewIntVar(name='y', lb=0, ub=2**10)
model.Add(x == (y < c))  # this is the line that raises the TypeError
model.Maximize(x + y)
status = solver.Solve(model)
I'm getting an error message:
TypeError: bad operand type for unary -: 'BoundedLinearExpression'
It seems I'm misusing the OR-Tools syntax for my constraint here. I'm having difficulty understanding the online OR-Tools documentation, and I seem to have forgotten quite a bit more about it than I thought.

According to an example from here, you're almost there.
Build x == (y < c) constraint
Case 1: x = true
If x is true, then y < c must also be true. That's exactly what OnlyEnforceIf method is for. If an argument of OnlyEnforceIf is true, then the constraint in the Add method will be activated:
model.Add(y < c).OnlyEnforceIf(x)
Case 2: x = false
As mentioned in Case 1, OnlyEnforceIf won't activate the constraint if its argument does not evaluate to true. Hence, you cannot just leave the case y >= c on its own and hope that it will be implied when x = false. So, add the reverse constraint for this case as below:
model.Add(y >= c).OnlyEnforceIf(x.Not())
Since, for x = false, x.Not() will be true, the constraint y >= c will be activated and used when solving your equation.
Full Code
from ortools.sat.python import cp_model

solver = cp_model.CpSolver()
model = cp_model.CpModel()

c = 50
x = model.NewBoolVar(name='x')
y = model.NewIntVar(name='y', lb=0, ub=2**10)
model.Add(y < c).OnlyEnforceIf(x)
model.Add(y >= c).OnlyEnforceIf(x.Not())
model.Maximize(x + y)
status = solver.Solve(model)
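With this model, the solver should push y to its upper bound (1024) and set x = 0, since setting x = 1 would force y below 50 while gaining only 1 in the objective. A quick way to inspect the solution (a sketch; cp_model.OPTIMAL, cp_model.FEASIBLE and solver.Value are standard CP-SAT names):

if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    # expected: x = 0, y = 1024
    print('x =', solver.Value(x), 'y =', solver.Value(y))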

Related

Modulo operation on Real values (Z3Py)

I want to implement the modulo operation using Z3Py. I've found this discussion on the Z3 GitHub page where one of the creators offers the following solution. However, I'm not sure I fully understand it.
from z3 import *

mod = z3.Function('mod', z3.RealSort(), z3.RealSort(), z3.RealSort())
quot = z3.Function('quot', z3.RealSort(), z3.RealSort(), z3.IntSort())
s = z3.Solver()

def mk_mod_axioms(X, k):
    s.add(Implies(k != 0, 0 <= mod(X, k)),
          Implies(k > 0, mod(X, k) < k),
          Implies(k < 0, mod(X, k) < -k),
          Implies(k != 0, k * quot(X, k) + mod(X, k) == X))

x, y = z3.Reals('x y')
mk_mod_axioms(x, 3)
mk_mod_axioms(y, 5)
print(s)
If you set no additional constraints, the model evaluates everything to 0, the first solution found. If you add constraints that x and y should be less than 0, it produces correct solutions. However, if you add the constraint that x and y should be above 0, it produces incorrect results.
s.add(x > 0)
s.add(y > 0)
The model evaluates to 1/2 for x and 7/2 for y.
Here's the model z3 prints:
sat
[y = 7/2,
x = 1/2,
mod = [(7/2, 5) -> 7/2, else -> 1/2],
quot = [else -> 0]]
So, what it's telling you is that it "picked" mod and quot to be functions that are:
def mod(x, y):
    if x == 3.5 and y == 5:
        return 3.5
    else:
        return 0.5

def quot(x, y):
    return 0
Now go over the axioms you put in: you'll see that the model does satisfy them just fine, so there's nothing really wrong with it.
What the answer you linked to describes is what sort of properties you can state to get a "reasonable" model, not that it's the unique such model. In particular, you want quot to be the maximum such value, but nothing in the axioms requires that.
Long story short, the answer you're getting is correct, but it's perhaps not useful. Axiomatizing further will take more work; in particular you'll need quantifiers, and SMT solvers don't deal with such specifications that well. It all depends on what you're trying to achieve: for specific problems you can get away with a simpler model. Without knowing your actual application, the only thing we can say is that this axiomatization is too weak for your use case.
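For instance, if the divisor is a known positive constant, one way to strengthen the axioms is to pin quot down as the floor of the quotient via Z3's ToInt. This is a sketch of one possible fix, not part of the linked answer; it reuses s, mod, quot, and x from above:

# ToInt rounds a real down to an integer, so this forces quot(x, 3) = floor(x / 3).
# Combined with the axiom k * quot(X, k) + mod(X, k) == X, it pins down mod(x, 3).
s.add(quot(x, 3) == ToInt(x / 3))
print(s.check())   # should still be sat
print(s.model())   # mod(x, 3) now takes its intended value for the chosen x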

Pyomo: How to include a penalty in the objective function

I'm trying to minimize the cost of manufacturing a product with two machines. The cost of machine A is $30/product and the cost of machine B is $40/product.
There are two constraints:
we must cover a demand of 50 products per month (x+y >= 50)
the cheap machine (A) can only manufacture 40 products per month (x<=40)
So I created the following Pyomo code:
from pyomo.environ import *

model = ConcreteModel()
model.x = Var(domain=NonNegativeReals)
model.y = Var(domain=NonNegativeReals)

def production_cost(m):
    return 30*m.x + 40*m.y

# Objective
model.mycost = Objective(expr=production_cost, sense=minimize)

# Constraints
model.demand = Constraint(expr=model.x + model.y >= 50)
model.maxA = Constraint(expr=model.x <= 40)

# Let's solve it
results = SolverFactory('glpk').solve(model)

# Display the solution
print('Cost=', model.mycost())
print('x=', model.x())
print('y=', model.y())
It works OK, with the obvious solution x=40, y=10 (cost = $1600).
However, if we start to use the machine B, there will be a fixed penalty of $300 over the cost.
I tried with
def production_cost(m):
    if m.y > 0:
        return 30*m.x + 40*m.y + 300
    else:
        return 30*m.x + 40*m.y
But I get the following error message
Rule failed when generating expression for Objective mycost with index
None: PyomoException: Cannot convert non-constant Pyomo expression (0 <
y) to bool. This error is usually caused by using a Var, unit, or mutable
Param in a Boolean context such as an "if" statement, or when checking
container membership or equality. For example,
    >>> m.x = Var()
    >>> if m.x >= 1:
    ...     pass
and
    >>> m.y = Var()
    >>> if m.y in [m.x, m.y]:
    ...     pass
would both cause this exception.
I do not know how to implement this condition to include the penalty in the objective function through the Pyomo code.
Since m.y is a Var, you cannot use an if statement with it. You can always introduce a binary variable and use the big-M approach, as Airsquid said. This approach is usually not recommended, since it turns the problem from an LP into a MILP, but it is effective.
You just need to create a new Binary Var:
model.bin_y = Var(domain=Binary)
Then constrain model.y to be zero if model.bin_y is zero, or else be any value between its bounds. I use a bound of 100 here, but you could even use the demand:
model.bin_y_cons = Constraint(expr= model.y <= model.bin_y*100)
Then, in your objective, just add the fixed penalty of 300 whenever the binary is active:

def production_cost(m):
    return 30*m.x + 40*m.y + 300*m.bin_y

model.mycost = Objective(rule=production_cost, sense=minimize)
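Re-solving should now give a cost of $1900: the demand of 50 exceeds machine A's limit of 40, so y = 10 > 0 and the $300 penalty always applies in this instance. A quick check (a sketch; it assumes GLPK is available, as in the original code, and GLPK does handle MILPs):

results = SolverFactory('glpk').solve(model)
print('Cost =', model.mycost())   # expected: 1900.0 (30*40 + 40*10 + 300)
print('x =', model.x(), 'y =', model.y(), 'bin_y =', model.bin_y())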

PuLP: How to write a multi variable constraint?

I am trying to solve this optimization problem in Python. I have written the following code using PuLP:
import pulp

D = range(0, 10)
F = range(0, 10)

x = pulp.LpVariable.dicts("x", (D), 0, 1, pulp.LpInteger)
y = pulp.LpVariable.dicts("y", (F, D), 0, 1, pulp.LpInteger)

model = pulp.LpProblem("Scheduling", pulp.LpMaximize)
model += pulp.lpSum(x[d] for d in D)

for f in F:
    model += pulp.lpSum(y[f][d] for d in D) == 1
for d in D:
    model += x[d]*pulp.lpSum(y[f][d] for f in F) == 0

model.solve()
The second-to-last line here raises: TypeError: Non-constant expressions cannot be multiplied. I understand why it raises this: PuLP cannot solve non-linear optimization problems. Is it possible to formulate this problem as a proper linear problem, such that it can be solved using PuLP?
It is always a good idea to start with a mathematical model. You have:

max sum(d, x[d])
sum(d, y[f,d]) = 1   ∀f
x[d]*sum(f, y[f,d]) = 0   ∀d
x[d], y[f,d] ∈ {0,1}
The last constraint is non-linear (it is quadratic), and this cannot be handled by PuLP. The constraint can be interpreted as:
x[d] = 0 or sum(f,y[f,d]) = 0 ∀d
or
x[d] = 1 ==> sum(f,y[f,d]) = 0 ∀d
This can be linearized as:
sum(f,y[f,d]) <= (1-x[d])*M
where M = |F| (number of elements in F, i.e. |F|=10). You can check that:
x[d]=0 => sum(f,y[f,d]) <= M (i.e. non-binding)
x[d]=1 => sum(f,y[f,d]) <= 0 (i.e. zero)
So you need to replace your quadratic constraint with this linear one.
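In PuLP, the replacement might look like this (a sketch reusing the names from the question's code):

# Linearized version of the quadratic constraint, with big-M set to |F|.
M = len(F)
for d in D:
    model += pulp.lpSum(y[f][d] for f in F) <= (1 - x[d]) * M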
Note that this is not the only formulation. You could also linearize the individual terms z[f,d]=x[d]*y[f,d]. I'll leave that as an exercise.

Integral of piecewise function gives incorrect result

Using a recent version of SymPy (0.7.6), I get the following incorrect result when determining the integral of a function with support [0, y):
from sympy import *
a,b,c,x,z = symbols("a,b,c,x,z",real = True)
y = Symbol("y",real=True,positive=True)
inner = Piecewise((0,(x>=y)|(x<0)|(b>c)),(a,True))
I = Integral(inner,(x,0,z))
Eq(I,I.doit())
This is incorrect, as the actual result should have the last two cases swapped. This can be confirmed by checking the derivative:
Derivative(I.doit(),z).doit().simplify().subs(z,x)
which reduces to 0 everywhere.
Interestingly, when dropping the condition (b>c) by substituting inner = Piecewise((0,(x>=y)|(x<0)),(a,True)) I get a TypeError:
TypeError: cannot determine truth value of
-oo < y
Am I using the library incorrectly or is this actually a serious sympy bug?
Yes, SymPy 0.7.6 is wrong in this case, and in some other such cases. Generally, I don't know of any symbolic math package that I would trust to do calculus with piecewise-defined functions.
Note that although
inner = Piecewise((0, (x>=y)|(x<0)), (a,True))
throws a TypeError at integration time, a logically equivalent definition
inner = Piecewise((a, (x<y)&(x>=0)), (0,True))
leads to the correct result
Piecewise((a*z, And(z < y, z >= 0)), (0, And(z <= 0, z >= -oo)), (a*y, True))
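One way to gain confidence in this result is to repeat the question's derivative check on the workaround (a sketch using the same symbols as above); it should recover the integrand instead of collapsing to 0:

inner_fixed = Piecewise((a, (x < y) & (x >= 0)), (0, True))
I_fixed = Integral(inner_fixed, (x, 0, z))
print(Derivative(I_fixed.doit(), z).doit().simplify().subs(z, x))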
By the way, the previous version, sympy 0.7.5, handles
inner = Piecewise( (0, (x>=y)|(x<0)), (a,True) )
without a TypeError, producing the correct result (in a different form):
Piecewise((0, z <= 0), (a*y, z >= y), (a*z, True))
Here is another, simpler example of buggy behavior:
>>> Integral(Piecewise((1,(x<1)|(z<x)), (0,True)) ,(x,0,2)).doit()
-Max(0, Min(2, Max(0, z))) + 3
>>> Integral(Piecewise((1,(x<1)|(x>z)), (0,True)) ,(x,0,2)).doit()
-Max(0, Min(2, Max(1, z))) + 3
The first result is incorrect (it fails for z=0, for example). The second is correct. The only difference between the two formulas is z<x vs. x>z.

How can I avoid value errors when using numpy.random.multinomial?

When I use this random generator: numpy.random.multinomial, I keep getting:
ValueError: sum(pvals[:-1]) > 1.0
I am always passing the output of this softmax function:
import numpy

def softmax(w, t=1.0):
    e = numpy.exp(numpy.array(w) / t)
    dist = e / numpy.sum(e)
    return dist
Since I started getting this error, I also added this for the parameter (pvals):
while numpy.sum(pvals) > 1:
    pvals /= (1 + 1e-5)
but that didn't solve it. What is the right way to make sure I avoid this error?
EDIT: here is the function that includes this code:
def get_MDN_prediction(vec):
    coeffs = vec[::3]
    means = vec[1::3]
    stds = np.log(1 + np.exp(vec[2::3]))
    stds = np.maximum(stds, min_std)   # min_std is defined elsewhere
    coe = softmax(coeffs)
    while np.sum(coe) > 1 - 1e-9:
        coe /= (1 + 1e-5)
    coeff = unhot(np.random.multinomial(1, coe))   # unhot is defined elsewhere
    return np.random.normal(means[coeff], stds[coeff])
I also encountered this problem during my language-modelling work.
The root of this problem arises from numpy's implicit data casting: the output of my softmax() is of type float32, but numpy.random.multinomial() casts pvals to float64 IMPLICITLY. This cast can make pvals.sum() exceed 1.0 due to numerical rounding.
The issue is recognized and posted here.
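A minimal way to see the effect (a sketch; whether the sum actually overshoots depends on the particular values drawn):

import numpy as np

# Normalize in float32, then re-sum after the kind of float64 cast
# that numpy.random.multinomial performs internally.
p = np.random.rand(10).astype(np.float32)
p /= p.sum()
print(p.astype(np.float64).sum())  # may come out slightly above 1.0
# np.random.multinomial(1, p) raises ValueError on exactly such draws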
I know the question is old, but since I faced the same problem just now, it seems to me it's still valid. Here's the solution I've found for it:
import numpy as np

a = np.asarray(a).astype('float64')   # a is your vector of probabilities
a = a / np.sum(a)
b = np.random.multinomial(1, a, 1)
The important part is the explicit cast to float64 in the first line. If you omit that part, the problem you've mentioned will happen from time to time. But if you change the type of the array to float64, it will never happen.
Something that few people notice: a robust version of the softmax can be easily obtained by subtracting the logsumexp from the values:

from scipy.misc import logsumexp  # in newer SciPy versions: from scipy.special import logsumexp

def log_softmax(vec):
    return vec - logsumexp(vec)

def softmax(vec):
    return np.exp(log_softmax(vec))
Just check it:
print(softmax(np.array([1.0, 0.0, -1.0, 1.1])))
Simple, isn't it?
The softmax implementation I was using was not stable enough for the values I was using it with. As a result, sometimes the output had a sum greater than 1 (e.g. 1.0000024...).
This case should be handled by the while loop. But sometimes the output contains NaNs, in which case the loop is never triggered, and the error persists.
Also, numpy.random.multinomial doesn't raise an error if it sees a NaN.
Here is what I'm using right now, instead:
def softmax(vec):
    vec -= min(np.asarray(vec))   # shift so the smallest value is 0 (the original used an undefined helper A(); np.asarray is assumed)
    if max(vec) > 700:
        # exp() would overflow, so compress the largest values
        a = np.argsort(vec)
        aa = np.argsort(a)
        vec = vec[a]
        i = 0
        while max(vec) > 700:
            i += 1
            vec -= vec[i]
        vec = vec[aa]
    e = np.exp(vec)
    return e / np.sum(e)
def sample_multinomial(w):
    """
    Sample a multinomial distribution with parameters given by softmax of w.
    Returns an int.
    """
    p = softmax(w)
    x = np.random.uniform(0, 1)
    for i, v in enumerate(np.cumsum(p)):
        if x < v:
            return i
    return len(p) - 1  # shouldn't happen...
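For example (a sketch; index 1 should dominate since it has the largest weight):

import numpy as np

w = np.array([0.5, 1.5, 0.1])
print([sample_multinomial(w) for _ in range(5)])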
