Pyomo: How to include a penalty in the objective function - python

I'm trying to minimize the cost of manufacturing a product with two machines. The cost of machine A is $30/product and cost of machine B is $40/product.
There are two constraints:
we must cover a demand of 50 products per month (x+y >= 50)
the cheap machine (A) can only manufacture 40 products per month (x<=40)
So I created the following Pyomo code:
from pyomo.environ import *
model = ConcreteModel()
model.x = Var(domain=NonNegativeReals)
model.y = Var(domain=NonNegativeReals)
def production_cost(m):
    return 30*m.x + 40*m.y
# Objective
model.mycost = Objective(expr = production_cost, sense=minimize)
# Constraints
model.demand = Constraint(expr = model.x + model.y >= 50)
model.maxA = Constraint(expr = model.x <= 40)
# Let's solve it
results = SolverFactory('glpk').solve(model)
# Display the solution
print('Cost=', model.mycost())
print('x=', model.x())
print('y=', model.y())
It works fine, with the obvious solution x=40; y=10 (cost = $1600).
However, if we start using machine B, a fixed penalty of $300 is added to the cost.
I tried with
def production_cost(m):
    if (m.y > 0):
        return 30*m.x + 40*m.y + 300
    else:
        return 30*m.x + 40*m.y
But I get the following error message
Rule failed when generating expression for Objective mycost with index None:
PyomoException: Cannot convert non-constant Pyomo expression (0 < y) to bool.
This error is usually caused by using a Var, unit, or mutable Param in a
Boolean context such as an "if" statement, or when checking container
membership or equality. For example,
    >>> m.x = Var()
    >>> if m.x >= 1:
    ...     pass
and
    >>> m.y = Var()
    >>> if m.y in [m.x, m.y]:
    ...     pass
would both cause this exception.
I do not know how to implement the condition that includes the penalty in the objective function in the Pyomo code.

Since m.y is a Var, you cannot use it in an if statement. You can instead use a binary variable with the Big-M approach, as Airsquid suggested. This approach is usually not recommended, since it turns the problem from an LP into a MILP, but it is effective.
You just need to create a new Binary Var:
model.bin_y = Var(domain=Binary)
Then constrain model.y to be zero if model.bin_y is zero, or else any value between its bounds. I use a bound of 100 here, but you could even use the demand:
model.bin_y_cons = Constraint(expr= model.y <= model.bin_y*100)
then, in your objective just apply the new fixed value of 300:
def production_cost(m):
    return 30*m.x + 40*m.y + 300*m.bin_y

model.mycost = Objective(rule=production_cost, sense=minimize)
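If you want to convince yourself that the Big-M model gives the right answer before running a solver, a quick brute-force sweep over integer production levels (plain Python, no Pyomo needed) reproduces the penalized optimum:

```python
# Brute-force sanity check of the fixed-charge model:
# minimize 30x + 40y + 300*[y > 0]  s.t.  x + y >= 50, x <= 40.
# An integer grid is enough to check this small model by hand.
best = None
for x in range(41):                 # machine A capacity: x <= 40
    for y in range(61):
        if x + y < 50:              # demand: x + y >= 50
            continue
        cost = 30 * x + 40 * y + (300 if y > 0 else 0)
        if best is None or cost < best[0]:
            best = (cost, x, y)

print(best)  # (1900, 40, 10)
```

The optimum is unchanged (x = 40, y = 10), but the reported cost now includes the $300 fixed charge, which is what the solver should return for the MILP as well.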

Related

Why doesn't this Boolean Variable in or-tools work?

I'm getting back into Google's OR tools and trying a (relatively) simple optimization. I'm using the CP SAT solver and I'm probably missing something elementary here.
I have some variables x, y and some constant c. I'd like x to be equal to 1 if y is smaller than that c, and 0 otherwise.
from ortools.sat.python import cp_model
solver = cp_model.CpSolver()
model = cp_model.CpModel()
c = 50
x = model.NewBoolVar(name='x')
y = model.NewIntVar(name='y', lb=0, ub=2**10)
model.Add(x == (y < c))
model.Maximize(x+y)
status = solver.Solve(model)
I'm getting an error message
TypeError: bad operand type for unary -: 'BoundedLinearExpression'
It seems I'm misusing OR tools syntax for my constraint here. I'm having difficulties understanding the documentation for OR tools online, and I seem to have forgotten quite a bit more about it than I thought.
According to an example from here, you're almost there.
Build x == (y < c) constraint
Case 1: x = true
If x is true, then y < c must also be true. That's exactly what the OnlyEnforceIf method is for: if its argument is true, the constraint in the Add method is activated:
model.Add(y < c).OnlyEnforceIf(x)
Case 2: x = false
As mentioned in Case 1, OnlyEnforceIf does not activate the constraint if its argument is not true. Hence, you cannot just leave the case y >= c on its own and hope that it will be implied when x = false. So, add the reverse constraint for this case:
model.Add(y >= c).OnlyEnforceIf(x.Not())
Since x.Not() is true when x = false, the constraint y >= c will be activated and enforced when solving the model.
Full Code
from ortools.sat.python import cp_model
solver = cp_model.CpSolver()
model = cp_model.CpModel()
c = 50
x = model.NewBoolVar(name='x')
y = model.NewIntVar(name='y', lb=0, ub=2**10)
model.Add(y < c).OnlyEnforceIf(x)
model.Add(y >= c).OnlyEnforceIf(x.Not())
model.Maximize(x+y)
status = solver.Solve(model)
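Because the domains here are tiny, you can cross-check the reified model by brute force in plain Python (no OR-Tools needed): for each y, x is fully determined by x == (y < c), so just maximize x + y over the domain of y:

```python
# Enumerate y in [0, 1024]; x is forced to equal (y < c).
c = 50
value, y_best = max((int(y < c) + y, y) for y in range(2**10 + 1))

print(value, y_best)  # 1024 1024: taking y = 1024 forces x = 0
```

This matches what the CP-SAT model above should report: the solver prefers a large y and gives up the single unit contributed by x.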

How to deal with inconsistent pulp solution depending on specific inputs?

I have set up a small script that describes a diet optimization problem in pulp. The particular numbers are not really relevant; they are just macros from foods. The strange thing is that when one of protein_ratio, carb_ratio or fat_ratio is 0.1, the problem becomes infeasible. For other combinations of these factors (which should always add up to 1) the problem has a solution. Is there any way to relax the objective function so that the solution may have a small error margin? For example, instead of giving the grams that lead to an 800-calorie meal, it would give the grams that lead to an 810-calorie meal. That would still be acceptable. Here is the script:
from pulp import *
target_calories = 1500
protein_ratio = 0.4 #play around with this - 0.1 breaks it
carb_ratio = 0.4 #play around with this - 0.1 breaks it
fat_ratio = 0.2 #play around with this - 0.1 breaks it
problem = LpProblem("diet", sense = LpMinimize)
gramsOfMeat = LpVariable("gramsOfMeat", lowBound = 1)
gramsOfPasta = LpVariable("gramsOfPasta", lowBound = 1 )
gramsOfOil = LpVariable("gramsOfOil", lowBound = 1)
problem += gramsOfMeat*1.29 + gramsOfPasta*3.655 + gramsOfOil*9 - target_calories
totalprotein = gramsOfMeat*0.21 + gramsOfPasta*0.13 + gramsOfOil*0
totalcarb = gramsOfMeat*0 + gramsOfPasta*0.75 + gramsOfOil*0
totalfat = gramsOfMeat*0.05 + gramsOfPasta*0.015 + gramsOfOil*1
totalmacros = totalprotein + totalcarb + totalfat
problem += totalfat== fat_ratio*totalmacros
problem += totalcarb == carb_ratio*totalmacros
problem += totalprotein == protein_ratio*totalmacros
problem += gramsOfMeat*1.29 + gramsOfPasta*3.655 + gramsOfOil*9 - target_calories == 0
status = problem.solve()
print(status)
#assert status == pulp.LpStatusOptimal
#print(totalmacros)
print("Grams of meat: {}, grams of pasta: {}, grams of oil: {}, error: {}".format(value(gramsOfMeat), value(gramsOfPasta), value(gramsOfOil), value(problem.objective)))
You can add a penalty for violating the target. The idea is to introduce two new nonnegative decision variables, say under and over, that measure the shortfall and the excess relative to the calorie target, and add constraints that say
under = LpVariable("under", lowBound=0)
over = LpVariable("over", lowBound=0)
problem += gramsOfMeat*1.29 + gramsOfPasta*3.655 + gramsOfOil*9 - target_calories <= over
problem += target_calories - (gramsOfMeat*1.29 + gramsOfPasta*3.655 + gramsOfOil*9) <= under
(Also drop the hard constraint that forces the calorie total to equal target_calories exactly; otherwise the soft version has no effect.)
Then change your objective function to something like
problem += c_under * under + c_over * over
where c_under is the penalty per unit for being under the target and c_over is the penalty for being over. (These are parameters.) If you want to impose a hard bound on the over/under, you can add new constraints:
problem += under <= max_under
problem += over <= max_over
where max_under and max_over are the maximum allowable deviations (again, parameters).
One note: Your model is a little weird because it doesn't really have an objective function. Normally in the diet problem you want to minimize cost or maximize calories or something like that, and in general in linear programming you want to minimize or maximize something. In your model, you only have constraints. True, there is something that looks like an objective function --
problem += gramsOfMeat*1.29 + gramsOfPasta*3.655 + gramsOfOil*9 - target_calories
-- but since you have constrained this to equal 0, it doesn't really have any effect. There's certainly nothing incorrect about not having an objective function, but it's unusual, and I wanted to mention it in case this is not what you intended.
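As for why a ratio of 0.1 makes the model infeasible: the three ratio equalities (one of which is redundant, since the ratios sum to 1) plus the exact calorie constraint pin down a unique solution, and for some ratio combinations that solution violates the lowBound = 1 on a variable. A pure-Python check shows this (no pulp needed; the macro coefficients are copied from the script, and the ratio combination 0.45/0.45/0.1 is one hypothetical choice that sums to 1):

```python
def solve3(A, b):
    # Tiny Gaussian elimination with partial pivoting for a 3x3 system.
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Per-gram macros of (meat, pasta, oil), copied from the script.
protein = [0.21, 0.13, 0.0]
carb    = [0.00, 0.75, 0.0]
fat     = [0.05, 0.015, 1.0]
cal     = [1.29, 3.655, 9.0]
total   = [p + c + f for p, c, f in zip(protein, carb, fat)]

def pinned_solution(protein_ratio, fat_ratio, target=1500.0):
    # protein == pr*total, fat == fr*total, calories == target
    # (the carb equation is redundant because the ratios sum to 1).
    A = [[protein[i] - protein_ratio * total[i] for i in range(3)],
         [fat[i] - fat_ratio * total[i] for i in range(3)],
         cal]
    return solve3(A, [0.0, 0.0, target])

meat, pasta, oil = pinned_solution(0.45, 0.1)   # fat_ratio = 0.1 case
print(round(oil, 3))  # about 0.81 g of oil -- below the lowBound of 1
```

With fat_ratio = 0.1, the pinned-down solution needs less than 1 g of oil, which violates the lowBound of 1 on gramsOfOil, so pulp correctly reports the model infeasible; the under/over relaxation gives it room to move.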

How to write multiobjective function in Gurobi?

I am working with the multi-objective functionality of Gurobi 7.0, and I have two objective functions:
The first minimizes the summation of the product of the decision variables with coefficient matrix 1
The second minimizes the summation of the product of the decision variables with coefficient matrix 2
I am using the hierarchical or lexicographic approach, in which I set a priority for each objective and optimize in priority order.
I cannot use the model.setObjective() function here because I will not be able to specify the objective function number, and the model will get confused. How can I write both objective functions?
I've been testing this feature.
The documentation is not very clear about the way the objective functions must be set. However, I did the following:
Define the variables associated with each objective function (cost etc.)
Change the number of objectives: m.NumObj = 3
Set the parameters for each objective:
m.setParam(GRB.Param.ObjNumber, 0)
m.ObjNPriority = 5
m.ObjNName = 'Z'
m.ObjNRelTol = x/10.0
m.ObjNAbsTol = 0
Z.ObjN = 1.0

m.setParam(GRB.Param.ObjNumber, 1)
m.ObjNPriority = 4
m.ObjNName = 'Custo'
m.ObjNRelTol = x/10.0
m.ObjNAbsTol = 0
m.ObjNWeight = -1.0
Custo.ObjN = 1.0

m.setParam(GRB.Param.ObjNumber, 2)
m.ObjNPriority = 10
m.ObjNName = 'Hop'
m.ObjNRelTol = x/10.0
m.ObjNWeight = -1.0
Hop.ObjN = 1.0
In my case, there are three objective functions (Z, Custo, Hop).
The parameter GRB.Param.ObjNumber is used to change the objective function you are working on.
Another thing I concluded is that the number of each objective is determined by the order in which its associated variable is defined (to the best of my knowledge).
Details about definition of the objective function
Custo = m.addVar(vtype=GRB.INTEGER, name="Custo", obj=1)
m.update()

expr = []
for k in xrange(1, KSIZE):
    expr.append(quicksum(var_y[(l[0], l[1], k)] * links[l][0] for l in links.keys()))
    expr.append(quicksum(var_y[(l[1], l[0], k)] * links[l][0] for l in links.keys()))

m.addConstr(quicksum(expr) == Custo, name='custo')
m.update()
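The hierarchical idea itself is independent of Gurobi and worth seeing in isolation: optimize the top-priority objective first, then optimize the next objective only over solutions that keep the first at its optimum (ObjNAbsTol/ObjNRelTol control how much degradation is allowed at that step). A toy pure-Python sketch over a hypothetical explicit feasible set:

```python
# Lexicographic (hierarchical) optimization over an explicit feasible set.
# Each solution is a pair (x1, x2); obj1 has the higher priority and obj2
# only breaks ties among obj1-optimal solutions.
feasible = [(0, 5), (1, 4), (2, 3), (3, 1), (4, 0)]

def obj1(sol):
    return sol[0] + sol[1]      # minimized first

def obj2(sol):
    return sol[1]               # minimized second

best1 = min(obj1(sol) for sol in feasible)
tier1 = [sol for sol in feasible if obj1(sol) == best1]  # no degradation allowed
best = min(tier1, key=obj2)

print(best)  # (4, 0): obj1-optimal points are (3, 1) and (4, 0)
```

Gurobi's hierarchical mode does the analogous thing for linear objectives: it solves one model per priority level, constraining each higher-priority objective to stay within its tolerance of the value already found.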

How can I avoid value errors when using numpy.random.multinomial?

When I use this random generator: numpy.random.multinomial, I keep getting:
ValueError: sum(pvals[:-1]) > 1.0
I am always passing the output of this softmax function:
def softmax(w, t=1.0):
    e = numpy.exp(numpy.array(w) / t)
    dist = e / np.sum(e)
    return dist
except now that I am getting this error, I also added this for the parameter (pvals):
while numpy.sum(pvals) > 1:
    pvals /= (1+1e-5)
but that didn't solve it. What is the right way to make sure I avoid this error?
EDIT: here is function that includes this code
def get_MDN_prediction(vec):
    coeffs = vec[::3]
    means = vec[1::3]
    stds = np.log(1 + np.exp(vec[2::3]))
    stds = np.maximum(stds, min_std)
    coe = softmax(coeffs)
    while np.sum(coe) > 1 - 1e-9:
        coe /= (1 + 1e-5)
    coeff = unhot(np.random.multinomial(1, coe))
    return np.random.normal(means[coeff], stds[coeff])
I also encountered this problem during my language-modelling work.
The root of this problem arises from numpy's implicit data casting: the output of my softmax() is of type float32, but numpy.random.multinomial() casts the pvals to float64 IMPLICITLY. This cast can make pvals.sum() exceed 1.0 due to numerical rounding.
This issue is recognized and posted here
I know the question is old but since I faced the same problem just now, it seems to me it's still valid. Here's the solution I've found for it:
a = np.asarray(a).astype('float64')
a = a / np.sum(a)
b = np.random.multinomial(1, a, 1)
The important part is the cast to float64 (and the renormalization) in the first two lines. If you omit it, the problem you mentioned will happen from time to time; with the array cast to float64, it never does.
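A minimal, self-contained demonstration of that fix (assuming only NumPy; the probabilities here are made up):

```python
import numpy as np

# float32 probabilities: after numpy.random.multinomial's implicit cast
# to float64, their sum can drift slightly above 1.0 and raise ValueError.
p32 = np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32)

p = np.asarray(p32).astype('float64')   # the important part: cast first...
p = p / np.sum(p)                       # ...then renormalize in float64

counts = np.random.multinomial(100, p)  # no ValueError
print(counts.sum())  # 100 draws in total, split across the 4 outcomes
```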
Something that few people noticed: a numerically robust version of the softmax can be obtained by subtracting the logsumexp from the values:
from scipy.special import logsumexp  # scipy.misc.logsumexp in older SciPy versions

def log_softmax(vec):
    return vec - logsumexp(vec)

def softmax(vec):
    return np.exp(log_softmax(vec))
Just check it:
print(softmax(np.array([1.0, 0.0, -1.0, 1.1])))
Simple, isn't it?
The softmax implementation I was using is not stable enough for the values I was using it with. As a result, sometimes the output has a sum greater than 1 (e.g. 1.0000024...).
This case should be handled by the while loop. But sometimes the output contains NaNs, in which case the loop is never triggered, and the error persists.
Also, numpy.random.multinomial doesn't raise an error if it sees a NaN.
Here is what I'm using right now, instead:
def softmax(vec):
    vec -= np.min(vec)  # shift so the smallest entry is 0 (was min(A(vec)))
    if max(vec) > 700:
        a = np.argsort(vec)
        aa = np.argsort(a)
        vec = vec[a]
        i = 0
        while max(vec) > 700:
            i += 1
            vec -= vec[i]
        vec = vec[aa]
    e = np.exp(vec)
    return e/np.sum(e)

def sample_multinomial(w):
    """
    Sample a multinomial distribution with parameters given by softmax of w.
    Returns an int.
    """
    p = softmax(w)
    x = np.random.uniform(0, 1)
    for i, v in enumerate(np.cumsum(p)):
        if x < v:
            return i
    return len(p) - 1  # shouldn't happen...

How to define codependent functions in Python?

I need to plot the position of a particle at time t, given the following formulae: s(t) = -0.5*g(s)*t^2+v0*t, where g(s) = G*M/(R+s(t))^2 (G, M, and R are constants, s being a value, not the function s(t)). The particle is being shot up vertically, and I want to print its current position every second until it hits the ground. But I can't figure out how to define one function without using the other before it's defined. This is my code so far:
G = 6.6742*10^(-11)
M = 5.9736*10^24
R = 6371000
s0 = 0
v0 = 300
t = 0
dt = 0.005
def g(s):
    def s(t):
        s(t) = -0.5*g(s)*t^2+v0*t
    g(s) = G*M/(R+s(t))^2
def v(t):
    v(t) = v(t-dt)-g(s(t-dt))*dt
while s(t) >= 0:
    s(t) = s(t-dt)+v(t)*dt
    t = t+dt
    if t == int(t):
        print s(t)
When I run the function, it says that it can't assign the function call.
The error means that you can't write s(t) = x, because s(t) is a function, and assignment on functions is performed with def .... Instead, you'll want to return the value, so you'd rewrite it like this:
def g(s):
    def s(t):
        return -0.5*g(s)*t^2+v0*t
    return G*M/(R+s(t))^2
However, there are other issues with that as well. From a computational standpoint, this calculation would never terminate. Python is not an algebra system and can't solve for certain values. If you try to call s(t) within g(s), and g(s) within s(t), you'd never terminate, unless you define a termination condition. Otherwise they'll keep calling each other, until the recursion stack is filled up and then throws an error.
Also, since you defined s(t) within g(s), you can't call it from the outside, as you do several times further down in your code.
You seem to be confused about several syntactic and semantic specifics of Python. If you tell us exactly what you'd like to do and provide the mathematical formulae for it, it will be easier to formulate an answer that helps you.
Edit:
To determine the position of a particle at time t, you'll want the following code (reformatted your code to Python syntax, use ** instead of ^ and return statements):
G = 6.6742*10**(-11)
M = 5.9736*10**24
R = 6371000
s0 = 0
v0 = 300
t = 0
dt = 0.005
sc = s0  # Current position of the particle, initially at s0

def g(s):
    return -G*M/(R+s)**2

def s(t):
    return 0.5*g(sc)*t**2 + v0*t + s0

count = 0
while s(t) >= 0:
    if count % 200 == 0:
        print(sc)
    sc = s(t)
    count += 1
    t = dt*count
Python functions can call each other, but that's not how a function returns a value. To make a function return a particular value, use return, e.g.,
def v(t):
    return v(t - dt) - g(s(t - dt)) * dt
Furthermore, I don't really understand what you're trying to do with this, but you'll probably need to express yourself differently:
while s(t) >= 0:
    s(t) = s(t-dt)+v(t)*dt
    t = t+dt
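For completeness, here is the kind of time-stepping loop the question seems to be after, with no mutual recursion at all: s and v are plain variables updated once per step with the current gravity (same constants as above; this is a simple Euler integration, so it accumulates a small numerical error):

```python
G = 6.6742e-11
M = 5.9736e24
R = 6371000.0
v0 = 300.0
dt = 0.005

def g(s):
    # Gravitational acceleration at height s above the surface.
    return G * M / (R + s)**2

s, v, t, step = 0.0, v0, 0.0, 0
steps_per_second = round(1 / dt)
while s >= 0.0:
    v -= g(s) * dt                    # gravity slows the particle down
    s += v * dt                       # then the position is advanced
    step += 1
    t = step * dt
    if step % steps_per_second == 0:  # print once per simulated second
        print('t = %3.0f s, s = %8.1f m' % (t, s))
```

The particle climbs for roughly half a minute, then falls back; the loop exits as soon as s drops below 0, i.e. after about a minute of simulated flight.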
