Is a Pyomo equality expression commutative?

Here's a constraint defined by a function:
def my_constraint(model, j):
    a = sum(model.variable_1[i, j] for i in model.i) + sum(model.variable_2[o, j] for o in model.o if o != j)
    b = model.variable_3[j]
    # Apparently, the order matters!?
    return a == b
    # return b == a

model.my_constraint = pe.Constraint(model.j, rule=my_constraint)
I assumed the order of the terms of the equality wouldn't matter, but if I switch them, I get different results.
I don't know how to get to the bottom of this.
The generated .nl files differ slightly, but I'm at a dead end as I don't know how to interpret them.
Investigating .nl files
Two three-line sets have a sign difference.
File 1:
[...]
24 1
32 -1
35 1
J78 3
25 1
33 -1
34 1
[...]
File 2:
[...]
24 -1
32 1
35 -1
J78 3
25 -1
33 1
34 -1
[...]
When feeding both files to ipopt, I get "infeasible" with file 1 and a solution with file 2. If I edit file 1 to change the signs in either the first or the second three-line set, I get convergence with the same results as file 2.
So the order of the terms in the equality should not matter, but when I change it, I get a sign difference in the .nl file that does matter.
Simple example demonstrating how the order of the terms affects the .nl file
from pyomo.environ import ConcreteModel, Set, Var, Constraint, Objective
from pyomo.opt import SolverFactory

model = ConcreteModel()
model.i = Set(initialize=['I1'])
model.j = Set(initialize=['J1'])
model.v1 = Var(model.i, model.j)
model.v2 = Var(model.i, model.j)
model.v3 = Var(initialize=0, bounds=(0, None))

def c1(model, i, j):
    # return model.v2[i, j] == model.v1[i, j]
    return model.v1[i, j] == model.v2[i, j]
model.c1 = Constraint(model.i, model.j, rule=c1)

def objective_rule(model):
    return model.v3
model.objective = Objective(rule=objective_rule)

opt = SolverFactory('ipopt')
opt.solve(model, keepfiles=True)
Depending on the order of the terms in constraint c1, I don't get the same .nl file.
More specifically, both files are identical except for two lines:
g3 1 1 0 # problem unknown
3 1 1 0 1 # vars, constraints, objectives, ranges, eqns
0 0 0 0 0 0 # nonlinear constrs, objs; ccons: lin, nonlin, nd, nzlb
0 0 # network constraints: nonlinear, linear
0 0 0 # nonlinear vars in constraints, objectives, both
0 0 0 1 # linear network variables; functions; arith, flags
0 0 0 0 0 # discrete variables: binary, integer, nonlinear (b,c,o)
2 1 # nonzeros in Jacobian, obj. gradient
0 0 # max name lengths: constraints, variables
0 0 0 0 0 # common exprs: b,c,o,c1,o1
C0
n0
O0 0
n0
x1
2 0
r
4 0.0
b
3
3
2 0
k2
1
2
J0 2
0 -1 # The other file reads 0 1
1 1 # 1 -1
G0 1
2 1
When solving, I get the same results either way, probably because the example is too trivial.

A theoretical explanation is that you're seeing alternative optimal solutions. It's entirely possible, depending on the problem formulation, that you've got more than one solution that has the optimal objective value. What order you get these in is going to be sensitive to the order of the constraints. If you're using an LP solver you ought to be able to ask it to give you all of the optimal solutions.

Related

How to solve this problem without brute-forcing and/or having a huge computation time?

I am trying to solve the following problem:
A store sells large individual wooden letters for signs to put on houses.
The letters are priced individually.
The total cost of letters in LOAN is 17 dollars.
The total cost of letters in SAM is 18 dollars.
The total cost of letters in ANNA is 20 dollars.
The total cost of letters in ROLLO is 21 dollars.
The total cost of letters in DAMAGES is 30 dollars.
The total cost of letters in SALMON is 33 dollars.
How much would the letters in the name GARDNER cost?
I am brute-forcing the letter costs with the following Python code, but it takes hours and hours to converge, as there are 33^10 possible combinations to test. I use n=33 as it is the max cost of a name; n could indeed be reduced to 15 or even 10, but without being sure it will converge.
def func(letters):
    print letters
    if letters['L'] + letters['O'] + letters['A'] + letters['N'] != 17:
        return False
    elif letters['S'] + letters['A'] + letters['M'] != 18:
        return False
    elif 2*letters['A'] + 2*letters['N'] != 20:
        return False
    elif letters['R'] + 2*letters['O'] + 2*letters['L'] != 21:
        return False
    elif letters['D'] + 2*letters['A'] + letters['M'] + letters['G'] + letters['E'] + letters['S'] != 30:
        return False
    elif letters['S'] + letters['A'] + letters['L'] + letters['M'] + letters['O'] + letters['N'] != 33:
        return False
    return True

def run(letters, n, forbidden_letters):
    for letter in letters.keys():
        if letter not in forbidden_letters:
            for i in range(1, n):
                letters[letter] = i
                if not func(letters):
                    if letter not in forbidden_letters:
                        forbidden_letters += letter
                    if run(letters, n, forbidden_letters):
                        return letters
                else:
                    return letters
LETTERS = {
    "L": 1,
    "O": 1,
    "A": 1,
    "N": 1,
    "S": 1,
    "M": 1,
    "R": 1,
    "D": 1,
    "G": 1,
    "E": 1,
}
n = 33
print run(LETTERS, n, "")
Brute-forcing will work in the end, but it is so CPU-intensive that it is surely not the best solution.
Does anyone have a better solution, either by reducing the computation time or by a good mathematical approach?
Thanks all.
This is what is called a system of linear equations. You can solve it by hand if you want, but you can also use a linear solver, for example sympy:
import sympy

l, o, a, n, s, m, r, d, g, e = sympy.symbols('l,o,a,n,s,m,r,d,g,e')
eq1 = l + o + a + n - 17
eq2 = s + a + m - 18
eq3 = a + n + n + a - 20
eq4 = r + o + l + l + o - 21
eq5 = d + a + m + a + g + e + s - 30
eq6 = s + a + l + m + o + n - 33
sol, = sympy.linsolve([eq1, eq2, eq3, eq4, eq5, eq6], (l, o, a, n, s, m, r, d, g, e))
l, o, a, n, s, m, r, d, g, e = sol
print(g + a + r + d + n + e + r)
Linear equations can be solved very fast. The complexity is O(n^3), where n is the number of variables, so for such a little problem as this it is near instant.
L + O + A + N - 17 = 0
S + A + M - 18 = 0
2 * A + 2 * N - 20 = 0
and so on.
Try to make a matrix like:
L O A N S M R D G E val
[1 1 1 1 0 0 0 0 0 0 -17 | 0] LOAN
[0 0 1 0 1 1 0 0 0 0 -18 | 0] SAM
[0 0 2 2 0 0 0 0 0 0 -20 | 0] ANNA
...
[0 0 1 1 0 0 2 1 1 1 -x | 0] GARDNER
Now you can solve it using, for example, Gaussian elimination, which takes O(n^3) time.
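As a concrete sketch of this route (my addition, assuming the column order L, O, A, N, S, M, R, D, G, E): `numpy.linalg.lstsq` returns the minimum-norm solution of the underdetermined system, and even though individual letter prices are not pinned down, the GARDNER cost is unique because its coefficient vector lies in the row space of the matrix.

```python
import numpy as np

# Rows: LOAN, SAM, ANNA, ROLLO, DAMAGES, SALMON
# Columns: L, O, A, N, S, M, R, D, G, E
A = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],  # LOAN
    [0, 0, 1, 0, 1, 1, 0, 0, 0, 0],  # SAM
    [0, 0, 2, 2, 0, 0, 0, 0, 0, 0],  # ANNA
    [2, 2, 0, 0, 0, 0, 1, 0, 0, 0],  # ROLLO
    [0, 0, 2, 0, 1, 1, 0, 1, 1, 1],  # DAMAGES
    [1, 1, 1, 1, 1, 1, 0, 0, 0, 0],  # SALMON
], dtype=float)
b = np.array([17, 18, 20, 21, 30, 33], dtype=float)

# Minimum-norm solution of the (consistent, underdetermined) system
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# GARDNER = G + A + R + D + N + E + R
c = np.array([0, 0, 1, 1, 0, 0, 2, 1, 1, 1], dtype=float)
print(c @ x)  # the GARDNER cost, unique since c is in the row space of A
```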

Constraint logic in linear programming

I'm trying to build a linear optimization model for a production unit. I have a binary decision variable X(i)(j), where i is the hour of day j. The constraint I need to introduce is a limitation on downtime (the minimum time period the production unit needs to be turned off between two starts).
For example:
Hours: 1 2 3 4 5 6 7 8 9 10 11 12
On/off: 0 1 0 1 1 0 1 1 1 0 0 1
I cannot run in hour 4 or 7 because the time period between hours 2 and 4 / 5 and 7 is only one. I can run in hour 12 since I have a two-hour gap after hour 9. How do I enforce this constraint in linear programming / optimization?
I think you are asking for a way to model: "at least two consecutive periods of down time". A simple formulation is to forbid the pattern:
t t+1 t+2
1 0 1
This can be written as a linear inequality:
x(t) - x(t+1) + x(t+2) <= 1
One way to convince yourself this is correct is to just enumerate the patterns:
x(t) x(t+1) x(t+2) LHS
0 0 0 0
0 0 1 1
0 1 0 -1
0 1 1 0
1 0 0 1
1 0 1 2 <--- to be excluded
1 1 0 0
1 1 1 1
With x(t) - x(t+1) + x(t+2) <= 1 we exactly exclude the pattern 101 but allow all others.
Similarly, "at least two consecutive periods of up time" can be handled by excluding the pattern
t t+1 t+2
0 1 0
or
-x(t) + x(t+1) - x(t+2) <= 0
Note: one way to derive the second from the first constraint is to observe that forbidding the pattern 010 is the same as saying y(t)=1-x(t) and excluding 101 in terms of y(t). In other words:
(1-x(t)) - (1-x(t+1)) + (1-x(t+2)) <= 1
This is identical to
-x(t) + x(t+1) - x(t+2) <= 0
In the comments it is argued this method does not work. That is based on a substantial misunderstanding of this method. The pattern 100 (i.e. x(1)=1,x(2)=0,x(3)=0) is not allowed because
-x(0)+x(1)-x(2) <= 0
Where x(0) is the status before we start our planning period. This is historic data. If x(0)=0 we have x(1)-x(2)<=0, disallowing 10. I.e. this method is correct (if not, a lot of my models would fail).
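The two truth tables above can also be checked mechanically; here is a small verification sketch (my addition, not part of any model):

```python
from itertools import product

# "at least two consecutive down periods": x(t) - x(t+1) + x(t+2) <= 1
# Enumerate all 0/1 patterns of length 3 and collect those the inequality cuts off.
down_violations = [p for p in product((0, 1), repeat=3) if p[0] - p[1] + p[2] > 1]
print(down_violations)  # [(1, 0, 1)] -- exactly the pattern 101

# "at least two consecutive up periods": -x(t) + x(t+1) - x(t+2) <= 0
up_violations = [p for p in product((0, 1), repeat=3) if -p[0] + p[1] - p[2] > 0]
print(up_violations)  # [(0, 1, 0)] -- exactly the pattern 010
```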

Finding equal variables in non solvable multi-variables linear equations

I am trying to find an algorithm to solve the following problem. I have multiple unknown variables (F1, F2, F3, ..., Fx) and (R1, R2, R3, ..., Rx) and multiple equations like this:
F1 + R1 = a
F1 + R2 = a
F2 + R1 = b
F3 + R2 = b
F2 + R3 = c
F3 + R4 = c
where a, b and c are known numbers. I am trying to find all equal variables in such equations. For example, in the equations above, the first two equations tell us that R1 and R2 are equal, the third and fourth that F2 and F3 are equal, and the last two that R3 and R4 are equal.
For a more complex scenario, is there any known algorithm that can find all equal (F and R) variables?
(I will edit the question if it is not clear enough)
Thanks
For the general situation, row echelon form is probably the way to go. If every equation has only two variables, then you can consider each variable to be in a partition. Every time two variables appear in an equation together, their partitions are joined. To begin with, each variable is in its own partition. After the first equation, there is a partition that contains F1 and R1. After the second equation, that partition is replaced by one that contains F1, R1 and R2. You should keep the variables in some sort of order, and when two partitions are joined, put all the variables except the first one in terms of the first one (it doesn't really matter how you order the variables, you just need some way of deciding which is the "first"). So, for instance, after the first equation you have R1 = a - F1. After the second equation, you have R1 = a - F1 and R2 = a - F1. Each variable can then be represented by two numbers: some number times the first variable in its partition, plus a constant. At the end, you go through each partition and look for variables that are represented by the same two numbers.
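A minimal sketch of this partition idea (my addition; the function names and the sample constants a=5, b=6, c=7 are made up for illustration). Each variable is stored relative to its partition's leader as var = sign * leader + offset, with sign in {+1, -1}:

```python
def find(rep, var):
    """Resolve var to (root, sign, offset) with path compression."""
    leader, s, c = rep[var]
    if leader == var:
        return leader, s, c
    root, s2, c2 = find(rep, leader)
    # var = s*leader + c and leader = s2*root + c2  =>  var = (s*s2)*root + (s*c2 + c)
    rep[var] = (root, s * s2, s * c2 + c)
    return rep[var]

def equal_groups(equations, variables):
    """equations: list of (u, v, const) meaning u + v = const.
    Returns groups of variables forced to be equal. (Consistency checks omitted.)"""
    rep = {v: (v, 1, 0) for v in variables}
    for u, v, const in equations:
        ru, su, cu = find(rep, u)
        rv, sv, cv = find(rep, v)
        if ru != rv:
            # su*ru + cu + sv*rv + cv = const  =>  rv = (-su*sv)*ru + sv*(const - cu - cv)
            rep[rv] = (ru, -su * sv, sv * (const - cu - cv))
    groups = {}
    for v in variables:
        groups.setdefault(find(rep, v), []).append(v)
    return [g for g in groups.values() if len(g) > 1]

eqs = [('F1', 'R1', 5), ('F1', 'R2', 5), ('F2', 'R1', 6),
       ('F3', 'R2', 6), ('F2', 'R3', 7), ('F3', 'R4', 7)]
vars_ = ['F1', 'F2', 'F3', 'R1', 'R2', 'R3', 'R4']
print(equal_groups(eqs, vars_))  # -> [['F2', 'F3'], ['R1', 'R2'], ['R3', 'R4']]
```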
Here's a hint: you have defined a system of linear equations with 7 variables and 6 equations. Here's a crude matrix/vector notation:
1 0 0 1 0 0 0 F1 a
1 0 0 0 1 0 0 F2 a
0 1 0 1 0 0 0 * F3 = b
0 0 1 0 1 0 0 R1 b
0 1 0 0 0 1 0 R2 c
0 0 1 0 0 0 1 R3 c
R4
If you do the Gaussian elimination manually, you can see that e.g. first row minus the second row results in
(0 0 0 1 -1 0 0) * (F1 F2 F3 R1 R2 R3 R4)^T = a - a
R1 - R2 = 0
R1 = R2
Which implies that R1 and R2 are what you call equivalent. There are many different methods to solve the system or interpret the results. Maybe you will find this thread useful: Is there a standard solution for Gauss elimination in Python?
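That first elimination step can be checked numerically with numpy (a small sketch I've added; only the coefficient matrix is needed, since the symbolic right-hand sides a - a cancel):

```python
import numpy as np

# Coefficient rows in variable order F1, F2, F3, R1, R2, R3, R4
# (right-hand sides are a, a, b, b, c, c)
A = np.array([
    [1, 0, 0, 1, 0, 0, 0],  # F1 + R1 = a
    [1, 0, 0, 0, 1, 0, 0],  # F1 + R2 = a
    [0, 1, 0, 1, 0, 0, 0],  # F2 + R1 = b
    [0, 0, 1, 0, 1, 0, 0],  # F3 + R2 = b
    [0, 1, 0, 0, 0, 1, 0],  # F2 + R3 = c
    [0, 0, 1, 0, 0, 0, 1],  # F3 + R4 = c
])
# First row minus second row (right-hand side: a - a = 0)
print(A[0] - A[1])  # coefficients of R1 and R2 remain  ->  R1 - R2 = 0
```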

Generate all the possible undirected graphs

What is an efficient way to generate all the possible graphs using an incidence matrix?
The problem is equivalent to generating all the possible binary triangular matrices.
My first idea was to use Python with itertools. For instance, for generating all the possible 4x4 matrices:
for b in itertools.combinations_with_replacement((0, 1), n-3):
    b_1 = [i for i in b]
    for c in itertools.combinations_with_replacement((0, 1), n-2):
        c_1 = [i for i in c]
        for d in itertools.combinations_with_replacement((0, 1), n-1):
            d_1 = [i for i in d]
and then you create the matrix by adding the respective number of zeroes.
But this is not correct, since it skips some graphs...
So, any ideas?
Perhaps I can use the isomorphism between an n×n matrix and a vector of length n·n, generate all the possible vectors of 0s and 1s, and then cut each one into my matrix, but I think there's a more efficient solution.
Thank you
I added the matlab tag because it's a problem you can have in numerical analysis and MATLAB.
I assume you want lower triangular matrices, and that the diagonal need not be zero. The code can easily be modified if that's not the case.
n = 4; %// matrix size
vals = dec2bin(0:2^(n*(n+1)/2)-1)-'0'; %// each row of `vals` codes a matrix
mask = tril(reshape(1:n^2, n, n))>0; %// decoding mask
for v = vals.' %'// `for` picks one column each time
matrix = zeros(n); %// initiallize to zeros
matrix(mask) = v; %// decode into matrix
disp(matrix) %// Do something with `matrix`
end
Each iteration gives one possible matrix. For example, the first matrices for n=4 are
matrix =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
matrix =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 1
matrix =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 1 0
matrix =
0 0 0 0
0 0 0 0
0 0 0 0
0 0 1 1
Here is an example solution using numpy that generates all simple graphs.
It first generates the indices iu of the upper triangular part. The loop converts the number k to its binary representation and then assigns it to the upper triangular part G[iu].
import numpy as np

n = 4
iu = np.triu_indices(n, 1)  # Start at first minor diagonal
G = np.zeros([n, n])

def dec2bin(k, bitlength=0):
    return [1 if digit == '1' else 0 for digit in bin(k)[2:].zfill(bitlength)]

for k in range(0, 2**(iu[0].size)):
    G[iu] = dec2bin(k, iu[0].size)
    print(G)
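As a sanity check (my addition): for n = 4 there are n*(n-1)/2 = 6 free entries above the diagonal, so the loop should visit 2^6 = 64 distinct matrices.

```python
import numpy as np

n = 4
iu = np.triu_indices(n, 1)
seen = set()
for k in range(2 ** iu[0].size):
    G = np.zeros((n, n), dtype=int)
    # Encode k in binary across the 6 upper-triangular entries
    G[iu] = [1 if d == '1' else 0 for d in format(k, '0%db' % iu[0].size)]
    seen.add(G.tobytes())
print(len(seen))  # -> 64
```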

How do I get the keys associated with the minimum value in a dict?

So, I'm working on a bio-informatics course problem-set right now, and have gotten stuck on an algorithm.
My problem is that I can't seem to figure out a way to make the last function (MinSkew) work, as it's supposed to return only the positions where the skew is minimized.
And yes, I know the indexing starts at 0, but I'll add 1 to every value in minimumValues to adjust, so go with it :)
# Example:
# index : 1 2 3 4 5 6 7 8 9 10 11 12 13
# Genome => A T T C G G C C C G G C C
# Skew(genome) => 0 0 0 0 -1 0 +1 0 -1 -2 -1 0 -1 -2
# MinSkew(genome) => 9 13
Here's the actual code (I removed the code for Skew, as it does what I described above perfectly, and I'm not allowed to post working code online, which is also why I'm not going to mention the course name):
# MinSkew, uses Skew(genome) to find the positions where Skew is at a minimum.
#EX: Skew("ACGTGCC") gives 0 0 -1 0 0 1 0 -1
# with index 0 1 2 3 4 5 6 7
# and MinSkew("ACGTGCC") gives the indices of the nucleotides scoring -1 in the genome
# result: (2, 7).
def MinSkew(genome):
    dictOfSkew = dict()
    skewValues = Skew(genome)
    minimumSkew = 0
    minimumValues = list()
    for i in range(0, len(genome) + 1):
        dictOfSkew[i] = skewValues[i]
        if minimumSkew > skewValues[i]:
            minimumSkew = skewValues[i]
            minimumValues.append(i)
    return minimumValues
Your core error is that you never clear minimumValues when you find a new min, so the values from previous mins are still around. BartoszKP's answer is fine, but to do it in one pass:
if minimumSkew > skewValues[i]:
    minimumSkew = skewValues[i]
    minimumValues = [i]
elif minimumSkew == skewValues[i]:
    minimumValues.append(i)
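Putting the one-pass idea into a complete, runnable function on a plain list of skew values (a sketch I've added; Skew() itself is omitted, as in the question):

```python
def min_skew_positions(skew_values):
    minimum_skew = skew_values[0]
    minimum_values = [0]
    for i in range(1, len(skew_values)):
        if skew_values[i] < minimum_skew:
            minimum_skew = skew_values[i]
            minimum_values = [i]          # a new minimum invalidates old positions
        elif skew_values[i] == minimum_skew:
            minimum_values.append(i)
    return minimum_values

# Using the example from the question: Skew("ACGTGCC") = 0 0 -1 0 0 1 0 -1
print(min_skew_positions([0, 0, -1, 0, 0, 1, 0, -1]))  # -> [2, 7]
```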
The algorithm in its current form won't work, because filtering (finding only minimum values) and calculating the criterion for filtering (the value of the minimum) are interleaved. The simplest fix is to first find the minimum:
minimumValue = min(dictOfSkew.values())
and only then apply the filtering:
minimumKeys = [k for (k, v) in dictOfSkew.items() if v == minimumValue]
(or iteritems for Python 2)
