I am trying to build a MIP model in the OR Tools Python API. I have two expressions x and y and want to make a variable b that is equal to 1 when x == y and 0 otherwise. What I've tried doing so far is adding the constraint that -M(1 - b) <= x - y <= M(1 - b) for some big value of M, which forces b to be 0 if x != y. Where I am stuck is adding a constraint that forces b to be 1 if x == y. I think I would want something such as x - y >= 1 - b or y - x >= 1 - b, but I don't know how to logically combine constraints like this. Any suggestions on how to do this? Or for some totally different approach?
I think the following constraints would work for you (assuming x and y are binary):
b <= x - y + 1
b <= y - x + 1
b >= (1 - x) + (1 - y) - 1
b >= x + y - 1
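As a hedged illustration, those four constraints could be added with the OR-Tools linear solver (pywraplp) roughly like this; the solver backend is arbitrary, and x and y are assumed to be binary variables, which is what the inequalities require:

from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("SCIP")
x = solver.BoolVar("x")
y = solver.BoolVar("y")
b = solver.BoolVar("b")

# b is forced to 0 whenever x != y ...
solver.Add(b <= x - y + 1)
solver.Add(b <= y - x + 1)
# ... and forced to 1 whenever x == y
solver.Add(b >= (1 - x) + (1 - y) - 1)
solver.Add(b >= x + y - 1)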
Please note that, depending on the nature of the model, the CP-SAT solver could prove competitive, and it provides reification and half-reification natively.
Please have a look at
[Specific answer] https://github.com/google/or-tools/blob/master/ortools/sat/doc/channeling.md
[Introduction] https://developers.google.com/optimization/cp/cp_solver#cp-sat_example
[CP-SAT recipes] https://github.com/google/or-tools/blob/master/ortools/sat/doc/index.md
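For reference, a minimal CP-SAT reification sketch in the spirit of the channeling document above (the variable domains are placeholders):

from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewIntVar(0, 10, "x")
y = model.NewIntVar(0, 10, "y")
b = model.NewBoolVar("b")

# b == 1  <=>  x == y, via two half-reified constraints
model.Add(x == y).OnlyEnforceIf(b)
model.Add(x != y).OnlyEnforceIf(b.Not())

solver = cp_model.CpSolver()
solver.Solve(model)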
So here's the problem. I have this variable Z which follows the following rule: Z should be 1 when the balance B is positive and 0 otherwise.
I'm guessing Z should be defined as so:
Z = cvxpy.Variable(shape=shape_j_t, name="Z", boolean=True)
So Z is constrained(?) to the balance "B". How do I inform the solver that Z should be 1 if B is positive and 0 otherwise? Especially given that B itself is composed of other cvxpy.Variables.
The general approach, assuming a-priori knowledge of bounds on B (which are needed in general), looks like:
b <= UB * z
b >= LB * z
z in {0, 1}
which describes:
z = 0 <-> b = 0
z = 1 <-> LB <= b <= UB
But this is just the general pattern; these things are usually designed with the full model in mind, and here we don't know exactly what you are doing. Sometimes we don't need equivalence but just an implication (e.g. the LB-constraint can be ignored...).
Maybe it's not trivial to define the notion of a positive balance, as you only get inequalities: using LB = 0 would express non-negativity, but not strict positivity. For the latter, some a-priori epsilon (e.g. 0.001) would be needed.
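As a hedged cvxpy sketch of this pattern (the bounds, the epsilon and the scalar shape are assumptions; in your model B would be the expression already built from your variables):

import cvxpy as cp

B = cp.Variable(name="B")               # stands in for your balance expression
z = cp.Variable(boolean=True, name="z")
LB, UB = -1000.0, 1000.0                # assumed a-priori bounds on B

constraints = [
    B <= UB * z,   # z = 0  =>  B <= 0
    B >= LB * z,   # z = 0  =>  B >= 0, so z = 0 forces B = 0
]
# For "z = 1 iff B is strictly positive", the second constraint could instead be
# B >= eps * z + LB * (1 - z) with a small eps such as 0.001.
# Solving then requires a mixed-integer-capable solver.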
I have a rather complex quadratic problem that I am trying to solve in Python; however, for the sake of this question, I am significantly simplifying the problem at hand.
I have the following quadratic function I am trying to minimize while satisfying the following constraints:
minimize 0.5 * x.T * P * x + q.T * x
where:
x >= 0
x[0] >= 1.5
x[n] >= 1.5 # n = last element
I have written the equivalent in scipy.optimize:

import numpy as np

def minimize_func(x, y, P):
    return 0.5 * np.dot(x.T, np.dot(P, x)) + np.dot(y.T, x)

cons = ({'type': 'ineq', 'fun': lambda x: x},
        {'type': 'ineq', 'fun': lambda x: x[0] - 1.5},
        {'type': 'ineq', 'fun': lambda x: x[n] - 1.5})
However, my question is how do I input specific constraints in cvxopt quadratic solver?
I have looked into the cvxopt documentation page and none of the examples they give seem related to my question. I am looking to input element-wise constraints. Any help is greatly appreciated.
cvxopt focuses on the natural matrix form, which might look quite low-level to people without any knowledge of the internals.
All you need is documented in the user manual:
cvxopt.solvers.qp(P, q[, G, h[, A, b[, solver[, initvals]]]])
solves

minimize    (1/2) * x.T * P * x + q.T * x
subject to  G * x <= h
            A * x = b
Assuming n=3 and your constraints (I assume 0-indexing -> n-1 is the last element):
x >= 0
x[0] >= 1.5
x[n-1] >= 1.5 # n-1 = last element
this will look like:

G =
-1  0  0
 0 -1  0
 0  0 -1

h =
-1.5
 0
-1.5
The reasoning is simple:
- 1 * x[0] + 0 * x[1] + 0 * x[2] <= -1.5
<-> - x[0] <= -1.5
<-> x[0] >= 1.5
(We ignored the non-negativity constraints for x[0] and x[2] here, as >= 1.5 is more restrictive.)
There are lots of helper functions in numpy/scipy and co. to do this more easily (e.g. np.eye(n)).
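For illustration, a hedged sketch of the full cvxopt call for n = 3 (P and q are placeholders; only G and h encode your constraints):

import numpy as np
from cvxopt import matrix, solvers

n = 3
P = matrix(np.eye(n))                   # placeholder quadratic term
q = matrix(np.zeros(n))                 # placeholder linear term

# G x <= h encodes: x[0] >= 1.5, x[1] >= 0, x[2] >= 1.5
G = matrix(-np.eye(n))
h = matrix(np.array([-1.5, 0.0, -1.5]))

sol = solvers.qp(P, q, G, h)
print(np.array(sol["x"]).ravel())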
In general, I would recommend using cvxpy, a more high-level modelling tool, which can also call cvxopt's solvers.
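For comparison, the same toy problem in cvxpy, where the element-wise constraints can be written directly (P and q are again placeholders):

import cvxpy as cp
import numpy as np

n = 3
P = np.eye(n)                 # placeholder PSD matrix
q = np.zeros(n)

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
constraints = [x >= 0, x[0] >= 1.5, x[n - 1] >= 1.5]
cp.Problem(objective, constraints).solve()
print(x.value)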
I am a beginner in gurobipy. I would like to add an inverted indicator constraint.
An indicator constraint simply means that, depending on a binary variable, a constraint does or does not hold.
In gurobipy this is written as
model.addConstr((x == 1) >> (y + z <= 5))
where x is a binary variable, y and z are integer variables. This statement says that if x is True then the constraint y+z <= 5 holds.
But I would like to have an inverted constraint like this.
If y+z <= 5 then x == 1. But Gurobi does not allow the left-hand side of the statement to be an inequality; it can only be a binary variable compared to a constant (0 or 1).
So the inverted statement throws an error.
model.addConstr((y + z <= 5) >> (x == 1))
Any ideas how to rewrite such a conditional constraint in gurobipy?!
The implication
y+z ≤ 5 ⇒ x = 1
can be rewritten as:
x = 0 ⇒ y+z ≥ 6
This can be directly implemented as an indicator constraint.
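As a hedged sketch, in gurobipy this could look like (the variable types and bounds are assumptions):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model()
x = m.addVar(vtype=GRB.BINARY, name="x")
y = m.addVar(vtype=GRB.INTEGER, name="y")
z = m.addVar(vtype=GRB.INTEGER, name="z")

# contrapositive of:  y + z <= 5  =>  x == 1
m.addConstr((x == 0) >> (y + z >= 6))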
This is based on propositional logic. This is called transposition:
A ⇒ B
⇔
not B ⇒ not A
So in theory we have
y+z ≤ 5 ⇒ x = 1
⇔
x = 0 ⇒ y+z > 5
If y and z are integers we can say x = 0 ⇒ y+z ≥ 6. If they are continuous variables you could do: x = 0 ⇒ y+z ≥ 5.0001 (in practice I would do: x = 0 ⇒ y+z ≥ 5 and keep things ambiguous at y+z = 5).
This is kind of a standard trick when using indicator constraints. It seems not everyone is aware of or appreciates this.
The indicator syntax is
binary expression >> linear constraint
So your constraint is invalid. You need a different model that forces x to 1 when y + z ≤ 5. Assuming y, z are non-negative integers, try 6x + y + z ≥ 6.
I think the best way to go with this one is to use the big-M approach.
Let's reconsider the problem you are trying to model:
If y+z <= 5 then x == 1
It is equivalent to: if y + z - 5 <= 0 then x == 1.
From here we need logic that forces x on depending on the sign of y + z - 5. Assuming y and z are integers and M is a sufficiently large constant,
y + z >= 6 - M*x
will do the trick. Note that if y + z <= 5, this constraint can only be satisfied with x = 1, which is what we want. If y + z >= 6, it is satisfied for either value of x, so x remains free.
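A hedged gurobipy sketch of this big-M version (the value of M and the variable bounds are assumptions; M only needs to be large enough that the constraint is inactive when x = 1):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model()
x = m.addVar(vtype=GRB.BINARY, name="x")
y = m.addVar(vtype=GRB.INTEGER, lb=0, name="y")
z = m.addVar(vtype=GRB.INTEGER, lb=0, name="z")

M = 1000  # assumed large enough given the bounds on y and z
m.addConstr(y + z >= 6 - M * x, name="force_x_to_1")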
I hope this helps
I want to use the python software cvxopt to solve a small testing problem I have (if the software is able to solve this problem, then my boss will be able to use it on a future project). However, I am having trouble figuring out from the documentation how I can encode some constraints that are not of the form Ax = b or Ax < b.
The problem statement is:
x is a numpy array (1-d). Find an array y such that:
(1) We minimize ||x-y||^2
(2) y is increasing throughout (y[k] <= y[k+1] for all k)
(3) the last element of y = the last element of x
(4) y[0] >= 0
I see how encoding conditions (3) and (4) can be done, but how can I encode condition (2)?
Thank you,
Christian
First a remark: there is no A<b in (continuous) convex-optimization, only A<=b.
Your condition (2) is just a pairwise-constraint on the neighbors like:
y[0] <= y[1]
y[1] <= y[2]
....
To bring it into standard form:
y[0] <= y[1]
<=>
y[0] - y[1] <= 0
Now you can use the A<=b formulation:
A:
1 -1 0 0 ... meaning: y[0] <= y[1]
0 1 -1 0 ... y[1] <= y[2]
0 0 1 -1 ...
b:
0
0
0
...
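One hedged way to build A and b with numpy (n is the length of y; the helper calls are my choice):

import numpy as np

n = 5   # length of y, as an example
# one row per neighbouring pair: y[k] - y[k+1] <= 0
A = np.eye(n - 1, n) - np.eye(n - 1, n, k=1)
b = np.zeros(n - 1)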
If you are not using cvxopt's possibilities to tune the KKT calculations and the like, I highly recommend cvxpy (coming from the same academic institution), which is much easier to use and includes a lot of functions like norm(x-y, 2) and more. It can also use cvxopt as a solver if needed (as well as other open-source solvers like ECOS and SCS).
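For completeness, a hedged cvxpy sketch of the whole problem; the data in x is made up:

import cvxpy as cp
import numpy as np

x = np.array([0.5, 1.2, 0.9, 2.0, 1.8])   # example data
n = len(x)
y = cp.Variable(n)

constraints = [y[:-1] <= y[1:],        # (2) non-decreasing
               y[n - 1] == x[n - 1],   # (3) last elements agree
               y[0] >= 0]              # (4)
cp.Problem(cp.Minimize(cp.sum_squares(x - y)), constraints).solve()
print(y.value)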
We're given N (3 <= N <= 50000) cards with unique numbers from 1 to N.
We lost some 3 cards and our goal is to find them.
Input: first line contains number of cards N.
Second line contains 3 numbers: the sum of the cards we have left, the sum of their squares and the sum of their cubes.
Output: numbers of 3 lost cards in any order.
Here is what I tried: I computed the same 3 sums for the lost cards and then checked possible numbers until three of them satisfied the sums.
Is there a faster solution? I have to pass a 2-second time limit in Python with max N = 50000.
N = int(input())
lst = list(range(1, N+1))
s_rest, s2_rest, s3_rest = list(map(int, input().split()))

s = sum(lst)
s2 = sum([x**2 for x in lst])
s3 = sum([x**3 for x in lst])

# sums of 3 lost numbers
s_lost = s - s_rest
s2_lost = s2 - s2_rest
s3_lost = s3 - s3_rest

def find_numbers():
    """Find first appropriate option"""
    for num1 in range(s_lost):
        for num2 in range(s_lost):
            for num3 in range(s_lost):
                if (num1 + num2 + num3 == s_lost) and (num1**2 + num2**2 + num3**2 == s2_lost)\
                        and (num1**3 + num2**3 + num3**3 == s3_lost):
                    return (num1, num2, num3)

answer = find_numbers()
print(answer[0], answer[1], answer[2])
Examples
Input:
4
1 1 1
Output:
2 3 4
Input:
5
6 26 126
Output:
2 3 4
If your unknown numbers are x,y,z, then you have a system of three equations
x + y + z = a //your s_lost
x^2 + y^2 + z^2 = b //your s2_lost
x^3 + y^3 + z^3 = c //your s3_lost
While a direct solution of this system seems too complex, we can fix one unknown and solve a simpler system. For example, check all possible values of z and solve the system for x and y:
for z in range(s_lost):
....
Now let's look to new system:
x + y = a - z = aa
x^2 + y^2 = b - z^2 = bb
substitute
x = aa - y
(aa - y)^2 + y^2 = bb
2 * y^2 - 2 * y * aa - bb + aa^2 = 0
solve this quadratic equation for y
D = 4 * aa^2 - 8 * (aa^2 - bb) = 8 * bb -4 * aa^2
y(1,2) = (2*aa +- Sqrt(D)) / 4
So for every z value find:
- whether the solution gives integer values of y
- then get x
- and then check if cube sum equation is true.
Using this approach you'll get a solution with linear complexity O(N), against your cubic complexity O(N^3).
P.S. If a reasonably simple closed-form solution of the equation system exists, it has complexity O(1).
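A hedged Python sketch of this O(N) idea (the function name and the integer square-root check are my choices; math.isqrt needs Python 3.8+):

import math

def find_lost(N, s_rest, s2_rest, s3_rest):
    # power sums of the three lost cards
    a = N * (N + 1) // 2 - s_rest
    b = sum(k * k for k in range(1, N + 1)) - s2_rest
    c = sum(k ** 3 for k in range(1, N + 1)) - s3_rest
    for z in range(1, N + 1):
        aa, bb = a - z, b - z * z
        D = 8 * bb - 4 * aa * aa          # discriminant of 2y^2 - 2*aa*y + aa^2 - bb = 0
        if D < 0:
            continue
        r = math.isqrt(D)
        if r * r != D:                    # sqrt(D) must be an integer
            continue
        for num in (2 * aa + r, 2 * aa - r):
            if num % 4:
                continue
            y = num // 4
            x = aa - y
            if (1 <= x <= N and 1 <= y <= N and len({x, y, z}) == 3
                    and x ** 3 + y ** 3 + z ** 3 == c):
                return x, y, z

print(*find_lost(5, 6, 26, 126))   # -> 3 4 2 (any order is accepted)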
This can be simplified by a mathematical approach. You are given 3 equations and have 3 unknowns.
sum(1+2+..+N) - x1 - x2 - x3 = a
sum(1^2+2^2+..+N^2) - x1^2 - x2^2 - x3^2 = b
sum(1^3+2^3+..+N^3) - x1^3 - x2^3 - x3^3 = c
Obviously sum(1..N) is 1/2 *N(N+1), while sum(1^2+2^2+3^2+..+N^2) is 1/6 *N*(N+1)*(2N+1) and sum(1^3+2^3+..+N^3) can be written as 1/4 *N^2 *(N+1)^2. Here are wolframalpha outputs: ∑k, ∑k^2, ∑k^3
At this point the only thing left is solving the given system of equations (3 equations with 3 unknowns is totally solvable) and implementing it. You only need to find one solution, which makes it even easier. Running time is O(1).
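One hedged way to implement this: Newton's identities (my choice of technique for "solving the system") turn the three power sums into the elementary symmetric polynomials e1, e2, e3, and the lost cards are then the roots of t^3 - e1*t^2 + e2*t - e3 = 0, which numpy.roots can approximate:

import numpy as np

def lost_cards(N, s_rest, s2_rest, s3_rest):
    # power sums a, b, c of the three lost cards, via the closed-form totals
    a = N * (N + 1) // 2 - s_rest
    b = N * (N + 1) * (2 * N + 1) // 6 - s2_rest
    c = N * N * (N + 1) * (N + 1) // 4 - s3_rest
    # Newton's identities: p1 = e1, p2 = e1*p1 - 2*e2, p3 = e1*p2 - e2*p1 + 3*e3
    e1 = a
    e2 = (a * a - b) // 2
    e3 = (c - a * b + e2 * a) // 3
    # the lost cards are the roots of t^3 - e1*t^2 + e2*t - e3
    # (for the largest N an exact integer root check may be safer than floats)
    roots = np.roots([1, -e1, e2, -e3])
    return sorted(int(round(r.real)) for r in roots)

print(*lost_cards(5, 6, 26, 126))   # -> 2 3 4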
Surely there exists a faster approach!
For N=50,000 your brute-force function would have to make up to
N * N * N = 125,000,000,000,000
iterations, so this is not an option.
Additionally, you should check for num1 == num2 etc. to avoid duplicated numbers (the problem does not state it explicitly, but I understand 'We lost some 3 cards' to mean you need to find three different numbers satisfying the given conditions).
You can sort the list and find the pairs such that a[i+1] != a[i] + 1. For every such pair, the numbers in [a[i]+1, a[i+1]) are missing. This would give you O(n log n) running time.
I can give you an idea,
let sum1 = 1+2+3...n
let sum2 = 1^2+2^2+3^2...n
and p, q, r are the three numbers given in the input, in that order.
We need to find a, b, c. Iterate c from 1 to N.
For each iteration,
let x = sum1 - (p+c)
let y = sum2 - (q+c*c)
so a+b = x and a^2+b^2 = y.
Since a^2+b^2 = (a+b)^2 - 2ab, you can find 2ab from equation a^2+b^2 = y.
a+b is known and ab is known, so with a little math you can check whether a solution for a and b exists (they are the roots of a quadratic equation). If one exists, print the solution and break out of the iteration.
It's an O(N) solution.