I have a problem where I have 4 variables x1, x2, x3 and x4. I need to find the values for x1, x2, x3, x4 with the following conditions:
1. 1995 < 2*x1 + 4*x2 + 3*x3 + x4 < 2000
2. x1 >= 1.2*x2
3. x2 >= 1.3*x3
4. x3 >= 1.1*x4
5. x4 > 0.0
I was able to do this using python-constraint (https://labix.org/python-constraint) but it takes ~30 mins to solve on my system, which is too long.
from constraint import *
problem = Problem()
problem.addVariable("x1", range(100,500))
problem.addVariable("x2", range(100,500))
problem.addVariable("x3", range(100,500))
problem.addVariable("x4", range(100,500))
problem.addConstraint(lambda a, b, c, d: 2*a + 4*b + 3*c + d > 1995, ["x1", "x2", "x3", "x4"])
problem.addConstraint(lambda a, b, c, d: 2*a + 4*b + 3*c + d < 2000, ["x1", "x2", "x3", "x4"])
problem.addConstraint(lambda a, b: a >= 1.2 * b, ["x1", "x2"])
problem.addConstraint(lambda b, c: b >= 1.3 * c, ["x2", "x3"])
problem.addConstraint(lambda c, d: c >= 1.1 * d, ["x3", "x4"])
problem.addConstraint(lambda d: d > 0, ["x4"])
problem.getSolutions()
I also looked at scipy.optimize.linprog, but I could not find a way to pass conditions 2, 3 and 4, because each of them depends on the value of another variable in the same problem. I can pass boundaries for each individual variable using the bounds parameter, like:
x1_bounds = (100, 200)
x2_bounds = (200, 300)
But how do I pass values of other variables in bounds, like x1_bounds >= 1.2*x2? Or is there any other way I can do this?
This can be solved using the GRG nonlinear solver in Excel, but I'm not able to find an equivalent in Python.
Your problem is, in fact, linear, so it is ideally suited to a linear programming approach. However, you are giving it to the solver with no clues about that linearity, so the solver is bound to find it hard: it pretty much has to try every possibility, which takes a long time. It might be possible to rewrite your constraints into forms that python-constraint handles better (it has, for example, a MaxSumConstraint form), but ideally I think you should use a solver specialised for linear problems.
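For what it's worth, here is a rough sketch of that idea (assuming, as I believe, that MaxSumConstraint accepts an optional list of multipliers; I have not timed this): the upper bound on the weighted sum becomes a MaxSumConstraint, which the library can treat more cleverly than an opaque lambda, while the other conditions stay as they were.
from constraint import Problem, MaxSumConstraint

problem = Problem()
for name in ("x1", "x2", "x3", "x4"):
    problem.addVariable(name, range(100, 500))

# Upper bound as a weighted sum: 2*x1 + 4*x2 + 3*x3 + x4 <= 2000
# (MaxSumConstraint enforces <=, so use 1999 if you need a strict "< 2000")
problem.addConstraint(MaxSumConstraint(2000, [2, 4, 3, 1]), ["x1", "x2", "x3", "x4"])
# The remaining conditions stay as plain lambda constraints
problem.addConstraint(lambda a, b, c, d: 2*a + 4*b + 3*c + d > 1995, ["x1", "x2", "x3", "x4"])
problem.addConstraint(lambda a, b: a >= 1.2 * b, ["x1", "x2"])
problem.addConstraint(lambda b, c: b >= 1.3 * c, ["x2", "x3"])
problem.addConstraint(lambda c, d: c >= 1.1 * d, ["x3", "x4"])
solutions = problem.getSolutions()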
There is a solver called kiwisolver which will do what you want. Here's your example converted for that library:
import kiwisolver
x1 = kiwisolver.Variable('x1')
x2 = kiwisolver.Variable('x2')
x3 = kiwisolver.Variable('x3')
x4 = kiwisolver.Variable('x4')
constraints = [1995 <= 2*x1 + 4*x2 + 3*x3 + x4,
               2*x1 + 4*x2 + 3*x3 + x4 <= 2000,
               x1 >= 1.2*x2,
               x2 >= 1.3*x3,
               x3 >= 1.1*x4,
               x4 >= 0]
solver = kiwisolver.Solver()
for cn in constraints:
    solver.addConstraint(cn)
solver.updateVariables()  # push the solved values into the Variable objects
for x in [x1, x2, x3, x4]:
    print(x.value())
which gives
254.49152542372883
212.07627118644066
163.13559322033896
148.30508474576254
But you can also use a standard linear program solver like the scipy one. You just need to reorganise your inequalities into the right form.
You want:
1. 1995 < 2*x1 + 4*x2 + 3*x3 + x4 < 2000
2. x1 >= 1.2*x2
3. x2 >= 1.3*x3
4. x3 >= 1.1*x4
5. x4 > 0.0
So we rewrite this into:
2*x1 + 4*x2 + 3*x3 + 1*x4 < 2000
-2*x1 + -4*x2 + -3*x3 + -1*x4 < -1995
-1*x1 + 1.2*x2 + 0*x3 + 0*x4 < 0
0*x1 + -1*x2 + 1.3*x3 + 0*x4 < 0
0*x1 + 0*x2 + -1*x3 + 1.1*x4 < 0
You can add bounds for x1 to x4 as you mentioned in the question but by default they will just be non-negative. So then, for an LP, we also need to choose where in the polytope of possible solutions we want to optimise: in this case I'll just go for the solution with the minimum sum. So that gives us this:
from scipy.optimize import linprog
output = linprog([1, 1, 1, 1],
                 [[ 2,  4,   3,   1],
                  [-2, -4,  -3,  -1],
                  [-1, 1.2,  0,   0],
                  [ 0, -1,  1.3,  0],
                  [ 0,  0,  -1, 1.1]],
                 [2000, -1995, 0, 0, 0])
print(output.x)
This gives
[274.92932862 229.10777385 176.23674912 0. ]
which is the optimal LP solution. Note that it has made x4 = 0: LPs typically don't distinguish between > and >= and so we have a solution where x4 is zero rather than a tiny epsilon greater than zero.
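If you genuinely need x4 to be strictly positive, the usual workaround is to give it an explicit lower bound via the bounds argument. Here is a sketch; the lower bound of 1 is an arbitrary choice, pick whatever minimum makes sense for your data:
from scipy.optimize import linprog

# Same system as above, but with an explicit lower bound of 1 on x4
output = linprog([1, 1, 1, 1],
                 [[ 2,  4,   3,   1],
                  [-2, -4,  -3,  -1],
                  [-1, 1.2,  0,   0],
                  [ 0, -1,  1.3,  0],
                  [ 0,  0,  -1, 1.1]],
                 [2000, -1995, 0, 0, 0],
                 bounds=[(0, None), (0, None), (0, None), (1, None)])
print(output.x)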
Finally, note that the problem is strongly under-constrained: we can choose a quite different solution by changing the objective. Here's a solution where we ask linprog to maximise 2*x1 + 4*x2 + 3*x3 + x4:
from scipy.optimize import linprog
output = linprog([-2, -4, -3, -1],
                 [[ 2,  4,   3,   1],
                  [-2, -4,  -3,  -1],
                  [-1, 1.2,  0,   0],
                  [ 0, -1,  1.3,  0],
                  [ 0,  0,  -1, 1.1]],
                 [2000, -1995, 0, 0, 0])
print(output.x)
giving
[255.1293488 212.60779066 163.54445436 148.67677669]
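One last note: your python-constraint version searched over integers, whereas linprog returns real-valued solutions. If you need integer values, newer SciPy releases accept an integrality argument (if I remember correctly this needs SciPy 1.9 or later and the default HiGHS method); a hedged sketch:
import numpy as np
from scipy.optimize import linprog

# integrality=1 marks a variable as integer; here it is applied to all four
output = linprog([-2, -4, -3, -1],
                 [[ 2,  4,   3,   1],
                  [-2, -4,  -3,  -1],
                  [-1, 1.2,  0,   0],
                  [ 0, -1,  1.3,  0],
                  [ 0,  0,  -1, 1.1]],
                 [2000, -1995, 0, 0, 0],
                 integrality=np.ones(4))
print(output.x)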
Related
As an application of Euler's method, I'm trying to implement code that computes the recursive matrix product Yn = Yn-1 + A(Yn-1), where Y is a vector and A is a matrix such that the product is defined. This is the current code I have:
import numpy as np

def f(A, y):
    return A.dot(y)

def euler(f, t0, y0, T, dt):
    t = np.arange(t0, T + dt, dt)
    y = [0,0,0,0]*len(t)
    y[0] = y0
    for i in range(1, len(t)):
        y[i] = y[i - 1] + f(A, y[i - 1])*dt
        return t, y
# Define problem specific values
A = np.array([[ 0,  0, 1, 0],
              [ 0,  0, 0, 1],
              [-2, -3, 0, 0],
              [-3, -2, 0, 0]])
y1_0 = 1
y2_0 = 2
y3_0 = 0
y4_0 = 0
y0 = [y1_0, y2_0, y3_0, y4_0]
t,y = euler(f,0,y0,2,1)
print(t,y)
For example, the result for points in the range t0 = 0, T = 2 should be the vectors Y1 and Y2. Instead I have
[0 1 2] [[1, 2, 0, 0], array([ 1, 2, -8, -7]), 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Something is wrong here. While Y1 = [1, 2, -8, -7] does show up, there is all of this unnecessary stuff, and Y2 is not printed at all. I suspect this is due to how I define the variable y: for every point in the range of t, I need a vector of 4 zeros, which is then filled in by the euler function, I think. How should I correct this?
The computer always does what you tell it to do. In your case y is constructed by repeating the four zeros len(t) times, giving a flat list of 12 zeros. The first entry is replaced by the list y0, and the second by the result of the numpy operations, which is a numpy array. Then the return statement, because it sits at the indentation level of the loop body, breaks out of the loop after the first iteration and returns t and y. y still contains the 10 zeros from its construction that were never replaced.
So construct
y = np.zeros([len(t), len(y0)])
instead, and move the return statement out of the loop body by fixing its indentation level.
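Putting both fixes together, a minimal corrected version of your function might look like this (A is still taken from the enclosing scope, as in your code):
import numpy as np

def f(A, y):
    return A.dot(y)

def euler(f, t0, y0, T, dt):
    t = np.arange(t0, T + dt, dt)
    y = np.zeros([len(t), len(y0)])        # one row of len(y0) values per time point
    y[0] = y0
    for i in range(1, len(t)):
        y[i] = y[i - 1] + f(A, y[i - 1]) * dt
    return t, y                            # return only after the loop has finished

A = np.array([[ 0,  0, 1, 0],
              [ 0,  0, 0, 1],
              [-2, -3, 0, 0],
              [-3, -2, 0, 0]])
t, y = euler(f, 0, [1, 2, 0, 0], 2, 1)
print(t)
print(y)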
I am attempting to convert a sum of absolute deviations to a linear programming problem so that I can utilize CPLEX (or other solver). I am stuck on how the matrices are to be set up. The problem is as follows:
minimize abs(x1 - 5) + abs(x2 - 3)
s.t. x1 + x2 = 10
I have the following constraints set up to transform the problem into a linear form:
x1 - 5 <= t1
-(x1 - 5) <= t1 and
x2 - 3 <= t2
-(x2 - 3) <= t2
I've set up the objective function as
c = [0,0,1,1]
but I am lost on how to set up
Ax <= b
in matrix form. What I have so far is:
A = [[ 1, -1,  0,  0],
     [-1, -1,  0,  0],
     [ 0,  0,  1, -1],
     [ 0,  0, -1, -1]]
b = [5, -5, 3, -3]
I have set up the other constraint in matrix form as:
B = [1, 1, 0, 0]
b2 = [10]
When I run the following:
linprog(c,A_ub=A,b_ub=b,A_eq=B,b_eq=b2,bounds=[(0,None),(0,None)])
I get the following error message back:
ValueError: Invalid input for linprog: A_eq must have exactly two dimensions, and the number of columns in A_eq must be equal to the size of c
I know there is a solution because when I use scipy.optimize.minimize it solves to [6,4]. I'm sure the issue is I am not formulating the input matrices correctly but I am not sure how to set them up so that it runs.
Edit - here is the code that does not run:
import numpy as np
from scipy.optimize import linprog, minimize
c = np.block([np.zeros(2),np.ones(2)])
print("c =>",c)
A = [[ 1, -1,  0,  0],
     [-1, -1,  0,  0],
     [ 0,  0,  1, -1],
     [ 0,  0, -1, -1]]
b = [[5, -5, 3, -3]]
print(A)
print(np.multiply(A,b))
B = [ 1, 1, 0, 0]
b2 = [10]
print(np.multiply(B,b2))
linprog(c, A_ub=A, b_ub=b, A_eq=B, b_eq=b2, bounds=[(0, None), (0, None)],
        options={'disp': True})
I think the error message is quite clear: B should be a 2-dimensional matrix instead of a 1-dimensional vector. So:
B = [[1, 1, 0, 0]]
Secondly, the bounds list is too short: there are four variables, so it needs four (lower, upper) pairs.
Thirdly, your ordering of variables is inconsistent. The columns in A are x1, t1, x2, t2 while the columns in B (and c) seem to be x1, x2, t1, t2. They need to follow the same scheme.
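Putting those three fixes together (with the column ordering x1, x2, t1, t2 everywhere), a sketch that runs might look like this:
import numpy as np
from scipy.optimize import linprog

# Variable order: x1, x2, t1, t2
c = [0, 0, 1, 1]

# |x1 - 5| <= t1 and |x2 - 3| <= t2, written as four <= constraints
A_ub = [[ 1,  0, -1,  0],   #  x1 - t1 <=  5
        [-1,  0, -1,  0],   # -x1 - t1 <= -5
        [ 0,  1,  0, -1],   #  x2 - t2 <=  3
        [ 0, -1,  0, -1]]   # -x2 - t2 <= -3
b_ub = [5, -5, 3, -3]

A_eq = [[1, 1, 0, 0]]       # x1 + x2 == 10
b_eq = [10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print(res.x)
Note that the problem has multiple optima: any x1 between 5 and 7 (with x2 = 10 - x1) gives the same objective value of 2, so the vertex linprog returns may differ from the [6, 4] that minimize reported.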
Suppose we have two numpy arrays x1 and x2 like below:
x1 = np.array([[0,2,9,1,0]])
x2 = np.array([[7,3,0,6,8]])
Is there any operation like:
x2(operation)x1 = array([[ 0, 3, 0, 6, 0]])
i.e. if x1 or x2 is 0 at any index, the result should be zero at that index; otherwise it should keep the value of x2.
Use numpy.where:
x3 = np.where(x1 == 0, x1, x2)
print(x3)
Output:
[[0 3 0 6 0]]
Given that you want to keep x2 but make it zero in the case x1 is zero, just multiply x2 by the boolean of x1.
>>> x2 * x1.astype(bool)
array([[0, 3, 0, 6, 0]])
Note that if x2 is zero, the result is zero as expected.
I'm trying to figure out what is wrong with my implementation. I expect the result to be [5, 10], but I get [7.5, 7.5]; x1 should be half of x2.
from scipy.optimize import linprog
import numpy as np
c = [-1, -1]
A_eq = np.array([
    [1, 0.5],
    [1, -0.5],
])
b_eq = [15, 0]
x0_bounds = (0, None)
x1_bounds = (0, None)
res = linprog(
    c,
    A_eq=A_eq.transpose(),
    b_eq=b_eq,
    bounds=(x0_bounds, x1_bounds),
    options={"disp": True})
print(res.x)
# =>
# Optimization terminated successfully.
# Current function value: -15.000000
# Iterations: 2
# [ 7.5 7.5]
Update from the author:
As was said, matrix transposition is not needed here. The problem was in the matrix itself; in order to get the desired result, [5, 10], it has to be:
A_eq = np.array([
    [1, 1],
    [1, -0.5],
])
Per the scipy linprog docs:
Minimize: c^T * x
Subject to:
A_ub * x <= b_ub
A_eq * x == b_eq
So, you are now solving the following equations:
Minimize -x1 -x2
Subject to*:
x1 + x2 = 15             (i)
0.5 * x1 - 0.5 * x2 = 0  (ii)
Now, (ii) implies x1 = x2 (so your desired solution is infeasible), and then (i) fixes x1 = x2 = 7.5. So the solution returned by linprog() is indeed correct. Since you are expecting a different result, you should look into the way you translated your problem into code, as I think that's where you will find both the issue and the solution.
*) Since you are taking the transpose.
Your problem is:
x1 + x2 == 15
0.5 * x1 - 0.5 * x2 == 0
minimize -x1 -x2
So obviously you have x1 == x2 (second constraint), and thus x1 = x2 = 7.5 (first constraint).
Looking at your question, you probably don't want to transpose A:
res = linprog(
    c,
    A_eq=A_eq,
    b_eq=b_eq,
    bounds=(x0_bounds, x1_bounds),
    options={"disp": True}
)
Which gives you the problem:
x1 + 0.5 * x2 == 15
x1 - 0.5 * x2 == 0
minimize -x1 -x2
And you get x1 = 7.5 and x2 = 15 (the only possible values).
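For completeness, with the corrected matrix from the author's update (and no transpose), linprog does return the expected result; a quick check:
import numpy as np
from scipy.optimize import linprog

c = [-1, -1]
A_eq = np.array([
    [1, 1],      # x1 + x2 == 15
    [1, -0.5],   # x1 - 0.5 * x2 == 0, i.e. x1 is half of x2
])
b_eq = [15, 0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None), (0, None)])
print(res.x)   # [ 5. 10.]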
I'm looking for a pythonic (1-line) way to extract a range of values from an array
Here's some sample code that will extract the array elements that are >2 and <8 from x,y data, and put them into a new array. Is there a way to accomplish this on a single line? The code below works but seems kludgier than it needs to be. (Note I'm actually working with floats in my application)
import numpy as np
x0 = np.array([0,3,9,8,3,4,5])
y0 = np.array([2,3,5,7,8,1,0])
x1 = x0[x0>2]
y1 = y0[x0>2]
x2 = x1[x1<8]
y2 = y1[x1<8]
print(x2, y2)
This prints
[3 3 4 5] [3 8 1 0]
Part (b) of the problem would be to extract values with, say, 1 < x < 3 and 7 < x < 9, as well as their corresponding y values.
You can chain together boolean arrays using & for element-wise logical and and | for element-wise logical or, so that the condition 2 < x0 and x0 < 8 becomes
mask = (2 < x0) & (x0 < 8)
For example,
import numpy as np
x0 = np.array([0,3,9,8,3,4,5])
y0 = np.array([2,3,5,7,8,1,0])
mask = (2 < x0) & (x0 < 8)
x2 = x0[mask]
y2 = y0[mask]
print(x2, y2)
# [3 3 4 5] [3 8 1 0]
mask2 = ((1 < x0) & (x0 < 3)) | ((7 < x0) & (x0 < 9))
x3 = x0[mask2]
y3 = y0[mask2]
print(x3, y3)
# [8] [7]
import numpy as np
x0 = np.array([0,3,9,8,3,4,5])
y0 = np.array([2,3,5,7,8,1,0])
list( zip( *[(x,y) for x, y in zip(x0, y0) if 1<=x<=3 or 7<=x<=9] ) )
# [(3, 9, 8, 3), (3, 5, 7, 8)]