I am starting to use sympy. I computed a convolution, but the result was not correct:
This result is wrong; the correct result is the triangular function 2 - |x| for |x| <= 2 (and 0 otherwise).
So what did I do wrong? I had used sympy to integrate piecewise functions before, with no problems...
The code:
from sympy import *
init_session()
f = lambda x: Piecewise( (1 , (x >= -1) & (x <= 1)) , (0 , True) )
Conv = lambda x: integrate( f(x-y)*f(y) , (y, -oo, +oo) )
There is nothing that you did wrong. It's Sympy having an issue with the product of two piecewise expressions. By calling piecewise_fold(f(x-y)*f(y)) you can see that it does not manage to sort this product out, leaving it as a nested Piecewise construction.
Piecewise((Piecewise((1, And(x - y <= 1, x - y >= -1)), (0, True)), And(y <= 1, y >= -1)), (0, True))
The symbolic integration routine trips up on this nested thing, which may be worth filing an issue on GitHub.
Workaround
If you flatten this nested Piecewise by hand, the integration works correctly.
g = Piecewise((1, And(x-y <= 1, x-y >= -1, y <= 1, y >= -1)), (0, True))
integrate(g, (y, -oo, +oo))
outputs Min(1, x + 1) - Min(1, x + 1, Max(-1, x - 1)) which is correct although perhaps not in the form one would expect.
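For completeness, here is a self-contained version of the workaround; the numeric sanity check against the known triangular result 2 - |x| is only an illustration added here, not part of the original answer:

from sympy import symbols, Piecewise, And, integrate, oo, Abs, Max, simplify

x, y = symbols('x y', real=True)

# flatten the product of the two indicator functions on [-1, 1] by hand
g = Piecewise((1, And(x - y <= 1, x - y >= -1, y <= 1, y >= -1)), (0, True))
conv = integrate(g, (y, -oo, oo))

# sanity check against the expected triangle max(0, 2 - |x|) at a few points
expected = Max(0, 2 - Abs(x))
for val in [-3, -1.5, 0, 0.5, 2.5]:
    assert simplify(conv.subs(x, val) - expected.subs(x, val)) == 0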
Related
I'm trying to minimize a function of N parameters (e.g. x[1], x[2], x[3], ..., x[N]) where the boundaries for the minimization depend on the minimized parameters themselves. For instance, assume that all values of x can vary between 0 and 1 in such a way that summing them all gives 1; then I have the following inequalities for the boundaries:
0 <= x[1] <= 1
x[1] <= x[2] <= 1 - x[1]
x[2] <= x[3] <= 1-x[1]-x[2]
...
x[N-1] <= x[N] <= 1-x[1]-x[2]-x[3]-...-x[N-1]
Does anyone have an idea on how I can construct such an algorithm in Python? Or maybe I can adapt an existing method from SciPy, for example?
As a rule of thumb: As soon as your boundaries depend on the optimization variables, they are inequality constraints instead of boundaries. Using 0-based indices, your inequalities can be written as
# left-hand sides
-x[0] <= 0
x[i] - x[i+1] <= 0    for all i = 0, ..., N-2
# right-hand sides
sum(x[i], i = 0, ..., j) - 1 <= 0    for all j = 0, ..., N-1
Both can be expressed by a simple matrix-vector product:
import numpy as np

# N is the number of optimization variables x[0], ..., x[N-1]
D_lhs = np.diag(np.ones(N-1), k=-1) - np.diag(np.ones(N))
D_rhs = np.tril(np.ones(N))

def lhs(x):
    return D_lhs @ x

def rhs(x):
    return D_rhs @ x - np.ones(x.size)
As a result, you can use scipy.optimize.minimize to minimize your objective function subject to lhs(x) <= 0 and rhs(x) <= 0 like this:
from scipy.optimize import minimize
# minimize expects each inequality constraint in the form con(x) >= 0,
# so lhs(x) <= 0 is the same as -1.0*lhs(x) >= 0
con1 = {'type': 'ineq', 'fun': lambda x: -1.0*lhs(x)}
con2 = {'type': 'ineq', 'fun': lambda x: -1.0*rhs(x)}
result = minimize(your_obj_fun, x0=initial_guess, constraints=(con1, con2))
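Putting it all together with a toy quadratic objective (the objective and the value of N below are just placeholders to make the sketch runnable; only the constraint setup matters):

import numpy as np
from scipy.optimize import minimize

N = 4
D_lhs = np.diag(np.ones(N-1), k=-1) - np.diag(np.ones(N))
D_rhs = np.tril(np.ones(N))

def lhs(x):
    return D_lhs @ x                     # -x[0], x[0]-x[1], ..., x[N-2]-x[N-1]

def rhs(x):
    return D_rhs @ x - np.ones(x.size)   # partial sums of x minus 1

con1 = {'type': 'ineq', 'fun': lambda x: -1.0*lhs(x)}
con2 = {'type': 'ineq', 'fun': lambda x: -1.0*rhs(x)}

# placeholder objective: stay close to an arbitrary feasible target vector
target = np.array([0.05, 0.15, 0.25, 0.35])
obj = lambda x: np.sum((x - target)**2)

result = minimize(obj, x0=np.full(N, 1.0/N), constraints=(con1, con2))
print(result.x)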
Is it possible to find the transformation expression to X from U(0, 1) in SymPy?
import sympy.stats as stat
import sympy as sp
x = sp.Symbol('x')
p = sp.Piecewise( (x + 1, (-1. <= x) & (x <= 0)), (1 - x, (x >= 0) & (x <=1 )), (0, True) )
X = stat.ContinuousRV(x, p, sp.Interval(-1, 1))
cdf = stat.cdf(X)(x)
# Where to go from here?
stat.sample(X)
# TypeError: object of type 'ConditionSet' has no len()
sample in sympy/stats/crv.py
def sample(self):
172 """ A random realization from the distribution """
--> 173 icdf = self._inverse_cdf_expression()
174 return icdf(random.uniform(0, 1))
How can I find the inverse cdf expression from the custom piecewise?
By hand I get: 1 - sqrt(2-2u)
Is it possible with another library?
One issue is that cdf is a nested Piecewise object. These should be folded with piecewise_fold. (Aside: your formula for p has a floating point number 1., I replaced it by 1 to make SymPy's life easier.)
cdf = sp.piecewise_fold(cdf)
u = sp.Symbol('u', positive=True)
inv = sp.solveset(cdf - u, x, domain=sp.Interval(0, 1))
Now inv is
Intersection(Interval.Ropen(0, 1), {-sqrt(2)*sqrt(-u + 1) + 1, sqrt(2)*sqrt(-u + 1) + 1})
It's unfortunate that SymPy did not discard the second solution, which is obviously outside of the interval (0, 1). But at least the first one is correct.
You still can't use this for stat.sample, so any sampling would have to be coded directly. Two remarks aside:
SymPy is not a particularly effective tool for sampling, as it is a numerical task. In NumPy, sampling this specific (triangular) distribution is a one-liner:
>>> np.random.triangular(-1, 0, 1, size=(5,))
array([-0.40718329, 0.26692739, 0.84414925, 0.33518136, -0.7323011 ])
SymPy also has Triangular built in, not that it helps with sampling.
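If you do want to sample through the inverse CDF rather than use the built-in triangular sampler, a minimal inverse-transform sketch (the piecewise inverse here is worked out by hand from the CDF, not taken from SymPy's output) could look like this:

import numpy as np

def sample_triangular(n):
    # density p(x) = 1 - |x| on [-1, 1]; the CDF is (x+1)**2/2 for x <= 0
    # and 1 - (1-x)**2/2 for x >= 0, so invert each branch separately
    u = np.random.uniform(0.0, 1.0, size=n)
    return np.where(u <= 0.5,
                    np.sqrt(2.0*u) - 1.0,        # lower branch, u in (0, 1/2]
                    1.0 - np.sqrt(2.0 - 2.0*u))  # upper branch, u in (1/2, 1)

print(sample_triangular(5))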
I have the following test program. My query is twofold: (1) somehow the solution is coming out as zero, and (2) is it appropriate to use this kind of condition, x2 = np.where(x > y, 1, x), on the variables? Are there any constrained optimization routines in SciPy?
import numpy as np
from scipy import optimize

a = 13.235
b = 70.678

def system(X, a, b):
    x = X[0]
    y = X[1]
    x2 = np.where(x > y, 1, x)
    f = np.zeros(3)
    f[0] = 2*x2 - y - a
    f[1] = 3*x2 + 2*y - b
    return (X)

func = lambda X: system(X, a, b)
guess = [5, 5]
sol = optimize.root(func, guess)
print(sol)
edit: (2a) Here, with the x2 = np.where(x > y, 1, x) condition, the two equations become one equation.
(2b) In another variation the requirement is x2 = np.where(x > y, x**2, x**3). Please comment on these two cases as well. Thanks!
First up, your system function is an identity, since you return X instead of return f. The return value should have the same shape as X, so you had better have
f = np.array([2*x2 - y - a, 3*x2 + 2*y- b])
Next, the function as written has a discontinuity where x = y, and this causes a problem for the initial guess of (5,5). Setting the initial guess to (5,6) allows the solution [13.87828571, 14.52157143] to be found rapidly.
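For reference, a minimal corrected version of the first system (only the return value and the initial guess differ from the question) would be:

import numpy as np
from scipy import optimize

a = 13.235
b = 70.678

def system(X, a, b):
    x, y = X
    x2 = np.where(x > y, 1, x)
    return np.array([2*x2 - y - a, 3*x2 + 2*y - b])

sol = optimize.root(lambda X: system(X, a, b), [5, 6])  # start off the x == y line
print(sol.x)  # approximately [13.878, 14.522]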
With the second example, again using an initial guess of [5,5] causes problems because of the discontinuity; using [5,6] gives a solution of [2.40313743, 14.52157143].
Here is my code:
import numpy as np
from scipy import optimize
def system(X, a=13.235, b=70.678):
    x = np.where(X[0] > X[1], X[0]**2, X[0]**3)
    y = X[1]
    return np.array([2*x - y - a, 3*x + 2*y - b])
guess = [5,6]
sol = optimize.root(system, guess)
print(sol)
I want to plot a piecewise function, such as:
import sympy as sym
x = sym.symbols("x")
f = sym.Piecewise((-1, x < -1),
(x, sym.And(-1 <= x, x < 0)),
(x**2, sym.And(0 <= x, x < 1)),
(x**3, x >= 1))
sym.plotting.plot(f, (x, -3, 3))
However, when running this code, an exception was raised ...
AttributeError: 'BooleanFalse' object has no attribute 'evalf'
I think the problem may come from the two cases
sym.And(-1 <= x, x < 0)
and
sym.And(0 <= x, x < 1)
Here a Python 'bool' seems to be expected, but 'evalf' can't convert the sympy type 'BooleanFalse' into a Python 'bool'.
I wonder how to deal with this problem, and is it possible to plot piecewise functions without using the 'matplotlib' module?
Your function definition is overdone: sympy evaluates the conditions in order and returns the first expression whose condition is True.
I don't understand exactly what went wrong in your definition, but a simpler definition works for me:
In [19]: f = sym.Piecewise(
(-1, x < -1),
(x, x < 0),
(x**2, x < 1),
(x**3, True))
In [20]: sym.plotting.plot(f, (x, -3, 3))
Out[20]: <sympy.plotting.plot.Plot at 0x7f90cb9ec6d8>
PS: I have now understood why your plot fails: plot tries to evaluate the condition by feeding in a value for x, but the condition is the constant BooleanFalse that resulted from evaluating sym.And() at the time your piecewise function was defined.
Which version of sympy are you using?
I think this is a bug that's been fixed, but in the meantime, try this if you can't/don't want to update:
import sympy as sym
from sympy.abc import x
f = x**2
g = x**3
p = sym.Piecewise((-1, x < -1),
(x, x < 0),
(f, x < 1),
(g, True))
sym.plotting.plot(p, (x, -3, 3), adaptive=False)
EDIT:
You can write it as before with this method, but as stated in gboffi's answer, I don't think sympy likes it... try this:
import sympy as sym
from sympy.abc import x
f = x**2
g = x**3
p = sym.Piecewise((-1, x < -1),
(x, sym.And(-1 <= x, x < 0)),
(f, sym.And(0 <= x, x < 1)),
(g, x >= 1))
sym.plotting.plot(p, (x, -3, 3), adaptive=False)
Comment: I have a similar problem with sympy 1.0. The code below gives the AttributeError in 1.0, but not in version 0.7.6.1 which works fine.
f = Piecewise((0, x < 0), (0, x > L), (1+0.3*x, True))
plot(f.subs({L:1}))
I have a generic question on how to solve optimization problems of the Min-Max type, using the PICOS package in Python. I found little information in this context while searching the PICOS documentation and on the web as well.
I can imagine a simple example of the below form.
Given a matrix M, find x* = argmin_x [ max_y x^T M y ], where x > 0, y > 0, sum(x) = 1 and sum(y) = 1.
I have tried a few methods, starting with the most straightforward idea of having minimax, minmax keywords in the objective function of PICOS Problem class. It turns out that none of these keywords are valid, see the package documentation for objective functions. Furthermore, having nested objective functions also turns out to be invalid.
In the last of my naive attempts, I have two functions, Max() and Min() which are both solving a linear optimization problem. The outer function, Min(), should minimize the inner function Max(). So, I have used Max() in the objective function of the outer optimization problem.
import numpy as np
import picos as pic
import cvxopt as cvx

def MinMax(mat):
    ## Perform a simple min-max SDP formulated as:
    ## Given a matrix M, find x* = argmin_x [ max_y x^T M y ], where x > 0, y > 0, sum(x) = sum(y) = 1.
    prob = pic.Problem()
    ## Constant parameters
    M = pic.new_param('M', cvx.matrix(mat))
    v1 = pic.new_param('v1', cvx.matrix(np.ones((mat.shape[0], 1))))
    ## Variables
    x = prob.add_variable('x', (mat.shape[0], 1), 'nonnegative')
    ## Setting the objective function
    prob.set_objective('min', Max(x, M))
    ## Constraints
    prob.add_constraint(x > 0)
    prob.add_constraint((v1 | x) == 1)
    ## Print the problem
    print("The optimization problem is formulated as follows.")
    print prob
    ## Solve the problem
    prob.solve(verbose = 0)
    objVal = prob.obj_value()
    solution = np.array(x.value)
    return (objVal, solution)

def Max(xVar, M):
    ## Given a vector l, find y* such that l y* = max_y l y, where y > 0, sum(y) = 1.
    prob = pic.Problem()
    # Variables
    y = prob.add_variable('y', (M.size[1], 1), 'nonnegative')
    v2 = pic.new_param('v1', cvx.matrix(np.ones((M.size[1], 1))))
    # Setting the objective function
    prob.set_objective('max', ((xVar.H * M) * y))
    # Constraints
    prob.add_constraint(y > 0)
    prob.add_constraint((v2 | y) == 1)
    # Solve the problem
    prob.solve(verbose = 0)
    sol = prob.obj_value()
    return sol

def print2Darray(arr):
    # print a 2D array in a readable (matrix like) format on the standard output
    for ridx in range(arr.shape[0]):
        for cidx in range(arr.shape[1]):
            print("%.2e \t" % arr[ridx, cidx]),
        print("")
    print("========")
    return None

if __name__ == '__main__':
    ## Testing the Simple min-max SDP
    mat = np.random.rand(4, 4)
    print("## Given a matrix M, find x* = argmin_x [ max_y x^T M y ], where x > 0, y > 0, sum(x) = sum(y) = 1.")
    print("M = ")
    print2Darray(mat)
    (optval, solution) = MinMax(mat)
    print("Optimal value of the function is %.2e and it is attained by x = %s and that of y = %.2e." % (optval, np.array_str(solution)))
When I run the above code, it gives me the following error message.
10:stackoverflow pavithran$ python minmaxSDP.py
## Given a matrix M, find x* = argmin_x [ max_y x^T M y ], where x > 0, y > 0, sum(x) = sum(y) = 1.
M =
1.46e-01 9.23e-01 6.50e-01 7.30e-01
6.13e-01 6.80e-01 8.35e-01 4.32e-02
5.19e-01 5.99e-01 1.45e-01 6.91e-01
6.68e-01 8.46e-01 3.67e-01 3.43e-01
========
Traceback (most recent call last):
File "minmaxSDP.py", line 80, in <module>
(optval, solution) = MinMax(mat)
File "minmaxSDP.py", line 19, in MinMax
prob.set_objective('min', Max(x, M))
File "minmaxSDP.py", line 54, in Max
prob.solve(verbose = 0)
File "/Library/Python/2.7/site-packages/picos/problem.py", line 4135, in solve
self.solver_selection()
File "/Library/Python/2.7/site-packages/picos/problem.py", line 6102, in solver_selection
raise NotAppropriateSolverError('no solver available for problem of type {0}'.format(tp))
picos.tools.NotAppropriateSolverError: no solver available for problem of type MIQP
10:stackoverflow pavithran$
At this point, I am stuck and unable to fix this problem.
Is it just that PICOS does not natively support min-max problems, or is my way of encoding the problem incorrect?
Please note: The reason I am insisting on using PICOS is that ideally, I would like to know the answer to my question in the context of solving a min-max semidefinite program (SDP). But I think the addition of semidefinite constraints is not hard, once I can figure out how to do a simple min-max problem using PICOS.
The first answer is that min-max problems are not natively supported in PICOS. However, whenever the inner maximization problem is a convex optimization problem, you can reformulate it as a minimization problem (by taking the Lagrangian dual), and so you get a min-min problem.
Your particular problem is a standard zero-sum game and can be reformulated as follows (assuming M is of dimension n x m; since the inner maximum of x^T M y over the simplex {y >= 0, sum(y) = 1} is attained at a vertex, it equals the largest entry of M^T x):
min_x max_{i=1,...,m} [M^T x]_i  =  min_{x,t} t   s.t.   [M^T x]_i <= t  for i = 1,...,m
In Picos:
import picos as pic
import cvxopt as cvx
n=3
m=4
M = cvx.normal(n,m) #generate a random matrix
P = pic.Problem()
x = P.add_variable('x',n,lower=0)
t = P.add_variable('t',1)
P.add_constraint(M.T*x <= t)
P.add_constraint( (1|x) == 1)
P.minimize(t)
print 'the solution is x='
print x
If you also need the optimal y, you can show that it corresponds to the dual variable of the constraint M^T x <= t:
print 'the solution of the inner max-problem is y='
print P.constraints[0].dual
Best,
Guillaume.