Retrieve symbolic function from cost or constraint in pydrake - python

I apparently have a complicated enough constraint function that it takes over 1 minute just to call prog.AddConstraint(), presumably because it's spending a long time constructing the symbolic function (although if you have other insights into why it takes so long, I would appreciate them). What I would like to do is pull out the symbolic constraint function and cache it, so that once it's created I don't need to wait 1 minute; it can just be loaded from disk.
My issue is that I don't know how to access that function. I can clearly see the expression when I print it, as in this simplified example:
from pydrake.all import MathematicalProgram
prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
c = prog.AddConstraint(x[0] * x[1] == 1)
print(c)
I get the output:
ExpressionConstraint
0 <= (-1 + (x(0) * x(1))) <= 0
I realize I could parse that string and pull out the part that represents the function, but it seems like there should be a better way. I'm thinking I could pull out the expression function and upper/lower bounds and use them in subsequent AddConstraint calls.
In my real use-case, I need to apply the same constraint to multiple timesteps in a trajectory, so I think it would be helpful to create the symbolic function once when I call AddConstraint for the first timestep and then all subsequent timesteps shouldn't have to re-create the symbolic function, I can just use the cached version and apply it to the relevant variables. The expression involves a Cholesky decomposition of a covariance matrix so there are a lot of constraints being applied.
Any help is greatly appreciated.

I would strongly encourage you to write this constraint using a function evaluation instead of a symbolic expression. For example, if you want to impose the constraint lb <= my_evaluator(x) <= ub, you can call it this way:
def my_evaluator(x):
    # Do the Cholesky decomposition and other work, then return the result.
    return result

prog.AddConstraint(my_evaluator, lb, ub, x)
Adding a constraint via a symbolic expression is convenient when the constraint is linear in the decision variables; otherwise it is better to avoid symbolic expressions when adding constraints. (Evaluating a symbolic expression, especially one involving a Cholesky decomposition, is really time consuming.)
For more details on adding generic nonlinear constraints, you can refer to our tutorial.
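To make the suggestion concrete: here is a minimal sketch of what such an evaluator could look like for the Cholesky use-case described in the question. The function and its inputs are hypothetical stand-ins (a 2x2 covariance built from x, with a made-up fixed off-diagonal), not the asker's actual constraint; the point is that the evaluator is an ordinary Python function, so it is defined once and can be bound to each timestep's variables without rebuilding anything symbolic.

```python
import numpy as np

def cholesky_evaluator(x):
    """Hypothetical evaluator: build a 2x2 covariance from x and return the
    diagonal of its Cholesky factor (a stand-in for the real constraint)."""
    # x = [variance_0, variance_1]; the off-diagonal is fixed for illustration.
    cov = np.array([[x[0], 0.1],
                    [0.1, x[1]]])
    L = np.linalg.cholesky(cov)
    return np.diag(L)

# The same function can then be attached to every timestep's variables,
# e.g. (assuming a pydrake program and per-timestep variables x_vars[t]):
#
#   for t in range(N):
#       prog.AddConstraint(cholesky_evaluator, lb, ub, x_vars[t])
```

Nothing symbolic is constructed here; the solver simply calls the function with candidate values, so there is no one-minute expression-building cost per timestep.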

Related

Avoid calling function twice in scipy.optimize.minimize

I want to solve an optimization problem with scipy.optimize.minimize where both the objective and the inequality constraint function use the result of a common "simulation" function which depends on x.
The naive approach is simply to call the "simulation" function in the objective and the constraint. While this works, it is not very efficient because this means that "simulation" is evaluated twice.
Is there a way to avoid this, possibly by storing and reusing already computed results? In Matlab it is possible using a nested function (see here), however this did not seem to work in python.
Thank you very much for your help.
One approach is to add a decision variable and an equality constraint:
Min y
y >= c
y = fsimulation(x)
Of course, this generalizes to higher-dimensional y.
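The reformulation above can be sketched with scipy.optimize.minimize itself. The simulation here is a made-up stand-in (x squared) and c = 1.0 is an arbitrary bound, purely for illustration; the point is that fsimulation is evaluated only once per iterate, inside the equality constraint that ties it to y.

```python
import numpy as np
from scipy.optimize import minimize

def fsimulation(z):
    # Stand-in for the expensive simulation; depends only on x = z[0].
    return z[0] ** 2

c = 1.0

# Decision vector z = [x, y]: minimize y subject to
#   y == fsimulation(x)   (ties y to the simulation output)
#   y >= c                (the original inequality constraint)
res = minimize(
    lambda z: z[1],
    x0=np.array([2.0, 4.0]),
    method="SLSQP",
    constraints=[
        {"type": "eq", "fun": lambda z: z[1] - fsimulation(z)},
        {"type": "ineq", "fun": lambda z: z[1] - c},
    ],
)
```

With these stand-ins the optimum is y = 1 at x = 1, since y = x^2 is pushed down until the y >= c bound becomes active.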

Is it possible to model a min-max-problem using pyomo

Is it possible to formulate a min-max optimization problem of the following form in pyomo:
min( max_m( g_m(x) ) ) s.t. L
where the g_m are nonlinear functions (actually constraints of another model) and L is a set of linear constraints?
How would I create the expression for the objective function of the model?
The problem is that using max() on a list of constraint objects returns only the constraint possessing the maximum value at a given point.
I think yes, but unless you find a clever way to reformulate your model, it might not be very efficient.
You could solve all possibilities of max(g_m(x)), then select the solution with the lowest objective function value.
I fear that the max operation is not something you can add to a minimization model, since it is not a mathematical operation but a solver operation. This operation is at the problem level. Keep in mind that when solving a model, Pyomo requires as an argument only one sense of optimization (min or max), making it unable to understand a min-max sense. Even if it did, how could it know what to maximize or minimize? This is why I suggest you break your problem in two, unless you rework its formulation.
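For what it's worth, the reformulation hinted at above is usually the standard epigraph trick: introduce an auxiliary variable t, minimize t, and require t >= g_m(x) for every m, which removes the max() from the model entirely. A minimal sketch of the idea with made-up g_m, shown here with scipy rather than pyomo (the same reformulation carries over):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inner functions g_m; z = [x, t], and each g_m depends on x = z[0].
g = [lambda z: (z[0] - 1.0) ** 2,
     lambda z: (z[0] + 1.0) ** 2]

# Epigraph reformulation of min max_m g_m(x):
#   minimize t  subject to  t >= g_m(x) for all m
res = minimize(
    lambda z: z[1],
    x0=np.array([0.5, 3.0]),
    method="SLSQP",
    constraints=[{"type": "ineq", "fun": (lambda z, gm=gm: z[1] - gm(z))}
                 for gm in g],
)
```

With these two parabolas the minimizer of the max is x = 0 with value 1, where both constraints are active. Note the gm=gm default argument, which avoids Python's late-binding pitfall in the list comprehension.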

Optimization in scipy with shared computation in constraints

I am trying to do non-linear constrained optimization of an objective function in scipy.
My problem is that I have many constraints that share intermediate results. Something like:
def constraint1_i(x):
    T = f_i(x)
    return g(T)

def constraint2_j(x):
    T = f_j(x)
    return h(T)
where i and j run through 1 to n.
In other words, in each constraint, I run f on x to get an intermediate value needed to compute the constraint. The same computation gets run twice unnecessarily (whenever i == j, f_i and f_j are the same call)!
Is there a way to somehow share computations across constraints to avoid the double calculation?
Note: this question is somewhat similar, but is more specific (and also has no answer).
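One common pattern for this (not taken from the thread, which has no posted answer) is to cache the most recent evaluation of f, so that every constraint called with the same x reuses the intermediate result. All names here are made up for illustration; the key assumption is that the solver evaluates multiple constraints at the same point before moving on.

```python
import numpy as np

class CachedSimulation:
    """Evaluate f(x) once per distinct x; reuse the result across constraints."""
    def __init__(self, f):
        self.f = f
        self.calls = 0          # counts real evaluations, for demonstration
        self._last_x = None
        self._last_T = None

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        if self._last_x is None or not np.array_equal(x, self._last_x):
            self.calls += 1
            self._last_T = self.f(x)
            self._last_x = x.copy()
        return self._last_T

shared_f = CachedSimulation(lambda x: x.sum())   # stand-in for the expensive f

def constraint1(x):
    return shared_f(x) - 1.0     # stand-in for g(T)

def constraint2(x):
    return 2.0 - shared_f(x)     # stand-in for h(T)
```

Both constraints now share one evaluation per point; a single cached entry suffices because solvers typically evaluate all constraints at the current iterate before stepping.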

Theano cost function

I am trying to learn how to use Theano. I work very frequently with survival analysis and I wanted therefore to try to implement a standard survival model using Theano's automatic differentiation and gradient descent. The model that I am trying to implement is called the Cox model and here is the wikipedia article: https://en.wikipedia.org/wiki/Proportional_hazards_model
Very helpfully, they have written there the partial likelihood function, which is what is maximized when estimating the parameters of a Cox model. I am quite new to Theano and as a result am having difficulties implementing this cost function and so I am looking for some guidance.
Here is the code I have written so far. My dataset has 137 records and hence the reason I hard-coded that value. T refers to the tensor module, and W refers to what the wikipedia article calls beta, and status is what wikipedia calls C. The remaining variables are identical to wikipedia's notation.
def negative_log_likelihood(self, y, status):
    v = 0
    for i in xrange(137):
        if T.eq(status[i], 1):
            v += T.dot(self.X[i], self.W)
            u = 0
            for j in xrange(137):
                if T.gt(y[j], y[i]):
                    u += T.exp(T.dot(self.X[j], self.W))
            v -= T.log(u)
    return T.sum(-v)
Unfortunately, when I run this code, I am unhappily met with an infinite recursion error, which I hoped would not happen. This makes me think that I have not implemented this cost function in the way that Theano would like and so I am hoping to get some guidance on how to improve this code so that it works.
You are mixing symbolic and non-symbolic operations, but this doesn't work.
For example, T.eq returns a non-executable symbolic expression representing the idea of comparing two things for equality; it doesn't actually do the comparison there and then. T.eq returns a Python object representing the comparison, and since any non-None object reference is treated as truthy in Python, execution will always continue inside the if statement.
If you need to construct a Theano computation involving conditionals then you need to use one of its two symbolic conditional operations: T.switch or theano.ifelse.ifelse. See the documentation for examples and details.
You are also using Python loops, which is probably not what you need. To construct a Theano computation that explicitly loops, you need to use theano.scan. However, if you can express your computation in terms of matrix operations (dot products, reductions, etc.) then it will run much, much faster than something using scans.
I suggest you work through some more Theano tutorials before trying to implement something complex from scratch.
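To illustrate the "matrix operations" suggestion, here is the same negative log partial likelihood written with plain numpy array operations and no Python loops; the identical expressions would work on Theano tensors. One assumption worth flagging: this uses y_j >= y_i for the risk set (the usual Cox definition from the linked article), whereas the original code used a strict inequality.

```python
import numpy as np

def negative_log_likelihood(X, W, y, status):
    eta = X @ W                                   # linear predictor, one per record
    # risk[i, j] is 1 when record j is still at risk at event time y[i]
    risk = (y[None, :] >= y[:, None]).astype(float)
    # log of the sum over each risk set, computed for all i at once
    log_risk = np.log(risk @ np.exp(eta))
    # only records with an observed event (status == 1) contribute terms
    return -np.sum(status * (eta - log_risk))
```

The two nested loops over 137 records become one matrix-vector product, which is the kind of rewrite the answer recommends before reaching for scan.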

parameter within an interval while optimizing

Usually I use Mathematica, but now trying to shift to python, so this question might be a trivial one, so I am sorry about that.
Anyways, is there any built-in function in python which is similar to the function named Interval[{min,max}] in Mathematica ? link is : http://reference.wolfram.com/language/ref/Interval.html
What I am trying to do is: I have a function and I am trying to minimize it, but it is a constrained minimization; by that I mean the parameters of the function are only allowed within some particular interval.
For a very simple example, let's say f(x) is a function with parameter x and I am looking for the value of x which minimizes the function, but x is constrained within an interval (min, max). [Obviously the actual problem is not one-dimensional but multi-dimensional, so different parameters may have different intervals.]
Since it is an optimization problem, of course I do not want to pick the parameter randomly from an interval.
Any help will be highly appreciated, thanks!
If it's a highly non-linear problem, you'll need to use an algorithm such as the Generalized Reduced Gradient (GRG) Method.
The idea of the generalized reduced gradient algorithm (GRG) is to solve a sequence of subproblems, each of which uses a linear approximation of the constraints. (Ref)
You'll need to ensure that certain conditions known as the KKT conditions are met, etc. but for most continuous problems with reasonable constraints, you'll be able to apply this algorithm.
This is a good reference for such problems with a few examples provided. Ref. pg. 104.
Regarding implementation:
While I am not familiar with Python, I have built solver libraries in C++ using templates as well as function pointers, so you can pass functions (for the objective as well as the constraints) as arguments to the solver and get your result - hopefully in polynomial time for convex problems, or in cases where the initial values are reasonable.
If Python can pass functions as arguments in the same way, it shouldn't be difficult to build a generalized GRG solver.
The Python Solution:
Edit: Here is the python solution to your problem: Python constrained non-linear optimization
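For the simple box-interval case the asker describes, scipy's bounds argument plays the role of Mathematica's Interval directly: one (min, max) pair per parameter. A minimal sketch with a made-up two-parameter objective:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Made-up objective; its unconstrained minimum is at (3, -2).
    return (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2

# One (min, max) interval per parameter, like Interval[{min, max}] per variable.
bounds = [(0.0, 2.0), (-1.0, 1.0)]

res = minimize(f, x0=np.array([1.0, 0.0]), method="L-BFGS-B", bounds=bounds)
```

Since the unconstrained minimum lies outside the box, the solution lands on the boundary at (2, -1). No random sampling of the interval is involved; the bounds are enforced by the solver itself.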
