How to use a decision variable as a time index? - python

I am writing a mathematical program and want to let the solver choose the optimal time to switch between two simulation phases. At that specific time, I am writing a constraint for the states of one model instance to be equal to the states of a second model instance.
I am trying to achieve something like:
t_switch = prog.NewContinuousVariables(1, name="t_switch")
prog.AddConstraint(eq(q_A[t_switch[0],0], q_B[t_switch[0],0]))
but this raises an IndexError, since t_switch[0] is a pydrake symbolic.Variable while an integer is needed as an index.
Any thoughts on how to use a symbolic variable as an index in another constraint, or an alternative way to solve this problem?

I wouldn't suggest treating the time index as a decision variable. Instead, you can fix the trajectory to switch at a specific time index, but make the time-step duration a decision variable. This still lets you change the total duration of each phase.
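A minimal sketch of that idea in pydrake (the knot counts, state size, and timestep bounds below are assumptions, not taken from the question): the switch is pinned to the last knot of phase A, and the per-phase timestep is a decision variable, so the solver still chooses the physical switching time N_A * h_A.

from pydrake.all import MathematicalProgram, eq

N_A, N_B, n_q = 10, 10, 2  # assumed knot counts and state dimension

prog = MathematicalProgram()
q_A = prog.NewContinuousVariables(N_A + 1, n_q, "q_A")
q_B = prog.NewContinuousVariables(N_B + 1, n_q, "q_B")
h_A = prog.NewContinuousVariables(1, "h_A")  # timestep of phase A (decision variable)
h_B = prog.NewContinuousVariables(1, "h_B")  # timestep of phase B (decision variable)

# Keep the timesteps positive and bounded.
prog.AddBoundingBoxConstraint(1e-3, 0.1, h_A)
prog.AddBoundingBoxConstraint(1e-3, 0.1, h_B)

# The switch happens at a fixed knot index, so no symbolic indexing is needed
# for the continuity constraint between the two phases.
prog.AddConstraint(eq(q_A[-1, :], q_B[0, :]))

# Dynamics constraints for each phase would be written in terms of h_A and h_B,
# so the optimizer effectively chooses the switching time N_A * h_A.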

Optimizing a problem in OpenMDAO so that objective takes specific value

I want to know if it is possible to optimize a problem in OpenMDAO in such a way that the objective approaches a specified value, rather than minimizing or maximizing the objective.
For example:
prob.model.add_objective("objective1", equals=10)
as is done when specifying constraints, is not possible.
You cannot specify an equality for the objective like that. You could specify a given objective, then secondarily add an equality constraint for that same value. This is technically valid, but it would be a very strange way to pose an optimization problem.
If you have a specific design variable you hope to vary to satisfy the equality constraint, then you probably don't want to do an optimization at all. Instead, you likely want to use a solver. You can use solvers to vary just one variable, or potentially more than one (as long as you have one equality constraint per variable). A generic example of using a solver can be found here, setting up a basic nonlinear circuit analysis.
However, in your case you more likely want to use a BalanceComp. You can set a specific fixed value into the right-hand side of the balance using the rhs_val argument like this:
from openmdao.api import BalanceComp

bal = BalanceComp()
bal.add_balance('x', val=1.0, rhs_val=3.0)
Then you can connect the variable you want to hold fixed to that value to the left hand side of the balance.
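Put together, a minimal sketch might look like the following (the analysis is a made-up stand-in, objective1 = x**2, and all names are assumptions; the real model would replace the ExecComp):

import openmdao.api as om

prob = om.Problem()
model = prob.model

# Stand-in for the real analysis whose output should hit the target value.
model.add_subsystem('analysis', om.ExecComp('objective1 = x**2'))

bal = om.BalanceComp()
bal.add_balance('x', val=1.0, rhs_val=10.0)   # rhs_val is the fixed target
model.add_subsystem('balance', bal)

# Left-hand side of the balance is the quantity to pin to rhs_val;
# the balance's state feeds back as the analysis input.
model.connect('analysis.objective1', 'balance.lhs:x')
model.connect('balance.x', 'analysis.x')

model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
model.linear_solver = om.DirectSolver()

prob.setup()
prob.run_model()
print(prob.get_val('balance.x'))   # the x for which objective1 == 10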

How to define output variable with dynamic shape in OpenMDAO

I am currently simulating a structural optimization problem in which the gradients of responses are extracted from Nastran and provided to the SLSQP optimizer in OpenMDAO. The number of constraints changes in subsequent iterations, because the design variables include both shape and sizing variables, so a new mesh is generated every time. A constraint component is defined in OpenMDAO, and it reads the response data exported from Nastran. The issue is in defining the shape of its output variable "f_const". The shape of this output variable needs to adjust to the shape of the available response array, since outputs['f_const'] = np.loadtxt("nastran_const.dat"). Here, nastran_const.dat is the file containing response data extracted from Nastran. The shape of this data is not known at the beginning of a design iteration and keeps changing during subsequent iterations. So, if some shape of f_const is defined at the start, it does not change later and an error is raised because of the mismatch in shapes.
In the OpenMDAO docs, I found https://openmdao.org/newdocs/versions/latest/features/experimental/dyn_shapes.html?highlight=varying%20shape
It explains that the shape of an input/output variable can be made dynamic by linking it to a connected or local variable whose shape is already known. This is different from my case, because the shape of the stress array is not known before the computation starts. The shape of f_const has to be defined in setup, and I cannot figure out how to change it later. Please guide me in this regard.
You can't have arrays that change shape like that. The "dynamic" shaping you found in the docs refers to setup-time variation. Once setup has finished, sizes are fixed. So we need a way for your arrays to be of a fixed size.
If you really must re-mesh every time (which I don't recommend) then there are two possible solutions I can think of:
Over-allocation
Constraint Aggregation
Option 1 -- Over Allocation
This topic is covered in detail in this related question, but briefly what you could do is allocate an array big enough that you always have enough space. Then you can use one entry of the array to record how many active entries are in it. Any non-active entries would be set to a default value that won't violate your constraints.
You'll have to be very careful with the way you define the derivatives. For active array entries, the derivatives come from NASTRAN. For inactive ones, you could set them to 0, but note that you are creating a discrete discontinuity when an entry switches to active. This may very well give the optimizer fits when it's trying to converge and derivatives of active constraints keep flipping between 0 and nonzero values.
I really don't recommend this approach, but if you absolutely must have "variable size" arrays then over-allocation is your best bet.
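For illustration, a hedged sketch of what such an over-allocated constraint component might look like (MAX_CON, the padding value, and the g <= 0 convention are all assumptions):

import numpy as np
import openmdao.api as om

MAX_CON = 500   # assumed upper bound on the number of stress constraints

class NastranConstraints(om.ExplicitComponent):

    def setup(self):
        # Fixed-size output; inactive entries are padded with a value that
        # can never violate a g <= 0 constraint.
        self.add_output('f_const', val=np.full(MAX_CON, -1.0e3))
        self.add_output('n_active', val=0.0)   # bookkeeping entry

    def compute(self, inputs, outputs):
        data = np.atleast_1d(np.loadtxt('nastran_const.dat'))  # responses from NASTRAN
        n = data.size
        outputs['f_const'][:] = -1.0e3   # reset the padding each iteration
        outputs['f_const'][:n] = data
        outputs['n_active'] = n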
Option 2 -- Constraint Aggregation
The key idea here is to use an aggregation function to collapse all the stress constraints into a single value. For structural problems this is most often done with a KS function. OpenMDAO has a KSComp in its standard library that you can use.
The key is that this component requires a constant-sized input, so again over-allocation would be used here. In this case, though, you don't need to track the number of active values in the array, because you are passing the whole thing to the aggregation function. KS functions behave like smooth max functions, so a bunch of padded default entries shouldn't affect the result.
Your problem still has a discontinuous operation going on with the re-meshing and the noisy constraint array. The KS function should smooth some of that, but not all of it. I still think you'll have trouble converging, but it should work better than raw over-allocation.
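A hedged sketch of the aggregation wiring (the IndepVarComp is just a stand-in source for the padded array, and MAX_CON matches the assumed over-allocated length above; note ks.g expects shape (1, width)):

import numpy as np
import openmdao.api as om

MAX_CON = 500

prob = om.Problem()
model = prob.model

# Stand-in source for the padded constraint array.
ivc = om.IndepVarComp()
ivc.add_output('f_const', val=np.full((1, MAX_CON), -1.0e3))
model.add_subsystem('stresses', ivc)

model.add_subsystem('ks', om.KSComp(width=MAX_CON, rho=50.0))
model.connect('stresses.f_const', 'ks.g')

# One smooth scalar constraint stands in for the whole variable-length set.
model.add_constraint('ks.KS', upper=0.0)

prob.setup()
prob.run_model()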
Option 3 -- The "right" answer
Find a way to fix your grid, so it never changes. I know this is hard if you're using VSP to generate your discretizations and letting NASTRAN re-grid things from there ... but it's not impossible at all.
OpenVSP has a set of geometry-query functions that can be used to back-fit fixed meshes into the parametric space of the geometry. If you do that, then you can regenerate the geometry in VSP and use the parametric space to move your fixed grids with it. This is how the pyGeo tool from the University of Michigan MDO Lab does it, and it works very well.
It's a modest amount of work (though a lot less if you use pyGeo directly), but I think it's well worth it. You'll get faster components and a much more stable optimization.

How can I use implicit components to assemble a full system?

I'm working on a panel method code at the moment. To keep us from being bogged down in the minutia, I won't show the code - this is a question about overall program structure.
Currently, I solve my system by:
Generating the corresponding rows of the A matrix and b vector in an explicit component for each boundary condition
Assembling the partial outputs into the full A, b.
Solving the linear system, Ax=b, using a LinearSystemComp.
Here's a (crude) diagram:
I would prefer to be able to do this by just writing one implicit component to represent each boundary condition, vectorising the inputs/outputs to represent multiple rows/cols in the matrix, then allowing openMDAO to solve for the x while driving the residual for each boundary condition to 0.
I've run into trouble trying to make this work, as each implicit component is underdetermined (more entries in the output vector x than component residuals; that is, A1.x - b1 = R1 with length(R1) < length(x)). Essentially, I would like OpenMDAO to take each of these underdetermined implicit systems and find the value of x that solves the determined full system, without needing to do all of the assembling stuff myself.
Something like this:
To try and make my goal clearer, I'll explain what I actually want from the perspective of my panel method. I'd like a component, let's say Influence, that computes the potential induced by a given panel at a given point in the panel's reference frame. I'd like to vectorise the input panels and points such that it can compute the influence coefficient of many panels on one point, of many points on one panel, or of many points on many panels.
I'd then like a system of implicit boundary conditions to find the correct value of mu to solve the system. These boundary conditions, again, should be able to be vectorised to compute the violation of the boundary condition at many points under the influence of many panels.
I get confused again at this part. Not every boundary condition will use the influence coefficient values - some, like the Kutta condition, are just enforced directly on the mu vector.
How would I implement this as an implicit component? It has no inputs, and doesn't output the full mu vector.
I appreciate that the question is rather long and rambling, but I'm pretty confused. To summarise:
How can I use openMDAO to solve multiple individually underdetermined (but combined, fully determined) implicit systems?
How can I use openMDAO to write an implicit component that takes no inputs and only uses a portion of the overall solution vector?
In the OpenMDAO docs there is a close analog to what you are trying to accomplish: the node-voltage analysis tutorial. In that code, the balance comp is used to create an implicit relationship that is similar to what you're describing. It's singular on its own, but as part of a larger group it is a well-defined system.
You'll need to find a way to build similar components for your model. Each "row" in your equation will be associated with one state variable (one entry in your x vector).
In the simplest case, each row (or set of rows) would have one input which is the associated row of the A matrix, a second input which is ALL of the other values of x, and a final input which is the entry of the b vector (the right-hand-side vector). Then you could evaluate the residual for that specific row, which would be the following:
R['x_i'] = np.sum(A*x_full) - b
where x_full is the assembly of the full x-vector from the x_other input and the x_i state variable.
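Concretely, one such row component might look like the following (a hedged sketch; the option names, shapes, and fd-approximated partials are my assumptions, not a prescribed OpenMDAO pattern):

import numpy as np
import openmdao.api as om

class RowResidual(om.ImplicitComponent):
    """One boundary-condition 'row': its state x_i is defined implicitly by
    A_row . x_full - b_i = 0, where x_full combines x_i with the other states."""

    def initialize(self):
        self.options.declare('size', types=int)   # length of the full x vector
        self.options.declare('index', types=int)  # which entry of x this row owns

    def setup(self):
        n = self.options['size']
        self.add_input('A_row', shape=n)        # this row of the A matrix
        self.add_input('x_other', shape=n - 1)  # all the other entries of x
        self.add_input('b_i', shape=1)          # this row's right-hand side
        self.add_output('x_i', shape=1)         # the state this row owns
        self.declare_partials('*', '*', method='fd')  # analytic partials would be better

    def apply_nonlinear(self, inputs, outputs, residuals):
        i = self.options['index']
        x_full = np.insert(inputs['x_other'], i, outputs['x_i'])
        residuals['x_i'] = np.sum(inputs['A_row'] * x_full) - inputs['b_i']

A group of these, one per entry of x, with connections wiring every x_i into every other component's x_other input, would then be converged with a Newton solver at the group level.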
#########
Having proposed the above solution, I have to say that I don't think this is a particularly efficient way to build or solve this linear system. It is modular, and might give you some flexibility, but you're jumping through a lot of hoops to avoid doing some index-math, and shoving everything into a matrix.
Granted, the derivatives might be a bit easier in your design, because the matrix assembly is going to get handled "magically" by the connections you have to create between the various row-components. So maybe it's worth the trade... but I would say you might be better off trying a more traditional coding approach and using JAX or some other AD tool to make the derivatives easier.

Is it possible to model a min-max-problem using pyomo

Is it possible to formulate a min-max optimization problem of the following form in Pyomo:
min max_m g_m(x)  s.t.  L
where the g_m are nonlinear functions (actually constraints of another model) and L is a set of linear constraints?
How would I create the expression for the objective function of the model?
The problem is that using max() on a list of constraint objects returns only the constraint possessing the maximum value at a given point.
I think yes, but unless you find a clever way to reformulate your model, it might not be very efficient.
You could solve for every possibility of max(g_m(x)), then select the solution with the lowest objective function value.
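For reference, one common reformulation (a minimal Pyomo sketch; the example g_m, variable sizes, and solver choice are assumptions) introduces an auxiliary variable t that bounds every g_m from above, turning the min-max into an ordinary minimization:

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var(range(2), domain=pyo.Reals)
m.t = pyo.Var()

# Example nonlinear g_m (stand-ins for the constraints of the other model).
g = [lambda x: (x[0] - 1) ** 2 + x[1] ** 2,
     lambda x: x[0] ** 2 + (x[1] + 2) ** 2]

# Epigraph constraints: g_m(x) <= t for every m.
m.epigraph = pyo.ConstraintList()
for g_m in g:
    m.epigraph.add(g_m(m.x) <= m.t)

# The linear constraint set L would be added here as ordinary constraints.
m.obj = pyo.Objective(expr=m.t, sense=pyo.minimize)

# pyo.SolverFactory('ipopt').solve(m)  # any NLP solver can then handle it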
I fear that the max operation is not something you can add to a minimization model as-is, since it is not a mathematical operation but a solver operation; it lives at the problem level. Keep in mind that when solving a model, Pyomo accepts only one sense of optimization (min or max), making it unable to understand a min-max sense. Even if it did, how could it know what to maximize or minimize? This is why I suggest you break your problem in two, unless you can rework its formulation.

Are shadow prices valid after an infeasible solve from COIN using PULP

I am solving a minimization linear program using COIN-OR's CLP solver with PULP in Python.
The variables that are included in the problem are a subset of the total number of possible variables, and sometimes my pricing heuristic will pick a subset of variables that results in an infeasible solution, after which I use shadow prices to price new variables in.
My question is, if the problem is infeasible, I still get values from calling prob.constraints[c].pi, but those values don't always seem to be "valid" or "good" per se.
Now, a solver like Gurobi won't even let me call the shadow prices after an infeasible solve.
Actually Stu, this might work! The "dummy var" in my case could be the source/sink node, which I can loosen the flow constraints on, allowing infinite flow in/out but with a large cost. This makes the solution feasible, with a very high (bad) optimal cost; then the pricing of the new variables should work and show me which variables to add to the problem on the next iteration. I'll give it a try and report back. My only concern is that the big-M cost coefficient on the source/sink node may skew the pricing of the variables, making all of them look relatively attractive. This would be counterproductive, because adding most of the variables back into the problem would defeat the purpose of my column generation in the first place. I'll test it...
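A tiny PuLP sketch of that idea (all names, costs, and the big-M value are made up for illustration): relax the constraint with a high-cost artificial variable so the LP stays feasible and the duals remain available for pricing.

import pulp

BIG_M = 1.0e4

prob = pulp.LpProblem("restricted_master", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=0)
slack = pulp.LpVariable("artificial_flow", lowBound=0)  # extra flow at the source/sink

prob += 3 * x + BIG_M * slack, "cost"
prob += x + slack >= 10, "demand"   # original constraint, relaxed by the slack

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(prob.constraints["demand"].pi)  # shadow price used to price candidate variables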
