How to define output variable with dynamic shape in OpenMDAO - python

I am currently simulating a structural optimization problem in which the gradients of the responses are extracted from Nastran and provided to the SLSQP optimizer in OpenMDAO. The number of constraints changes between iterations, because the design variables include both shape and sizing variables, so a new mesh is generated every time. A constraint component is defined in OpenMDAO, and it reads the response data exported from Nastran. Now, the issue is in defining the shape of its output variable "f_const". The shape of this output variable needs to adjust to the shape of the available response array, since outputs['f_const'] = np.loadtxt("nastran_const.dat"). Here, nastran_const.dat is the file containing the response data extracted from Nastran. The shape of this data is not known at the beginning of a design iteration and keeps changing during the subsequent iterations. So, if some shape of f_const is defined at the start, it does not change later, and an error is raised because of the mismatch in shapes.
In the OpenMDAO docs, I found https://openmdao.org/newdocs/versions/latest/features/experimental/dyn_shapes.html?highlight=varying%20shape
It explains that the shape of an input/output variable can be made dynamic by linking it to a connected or local variable whose shape is already known. This is different from my case, because the shape of the stress array is not known before the start of the computation. The shape of f_const has to be defined in setup, and I cannot figure out how to change it later. Please guide me in this regard.

You can't have arrays that change shape like that. The "dynamic" shaping you found in the docs refers to setup-time variation only; once setup has finished, sizes are fixed. So we need a way for your arrays to be of a fixed size.
If you really must re-mesh every time (which I don't recommend) then there are two possible solutions I can think of:
Over-allocation
Constraint Aggregation
Option 1 -- Over Allocation
This topic is covered in detail in a related question, but briefly, what you could do is allocate an array big enough that you always have enough space. Then you can use one entry of the array to record how many active entries are in it. Any non-active entries would be set to a default value that won't violate your constraints. A sketch of this idea is shown below.
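For concreteness, here is a minimal sketch of what that could look like, assuming the constraints are posed as f_const <= 0 so that a large negative default is always feasible (the file name is taken from the question; the size and class name are hypothetical):

import numpy as np
import openmdao.api as om

MAX_CON = 500  # over-allocated size, chosen larger than any mesh will produce

class NastranConstraints(om.ExplicitComponent):
    """Over-allocated constraint component (hypothetical sketch)."""

    def setup(self):
        # fixed-size output, plus one extra output recording the active count
        self.add_output('f_const', shape=MAX_CON)
        self.add_output('n_active', val=0.0)

    def compute(self, inputs, outputs):
        data = np.atleast_1d(np.loadtxt('nastran_const.dat'))
        n = data.size
        # inactive entries get a default that can never violate the
        # constraint (assumes f_const <= 0 is the feasible side)
        outputs['f_const'][:] = -100.0
        outputs['f_const'][:n] = data
        outputs['n_active'] = n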
You'll have to be very careful with the way you define the derivatives. For active array entries, the derivatives come from NASTRAN. For inactive ones, you could set them to 0, but note that you are creating a discrete discontinuity when an entry switches to active. This may very well give the optimizer fits when it's trying to converge and the derivatives of active constraints keep flipping between 0 and nonzero values.
I really don't recommend this approach, but if you absolutely must have "variable size" arrays then over-allocation is your best bet.
Option 2 -- Constraint Aggregation
The key idea here is to use an aggregation function to collapse all the stress constraints into a single value. For structural problems this is most often done with a KS function. OpenMDAO has a KSComp in its standard library that you can use.
The key is that this component requires a constant-sized input, so over-allocation would be used here as well. In this case, though, you shouldn't need to track the number of active values in the array, because you are passing the whole array to the aggregation function. KS functions are like smooth max functions, so if you have a bunch of 0's they shouldn't affect the result.
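For reference, wiring a fixed-size (over-allocated) constraint array into OpenMDAO's KSComp could look roughly like this; the size and the filled-in values here are assumptions:

import numpy as np
import openmdao.api as om

N = 50  # assumed over-allocated array size

prob = om.Problem()
# aggregate N constraint values into one smooth-max value enforcing g <= 0
prob.model.add_subsystem('ks', om.KSComp(width=N, rho=50.0, upper=0.0))
prob.setup()

prob['ks.g'] = np.zeros((1, N))                  # inactive entries left at 0
prob['ks.g'][0, :10] = np.random.rand(10) - 1.0  # hypothetical active values
prob.run_model()
print(prob['ks.KS'])                             # single aggregated value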
Your problem still has a discontinuous operation going on, with the re-meshing and the resulting noisy constraint array. The KS function should smooth some of that, but not all of it. I still think you'll have trouble converging, but it should work better than raw over-allocation.
Option 3 -- The "right" answer
Find a way to fix your grid so it never changes. I know this is hard if you're using VSP to generate your discretizations and letting NASTRAN re-grid things from there... but it's not impossible at all.
OpenVSP has a set of geometry-query functions that can be used to back-fit fixed meshes into the parametric space of the geometry. If you do that, then you can regenerate the geometry in VSP and use the parametric space to move your fixed grids with it. This is how the pyGeo tool from the University of Michigan MDO Lab does it, and it works very well.
It's a modest amount of work (though a lot less if you use pyGeo directly), but I think it's well worth it. You'll get faster components and a much more stable optimization.

Related

Performing UMAP dimension reduction on inconsistently shaped data - python

First question, so I will do my best to be as clear as possible.
If I can provide UMAP with a distance function that also outputs a gradient or some other relevant information, can I apply UMAP to non-traditional looking data? (I.e., a data set with points of inconsistent dimension, data points that are non-uniformly sized matrices, etc.) The closest I have gotten to finding something that looks vaguely close to my question is in the documentation here (https://umap-learn.readthedocs.io/en/latest/embedding_space.html), but this seems to be sort of the opposite process, and as far as I can tell still supposes you are starting with tuple-based data of uniform dimension.
I'm aware that one way around this is just to calculate a full pairwise distance matrix ahead of time and give that to UMAP, but from what I understand of the way UMAP is coded, it only performs a subset of all possible distance calculations, and is thus much faster for the same amount of data than if I were to take the full pre-calculation route.
I am working in python3, but if there is an implementation of UMAP dimension reduction in some other environment that permits this, I would be willing to make a detour in my workflow to obtain this greater flexibility with incoming data types.
Thank you.
Algorithmically this is quite possible, but in practice most implementations do not support anything other than fixed-dimension vectors. If computing the all-pairs distances is not tractable, another option is to find a way to featurize or vectorize the data so that distance computations become easy. This is, of course, not always possible. The final option is to implement things yourself, but this requires handling the nearest-neighbour search, which is likely a non-trivial coding project in and of itself.
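For reference, the precomputed-distance route the question mentions looks roughly like this; my_distance is a made-up placeholder for whatever comparison makes sense for your variable-size objects:

import numpy as np
import umap

# items of inconsistent dimension
items = [np.random.rand(np.random.randint(3, 8)) for _ in range(200)]

def my_distance(a, b):
    # placeholder: any symmetric distance between variable-size objects
    n = min(len(a), len(b))
    return float(np.abs(a[:n] - b[:n]).sum() + abs(len(a) - len(b)))

n = len(items)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = my_distance(items[i], items[j])

embedding = umap.UMAP(metric="precomputed").fit_transform(D)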

How can I use implicit components to assemble a full system?

I'm working on a panel method code at the moment. To keep us from being bogged down in the minutia, I won't show the code - this is a question about overall program structure.
Currently, I solve my system by:
Generating the corresponding rows of the A matrix and b vector in an explicit component for each boundary condition
Assembling the partial outputs into the full A, b.
Solving the linear system, Ax=b, using a LinearSystemComp.
Here's a (crude) diagram of that structure (image not reproduced here).
I would prefer to be able to do this by just writing one implicit component to represent each boundary condition, vectorising the inputs/outputs to represent multiple rows/columns in the matrix, then allowing OpenMDAO to solve for x while driving the residual of each boundary condition to 0.
I've run into trouble trying to make this work, as each implicit component is underdetermined (more rows in the output vector x than component output residuals; that is, A1.x - b1 = R1 with length(R1) < length(x)). Essentially, I would like OpenMDAO to take each of these underdetermined implicit systems and find the value of x that solves the determined full system, without needing to do all of the assembling myself.
Something like this (image not reproduced here).
To try and make my goal clearer, I'll explain what I actually want from the perspective of my panel method. I'd like a component, let's say Influence, that computes the potential induced by a given panel at a given point in the panel's reference frame. I'd like to vectorise the input panels and points such that it can compute the influence coefficient of many panels on one point, of many points on one panel, or of many points on many panels.
I'd then like a system of implicit boundary conditions to find the correct value of mu to solve the system. These boundary conditions, again, should be able to be vectorised to compute the violation of the boundary condition at many points under the influence of many panels.
I get confused again at this part. Not every boundary condition will use the influence coefficient values; some, like the Kutta condition, are just enforced directly on the mu vector, e.g.
How would I implement this as an implicit component? It has no inputs, and doesn't output the full mu vector.
I appreciate that the question is rather long and rambling, but I'm pretty confused. To summarise:
How can I use OpenMDAO to solve multiple individually underdetermined (but combined, fully determined) implicit systems?
How can I use OpenMDAO to write an implicit component that takes no inputs and only uses a portion of the overall solution vector?
In the OpenMDAO docs there is a close analog to what you are trying to accomplish: the node-voltage analysis tutorial. In that code, a BalanceComp is used to create an implicit relationship similar to what you're describing. It's singular on its own, but as part of a larger group it forms a well-defined system.
You'll need to find a way to build similar components for your model. Each "row" in your equation will be associated with one state variable (one entry in your x vector).
In the simplest case, each row (or set of rows) would have one input that is the associated row of the A matrix, a second input that is ALL of the other values of x, and a final input that is the corresponding entry of the b vector (the right-hand-side vector). Then you could evaluate the residual for that specific row, which would be the following:
R['x_i'] = np.sum(A*x_full) - b
where x_full is the assembly of the full x-vector from the x_other input and the x_i state variable.
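As a rough sketch (not the tutorial code; the names and the convention that each component owns the first entry of its assembled vector are assumptions for illustration):

import numpy as np
import openmdao.api as om

class RowResidual(om.ImplicitComponent):
    """One row of A x = b as an implicit relationship (sketch)."""

    def initialize(self):
        self.options.declare('n', types=int)  # length of the full x vector

    def setup(self):
        n = self.options['n']
        self.add_input('A_row', shape=n)        # this component's row of A
        self.add_input('x_other', shape=n - 1)  # all other entries of x
        self.add_input('b_i')                   # this entry of b
        self.add_output('x_i')                  # the state this row owns
        self.declare_partials('x_i', ['A_row', 'x_other', 'b_i', 'x_i'],
                              method='fd')

    def apply_nonlinear(self, inputs, outputs, residuals):
        # assemble the full x from the owned state and the other entries;
        # the owned entry is placed first purely for illustration
        x_full = np.concatenate((outputs['x_i'], inputs['x_other']))
        residuals['x_i'] = np.dot(inputs['A_row'], x_full) - inputs['b_i']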
#########
Having proposed the above solution, I have to say that I don't think this is a particularly efficient way to build or solve this linear system. It is modular and might give you some flexibility, but you're jumping through a lot of hoops to avoid doing some index math and shoving everything into a matrix.
Granted, the derivatives might be a bit easier in your design, because the matrix assembly gets handled "magically" by the connections you have to create between the various row components. So maybe it's worth the trade... but I would say you might be better off trying a more traditional coding approach and using JAX or some other AD tool to make the derivatives easier.

Is it possible to specify starting values for each parameter (instead of bounds) for scipy's differential evolution?

Scipy's differential evolution implementation (https://docs.scipy.org/doc/scipy-0.17.0/reference/generated/scipy.optimize.differential_evolution.html) uses either a Latin hypercube or a random method for population initialization. Latin hypercube sampling tries to maximize coverage of the available parameter space; 'random' initializes the population randomly. I am wondering whether it is possible to specify starting values for each parameter instead of relying on these default algorithms.
For complex models (particularly those that are mathematically intractable and thus need to be simulated), I have observed that 2 independent runs of scipy's differential evolution will likely give different results after X iterations of the algorithm (I usually set X = 100 to avoid running the algorithm for several days). I think this is because (1) population initialization is not identical between 2 independent runs (due to the stochastic nature of the initialization methods 'random' and 'hypercube') and (2) there is noise in the model prediction. I am thus thinking of running ~10 independent runs of DE with 100 iterations each, picking the best-fitting parameter set across the 10 runs, and using this set as the starting values for a final run with more iterations (say 200). The problem is that I see no way to manually enter these starting values in scipy's DE implementation. I would be very grateful if somebody in the community could help me.
This has indeed been possible since version 1.1 of SciPy (note that you're referring to the dated 0.17.0 documentation). In particular, recent versions let you specify any array for init, instead of just 'hypercube' or 'random'. From the documentation, a possible value of init is:
array specifying the initial population. The array should have shape (M, len(x)), where len(x) is the number of parameters. init is clipped to bounds before use.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html
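For example (a sketch with a toy objective; row 0 of the initial population carries the hand-picked starting values):

import numpy as np
from scipy.optimize import differential_evolution

def objective(x):
    return np.sum(x**2)

bounds = [(-5, 5), (-5, 5), (-5, 5)]

# population of 15 candidate vectors, seeded with the best parameter
# set found in earlier short runs
init_pop = np.random.uniform(-5, 5, size=(15, 3))
init_pop[0] = [0.5, -1.2, 2.0]  # best set from the preliminary runs

result = differential_evolution(objective, bounds, init=init_pop, maxiter=200)
print(result.x, result.fun)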
If, for some reason, you're forced to use the old version, it's still possible to obtain what you want by just using the underlying DifferentialEvolutionSolver instead. There, you can either monkey patch one of the initializing functions, or just run them and manually override the population attribute post-initialization.

Why are finite differences of static input variables used to calculate the Jacobian? (OpenMDAO 2.4)

I have been using the SLSQP algorithm to run some MDO problems with ExplicitComponents only. Each component has a runtime of around 10 seconds and 60-100 input variables. Most of the input variables are static input variables that will remain constant during the entire optimization. The static input variables originate from an IndepVarComp. The ExplicitComponents are black boxes, so no information is available on the partials.
I noticed that when the Jacobian is calculated in compute_totals(), the components are linearized with respect to all of their input values. In compute_approximations(), a finite difference is calculated over all of the input values, including the static ones. So, my question is: why is a finite-difference calculation performed over these static input variables? Since the values remain constant, I'm not sure why this information would be useful.
Furthermore, if I understand correctly, the components are linearized to get the sub-Jacobians, which are then used to calculate the total Jacobian. However, is it possible to directly calculate a finite difference over the entire group instead of linearizing each component? With the runtimes of my components and the number of input variables, it will take a long time to perform the linearization of each component. However, the optimization problem has only 3 design variables. So, if I could perform three finite-difference calculations over the entire MDA to calculate the total Jacobian, the total runtime would decrease significantly.
To answer your questions in reverse order:
1) Can you FD over the entire model instead of each individual component? Yes!
You can set up FD over any group in your model, including the top-level group. Then the FD is taken across that group rather than across each component in it.
We call that computing a semi-total derivative: in general you can select a sub-group in your model, in which case the FD approximates a total derivative across that group, but that total derivative is still effectively a partial derivative for the overall model. Hence, semi-total derivative.
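In code, that looks something like this (a sketch in current OpenMDAO syntax; the step size is an arbitrary example):

import openmdao.api as om

prob = om.Problem()
model = prob.model
# ... add your components/groups to `model` here ...

# FD across the whole group (semi-total derivatives) instead of
# FD-ing each component's partials individually
model.approx_totals(method='fd', step=1e-6)
prob.setup()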
2) Why is a finite difference calculation performed over these static input variables?
In theory, you're correct that you don't really need partial derivatives with respect to inputs that can't change. As of OpenMDAO 2.4, we don't handle that situation automatically, and we don't have plans to add it in the near future. However, the framework only takes FD across the partials you tell it to. It sounds like you are declaring your partials like this:
self.declare_partials(of=['*'], wrt=['*'], method='fd')
So you're specifically asking the framework to compute all of those partials. Instead, you could list in the wrt argument only the inputs you know are actually changing. Of course, this is mathematically incorrect, because there is a derivative wrt the static inputs; if someone later connects something to those inputs and tries an optimization, they would get a wrong answer. But as long as you're careful, you can specifically ask for only the partials you want from any component and simply leave the non-changing ones as effectively 0.
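For example, with hypothetical input names, you would declare only the changing inputs:

# only 'thickness' and 'twist' (hypothetical names) actually change
# during the optimization, so only those partials are approximated
self.declare_partials(of=['*'], wrt=['thickness', 'twist'], method='fd')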

FaceVariables in FiPy

I am modeling electrical current through various structures with the help of FiPy. To do so, I solve Laplace's equation for the electrical potential. Then, I use Ohm's law to derive the field and with the help of the conductivity, I obtain the current density.
FiPy stores the potential as a cell-centered variable and its gradient as a face-centered variable which makes sense to me. I have two questions concerning face-centered variables:
If I have a two- or three-dimensional problem, FiPy computes the gradient in all directions (ddx, ddy, ddz). The gradient is a FaceVariable, which is always defined on the face between two cell centers. For a structured (quadrilateral) grid, only one of the derivatives should be greater than zero, since for any face the positions of the two cell centers involved should differ in only one coordinate. In my simulations, however, it frequently occurs that more than one of the derivatives (ddx, ddy, ddz) is greater than zero, even for a structured grid.
The manual gives the following explanation for the FaceGrad-Method:
Return gradient(phi) as a rank-1 FaceVariable using differencing for the normal direction (second-order gradient).
I do not see, how this differs from my understanding pointed out above.
What makes it even more problematic: whenever "too many" derivatives are included, current does not seem to be conserved, even in the simplest structures I model...
Is there a clever way to access the data stored in the face-centered variable? Let's assume I would want to compute the electrical current going through my modeled structure.
As of right now, I save the data stored in the FaceVariable as a TSV file. This yields a table with (x, y, z) positions and (ddx, ddy, ddz) values. I read the file and load the data into arrays to use in Python. This seems counter-intuitive and really inconvenient. It would be much better to be able to access the FaceVariable along certain planes or at certain points.
The documentation does not make it clear, but .faceGrad includes tangential components which account for more than just the neighboring cell center values.
Please see this Jupyter notebook for explicit expressions for the different types of gradients that FiPy can calculate (yes, this stuff should go into the documentation: #560).
The value is accessible with myFaceVar.value and the coordinates with myFaceVar.mesh.faceCenters. FiPy is designed around unstructured meshes and so taking arbitrary slices is not trivial. CellVariable objects support interpolation by calling myCellVar((xs, ys, zs)), but FaceVariable objects do not. See this discussion.
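A minimal sketch of pulling values out of a FaceVariable, assuming a uniform 2D grid and constant conductivity (the names and sizes are placeholders):

from fipy import CellVariable, Grid2D

nx = ny = 20
dx = dy = 0.05
mesh = Grid2D(nx=nx, ny=ny, dx=dx, dy=dy)
phi = CellVariable(mesh=mesh, name="potential")
# ... set boundary conditions and solve Laplace's equation for phi ...

grad = phi.faceGrad            # rank-1 FaceVariable
values = grad.value            # numpy array of shape (dim, nFaces)
X, Y = mesh.faceCenters        # coordinates of every face center

# e.g. total current through the right boundary, with J = -sigma * grad(phi)
sigma = 1.0                          # assumed constant conductivity
mask = mesh.facesRight.value         # boolean mask selecting those faces
current = (-sigma * values[0][mask] * dy).sum()  # dy = face length in 2D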
