Tool for interactive exploration of function parameters - python

Context: I am evaluating libraries for stereo correspondence. They almost universally fail to work at all until you get a handful of algorithm-dependent parameters set correctly.
Is there any sort of well-generalized tool that makes the process of manually tuning tens of parameters of a badly documented C++ function, until it works, less painful?
I am looking for something like a combination of SWIG and the dynamic_reconfigure infrastructure from ROS: you point it at a pure C++ function, and it generates a simple GUI with sliders, check-boxes, etc. for the values of the inputs, then calls the function over and over so you can tune the parameters interactively.

It sounds like ROS's dynamic_reconfigure with the rqt_reconfigure GUI might be close to what you're looking for. Once you specify the parameters you want to change, the GUI will generate sliders/toggles/fields/etc. to change the parameters on the fly:
You still need to explicitly add the mapping from a ROS param to the algorithm's parameter (and update the algorithm in the dynamic_reconfigure callback), but having your parameters stored in the ROS parameter server can be beneficial in the long run:
parameters can be put under version control very easily (stored as a YAML file)
you can save all parameters once you find a good solution (rosparam dump)
you can have different 'versions' of parameters for different applications
other nodes can read the parameters if necessary
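The general idea the question asks for (point a tool at a function and get widgets for its parameters) can be sketched in plain Python with `inspect`. This is only an illustration, not ROS code; the `stereo_match` function and the range table are made-up stand-ins:

```python
import inspect

def widget_specs(func, ranges):
    """Derive a GUI widget spec for each parameter of `func`.

    `ranges` maps parameter names to (min, max). Boolean defaults
    become check-boxes, everything else becomes a slider seeded
    with the default value from the signature.
    """
    specs = []
    for name, param in inspect.signature(func).parameters.items():
        default = (param.default
                   if param.default is not inspect.Parameter.empty else None)
        if isinstance(default, bool):
            specs.append({"name": name, "widget": "checkbox", "value": default})
        else:
            lo, hi = ranges.get(name, (0, 100))
            specs.append({"name": name, "widget": "slider",
                          "min": lo, "max": hi, "value": default})
    return specs

# Hypothetical stereo-matcher-like function with tunable parameters.
def stereo_match(num_disparities=64, block_size=21, full_dp=False):
    pass

specs = widget_specs(stereo_match, {"num_disparities": (16, 256),
                                    "block_size": (5, 51)})
```

A real tool would hand each spec to a GUI toolkit (one slider or check-box per entry) and re-run the function on every widget change, which is essentially what rqt_reconfigure does with the declared ROS parameters.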

Related

Parametrizing Ansys Fluent with python in linux

I'm trying to do a parametric study in Ansys Fluent through Python.
The idea is to calculate some parameters before feeding them to Fluent as boundary conditions and initial conditions.
I have searched far and wide but could not find any pertinent information... maybe I'm not searching with the right keywords.
Or is there an equivalent of the ANSYS Parametric Design Language (APDL) for Fluent? I can only find information for Mechanical.
Could anyone point me in the right direction, or to somewhere to look for more information?
P.S.
I could not find any information on CFD Online, the Ansys site, or here on Stack Overflow.
After a long search around the internet I found out how to do it.
There are two main ways of doing it:
Via Ansys Workbench
Directly into Ansys Fluent
Ansys Workbench
This works directly with scripting. I did not use this method myself, so this is what I understood without trying or testing it.
You can run the Workbench in batch mode with the following bash command:
runwb2 -B -R "path/script.py"
where -B stands for batch mode and -R executes the specified script.
An example and explanations can be found here: Scripted CFD simulations and postprocessing in Fluent and ParaVIEW
Ansys Fluent
TL;DR: Use journal files and Python to modify the journals, then run Fluent through Python.
First, the simulation must be prepared with the Fluent GUI. You need to fix all non-variable parameters as well as define monitors. You save all that information into a case file.
Once that is done, you must create a template with the commands to initialize the calculations. The easiest way is to search the net and try everything in the TUI in Fluent. Once everything has been validated, you create a template (the easiest way is to use jinja2).
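As a sketch of the templating step, here is a minimal journal template filled in with Python's standard-library string.Template in place of jinja2. The TUI commands and parameter names below are hypothetical placeholders, not a validated journal:

```python
from string import Template

# Hypothetical TUI journal template; the commands, boundary name, and
# parameters (velocity, iterations, tag) are illustrative placeholders.
JOURNAL = Template("""\
/file/read-case mycase.cas
/define/boundary-conditions/velocity-inlet inlet yes yes no $velocity no 0
/solve/iterate $iterations
/file/write-data results_$tag.dat
exit
""")

# Fill in one design point and write the journal file Fluent will read.
journal_text = JOURNAL.substitute(velocity=12.5, iterations=500, tag="v12p5")
with open("run_v12p5.jou", "w") as f:
    f.write(journal_text)
```

Looping over a list of parameter values and generating one journal per design point then gives you the batch of runs for the study.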
Finally, a simple loop over the parameters to test, launching Fluent from Python with the following command, can do magic:
# Running Fluent in batch; shell=True is needed for the >& redirection
import subprocess
bashCommand = "fluent 3ddp -i " + journal_output + " >& outputfile"
process = subprocess.Popen(bashCommand, shell=True)
output, error = process.communicate()
It works really well, and once you get used to Fluent commands it is quite easy!
Another way is to create parameters in Fluent for whatever you want to vary. For example, I wanted to vary the boundary conditions of an aerofoil simulation to change the angle of attack: the inlet velocity components need to be set to different values to give different angles of attack.
First make sure your case runs and gives sensible results with the boundaries set as plain numbers. Then the two velocity components in the boundary conditions can be set to parameters from the drop-down arrow on the right; choose New Input Parameter for each and give them sensible names.
Here my two velocity components are Ux and Uy. Then go to the Parametric tab, click Add Design Point a few times, and export to a CSV file.
Then you can simply copy in the range of points you want to run, overwriting values in the CSV file and adding extra rows as needed.
Then reimport the CSV file into Fluent. To get a useful output (here I wanted the drag/lift coefficients), create a report item in the Report Definitions under Solution, and be sure to tick the option that makes it appear on the Parametric Study tab.
Then, provided the individual case ran, the Update All button should give the results of the parameter study. It's worth noting that, in my case at least, the reported drag and lift forces assume X=1 Y=0 and X=0 Y=1 respectively, so they may need some rotation based on the inflow angle...

ABAQUS python scripting inconsistencies when selecting regions

This may sound more like a rant to some extent, but I would also like your opinion on how to deal with the inconsistencies of Python scripting in Abaqus.
Here is my example: in my rootAssembly (ra) I have three instances called a, b, c. In the script below I assign a global seed, then mesh controls and element types, and finally generate the mesh:
ra.seedPartInstance(regions=(a, b, c), size=1.0)
ra.setMeshControls(elemShape=QUAD,
                   regions=(a.faces + b.faces + c.faces),
                   technique=STRUCTURED)
ra.setElementType(elemTypes=eltyp,
                  regions=(a.faces, b.faces, c.faces))
ra.generateMesh(regions=(a, b, c))
As you can see, ABAQUS requires you to define the same region in several different modes.
Even though the argument is called "regions", ABAQUS either asks for a Set, or a Vertex, or a GeomSequence.
How do you deal with this? Scripting feels a lot like trial and error, as there is no way to know in advance what is expected.
Any suggestions?
Yes, there is clearly "a way to know in advance what is expected" - the docs. These spell out exactly what arguments are allowed.
But seriously - I see no inconsistency in your example. In practice, the reuse of the argument regions makes complete sense when you consider the context for what each of the functions actually do. Consider how the word "region" is a useful conceptual framework that can be adapted to easily allow the user to specify the necessary info for a variety of different tasks.
Now consider the complexity of the underlying system that the Python API exposes, and the variety of tasks that different users want to control and do with that underlying system. I doubt it would be simpler if the args were named something like seq_of_geomCells_geomFaces_or_geomSets. Or even worse, if there were a different argument for each allowable model entity that the function was designed to handle - that would be a nightmare. In this respect, the reuse of the keyword regions as a logical conceptual framework makes complete sense.
OK, I now read from the documentation of the three commands used above:
seedPartInstance(...)
regions: A sequence of PartInstance objects specifying the part instances to seed.
setMeshControls(...)
regions: A sequence of Face or Cell regions specifying the regions for which to set the mesh control parameters.
setElementType(...)
regions: A sequence of Geometry regions or MeshElement objects, or a Set object containing either geometry regions or elements, specifying the regions to which element types are to be assigned.
OK, I get the difference between PartInstances and faces, but it is still not entirely clear why one is appended (using commas) and the other is added (using +), since they both ask for a sequence. And at this point, how does setElementType even work when passed face objects?
I will take some more time to learn ABAQUS and think it through; hopefully I can truly understand these differences.
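For what it's worth, the comma-versus-+ distinction can be seen with ordinary Python sequences: commas build a tuple of sequences (several separate regions), while + concatenates them into one flat sequence (a single combined region). Plain lists stand in for the Abaqus GeomSequences here:

```python
# Hypothetical stand-ins for a.faces, b.faces, c.faces:
# each is a sequence of face objects.
a_faces = ["a1", "a2"]
b_faces = ["b1"]
c_faces = ["c1", "c2"]

# Commas build a tuple OF sequences: three separate regions,
# as in setElementType(regions=(a.faces, b.faces, c.faces)).
as_tuple = (a_faces, b_faces, c_faces)

# + concatenates into ONE flat sequence: a single combined region,
# as in setMeshControls(regions=(a.faces + b.faces + c.faces)).
as_concat = a_faces + b_faces + c_faces

assert len(as_tuple) == 3   # three nested sequences
assert len(as_concat) == 5  # five faces in one sequence
```

So both calls receive "a sequence", but one is a sequence of sequences and the other is a single merged sequence; which shape a given Abaqus command accepts is exactly what its documentation entry for `regions` spells out.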

Modelica Parameter studies with python

I want to run parameter studies in different Modelica building libraries (Buildings, IDEAS) with Python, for example changing the infiltration rate.
I tried simulateModel and simulateExtendedModel(..., "zone.n50", [value]).
My questions: Why is it not possible to translate the model and then change the parameter? I get: Warning: Setting zone.n50 has no effect in model. After translation you can only set literal start-values and non-evaluated parameters.
It is also not possible to run simulateExtendedModel. When I go to the command line in Dymola and query zone.n50, I get the actual value (the one I defined in Python), but in the result file (and the plotted variable) it is always the default n50 value. So my question: how can I change values before running (and translating?) the simulation?
The value for the parameter is also not visible in the variable browser.
Kind regards
It might be a structural parameter; these are evaluated as well. It should work if you explicitly set Evaluate=False for the parameter that you want to study.
Is it not visible in the variable browser or is it just greyed out and constant? If it is not visible at all you should check if it is protected.
Some parameters cannot be changed after compilation, even with Evaluate=False. This is the case for parameters that influence the structure of the model, for example parameters that influence a discretization scheme and therefore influence the number of equations.
Changing such parameters requires recompiling the model. You can still do this in a parametric study though; I think you can use ModelicaRes to achieve this (http://kdavies4.github.io/ModelicaRes/modelicares.exps.html).

Object-oriented scientific data processing, how to cleverly fit data, analysis and visualization in objects?

As a biology undergrad I'm often writing Python software to do some data analysis. The general structure is always the same:
there is some data to load, some analysis to perform on it (statistics, clustering...), and then the results to visualize.
Sometimes, for the same experiment, the data can come in different formats, there can be different ways to analyse it, and different visualizations are possible, which may or may not depend on the analysis performed.
I'm struggling to find a generic, "pythonic", object-oriented way to make this clear and easily extensible. It should be easy to add a new type of action or to make slight variations of existing ones, so I'm fairly convinced that I should do this with OOP.
I've already written a Data object with methods to load the experimental data. I plan to create inherited classes if I have multiple data sources, in order to override the load function.
After that... I'm not sure. Should I write an abstract Analysis class with a child class for each type of analysis (using their attributes to store the results), do the same for Visualization, and have a general Experiment object holding the Data instance and the multiple Analysis and Visualization instances? Or should the visualizations be functions that take an Analysis and/or Data object as parameters in order to construct the plots? Is there a more efficient way? Am I missing something?
Your general idea would work, here are some more details that will hopefully help you to proceed:
Create an abstract Data class, with some generic methods like load, save, print etc.
Create concrete subclasses for each specific form of data you are interested in. This might be task-specific (e.g. data for natural language processing) or form-specific (data given as a matrix, where each row corresponds to a different observation)
As you said, create an abstract Analysis class.
Create concrete subclasses for each form of analysis. Each concrete subclass should override a method process which accepts a specific form of Data and returns a new instance of Data with the results (if you think the form of the results would be different of that of the input data, use a different class Result)
Create a Visualization class hierarchy. Each concrete subclass should override a method visualize which accepts a specific instance of Data (or Result if you use a different class) and returns some graph of some form.
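A minimal sketch of that hierarchy, with toy class names (CsvData, MeanAnalysis, TextVisualization) and trivial bodies invented purely for illustration, could look like this:

```python
from abc import ABC, abstractmethod

class Data(ABC):
    """Holds raw experimental data; subclasses implement loading."""
    def __init__(self, values=None):
        self.values = values

    @abstractmethod
    def load(self, source):
        ...

class CsvData(Data):
    """Toy concrete source: a string of comma-separated numbers."""
    def load(self, source):
        self.values = [float(x) for x in source.split(",")]
        return self

class Result:
    """Container for analysis output, separate from raw Data."""
    def __init__(self, values):
        self.values = values

class Analysis(ABC):
    @abstractmethod
    def process(self, data):
        ...

class MeanAnalysis(Analysis):
    """Toy analysis: reduce the data to its mean."""
    def process(self, data):
        return Result([sum(data.values) / len(data.values)])

class Visualization(ABC):
    @abstractmethod
    def visualize(self, result):
        ...

class TextVisualization(Visualization):
    """Toy renderer: returns a string instead of a real plot."""
    def visualize(self, result):
        return "values: " + ", ".join(f"{v:.2f}" for v in result.values)

data = CsvData().load("1,2,3,4")
result = MeanAnalysis().process(data)
print(TextVisualization().visualize(result))  # values: 2.50
```

New data formats, analyses, or plot types then become one new subclass each, which is the extensibility the question asks for.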
I do have a warning: Python is abstract, powerful and high-level enough that you don't generally need to create your own OO design -- it is always possible to do what you want with minimal code using numpy, scipy, and matplotlib, so before you start doing the extra coding, be sure you need it :)
It has been a while since you asked your question, but this might be interesting.
I created and actively develop a Python library to do exactly this (albeit with a slightly broader scope). It is designed so that you can fully customize your data processing, while still having some basic tools (including for plotting).
The library is called Experiment NoteBook (enb), and is available in github (https://github.com/miguelinux314/experiment-notebook) and via pip (e.g., pip install enb).
I recommend any interested reader to take a look at the tutorial-like documentation (https://miguelinux314.github.io/experiment-notebook) to get an idea of the intended workflow.

Class inheritance order for a simulation program

This is a somewhat basic question about the correct order of class inheritance.
Basically I'm trying to write a numerical simulation to solve a physical model; the details are not important (I happen to be writing this in Python). It is a well-known algorithm solved by iterating over a volume of space.
The classes that I think I need are:
Setup: A class that defines all of the simulation parameters, like volume size, and has methods for checking for correct parameter type, calculating derived parameters etc.
Solver: Contains the actual algorithm for solving
Output: Contains handles for all the plot output and has access to save file etc.
I also need a run method which can run the solver and periodically (with periods defined in Setup) run some of the output functions.
In a high-quality program, which class would inherit from which? (My guess: Output inherits from Solver, which inherits from Setup.)
Where does the run method belong? Maybe there should be some extra base class like Interface that the user interacts with and includes the run method?
There is a principle that encourages composition over inheritance (http://en.wikipedia.org/wiki/Composition_over_inheritance), so I would say that if you really don't need inheritance, don't use it (the parts can be independent objects, or functions, which in Python are objects too).
If you model this with objects, run() should be in Solver. Recall that the concept of an interface is not necessary in Python as it is in other languages, so you can either use objects, or functions with the algorithms you need.
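A minimal composition sketch of the three parts (all names and the trivial update rule are made up for illustration) could look like:

```python
class Setup:
    """Simulation parameters; hypothetical names."""
    def __init__(self, steps=10, output_every=5):
        self.steps = steps
        self.output_every = output_every

class Solver:
    """Owns the iteration state and one step of the algorithm."""
    def __init__(self, setup):
        self.setup = setup
        self.state = 0.0

    def step(self):
        self.state += 1.0  # stand-in for the real update rule

class Output:
    """Records snapshots; a real version would plot or save files."""
    def __init__(self):
        self.snapshots = []

    def record(self, state):
        self.snapshots.append(state)

class Simulation:
    """Composes the parts instead of inheriting, and owns run()."""
    def __init__(self, setup):
        self.setup = setup
        self.solver = Solver(setup)
        self.output = Output()

    def run(self):
        for i in range(1, self.setup.steps + 1):
            self.solver.step()
            if i % self.setup.output_every == 0:
                self.output.record(self.solver.state)

sim = Simulation(Setup(steps=10, output_every=5))
sim.run()
print(sim.output.snapshots)  # [5.0, 10.0]
```

Note that no class inherits from another: Simulation simply holds a Setup, a Solver, and an Output, so each part can be swapped or tested independently.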
Are you coming from a Java background by any chance?
First off, you've given no indication that any of your classes should inherit from another. For that matter, you probably don't need as many classes as you think you do.
Solver #Contains the actual algorithm for solving
If it's only one function you might as well just leave it as a free function.
Output #Contains handles for all the plot output and has access to save file etc.
If the functions don't have shared state, it could just as easily be a module.
As for the run method, just stick it wherever it is most convenient. The nice thing about Python is that you can start prototyping without any classes, and just refactor into a class whenever you find yourself passing the same set of data around a lot.
