Modelica parameter studies with Python

I want to run parameter studies in different Modelica building libraries (Buildings, IDEAS) with Python, for example changing the infiltration rate.
I tried: simulateModel and simulateExtendedModel(..."zone.n50", [value])
My questions: Why is it not possible to translate the model and then change the parameter? Dymola warns: "Warning: Setting zone.n50 has no effect in model. After translation you can only set literal start-values and non-evaluated parameters."
Running simulateExtendedModel does not work either. When I query zone.n50 on the Dymola command line, I get the actual value (the one I defined in Python), but in the result file (and the plotted variable) it is always the default n50 value. So my question: how can I change values before running (and translating?) the simulation?
The value for the parameter is also not visible in the variable browser.
Kind regards

It might be a structural parameter; those are evaluated during translation as well. It should work if you explicitly set Evaluate=false for the parameter that you want to study.
Is it not visible in the variable browser at all, or is it just greyed out and constant? If it is not visible at all, you should check whether it is protected.
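If the parameter is annotated Evaluate=false and is not protected, changing it per run without retranslation should then work from Python. A minimal sketch of the corresponding Dymola-Python call (model and parameter names are placeholders, and the exact return shape of simulateExtendedModel may vary between Dymola versions):
from dymola.dymola_interface import DymolaInterface

dymola = DymolaInterface()
# Read the value back via finalNames to verify it was actually applied.
result = dymola.simulateExtendedModel(
    "MyLib.MyBuilding",
    stopTime=86400,
    initialNames=["zone.n50"],
    initialValues=[2.0],
    finalNames=["zone.n50"])
print(result)
dymola.close()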

Some parameters cannot be changed after compilation, even with Evaluate=false. This is the case for parameters that influence the structure of the model, for example parameters that affect a discretization scheme and therefore the number of equations.
Changing such parameters requires recompiling the model. You can still do this in a parametric study, though; I think you can use ModelicaRes to achieve this (http://kdavies4.github.io/ModelicaRes/modelicares.exps.html).
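If ModelicaRes is not an option, a plain loop with the Dymola-Python interface also works: passing each value as a modifier in the model string forces a fresh translation per run, so even evaluated/structural parameters take effect. A sketch (library path and names are placeholders):
from dymola.dymola_interface import DymolaInterface

dymola = DymolaInterface()
try:
    dymola.openModel("path/to/MyLib/package.mo")  # placeholder path
    for n50 in [0.5, 1.0, 2.0, 4.0]:
        # The modifier forces retranslation, so the new value is compiled in.
        ok = dymola.simulateModel(
            "MyLib.MyBuilding(zone(n50=%s))" % n50,
            stopTime=86400,
            resultFile="run_n50_%s" % str(n50).replace(".", "_"))
        if not ok:
            print(dymola.getLastErrorLog())
finally:
    dymola.close()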

What is OpenMDAO's reasoning for returning 'None' for self-cancelling units?

I've set up an ExplicitComponent model; one of my outputs is a ratio:
self.add_output('diameter_over_thickness', units='mm/mm')
When the class is initialized, OpenMDAO assigns None as the units of this output, as the .list_outputs() output shows.
Why not unitless? I feel it would be a more effective representation.
Yes, I could just assign it to be unitless, but I do not want to do this. It becomes problematic when you're using OpenMDAO's get_val() function with a dictionary of predefined units to pull from.
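A minimal reproduction of what I'm seeing (the component and values are just examples):
import openmdao.api as om

class Tube(om.ExplicitComponent):
    def setup(self):
        self.add_input('diameter', val=100.0, units='mm')
        self.add_input('thickness', val=5.0, units='mm')
        # 'mm/mm' cancels out and ends up stored as units=None
        self.add_output('diameter_over_thickness', units='mm/mm')

    def compute(self, inputs, outputs):
        outputs['diameter_over_thickness'] = inputs['diameter'] / inputs['thickness']

prob = om.Problem()
prob.model.add_subsystem('tube', Tube())
prob.setup()
prob.run_model()
prob.model.list_outputs(units=True)  # reports units=None, not 'unitless'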
As of OpenMDAO V3.20, I would say this is more of a legacy behavior than an active choice. The unitless unit didn't exist early on, and we thought that variables with no unit defined (which defaulted to None) should be compatible with variables that were defined as unitless.
There is a connection rule that allows variables with units to be connected to other variables that have None. One of the key motivations for this is that ExecComp I/O is often defined without units, and it is very convenient to be able to connect dimensional quantities to them.
However, that same rule does not apply to unitless variables. In that case, we understand that the user has actively said there are no units, and hence a connection to a dimensional quantity should not be allowed.
So there are some semantic differences between None and unitless. However, I can see the argument that mm/mm should be treated as unitless and not None. A change like this is backwards incompatible, though, and would require a POEM to be written and approved before an implementation change could be made. I suspect the dev team would welcome a POEM on this subject.
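To illustrate the connection rule (a sketch with made-up names, not official documentation):
import openmdao.api as om

prob = om.Problem()
ivc = om.IndepVarComp()
ivc.add_output('length', val=3.0, units='m')
prob.model.add_subsystem('src', ivc)
# ExecComp I/O declared without units defaults to units=None ...
prob.model.add_subsystem('calc', om.ExecComp('y = 2.0 * x'))
# ... so connecting a dimensional output to it is allowed:
prob.model.connect('src.length', 'calc.x')
prob.setup()
prob.run_model()
print(prob.get_val('calc.y'))  # 6.0; an input declared 'unitless' would reject this connection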

Is it possible to rename the independent variable in built-in lmfit fitting models?

I am using lmfit to do small-angle X-ray scattering pattern fitting. To this end, I use the Model class to wrap my functions and to make composite models, which works well. However, I wrote all my functions with 'q' as the independent variable (the convention in the discipline). Now I want to combine some of those q-functions with some of the built-in models. It clashes, because the independent variable for those is 'x'. I tried something like modelBGND = lmfit.models.ConstantModel(independent_vars=['q']), but it gives the error:
ValueError: Invalid independent variable name ('q') for function constant
Of course this can be solved by either rewriting the built-in function in terms of 'q', or by recasting all my previously written functions in terms of 'x'. I am just curious whether there is a more straightforward approach.
Sorry, I don't think that is possible.
I think you will have to rewrite the functions to use q instead of x. That is, lmfit.Model uses function inspection to determine the names of the function arguments, and most of the built-in models really do require the first positional argument to be named x.
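That rewrite can be as small as a thin wrapper whose first argument is named q, from which you then build a Model. A sketch (gaussian_q is a made-up name):
import lmfit
from lmfit.lineshapes import gaussian  # built-in lineshape, written in terms of x

def gaussian_q(q, amplitude=1.0, center=0.0, sigma=1.0):
    # Same math as the built-in; only the independent variable is renamed.
    return gaussian(q, amplitude, center, sigma)

peak = lmfit.Model(gaussian_q)
print(peak.independent_vars)  # ['q'], so it composes with the other q-based models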

How to fix the Absorbing Effect Error when trying to include fixed effects in the PanelOLS function in Python's linearmodels?

I am running a fixed-effects panel regression using the PanelOLS() function in linearmodels 4.5.
When I try to add entity_effects=True and time_effects=True to the model estimation, it returns an AbsorbingEffectError:
The model cannot be estimated. The included effects have fully absorbed
one or more of the variables. This occurs when one or more of the dependent
variable is perfectly explained using the effects included in the model.
How can I fix the 'AbsorbingEffectError'?
import statsmodels.api as sm
from linearmodels.panel import PanelOLS

panel = panel.set_index(['firm', 'Date'])
exog_vars = panel[['ex_mkt', 'MV', 'ROA', 'BTM', 'leverage', '2nd']]
exog = sm.add_constant(exog_vars)
y = panel[['ex_firm']]
model = PanelOLS(y, exog_vars, entity_effects=True).fit(cov_type='clustered', cluster_entity=True)
I am following the exact same steps as the Fixed Effects example in the documentation: https://bashtage.github.io/linearmodels/doc/panel/examples/examples.html#
I think G.mc and TiTo have a good point, and I had the same issue today.
It appears that if you have a 'constant' variable (i.e., one with no variation), this problem appears in Python.
I tried the same regression in Stata, and it seems to work there even though constants are included.
By constant I mean the usual 'c' introduced in the analysis, as well as any other variable that is in fact static over the time period.
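A sketch of how to diagnose and work around this in linearmodels (column names as in the question; drop_absorbed needs a reasonably recent linearmodels version):
from linearmodels.panel import PanelOLS

exog_vars = panel[['ex_mkt', 'MV', 'ROA', 'BTM', 'leverage', '2nd']]
# Diagnosis: columns that never vary within a firm are fully absorbed
# by the entity effects.
invariant = exog_vars.groupby(level='firm').nunique().eq(1).all()
print(invariant[invariant])

# Workaround: drop those columns yourself, or let PanelOLS drop them:
model = PanelOLS(panel[['ex_firm']], exog_vars,
                 entity_effects=True,
                 drop_absorbed=True).fit(cov_type='clustered',
                                         cluster_entity=True)
print(model)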

Meaning of the values parameter of TensorFlow's variable_scope

I am currently reading the source code of the slim library, which is based on TensorFlow, and it uses the values argument of the variable_scope method a lot, like here.
From the API page I can see:
This context manager validates that the (optional) values are from the same graph, ensures that graph is the default graph, and pushes a name scope and a variable scope.
My question is: are the variables in values only checked to make sure they come from the same graph? What are the use cases for this, and why would someone need that?
The variable_scope parameter helps ensure uniqueness of variables and reuse of variables where desired.
Yes, if you create two or more different computation graphs, they won't necessarily share the same variable scope; however, there are ways to get scopes shared across graphs, so the option is there.
The primary use case for variable scope is RNNs, where many of the weights are tied and reused; that's one reason someone would need it. The other main reason is to ensure that you reuse the same variables when you explicitly mean to, and not by accident. (In distributed settings this can become a concern.)
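A small TF 1.x sketch of both points (in TF 2 this API lives under tf.compat.v1):
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [None, 4])
    # values=[x] lets the scope check that the listed tensors belong to
    # one graph and makes that graph the default inside the block.
    with tf.variable_scope('dense', values=[x]):
        w = tf.get_variable('w', shape=[4, 8])
        y1 = tf.matmul(x, w)
    # reuse=True hands back the *same* 'dense/w' instead of a new copy,
    # which is how tied weights (e.g. in RNNs) are shared on purpose.
    with tf.variable_scope('dense', values=[x], reuse=True):
        w_again = tf.get_variable('w', shape=[4, 8])
        y2 = tf.matmul(x, w_again)
    print(w is w_again)  # True: one variable, used twice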

Tool for interactive exploration of function parameters

Context: I am evaluating libraries for stereo correspondence. They almost universally fail to work at all until you get a handful of algorithm-dependent parameters set correctly.
Is there any sort of well-generalized tool that makes it less painful to manually tune tens of parameters of a badly documented C++ function until it works?
I am looking for something like a combination of SWIG and the dynamic-reconfigure infrastructure from ROS: you point it at a pure C++ function, and it generates a simple GUI with sliders, check-boxes, etc. for the values of the inputs, and calls the function over and over so you can tune the parameters interactively.
It sounds like ROS's dynamic_reconfigure with the rqt_reconfigure GUI might be close to what you're looking for. Once you specify the parameters you want to change, the GUI generates sliders/toggles/fields/etc. to change the parameters on the fly.
You still need to explicitly add the mapping from a ROS param to the algorithm's parameter and update the algorithm in the dynamic_reconfigure callback (a small sketch follows the list below), but having your parameters stored in the ROS parameter server can be beneficial in the long run:
parameters can be put under version control very easily (stored as a YAML file)
you can save all parameters once you find a good solution (rosparam dump)
you can have different 'versions' of parameters for different applications.
other nodes can read the parameters if necessary
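For reference, the sketch mentioned above: the per-parameter mapping is small. The package/node names here are made up, and the .cfg file and the node live in separate files:
#!/usr/bin/env python
# stereo_tuner.cfg -- declares the tunable parameters; rqt_reconfigure
# renders a slider per numeric range and a checkbox per bool.
from dynamic_reconfigure.parameter_generator_catkin import ParameterGenerator, int_t, double_t, bool_t

gen = ParameterGenerator()
gen.add("num_disparities", int_t, 0, "Disparity search range", 64, 16, 256)
gen.add("block_size", int_t, 0, "Matching block size (odd)", 15, 5, 51)
gen.add("uniqueness_ratio", double_t, 0, "Uniqueness margin (%)", 10.0, 0.0, 100.0)
gen.add("speckle_filter", bool_t, 0, "Enable speckle filtering", True)
exit(gen.generate("stereo_tuner", "stereo_tuner_node", "StereoTuner"))

# node.py -- receives updated values and pushes them into the algorithm.
import rospy
from dynamic_reconfigure.server import Server
from stereo_tuner.cfg import StereoTunerConfig

def on_reconfigure(config, level):
    # Update the stereo matcher's parameters here, then re-run it.
    return config

rospy.init_node("stereo_tuner_node")
Server(StereoTunerConfig, on_reconfigure)
rospy.spin()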
