I am trying to test saving and restoring states with a simple FMU.
For example, in OpenModelica:
model modelicatest
  input Real In1;
  output Real Out1(start=0, fixed=true);
equation
  der(Out1) = In1;
end modelicatest;
And similarly for Simulink:
I am using FMPy to simulate the generated FMUs.
But for the FMU generated by OpenModelica v1.14.1, I get the following error when I call getFMUState from FMPy:
Exception: fmi2GetFMUstate failed with status 3
For the FMU generated by Simulink (2019b) using the built-in exporter, the FMU state (i.e. the output value) does not reset when I call setFMUState.
Just wondering whether these functions are supported for OpenModelica and Simulink generated FMUs, or is it an FMPy issue?
With respect to fmi2GetFMUstate/fmi2SetFMUstate, the FMI Specification, section 2.1.8. states:
These functions are only supported by the FMU, if the optional capability flag <fmiModelDescription> <ModelExchange / CoSimulation canGetAndSetFMUstate in = "true"> in the XML file is explicitly set to true (see sections 3.3.1 and 4.3.1).
You can unzip the FMU file and take a look at the modelDescription.xml file to find out whether the flag is set: if it is false or not set at all, the get and set functions are not supported.
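For example, a quick check using only the Python standard library might look like this (the FMU filename here is just a placeholder):

import zipfile
import xml.etree.ElementTree as ET

# An FMU is just a zip archive containing modelDescription.xml
with zipfile.ZipFile("modelicatest.fmu") as fmu:
    root = ET.fromstring(fmu.read("modelDescription.xml"))

# The capability flag sits on the <ModelExchange> and/or <CoSimulation> element
for tag in ("ModelExchange", "CoSimulation"):
    element = root.find(tag)
    if element is not None:
        print(tag, "canGetAndSetFMUstate =", element.get("canGetAndSetFMUstate", "not set"))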
I need to run a Kedro (v0.17.4) pipeline with a node that is supposed to process data with different logic depending on the load version of its input.
As a simple and crude example, assume there is a catalog.yml file with this entry:
test_data_set:
  type: pandas.CSVDataSet
  filepath: data/01_raw/test.csv
  versioned: true
and there are multiple versions of test.csv (say '1' and '2'), and I want to use the catalog from the config file and run the following node/pipeline:
from kedro.config import ConfigLoader
from kedro.io import DataCatalog
from kedro.pipeline import Pipeline, node
from kedro.runner import SequentialRunner

conf_loader = ConfigLoader(['conf/base'])
conf_catalog = conf_loader.get('catalog*', 'catalog/**')
io = DataCatalog.from_config(conf_catalog)

def my_node(my_data_set):
    # if version_of_my_data_set == '1':  # how to do this?
    #     print("do something with version 1")
    # ... do something else
    return

my_pipeline = Pipeline([node(func=my_node, inputs="test_data_set", outputs=None, name="process_versioned_data")])
SequentialRunner().run(my_pipeline, catalog=io)
I understand that runtime parameters or the load version are supposed to be separated from the logic in a node by design, but in my specific case it would still be useful to find a way to do this.
In general the pipeline will be executed via the API but also via the command line with the --load_version flag.
Solutions that I have considered but discarded:
store the load version somehow in the Kedro session and access it within the node via "get_current_session" (how?)
add load_version as a required input parameter for the node (would probably break compatibility with some upstream pipeline)
In short:
Is there a good way to pass the user-specified load version of a dataset to a Kedro node?
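One crude workaround I have been sketching (not an official Kedro pattern; resolve_load_version() and the catalog.datasets attribute access are my assumptions about the versioned-dataset API) is to resolve the load version from the catalog up front and inject it into the pipeline as an extra in-memory input:

from kedro.io import MemoryDataSet

# Assumption: versioned datasets expose resolve_load_version(); otherwise the
# version string could come from whatever was passed via --load_version.
load_version = io.datasets.test_data_set.resolve_load_version()

# Expose the version to nodes as an ordinary catalog entry
io.add("test_data_set_version", MemoryDataSet(load_version))

def my_versioned_node(my_data_set, my_data_set_version):
    if my_data_set_version == "1":
        print("do something with version 1")
    # ... do something else

my_pipeline = Pipeline([
    node(func=my_versioned_node,
         inputs=["test_data_set", "test_data_set_version"],
         outputs=None,
         name="process_versioned_data"),
])
SequentialRunner().run(my_pipeline, catalog=io)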
Once an MLflow run is finished, external scripts can access its parameters and metrics using the Python mlflow client and the mlflow.get_run(run_id) method, but the Run object returned by get_run seems to be read-only.
Specifically, .log_param, .log_metric, or .log_artifact cannot be used on the object returned by get_run, raising errors like these:
AttributeError: 'Run' object has no attribute 'log_param'
If we attempt to run any of the .log_* methods on mlflow itself, it logs them into a new run with an auto-generated run ID in the Default experiment.
Example:
import mlflow

final_model_mlflow_run = mlflow.get_run(final_model_mlflow_run_id)
with mlflow.ActiveRun(run=final_model_mlflow_run) as myrun:
    # this read operation uses the correct run
    run_id = myrun.info.run_id
    print(run_id)

    # this write operation writes to a new run
    # (with an auto-generated random run ID)
    # in the "Default" experiment (with exp. ID of 0)
    mlflow.log_param("test3", "This is a test")
Note that the above problem exists regardless of the Run status (.info.status can be either "FINISHED" or "RUNNING", without making any difference).
I wonder if this read-only behavior is by design (given that immutable modeling runs improve experiment reproducibility)? I can appreciate that, but it also goes against code modularity if everything has to be done within a single monolith like the with mlflow.start_run() context...
As pointed out to me by Hans Bambel, and as documented here, mlflow.start_run (in contrast to mlflow.ActiveRun) accepts the run_id parameter of an existing run.
Here's an example tested to work in v1.13 through v1.19 - as you can see, one can even overwrite an existing metric to correct a mistake:
with mlflow.start_run(run_id=final_model_mlflow_run_id):
    # print(mlflow.active_run().info)
    mlflow.log_param("start_run_test", "This is a test")
    mlflow.log_metric("start_run_test", 1.23)
    mlflow.log_metric("start_run_test", 1.33)
    mlflow.log_artifact("/home/jovyan/_tmp/formula-features-20201103.json", "start_run_test")
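Alternatively (my own addition, not part of the answer above), the low-level tracking client can also write to an existing run by its ID without opening a run context:

from mlflow.tracking import MlflowClient

client = MlflowClient()
# Logs directly to the existing run, not to a new auto-generated one
client.log_param(final_model_mlflow_run_id, "client_test", "This is a test")
client.log_metric(final_model_mlflow_run_id, "client_test_metric", 1.23)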
I'm using Ray & RLlib to train RL agents on an Ubuntu system. Tensorboard is used to monitor the training progress by pointing it to ~/ray_results where all the log files for all runs are stored. Ray Tune is not being used.
For example, on starting a new Ray/RLlib training run, a new directory will be created at
~/ray_results/DQN_ray_custom_env_2020-06-07_05-26-32djwxfdu1
To visualize the training progress, we need to start Tensorboard using
tensorboard --logdir=~/ray_results
Question: Is it possible to configure Ray/RLlib to change the output directory of the log files from ~/ray_results to another location?
Additionally, instead of logging to a directory named something like DQN_ray_custom_env_2020-06-07_05-26-32djwxfdu1, can this directory name be set by ourselves?
Failed Attempt: Tried setting
os.environ['TUNE_RESULT_DIR'] = '~/another_dir'
before running ray.init(), but the result log files were still being written to ~/ray_results.
Without using Tune, you can change the logdir using RLlib's "Trainer". The "Trainer" class takes an optional "logger_creator" argument if you want to specify where to save the logs (see here).
A concrete example:
Define your customized logger creator (you can simply modify from the default one):
import os
import tempfile
from datetime import datetime

from ray.tune.logger import UnifiedLogger

def custom_log_creator(custom_path, custom_str):
    timestr = datetime.today().strftime("%Y-%m-%d_%H-%M-%S")
    logdir_prefix = "{}_{}".format(custom_str, timestr)

    def logger_creator(config):
        if not os.path.exists(custom_path):
            os.makedirs(custom_path)
        logdir = tempfile.mkdtemp(prefix=logdir_prefix, dir=custom_path)
        return UnifiedLogger(config, logdir, loggers=None)

    return logger_creator
Pass this logger_creator to the trainer, and start training:
from ray.rllib.agents.ppo import PPOTrainer

trainer = PPOTrainer(config=config, env='CartPole-v0',
                     logger_creator=custom_log_creator(os.path.expanduser("~/another_ray_results/subdir"), 'custom_dir'))

for i in range(ITER_NUM):
    result = trainer.train()
You will find the training results (i.e., TensorBoard events file, params, model, ...) saved under "~/another_ray_results/subdir" with your specified naming convention.
Is it possible to configure Ray/RLlib to change the output directory of the log files from ~/ray_results to another location?
There is currently no way to configure this using the RLlib CLI tool (rllib).
If you're okay with the Python API then, as described in the documentation, the local_dir parameter of tune.run is responsible for specifying the output directory; the default is ~/ray_results.
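A minimal sketch of that (the trainable name and config are just placeholders):

import os
from ray import tune

# Redirect results away from the default ~/ray_results
tune.run(
    "DQN",                                          # any registered trainable
    config={"env": "CartPole-v0"},
    local_dir=os.path.expanduser("~/another_dir"),  # custom output directory
)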
Additionally, instead of logging to a directory named something like DQN_ray_custom_env_2020-06-07_05-26-32djwxfdu1, can this directory name be set by ourselves?
This is governed by the trial_name_creator parameter of tune.run. It must be a function that accepts a trial object and formats it into a string, like so:
def trial_name_id(trial):
    return f"{trial.trainable_name}_{trial.trial_id}"

tune.run(..., trial_name_creator=trial_name_id)
Just for anyone who bumps into this problem with Ray Tune.
You can specify local_dir for run_config within tune.Tuner:
from ray import air, tune

# This logs to 2 different trial folders:
# ./results/test_experiment/trial_name_1 and ./results/test_experiment/trial_name_2
# Only trial_name is autogenerated.
tuner = tune.Tuner(
    trainable,
    tune_config=tune.TuneConfig(num_samples=2),
    run_config=air.RunConfig(local_dir="./results", name="test_experiment"),
)
results = tuner.fit()
Please see this link for more info.
It's the first time I have come across a recipe file written in Python, and it's giving me an error. The error is:
../meta-intel/recipes-rt/images/core-image-rt.bb: Error executing a python function in <code>:
This recipe comes from the meta-intel branch at "[master] intel-vaapi-driver: 2.1.0 -> 2.2.0".
My poky version is: "[morty] documentation: Updated manual revision table for 2.2.4 release date".
My BITBAKE version is: "BitBake Build Tool Core version 1.32.0"
The contents of core-image-rt.bb are:
require recipes-core/images/core-image-minimal.bb
# Skip processing of this recipe if linux-intel-rt is not explicitly specified as the
# PREFERRED_PROVIDER for virtual/kernel. This avoids errors when trying
# to build multiple virtual/kernel providers.
python () {
    if d.getVar("PREFERRED_PROVIDER_virtual/kernel") != "linux-intel-rt":
        raise bb.parse.SkipPackage("Set PREFERRED_PROVIDER_virtual/kernel to linux-intel-rt to enable it")
}
DESCRIPTION = "A small image just capable of allowing a device to boot plus a \
real-time test suite and tools appropriate for real-time use."
DEPENDS += "linux-intel-rt"
IMAGE_INSTALL += "rt-tests hwlatdetect"
LICENSE = "MIT"
If you need any additional information, please let me know and I'll try to supply it.
I can normally build images on my Ubuntu machine, but I don't believe I have ever had to build an image in which the recipes contain Python code.
You are using an incompatible API of the d.getVar method. In the morty release, which is the last one using the old calling convention, you still need to provide the boolean second parameter:
...
if d.getVar("PREFERRED_PROVIDER_virtual/kernel", True) != "linux-intel-rt":
...
Please take a look at the commit that removed this requirement in later releases.
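For clarity, here is how the anonymous Python function from core-image-rt.bb would look when adjusted to the morty-era BitBake API (my own sketch, assuming you patch the recipe locally rather than upgrade poky):

# Same logic as in core-image-rt.bb, but with the boolean expansion flag
# that older BitBake (1.32.0 / morty) still requires
python () {
    if d.getVar("PREFERRED_PROVIDER_virtual/kernel", True) != "linux-intel-rt":
        raise bb.parse.SkipPackage("Set PREFERRED_PROVIDER_virtual/kernel to linux-intel-rt to enable it")
}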
For this example I developed a simple Modelica model based on the Fluid library of the MSL. I connected a MassFlowSource to a pipe and a Boundary_PT as a sink, as in the picture below:
http://www.casimages.com/img.php?i=14061806120359130.png
I generated an FMU package with OpenModelica (in model-exchange mode).
I manage this FMU package from Python with the code below:
import pyfmi, os
from pyfmi import load_fmu

myModel = load_fmu('PathToFolder\\test3.fmu')

res1 = myModel.simulate()                # First simulation with m_flow in source set to [1] kg/s
x = myModel.get('boundary1.m_flow')      # Mass flow rate of the source
y = myModel.get('pipe.port_a.m_flow')    # Mass flow rate in the pipe
print(x, y)

myModel.set('boundary1.m_flow', 2)
option = myModel.simulate_options()
option['initialize'] = False             # Do not re-initialize for the second simulation
res2 = myModel.simulate(options=option)  # Second simulation with m_flow in source set to [2] kg/s
x = myModel.get('boundary1.m_flow')      # Mass flow rate of the source
y = myModel.get('pipe.port_a.m_flow')    # Mass flow rate in the pipe
print(x, y)

os.system('pause')
The objective is to show a problem when you change a parameter in the model, here the "m_flow" variable of the source component. Setting it to "2" should change the "m_flow" in the pipe, but it does not.
Results: In the first simulation both "m_flow" values are "1", which is normal because that is how the model is set up. In the second simulation, I set the parameter to "2" in the source, but the pipe "m_flow" stays at "1" (it should be "2").
http://www.casimages.com/img.php?i=140618060905759619.png
The model of the fluid source in Modelica is this one (only our interesting part):
equation
  if not use_m_flow_in then
    m_flow_in_internal = m_flow;
  end if;
  connect(m_flow_in, m_flow_in_internal);
I think the FMU does not take parameters into account when they are inside an if-condition. For me this is a problem because I need to manage FMUs and be sure that when I set a parameter, the simulation will use the new value. How can I be sure that FMU/FMI works well? Where is the exhaustive list of the types of parameters that cannot be managed in an FMU?
I already know that parameters which change the number of equations cannot be considered in FMU management (likewise for variables which change the index of the DAEs).
Note that OpenModelica has a concept of structural parameters and the Evaluate=true annotation. For example, if a parameter is used as an array dimension, it might be evaluated to an Integer value. All uses of that parameter will use the evaluated value, as if it was a constant.
Rather than including a picture of the diagram, the Modelica source code would have been easier to look at in order to find out what OpenModelica did to the system.
I suspect a parameter was evaluated. If you generate non-FMU code, you could inspect the modelName_init.xml generated by OpenModelica and find the entry for a parameter and look for the property isValueChangeable.
You could also use OMEdit to debug the system and view the initial equation (generate the executable including debug information). File->Open Transformations File, then select the modelName_info.xml file. Search for the variable you tried to change and go to the initial equation that defined it. It could very well be that a start-value (set by PyFMI) is ignored because it is not needed to produce a solution.
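For a quick programmatic check of the _init.xml, something along these lines might work (an assumption on my part: that the file lists variables as elements carrying a name attribute):

import xml.etree.ElementTree as ET

# Print every attribute (including isValueChangeable, if present) of the variable entry
root = ET.parse("modelName_init.xml").getroot()
for element in root.iter():
    if element.get("name") == "boundary1.m_flow":
        print(element.tag, element.attrib)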
Whenever you try to set new values for a parameter, follow these steps (see the sketch after this list):
1. Reset the model.
2. Set the new values for the parameter.
3. Simulate the model.
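In PyFMI terms that would be something like the sketch below (variable names taken from the question; I assume reset() is the call that performs step 1):

myModel.reset()                     # 1. reset the model to its initial state
myModel.set('boundary1.m_flow', 2)  # 2. set the new parameter value
res2 = myModel.simulate()           # 3. simulate again with the new value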
I am not familiar with PyFMI, but I kinda encountered the same situation before. You could try a few things below.
Try to terminate/free the instance after your first sim.
As most parameters cannot be changed after initialization, you could make that parameter an input connector, so that this specific parameter can be changed at any time.
(In an FMU from Dymola) I also found that if that parameter is involved in your initial nonlinear system of equations, then you will get the error "the model could not be initialized" if you try to initialize the model on the same instance.