Retrieving FMU co-simulation results when using Master in PyFMI

I am trying to co-simulate two Matlab-Simulink FMUs using the Master from PyFMI:
from pyfmi import load_fmu, Master
""" Loading FMUs, establishing connection between models, defining Master simulator and simulation options """
dummy_model = load_fmu('dummy.fmu', log_level = 4)
batt_model = load_fmu('battery.fmu', log_level = 4)
models = [dummy_model, batt_model]
connection = [(dummy_model, 'P_to_batt_fmu', batt_model, 'P_to_batt_fmu')]
mastersim = Master(models, connection)
opts = mastersim.simulate_options()
opts['step_size'] = 1000
opts['result_handling'] = 'memory'
""" Simulating """
res = mastersim.simulate(final_time = 1000, options = opts)
Everything up to here runs as expected. The problem comes when trying to retrieve the results from res, since I cannot look up the output variables of my Simulink models by name. These have names like 'Ibatt', 'SoC' or 'Qbatt' (see image below), and I would expect them to be the keys of res.
[Image: Output variable names in Simulink]
But they aren't. In fact, res appears to be a Python dictionary with rather opaque keys and values, and so far I have been unable to make heads or tails of it:
{0: <pyfmi.common.algorithm_drivers.AssimuloSimResult object at 0x000002B3AC3CCF10>,
<pyfmi.fmi.FMUModelCS2 object at 0x000002B3AB8BC800>: <pyfmi.common.algorithm_drivers.AssimuloSimResult object at 0x000002B3AC3CCF10>,
1: <pyfmi.common.algorithm_drivers.AssimuloSimResult object at 0x000002B3AC3CE1A0>,
<pyfmi.fmi.FMUModelCS2 object at 0x000002B3AB889670>: <pyfmi.common.algorithm_drivers.AssimuloSimResult object at 0x000002B3AC3CE1A0>}
As I said before, I would expect res to be a dictionary with very straightforward keys, which is what happens when simulating just one of the FMUs:
batt_model = load_fmu('battery.fmu', log_level = 4)
opts = batt_model.simulate_options()
opts['ncp'] = 1000
res = batt_model.simulate(final_time=1000, options = opts)
If I run the code above, then res.keys() are just the names of my Simulink output variables.
My questions are:
How can I find my variables in res when using Master?
How can I work with them in an easy way?
So far I have been able to work with a results.txt file by setting opts['result_handling'] = 'file'; however, I would much prefer storing and working with the results in memory.
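Judging by the dictionary printed above, Master appears to register each FMU's AssimuloSimResult twice: once under its position in the model list and once under the model object itself. If that is right, variables should be reachable by indexing res with the corresponding model. A plain-Python sketch of that dictionary shape (FakeModel and FakeResult are illustrative stand-ins, not PyFMI classes):

```python
# Stand-ins that mimic the shape of the Master result dictionary.
# FakeModel and FakeResult are made up for illustration only.
class FakeModel:
    pass

class FakeResult(dict):
    """Mimics a result object that supports lookup by variable name."""

dummy_model, batt_model = FakeModel(), FakeModel()
dummy_res = FakeResult(time=[0, 1000], P_to_batt_fmu=[0.0, 5.0])
batt_res = FakeResult(time=[0, 1000], SoC=[1.0, 0.97])

# Each model's result is stored twice: by position and by model instance
res = {0: dummy_res, dummy_model: dummy_res,
       1: batt_res, batt_model: batt_res}

# A variable from one FMU is then retrieved via its model object
soc = res[batt_model]['SoC']
```

If PyFMI's actual objects follow this shape, res[batt_model]['SoC'] (or equivalently res[1]['SoC']) would be the in-memory counterpart of the single-FMU res['SoC'].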


Catia select a feature from a specific instance in an assembly

Let's say I have an assembly like this:
MainProduct:
-Product1 (Instance of Part1)
-Product2 (Instance of Part2)
-Product3 (Instance of Part2)
-Product4 (Instance of Part3)
...
Now I want to copy/paste a feature from Product3 into another product.
But I run into problems when selecting the feature programmatically, because there are two instances of the part that contains the feature.
I can't control which feature will be selected by CATIA.ActiveDocument.Selection.Add(myExtractReference).
CATIA always selects the feature from Product2 instead of the feature from Product3, so the position of the pasted feature will be wrong!
Does anybody know this problem and have a solution for it?
Edit:
The feature reference which I want to copy already exists as a variable because it was newly created (an extract of selected geometry)
I got help elsewhere, but I still want to share my solution. It's written in Python, but in VBA it is almost the same.
The key is to access CATIA.Selection.Item(1).LeafProduct in order to know where the initial selection was made.
import win32com.client
import pycatia
CATIA = win32com.client.dynamic.DumbDispatch('CATIA.Application')
c_doc = CATIA.ActiveDocument
c_sel = c_doc.Selection
c_prod = c_doc.Product
# New part where the feature should be pasted
new_prod = c_prod.Products.AddNewComponent("Part", "")
new_part_doc = new_prod.ReferenceProduct.Parent
# from user selection
sel_obj = c_sel.Item(1).Value
sel_prod_by_user = c_sel.Item(1).LeafProduct # reference to the actual product where the selection was made
doc_from_sel = sel_prod_by_user.ReferenceProduct.Parent # part doc from selection
hb = doc_from_sel.Part.HybridBodies.Add() # new hybrid body for the extract. will be deleted later on
extract = doc_from_sel.Part.HybridShapeFactory.AddNewExtract(sel_obj)
hb.AppendHybridShape(extract)
doc_from_sel.Part.Update()
# Add the extract to the selection and copy it
c_sel.Clear()
c_sel.Add(extract)
sel_prod_by_catia = c_sel.Item(1).LeafProduct # reference to the product where Catia makes the selection
c_sel_copy() # will call Selection.Copy from VBA. Buggy in Python.
# Paste the extract into the new part in a new hybrid body
c_sel.Clear()
new_hb = new_part_doc.Part.HybridBodies.Item(1)
c_sel.Add(new_hb)
c_sel.PasteSpecial("CATPrtResultWithOutLink")
new_part_doc.Part.Update()
new_extract = new_hb.HybridShapes.Item(new_hb.HybridShapes.Count)
# Redo changes in the part, where the selection was made
c_sel.Clear()
c_sel.Add(hb)
c_sel.Delete()
# Create axis systems from the Position objects of sel_prod_by_user and sel_prod_by_catia
prod_list = [sel_prod_by_user, sel_prod_by_catia]
axs_list = []
for prod in prod_list:
    # conversion to pycatia's Position object, necessary in order to use Position.GetComponents
    pc_pos = pycatia.in_interfaces.position.Position(prod.Position)
    ax_comp = pc_pos.get_components()
    axs = new_part_doc.Part.AxisSystems.Add()
    axs.PutOrigin(ax_comp[9:12])
    axs.PutXAxis(ax_comp[0:3])
    axs.PutYAxis(ax_comp[3:6])
    axs.PutZAxis(ax_comp[6:9])
    axs_list.append(axs)
new_part_doc.Part.Update()
# Translate the extract from the axis system derived from sel_prod_by_catia to the one from sel_prod_by_user
extract_ref = new_part_doc.Part.CreateReferenceFromObject(new_extract)
tgt_ax_ref = new_part_doc.Part.CreateReferenceFromObject(axs_list[0])
ref_ax_ref = new_part_doc.Part.CreateReferenceFromObject(axs_list[1])
new_extract_translated = new_part_doc.Part.HybridShapeFactory.AddNewAxisToAxis(extract_ref, ref_ax_ref, tgt_ax_ref)
new_hb.AppendHybridShape(new_extract_translated)
new_part_doc.Part.Update()
I would suggest a different approach. Instead of adding references you get from somewhere (probably by name), add the actual instance of the part to the selection while iterating through all the products, or use instance names to get the correct part.
Here is a simple VBA example of iterating a one-level tree in a select/copy/paste scenario.
If you want to copy features, you have to dive deeper into the Instance objects.
Public Sub CatMain()

    Dim ActiveDoc As ProductDocument
    Dim ActiveSel As Selection

    'Of all the checks people use, I think this one is the most elegant and reliable
    If TypeOf CATIA.ActiveDocument Is ProductDocument Then
        Set ActiveDoc = CATIA.ActiveDocument
        Set ActiveSel = ActiveDoc.Selection
    Else
        Exit Sub
    End If

    Dim Instance As Product
    For Each Instance In ActiveDoc.Product.Products 'an object-oriented For Each is ideal in this scenario
        'Beware that products without parts also have 0 items and are therefore mistaken for parts
        If Instance.Products.Count = 0 Then
            Call ActiveSel.Add(Instance)
        End If
    Next

    Call ActiveSel.Copy
    Call ActiveSel.Clear

    Dim NewDoc As ProductDocument
    Set NewDoc = CATIA.Documents.Add("CATProduct")
    Set ActiveSel = NewDoc.Selection

    Call ActiveSel.Add(NewDoc.Product)
    Call ActiveSel.Paste
    Call ActiveSel.Clear

End Sub

create new dataframe using function

I ran into this nice blog post: https://towardsdatascience.com/the-search-for-categorical-correlation-a1c.
The author creates a function that allows you to calculate associations between categorical features and then create a heatmap from it.
The function is given as:
import numpy as np
import pandas as pd
import scipy.stats as ss

def cramers_v(x, y):
    confusion_matrix = pd.crosstab(x, y)
    chi2 = ss.chi2_contingency(confusion_matrix)[0]
    n = confusion_matrix.sum().sum()
    phi2 = chi2 / n
    r, k = confusion_matrix.shape
    phi2corr = max(0, phi2 - ((k-1)*(r-1))/(n-1))
    rcorr = r - ((r-1)**2)/(n-1)
    kcorr = k - ((k-1)**2)/(n-1)
    return np.sqrt(phi2corr / min((kcorr-1), (rcorr-1)))
I am able to create a list of associations between one feature and the rest by running the function in a for loop.
for item in raw[categorical].columns.tolist():
    value = cramers_v(raw['status_group'], raw[item])
    print(item, value)
It works in the sense that I get a list of association values, but I don't know how I would run this function for all features against each other and turn that into a new dataframe.
The author of the article has written a nice library (dython) that has this feature built in, but it doesn't work out well for my long list of features (my laptop can't handle it).
Running it on the first 100 lines of my df results in this... (note: this is what I get by running the associations function of the dython library written by the author).
How could I run the cramers_v function for all combinations of features and then turn this into a df which I could display in a heatmap?
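One way to do it (a sketch under assumptions: the features are columns of a pandas DataFrame, and cramers_matrix is a helper I made up, not part of the author's dython library) is to loop over all column pairs, fill a square DataFrame, and hand that to a heatmap:

```python
import numpy as np
import pandas as pd
import scipy.stats as ss

def cramers_v(x, y):
    # bias-corrected Cramér's V, same formula as the blog post's function
    confusion_matrix = pd.crosstab(x, y)
    chi2 = ss.chi2_contingency(confusion_matrix)[0]
    n = confusion_matrix.sum().sum()
    phi2 = chi2 / n
    r, k = confusion_matrix.shape
    phi2corr = max(0, phi2 - ((k - 1) * (r - 1)) / (n - 1))
    rcorr = r - ((r - 1) ** 2) / (n - 1)
    kcorr = k - ((k - 1) ** 2) / (n - 1)
    return np.sqrt(phi2corr / min((kcorr - 1), (rcorr - 1)))

def cramers_matrix(df, columns):
    # hypothetical helper (not part of dython): pairwise Cramér's V
    # for the given categorical columns, returned as a square DataFrame
    mat = pd.DataFrame(np.ones((len(columns), len(columns))),
                       index=columns, columns=columns)
    for i, a in enumerate(columns):
        for b in columns[i + 1:]:
            v = cramers_v(df[a], df[b])
            mat.loc[a, b] = v
            mat.loc[b, a] = v
    return mat
```

The resulting DataFrame can then be plotted directly, e.g. with seaborn: sns.heatmap(cramers_matrix(raw, categorical), annot=True).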

How to use BAC0 readRange in Python

Hi everyone. I am trying to use the BAC0 package in Python 3 to get the values of multiple points on a BACnet network.
I use something like the following:
bacnet = BAC0.lite(ip='x.x.x.x')
tmp_points = bacnet.readRange("11:2 analogInput 0 presentValue")
but it does not work. The error is:
BAC0.core.io.IOExceptions.NoResponseFromController: APDU Abort Reason : unrecognizedService
And in the documentation, all I can find is:
def readRange(
self,
args,
range_params=None,
arr_index=None,
vendor_id=0,
bacoid=None,
timeout=10,
):
"""
Build a ReadProperty request, wait for the answer and return the value
:param args: String with <addr> <type> <inst> <prop> [ <indx> ]
:returns: data read from device (str representing data like 10 or True)
*Example*::
import BAC0
myIPAddr = '192.168.1.10/24'
bacnet = BAC0.connect(ip = myIPAddr)
bacnet.read('2:5 analogInput 1 presentValue')
Requests the controller at (Network 2, address 5) for the presentValue of
its analog input 1 (AI:1).
"""
To read multiple properties from a device object, you must use readMultiple.
readRange will read from a property that acts like an array (e.g. TrendLog objects implement their records as an array; we use readRange to read them in chunks of records).
Details on how to use readMultiple can be found here: https://bac0.readthedocs.io/en/latest/read.html#read-multiple
A simple example would be
bacnet = BAC0.lite()
tmp_points = bacnet.readMultiple("11:2 analogInput 0 presentValue description")

Storing output from Python function necessary despite not using output

I am trying to understand why I must store the output of a Python function (regardless of the name of the variable I use, and regardless of whether I subsequently use that variable). I think this is more general to Python than specific to the NEURON software, so I am asking here on Stack Overflow.
The line of interest is here:
clamp_output = attach_current_clamp(cell)
If I just write attach_current_clamp(cell) without storing the output of the function in a variable, the code does not work (the plot is empty), even though I never use clamp_output at all. Why can't I just call the function? Why must I store the output in a variable even when I don't use it?
import sys
import numpy
sys.path.append('/Applications/NEURON-7.4/nrn/lib/python')
from neuron import h, gui
from matplotlib import pyplot
#SET UP CELL
class SingleCell(object):
    def __init__(self):
        self.soma = h.Section(name='soma', cell=self)
        self.soma.L = self.soma.diam = 12.6517
        self.all = h.SectionList()
        self.all.wholetree(sec=self.soma)
        self.soma.insert('pas')
        self.soma.e_pas = -65
        for sec in self.all:
            sec.cm = 20

#CURRENT CLAMP
def attach_current_clamp(cell):
    stim = h.IClamp(cell.soma(1))
    stim.delay = 100
    stim.dur = 300
    stim.amp = 0.2
    return stim
cell = SingleCell()
#IF I CALL THIS FUNCTION WITHOUT STORING THE OUTPUT, THEN IT DOES NOT WORK
clamp_output = attach_current_clamp(cell)
#RECORD AND PLOT
soma_v_vec = h.Vector()
t_vec = h.Vector()
soma_v_vec.record(cell.soma(0.5)._ref_v)
t_vec.record(h._ref_t)
h.tstop = 800
h.run()
pyplot.figure(figsize=(8,4))
soma_plot = pyplot.plot(t_vec,soma_v_vec)
pyplot.show()
This is a NEURON+Python specific bug/feature. It has to do with Python garbage collection and the way NEURON implements the Python-HOC interface.
When there are no more references to a NEURON object (e.g. the IClamp) from within Python or HOC, the object is removed from NEURON.
Saving the IClamp as a property of the cell averts the problem in the same way as saving the result, so that could be an option for you:
# In __init__:
self.IClamps = []
# In attach_current_clamp:
stim.amp = 0.2
cell.IClamps.append(stim)
#return stim
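The reference-counting behaviour itself can be demonstrated without NEURON (IClampStandIn below is a dummy class standing in for h.IClamp, not a NEURON object):

```python
import gc
import weakref

class IClampStandIn:
    """Stand-in for a NEURON IClamp: any Python object works for the demo."""

def attach_current_clamp():
    stim = IClampStandIn()
    return stim

# Keeping the return value keeps the object alive...
clamp_output = attach_current_clamp()
probe = weakref.ref(clamp_output)
assert probe() is not None

# ...but once the last reference disappears, the object is collected
# (immediately in CPython, thanks to reference counting).
del clamp_output
gc.collect()
assert probe() is None
```

NEURON's Python interface relies on the same mechanism: as soon as the last reference to the IClamp is gone, the underlying NEURON object is destroyed, which is why the current clamp silently vanishes from the simulation.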

Simplistic parallel grid search in Python

Ok, full disclosure: I'm not actually doing a grid search, but the minimum example I could find for what I'm about to ask can (with a grain of salt) be reduced to a grid search (in case you're wondering why I'm not mentioning numpy and friends).
I'm doing a grid search in Python where one axis is discrete and the other one continuous, but for simplicity, let's say I have the following:
x_axis = ['linear', 'quadratic', 'cubic']
y_axis = range(1, 100) # simplification
The evaluation of my function depends on both axes and an intermediate log structure, so in an effort to have no global data I'm defining my function as a closure:
def get_function(xval, *args):
    """ creates the closure that encapsulates thread-local data """
    log = {}  # initialization depends on args
    def fun(yval):
        """ evaluation dedicated to a single x_axis value """
        if yval in log:
            # in a proper grid search I wouldn't check twice,
            # but this is just to show that log is used and
            # amended inside fun()
            pass
        else:
            log[yval] = 0
        return very_time_consuming_fun(xval, yval, log)
    return fun
So the script uses this setup to run a grid search:
def func_eval(fun):
    for yval in y_axis:
        fun(yval)

# the loop I want to parallelize
for xval in x_axis:
    fun = get_function(xval, args)  # args are computed based on xval
    func_eval(fun)  # can I do result = func_eval(fun) ?
The things I want to ask are:
Am I correct in assuming that log works with a different instance for each x_axis value?
What is the best way to parallelize the last for loop? (If synchronization is needed for the log instances, please elaborate.) Again, I only want each x_axis value to be evaluated on its own thread / core / you name it (best practices are welcome).
Is there a way to get results out of each func_eval, i.e. could it still be parallelized if I had the following:
out = func_eval(fun)
I hate the silence ...
This is what I'm doing for now (so at least I'll get downvoted if it ain't right)
import multiprocessing
pool = multiprocessing.Pool()
funcs = [get_function(xval, args) for xval in x_axis]
outputs = pool.map(func_eval, funcs)
This should be good enough, but I get the following error:
PicklingError: Can't pickle : attribute lookup builtin.function failed
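The PicklingError happens because the closures returned by get_function cannot be pickled, and pool.map has to pickle whatever it sends to the worker processes. A common workaround (a sketch under assumptions: the per-x log is rebuilt inside the worker, and very_time_consuming_fun is a trivial stand-in for the real evaluation) is to move the work into a module-level function and map over picklable arguments instead:

```python
import multiprocessing

y_axis = range(1, 100)

def very_time_consuming_fun(xval, yval, log):
    # hypothetical stand-in for the real evaluation
    return len(xval) * yval

def func_eval(xval):
    # Module-level worker: picklable, unlike a closure. Each call builds
    # its own log, so every x_axis value gets an independent instance.
    log = {}
    results = {}
    for yval in y_axis:
        log.setdefault(yval, 0)
        results[yval] = very_time_consuming_fun(xval, yval, log)
    return results

if __name__ == '__main__':
    x_axis = ['linear', 'quadratic', 'cubic']
    with multiprocessing.Pool() as pool:
        outputs = pool.map(func_eval, x_axis)  # one result dict per x value
```

Each worker process then owns its log (which also answers question 1: every x_axis value gets an independent instance), no synchronization is needed, and pool.map returns the per-x result dicts in order.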
