I found a limitation of the FMU-module method get_states_list(). This method seems to return a list of only the continuous time states, not the discrete time states. I usually make models that contain both continuous and discrete time sub-models describing the process and the control system, and I am very interested in being able to get a list of ALL states in the system.
One possibility could have been get_fmu_state() but I get the exception text “This FMU does not support get and set FMU-state”.
Another possibility would perhaps be to bring out a larger list of all variables and sort out those variables whose declaration contains "fixed=true", but I am not sure how to bring out this attribute, although other attributes like min, max and nominal can be brought out. The method get_model_variables() could perhaps be of help, but I only get some address associated with the variable….
What to do?
The get_states_list method maps back to the FMI specification, which only includes the continuous time states, so this is by design.
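That said, you may be able to enumerate the discrete states yourself by filtering the output of get_model_variables(). A minimal sketch, assuming a PyFMI-style API where each returned variable object exposes a variability attribute - check the attribute names and the variability codes against your version's documentation:

from pyfmi import load_fmu

model = load_fmu("MyModel.fmu")  # hypothetical FMU file

# FMI 2.0 variability codes: 0=constant, 1=fixed, 2=tunable,
# 3=discrete, 4=continuous (verify against your FMI version)
DISCRETE = 3

for name, var in model.get_model_variables().items():
    if var.variability == DISCRETE:
        print(name, model.get(name))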
What is a good data type for a unit collection in an RTS?
I'm contributing to an API that lets you write bots for the strategy game StarCraft II in Python.
Right now there is a class Units that inherits from list. Every frame, a new Units object gets created, and then selections of these units are made, creating new Units objects, for example with a filter for all enemy units or all flying units.
We use these selections to find the closest enemy to control our units, select our units that can attack right now or need a different order and so on.
But this also means we do a lot of filtering by attributes of each unit in every frame, which takes a lot of time. Initializing one Units object alone takes 2e-5 to 5e-5 seconds, and we do it millions of times per game, which can slow down the bot and tests a lot, in addition to the filtering process with loops over each unit in the Units object.
Is there a better data type for this?
Maybe something that does not need to be recreated for each selection in one frame, but just starts with the initial list of all units we get from the protocol buffer, so that selections and filters can be applied without recreating the object? What would be a good way to implement this so that filtering multiple times per frame is not that slow and/or complicated?
This doesn't sound like an ADT problem at all; it sounds like inefficient programming. It is impossible for us to tell you exactly what to construct to achieve what you're going for.
What you should probably be investigating is how to construct a UnitView if you don't actually need to modify the units data. Consider something similar to how dictionaries return views in Python 3 (see the documentation on dictionary view objects for more details).
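To make that concrete, here is a minimal sketch of a lazy view over the shared master list. Unit attributes like is_enemy and is_flying are hypothetical stand-ins for the real API; the point is that a selection stores a predicate instead of copying units into a new list subclass:

class UnitsView:
    def __init__(self, units, predicate=None):
        self._units = units          # shared reference, never copied
        self._predicate = predicate  # None means "all units"

    def __iter__(self):
        if self._predicate is None:
            return iter(self._units)
        return (u for u in self._units if self._predicate(u))

    def filter(self, predicate):
        # compose predicates lazily instead of building a new list
        if self._predicate is None:
            return UnitsView(self._units, predicate)
        old = self._predicate
        return UnitsView(self._units, lambda u: old(u) and predicate(u))

# one view per frame over the raw protobuf units; selections are cheap:
# all_units = UnitsView(units_from_protobuf)
# flying_enemies = all_units.filter(lambda u: u.is_enemy).filter(lambda u: u.is_flying)

Nothing is copied until you iterate, so stacking several selections per frame costs almost nothing up front; if one selection is iterated many times in a frame, you can still materialize it once with list(view).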
I'm using python-fu. I want to copy the IWarp filter I added to one layer onto another layer I just added to the document.
my code:
document = gimp.image_list()[0]
layer_with_filter = document.layers[0]
layer_without_filter = document.layers[3]
I don't find a way to see, using:
dir(layer_with_filter)
whether there is an effect or filter added to that layer. Is it possible to know that, or does the change from the filter happen somewhere else?
thanks
No, that is not possible.
You can, through Python, execute almost all the filters with arbitrary values you supply on the Python side. But there is no way to either tell GIMP to repeat a filter with its previous values, or to retrieve the values used in a filter operation on the Python side.
IWarp especially is not even usable in a programmatic way, as it relies on live interaction with the plug-in window to create the distortion map - so you are out of luck there.
However, anything that can be done with the "IWarp" plug-in can be done with the "Displace" plug-in (check Filters->Map->Displace...). That one is usable programmatically, and you could apply the effect of one application of displacement to other layers using Python. "Displace" requires two intermediate layers indicating the offset to be used for each pixel on the original image. These two layers are combined as a 2D field, where the value of each pixel (~ its brightness) indicates one coordinate of an offset where the target pixel will be placed.

Internally, that is what IWarp does - however, its displacement map is created by its "internal tools" such as grow, shrink, move..., and there is no programmatic way to retrieve the displacement map used by IWarp so that it could be pasted into a layer and used with the Displace filter. But if you really need this feature, that might be the easiest way to go: modify the source code (in C) of the IWarp filter to add a "save the displacement map" button - it could then create two new layers suitable to be used by the Displace filter.
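For reference, a python-fu call to the Displace plug-in looks roughly like this. The layer indices and parameter values are made up for illustration, so check the exact signature of plug-in-displace in GIMP's Procedure Browser for your version:

from gimpfu import gimp, pdb

image = gimp.image_list()[0]
target_layer = image.layers[3]
map_x = image.layers[1]  # hypothetical grayscale layer holding X offsets
map_y = image.layers[2]  # hypothetical grayscale layer holding Y offsets

pdb.plug_in_displace(
    image, target_layer,
    20, 20,        # amount-x, amount-y: displacement strength
    1, 1,          # do-x, do-y: displace along both axes
    map_x, map_y,  # the two displacement-map drawables
    1,             # edge behaviour (e.g. 1 = smear; see the PDB entry)
)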
Back to the subject of programmatically repeating other filters: the development branch of GIMP - GIMP 2.9 - has switched most filters to a completely new framework using GEGL (the Generic Graphics Library), the new engine for all pixel manipulation in GIMP. However, the Python bindings have not been updated yet to take advantage of these new filters. When they finally are, it may well become possible to add a call that retrieves the last used values.
And, again specifically for IWarp: in the development version the filter has been promoted to a fully interactive tool, and there is no mechanism to retrieve the result of the tool's interaction with one layer in order to "replay" it on another layer.
TensorFlow's scalar/histogram/image_summary functions are very useful for logging data for viewing with TensorBoard. But I'd like that information printed to the console as well (e.g. if I'm a crazy person without a desktop environment).
Currently, I'm adding the information of interest to the fetch list before calling sess.run, but this seems redundant as I'm already fetching the merged summaries. Fetching the merged summaries returns a protobuf, so I imagine I could scrape it using some generic python protobuf library, but this seems like a common enough use case that there should be an easier way.
The main motivation here is encapsulation. Let's say I have my model and training script in different files. My model has a bunch of calls to tf.scalar_summary for the information that is useful to log. Ideally, I'd be able to specify whether or not to additionally print this information to the console by changing something in the training script, without changing the model file. Currently, I either pass all of the useful information to the training script (so I can fetch it), or I pepper the model file with calls to tf.Print.
Overall, there isn't first class support for your use case in TensorFlow, so I would parse the merged summaries back into a tf.Summary() protocol buffer, and then filter / print data as you see fit.
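A minimal sketch of that parsing step, assuming a TF 1.x-style setup where sess is an existing Session and merged is the merged summary op (only scalar summaries populate simple_value):

import tensorflow as tf

summary_str = sess.run(merged)  # sess and merged assumed to exist already

summary_proto = tf.Summary()
summary_proto.ParseFromString(summary_str)
for value in summary_proto.value:
    if value.HasField('simple_value'):  # skips histogram/image entries
        print('%s: %g' % (value.tag, value.simple_value))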
If you come up with a nice pattern, you could then merge it back into TensorFlow itself. I could imagine making this an optional setting on tf.train.SummaryWriter, but it is probably best to just have a separate class for printing interesting summaries to the console.
If you want to encode in the graph itself which items should be summarized and printed, and which items should only be summarized (or to set up a system of different verbosity levels), you could use the collections argument to the summary op constructors to organize different summaries into different groups. E.g. the loss summary could be put in the collections [GraphKeys.SUMMARIES, 'ALWAYS_PRINT'], but another summary could be in the collection [GraphKeys.SUMMARIES, 'PRINT_IF_VERBOSE'], etc. Then you can have different merge_summary ops for the different types of printing, and control which ones are run via command line flags.
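A rough sketch of that grouping with the TF 1.x summary API (older releases spell these ops tf.scalar_summary / tf.merge_summary; the loss and grad_norm tensors here are stand-ins for real model tensors):

import tensorflow as tf

ALWAYS = 'ALWAYS_PRINT'
VERBOSE = 'PRINT_IF_VERBOSE'

loss = tf.constant(0.5)       # stand-in for a real model tensor
grad_norm = tf.constant(1.2)  # stand-in for a real model tensor

# each summary lands in the default SUMMARIES collection plus its group
tf.summary.scalar('loss', loss, collections=[tf.GraphKeys.SUMMARIES, ALWAYS])
tf.summary.scalar('grad_norm', grad_norm, collections=[tf.GraphKeys.SUMMARIES, VERBOSE])

print_always = tf.summary.merge(tf.get_collection(ALWAYS))
print_verbose = tf.summary.merge(tf.get_collection(VERBOSE))
# fetch print_always every step; add print_verbose behind a --verbose flag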
I would like to hear your opinion about the effective implementation of a one-to-many relationship with Python NDB (e.g. Person (one) to Tasks (many)).
In my understanding, there are three ways to implement it.
1. Use the 'parent' argument
2. Use a 'repeated' StructuredProperty
3. Use a 'repeated' KeyProperty
I usually choose an approach based on the logic below, but does it make sense to you?
If you have better logic, please teach me.
1. Use the 'parent' argument when:
- a transactional operation is required between these entities
- a bidirectional reference is required between these entities
- you strongly intend a 'parent-child' relationship

2. Use a 'repeated' StructuredProperty when:
- you don't need to use the 'many' entity individually (it is always used together with the 'one' entity)
- the 'many' entity is only referred to by the 'one' entity
- the number of repeated items is less than 100

3. Use a 'repeated' KeyProperty when:
- you need to use the 'many' entity individually
- the 'many' entity can be referred to by other entities
- the number of repeated items is more than 100
Option 2 increases the size of the entity, but it saves datastore operations (we need to use a projection query to reduce the CPU time for deserialization, though). Therefore, I use this approach as much as I can.
I really appreciate your opinion.
A key thing you are missing: How are you reading the data?
If you are displaying all the tasks for a given person on a request, option 2 makes sense: you can fetch the person and show all their tasks.
However, if you need to query say a list of all tasks say due at a certain time, querying for repeated structured properties is terrible. You will want individual entities for your Tasks.
There's a fourth option, which is to use a KeyProperty in your Task that points to your Person. When you need a list of Tasks for a person, you can issue a query.
If you need to search for individual Tasks, then you probably want to go with #4. You can use it in combination with #3 as well.
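A minimal sketch of option #4, with illustrative model and property names:

from google.appengine.ext import ndb

class Person(ndb.Model):
    name = ndb.StringProperty()

class Task(ndb.Model):
    title = ndb.StringProperty()
    due = ndb.DateTimeProperty()
    person = ndb.KeyProperty(kind=Person)  # back-reference to the owning Person

# All tasks for one person (person_key assumed to exist):
# tasks = Task.query(Task.person == person_key).fetch()
# Individual tasks across all people, e.g. due before some cutoff:
# overdue = Task.query(Task.due < cutoff).fetch()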
Also, the number of repeated properties has nothing to do with 100. It has everything to do with the size of your Person and Task entities, and how much will fit into 1MB. This is potentially dangerous, because if your Task entity can be large, you might run out of space in your Person entity faster than you expect.
One thing that most GAE users will come to realize (sooner or later) is that the datastore does not encourage design according to the formal normalization principles that would be considered a good idea in relational databases. Instead it often seems to encourage design that is unintuitive and anathema to established norms. Although relational database design principles have their place, they just don't work here.
I think the basis for the datastore design instead falls into two questions:
How am I going to read this data and how do I read it with the minimum number of read operations?
Is storing it that way going to lead to an explosion in the number of write and indexing operations?
If you answer these two questions with as much foresight and actual tests as you can, I think you're doing pretty well. You could formalize other rules and specific cases, but these questions will work most of the time.
I am programming some kind of simulation with its data organised in a tree. The main object is World, which holds a bunch of methods and a list of City objects. Each City object in turn has a bunch of methods and a list of Population objects. Population objects have no methods of their own; they merely hold attributes.
My question regards the latter Population objects, which I can either derive from object or create as dictionaries. What is the most efficient way to organise these?
Here are few cases which illustrate my hesitation:
Saving the Data
I need to be able to save and load the simulation, for which purpose I use the built-in json module (I want the data to be human readable). Because the program is organised in a tree, saving data at each level can be cumbersome. In this case, the population is best kept as a dictionary appended to a population list as an attribute of a City instance. This way, saving is a mere matter of passing the City instance's __dict__ into json.
Using the Data
If I want to manipulate the population data, it is easier as a class instance than as a dictionary. Not only is the syntax simple, but I can also enjoy introspection features better while coding.
Performance
I am not sure, finally, as to what is most efficient in terms of resources. An object and a dictionary differ little in the end, since each object has a __dict__ attribute, which can be used to access all its attributes. If I run my simulation with large numbers of City and Population objects, which will use fewer resources: objects or dictionaries?
So again, what is the most efficient way to organise data in a tree? Are dictionaries or objects preferable? Or is there any secret to organising the data trees?
Why not a hybrid dict/object?
class Population(dict):
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            # raise AttributeError so hasattr(), copy, pickle etc. behave
            raise AttributeError(key)

    def __setattr__(self, key, value):
        self[key] = value
Now you can easily access known names via attributes (foo.bar), while still having the dict functionality to easily access unknown names, iterate over them, etc. without the clunky getattr/setattr syntax.
If you want to always initialize them with particular fields, you can add an __init__ method:
def __init__(self, starting=0, birthrate=100, imrate=10, emrate=10, deathrate=100):
self.update(n=starting, b=birthrate, i=imrate, e=emrate, d=deathrate)
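For example, attribute access and dict behaviour then coexist, and json.dumps works directly because Population is still a dict:

pop = Population(starting=500)
pop.b = 120             # attribute write goes into the dict
print(pop['n'], pop.b)  # 500 120

import json
print(json.dumps(pop))  # serializes directly, since it is still a dict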
As you've seen yourself, there is little practical difference - the main difference, in my opinion, is that using individual, hard-coded attributes is slightly easier with objects (no need to quote the name), while dicts easily allow treating all values as one collection (e.g. summing them). This is why I'd go for objects, since the data of the population objects is likely heterogeneous and relatively independent.
I think you should consider using a namedtuple (see the Python docs on the collections module). You get to access the attributes of the Population object by name like you would with a normal class, e.g. population.attribute_name instead of population['attribute_name'] for a dictionary. Since you're not putting any methods on the Population class this is all you need.
For your "saving data" criterion, there's also an _asdict method which returns a dictionary of field names to values that you could pass to json. (You might need to be careful about exactly what you get back from this method depending on which version of Python you're using. Some versions return a dictionary, and some return an OrderedDict. This might not make any difference for your purposes.)
namedtuples are also pretty lightweight, so they also satisfy your 'Performance' requirement. However, I'd echo other people's caution in saying not to worry about that; there's going to be very little difference unless you're doing some serious data-crunching.
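A minimal sketch (the field names here are illustrative, not taken from your model):

from collections import namedtuple
import json

Population = namedtuple('Population', ['size', 'birthrate', 'deathrate'])

pop = Population(size=1000, birthrate=12.5, deathrate=9.1)
print(pop.size)                   # attribute access, like a normal class
print(json.dumps(pop._asdict()))  # dict form for human-readable saving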
I'd say that in every case a Population is a member of a City, and if it's data only, why not use a dictionary?
Don't worry about performance, but if you really need to know, I think a dict is faster.