The D-Bus API uses a special format to describe complex parameters. Since the D-Bus specification wasn't written with Python in mind, it can be hard to work out exactly what parameter structure you have to pass.
In my example I want to call the Mount() method of the Filesystem object. This method has the signature a{sv}.
Mount() is defined like this:
org.freedesktop.UDisks2.Filesystem
...
The Mount() method
Mount (IN a{sv} options,
OUT s mount_path);
source: http://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.Filesystem.html#gdbus-method-org-freedesktop-UDisks2-Filesystem.Mount
The complete code to mount a partition is this:
import dbus

bus = dbus.SystemBus()
device = "/org/freedesktop/UDisks2/block_devices/sdi1"
obj = bus.get_object('org.freedesktop.UDisks2', device)
obj.Mount(..., dbus_interface="org.freedesktop.UDisks2.Filesystem")
Where ... stands for the parameters in question.
The answer is separated into different layers:
parameter structure
key names
legal values
The parameter structure for dbus is defined here: https://dbus.freedesktop.org/doc/dbus-specification.html#type-system
We learn from it that a{sv} is an ARRAY of DICT_ENTRY items, i.e. a dictionary of key-value pairs. Each key is a STRING; each value is a VARIANT, which is data of any type preceded by a type code.
Thankfully we don't have to deal with these low-level details; the Python dbus bindings handle the marshalling for us.
So the solution simply is:
obj.Mount(dict(key="value", key2="value2"),
dbus_interface="org.freedesktop.UDisks2.Filesystem")
The actual key names are defined in the UDisks docs:
IN a{sv} options: Options - known options (in addition to standard options)
includes fstype (of type 's') and options (of type 's').
OUT s mount_path: The filesystem path where the device was mounted.
from http://storaged.org/doc/udisks2-api/latest/gdbus-org.freedesktop.UDisks2.Filesystem.html#gdbus-method-org-freedesktop-UDisks2-Filesystem.Mount
while standard options refers to:
Option name: auth.no_user_interaction
Value type: 'b'
Description: If set to TRUE, then no user interaction will happen when checking if the method call is authorized.
from http://storaged.org/doc/udisks2-api/latest/udisks-std-options.html
So, adding the key names, we have:
obj.Mount(dict(fstype="value", options="value2"),
dbus_interface="org.freedesktop.UDisks2.Filesystem")
Regarding the values, I think you have to study the sections Filesystem Independent Mount Options and Filesystem Dependent Mount Options from https://linux.die.net/man/8/mount
So the final solution looks like this:
obj.Mount(dict(fstype="vfat", options="ro"),
dbus_interface="org.freedesktop.UDisks2.Filesystem")
Related
I want to subscribe to vehicles in my simulation. However, I find it very difficult to understand which variables I can use for this, as the documentation does not include this information.
traci.vehicle.subscribe(veh_id, [
tc.VAR_SPEED,
tc.VAR_ACCELERATION,
tc.VAR_EMERGENCY_DECEL,
# tc.VAR_ROUTE,
tc.VAR_POSITION,
# tc.VAR_FOLLOWER,
tc.VAR_NEXT_TLS
])
The issue is that tc.VAR_ROUTE causes this error in the terminal:
traci.exceptions.TraCIException: Could not add subscription. Get Vehicle Variable: unsupported variable 0x57 specified
and tc.VAR_FOLLOWER causes this error in SUMO:
Error: Storage::readChar(): invalid position
Quitting (on error).
Why is that? Also I do not quite understand how to learn more about the different constants. For example, which ones can I use to subscribe to vehicles?
When I look into the file under traci/constants.py, there are different types of variables.
starting with CMD_ (the comments call these "command")
starting with RESPONSE_ (the comments call these "response")
starting with VAR_ (the comments say these are "Variable types (for CMD_GET_*_VARIABLE)")
stop flags, departure flags, and many more
Also, the comments sometimes say something like: (get: ...; set: ...)
e.g. here:
# position (2D) (get: vehicle, poi, inductionloop, lane area detector; set: poi)
VAR_POSITION = 0x42
What does that mean? I know that I can get the subscription results from subscribing to these constants, but how could I possibly set them?
My main question is if someone can please explain how these constants are structured and which ones I can use for subscribing to vehicles.
You are right, this is not well documented. I added a ticket: https://github.com/eclipse/sumo/issues/8579
The answer to your direct question is that everything which is listed in the value retrieval section of the docs (which should be equivalent to having a "get" in the constants file comment) should be "subscribable".
VAR_ROUTE is not, because it is only used to set a route using the edge list but there is no equivalent getter. You can only subscribe to the route ID for instance. VAR_FOLLOWER in turn needs a parameter when subscribing, so you need to give that as an additional argument. How to do that unfortunately depends on your SUMO version.
For SUMO 1.9 and later the call looks like this:
traci.vehicle.subscribe(vehID, (tc.VAR_FOLLOWER,), begin, end, {tc.VAR_FOLLOWER: ("d", dist)})
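For completeness, a minimal sketch of subscribing and then reading the results each simulation step (the scenario file name and vehicle ID are hypothetical; assumes SUMO is on your PATH):

import traci
import traci.constants as tc

traci.start(["sumo", "-c", "scenario.sumocfg"])
veh_id = "veh0"

# a vehicle can only be subscribed to once it has entered the network
while veh_id not in traci.vehicle.getIDList():
    traci.simulationStep()

traci.vehicle.subscribe(veh_id, [tc.VAR_SPEED, tc.VAR_POSITION])

for _ in range(100):
    traci.simulationStep()
    # maps each subscribed constant to its current value,
    # e.g. {tc.VAR_SPEED: 13.9, tc.VAR_POSITION: (101.2, 54.3)}
    results = traci.vehicle.getSubscriptionResults(veh_id)
    print(results)

traci.close()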
Please read this whole question before answering, as it's not what you think... I'm looking at creating python object wrappers that represent hardware devices on a system (trimmed example below).
class TPM(object):
    @property
    def attr1(self):
        """
        Protects value from being accidentally modified after
        constructor is called.
        """
        return self._attr1

    def __init__(self, attr1, ...):
        self._attr1 = attr1
        ...

    @classmethod
    def scan(cls):
        """Calls Popen, parses to dict, and passes **dict to constructor"""
Most of the constructor inputs come from running command line tools via subprocess.Popen and then parsing the output to fill in object attributes. I've come up with a few ways to handle these, but I'm unsatisfied with what I've put together so far and am trying to find a better solution. Here are the common catches that I've found. (Quick note: tool versions are tightly controlled, so parsed outputs don't change unexpectedly.)
Many tools produce variant outputs, sometimes including fields and sometimes not. This means that if you assemble a dict to be wrapped in a container object, the constructor is more or less forced to take **kwargs and not really have defined fields. I don't like this because it makes static analysis via pylint, etc less than useful. I'd prefer a defined interface so that sphinx documentation is clearer and errors can be more reliably detected.
In lieu of **kwargs, I've also tried setting default args to None for many of the fields, with what ends up as pretty ugly results. One thing I dislike strongly about this option is that optional fields don't always come at the end of the command line tool output. This makes it a little mind-bending to look at the constructor and match it up to tool output.
I'd greatly prefer to avoid constructing a dictionary in the first place, but using setattr to create attributes will make pylint unable to detect the _attr1, etc... and create warnings. Any ideas here are welcome...
Basically, I am looking for the proper Pythonic way to do this. My requirements, to re-summarize, are the following:
Command line tool output parsed into a container object.
Container object protects attributes via properties post-construction.
Varying number of inputs to constructor, with working static analysis and error detection for missing required fields during runtime.
Is there a good way of doing this (hopefully without a ton of boilerplate code) in Python? If so, what is it?
EDIT:
Per some of the clarification requests, we can take a look at the tpm_version command. Here's the output for my laptop, but for this TPM it doesn't include every possible attribute. Sometimes, the command will return extra attributes that I also want to capture. This makes parsing to known attribute names on a container object fairly difficult.
TPM 1.2 Version Info:
Chip Version: 1.2.4.40
Spec Level: 2
Errata Revision: 3
TPM Vendor ID: IFX
Vendor Specific data: 04280077 0074706d 3631ffff ff
TPM Version: 01010000
Manufacturer Info: 49465800
Example code (ignore the lack of sanity checks, please; trimmed for brevity):
from subprocess import Popen, PIPE  # at module level

def __init__(self, chip_version, spec_level, errata_revision,
             tpm_vendor_id, vendor_specific_data, tpm_version,
             manufacturer_info):
    self._chip_version = chip_version
    ...

@classmethod
def scan(cls):
    tpm_proc = Popen("/usr/sbin/tpm_version", stdout=PIPE,
                     universal_newlines=True)
    stdout, stderr = tpm_proc.communicate()
    tpm_dict = dict()
    for line in stdout.splitlines():
        if "Version Info:" in line:
            pass
        else:
            split_line = line.split(":")
            attribute_name = (
                split_line[0].strip().replace(' ', '_').lower())
            tpm_dict[attribute_name] = split_line[1].strip()
    return cls(**tpm_dict)
The problem here is that this tool (or a different one whose source I may not be able to review to learn every possible field) could emit extra things that my parser handles but my object fails to capture. That's what I'm really trying to solve in an elegant way.
I've been working on a more solid answer to this over the last few months, as I basically work on hardware support libraries, and have finally come up with a satisfactory (though pretty verbose) answer.
Parse the tool outputs, whatever they look like, into object structures that match up to how the tool views the device. These can have very generic dict structures, but should be broken out as much as possible.
Create another container class on top of that which uses properties to access items in the tool-container objects. This enforces an API and can return sane errors across multiple versions of the tool, and across differing tool outputs!
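A minimal sketch of those two layers (all names hypothetical; layer 1 would be fed by something like the scan() method above):

class ToolOutputError(AttributeError):
    """This tool/version did not report the requested field."""

class TPMToolOutput(object):
    # layer 1: holds whatever tpm_version happened to print
    def __init__(self, fields):
        self.fields = fields  # plain dict of parsed "name: value" lines

class TPM(object):
    # layer 2: a fixed API that static analysis and sphinx can see
    def __init__(self, tool_output):
        self._out = tool_output

    def _get(self, key):
        try:
            return self._out.fields[key]
        except KeyError:
            raise ToolOutputError("tpm_version did not report %r" % key)

    @property
    def chip_version(self):
        return self._get("chip_version")

    @property
    def spec_level(self):
        return self._get("spec_level")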
My target is to allow an easy way to “filter” previously defined nodes. Consider this fictional YAML file:
%YAML 1.1
---
- fruit: &fruitref { type: banana, color: yellow }
- another_fruit: !rotten *fruitref
What do I need to define in either the YAML file or the Python code that parses this file in order to call a custom function with *fruitref (i.e. the previously defined object, in this case a map) as argument and get the return value? The target is as simple and terse a syntax for “filtering” a previously defined value (map, sequence, whatever).
Note
It seems to me that the construct !tag *alias is invalid YAML, because of the error:
expected <block end>, but found '<alias>'
in "/tmp/test.yaml", line 4, column 21
which most probably implies that I won't be able to achieve the required syntax, but I do care about terseness (or rather, the target users will).
Routes taken
YAML: !!python/object/apply:__main__.rotten [*fruitref]
It works but it is too verbose for the intended use; and there is no need for multiple arguments, the use case is ALWAYS a filter for an alias (a previously defined map/sequence/object).
YAML: %TAG !f! !!python/object/apply:__main__.
Perhaps !f!rotten [*fruitref] would be acceptable, but I can't find how to make use of the %TAG directive.
EDIT: I discovered that the !! shorthand doesn't work for PyYAML 3.10; it has to be the complete URL, like this: %TAG !f! tag:yaml.org,2002:python/object/apply:__main__.
Python: yaml.add_constructor
I already use add_constructor for “casting” maps to specific instances of my classes; the caveat is that !tag *alias seems to be invalid YAML.
Best so far
add_constructor('!rotten', filter_rotten) in Python and !rotten [*fruitref] in YAML seem to work, but I'm wondering how to omit the square brackets if possible.
It seems that it is not possible to apply a tag to an already tagged reference, so:
!tag *reference
is not acceptable. The best possible solution is to enclose the reference in square brackets (creating a sequence) and make the tag either a function call or a special constructor expecting a sequence of one object, so the tersest syntax available is:
!prefix!suffix [*reference]
or
!tag [*reference]
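For reference, a minimal sketch of that best-so-far route (the body of the filter is purely illustrative; only add_constructor and the one-element-sequence form come from above):

import yaml

def filter_rotten(loader, node):
    # !rotten receives a one-element sequence: [*reference]
    (fruit,) = loader.construct_sequence(node, deep=True)
    rotten = dict(fruit)     # copy, so the aliased map stays untouched
    rotten["rotten"] = True  # illustrative "filtering"
    return rotten

yaml.add_constructor('!rotten', filter_rotten, Loader=yaml.SafeLoader)

doc = yaml.safe_load("""
- fruit: &fruitref { type: banana, color: yellow }
- another_fruit: !rotten [*fruitref]
""")
print(doc[1]["another_fruit"])
# {'type': 'banana', 'color': 'yellow', 'rotten': True}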
I'm using bulbflow (python) with Neo4j and I'm trying to add an index only on a subset of my keys (for now, simply keys named 'name' for optional index-based lookup).
I don't love the bulbflow Models (too restrictive) and I couldn't figure out how to do selective indexing without changing code since the 'autoindex' is a global setting -- I don't see how to configure it based on the key.
Has anyone done something like this?
-Andrew
You can disable Bulbs auto-indexing by setting g.config.autoindex to False.
See https://github.com/espeed/bulbs/blob/master/bulbs/config.py#L62
>>> from bulbs.neo4jserver import Graph
>>> g = Graph()
>>> g.config.autoindex = False
>>> g.vertices.create(name="James")
In the example above, this will cause the name property not to be indexed automatically.
Setting autoindex to False will switch to using the low-level client's create_vertex() method instead of the create_indexed_vertex() method:
See https://github.com/espeed/bulbs/blob/master/bulbs/neo4jserver/client.py#L422
The create_indexed_vertex() method has a keys arg, which you can use for selective indexing:
See https://github.com/espeed/bulbs/blob/master/bulbs/neo4jserver/client.py#L424
This is the low-level client method used by Bulbs models. You generally don't need to explicitly call the low-level client methods, but if you do, you can selectively index properties by including the property name in the keys arg.
To selectively index properties in a Model, simply override get_index_keys() in your Model definition:
See https://github.com/espeed/bulbs/blob/master/bulbs/model.py#L383
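A minimal sketch of such an override (the model fields are hypothetical; it assumes the Model API referenced in the link above):

from bulbs.model import Node
from bulbs.property import String

class Person(Node):
    element_type = "person"
    name = String(nullable=False)
    city = String()

    def get_index_keys(self):
        # index only "name"; "city" stays out of the index
        return ["name"]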
By default, Bulbs models index all properties. If no keys are provided, then all properties are indexed (like in TinkerPop/Blueprints).
See the Model _create() and get_bundle() methods:
_create() https://github.com/espeed/bulbs/blob/master/bulbs/model.py#L583
get_bundle() https://github.com/espeed/bulbs/blob/master/bulbs/model.py#L363
get_index_keys() https://github.com/espeed/bulbs/blob/master/bulbs/model.py#L383
To enable selective indexing for generic vertices and edges, I updated the Bulbs generic vertex/edge methods to include a _keys arg where you can supply a list of property names (keys) to index.
See https://github.com/espeed/bulbs/commit/4fe39d5a76675020286ec9aeaa8e71d58e3a432a
Now, to selectively index properties on generic vertices/edges, you can supply a list of property names to index:
>>> from bulbs.neo4jserver import Graph
>>> g = Graph()
>>> g.config.autoindex = False
>>> james = g.vertices.create(name="James", city="Dallas", _keys=["name"])
>>> julie = g.vertices.create(name="Julie", city="Dallas", _keys=["name"])
>>> g.edges.create(james, "knows", julie, timestamp=12345, someprop="somevalue", _keys=["someprop"])
In the example above, the name property will be indexed for each vertex, and someprop will be indexed for the edge. Note that city and timestamp will not be indexed because those property names were not explicitly included in the list of index keys.
If g.config.autoindex is True and _keys is None (the default), all properties will be indexed (just like before).
If g.config.autoindex is False and _keys is None, no properties will be indexed.
If _keys is explicitly set to a list of property names, only those properties will be indexed, regardless if g.config.autoindex is True or False.
See https://github.com/espeed/bulbs/blob/master/bulbs/neo4jserver/client.py#L422
NOTE: How auto-indexing works differs somewhat if you're using Neo4j Server, Rexster, or Titan Server, and the indexing architecture for all the graph-database servers has been in a state of flux for the past few months. It appears that all are moving from a manual-indexing system to auto-indexing.
For graph-database servers that did not have auto-indexing capability until recently (e.g. Neo4j Server), Bulbs enabled auto-indexing via custom Gremlin scripts that used the database's low-level manual indexing methods:
https://github.com/espeed/bulbs/blob/master/bulbs/neo4jserver/client.py#L1008
https://github.com/espeed/bulbs/blob/master/bulbs/neo4jserver/gremlin.groovy#L11
However, manual indexing has been deprecated among Neo4j Server, TinkerPop/Rexster, and Titan Server so Bulbs 0.4 indexing architecture will change accordingly. Selective indexing will still be possible by declaring your index keys upfront, like you would in an SQL create table statement.
BTW: What did you find restrictive about Models? Bulbs Models (and indeed the entire library) are designed to be flexible, so you can modify them to do whatever you need.
See the Lightbulb example for how to customize Bulbs Models: Is there a equivalent to commit in bulbs framework for neo4j
Let me know if you have any questions.
Let's say I have a program that has a large number of configuration options. The user can specify them in a config file. My program can parse this config file, but how should it internally store and pass around the options?
In my case, the software is used to perform a scientific simulation. There are about 200 options, most of which have sane defaults. Typically the user only has to specify a dozen or so. The difficulty I face is how to design my internal code. Many of the objects that need to be constructed depend on many configuration options. For example, an object might need several paths (for where data will be stored), some options that need to be passed to algorithms that the object will call, and some options that are used directly by the object itself.
This leads to objects needing a very large number of constructor arguments. Additionally, as my codebase is under very active development, it is a big pain to go through the call stack and pass a new configuration option all the way down to where it is needed.
One way to prevent that pain is to have a global configuration object that can be freely used anywhere in the code. I don't particularly like this approach as it leads to functions and classes that don't take any (or only one) argument and it isn't obvious to the reader what data the function/class deals with. It also prevents code reuse as all of the code depends on a giant config object.
Can anyone give me some advice about how a program like this should be structured?
Here is an example of what I mean for the configuration option passing style:
class A:
    def __init__(self, opt_a, opt_b, ..., opt_z):
        self.opt_a = opt_a
        self.opt_b = opt_b
        ...
        self.opt_z = opt_z

    def foo(self, arg):
        algo(arg, self.opt_a, self.opt_e)
Here is an example of the global config style:
class A:
    def __init__(self, config):
        self.config = config

    def foo(self, arg):
        algo(arg, self.config)
The examples are in Python but my question stands for any similar programming language.
matplotlib is a large package with many configuration options. It uses an rcParams module to manage all the default parameters; rcParams stores them in a dict. Each function then picks up its options from keyword arguments, for example:
from matplotlib import rcParams

def f(x, y, opt_a=None, opt_b=None):
    if opt_a is None:
        opt_a = rcParams['group1.opt_a']
A few design patterns will help
Prototype
Factory and Abstract Factory
Use these two patterns with configuration objects. Each method will then take a configuration object and use what it needs. Also consider applying a logical grouping to config parameters and think about ways to reduce the number of inputs.
Pseudocode:
// Consider we can run three different kinds of Simulations. sim1, sim2, sim3
ConfigFactory configFactory = new ConfigFactory("/path/to/option/file");
....
Simulation1 sim1;
Simulation2 sim2;
Simulation3 sim3;
sim1.run( configFactory.ConfigForSim1() );
sim2.run( configFactory.ConfigForSim2() );
sim3.run( configFactory.ConfigForSim3() );
Inside each factory method, it might create a configuration from a prototype object (one that has all of the "sane" defaults), so that the option file becomes just the things that differ from the defaults. This would be paired with clear documentation on what these defaults are and when a person (or another program) might want to change them.
Edit:
Also consider that each config returned by the factory is a subset of the overall config.
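A minimal Python sketch of the prototype-plus-factory idea (all names hypothetical):

import copy

# prototype: the "sane" defaults for a simulation
DEFAULTS = {"output_dir": "/tmp/sim", "tolerance": 1e-6, "max_iter": 1000}

class ConfigFactory:
    def __init__(self, user_options):
        self.user_options = user_options  # e.g. parsed from the option file

    def config_for_sim1(self):
        # copy the prototype, then overlay only what the user changed
        cfg = copy.deepcopy(DEFAULTS)
        cfg.update(self.user_options.get("sim1", {}))
        return cfg

factory = ConfigFactory({"sim1": {"tolerance": 1e-9}})
sim1_config = factory.config_for_sim1()  # a subset view for one simulation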
Either pass around the config-parsing class, or write a class that wraps it and intelligently pulls out the requested options.
Python's standard library configparser exposes the sections and options of an INI-style configuration file using the mapping protocol, so you can retrieve your options directly from it as though it were a dictionary.
import configparser

myconf = configparser.ConfigParser()
myconf.read('myconf.ini')
what_to_do = myconf['section']['option']
If you explicitly want to provide the options using the attribute notation, create a class that overrides __getattr__:
class MyConf:
    def __init__(self, path):
        self._parser = configparser.ConfigParser()
        self._parser.read(path)

    def __getattr__(self, option):
        # map each option name to the section it lives in
        return self._parser[{'what_to_do': 'section'}[option]][option]

myconf = MyConf('myconf.ini')
what_to_do = myconf.what_to_do
Have a module load the params into its namespace, then import it and use it wherever you want.
Also see related question here
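A minimal sketch of that module-namespace suggestion (file names and option names are hypothetical):

# settings.py -- parsed once at import time
import configparser

_parser = configparser.ConfigParser()
_parser.read("myconf.ini")

output_dir = _parser.get("paths", "output_dir", fallback="/tmp/out")
max_iter = _parser.getint("solver", "max_iter", fallback=1000)

# elsewhere in the codebase:
# from settings import output_dir, max_iter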