Here I am pulling JSON data from different websites, and I intend to embed some of the retrieved information into instance variables. The trouble is that each site's JSON keeps the information I want under different keys and list positions, so the dictionary "address" is unique per website.
I am struggling to find a way of creating a new instance of a class and passing it different dictionary keys to look up when new data is retrieved.
Something like this would be too easy I feel...
import requests, json

class B(object):
    def __init__(self, name, url, size, price):
        self.name = name
        self.url = url
        self.size_address = size
        self.price_address = price
        self.size = 0
        self.price = 0
        self.data = {}

    def retrieve(self):
        # Data grab from web
        try:
            grab = requests.get(self.url, timeout=10)
            self.data = grab.json()
        except requests.RequestException:
            raise RuntimeError(self.name + ' Error')

    def size(self):
        self.size = self.data[self.size_address]
        print self.size

    def price(self):
        self.price = self.data[self.price_address]
        print self.price
>>> a = B('Dave', 'www.dave.com/api', ['names'][0]['size'], ['names'][0]['prices'][0])
>>> a.size()
42030.20
I've had a look at abstract methods as well as binding functions written outside of the class definition. Binding functions looked promising, but I couldn't create a version of the same function for each class with different variables, because I would be reusing the same name.
Hopefully someone can point me in the right direction.
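For illustration, the idea being described (pass each instance the "address" of the value as a list of keys and indices, then walk that path when data arrives) might look roughly like this; the names and paths are hypothetical:

from functools import reduce
import requests

class B(object):
    def __init__(self, name, url, size_path, price_path):
        self.name = name
        self.url = url
        self.size_path = size_path    # e.g. ['names', 0, 'size']
        self.price_path = price_path  # e.g. ['names', 0, 'prices', 0]
        self.data = {}

    def retrieve(self):
        grab = requests.get(self.url, timeout=10)
        self.data = grab.json()

    def _lookup(self, path):
        # Walk the nested dict/list structure one key or index at a time.
        return reduce(lambda node, key: node[key], path, self.data)

    def size(self):
        return self._lookup(self.size_path)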
One way to address this is through design-pattern decorators (thanks to martineau for the correction) employing method injection.
Suppose you start off with the following class:
class Info(object):
    def __init__(self, data):
        self.data = data
This class takes a dictionary data, and stores it. The problem is, the key might change among instances.
E.g.,
a1 = Info({'a': 3, 'b': 4})
a2 = Info({'c': 5, 'b': 4})
and we then realize that for a1, the key is a, but for a2, the key is c.
So, we can write a function like so:
import types

def patch_it(info, key):
    def get_it(info):
        return info.data[key]
    info.get_it = types.MethodType(get_it, info)
which takes an Info object and a key, and injects into it a method get_it that, when called, returns the dictionary's value for that key.
Here's an example:
>>> a1 = Info({'a': 3, 'b': 4})
>>> patch_it(a1, 'a')
>>> a1.get_it()
3
>>> a2 = Info({'c': 5, 'b': 4})
>>> patch_it(a2, 'c')
>>> a2.get_it()
5
Note
For the specific example above, this is overkill: why not simply add a member to Info specifying the key? The point is that this approach lets you manipulate Info objects in arbitrarily complex ways, independently of the class itself.
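The simpler alternative that note alludes to might look like this (a sketch, not the poster's code):

class Info(object):
    def __init__(self, data, key):
        self.data = data
        self.key = key          # which key matters for this instance

    def get_it(self):
        return self.data[self.key]

a1 = Info({'a': 3, 'b': 4}, 'a')
print(a1.get_it())  # 3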
Related
I have an application I'm building in Python 2.7 that works as-is, but it doesn't feel clean, nor is it very explicit about what's happening, so if I walk away from the code for a while I have a hard time remembering how it actually works under the hood, which is obviously not good. I've refactored the code and it seems more explicit, but not really any cleaner.
I'm trying to figure out the cleanest way to initialize these classes in two different ways - 1) from a user-generated instantiation (in the event of adding a new object from scratch during program execution), or 2) from importing the history of an object (from a previous program execution) from JSON. Here's my latest way of going about this:
class Device(object):
    def __init__(self, dev_type, preset_prefix, default_preset,
                 from_json=False, json_path=None, **device_attrs):
        if not from_json:  # otherwise set in child class __init__
            self.name = device_attrs['name']
            self.sn = device_attrs['sn']
            self.mfg = device_attrs['mfg']
            self.tech = device_attrs['tech']
            self.model = device_attrs['model']
            self.sw_ver = device_attrs['sw_ver']
            self.hours = 0
        else:
            self.hours = device_attrs['hours']
        self.type = dev_type
        self.json = json_path
        self.preset_prefix = preset_prefix
        self.preset = default_preset
class Monitor(Device):
    def __init__(self, name, sn, mfg, tech, model, sw_ver, from_json=False,
                 json_path=None, **monitor_dict):
        if from_json:
            self.__dict__ = monitor_dict
        device_properties = {'name': name, 'sn': sn, 'mfg': mfg, 'tech': tech,
                             'model': model, 'sw_ver': sw_ver}
        monitor_dict.update(device_properties)
        super(Monitor, self).__init__('monitor', 'user', 1, from_json,
                                      json_path, **monitor_dict)
        if monitor_dict.get('cals'):
            self._init_cal_from_json(monitor_dict['cals'])
Now I can initialize from a previously saved JSON (generated from this object so I can be sure the key/value pairs are correct):
my_monitor = Monitor(from_json=True, json_path=device_json_file, **device_json_dict)
Or as a new object from scratch:
my_monitor = Monitor('monitor01', '12345', 'HP', 'LCD',
'HP-27', 'v1.0')
This seems a little bit messy, but it's still better than my original version, which didn't have any positional arguments for the child __init__ (making it hard to know what data MUST be passed in); it just took **monitor_dict and hoped it contained the right key/value pairs. Still, taking those arguments and merging them into a dict feels strange, even though I've refactored this multiple times and this seems to be the cleanest way of going about it.
Is this the best way to handle initializing an object in multiple ways, or can I somehow create two separate init functions, one for loading from JSON and one for creating brand new objects?
I prefer to create alternate constructors as class methods, something like this; you could create more if you need them, or adjust as necessary:
class Device(object):
    def __init__(self, dev_type, preset_prefix, default_preset, json_path=None, **device_attrs):
        self.name = device_attrs['name']
        self.sn = device_attrs['sn']
        self.mfg = device_attrs['mfg']
        self.tech = device_attrs['tech']
        self.model = device_attrs['model']
        self.sw_ver = device_attrs['sw_ver']
        self.hours = 0
        self.type = dev_type
        self.json = json_path
        self.preset_prefix = preset_prefix
        self.preset = default_preset
class Monitor(Device):
    @classmethod
    def new_from_json(cls, name, sn, mfg, tech, model, sw_ver, json_path=None, **monitor_dict):
        device_properties = {'name': name, 'sn': sn, 'mfg': mfg, 'tech': tech,
                             'model': model, 'sw_ver': sw_ver}
        monitor_dict.update(device_properties)
        obj = cls('monitor', 'user', 1, json_path, **monitor_dict)
        obj.__dict__.update(monitor_dict)
        return obj
As an example:
class Parent(object):
    def __init__(self, some):
        self.some = some

class Object(Parent):
    @classmethod
    def new_from_dict(cls, some):
        obj = cls(some)
        obj.adress = {"Me": 123}
        return obj
then:
obj = Object.new_from_dict("ME")
obj.adress
{"Me": 123}
obj.some
"ME"
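To tie this back to the question, here is a minimal two-path sketch (the attribute set is trimmed for brevity and the names are illustrative): the normal __init__ builds a brand-new object, while a classmethod rebuilds one from a previously saved dict.

class Device(object):
    def __init__(self, name, sn, hours=0):
        self.name = name
        self.sn = sn
        self.hours = hours

    @classmethod
    def from_json_dict(cls, json_dict):
        # Alternate constructor: rebuild the object from a dict produced by an
        # earlier run (e.g. json.load of a saved file).
        obj = cls(json_dict['name'], json_dict['sn'], json_dict.get('hours', 0))
        obj.__dict__.update(json_dict)  # keep any extra saved attributes
        return obj

new_device = Device('monitor01', '12345')
restored = Device.from_json_dict({'name': 'monitor01', 'sn': '12345', 'hours': 250})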
I have a class in python that is used to generate parameter files for a piece of software. This software is used in an iterative process and requires a new set of parameter files for each iteration. As such the class PropGen is called upon to create the new files just before each iteration.
The class is fed the default parameters for these files once, before the entire process, and then, given the current iteration, modifies these parameters and writes them to the new file. The way I have been accomplishing this is by storing the defaults in an OrderedDict self.params and creating another OrderedDict self.output_params that collects the modified values before being used to write to a file.
My problem is that no matter how I move the values from self.params to self.output_params the two dictionaries have the same object id and thus any changes to self.output_params are reflected in self.params. So far I have tried the following:
EDIT: Found the error, a missing call to deepcopy at the end of the file.
from collections import OrderedDict
from copy import deepcopy

class A(object):
    def __init__(self):
        self.a = OrderedDict({'a': 1, 'b': 2})
        self.b = deepcopy(self.a)
        self.iter = 0

    def do_some_work(self, key):
        val = self.a[key]
        self.b[key] = val.replace('#', str(self.iter))

    def create(self, filename):
        lines = []
        for item in self.b.items():
            lines.append('='.join(item) + '\n')
        with open(filename, 'w') as file_obj:
            file_obj.writelines(lines)
        # Here was the error
        self.b = self.a
        # should have been self.b = deepcopy(self.a)
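For reference, the difference between plain assignment, a shallow copy, and a deep copy in a few lines:

from collections import OrderedDict
from copy import deepcopy

params = OrderedDict([('a', ['x']), ('b', 2)])

alias = params             # no copy at all: both names point at the same dict
shallow = params.copy()    # new dict, but values are still shared objects
deep = deepcopy(params)    # new dict and new copies of the nested values

alias['b'] = 99
print(params['b'])         # 99, the "copy" by assignment is the same object

params['a'].append('y')
print(shallow['a'])        # ['x', 'y'], the shallow copy shares the nested list
print(deep['a'])           # ['x'], the deep copy does not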
The problem is in something you haven't shown us, so work harder at providing an executable example that actually demonstrates the problem. For example, you can run this:
from collections import OrderedDict

class C:
    def __init__(self):
        self.d1 = OrderedDict(a=1, b=2)

    def copy(self):
        self.d2 = self.d1.copy()

c = C()
c.copy()
print(c.d1 is c.d2)
c.d1['a'] = 666
print(c.d1)
print(c.d2)
For me, under Python 2 or 3, it prints:
False
OrderedDict([('a', 666), ('b', 2)])
OrderedDict([('a', 1), ('b', 2)])
What does it print for you? Assuming it works for you, what haven't you shown us about your code?
Very new to Python and could do with some help. How do I go about referencing members in a class?
I have two CSV files. One contains a series of parts and the associated material ID. The other is a material index that contains material IDs and some information about each material.
My intention is to create a third file that contains all of the parts, their material IDs, and the information, if present, from the material index.
I have created a class for the material index and am trying to access objects of this class using material IDs from the part file; however, this is not working and I am unsure why. Any help is appreciated:
class material():
    def __init__(self, name, ftu, e, nu):
        self.name = name
        self.ftu = ftu
        self.e = e
        self.nu = nu
def extract_FTU_Strain(input_file_parts, input_file_FTU, output_file):
    parts = {}
    materials = {}
    for aline in open(input_file_FTU, 'r'):
        comma_split = aline.strip().split(',')
        name = comma_split[1]
        ftu = comma_split[8]
        e = comma_split[9]
        nu = comma_split[7]
        try:
            materials[int(comma_split[0])] = material(comma_split[1], comma_split[8], comma_split[9], comma_split[7])
            #materials[comma_split[0]] = material(comma_split[1],comma_split[8],comma_split[9],comma_split[7])
        except:
            pass
    output = open(output_file, 'w')
    output.write('Part Title, Part Id, Material Id, FTU, e' + '\n')
    for i in open(input_file_parts, 'r'):
        semicolon_split = i.strip().split(';')
        material_id = semicolon_split[3]
        part = semicolon_split[0]
        part_id = semicolon_split[1]
        material_name = materials[material_id].name
        FTU = materials[material_id].ftu
        Stress = materials[material_id].e
        output.write(','.join([part, part_id, material_name, material_id, FTU, Stress]) + '\n')
    output.close()
import sys
input_file_parts = '/parttable.csv'
input_file_FTU = '/Material_Index.csv'
output_file = '/PYTHONTESTING123.csv'
extract_FTU_Strain(input_file_parts,input_file_FTU,output_file)
Since in the comments you said your error is in materials[material_id], make material_id an integer, as it was an integer when you created the object.
You created it this way
materials[int(comma_split[0])]=...
But later you looked it up without converting material_id to an int. Do this in your for loop, before using it to write the output:
material_id = int(material_id)
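In context, the fix might look like this (assuming column 3 of the parts file really does hold the id):

for i in open(input_file_parts, 'r'):
    semicolon_split = i.strip().split(';')
    material_id = int(semicolon_split[3])   # matches the int keys used when building `materials`
    material_name = materials[material_id].name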
I may have misinterpreted your question, but going off the line 'How do I go about referencing members in a class?' you can reference member variables like so:
class Demonstration:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def printMembers(self):
        print self.a, self.b
So inside the class you can use self.someVariable to reference member variables.
If you want to access them outside of the class:
myclass.myvariable
I'll happily edit the answer if I haven't quite understood your question or if there is a specific error you are getting.
I did not understand what error you have; could you post the traceback? Anyway, you are creating a class instance at the time of assignment. For more elegant programming, you could simply do:
m = material(name, ftu, e, nu)
This way you can access the instance variables like this:
m.name
m.ftu
...
And try/except with a bare pass is very dangerous.
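For instance, catching only the exceptions you actually expect keeps real bugs visible (a sketch, not the original code):

try:
    materials[int(comma_split[0])] = material(comma_split[1], comma_split[8],
                                              comma_split[9], comma_split[7])
except (ValueError, IndexError) as err:
    # bad id or a short row: report it instead of silently skipping
    print('skipping row: %s' % err)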
I have a class which is pulling JSON data with keys, but the problem is that per instance of this class, the JSON data may not have keys for everything I am trying to grab. Currently, my class is set up like this:
class Show():
    def __init__(self, data):
        self.data = data
        self.status = self.data['status']
        self.rating = self.data['rating']
        self.genres = self.data['genres']
        self.weight = self.data['weight']
        self.updated = self.data['updated']
        self.name = self.data['name']
        self.language = self.data['language']
        self.schedule = self.data['schedule']
        self.url = self.data['url']
        self.image = self.data['image']
And so on, there are more parameters than that. I'm trying to avoid the messiness of having a try-except block for EACH AND EVERY one of those (27) lines. Is there a better way? Ultimately, I want a parameter to be assigned None if the JSON key doesn't exist.
If you're going to set a default value to the attribute if it's not in the data dictionary, use data.get('key') rather than data['key']. The get method will return None if the key does not exist, rather than raising a KeyError exception. If you want a different default value than None, you can pass a second argument to get and that is what will be returned.
So, your code could become:
class Show():
    def __init__(self, data):
        self.data = data
        self.status = self.data.get('status')
        self.rating = self.data.get('rating')
        self.genres = self.data.get('genres')
        self.weight = self.data.get('weight')
        self.updated = self.data.get('updated')
        self.name = self.data.get('name')
        self.language = self.data.get('language')
        self.schedule = self.data.get('schedule')
        self.url = self.data.get('url')
        self.image = self.data.get('image')
Use dict.get, which provides a default value instead of raising an exception for missing keys.
For example, you can change this:
self.status = self.data['status']
into this:
self.status = self.data.get('status')
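If 27 near-identical lines still feel repetitive, a loop over the field names with setattr gives the same result (a sketch; extend FIELDS with the remaining keys):

class Show(object):
    FIELDS = ('status', 'rating', 'genres', 'weight', 'updated',
              'name', 'language', 'schedule', 'url', 'image')

    def __init__(self, data):
        self.data = data
        for field in self.FIELDS:
            # Missing keys simply become None attributes.
            setattr(self, field, data.get(field))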
You could change your code to something like:
class Show():
    def __init__(self, data):
        self.data = data
        self.__dict__.update(data)
data = {'status': True, 'ratings': [1,2,3], 'foo': "blahblah"}
aShow = Show(data)
"""
>>> aShow.status
True
>>> aShow.ratings
[1,2,3]
>>> aShow.something_not_in_dict
AttributeError: Show instance has no attribute 'something_not_in_dict'
"""
This achieves a similar effect, except that trying to access something on your Show instance that isn't a key in your data dictionary raises an AttributeError instead of returning None.
I've got a bad smell in my code. Perhaps I just need to let it air out for a bit, but right now it's bugging me.
I need to create three different input files to run three Radiative Transfer Modeling (RTM) applications, so that I can compare their outputs. This process will be repeated for thousands of sets of inputs, so I'm automating it with a python script.
I'd like to store the input parameters as a generic python object that I can pass to three other functions, which will each translate that general object into the specific parameters needed to run the RTM software they are responsible for. I think this makes sense, but feel free to criticize my approach.
There are many possible input parameters for each piece of RTM software. Many of them overlap. Most of them are kept at sensible defaults, but should be easy to change.
I started with a simple dict
config = {
    'day_of_year': 138,
    'time_of_day': 36000,  # seconds
    'solar_azimuth_angle': 73,  # degrees
    'solar_zenith_angle': 17,  # degrees
    ...
}
There are a lot of parameters, and they can be cleanly categorized into groups, so I thought of using dicts within the dict:
config = {
    'day_of_year': 138,
    'time_of_day': 36000,  # seconds
    'solar': {
        'azimuth_angle': 73,  # degrees
        'zenith_angle': 17,  # degrees
        ...
    },
    ...
}
I like that. But there are a lot of redundant properties. The solar azimuth and zenith angles, for example, can be found if the other is known, so why hard-code both? So I started looking into python's builtin property. That lets me do nifty things with the data if I store it as object attributes:
class Configuration(object):
    day_of_year = 138
    time_of_day = 36000  # seconds
    solar_azimuth_angle = 73  # degrees

    @property
    def solar_zenith_angle(self):
        return 90 - self.solar_azimuth_angle

    ...

config = Configuration()
But now I've lost the structure I had from the second dict example.
Note that some of the properties are less trivial than my solar_zenith_angle example, and might require access to other attributes outside of the group of attributes it is a part of. For example I can calculate solar_azimuth_angle if I know the day of year, time of day, latitude, and longitude.
What I'm looking for:
A simple way to store configuration data whose values can all be accessed in a uniform way, are nicely structured, and may exist either as attributes (real values) or properties (calculated from other attributes).
A possibility that is kind of boring:
Store everything in the dict of dicts I outlined earlier, and have other functions run over the object and fill in the calculable values? This doesn't sound fun. Or clean. To me it sounds messy and frustrating.
An ugly one that works:
After a long time trying different strategies and mostly getting nowhere, I came up with one possible solution that seems to work:
My classes: (smells a bit func-y, er, funky. def-initely.)
class SubConfig(object):
    """
    Store logical groupings of object attributes and properties.

    The parent object must be passed to the constructor so that we can still
    access the parent object's other attributes and properties. Useful if we
    want to use them to compute a property in here.
    """
    def __init__(self, parent, *args, **kwargs):
        super(SubConfig, self).__init__(*args, **kwargs)
        self.parent = parent

class Configuration(object):
    """
    Some object which holds many attributes and properties.

    Related configuration settings are grouped in SubConfig objects.
    """
    def __init__(self, *args, **kwargs):
        super(Configuration, self).__init__(*args, **kwargs)
        self.root_config = 2

        class _AConfigGroup(SubConfig):
            sub_config = 3

            @property
            def sub_property(self):
                return self.sub_config * self.parent.root_config

        self.group = _AConfigGroup(self)  # Stinky?!
How I can use them: (works as I would like)
config = Configuration()
# Inspect the state of the attributes and properties.
print("\nInitial configuration state:")
print("config.rootconfig: %s" % config.root_config)
print("config.group.sub_config: %s" % config.group.sub_config)
print("config.group.sub_property: %s (calculated)" % config.group.sub_property)
# Inspect whether the properties compute the correct value after we alter
# some attributes.
config.root_config = 4
config.group.sub_config = 5
print("\nState after modifications:")
print("config.rootconfig: %s" % config.root_config)
print("config.group.sub_config: %s" % config.group.sub_config)
print("config.group.sub_property: %s (calculated)" % config.group.sub_property)
The behavior: (output of execution of all of the above code, as expected)
Initial configuration state:
config.rootconfig: 2
config.group.sub_config: 3
config.group.sub_property: 6 (calculated)
State after modifications:
config.rootconfig: 4
config.group.sub_config: 5
config.group.sub_property: 20 (calculated)
Why I don't like it:
Storing configuration data in class definitions inside of the main object's __init__() doesn't feel elegant. Especially having to instantiate them immediately after definition like that. Ugh. I can deal with that for the parent class, sure, but doing it in a constructor...
Storing the same classes outside the main Configuration object doesn't feel elegant either, since properties in the inner classes may depend on the attributes of Configuration (or their siblings inside it).
I could deal with defining the functions outside of everything, so that inside I'd have things like
@property
def solar_zenith_angle(self):
    return calculate_zenith(self.solar_azimuth_angle)
but I can't figure out how to do something like
@property
def solar.zenith_angle(self):
    return calculate_zenith(self.solar.azimuth_angle)
(when I try to be clever about it I always run into <property object at 0xXXXXX>)
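As a side note on that <property object at 0x...>: property objects are descriptors, and Python only honours descriptors found on the class, not ones stored in an instance's __dict__. A minimal illustration of that general fact:

class Group(object):
    pass

g = Group()
g.angle = property(lambda self: 42)
print(g.angle)         # <property object at 0x...>: stored on the instance, never invoked

Group.angle = property(lambda self: 42)
print(Group().angle)   # 42: found on the class, so the descriptor protocol kicks in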
So what is the right way to go about this? Am I missing something basic or taking a very wrong approach? Does anyone know a clever solution?
Help! My python code isn't beautiful! I must be doing something wrong!
Phil,
Your hesitation about func-y config is very familiar to me :)
I suggest you store your config not as a Python file but as a structured data file. I personally prefer YAML because it looks clean, just like what you designed at the very beginning. Of course, you will need to provide formulas for the auto-calculated properties, but that is not too bad unless you put in too much code. Here is my implementation using the PyYAML lib.
The config file (config.yml):
day_of_year: 138
time_of_day: 36000 # seconds
solar:
    azimuth_angle: 73 # degrees
    zenith_angle: !property 90 - self.azimuth_angle
The code:
import yaml

yaml.add_constructor("tag:yaml.org,2002:map", lambda loader, node:
    type("Config", (object,), loader.construct_mapping(node))())
yaml.add_constructor("!property", lambda loader, node:
    property(eval("lambda self: " + loader.construct_scalar(node))))

config = yaml.load(open("config.yml"))

print "LOADED config.yml"
print "config.day_of_year:", config.day_of_year
print "config.time_of_day:", config.time_of_day
print "config.solar.azimuth_angle:", config.solar.azimuth_angle
print "config.solar.zenith_angle:", config.solar.zenith_angle, "(calculated)"
print
config.solar.azimuth_angle = 65
print "CHANGED config.solar.azimuth_angle = 65"
print "config.solar.zenith_angle:", config.solar.zenith_angle, "(calculated)"
The output:
LOADED config.yml
config.day_of_year: 138
config.time_of_day: 36000
config.solar.azimuth_angle: 73
config.solar.zenith_angle: 17 (calculated)
CHANGED config.solar.azimuth_angle = 65
config.solar.zenith_angle: 25 (calculated)
The config can be of any depth and properties can use any subgroup values. Try this for example:
a: 1
b:
    c: 3
    d: some text
    e: true
    f:
        g: 7.01
x: !property self.a + self.b.c + self.b.f.g
Assuming you already loaded this config:
>>> config
<__main__.Config object at 0xbd0d50>
>>> config.a
1
>>> config.b
<__main__.Config object at 0xbd3bd0>
>>> config.b.c
3
>>> config.b.d
'some text'
>>> config.b.e
True
>>> config.b.f
<__main__.Config object at 0xbd3c90>
>>> config.b.f.g
7.01
>>> config.x
11.01
>>> config.b.f.g = 1000
>>> config.x
1004
UPDATE
Let us have a property config.b.x which uses both self, parent and subgroup attributes in its formula:
a: 1
b:
    x: !property self.parent.a + self.c + self.d.e
    c: 3
    d:
        e: 5
Then we just need to add a reference to parent in subgroups:
import yaml

def construct_config(loader, node):
    attrs = loader.construct_mapping(node)
    config = type("Config", (object,), attrs)()
    for k, v in attrs.iteritems():
        if v.__class__.__name__ == "Config":
            setattr(v, "parent", config)
    return config

yaml.add_constructor("tag:yaml.org,2002:map", construct_config)
yaml.add_constructor("!property", lambda loader, node:
    property(eval("lambda self: " + loader.construct_scalar(node))))

config = yaml.load(open("config.yml"))
config = yaml.load(open("config.yml"))
And let's see how it works:
>>> config.a
1
>>> config.b.c
3
>>> config.b.d.e
5
>>> config.b.parent == config
True
>>> config.b.d.parent == config.b
True
>>> config.b.x
9
>>> config.a = 1000
>>> config.b.x
1008
Well, here's an ugly way to at least make sure your properties get called:
class ConfigGroup(object):
    def __init__(self, config):
        self.config = config

    def __getattribute__(self, name):
        v = object.__getattribute__(self, name)
        if hasattr(v, '__get__'):
            return v.__get__(self, ConfigGroup)
        return v

class Config(object):
    def __init__(self):
        self.a = 10
        self.group = ConfigGroup(self)
        self.group.a = property(lambda group: group.config.a*2)
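A quick usage sketch of the above:

c = Config()
print(c.group.a)   # 20: __getattribute__ sees the property object and calls its __get__
c.a = 7
print(c.group.a)   # 14: recomputed from the parent config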
Of course, at this point you might as well forego property entirely and just check if the attribute is callable in __getattribute__.
Or you could go all out and have fun with metaclasses:
def config_meta(classname, parents, attrs):
    defaults = {}
    groups = {}
    newattrs = {'defaults': defaults, 'groups': groups}
    for name, value in attrs.items():
        if name.startswith('__'):
            newattrs[name] = value
        elif isinstance(value, type):
            groups[name] = value
        else:
            defaults[name] = value

    def init(self):
        for name, value in defaults.items():
            self.__dict__[name] = value
        for name, value in groups.items():
            group = value()
            group.config = self
            self.__dict__[name] = group

    newattrs['__init__'] = init
    return type(classname, parents, newattrs)

class Config2(object):
    __metaclass__ = config_meta

    a = 10
    b = 2

    class group(object):
        c = 5

        @property
        def d(self):
            return self.c * self.config.a
Use it like this:
>>> c2 = Config2()
>>> c2.a
10
>>> c2.group.d
50
>>> c2.a = 6
>>> c2.group.d
30
Final edit (?): if you don't want to have to "backtrack" using self.config in subgroup property definitions, you can use the following instead:
class group_property(property):
    def __get__(self, obj, objtype=None):
        return super(group_property, self).__get__(obj.config, objtype)

    def __set__(self, obj, value):
        super(group_property, self).__set__(obj.config, value)

    def __delete__(self, obj):
        return super(group_property, self).__delete__(obj.config)
class Config2(object):
    ...
    class group(object):
        ...
        @group_property
        def e(config):
            return config.group.c * config.a
group_property receives the base config object instead of the group object, so paths always start from the root. Therefore, e is equivalent to the previously defined d.
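Assuming e is added to the group class of the metaclass example above, it behaves just like d:

>>> c2 = Config2()
>>> c2.group.e
50
>>> c2.a = 6
>>> c2.group.e
30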
BTW, supporting nested groups is left as an exercise for the reader.
Wow, I just read an article about descriptors on r/python today, but I don't think hacking descriptors is going to give you what you want.
The only thing I know that handles sub-configurations like that is flatland. Here's how it would work in Flatland anyhow.
But you could do:
class Configuration(Form):
    day_of_year = Integer
    time_of_day = Integer

    class solar(Form):
        azimuth_angle = Integer
        solar_angle = Integer
Then load the dictionary in
config = Configuration({
    'day_of_year': 138,
    'time_of_day': 36000,  # seconds
    'solar': {
        'azimuth_angle': 73,  # degrees
        'zenith_angle': 17,  # degrees
        ...
    },
    ...
})
I love flatland, but I'm not sure you gain much by using it.
You could add a metaclass or decorator to your class definition.
something like
def instantiate(klass):
    return klass()

class Configuration(object):
    @instantiate
    class solar(object):
        @property
        def azimuth_angle(self):
            return self.azimuth_angle
That might be better. Then create a nice __init__ on Configuration that can load all the data from a dictionary. I dunno maybe someone else has a better idea.
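One possible shape for that __init__, assuming the keys of the incoming dict mirror the attribute and group names (this is a sketch, not the poster's code):

def instantiate(klass):
    return klass()

class Configuration(object):
    @instantiate
    class solar(object):
        azimuth_angle = 73

    def __init__(self, settings=None):
        # Copy plain values onto the instance; push nested dicts into the
        # matching (already instantiated) group objects. Note that the group
        # instance is a class attribute, so it is shared in this sketch.
        for key, value in (settings or {}).items():
            target = getattr(self, key, None)
            if isinstance(value, dict) and target is not None:
                for sub_key, sub_value in value.items():
                    setattr(target, sub_key, sub_value)
            else:
                setattr(self, key, value)

config = Configuration({'day_of_year': 138, 'solar': {'azimuth_angle': 65}})
print(config.solar.azimuth_angle)   # 65
print(config.day_of_year)           # 138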
Here's something a little more complete (without as much magic as LaC's answer, but slightly less generic).
def instantiate(clazz):
    return clazz()

# dummy functions for testing
calc_zenith_angle = calc_azimuth_angle = lambda x: 3

class Solar(object):
    def __init__(self):
        if getattr(self, 'azimuth_angle', None) is None and getattr(self, 'zenith_angle', None) is None:
            raise AttributeError("must have either azimuth_angle or zenith_angle provided")
        if getattr(self, 'zenith_angle', None) is None:
            self.zenith_angle = calc_zenith_angle(self.azimuth_angle)
        elif getattr(self, 'azimuth_angle', None) is None:
            self.azimuth_angle = calc_azimuth_angle(self.zenith_angle)

class Configuration(object):
    day_of_year = 138
    time_of_day = 3600

    @instantiate
    class solar(Solar):
        azimuth_angle = 73
        #zenith_angle = 17  # not defined

# if you don't want auto-calculation to be done automagically
class ConfigurationNoAuto(object):
    day_of_year = 138
    time_of_day = 3600

    @instantiate
    class solar(Solar):
        azimuth_angle = 73

        @property
        def zenith_angle(self):
            return calc_zenith_angle(self.azimuth_angle)

config = Configuration()
config_no_auto = ConfigurationNoAuto()
>>> config.day_of_year
138
>>> config_no_auto.day_of_year
138
>>> config_no_auto.solar.azimuth_angle
73
>>> config_no_auto.solar.zenith_angle
3
>>> config.solar.zenith_angle
3
>>> config.solar.azimuth_angle
73
I think I would rather subclass dict so that it fell back to a default if no data was available. Something like this:
class fallbackdict(dict):
    ...

defaults = {'pi': 3.14}

x_config = fallbackdict(defaults)
x_config.update({
    'planck': 6.62606957e-34
})
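One possible shape for the elided fallbackdict, assuming the fallback is done via __missing__ (a sketch, not the answerer's actual class):

class fallbackdict(dict):
    """Dict that falls back to a shared defaults mapping for missing keys."""
    def __init__(self, defaults, *args, **kwargs):
        super(fallbackdict, self).__init__(*args, **kwargs)
        self.defaults = defaults

    def __missing__(self, key):
        # Called by dict.__getitem__ only when the key is not present here.
        return self.defaults[key]

defaults = {'pi': 3.14}
x_config = fallbackdict(defaults)
x_config.update({'planck': 6.62606957e-34})
print(x_config['planck'])  # 6.62606957e-34
print(x_config['pi'])      # 3.14, taken from defaults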
The other aspect can be addressed with callables. Whether this is elegant or ugly depends on whether datatype declarations are useful:
pi: (float, 3.14)
calc = lambda v: v[0](v[1])

x_config.update({
    'planck': (float, 6.62606957e-34),
    'calculated': (lambda x: 1.0 - calc(x_config['planck']), None)
})
Depending on the circumstances, the lambda might be broken out if it is used many times.
Don't know if it is better, but it mostly preserves the dictionary style.