I've created a subclass of dict as per this question. What I'd like to do is be able to create a new dictionary of my subclass by using bracket notation. Namely, when I have {"key":"value"}, I'd like it to call mydict(("key","value")). Is this possible?
No. And for good reasons: it would violate the expectations of people reading the code. It would also likely break any third-party libraries that you import, which would (reasonably) expect dict literals to be standard Python dictionaries.
A better way to get this without lots of extra typing is to simply add a classmethod such as from_dict to your custom dictionary class that consumes a plain dict and returns an instance of your custom dictionary (from itself can't be used as the name, since it's a reserved keyword).
MyDict.from_dict({
    "key": "value"
})
Where an implementation of from_dict might look something like
@classmethod
def from_dict(cls, dictionary):
    new_inst = cls()
    for key, value in dictionary.items():
        new_inst[key] = value
    return new_inst
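For example, assuming MyDict is the dict subclass from the question with the method above added:

d = MyDict.from_dict({"key": "value"})
print(type(d).__name__, d["key"])  # MyDict value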
Edit based on comment:
user2357112 correctly points out that you could just use the constructor, as long as your custom class's constructor has the same signature as dict's:
some_instance = MyDict({"key": "value"})
If you've messed with the constructor's signature, though, you'll have to go the custom route a la from_dict.
Related
I've seen a few questions with similar titles, but none of them seems suitable for me.
I'd like to create python objects, possibly with methods, on the fly with clean and "pythonic" syntax. What I have so far is:
import types
class QuickObject:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def with_methods(self, **kwargs):
        for name, method in kwargs.items():
            self.__dict__[name] = types.MethodType(method, self)
        return self
Possible usages:
ob = QuickObject(x=10, y=20)
print(ob.x, ob.y)
ob = QuickObject(x=10, y=20).with_methods(sum=lambda s: s.x + s.y)
print(ob.x, ob.y, ob.sum())
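For context on why types.MethodType is used in with_methods above, here is a small sketch (my own illustration, not from the original post): a plain callable stored as an instance attribute is not bound, so it never receives the instance as its first argument.

import types

class Plain:
    pass

ob = Plain()
ob.unbound = lambda s: s                      # plain function stored on the instance
ob.bound = types.MethodType(lambda s: s, ob)  # bound method, receives ob automatically

print(ob.bound() is ob)  # True
# ob.unbound() would raise TypeError: the lambda still expects its 's' argument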
It works and looks clear, but it seems a little bit bizarre, and I believe there's a better way to achieve this than my definition of the QuickObject class.
I am aware of namedtuple, but it's immutable, doesn't support methods, and it's not that cool because it has to be defined beforehand for every object with different fields.
Built-in dictionaries also don't provide syntax for method calling, nor do they let you access data with dictionary.key syntax without a custom dict subclass.
TLDR: I am looking for a recipe for on-the-fly objects with:
mutable fields,
methods,
nice ClassName(field1=value, field2=value, ...) syntax.
Further reading:
How to use a dot "." to access members of dictionary?
How to create objects on the fly in python?
Using type (external link)
I'm writing YAML configuration serialization in Python (using YAML because it's a tree-of-objects configuration and I want the configuration to be as human-readable as possible).
I have several problems with this:
There are several internal (non-configuration) members that are used by the objects and that I therefore don't want to store in the config file.
Some configuration members have default values, and I don't want to store them if they are at their defaults (this doesn't affect deserialization either).
In Java you had Jackson annotations such as @JsonInclude(Include.NON_NULL) and others that do this for JSON files. I found nothing similar for YAML (or even for JSON) in Python. I know how to write this myself (using the YAML package API), but I'd rather not if it's already implemented somewhere.
An example of a class I would like to serialize:
class Locator(object):
    def __init__(self, multiple=False):
        # configurable parameters
        self.multiple = multiple
        # internal parameters (used in locate method)
        self.precedents = []
        self.segments = []
        self.probabilities = []

    def locate(self, message):
        """
        do stuff to locate stuff on message
        """
        ...
        yield segment
Here we see the root class, which holds a configuration parameter (multiple) that I only wish to serialize if it is True, and other members used in its operation, such as its children (precedents), which I don't want to serialize at all.
Can anyone help me with this?
I think the honest answer is "probably not", and the reason is that what you're reaching for here just isn't really idiomatic in Python. If you squint a little, there is a strong resemblance between Python dicts and JSON objects -- and squint some more and YAML looks like a whitespace-y dialect of JSON -- so when we need to serialize things from Python we tend to write some custom mapping of thing to dict, stuff it in a JSON/YAML serializer, and be done with it.
There are some shortcuts and idiomatic trickery that can come in handy in the thing => dict step. For example, a namedtuple subclass with methods on it will leave said methods out when you call _asdict on it:
In [1]: from collections import namedtuple
In [2]: class Locator(namedtuple("Locator", "foo bar baz")):
   ...:     def hide(self):
   ...:         pass
   ...:
In [3]: wow = Locator(1,2,3)
In [4]: wow._asdict()
Out[4]: OrderedDict([('foo', 1), ('bar', 2), ('baz', 3)])
Of course a tuple is not mutable, so this is not a general purpose solution if you really need a class with mutable attributes, and furthermore this doesn't address your desire to drop certain attributes from the serialization in a declarative way.
One nice third-party library that might fit your needs is attrs. This library provides something like an extra-fancy namedtuple with a lot of customizability, including filters and defaults, which you might be able to work into something you find comfortable. It's not 1:1 with what you're reaching for here, but it could be a start.
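As a hedged illustration of how attrs might cover the "skip defaults, skip internals" part (the field names mirror the question, but the filter logic is my own sketch, not something attrs ships as a one-liner):

import attr

@attr.s(auto_attribs=True)
class Locator:
    multiple: bool = False                    # configurable, default False
    precedents: list = attr.ib(factory=list)  # internal, never serialized

def to_config_dict(obj):
    """Keep only fields that are not internal and differ from their declared default."""
    def keep(attribute, value):
        return attribute.name != "precedents" and value != attribute.default
    return attr.asdict(obj, filter=keep)

print(to_config_dict(Locator()))               # {} -- everything at its default
print(to_config_dict(Locator(multiple=True)))  # {'multiple': True}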
This is a solution I wrote in the meantime for JSON, but it's very specific, and I would love to find a package that already solved this in a more general manner.
import inspect
import json
import re
from collections import OrderedDict


class LocatorEncoder(json.JSONEncoder):
    """
    custom Locator json encoder to encode only configuration related parameters
    it encodes only:
    1. protected data members (whose names start with a single '_')
    2. members that are different from their default value
    NOTE: for this filtering encoder to work two strong conventions must be upheld, namely:
    1. Configuration data members must start with a single preceding '_'
    2. They must differ from their correlated __init__ parameter by only that '_'
    """

    @staticmethod
    def get_default_args(f):
        return {
            k: v.default
            for k, v in inspect.signature(f).parameters.items()
            if v.default is not inspect.Parameter.empty
        }

    @staticmethod
    def filter(d, defaults):
        """
        this is the filtering method
        :param d: dictionary of members to filter
        :param defaults: default values to filter out
        :return: ordered dictionary with only protected members ('_') that do not have their default value
                 the return dictionary is ordered because it prints nicer
        """
        filtered = OrderedDict()
        for k, v in d.items():
            # this is the filter logic (key starts with one _ and is not its default value)
            if re.match(r'_[^_]', k) and (k[1:] not in defaults or defaults[k[1:]] != v):
                filtered[k] = v
        return filtered

    def default(self, o):
        """
        iterate on classes in the object's mro and build a list of default constructor values
        :param o: the object to be json encoded
        :return: encoded dictionary for the object serialization
        """
        if isinstance(o, Locator):
            defaults = {}
            for cl in o.__class__.mro():
                # iterate on all the default arguments of the __init__ method
                for k, v in self.get_default_args(cl.__init__).items():
                    # update the key with value if it doesn't already exist
                    defaults[k] = defaults.get(k, v)
            # build the filtered configuration data members and add precedents in a recursive call to this default method
            filtered_dictionary = self.filter(o.__dict__, defaults)
            precedents = []
            for precedent in o.precedents:
                precedents.append(self.default(precedent))
            filtered_dictionary["precedents"] = precedents
            return {'__{}__'.format(o.__class__.__name__): filtered_dictionary}
        return super().default(o)
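A hedged usage sketch follows; note that the encoder's underscore convention means the Locator shown earlier would need to keep its configurable state in self._multiple (rather than self.multiple) for the value to survive the filter:

locator = Locator(multiple=True)
print(json.dumps(locator, cls=LocatorEncoder, indent=2))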
I am working on a project where I have a number of custom classes to interface with a varied collection of data on a user's system. These classes only have properties as user-facing attributes. Some of these properties are decently resource intensive, so I want to only run the generation code once, and store the returned value on disk (cache it, that is) for faster retrieval on subsequent runs. As it stands, this is how I am accomplishing this:
import functools


def stored_property(func):
    """This ``decorator`` adds on-disk functionality to the `property`
    decorator. This decorator is also a Method Decorator.

    Each key property of a class is stored in a settings JSON file with
    a dictionary of property names and values (e.g. :class:`MyClass`
    stores its properties in `my_class.json`).
    """
    @property
    @functools.wraps(func)
    def func_wrapper(self):
        print('running decorator...')
        try:
            var = self.properties[func.__name__]
            if var:
                # property already written to disk
                return var
            else:
                # property written to disk as `null`
                return func(self)
        except AttributeError:
            # `self.properties` does not yet exist
            return func(self)
        except KeyError:
            # `self.properties` exists, but property is not a key
            return func(self)
    return func_wrapper
class MyClass(object):
    def __init__(self, wf):
        self.wf = wf
        self.properties = self._properties()

    def _properties(self):
        # get name of class in underscore format
        class_name = convert(self.__class__.__name__)
        # `self.wf` is a library object (from Alfred workflows) used for interacting with data stored on disk
        properties = self.wf.stored_data(class_name)
        # if no file on disk, or one of the properties has a null value
        if properties is None or None in properties.values():
            # get names of all properties of this class
            propnames = [k for (k, v) in self.__class__.__dict__.items()
                         if isinstance(v, property)]
            properties = dict()
            for prop in propnames:
                # generate dictionary of property names and values
                properties[prop] = getattr(self, prop)
            # use the external library to save that dictionary to disk in JSON format
            self.wf.store_data(class_name, properties,
                               serializer='json')
        # return either the data read from file, or data generated in situ
        return properties

    # this decorator ensures that the generating code is only run if necessary
    @stored_property
    def only_property(self):
        # some code to get data
        return 'this is my property'
This code works precisely as I need it, but it still forces me to manually add the _properties(self) method to each class wherein I need this functionality (currently, I have 3). What I want is a way to "insert" this functionality into any class I please. I think that a Class Decorator could get this job done, but try as I might, I can't quite figure out how to wrangle it.
For the sake of clarity (and in case a decorator is not the best way to get what I want), I will try to explain the overall functionality I am after. I want to write a class that contains some properties. The values of these properties are generated via various degrees of complex code (in one instance, I'm searching for a certain app's pref file, then searching for 3 different preferences (any of which may or may not exist) and determining the best single result from those preferences). I want the body of the properties' code only to contain the algorithm for finding the data. But I don't want to run that algorithmic code each time I access that property. Once I generate the value, I want to write it to disk and then simply read that on all subsequent calls. However, I don't want each value written to its own file; I want a dictionary of all the values of all the properties of a single class to be written to one file (so, in the example above, my_class.json would contain a JSON dictionary with one key, value pair).

When accessing the property directly, it should first check to see if it already exists in the dictionary on disk. If it does, simply read and return that value. If it exists, but has a null value, then try to run the generation code (i.e. the code actually written in the property method) and see if you can find it now (if not, the method will return None and that will once again be written to file). If the dictionary exists and that property is not a key (my current code doesn't really make this possible, but better safe than sorry), run the generation code and add the key, value pair. If the dictionary doesn't exist (i.e. on the first instantiation of the class), run all generation code for all properties and create the JSON file.

Ideally, the code would be able to update one property in the JSON file without rerunning all of the generation code (i.e. running _properties() again).
I know this is a bit peculiar, but I need the speed, human-readable content, and elegant code all together. I would really rather not have to compromise on my goal. Hopefully, the description of what I want is clear enough. If not, let me know in a comment what doesn't make sense and I will try to clarify. But I do think that a Class Decorator could probably get me there (essentially by inserting the _properties() method into any class, running it on instantiation, and mapping its value to the properties attribute of the class).
Maybe I'm missing something, but it doesn't seem that your _properties method is specific to the properties that a given class has. I'd put it in a base class and have each of your classes with @stored_property methods subclass that. Then you don't need to duplicate the _properties method.
class PropertyBase(object):
    def __init__(self, wf):
        self.wf = wf
        self.properties = self._properties()

    def _properties(self):
        # As before...
        ...


class MyClass(PropertyBase):
    @stored_property
    def expensive_to_calculate(self):
        # Calculate it here
        ...
If for some reason you can't subclass PropertyBase directly (maybe you already need to have a different base class), you can probably use a mixin. Failing that, make _properties accept an instance/class and a workflow object and call it explicitly in __init__ for each class.
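A minimal sketch of the mixin variant (SomeRequiredBase is a hypothetical stand-in for an existing base class you can't replace, and _properties is elided as before):

class SomeRequiredBase(object):
    """Hypothetical existing base class that MyOtherClass must keep."""
    pass

class PropertyMixin(object):
    def _properties(self):
        # same body as the _properties method shown in the question
        return {}

class MyOtherClass(SomeRequiredBase, PropertyMixin):
    def __init__(self, wf):
        self.wf = wf
        self.properties = self._properties()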
I've tried following the instructions here, which led me to this code:
import yaml


class Step(yaml.YAMLObject):
    yaml_tag = "!step"

    def __init__(self, *args, **kwargs):
        raise Exception("Intentionally.")


yaml.load("""
--- !step
foo: bar
ham: 42
""", Loader=yaml.Loader)
Expected behaviour: I get an exception. But what I observe is that my YAML markup results in a Step instance, and I'm able to work with it, access methods, attributes (like foo in the code above) and so on. Reading the documentation, I cannot find my mistake, since it suggests that the constructor is called with all the key-value pairs as keyword arguments.
Basically the example in the doc works, but not because of the constructor's implementation; rather because the key-value pairs (the properties of the Monster) are used to fill the object's __dict__.
Does anyone here know about that?
I'm working with python3 but did a quick evaluation in python2 and observed the same.
edit
What I wanted to do: to stay with the linked example (documentation), if the Monster's name starts with a B, double the value of ac.
From the documentation:
yaml.YAMLObject uses metaclass magic to register a constructor, which transforms a YAML node to a class instance, and a representer, which serializes a class instance to a YAML node.
Internally, the default constructor registered by yaml.YAMLObject will call YourClass.__new__ and then set the fields on your instance via instance.__dict__. See this method for more detail.
Depending on what you want to do, you could either put some logic in Step.__new__ (but you won't be getting any of the fields in **kwargs), or register a custom constructor.
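A minimal sketch of the custom constructor route (Step is redefined here as a plain class just for illustration): with yaml.add_constructor, __init__ really does receive the mapping's key-value pairs as keyword arguments.

import yaml

class Step:
    def __init__(self, foo=None, ham=None):
        self.foo = foo
        self.ham = ham

def step_constructor(loader, node):
    # construct_mapping turns the YAML mapping into a dict of keyword arguments
    return Step(**loader.construct_mapping(node))

yaml.add_constructor("!step", step_constructor, Loader=yaml.Loader)

step = yaml.load("--- !step\nfoo: bar\nham: 42\n", Loader=yaml.Loader)
print(step.foo, step.ham)  # bar 42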
I think I have a fundamental misunderstanding. I don't know if this assumption holds true, but I think
load, dump = yaml.load, yaml.dump
foo = "any valid yaml string"
load(foo) == load(dump(load(foo))) # should be true
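A quick concrete check of that assumption (a sketch; it holds for plain scalars and mappings like this one, though custom tags can behave differently):

import yaml

def load(s):
    return yaml.load(s, Loader=yaml.SafeLoader)

foo = "foo: bar\nham: 42\n"
print(load(foo) == load(yaml.dump(load(foo))))  # True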
If I now did what I suggested in the question, really changing a property while loading, it would change this "equation" and result in behaviour I most likely don't want.
I'm writing a Django view that sometimes gets data from the database, and sometimes from an external API.
When it comes from the database, it is a Django model instance. Attributes must be accessed with dot notation.
Coming from the API, the data is a dictionary and is accessed through subscript notation.
In either case, some processing is done on the data.
I'd like to avoid
if from_DB:
    item.image_url = 'http://example.com/{0}'.format(item.image_id)
else:
    item['image_url'] = 'http://example.com/{0}'.format(item['image_id'])
I'm trying to find a more elegant, DRY way to do this.
Is there a way to get/set by key that works on either dictionaries or objects?
You could use a Bunch class, which transforms the dictionary into something that accepts dot notation.
In JavaScript they're equivalent (often useful; I mention it in case you didn't know, since you're doing web development), but in Python they're different: [items] versus .attributes.
It's easy to write something which allows access through attributes, using __getattr__:
class AttrDict(dict):
    def __getattr__(self, attr):
        return self[attr]

    def __setattr__(self, attr, value):
        self[attr] = value
Then just use it as you'd use a dict (it'll accept a dict as a parameter, since it extends dict), but you can do things like item.image_url and it'll map that to item['image_url'], for both getting and setting.
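For instance (the keys mirror the question's example):

item = AttrDict({'image_id': 42})
item.image_url = 'http://example.com/{0}'.format(item.image_id)
print(item['image_url'])  # same value whichever notation you use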
I don't know what the implications would be, but I would add a method to the Django model which reads the dictionary into itself, so you can access the data through the model.