Using contextvars instead of wrapper classes to store additional data - python

Let's say we have a class with data:
class Foo:
    def __init__(self, x, y):
        self.x = x
        self.y = y
And a collection class:
from random import randint

class Bar:
    def __init__(self, foos):
        self.foos = []
        if foos:
            self.foos = foos

    def set_z(self):
        for foo in self.foos:
            foo.z = randint(1, 100)  # randint needs explicit bounds

    def print_z(self):
        print([foo.z for foo in self.foos])
Basic stuff. Now the question: how can we store an additional variable z in each Foo object, but with a different value for each Bar instance the object is in?
What I want to do:
>>> f1 = Foo(x=13, y=42)
>>> f2 = Foo(x=-3, y=21)
>>> b1 = Bar(foos=[f1, f2])
>>> b2 = Bar(foos=[f1, f2])
>>> b1.set_z()
>>> b2.set_z()
>>> b1.print_z()
[9, 11]
>>> b2.print_z()
[32, 8]
My first thought is to make a wrapper class like this:
class FooWrapper:
    def __init__(self, foo):
        self.foo = foo
        self.z = None
And change Bar to automatically wrap each object:
class Bar:
    def __init__(self, foos):
        self.foos = []
        if foos:
            self.foos = [FooWrapper(foo) for foo in foos]
Is there maybe a cleaner way, without writing an additional class? It doesn't look bad here, but when there are a lot of different properties in both the base class and the wrapper class, it gets messy. Changing bar.foos into a dict is not an option, since it's not guaranteed that the foos will all be hashable.
But now, looking at the Python 3.7 docs, I read about contextvars. It seems like something that could be used in this case, but I have trouble grasping the concept. Can I set every instance of class Bar as a context and write z as a contextvar inside class Foo? Would that be reasonable?

An instance can only have a single value for a given attribute at any given time, but as long as you're accessing this "attribute" through the container, you can store the values on the container.
One way is to construct a new list in which each item is a 2-tuple consisting of a foo and a dict. The dict can then hold the z values in a manner that supports adding other attributes and values.
from random import randint

class Foo:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Bar:
    def __init__(self, foos):
        self.foos_with_benefits = [(foo, {}) for foo in foos]

    def set_attribute(self, name):
        for foo, attributes in self.foos_with_benefits:
            attributes[name] = randint(1, 100)

    def print_attribute(self, name):
        print([attributes[name] for _, attributes in self.foos_with_benefits])

    def set_z(self):
        self.set_attribute('z')

    def print_z(self):
        self.print_attribute('z')
It performs along the lines of what you were requesting:
>>> f1 = Foo(x=13, y=42)
>>> f2 = Foo(x=-3, y=21)
>>> b1 = Bar(foos=[f1, f2])
>>> b2 = Bar(foos=[f1, f2])
>>> b1.set_z()
>>> b2.set_z()
>>> b1.print_z()
[81, 19]
>>> b2.print_z()
[66, 99]
As for contextvars, they don't seem well suited to this kind of use case. For one thing, the value of a contextvar varies across (async) contexts, not across instances within the same context. If you're interested in learning more about how you might use contextvars, I've written a context manager for using contextvars on the fly: FlexContext.
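To make that concrete, here is a minimal sketch (the variable names are mine, not from your code) showing that a ContextVar's value follows the context it was set in, not any particular object:

import contextvars

z = contextvars.ContextVar('z', default=None)

def show():
    print(z.get())

ctx = contextvars.copy_context()
ctx.run(z.set, 42)  # set z only inside the copied context
ctx.run(show)       # prints 42
show()              # prints None: the outer context never saw the set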


How to preserve the value of class properties

class A:
    p = 1
    def __init__(self, p=None, **kwargs):
        self.p = p

class B(A):
    p = 2

a = A()
print(a.p)
b = B()
print(b.p)
As a more sensible example, consider:
class Mammal:
    can_fly = False

class Bat(Mammal):
    can_fly = True
In the examples above, I would like 1 and 2 to be printed. However, it prints None for both, though I know why. What is the solution for preserving the default values of the classes?
One solution I can think of is:
class A:
    p = 1
    def __init__(self, p=None, **kwargs):
        if p: self.p = p
        if q: self.q = q  # and so on, for every attribute pulled from kwargs
        ...
And if I have many attributes, I'd have to do that for all of them! Another minor problem is that the user can't pass None to the class init.
Another solution could be:
class A:
    p = 1
    def __init__(self, p=1, **kwargs):
        self.p = p
        self.q = q  # likewise for the other attributes
        ...
However, again, if one instantiates b like:
b = B()
the value of b.p would also be 1, while I expect it to keep 2.
I override class attributes a lot, but I just don't know how to keep them from being overwritten by default values from the same or a parent class.
Yet another solution is a combination of the above, like:
class A:
    p = 1
    def __init__(self, p=1, **kwargs):
        if p != 1: self.p = p
        ...
or using dataclass:
from dataclasses import dataclass

@dataclass
class A:
    p: int = 1

@dataclass
class B(A):
    p: int = 2
I would just like to know what the usual approach is, and its consequences.
UPDATE:
If you really absolutely need both your class and your instances to have this attribute, and also want to use the class attribute as the default for an instance, I would say the correct way is like this:
_sentinel = object()

class A:
    p = 1
    def __init__(self, p=_sentinel):
        if p is not _sentinel:
            self.p = p

class B(A):
    p = 2

a = A()
print(a.p)   # prints 1
b = B()
print(b.p)   # prints 2
b2 = B(p=None)
print(b2.p)  # prints None
The sentinel object is for when you do want to be able to pass None to the constructor for whatever reason. Since we compare identity in the __init__ method, it is (practically) guaranteed that if any value is passed, it will be assigned to the instance attribute, even if that value is None.
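If the "many attributes" concern from the question applies, the same sentinel test scales with a loop; this is only a sketch extrapolating the pattern above (the attribute q is made up for illustration):

_sentinel = object()

class A:
    p = 1
    q = 'default'

    def __init__(self, **kwargs):
        # shadow a class attribute only when the caller actually passed a value
        for name in ('p', 'q'):
            value = kwargs.pop(name, _sentinel)
            if value is not _sentinel:
                setattr(self, name, value)

class B(A):
    p = 2

print(A().p, B().p, B(p=None).p)  # 1 2 None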
Original answer:
The problem seems to stem from a misunderstanding of how (class) attributes work in Python.
When you do this:
class A:
    p = 1
You define a class attribute. Lookups of .p on instances of that class will fall through to this class value, unless you shadow it with an instance attribute, which is exactly what you do here:
def __init__(self, p=None, **kwargs):
    self.p = p
This overwrites the instance's attribute .p with the value p it receives in the __init__ method. In this case, since you defined a default value None and called the constructor without passing an argument, that is what was assigned to the instance's attribute.
If you want, you can simply omit the self.p assignment in the constructor. Then your instances will have the class' default upon initialization.
EDIT:
Depending on how you want to handle it, you can simply assign the value after initialization. But I doubt that is what you want. You probably don't need class attributes at all. Instead you may just want to define the default values in your __init__ method signature and assign them there.
If you really need that class attribute as well, you can do what you did, but more precisely by testing for if p is not None:.
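In other words, keeping your original signature but tightening the test:

def __init__(self, p=None, **kwargs):
    if p is not None:
        self.p = p  # only shadow the class default when a value was given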
I would set the default value of the p argument to the value that you want:
class A:
    def __init__(self, p=1, **kwargs):
        self.p = p

class B(A):
    def __init__(self, p=2, **kwargs):
        super().__init__(p, **kwargs)
a = A()
print(a.p)
b = B()
print(b.p)
Then, from the constructor of B, you can call the one from A using super().__init__.
You can rely on the class attributes themselves:
class A:
    p = 1

class B(A):
    p = 2

a = A()
print(a.p)
b = B()
print(b.p)
prints 1 and 2, like you wanted.
It is clearer to access them from the class directly, though:
print(A.p)
print(B.p)
You can also set the attribute on an instance without changing what is stored on the class:
class B(A):
    p = 2
    def change(self, x):
        self.p = x

b = B()
b.change(3)
print(B.p)  # 2
print(b.p)  # 3

How can I see attributes on a python namedlist object?

I have been using namedlist to create lightweight classes:
from namedlist import namedlist
# create a class
SomeClass = namedlist('SomeClass', 'foo bar', use_slots=False)
# create an object
my_list = SomeClass(1,2)
# set an attribute not specified in the class
my_list.baz = 3
# the attribute is there if I reference it
print(my_list.baz)
# output: 3
Sometimes I want to take an object and see if any extra attributes have been set:
# this doesn't show 'baz'
import inspect
inspect.getmembers(my_list)
# neither does this
my_list.__dict__
Is there a way I can see any attributes that have been added in this way?
Looking at the source of namedlist, we can see that the factory function namedlist() generates the type (SomeClass in your example).
Now this is interesting.
On one hand, __getattribute__ and __setattr__ are not overridden, which lets you do things like my_list.baz = 3 and then access my_list.baz.
On the other hand, __dict__ was overridden with property(_asdict) (generated in _common_fields()). This causes anything that relies on __dict__ to miss baz, including dir() and the inspect module.
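Here is a minimal reproduction of the effect with a plain class, standing in for what namedlist generates (this assumes CPython's usual attribute machinery):

class Demo:
    @property
    def __dict__(self):
        return {}  # stands in for namedlist's property(_asdict)

d = Demo()
d.baz = 3          # lands in the real instance dict all the same
print(d.baz)       # 3 -- plain attribute access still works
print(d.__dict__)  # {} -- dir() and inspect look here, so baz is invisible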
While I haven't found a function that will list the added attributes in this case, if you know which attribute you are looking for, you can still check whether it exists using hasattr(my_list, 'baz'):
>>> from namedlist import namedlist
>>> SomeClass = namedlist('SomeClass', 'foo bar', use_slots=False)
>>> my_list = SomeClass(1,2)
>>> my_list.baz = 3
>>> hasattr(my_list, 'baz')
True
If switching types is going to be problematic (maybe there is legacy code already using namedlist), I found that the following makes inspecting namedlist objects bearable:
def set_attr(self, attr_name, attr_val):
    setattr(self, attr_name, attr_val)
    self.opt_attrs.append(attr_name)

TypeA = namedlist('TypeA', 'field_a opt_attrs', use_slots=False)
TypeA.set_attr = set_attr
TypeB = namedlist('TypeB', 'field_b opt_attrs', use_slots=False)
TypeB.set_attr = set_attr

objA = TypeA(1, [])
objA.set_attr('field_x', 2)
objB = TypeB(7, [])

objA
# Out: TypeA(field_a=1, opt_attrs=['field_x'])
objA.field_x
# Out: 2
objB
# Out: TypeB(field_b=7, opt_attrs=[])
It is probably best to just use plain Python classes, though. More up-front code, less after-the-fact confusion:
class TypeA:
    def __init__(self, a):
        self.a = a
    def __repr__(self):
        return "A(a={})".format(self.a)

class TypeB:
    def __init__(self, b):
        self.b = b
    def __repr__(self):
        return "B(b={})".format(self.b)

objA = TypeA(1)
objA.x = 2
objB = TypeB(7)

objA
# Out: A(a=1)
objA.__dict__
# Out: {'a': 1, 'x': 2}
objB
# Out: B(b=7)

I need to pickle an array of (hardly pickable) class members

I have some class instances that I pickle using __reduce__. These classes have members that I collect in another array. I need to pickle this array, but I can't find the right way to do it.
Just to clarify things, imagine I have classes that represent square, rectangle and circle. I create some instances:
A=Square(10)
B=Rectangle(5,10)
C=Circle(6)
I can pickle the list
classes = [A, B, C]
but I need a way to pickle this array of the instances' properties:
dimensions = [A.side, B.y, C.diameter]
keeping a reference to the original objects, so that if one object changes, the corresponding entry changes: if I call C.grow(2), I expect dimensions[2] to become 12.
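To illustrate the underlying issue, independent of pickling (a stripped-down Circle for the sake of the example): the list stores the int object itself, not a live link to the attribute, so rebinding the attribute doesn't update the list:

class Circle:
    def __init__(self, diameter):
        self.diameter = diameter

C = Circle(6)
dimensions = [C.diameter]
C.diameter = 12       # rebinds the attribute to a new int
print(dimensions[0])  # still 6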
To solve the problem, I now pickle this dictionary:
d = {'classes': [A, B, C],
     'dimensions': [(A, 'side'), (B, 'y'), (C, 'diameter')]}
but I think this is a very poor solution.
I'd appreciate any suggestions.
If A.side is something like an int, the only way to keep a live reference to the original object is to wrap the value in a mutable container: either a list or a small class of its own.
For example, you can do this:
class Value:
    def __init__(self, val):
        self.value = val
Now, you can make Square, Rectangle and Circle such that they use the class Value for each of their variables.
class Square:
    def __init__(self, side_length):
        self.side = Value(side_length)
and then use it in dimensions like dimensions = [A.side, ...]
Note that since A.side is now a Value instance, the value cannot be seen using print(A.side), but it can be seen using print(A.side.value).
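For example (a sketch reusing the Square above):

A = Square(10)
dimensions = [A.side]       # the list shares the Value wrapper with A
A.side.value = 12           # mutate through the wrapper instead of rebinding
print(dimensions[0].value)  # 12 -- the list sees the change

And since pickle preserves shared references within a single dump, pickling A and dimensions together keeps the link intact.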
I came up with this solution:
class AttributeWrapper(object):
    def __init__(self, obj, attribute):
        object.__setattr__(self, 'ref_obj', obj)
        object.__setattr__(self, 'ref_attribute', attribute)

    def __getattr__(self, attr):
        # delegate to the *current* value of obj.<attribute>
        target = object.__getattribute__(self, 'ref_obj')
        name = object.__getattribute__(self, 'ref_attribute')
        return getattr(getattr(target, name), attr)

    def __setattr__(self, attr, value):
        target = object.__getattribute__(self, 'ref_obj')
        name = object.__getattribute__(self, 'ref_attribute')
        setattr(getattr(target, name), attr, value)
I don't think this is the ideal solution but, at least, with it I don't have to change the other classes (Square, Rectangle, Circle in the example).
Thank you all for your help.
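For illustration, usage would look something like this (Circle and grow() are assumed here, as in the question):

C = Circle(6)
dimensions = [AttributeWrapper(C, 'diameter')]
print(dimensions[0].real)  # 6  (.real on an int returns the int itself)
C.grow(2)                  # assumed to rebind C.diameter to 12
print(dimensions[0].real)  # 12 -- the wrapper re-reads C.diameter each time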
If the issue is that your class methods are not able to be pickled, then you might try dill. It typically can pickle class instances, class methods, and instance methods.
>>> import dill
>>>
>>> class Square(object):
...     def __init__(self, side):
...         self.side = side
...
>>> class Rectangle(object):
...     def __init__(self, x, y):
...         self.x = x
...         self.y = y
...
>>> class Circle(object):
...     def __init__(self, diameter):
...         self.diameter = diameter
...
>>> a = Square(10)
>>> b = Rectangle(5, 10)
>>> c = Circle(6)
>>>
>>> dimensions = [a.side, b.x, b.y, c.diameter]
>>> _dim = dill.dumps(dimensions)
>>>
>>> dim_ = dill.loads(_dim)
>>> dim_
[10, 5, 10, 6]
>>>

Turning an attribute into a property on an object-by-object basis?

I have a class of objects, most of which have one attribute that can, in 95% of cases, be implemented as a simple attribute. However, there are a few important edge cases where that attribute must be computed from data on another object.
What I'd like to be able to do is set myobj.gnarlyattribute = property(lambda self: self.container.x*self.k).
However, this doesn't seem to work:
>>> myfoo=foo()
>>> myfoo.spam
10
>>> import random
>>> myfoo.spam=property(lambda self: random.randint(0,20))
>>> myfoo.spam
<property object at 0x02A57420>
>>>
I suppose I could have gnarlyattribute always be a property which usually just has lambda self: self._gnarlyattribute as the getter, but that seems a little smelly. Any ideas?
As has already been pointed out, properties can only work at the class level, and they can't be set on instances. (Well, they can, but they don't do what you want.)
Therefore, I suggest using class inheritance to solve your problem:
class NoProps(object):
    def __init__(self, spam=None):
        if spam is None:
            spam = 0  # Pick a sensible default
        self.spam = spam

class Props(NoProps):
    @property
    def spam(self):
        """Docstring for the spam property"""
        return self._spam

    @spam.setter
    def spam(self, value):
        # Do whatever calculations are needed here
        import random
        self._spam = value + random.randint(0, 20)

    @spam.deleter
    def spam(self):
        del self._spam
Then when you discover that a particular object needs to have its spam attribute as a calculated property, make that object an instance of Props instead of NoProps:
a = NoProps(3)
b = NoProps(4)
c = Props(5)
print(a.spam, b.spam, c.spam)
# Prints 3, 4, (something between 5 and 25)
If you can tell ahead of time when you'll need calculated values in a given instance, that should do what you're looking for.
Alternately, if you can't tell that you'll need calculated values until after you've created the instance, that one's pretty straightforward as well: just add a factory method to your class, which will copy the properties from the "old" object to the "new" one. Example:
class NoProps(object):
    def __init__(self, spam=None):
        if spam is None:
            spam = 0  # Pick a sensible default
        self.spam = spam

    @classmethod
    def from_other_obj(cls, other_obj):
        """Factory method to copy other_obj's values"""
        # The call to cls() is where the "magic" happens
        obj = cls()
        obj.spam = other_obj.spam
        # Copy any other properties here
        return obj

class Props(NoProps):
    @property
    def spam(self):
        """Docstring for the spam property"""
        return self._spam

    @spam.setter
    def spam(self, value):
        # Do whatever calculations are needed here
        import random
        self._spam = value + random.randint(0, 20)

    @spam.deleter
    def spam(self):
        del self._spam
Since we call cls() inside the factory method, it will make an instance of whichever class it was invoked on. Thus the following is possible:
a = NoProps(3)
b = NoProps.from_other_obj(a)
c = NoProps.from_other_obj(b)
print(a.spam, b.spam, c.spam)
# Prints 3, 3, 3
# I just discovered that c.spam should be calculated
# So convert it into a Props object
c = Props.from_other_obj(c)
print(a.spam, b.spam, c.spam)
# Prints 3, 3, (something between 3 and 23)
One or the other of these two solutions should be what you're looking for.
The magic to make properties work only exists at the class level. There is no way to make properties work per-object.
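That said, there is a well-known trick that keeps the property on a class while still affecting only one object: give that object its own one-off subclass and reassign its __class__. A sketch (the names are invented):

import random

class Foo:
    spam = 10

myfoo = Foo()
# build a throwaway subclass carrying the property
ComputedFoo = type('ComputedFoo', (Foo,), {
    'spam': property(lambda self: random.randint(0, 20)),
})
myfoo.__class__ = ComputedFoo  # only this instance is affected
print(myfoo.spam)              # now computed on each access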

Most elegant way to configure a class in Python?

I'm simulating a distributed system in which all nodes follow some protocol. This includes assessing some small variations in the protocol. A variation means an alternative implementation of a single method. All nodes always follow the same variation, which is determined by the experiment configuration (only one configuration is active at any given time). What is the clearest way to do this without sacrificing performance?
As an experiment can be quite extensive, I clearly don't want any conditionals. Before, I've just used inheritance, like:
class Node(object):
    def dumb_method(self, argument):
        ...  # base implementation

    def slow_method(self, argument):
        ...  # base implementation

    # A lot more methods

class SmarterNode(Node):
    def dumb_method(self, argument):
        ...  # A somewhat smarter variant

class FasterNode(SmarterNode):
    def slow_method(self, argument):
        ...  # A faster variant
But now I need to test all possible variants and don't want an exponential number of classes cluttering the source. I also want other people peeping at the code to understand it with minimal effort. What are your suggestions?
Edit: One thing I failed to emphasize enough: for all envisioned use cases, it seems that patching the class upon configuration is good enough. I mean, it can be made to work with a simple Node.dumb_method = smart_method. But somehow it didn't feel right. Would this kind of solution cause a major headache for a random smart reader?
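Concretely, the patching I have in mind is nothing more than this (smart_method stands in for whichever variant the configuration selects):

def smart_method(self, argument):
    ...  # the variant implementation chosen by the experiment configuration

def configure(variation):
    # variation maps method names to implementations,
    # e.g. {'dumb_method': smart_method}
    for name, impl in variation.items():
        setattr(Node, name, impl)

configure({'dumb_method': smart_method})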
You can create new subtypes with the type function. You just have to give it the subclass's namespace as a dict.
# these are supposed to overwrite methods
def foo(self):
    return "foo"

def bar(self):
    return "bar"

def variants(base, methods):
    """
    Given a base class and a list of dicts like [{'foo': <function foo>}],
    yield types T(base) in which those methods were overwritten.
    """
    for d in methods:
        yield type('NodeVariant', (base,), d)

from itertools import combinations

def subdicts(**fulldict):
    """Yield all dicts that are subsets of `fulldict`."""
    items = list(fulldict.items())
    for i in range(len(items) + 1):
        for subset in combinations(items, i):
            yield dict(subset)

# a list of method variants
combos = subdicts(slow_method=foo, dumb_method=bar)

# base class
class Node(object):
    def dumb_method(self):
        return "dumb"

    def slow_method(self):
        return "slow"

# use the base and our variants to make a number of types
types = variants(Node, combos)

# instantiate each type and call both methods on it for demonstration
print([(var.dumb_method(), var.slow_method()) for var
       in (cls() for cls in types)])
# [('dumb', 'slow'), ('dumb', 'foo'), ('bar', 'slow'), ('bar', 'foo')]
You could use the __slots__ mechanism and a factory class. You would need to instantiate a NodeFactory for each experiment, but it would handle creating Node instances for you from there on. Example:
class Node(object):
    __slots__ = ["slow", "dumb"]

class NodeFactory(object):
    def __init__(self, slow_method, dumb_method):
        self.slow = slow_method
        self.dumb = dumb_method

    def makenode(self):
        # plain functions stored on the instance are not bound,
        # so they will not receive the node as `self`
        n = Node()
        n.dumb = self.dumb
        n.slow = self.slow
        return n
an example run:
>>> def foo():
...     print("foo")
...
>>> def bar():
...     print("bar")
...
>>> nf = NodeFactory(foo, bar)
>>> n = nf.makenode()
>>> n.dumb()
bar
>>> n.slow()
foo
I'm not sure if you're trying to do something akin to this (allows swap-out runtime "inheritance"):
class Node(object):
    __methnames = ('method',)

    def __init__(self, kind):
        # rebind each public name to the chosen variant,
        # e.g. 'method' -> self.dumb_method when kind == 'dumb'
        for i in self.__methnames:
            setattr(self, i, getattr(self, kind + "_" + i))

    def dumb_method(self, argument=None):
        ...

    def slow_method(self, argument=None):
        ...

n = Node('dumb')
n.method()  # calls dumb_method
n = Node('slow')
n.method()  # calls slow_method
Or if you're looking for something like this (allows running (and therefore testing) of all methods of the class):
class Node(object):
    ...  # do something

class NodeTest(Node):
    def run_tests(self, ending=''):
        for i in dir(self):
            if i.endswith(ending):
                meth = getattr(self, i)
                if callable(meth):
                    meth()  # needs some default args
                    # or yield meth if you can
You can use a metaclass for this. It will let you create a class on the fly, with method names according to each variation.
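A sketch of that idea (all names here are invented, not from the question):

class VariantMeta(type):
    variants = {}  # filled in by the experiment configuration

    def __new__(mcls, name, bases, namespace):
        namespace.update(mcls.variants)  # overwrite methods per active config
        return super().__new__(mcls, name, bases, namespace)

def smart_dumb_method(self, argument):
    return "smarter"

VariantMeta.variants = {'dumb_method': smart_dumb_method}

class Node(metaclass=VariantMeta):
    def dumb_method(self, argument):
        return "dumb"

print(Node().dumb_method(None))  # "smarter"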
Should the method to be called be decided when the class is instantiated or after? Assuming it is when the class is instantiated, how about the following:
class Node():
    def Fast(self):
        print("Fast")

    def Slow(self):
        print("Slow")

class NodeFactory():
    def __init__(self, method):
        self.method = method

    def SetMethod(self, method):
        self.method = method

    def New(self):
        n = Node()
        n.Run = getattr(n, self.method)
        return n
nf = NodeFactory("Fast")
nf.New().Run()
# Prints "Fast"
nf.SetMethod("Slow")
nf.New().Run()
# Prints "Slow"
