I need to pickle an array of (hardly picklable) class members - python

I have some class instances that I pickle using __reduce__. These classes have members that I collect in another array. I need to pickle this array, but I can't find the right way to do it.
Just to clarify things, imagine I have classes that represent square, rectangle and circle. I create some instances:
A=Square(10)
B=Rectangle(5,10)
C=Circle(6)
I can pickle the list
classes=[A, B, C]
but I need a way to pickle this array of the class instances properties
dimensions=[A.side, B.y, C.diameter]
keeping a reference to the original objects, so that if one object changes, the corresponding property changes too: if I call C.grow(2), I expect dimensions[2] to become 12.
To solve the problem I now pickle this dictionary:
d = {'classes': [A, B, C],
     'dimensions': [(A, 'side'), (B, 'y'), (C, 'diameter')]}
but I think this is a very poor solution.
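For what it's worth, here is a minimal sketch of how this workaround round-trips (a hypothetical Circle whose grow multiplies the diameter; pickling classes and dimensions together in one dump is what preserves the shared references):
import pickle

class Circle(object):
    def __init__(self, diameter):
        self.diameter = diameter
    def grow(self, factor):
        self.diameter *= factor

C = Circle(6)
d = {'classes': [C], 'dimensions': [(C, 'diameter')]}

restored = pickle.loads(pickle.dumps(d))
obj, name = restored['dimensions'][0]
restored['classes'][0].grow(2)
print(getattr(obj, name))  # 12 -- still the same object after the round-trip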
I appreciate any suggestion.

The only way to keep a reference to the original value, when you have something like A.side that is an int, is to hold it in a container: either a list or a class of its own.
For example, you can do this:
class Value:
    def __init__(self, val):
        self.value = val
Now, you can make Square, Rectangle and Circle such that they use the class Value for each of their variables.
class Square:
    def __init__(self, side_length):
        self.side = Value(side_length)
and then use it in dimensions like dimensions = [A.side, ...]
Note that A.side is now a Value instance, so the number can no longer be seen with print(A.side); use print(A.side.value) instead.
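A minimal sketch of the whole idea (assuming a hypothetical grow method that multiplies the side; pickling the objects and the dimensions list together in a single dumps call is what preserves the shared Value references):
import pickle

class Value(object):
    def __init__(self, val):
        self.value = val

class Square(object):
    def __init__(self, side_length):
        self.side = Value(side_length)
    def grow(self, factor):
        self.side.value *= factor

A = Square(10)
dimensions = [A.side]

classes, dims = pickle.loads(pickle.dumps(([A], dimensions)))
classes[0].grow(2)
print(dims[0].value)  # 20 -- the restored list still tracks the object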

I came out with this solution:
class AttributeWrapper(object):
    def __init__(self, obj, attribute):
        # bypass our own __setattr__ so the bookkeeping attributes
        # land on the wrapper itself
        object.__setattr__(self, 'ref_obj', obj)
        object.__setattr__(self, 'ref_attribute', attribute)
    def __getattr__(self, attr):
        # proxy reads to the current value of the wrapped attribute
        target = getattr(object.__getattribute__(self, 'ref_obj'),
                         object.__getattribute__(self, 'ref_attribute'))
        return getattr(target, attr)
    def __setattr__(self, attr, value):
        # proxy writes the same way
        target = getattr(object.__getattribute__(self, 'ref_obj'),
                         object.__getattribute__(self, 'ref_attribute'))
        setattr(target, attr, value)
I don't think this is the ideal solution but, at least, with it I don't have to change the other classes (Square, Rectangle, Circle in the example).
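For example, with a hypothetical Dimension holder as the member (the wrapper proxies the attributes of whatever the wrapped attribute currently holds, so the member itself needs to be an object):
class Dimension(object):
    def __init__(self, n):
        self.n = n

class Circle(object):
    def __init__(self, diameter):
        self.diameter = Dimension(diameter)
    def grow(self, factor):
        self.diameter.n *= factor

C = Circle(6)
w = AttributeWrapper(C, 'diameter')
print(w.n)  # 6, read through C.diameter
C.grow(2)
print(w.n)  # 12, the wrapper follows the live object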
Thank you all for your help.

If the issue is that your class methods can't be pickled, then you might try dill. It can typically pickle class instances, class methods, and instance methods.
>>> import dill
>>>
>>> class Square(object):
...     def __init__(self, side):
...         self.side = side
...
>>> class Rectangle(object):
...     def __init__(self, x, y):
...         self.x = x
...         self.y = y
...
>>> class Circle(object):
...     def __init__(self, diameter):
...         self.diameter = diameter
...
>>> a = Square(10)
>>> b = Rectangle(5, 10)
>>> c = Circle(6)
>>>
>>> dimensions = [a.side, b.x, b.y, c.diameter]
>>> _dim = dill.dumps(dimensions)
>>>
>>> dim_ = dill.loads(_dim)
>>> dim_
[10, 5, 10, 6]
>>>

Related

Using contextvars instead of wrapper classes to store additional data

Let's say we have a class with data:
class Foo:
    def __init__(self, x, y):
        self.x = x
        self.y = y
And a collection class:
from random import randint

class Bar:
    def __init__(self, foos):
        self.foos = []
        if foos:
            self.foos = foos
    def set_z(self):
        for foo in self.foos:
            foo.z = randint(1, 100)
    def print_z(self):
        print([foo.z for foo in self.foos])
Basic stuff. Now the question: how can we store an additional variable z in each Foo object, but with a different value for each Bar instance the object belongs to?
What I want to do:
>>> f1 = Foo(x=13, y=42)
>>> f2 = Foo(x=-3, y=21)
>>> b1 = Bar(foos=[f1, f2])
>>> b2 = Bar(foos=[f1, f2])
>>> b1.set_z()
>>> b2.set_z()
>>> b1.print_z()
[9, 11]
>>> b2.print_z()
[32, 8]
First thought is to make a wrapper class like this:
class FooWrapper:
    def __init__(self, foo):
        self.foo = foo
        self.z = None
And change Bar to automatically wrap each object:
class Bar:
    def __init__(self, foos):
        self.foos = []
        if foos:
            self.foos = [FooWrapper(foo) for foo in foos]
Is there maybe a cleaner way, without writing an additional class? It doesn't look bad here, but when there are lots of different properties in both the base class and the wrapper class, it gets messy. Changing bar.foos into a dict is not an option, since it's not guaranteed that the foos will all be hashable.
But now, looking at the Python 3.7 docs, I read about contextvars. It seems like something that could be used in this case, but I have trouble grasping the concept. Can I set every instance of class Bar as a context and write z as a contextvar inside class Foo? Would that be reasonable?
An instance can only have a single value for a given attribute at any given time, but as long as you're accessing this "attribute" through the container, you can store the values on the container.
One way is to construct a new list in which each item is a 2-tuple consisting of a foo and a dict. The dict can then hold the z values in a manner that supports adding other attributes and values.
from random import randint

class Foo:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Bar:
    def __init__(self, foos):
        self.foos_with_benefits = [(foo, {}) for foo in foos]
    def set_attribute(self, name):
        for foo, attributes in self.foos_with_benefits:
            attributes[name] = randint(1, 100)
    def print_attribute(self, name):
        print([attributes[name] for _, attributes in self.foos_with_benefits])
    def set_z(self):
        self.set_attribute('z')
    def print_z(self):
        self.print_attribute('z')
It performs along the lines of what you were requesting:
>>> f1 = Foo(x=13, y=42)
>>> f2 = Foo(x=-3, y=21)
>>> b1 = Bar(foos=[f1, f2])
>>> b2 = Bar(foos=[f1, f2])
>>> b1.set_z()
>>> b2.set_z()
>>> b1.print_z()
[81, 19]
>>> b2.print_z()
[66, 99]
As for contextvars, they don't seem well-suited to this kind of use case. For one thing, the value of a contextvar can vary across (async) contexts, but not across instances within the same context. If you're interested in learning more about how you might use contextvars, I've written a context manager for using contextvars on the fly: FlexContext.
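A minimal sketch of why (nothing instance-specific here; a ContextVar's value follows the execution context):
from contextvars import ContextVar, copy_context

z = ContextVar('z')
z.set(1)

def reassign():
    z.set(99)
    return z.get()

ctx = copy_context()
print(ctx.run(reassign))  # 99 inside the copied context
print(z.get())            # 1 -- the outer context is untouched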

conditional class inheritance in python

I am trying to dynamically create classes in Python and am relatively new to classes and class inheritance. Basically I want my final object to have different types of history depending on different needs. I have a solution but I feel there must be a better way. I dreamed up something like this.
class A:
    def __init__(self):
        self.history = {}
    def do_something(self):
        pass

class B:
    def __init__(self):
        self.history = []
    def do_something_else(self):
        pass

class C(A, B):
    def __init__(self, a=False, b=False):
        if a:
            A.__init__(self)
        elif b:
            B.__init__(self)
use1 = C(a=True)
use2 = C(b=True)
You probably don't really need that, and this is probably an XY problem, but those happen regularly when you are learning a language. You should be aware that you typically don't need to build huge class hierarchies with Python like you do with some other languages. Python employs "duck typing" -- if a class has the method you want to use, just call it!
Also, by the time __init__ is called, the instance already exists. You can't (easily) change it out for a different instance at that time (though, really, anything is possible).
If you really want to be able to instantiate a class and receive what are essentially instances of completely different objects depending on what you passed to the constructor, the simple, straightforward thing to do is use a function that returns instances of different classes.
However, for completeness, you should know that classes can define a __new__ method, which gets called before __init__. This method can return an instance of the class, or an instance of a completely different class, or whatever the heck it wants. So, for example, you can do this:
class A(object):
    def __init__(self):
        self.history = {}
    def do_something(self):
        print("Class A doing something", self.history)

class B(object):
    def __init__(self):
        self.history = []
    def do_something_else(self):
        print("Class B doing something", self.history)

class C(object):
    def __new__(cls, a=False, b=False):
        if a:
            return A()
        elif b:
            return B()
use1 = C(a=True)
use2 = C(b=True)
use3 = C()
use1.do_something()
use2.do_something_else()
print (use3 is None)
This works with either Python 2 or 3. With 3 it returns:
Class A doing something {}
Class B doing something []
True
I'm assuming that for some reason you can't change A and B, and you need the functionality of both.
Maybe what you need are two different classes:
class CAB(A, B):
    '''uses A's __init__'''

class CBA(B, A):
    '''uses B's __init__'''

use1 = CAB()
use2 = CBA()
The goal is to dynamically create a class.
I don't really recommend dynamically creating a class. You can use a function to do this, and you can easily do things like pickle the instances because they're available in the global namespace of the module:
def make_C(a=False, b=False):
    if a:
        return CAB()
    elif b:
        return CBA()
But if you insist on "dynamically creating the class":
def make_C(a=False, b=False):
    if a:
        return type('C', (A, B), {})()
    elif b:
        return type('C', (B, A), {})()
And usage either way is:
use1 = make_C(a=True)
use2 = make_C(b=True)
I was thinking about the very same thing and came up with a helper function that defines and returns a class inheriting from the type provided as an argument.
The solution presented itself when I was working on a named value class: I wanted a value that could have its own name, but that could otherwise behave as a regular variable. The idea could be useful mostly for debugging, I think. Here is the code:
def getValueClass(thetype):
    """Helper function for getting the `Value` class.

    Gets the named value class, based on `thetype`.
    """
    # if thetype not in (int, float, complex):  # if needed
    #     raise TypeError("The type is not numeric.")

    class Value(thetype):
        __text_signature__ = '(value, name: str = "")'
        __doc__ = f"A named value of type `{thetype.__name__}`"

        def __init__(self, value, name: str = ""):
            """Value(value, name) -- a named value"""
            self._name = name

        def __new__(cls, value, name: str = ""):
            instance = super().__new__(cls, value)
            return instance

        def __repr__(self):
            return f"{super().__repr__()}"

        def __str__(self):
            return f"{self._name} = {super().__str__()}"

    return Value
Some examples:
IValue = getValueClass(int)
FValue = getValueClass(float)
CValue = getValueClass(complex)
iv = IValue(3, "iv")
print(f"{iv!r}")
print(iv)
print()
fv = FValue(4.5, "fv")
print(f"{fv!r}")
print(fv)
print()
cv = CValue(7 + 11j, "cv")
print(f"{cv!r}")
print(cv)
print()
print(f"{iv + fv + cv = }")
The output:
3
iv = 3
4.5
fv = 4.5
(7+11j)
cv = (7+11j)
iv + fv + cv = (14.5+11j)
When working in IDLE, the variables seem to behave as built-in types, except when printing:
>>> vi = IValue(4, "vi")
>>> vi
4
>>> print(vi)
vi = 4
>>> vf = FValue(3.5, 'vf')
>>> vf
3.5
>>> vf + vi
7.5
>>>

python reference a property like a function

How do you pythonically set multiple properties without referencing them individually? Below is my solution.
class Some_Class(object):
    def __init__(self):
        def init_property1(value): self.prop1 = value
        def init_property2(value): self.prop2 = value
        self.func_list = [init_property1, init_property2]

    @property
    def prop1(self):
        return 'hey im the first property'

    @prop1.setter
    def prop1(self, value):
        print value

    @property
    def prop2(self):
        return 'hey im the second property'

    @prop2.setter
    def prop2(self, value):
        print value
class Some_Other_Class(object):
    def __init__(self):
        myvalues = ['1 was set by a nested func', '2 was set by a nested func']
        some_class = Some_Class()
        # now I simply set the properties without dealing with them individually
        # this assumes I know how they are ordered (in the list)
        # if necessary, I could use a map
        for idx, func in enumerate(some_class.func_list):
            func(myvalues[idx])
        some_class.prop1 = 'actually i want to change the first property later on'

if __name__ == '__main__':
    test = Some_Other_Class()
This became necessary when I had many, many properties to initialize with user-defined values. My code otherwise would look like a giant list of individual property assignments (very messy).
Note that many people have good answers below, and I think I have reached a good solution. This is a re-edit, mostly trying to state the question clearly. But if anyone has a better approach, please share!
Just use the @property decorator:
>>> class A:
...     a = 2
...     @property
...     def my_val(self, val=None):
...         if val == None: return self.a
...         self.a = val
...
>>> a = A()
>>> a.my_val
2
>>> a.my_val = 7
>>> a.my_val
7
Something like this? If you only want to allow setting, then don't give it a default value:
>>> class A:
...     a = 2
...     @property
...     def my_val(self, val):
...         self.a = val
...
>>> a = A()
>>> a.my_val
<Exception>
>>> a.my_val = 7
>>> a.a
7
Or, if you only want to allow retrieval, just omit the second argument:
>>> class A:
...     a = 2
...     @property
...     def my_val(self):
...         return self.a
...
>>> a = A()
>>> a.my_val
2
>>> a.my_val = 7
<Exception>
I ... finally think I know what you're trying to do, and you don't need to do it the way you're approaching it. Let me take a stab at this.
class someclass(object):
    def __init__(self):
        func_list = [self.setter1, self.setter2]
        value_list = [1, 2]

        # These lines don't need to be this complicated.
        # for ind in range(len(func_list)):
        #     func_list[ind](value_list[ind])
        for idx, func in enumerate(func_list):
            func(value_list[idx])

        # Or even better
        for idx, (func, val) in enumerate(zip(func_list, value_list)):
            func(val)

    def setter1(self, value):
        self.a = value

    def setter2(self, value):
        self.b = value
It's worth pointing out that the idx variable and enumerate calls are superfluous in the second form, but I wasn't sure if you need that elsewhere.
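A variant of the same idea, if the attribute names are available as strings, is to skip the setter methods entirely and use setattr (a sketch; the keyword-argument interface is my own choice here):
class SomeClass(object):
    def __init__(self, **values):
        # bulk-assign every name/value pair; any property setters still fire
        for name, value in values.items():
            setattr(self, name, value)

obj = SomeClass(a=1, b=2)
print(obj.a, obj.b)  # 1 2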
If you look up the property in the object's __dict__, you will get the property descriptor (if any), and likewise with the class; e.g.
a = SomeClass()
descriptor = a.__dict__.get('descriptor', type(a).__dict__.get('descriptor'))
Assuming that descriptor is a descriptor, it will have the following methods:
['deleter', 'fdel', 'fget', 'fset', 'getter', 'setter']
Note the fget and fset.
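For instance, with Some_Class from the question (a sketch), the descriptor's fget and fset can be called directly:
sc = Some_Class()
prop = type(sc).__dict__['prop1']  # the property descriptor lives on the class
prop.fset(sc, 'new value')         # equivalent to sc.prop1 = 'new value'
value = prop.fget(sc)              # equivalent to reading sc.prop1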

Most elegant way to configure a class in Python?

I'm simulating a distributed system in which all nodes follow some protocol. This includes assessing some small variations in the protocol. A variation means alternative implementation of a single method. All nodes always follow the same variation, which is determined by experiment configuration (only one configuration is active at any given time). What is the clearest way to do it, without sacrificing performance?
As an experiment can be quite extensive, I clearly don't want any conditionals. Before I've just used inheritance, like:
class Node(object):
    def dumb_method(self, argument):
        pass  # ...
    def slow_method(self, argument):
        pass  # ...
    # A lot more methods

class SmarterNode(Node):
    def dumb_method(self, argument):
        pass  # A somewhat smarter variant ...

class FasterNode(SmarterNode):
    def slow_method(self, argument):
        pass  # A faster variant ...
But now I need to test all possible variants and don't want an exponential number of classes cluttering the source. I also want other people peeping at the code to understand it with minimal effort. What are your suggestions?
Edit: One thing I failed to emphasize enough: for all envisioned use cases, it seems that patching the class upon configuration is good enough. I mean, it can be made to work by a simple Node.dumb_method = smart_method. But somehow it didn't feel right. Would this kind of solution cause a major headache for a random smart reader?
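For illustration, the patching described in the edit is a single assignment at configuration time (a sketch; smart_method is a stand-in name):
class Node(object):
    def dumb_method(self, argument):
        return "dumb: %s" % argument

def smart_method(self, argument):
    return "smart: %s" % argument

# chosen once, when the experiment configuration is read
Node.dumb_method = smart_method
print(Node().dumb_method(42))  # smart: 42 -- every instance now uses the variant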
You can create new subtypes with the type function. You just have to give it the subclass's namespace as a dict.
# these are supposed to overwrite methods
def foo(self):
    return "foo"

def bar(self):
    return "bar"

def variants(base, methods):
    """
    Given a base class and a list of dicts like [{'foo': <function foo>}],
    yields types T(base) where foo was overwritten.
    """
    for d in methods:
        yield type('NodeVariant', (base,), d)

from itertools import combinations

def subdicts(**fulldict):
    """ returns all dicts that are subsets of `fulldict` """
    items = fulldict.items()
    for i in range(len(items) + 1):
        for subset in combinations(items, i):
            yield dict(subset)
# a list of method variants
combos = subdicts(slow_method=foo, dumb_method=bar)

# base class
class Node(object):
    def dumb_method(self):
        return "dumb"
    def slow_method(self):
        return "slow"

# use the base and our variants to make a number of types
types = variants(Node, combos)

# instantiate each type and call both methods on each for demonstration
print [(var.dumb_method(), var.slow_method()) for var
       in (cls() for cls in types)]
# [('dumb', 'slow'), ('dumb', 'foo'), ('bar', 'slow'), ('bar', 'foo')]
You could use the __slots__ mechanism and a factory class. You would need to instantiate a NodeFactory for each experiment, but it would handle creating Node instances for you from there on. Example:
class Node(object):
    __slots__ = ["slow", "dumb"]

class NodeFactory(object):
    def __init__(self, slow_method, dumb_method):
        self.slow = slow_method
        self.dumb = dumb_method
    def makenode(self):
        n = Node()
        n.dumb = self.dumb
        n.slow = self.slow
        return n
an example run:
>>> def foo():
...     print "foo"
...
>>> def bar():
...     print "bar"
...
>>> nf = NodeFactory(foo, bar)
>>> n = nf.makenode()
>>> n.dumb()
bar
>>> n.slow()
foo
I'm not sure if you're trying to do something akin to this (allows swap-out runtime "inheritance"):
class Node(object):
    __methnames = ('method',)
    def __init__(self, type):
        for i in self.__methnames:
            # binds e.g. self.method to self.dumb_method
            setattr(self, i, getattr(self, type + "_" + i))
    def dumb_method(self, argument=None):
        pass  # ...
    def slow_method(self, argument=None):
        pass  # ...

n = Node('dumb')
n.method()  # calls dumb_method
n = Node('slow')
n.method()  # calls slow_method
Or if you're looking for something like this (allows running (and therefore testing) of all methods of the class):
class Node(object):
    pass  # do something

class NodeTest(Node):
    def run_tests(self, ending=''):
        for i in dir(self):
            if i.endswith(ending):
                meth = getattr(self, i)
                if callable(meth):
                    meth()  # needs some default args
                    # or yield meth if you can
You can use a metaclass for this. It will let you create a class on the fly, with method names chosen according to each variation.
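A minimal sketch of that idea (Python 3 syntax; VariantMeta and its overrides dict are made-up names): the metaclass swaps in the configured variants while the class is being built:
def smart_method(self, argument):
    return "smart"

class VariantMeta(type):
    overrides = {}  # filled in from the experiment configuration
    def __new__(mcs, name, bases, namespace):
        namespace.update(mcs.overrides)
        return type.__new__(mcs, name, bases, namespace)

VariantMeta.overrides = {'dumb_method': smart_method}

class Node(metaclass=VariantMeta):
    def dumb_method(self, argument):
        return "dumb"

print(Node().dumb_method(1))  # smart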
Should the method to be called be decided when the class is instantiated or after? Assuming it is when the class is instantiated, how about the following:
class Node():
    def Fast(self):
        print "Fast"
    def Slow(self):
        print "Slow"

class NodeFactory():
    def __init__(self, method):
        self.method = method
    def SetMethod(self, method):
        self.method = method
    def New(self):
        n = Node()
        n.Run = getattr(n, self.method)
        return n
nf = NodeFactory("Fast")
nf.New().Run()
# Prints "Fast"
nf.SetMethod("Slow")
nf.New().Run()
# Prints "Slow"

Converting an object into a subclass in Python?

Let's say I have a library function that I cannot change that produces an object of class A, and I have created a class B that inherits from A.
What is the most straightforward way of using the library function to produce an object of class B?
edit- I was asked in a comment for more detail, so here goes:
PyTables is a package that handles hierarchical datasets in Python. The bit I use most is its ability to manage data that is partially on disk. It provides an 'Array' type which only comes with extended slicing, but I need to select arbitrary rows. NumPy offers this capability: you can select by providing a boolean array of the same length as the array you are selecting from. Therefore, I wanted to subclass Array to add this new functionality.
In a more abstract sense this is a problem I have considered before. The usual solution is as has already been suggested- Have a constructor for B that takes an A and additional arguments, and then pulls out the relevant bits of A to insert into B. As it seemed like a fairly basic problem, I asked to question to see if there were any standard solutions I wasn't aware of.
This can be done if the initializer of the subclass can handle it, or you write an explicit upgrader. Here is an example:
class A(object):
    def __init__(self):
        self.x = 1

class B(A):
    def __init__(self):
        super(B, self).__init__()
        self._init_B()
    def _init_B(self):
        self.x += 1

a = A()
b = a
b.__class__ = B
b._init_B()
assert b.x == 2
Since the library function returns an A, you can't make it return a B without changing it.
One thing you can do is write a function to take the fields of the A instance and copy them over into a new B instance:
class A:  # defined by the library
    def __init__(self, field):
        self.field = field

class B(A):  # your fancy new class
    def __init__(self, field, field2):
        self.field = field
        self.field2 = field2  # B has some fancy extra stuff

def b_from_a(a_instance, field2):
    """Given an instance of A, return a new instance of B."""
    return B(a_instance.field, field2)

a = A("spam")  # this could be your A instance from the library
b = b_from_a(a, "ham")  # make a new B which has the data from a
print b.field, b.field2  # prints "spam ham"
Edit: depending on your situation, composition instead of inheritance could be a good bet; that is, your B class could just contain an instance of A instead of inheriting:
class B2:  # doesn't have to inherit from A
    def __init__(self, a, field2):
        self._a = a  # using composition instead
        self.field2 = field2

    @property
    def field(self):  # pass accesses through to a
        return self._a.field
    # could provide setter, deleter, etc.

a = A("spam")
b = B2(a, "ham")
print b.field, b.field2  # prints "spam ham"
You can actually change the .__class__ attribute of an object if you know what you're doing:
In [1]: class A(object):
   ...:     def foo(self):
   ...:         return "foo"
   ...:

In [2]: class B(object):
   ...:     def foo(self):
   ...:         return "bar"
   ...:

In [3]: a = A()

In [4]: a.foo()
Out[4]: 'foo'

In [5]: a.__class__
Out[5]: __main__.A

In [6]: a.__class__ = B

In [7]: a.foo()
Out[7]: 'bar'
Monkeypatch the library?
For example,
import other_library
other_library.function_or_class_to_replace = new_function
Poof, it returns whatever you want it to return.
Monkeypatch A.__new__ to return an instance of B?
After you call obj = A(), change the result so obj.__class__ = B?
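A sketch of the __new__ idea (assuming B subclasses A, so that isinstance holds and __init__ still runs on the returned object):
class A(object):
    def __init__(self, x):
        self.x = x

class B(A):
    def extra(self):
        return self.x * 2

# patch: every A(...) now actually builds a B
A.__new__ = staticmethod(lambda cls, *args, **kwargs: object.__new__(B))

obj = A(21)
print(type(obj).__name__, obj.extra())  # B 42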
Depending on the use case, you can nowadays use a dataclass to arguably make the composition solution a little cleaner:
from dataclasses import dataclass, fields

@dataclass
class B:
    field: int  # only adds 1 line per field instead of a whole @property method

    @classmethod
    def from_A(cls, a):
        return cls(**{
            f.name: getattr(a, f.name)
            for f in fields(A)
        })
