Is there a design pattern that describes the following setup? Does this design suffer from any major issues?
Instances of class Widget can be built either by a "dumb" constructor, Widget.__init__(), or by an "intelligent" factory method, Workbench.upgrade_widget():
class Widget:
    def __init__(self, abc, def_, ...):  # "def" is a reserved word, hence "def_"
        self.abc = abc
        self.def_ = def_
        ...
    ...

class Workbench:
    # widget factory method, which uses data from the workbench instance
    def upgrade_widget(self, widget, upgrade_info):
        widget = Widget(widget.abc, widget.def_, ...)
        # I will modify the widget's attributes
        ...
        self.rearrange_widget(widget, xyz)  # modifies widget's internal state
        ...
        widget.abc = ...  # also modifies widget's state
        ...
        return widget

    # uses data from the workbench instance
    def rearrange_widget(self, widget, xyz):
        ...

    # this class does other stuff too
    ...
Widget is immutable in the sense that I must not modify its instances after they are fully initialized (a lot of code depends on this invariant). But I find that modifying widgets while they are being initialized is very convenient, and it makes the code much cleaner.
My main concern is that I modify "immutable" widgets in a different class. If it were only in upgrade_widget, I might live with it, since that method does not modify the widget passed to it. But it relies on other Workbench methods (rearrange_widget) that modify the widget they receive as an argument. I feel like I'm losing control over where this "immutable" instance can actually be modified - someone may accidentally call rearrange_widget with a widget that's already fully initialized, leading to a disaster.
How are you enforcing immutability of the Widget now?
What if you add a 'locked' property to your widget and override __setattr__ to check that property:
class Widget(object):
    __locked = False

    def __init__(self, a, b, c, locked=True):
        ...
        self.__locked = locked

    def lock(self):
        self.__locked = True

    def is_locked(self):
        return self.__locked

    def __setattr__(self, *args, **kw):
        if self.__locked:
            raise Exception('immutable')  # define your own rather than use Exception
        return super(Widget, self).__setattr__(*args, **kw)
then in the factory:
class Workbench(object):
    def upgrade_widget(self, widget, upgrade_info):
        widget = Widget(widget.a, widget.b, ..., locked=False)
        self.rearrange_widget(widget, blah)
        widget.c = 1337
        widget.lock()
        return widget
In general use you can be pretty certain that nothing funny happens to the instance once it's locked. Any method that cares about the immutability of the widget should also check is_locked() on that widget. For example, rearrange_widget should check that the widget is unlocked before doing anything.
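A minimal sketch of that guard, reusing the lock API above:

class Workbench(object):
    def rearrange_widget(self, widget, xyz):
        # refuse to touch a widget that has already been finalized
        if widget.is_locked():
            raise ValueError('cannot rearrange a locked widget')
        # ... safe to modify the widget's attributes here ...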
This is notwithstanding malicious tampering with the instances, which can happen anyway. It also doesn't prevent attributes from being changed by the instance's own methods.
Note that the code (pseudo-Python) I wrote above isn't tested, but hopefully it illustrates the general idea of how to deal with your main concern.
Oh, and I'm not sure if there's a particular name for this pattern.
@chees: A neater way of doing that is to modify __dict__ in __init__ and make __setattr__ always raise an exception (by the way, it's not a good idea to raise a bare Exception - it's just too general):
class Widget:
def __init__(self, args):
self.__dict__['args'] = args
def __setattr__(self, name, value):
raise TypeError
And modifying it in the same way in Workbench (i.e. using __dict__) is a constant reminder that you're doing something you shouldn't really be doing.
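For example, the factory side might then look like this (just a sketch, reusing the Widget from this comment):

class Workbench:
    def upgrade_widget(self, widget, upgrade_info):
        new = Widget(widget.args)
        # every mutation goes through __dict__, making the deliberate
        # bypass of immutability stand out when reading the code
        new.__dict__['args'] = upgrade_info
        return new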
I am currently designing a class structure using attrs. It is a tree-like structure where I need a parent backlink for certain purposes. OTOH, I want to use frozen classes (there are cached properties, which rely on read-only state).
This means that the tree must be instantiated bottom-up, child before parent. To solve this, I invented a special kind of weakref object, which can be lazy-bound exactly once.
import weakref
from typing import Generic, Optional, TypeVar

T = TypeVar('T')

class LateRef(Generic[T]):
    def __init__(self):
        # holds a weakref.ref once bound, None until then
        self.ref: Optional[weakref.ref] = None
    def set(self, obj: T):
        if self.ref is not None:
            raise RuntimeError('Reference can only be set once.')
        self.ref = weakref.ref(obj)
    def __call__(self) -> Optional[T]:
        if self.ref is None:
            return None
        return self.ref()
There is custom initialization code to create an unbound reference in the child and bind it in the parent:
@attr.s(auto_attribs=True, frozen=True)
class Child:
    some_attribute: str = "whatever"
    def __attrs_post_init__(self):
        self.__dict__['parent'] = LateRef()

@attr.s(auto_attribs=True, frozen=True)
class Parent:
    children: List[Child] = attr.ib(converter=_deepclone)  # off-screen :-)
    def __attrs_post_init__(self):
        for child in self.children:
            child.parent.set(self)
I need to sneak in the Child.parent attribute through the backdoor due to the frozenness of child. Previously I put it as a class attribute with custom factory, but then it is turned into an attrs-attribute, which it really shouldn't be. E.g. I don't want the parent to turn up in hashing, comparison, str representation and so on.
The code is already working, BUT: As you see I would like to do things right and type-annotate as far as possible. Putting the parent directly in the __dict__ means that the usual tools don't know the expected type. In fact they might even complain that the parent attribute doesn't exist at all.
Long story short question: Is there a way to declare parent as class attribute with type annotation, while telling attr.s to skip / ignore it? Or is there another solution I am overlooking?
Edit: i.e. How to make a static type checker recognize the parent attribute's type, without making it an attr.ib.
If your only concern is to bypass the frozen-ness of a frozen class, I’d suggest taking the same route attrs itself takes in its generated __init__ methods and using object.__setattr__ directly. attrs has to work around the frozenness of its own classes too.
Might not be the most elegant way, but Python sadly isn’t really designed for immutability, so we have to work with what we have.
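A minimal sketch of that route, reusing the Child and LateRef from the question:

import attr

@attr.s(auto_attribs=True, frozen=True)
class Child:
    some_attribute: str = "whatever"
    def __attrs_post_init__(self):
        # bypass the frozen __setattr__ the same way attrs'
        # generated __init__ does for frozen classes
        object.__setattr__(self, 'parent', LateRef())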
That said, your _deepclone converter seems to indicate that you don’t need/want to add the original objects to the Parent? In that case attr.evolve() is the perfect tool for you.
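For example, with hypothetical field names:

import attr

@attr.s(auto_attribs=True, frozen=True)
class Point:
    x: int = 0
    y: int = 0

p = Point(1, 2)
q = attr.evolve(p, y=5)  # a new frozen instance; p stays untouched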
P.S. you can write just @attr.frozen now. auto_attribs is True by default. See https://www.attrs.org/en/stable/api.html#next-generation-apis
The solution I settled on (thanks @hynek for the discussion and your awesome work!):
@attr.frozen(slots=False)
class Child:
    some_attribute: str = "whatever"
    @functools.cached_property
    def parent(self) -> LateRef["Parent"]:
        return LateRef()

# Parent class stays the same
Works nearly the same (parent is stored as __dict__["parent"] on first access), and is understood flawlessly at least by VSCode.
I am creating a class called Environment which subclasses a dictionary. It looks something like this:
class Env(dict):
"An environment dict, containing the parent Env (or None) where created."
def __init__(self, parent=None):
self.parent = parent
# super().__init__() <-- not included
Pylint complains that:
super-init-not-called: __init__ method from base class 'dict' is not called.
What does doing super() on a dict type do? Is this something that is required to be done, and if so, why is it necessary?
After playing around with this a bit, I'm not so sure it does anything (or maybe it automatically does the super behind-the-scenes anyways). Here's an example:
class Env1(dict):
def __init__(self, parent=None):
self.parent = parent
super().__init__()
class Env2(dict):
def __init__(self, parent=None):
self.parent = parent
dir(Env1()) == dir(Env2()), len(dir(Env1))
(True, 48)
Pylint doesn't know what dict.__init__() does. It can't be sure whether there's some important setup logic in that method or not. That's why it warns you: so you can either call super().__init__() to be safe, or silence the warning if you're confident you don't need the call.
I'm pretty sure you don't need to call dict.__init__ when you want to initialize your instances as empty dictionaries. But that may be dependent on the implementation details of the dict class you're inheriting from (which does all of its setup in the C-API equivalent __new__). Another Python implementation might do more of the setup work for its dictionaries in __init__ and then your code wouldn't work correctly.
To be safe, it's generally a good idea to call your parent class's __init__ method. This is such broad advice that it's baked into Pylint. You can ignore those warnings, and even add comments to your code that will suppress the ones that don't apply to certain parts of your code (so they don't distract you from real issues). But most of the warnings are generally good to obey, even if they don't reflect a serious bug in your current code.
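For example, to silence this particular warning just for this method, using a standard Pylint pragma:

class Env(dict):
    def __init__(self, parent=None):  # pylint: disable=super-init-not-called
        self.parent = parent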
Calling super() is not required, but makes sense if you want to follow OOP, specifically, the Liskov substitution principle.
From Wikipedia, the Liskov substitution principle says:
If S is a subtype of T, then objects of type T may be replaced with objects of type S without altering any of the desirable properties of the program.
In plain words, let S be a subclass of T. If T has a method or attribute, then S also has it. Moreover, if T.some_method(arg1, arg2, ..., argn) is valid syntax, then S.some_method(arg1, arg2, ..., argn) is also valid syntax and the output is identical. (There is more to it, but I skip it for simplicity.)
What does this theory mean for our case? If dict sets any attributes (other than parent) in its __init__, instances of Env lose them, and the Liskov substitution principle is violated. Please check the following example.
class T:
def __init__(self):
self.t = 1
class S(T):
def __init__(self, parent=None):
self.parent = parent
s = S()
s.t
raises an AttributeError, because S.__init__ never calls T.__init__, so the attribute t is never created.
Why is there no error in our case? Because no attributes are created inside __init__ in the parent class dict. Therefore, the extension works well and does not violate the Liskov substitution principle.
To fix PyLint issue, change the code as follows:
class Env(dict):
def __init__(self, parent=None):
super().__init__() # get all parent's __init__ setup
self.parent = parent # add your attributes
It does just what the documentation teaches us: it calls the __init__ method of the parent class. This does all of the initialization behind the attributes you supposedly want to inherit from the parent.
In general, if you do not call super().__init__(), then your object has only the added parent field, plus access to methods and class attributes of the parent. This will work just fine (except for the warning) for any class that does not use initialization arguments -- or, in particular, one that does not initialize any fields on the fly.
Python built-in types do what you expect (or want), so your given use is okay.
In contrast, consider the case of extending your Env class to one called Context:
class Context(Env):
def __init__(self, upper, lower):
self.upper = upper
self.lower = lower
ctx = Context(7, 0)
print(ctx.upper)
print(ctx.parent)
At this last statement, you'll get a run-time fault: ctx has no attribute parent, since I never called super().__init__() in Context.__init__.
The context for me is a single int's worth of info I need to retain between calls to a function which modifies that value. I could use a global, but I know that's discouraged. For now I've used a default argument in the form of a list containing the int, taking advantage of mutability so that changes to the value are retained between calls, like so:
def increment(val, saved=[0]):
saved[0] += val
# do stuff
This function is being attached to a button via tkinter, like so:
button0 = Button(root, text="demo", command=lambda: increment(val))
which means there's no return value I can assign to a local variable outside the function.
How do people normally handle this? I mean, sure, the mutability trick works and all, but what if I needed to access and modify that value from multiple functions?
Can this not be done without setting up a class with static methods and internal attributes, etc?
Use a class. Use an instance member for keeping the state.
class Incrementable:
def __init__(self, initial_value = 0):
self.x = initial_value
def increment(self, val):
self.x += val
# do stuff
You can add a __call__ method for simulating a function call (e.g. if you need to be backward-compatible). Whether or not it is a good idea really depends on the context and on your specific use case.
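A sketch of that, extending the class above so existing call sites keep working:

class CallableIncrementable(Incrementable):
    def __call__(self, val):
        # instances can now be used like the old function
        self.increment(val)

increment = CallableIncrementable()
increment(5)        # looks like the original function call
print(increment.x)  # 5, and the state survives between calls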
Can this not be done without setting up a class with static methods and internal attributes, etc?
It can, but solutions not involving classes/objects with attributes are not "pythonic". It is so easy to define classes in python (the example above is only 5 simple lines), and it gives you maximal control and flexibility.
Using python's mutable-default-args "weirdness" (I'm not going to call it "a feature") should be considered a hack.
If you don't want to set up a class, your only1 other option is a global variable. You can't save it to a local variable because the command runs from within mainloop, not within the local scope in which it was created.
For example:
def increment_and_save(val):
    global saved
    saved = increment(val)  # note: increment() must return the new value for this to work

button0 = Button(root, text="demo", command=lambda: increment_and_save(val))
1 not literally true, since you can use all sorts of other ways to persist data, such as a database or a file, but I assume you want an in-memory solution.
Aren't you mixing up model and view?
The UI elements, such as buttons, should just delegate to your data model. As such, if you have a model with a persistent state (i.e. class with attributes), you can just implement a class method there that handles the required things if a button is clicked.
If you try to bind stateful things to your presentation (UI), you will consequently lose the desirable separation between said presentation and your data model.
In case you want to keep your data model access simple, you can think about a singleton instance, such that you don't need to carry a reference to that model as an argument to all UI elements (plus you don't need a global instance, even though this singleton holds some kind of globally available instance):
def singleton(cls):
    # replace the class with its single, eagerly created instance,
    # so the class name always refers to that one instance
    return cls()
@singleton
class TheDataModel(object):
def __init__(self):
self.x = 0
def on_button_demo(self):
self.x += 1
if __name__ == '__main__':
# If an element needs a reference to the model, just get
# the current instance from the decorated singleton:
model = TheDataModel
print('model', model.x)
model.on_button_demo()
print('model', model.x)
# In fact, it is a global instance that is available via
# the class name; even across imports in the same session
other = TheDataModel
print('other', other.x)
# Consequently, you can easily bind the model's methods
# to the action of any UI element
button0 = Button(root, text="demo", command=TheDataModel.on_button_demo)
But, and I have to point this out, be cautious when using singleton instances, as they easily lead to bad design. Set up a proper model and just make the access to the major model compound accessible as a singleton. Such unified access is often referred to as context.
We can make it context oriented by using context managers. The example is not specific to UI elements, but it illustrates the general scenario.
class MyContext(object):
# This is my container
# have whatever state to it
# support different operations
def __init__(self):
self.val = 0
def increment(self, val):
self.val += val
def get(self):
return self.val
def __enter__(self):
# do on creation
return self
def __exit__(self, type, value, traceback):
# do on exit
self.val = 0
def some_func(val, context=None):
    if context:
        context.increment(val)

def some_more(val, context=None):
    if context:
        context.increment(val)

def some_getter(context=None):
    if context:
        print(context.get())
with MyContext() as context:
some_func(5, context=context)
some_more(10, context=context)
some_getter(context=context)
I know that I can dynamically add an instance method to an object by doing something like:
import types
def my_method(self):
# logic of method
# ...
# instance is some instance of some class
instance.my_method = types.MethodType(my_method, instance)
Later on I can call instance.my_method() and self will be bound correctly and everything works.
Now, my question: how to do the exact same thing to obtain the behavior that decorating the new method with @property would give?
I would guess something like:
instance.my_method = types.MethodType(my_method, instance)
instance.my_method = property(instance.my_method)
But, doing that instance.my_method returns a property object.
The property descriptor object needs to live in the class, not in the instance, to have the effect you desire. If you don't want to alter the existing class in order to avoid altering the behavior of other instances, you'll need to make a "per-instance class", e.g.:
def addprop(inst, name, method):
cls = type(inst)
if not hasattr(cls, '__perinstance'):
cls = type(cls.__name__, (cls,), {})
cls.__perinstance = True
inst.__class__ = cls
setattr(cls, name, property(method))
I'm marking these special "per-instance" classes with an attribute to avoid needlessly making multiple ones if you're doing several addprop calls on the same instance.
Note that, like for other uses of property, you need the class in play to be new-style (typically obtained by inheriting directly or indirectly from object), not the ancient legacy style (dropped in Python 3) that's assigned by default to a class without bases.
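Usage might look like this (hypothetical class and property names):

class Gadget(object):
    pass

g = Gadget()
addprop(g, 'answer', lambda self: 42)
print(g.answer)  # 42, computed through the property
g2 = Gadget()
# g2 has no 'answer': only g's class was swapped for the per-instance subclass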
Since this question isn't asking about adding a property only to a specific instance, the following method can be used to add a property to the class itself; this will expose the property to all instances of the class. YMMV.
cls = type(my_instance)
cls.my_prop = property(lambda self: "hello world")
print(my_instance.my_prop)
# >>> hello world
Note: I'm adding another answer because I think @Alex Martelli, while correct, achieves the desired result by creating a new class that holds the property. This answer is intended to be more direct/straightforward, without abstracting what's going on into its own method.
I would like to control which methods appear when a user uses tab-completion on a custom object in ipython - in particular, I want to hide functions that I have deprecated. I still want these methods to be callable, but I don't want users to see them and start using them if they are inspecting the object. Is this something that is possible?
Partial answer for you. I'll post the example code and then explain why it's only a partial answer.
Code:
class hidden(object): # or whatever its parent class is
def __init__(self):
self.value = 4
def show(self):
return self.value
def change(self,n):
self.value = n
def __getattr__(self, attrname):
# put the dep'd method/attribute names here
deprecateds = ['dep_show','dep_change']
if attrname in deprecateds:
print("These aren't the methods you're looking for.")
def dep_change(n):
self.value = n
def dep_show():
return self.value
return eval(attrname)
else:
raise AttributeError(attrname)
So now the caveat: they're not methods (note the lack of self as the first parameter). If you need your users (or your code) to be able to call im_class, im_func, or im_self on any of your deprecated methods, then this hack won't work. Also, I'm pretty sure there's going to be a performance hit, because you're defining each dep'd function inside __getattr__. This won't affect your other attribute lookups (had I put them in __getattribute__, that would be a different matter), but it will slow down access to those deprecated methods. This can be (largely, but not entirely) negated by putting each function definition inside its own if block instead of doing a list-membership check, but, depending on how big your functions are, that could be really annoying to maintain.
UPDATE:
1) If you want to make the deprecated functions methods (and you do), just use
import types
return types.MethodType(eval(attrname), self)
instead of
return eval(attrname)
in the above snippet, and add self as the first argument to the function defs. It turns them into instancemethods (so you can use im_class, im_func, and im_self to your heart's content).
2) If the __getattr__ hook didn't thrill you, there's another option (that I know of), albeit with its own caveats, and we'll get to those: put the deprecated function definitions inside __init__, and hide them with a custom __dir__. Here's what the above code would look like done this way:
class hidden(object):
def __init__(self):
self.value = 4
from types import MethodType
def dep_show(self):
return self.value
self.__setattr__('dep_show', MethodType(dep_show, self))
def dep_change(self, n):
self.value = n
self.__setattr__('dep_change', MethodType(dep_change, self))
def show(self):
return self.value
def change(self, n):
self.value = n
def __dir__(self):
heritage = dir(super(self.__class__, self)) # inherited attributes
hide = ['dep_show', 'dep_change']
show = [k for k in list(self.__class__.__dict__) + list(self.__dict__) if k not in heritage + hide]
return sorted(heritage + show)
The advantage here is that you're not defining the functions anew every lookup, which nets you speed. The disadvantage here is that because you're not defining functions anew each lookup, they have to 'persist' (if you will). So, while the custom __dir__ method hides your deprecateds from dir(hiddenObj) and, therefore, IPython's tab-completion, they still exist in the instance's __dict__ attribute, where users can discover them.
Seems like there is a special magic method for introspection which is called by dir(): __dir__(). Isn't it what you are looking for?
The DeprecationWarning isn't emitted until the method is called, so you'd have to have a separate attribute on the class that stores the names of deprecated methods, then check that before suggesting a completion.
Alternatively, you could walk the AST for the method looking for DeprecationWarning, but that will fail if either the class is defined in C, or if the method may emit a DeprecationWarning based on the type or value of the arguments.
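One way to combine that with the __dir__ hook mentioned above, assuming a hypothetical class-level list of deprecated names:

class MyObject:
    _deprecated = ['old_method']  # names to hide from completion

    def old_method(self):  # still callable, just not advertised
        return 'legacy'

    def new_method(self):
        return 'current'

    def __dir__(self):
        # drop deprecated names from the default listing that
        # IPython's completer consults
        return [name for name in super().__dir__()
                if name not in self._deprecated]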
About the completion mechanism in IPython, it is documented here:
http://ipython.scipy.org/doc/manual/html/api/generated/IPython.core.completer.html#ipcompleter
But a really interesting example for you is the traits completer, that does precisely what you want to do: it hides some methods (based on their names) from the autocompletion.
Here is the code:
http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/IPython/Extensions/ipy_traits_completer.py