The context for me is a single int's worth of info I need to retain between calls to a function which modifies that value. I could use a global, but I know that's discouraged. For now I've used a default argument in the form of a list containing the int, taking advantage of mutability so that changes to the value are retained between calls, like so:
def increment(val, saved=[0]):
saved[0] += val
# do stuff
This function is being attached to a button via tkinter, like so:
button0 = Button(root, text="demo", command=lambda: increment(val))
which means there's no return value I can assign to a local variable outside the function.
How do people normally handle this? I mean, sure, the mutability trick works and all, but what if I needed to access and modify that value from multiple functions?
Can this not be done without setting up a class with static methods and internal attributes, etc?
Use a class, and use an instance attribute to keep the state.
class Incrementable:
    def __init__(self, initial_value=0):
        self.x = initial_value

    def increment(self, val):
        self.x += val
        # do stuff
You can add a __call__ method to simulate a function call (e.g. if you need to be backward-compatible), as sketched below. Whether or not that is a good idea really depends on the context and on your specific use case.
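For example, a minimal sketch (the usage lines at the bottom are illustrative):

class Incrementable:
    def __init__(self, initial_value=0):
        self.x = initial_value

    def increment(self, val):
        self.x += val
        # do stuff

    def __call__(self, val):
        # Make the instance usable wherever the old function was,
        # e.g. command=lambda: inc(val) in the tkinter example.
        self.increment(val)

inc = Incrementable()
inc(5)  # equivalent to inc.increment(5)
assert inc.x == 5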
Can this not be done without setting up a class with static methods and internal attributes, etc?
It can, but solutions not involving classes/objects with attributes are not "pythonic". It is so easy to define classes in Python (the example above is only 5 simple lines), and it gives you maximal control and flexibility.
Using Python's mutable-default-args "weirdness" (I'm not going to call it "a feature") should be considered a hack.
If you don't want to set up a class, your only¹ other option is a global variable. You can't save it to a local variable because the command runs from within mainloop, not within the local scope in which it was created.
For example:
def increment_and_save(val):
    global saved
    saved = increment(val)  # assumes increment() returns the new value

button0 = Button(root, text="demo", command=lambda: increment_and_save(val))
¹ Not literally true, since you can use all sorts of other ways to persist data, such as a database or a file, but I assume you want an in-memory solution.
Aren't you mixing up model and view?
The UI elements, such as buttons, should just delegate to your data model. As such, if you have a model with a persistent state (i.e. class with attributes), you can just implement a class method there that handles the required things if a button is clicked.
If you try to bind stateful things to your presentation (UI), you will consequently lose the desirable separation between said presentation and your data model.
If you want to keep access to your data model simple, you can consider a singleton instance, so that you don't need to carry a reference to the model as an argument into every UI element (and you don't need a global variable, even though this singleton does hold some kind of globally available instance):
def singleton(cls):
    # Replace the class with its single instance at decoration time.
    instance = cls()
    # Special methods are looked up on the type, so patch the class
    # to make calling the instance return the instance itself.
    cls.__call__ = lambda self: self
    return instance
@singleton
class TheDataModel(object):
def __init__(self):
self.x = 0
def on_button_demo(self):
self.x += 1
if __name__ == '__main__':
# If an element needs a reference to the model, just get
# the current instance from the decorated singleton:
model = TheDataModel
print('model', model.x)
model.on_button_demo()
print('model', model.x)
# In fact, it is a global instance that is available via
# the class name; even across imports in the same session
other = TheDataModel
print('other', other.x)
# Consequently, you can easily bind the model's methods
# to the action of any UI element
button0 = Button(root, text="demo", command=TheDataModel.on_button_demo)
But, and I have to point this out, be cautious when using singleton instances, as they easily lead to bad design. Set up a proper model and expose only the access to the top-level model compound as a singleton. Such unified access is often referred to as a context.
We can make it context-oriented by using context managers. The example is not specific to UI elements, but illustrates the general scenario.
class MyContext(object):
    # This is my container:
    # give it whatever state you need,
    # and support different operations.
    def __init__(self):
        self.val = 0

    def increment(self, val):
        self.val += val

    def get(self):
        return self.val

    def __enter__(self):
        # do on creation
        return self

    def __exit__(self, type, value, traceback):
        # do on exit
        self.val = 0
def some_func(val, context=None):
    if context:
        context.increment(val)

def some_more(val, context=None):
    if context:
        context.increment(val)

def some_getter(context=None):
    if context:
        print(context.get())
with MyContext() as context:
some_func(5, context=context)
some_more(10, context=context)
some_getter(context=context)
I was looking into the following code.
On many occasions the __init__ method is not really used; instead there is a custom initialize function, as in the following example:
class CustomDatasetDataLoader:
    def __init__(self):
        pass

    def initialize(self, opt):
        # ...
This is then called as:
data_loader = CustomDatasetDataLoader()
# other instance method is called
data_loader.initialize(opt)
I see the problem that attributes used in other instance methods could still be undefined if one forgets to call this custom initialize function. But what are the benefits of this approach?
Some APIs out in the wild (such as inside setuptools) have a similar kind of thing, and they use it to their advantage. The __init__ call could be used for the low-level internal API, while public constructors are defined as classmethods for the different ways one might construct objects. For instance, in pkg_resources.EntryPoint, the way to create instances of this class is to use the parse classmethod. A similar approach can be followed if custom initialization is desired:
class CustomDatasetDataLoader(object):
    @classmethod
    def create(cls):
        """Standard creation."""
        return cls()

    @classmethod
    def create_with_initialization(cls, opt):
        """Create with special options."""
        inst = cls()
        # assign things from opt to inst, like
        # inst.some_update_method(opt.something)
        # inst.attr = opt.some_attr
        return inst
This way users of the class will not need two lines of code to do what a single line could do, they can just simply call CustomDatasetDataLoader.create_with_initialization(some_obj) if that is what they want, or call the other classmethod to construct an instance of this class.
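A hypothetical usage sketch (Options here stands in for whatever options object the loader expects):

class Options(object):
    some_attr = "some value"

opt = Options()

# One line instead of construct-then-initialize:
data_loader = CustomDatasetDataLoader.create_with_initialization(opt)

# Or, when no options are needed:
plain_loader = CustomDatasetDataLoader.create()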
Edit: I see, you had an example linked (wish underlining links didn't go out of fashion) - that particular usage and implementation, I feel, is a poor approach, when a classmethod (or just relying on the standard __init__) would be sufficient.
However, if that initialize function were to be an interface with some other system that receives an object of a particular type to invoke some method with it (e.g. something akin to the visitor pattern), it might make sense; but as it is, it really doesn't.
I have a class that will always have only one object at a time. I'm just starting OOP in Python and I was wondering which is the better approach: to assign an instance of this class to a variable and operate on that variable, or rather have the instance referenced in a class variable instead. Here is an example of what I mean:
Referenced instance:
class Transaction(object):
current_transaction = None
in_progress = False
def __init__(self):
self.__class__.current_transaction = self
self.__class__.in_progress = True
self.name = 'abc'
self.value = 50
def update(self):
do_smth()
Transaction()
if Transaction.in_progress:
Transaction.current_transaction.update()
print(Transaction.current_transaction.name)
print(Transaction.current_transaction.value)
Instance in a variable:
class Transaction(object):
def __init__(self):
self.name = 'abc'
self.value = 50
def update(self):
do_smth()
current_transaction = Transaction()
in_progress = True
if in_progress:
current_transaction.update()
print(current_transaction.name)
print(current_transaction.value)
It's possible to see that you've encapsulated too much in the first case just by comparing the overall readability of the code: the second is much cleaner.
A better way to implement the first option is to use class methods: decorate all your methods with @classmethod and then call them as Transaction.method(), as in the sketch below.
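A minimal sketch of that variant, assuming a do_smth() helper as in the question (start() is my stand-in for the work done by __init__):

def do_smth():
    pass  # placeholder

class Transaction(object):
    in_progress = False
    name = None
    value = None

    @classmethod
    def start(cls):
        cls.in_progress = True
        cls.name = 'abc'
        cls.value = 50

    @classmethod
    def update(cls):
        do_smth()

Transaction.start()
if Transaction.in_progress:
    Transaction.update()
    print(Transaction.name)
    print(Transaction.value)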
There's no practical difference in code quality between these two options. However, assuming that the class is final, that is, without derived classes, I would go for a third choice: use the module as a singleton and kill the class. This would be the most compact and most readable choice. You don't need classes to create singletons.
I think the first version doesn't make much sense, and the second version of your code would be better in almost all situations. It can sometimes be useful to write a Singleton class (where only one instance ever exists) by overriding __new__ to always return the saved instance (after it's been created the first time). But usually you don't need that unless you're wrapping some external resource that really only ever makes sense to exist once.
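A minimal sketch of such a __new__ override (the name Connection is illustrative, echoing the external-resource case):

class Connection(object):
    _instance = None

    def __new__(cls):
        # Create the single instance on first use, then always return it.
        if cls._instance is None:
            cls._instance = super(Connection, cls).__new__(cls)
        return cls._instance

a = Connection()
b = Connection()
assert a is b  # only one instance ever exists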
If your other code needs to share a single instance, there are other ways to do so (e.g. a global variable in some module or a constructor argument for each other object that needs a reference).
Note that if your instances have a very well defined life cycle, with specific events that should happen when they're created and destroyed, and unknown code running and using the object in between, the context manager protocol may be something you should look at, as it lets you use your instances in with statements:
with Transaction() as trans:
    # The Transaction will be notified if anything inside the with
    # block raises an exception that is not caught within the block
    # (so it can do a rollback if it wants to).
    trans.whatever()
    other_stuff()
    trans.foo()
# The Transaction is cleaned up (e.g. committed) when the with block ends.
foo()
Implementing the context manager protocol requires an __enter__ and __exit__ method.
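A minimal sketch of that protocol for the Transaction example (commit and rollback are hypothetical method names):

class Transaction(object):
    def commit(self):
        print("committed")

    def rollback(self):
        print("rolled back")

    def __enter__(self):
        # Runs at the start of the with statement; the return
        # value is what gets bound by "as trans".
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs when the with block ends; exc_type is None on a
        # normal exit and set if an exception escaped the block.
        if exc_type is None:
            self.commit()
        else:
            self.rollback()
        return False  # don't suppress the exception

with Transaction() as trans:
    pass  # prints "committed" when the block exits cleanly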
Is it possible to modify/extend an inherited method from the middle? I realize I can call super and get the original method, then put code either before or after that call to extend the original. Is there a technique for doing something similar, but from the middle of a method?
class Base():
def __init__(self):
self.size = 4
def get_data(self):
data = []
for num in range(self.size):
data.append("doing stuff")
data.append("doing stuff")
### add here from child##
data.append("doing stuff")
data.append("doing stuff")
return data
class MyClass(Base):
def __init__(self):
super().__init__()
def get_data(self):
# inherited parent code
# Do something else here
# inherited parent code
Despite Python's powerful introspection and code-modifying capabilities, there is no "clean" way of doing this. It could be done only by directly modifying the bytecode of the original function and shoehorning a new method call in there - which would also imply creating new code and function objects - definitely not something to do in production code, not least because bytecode is not guaranteed to be unchanged across Python versions or Python implementations.
Refactoring the original method:
But it can be done if the original method is coded in a way that is "aware" of points where subclasses might want to run additional code (maybe even by being split up into several methods):
For your example, you'd have something like:
class Base():
def __init__(self):
self.size = 4
def get_data(self):
self.data = data = []
for num in range(self.size):
data.append("doing stuff")
data.append("doing stuff")
self.do_extra_things_with_data()
data.append("doing stuff")
data.append("doing stuff")
return data
    def do_extra_things_with_data(self):
        """Override this on subclasses"""
class MyClass(Base):
def __init__(self):
super().__init__()
    def do_extra_things_with_data(self):
        print(len(self.data), "objects defined so far")
One technical name for this is "slot". (It is used for templating in certain web frameworks - the derived page uses the parent template for columns and general layout, and defines "slots" for the content areas)
Using descriptors:
One other way of doing this is to use descriptors such as "properties": you can't change the superclass' method code, but if the code retrieves instance attributes for its computations, you can define those attributes as properties on the subclasses to run custom code.
Let's suppose your method makes use of the self.size attribute, and it is exactly in computing that value that you might want to run more code. Keeping exactly the same Base class, you can do:
class MyClass(Base):
    @property
    def size(self):
        # Put the extra calculation that retrieves the
        # dynamic value of self.size here.
        value = 4  # placeholder computation
        return value
Is there a technique of doing something similar but from the middle of
a method?
Not really. The def compiles into a function object that has a self-contained code object that is usually treated as being opaque.
When a need like this arises, it is usually an indication that the parent method needs to be split into reusable components that can be called separately.
If you can't refactor the parent method, then the unfortunate alternative is that the subclass will have to override the method and duplicate some of the code from the parent.
In short, Pythonic object-oriented design treats methods and attributes as the atomic units of composability.
I have a set of related classes that all inherit from one base class. I would like to use a factory method to instantiate objects for these classes. I want to do this because then I can store the objects in a dictionary keyed by the class name before returning the object to the caller. Then if there is a request for an object of a particular class, I can check to see whether one already exists in my dictionary. If not, I'll instantiate it and add it to the dictionary. If so, then I'll return the existing object from the dictionary. This will essentially turn all the classes in my module into singletons.
I want to do this because the base class that they all inherit from does some automatic wrapping of the functions in the subclasses, and I don't want the functions to get wrapped more than once, which is what happens currently if two objects of the same class are created.
The only way I can think of doing this is to check the stacktrace in the __init__() method of the base class, which will always be called, and to throw an exception if the stacktrace does not show that the request to make the object is coming from the factory function.
Is this a good idea?
Edit: Here is the source code for my base class. I've been told that I need to figure out metaclasses to accomplish this more elegantly, but this is what I have for now. All Page objects use the same Selenium Webdriver instance, which is in the driver module imported at the top. This driver is very expensive to initialize -- it is initialized the first time a LoginPage is created. After it is initialized the initialize() method will return the existing driver instead of creating a new one. The idea is that the user must begin by creating a LoginPage. There will eventually be dozens of Page classes defined and they will be used by unit testing code to verify that the behavior of a website is correct.
from driver import get_driver, urlpath, initialize
from settings import urlpaths
class DriverPageMismatchException(Exception):
pass
class URLVerifyingPage(object):
# we add logic in __init__() to check the expected urlpath for the page
# against the urlpath that the driver is showing - we only want the page's
# methods to be invokable if the driver is actually at the appropriate page.
# If the driver shows a different urlpath than the page is supposed to
# have, the method should throw a DriverPageMismatchException
def __init__(self):
self.driver = get_driver()
self._adjust_methods(self.__class__)
def _adjust_methods(self, cls):
        for attr, val in cls.__dict__.items():
            if callable(val) and not attr.startswith("_"):
                print("adjusting: " + str(attr) + " - " + str(val))
setattr(
cls,
attr,
self._add_wrapper_to_confirm_page_matches_driver(val)
)
for base in cls.__bases__:
if base.__name__ == 'URLVerifyingPage': break
self._adjust_methods(base)
def _add_wrapper_to_confirm_page_matches_driver(self, page_method):
def _wrapper(self, *args, **kwargs):
if urlpath() != urlpaths[self.__class__.__name__]:
raise DriverPageMismatchException(
"path is '"+urlpath()+
"' but '"+urlpaths[self.__class.__name__]+"' expected "+
"for "+self.__class.__name__
)
return page_method(self, *args, **kwargs)
return _wrapper
class LoginPage(URLVerifyingPage):
def __init__(self, username=username, password=password, baseurl="http://example.com/"):
self.username = username
self.password = password
self.driver = initialize(baseurl)
super(LoginPage, self).__init__()
def login(self):
driver.find_element_by_id("username").clear()
driver.find_element_by_id("username").send_keys(self.username)
driver.find_element_by_id("password").clear()
driver.find_element_by_id("password").send_keys(self.password)
driver.find_element_by_id("login_button").click()
return HomePage()
class HomePage(URLVerifyingPage):
def some_method(self):
...
return SomePage()
def many_more_methods(self):
...
return ManyMorePages()
It's no big deal if a page gets instantiated a handful of times -- the methods will just get wrapped a handful of times and a handful of unnecessary checks will take place, but everything will still work. But it would be bad if a page was instantiated dozens or hundreds or tens of thousands of times. I could just put a flag in the class definition for each page and check to see if the methods have already been wrapped, but I like the idea of keeping the class definitions pure and clean and shoving all the hocus-pocus into a deep corner of my system where no one can see it and it just works.
In Python, it's almost never worth trying to "force" anything. Whatever you come up with, someone can get around it by monkeypatching your class, copying and editing the source, fooling around with bytecode, etc.
So, just write your factory, and document that as the right way to get an instance of your class, and expect anyone who writes code using your classes to understand TOOWTDI, and not violate it unless she really knows what she's doing and is willing to figure out and deal with the consequences.
If you're just trying to prevent accidents, rather than intentional "misuse", that's a different story. In fact, it's just standard design-by-contract: check the invariant. Of course at this point, SillyBaseClass is already screwed up, and it's too late to repair it, and all you can do is assert, raise, log, or whatever else is appropriate. But that's what you want: it's a logic error in the application, and the only thing to do is get the programmer to fix it, so assert is probably exactly what you want.
So:
class SillyBaseClass:
singletons = {}
class Foo(SillyBaseClass):
def __init__(self):
assert self.__class__ not in SillyBaseClass.singletons
def get_foo():
if Foo not in SillyBaseClass.singletons:
SillyBaseClass.singletons[Foo] = Foo()
return SillyBaseClass.singletons[Foo]
If you really do want to stop things from getting this far, you can check the invariant earlier, in the __new__ method, but unless "SillyBaseClass got screwed up" is equivalent to "launch the nukes", why bother?
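For completeness, a sketch of checking the same invariant in __new__, so the duplicate never even gets constructed:

class SillyBaseClass:
    singletons = {}

    def __new__(cls, *args, **kwargs):
        # Catch the mistake before the duplicate instance exists.
        assert cls not in SillyBaseClass.singletons, (
            "%s should be created through its factory" % cls.__name__)
        return super().__new__(cls)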
It sounds like you want to provide a __new__ implementation. Something like:
class MySingletonBase(object):
    instance_cache = {}

    def __new__(cls, arg1, arg2):
        if cls in MySingletonBase.instance_cache:
            return MySingletonBase.instance_cache[cls]
        self = super(MySingletonBase, cls).__new__(cls)
        MySingletonBase.instance_cache[cls] = self
        return self
Rather than adding complex code to catch mistakes at runtime, I'd first try to use convention to guide users of your module to do the right thing on their own.
Give your classes "private" names (prefixed by an underscore), give them names that suggest they shouldn't be instantiated (eg _Internal...) and make your factory function "public".
That is, something like this:
class _InternalSubClassOne(_BaseClass):
...
class _InternalSubClassTwo(_BaseClass):
...
# An example factory function.
def new_object(arg):
return _InternalSubClassOne() if arg == 'one' else _InternalSubClassTwo()
I'd also add docstrings or comments to each class, like "Don't instantiate this class by hand, use the factory method new_object."
You can also just nest classes in a factory method, as described here:
https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Factory.html#preventing-direct-creation
Working example from the mentioned source:
# Factory/shapefact1/NestedShapeFactory.py
import random
class Shape(object):
types = []
def factory(type):
class Circle(Shape):
def draw(self): print("Circle.draw")
def erase(self): print("Circle.erase")
class Square(Shape):
def draw(self): print("Square.draw")
def erase(self): print("Square.erase")
if type == "Circle": return Circle()
if type == "Square": return Square()
assert 0, "Bad shape creation: " + type
def shapeNameGen(n):
for i in range(n):
yield factory(random.choice(["Circle", "Square"]))
# Circle() # Not defined
for shape in shapeNameGen(7):
shape.draw()
shape.erase()
I'm not a fan of this solution; I just want to add it as one more option.
Is there a design pattern that describes the following setup? Does this design suffer from any major issues?
Class Widget instances can be built either by a "dumb" constructor Widget.__init__(), or by an "intelligent" factory method Workbench.upgrade_widget():
class Widget:
    def __init__(self, abc, dfn, ...):
        self.abc = abc
        self.dfn = dfn
...
...
class Workbench:
# widget factory function, which uses data from the workbench instance
def upgrade_widget(self, widget, upgrade_info):
        widget = Widget(widget.abc, widget.dfn, ...)
# I will modify the widget's attributes
...
self.rearrange_widget(widget, xyz) # modifies widget's internal state
...
widget.abc = ... # also modifies widget's state
...
return widget
# uses data from the workbench instance
def rearrange_widget(self, widget, xyz):
...
# this class does other stuff too
...
Widgets are immutable in the sense that instances must not be modified after they are fully initialized (a lot of code depends on this invariant). But I find that modifying widgets while they are being initialized is very convenient, and makes the code much cleaner.
My main concern is that I modify "immutable" widgets in a different class. If it was only in upgrade_widget, I might live with it since it does not modify the widget passed to it. But that method relies on other Workbench methods (rearrange_widget) which modifies the widget it received as an argument. I feel like I'm losing control over where this "immutable" instance can actually be modified - someone may accidentally call rearrange_widget with a widget that's already fully initialized, leading to a disaster.
How are you enforcing immutability of the Widget now?
What if you add a 'locked' property to your widget, and override __setattr__ to check that property:
class Widget(object):
__locked = False
def __init__(self,a,b,c,locked=True):
...
self.__locked = locked
def lock(self):
self.__locked = True
def is_locked(self):
return self.__locked
    def __setattr__(self, *args, **kw):
        if self.__locked:
            # define your own exception type rather than using Exception
            raise Exception('immutable')
        return super(Widget, self).__setattr__(*args, **kw)
then in the factory:
class Workbench(object):
def upgrade_widget(self,widget,upgrade_info):
widget = Widget(widget.a,widget.b,...,locked=False)
self.rearrange_widget(widget, blah)
widget.c = 1337
widget.lock()
return widget
In general use you can be pretty certain that nothing funny happens to the instance once it's locked. Any method which cares about the immutability of the widget should also check is_locked() on that widget. For example, rearrange_widget should check that the widget is unlocked before doing anything.
This is notwithstanding malicious tampering with the instances, which can happen anyway. It also doesn't prevent attributes being changed by the instance's own methods.
Note that the code (pseudo python) I wrote above isn't tested, but hopefully it illustrates the general idea of how to deal with your main concern.
Oh, and I'm not sure if there's a particular name for this pattern.
@chees: A neater way of doing that is to modify __dict__ in __init__ and make __setattr__ always raise an exception (btw, it's not a good idea to raise Exception - it's just too general):
class Widget:
def __init__(self, args):
self.__dict__['args'] = args
def __setattr__(self, name, value):
raise TypeError
And modifying it in the same way in Workbench (i.e. using __dict__) is a constant reminder that you're doing something you shouldn't really be doing.
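For example, a minimal sketch of the Workbench side (the attribute name abc comes from the question; the assigned value is a placeholder):

class Workbench(object):
    def rearrange_widget(self, widget, xyz):
        # Writing through __dict__ bypasses the raising __setattr__;
        # each such line visibly marks a deliberate mutation of the
        # "immutable" widget during initialization.
        widget.__dict__['abc'] = xyz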