Scanning for thread violations with Tkinter - python

We are just about to finish a very large update to our application, which is built with Python 2.5 and Tkinter, and sadly the following error has crept in:
alloc: invalid block: 06807CE7: 1 0 0
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
We've seen this before and it is usually a Tcl interpreter error caused when a non-GUI thread tries to access Tk via Tkinter in any way (Tk not being thread safe). The error pops up on application close, after the Python interpreter is finished with our code. This error is very hard to reproduce and I'm thinking I will have to scan all threads in the system to see if they access Tk when they shouldn't.
I'm looking for a magic Python trick to help with this. All Tkinter widgets we use are first subclassed and inherit from our own Widget base class.
With this in mind, I'm looking for a way to add the following check to the beginning of every method in the widget subclasses:
import thread
if thread.get_ident() != TKINTER_GUI_THREAD_ID:
    assert 0, "Invalid thread accessing Tkinter!"
Decorators come to mind as a partial solution. I do not want to add decorators manually to each method, however. Is there a way I can add the decorator to all methods of a class that inherits from our Widget base class? Or is there a better way to do all this? Or does anyone have more info about this error?

I don't know if your approach is good, as I don't know Tkinter.
But here's a sample of how to decorate all class methods using a metaclass.
import functools

# This is the decorator
def my_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print 'calling', func.__name__, 'from decorator'
        return func(*args, **kwargs)
    return wrapper

# This is the metaclass
class DecorateMeta(type):
    def __new__(cls, name, bases, attrs):
        for key in attrs:
            # Skip special methods, e.g. __init__
            if not key.startswith('__') and callable(attrs[key]):
                attrs[key] = my_decorator(attrs[key])
        return super(DecorateMeta, cls).__new__(cls, name, bases, attrs)

# This is a sample class that uses the metaclass
class MyClass(object):
    __metaclass__ = DecorateMeta

    def __init__(self):
        print 'in __init__()'

    def test(self):
        print 'in test()'

obj = MyClass()
obj.test()
The metaclass overrides the class creation. It loops through all the attributes of the class being created and decorates all callable attributes that have a "regular" name with my_decorator.
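Applied to the question's actual goal, the decorator body would carry the thread check rather than a print. Here is a minimal sketch under stated assumptions: TKINTER_GUI_THREAD_ID is assumed to be recorded once by the main GUI thread at startup, and check_gui_thread/ThreadCheckMeta are made-up names (Python 2, to match the question):

import thread
import functools

# Assumption: this module is first imported by the main GUI thread,
# so get_ident() here records the Tkinter main loop's thread id.
TKINTER_GUI_THREAD_ID = thread.get_ident()

def check_gui_thread(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Fail loudly on any call made from outside the Tkinter main loop.
        assert thread.get_ident() == TKINTER_GUI_THREAD_ID, \
            "Invalid thread accessing Tkinter!"
        return func(*args, **kwargs)
    return wrapper

class ThreadCheckMeta(type):
    def __new__(cls, name, bases, attrs):
        # Wrap every "regular" method, exactly as DecorateMeta does above.
        for key, value in attrs.items():
            if not key.startswith('__') and callable(value):
                attrs[key] = check_gui_thread(value)
        return super(ThreadCheckMeta, cls).__new__(cls, name, bases, attrs)

Setting __metaclass__ = ThreadCheckMeta on the Widget base class should then cover every subclass defined afterwards, since subclasses inherit their base's metaclass.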

I went with a slightly easier method. I used the __getattribute__ method. The code is as follows:
def __getattribute__(self, name):
    import ApplicationInfo
    import thread, traceback
    if ApplicationInfo.main_loop_thread_id != thread.get_ident():
        print "Thread GUI violation"
        traceback.print_stack()
    return object.__getattribute__(self, name)
And sure enough we found one obscure place where we were accessing state from within TK while not being in the main GUI thread.
Although I must admit I need to review my python, feeling noobish looking at your example.

Related

Unit testing a Python 3 metaclass

I have a metaclass that sets a class property my_new_property when it creates the class. The file is named my_meta and the code is this:
def remote_function():
    # Get some data from a request to other site
    return 'remote_response'

class MyMeta(type):
    def __new__(cls, *args, **kwargs):
        print("It is in")
        obj = super().__new__(cls, *args, **kwargs)
        new_value = remote_function()
        setattr(obj, 'my_new_property', new_value)
        return obj
The functionality to set the property works fine; however, when writing the test file tests.py with only one line of code:
from my_meta import MyMeta
The meta code is executed. As a consequence, it executes the real method remote_function.
The question is... as the meta code is executed only by using the import from the test file, how could I mock the method remote_function?
Importing the file as you show us won't trigger execution of the metaclass code.
However, importing any file (including the one where the metaclass is) where there is a class that makes use of this metaclass will run the code in the metaclass __new__ method - as parsing a class body defined with the class statement does just that: it calls the metaclass to create a new class instance.
So, the recommendation is: do not have your metaclass's __new__ or __init__ methods trigger side effects, like accessing remote stuff, if that can't be done in a seamless and innocuous way. Not only testing, but importing modules of your app in a Python shell will also trigger the behavior.
You could have a method in the metaclass to initialize the remote value, and when you are about to actually use it, you'd explicitly call such a "remote_init" - like in:
class MyMeta(type):
    def __new__(cls, *args, **kwargs):
        print("It is in")
        obj = super().__new__(cls, *args, **kwargs)
        return obj

    def remote_init(cls):
        if hasattr(cls, "my_new_property"):
            return
        cls.my_new_property = remote_function()
The remote_init method, being placed in the metaclass, will behave just like a class method for the instantiated classes, but won't be visible (for dir or attribute retrieval) from the class instances.
This is the safest thing to do.
If you want to avoid the explicit step, which is understandable, you could use a setting in a configuration file, and a test inside remote_function on whether to trigger the actual networking code or just return a local, dummy value. You then make the configurations differ for testing/staging/production.
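A sketch of that configuration toggle (the settings module and its TESTING flag are hypothetical names for whatever configuration mechanism you use):

import settings  # hypothetical configuration module

def remote_function():
    # Return a local, dummy value when not running in production.
    if getattr(settings, 'TESTING', False):
        return 'dummy_response'
    # ... real networking code would go here ...
    return 'remote_response'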
And, finally, you could just separate remote_function into another module, import that first, patch it out with unittest.mock.patch, and then import the module containing the metaclass - when it runs and calls the function, it will get the patched version. This will work for your tests, but won't fix the problem of triggering side effects on other occasions (like in other tests that load this module).
Of course, for that to work you have to import the module containing the metaclass and any classes defined with it inside your test function, in a region where mock.patch is active, not importing it at the top of the file. There is just no problem in importing things inside test methods to have control over the importing process itself.
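A sketch of that last approach (my_meta is the module from the question; my_classes is a hypothetical module containing a class that uses MyMeta; this only works if my_classes has not already been imported elsewhere):

import unittest
from unittest import mock

class MetaTest(unittest.TestCase):
    def test_my_new_property_is_mocked(self):
        # Patch remote_function in the module the metaclass looks it up in,
        # *before* any class using MyMeta is created.
        with mock.patch('my_meta.remote_function', return_value='stub'):
            # Importing here runs the class body - and thus the metaclass -
            # while the patch is active.
            from my_classes import MyClass
        self.assertEqual(MyClass.my_new_property, 'stub')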

Better way to handle two-way binding in PyQt?

Currently I'm using PyQt, which I like very much so far. However, I need two-way bindings, and from the sparse information I could find on the internet this is the way to do it: create a pyqtProperty, attach a getter, a setter and a signal (which must be emitted whenever the value has changed, in order to notify the GUI).
from PyQt5.QtCore import pyqtSignal, pyqtProperty, QObject

class Test(QObject):
    signal = pyqtSignal()

    def getName(self):
        print("name gotten")
        return self._name

    def setName(self, name):
        print("name changed to " + name)
        self.signal.emit()
        self._name = name

    name = pyqtProperty('QString', fget=getName, fset=setName, notify=signal)

    def __init__(self, parent=None):
        super().__init__(parent)
        # Initialise the value of the properties.
        self._name = 'asdf'
This works as expected. However, I'm not sure if I'm indeed using it correctly, but this looks absolutely horrible to me. In fact, it smells so bad that I'm inclined to open a window. Instead of being able to simply declare an attribute, I now have to create a setter and a getter (before the declaration even, meaning there will be declarations halfway through the source file) and a signal for each attribute. This leads to a lot of duplicate code, and unreadable source files.
What would be the best way to change this to one or two lines? For example through automatic code generation, metaprogramming or macros (I've looked into MacroPy, but was not sure if this would be the way to go). So something like this:
name = # magic Python voodoo, automatically handling setters, getters and the signal
Thanks in advance, and please let me know if I need to supply additional information.
Edit: Thanks to BrenBarn I noticed I forgot to mention how this code is actually called upon. The GUI is implemented in QML (Qt Markup Language), and this object is made known to QML by registering it (like here, qmlRegisterType), after which QML can instantiate and manipulate instances of it. I decided not to include the full source code example for brevity, and because I'm mainly interested in a better way of handling the getter/setter and the instantiating of the signal, without focusing on QML.
You can use the pyqtProperty() decorator, as mentioned in the documentation. It does at least remove the separate definition.
signal = pyqtSignal()

@pyqtProperty('QString', notify=signal)
def name(self):
    return self._name

@name.setter
def name(self, name):
    self._name = name
    self.signal.emit()

conditional class inheritance definition in python

I have a Linux-based Python application which makes use of pygtk and gtk.
It has both a UI mode and a command-line mode.
In UI mode, the main application window's class definition is:
class ToolWindow(common.Singleton, gtk.Window):
    def __init__(self):
        gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL)
What I want to do is: if the application is able to import gtk and pygtk, then ToolWindow should inherit from both common.Singleton and gtk.Window; otherwise it should inherit only from common.Singleton.
What is the best way to do it?
You can specify a metaclass where you can test what modules are importable:
class Meta(type):
    def __new__(cls, name, bases, attrs):
        try:
            import gtk
            bases += (gtk.Window,)
        except ImportError:
            # gtk module not available
            pass
        # Create the class with the new bases tuple
        return super(Meta, cls).__new__(cls, name, bases, attrs)

class ToolWindow(common.Singleton):
    __metaclass__ = Meta
    ...
This is just a raw sketch, obviously many improvements can be done, but it should help you get started.
You should also be aware that you may need to change ToolWindow's __init__() method, because the gtk module may not be available (maybe set a flag in the metaclass to later check whether the module is available; or you could even redefine the __init__() method from within the metaclass based on whether the module is available -- there are several ways of tackling this).
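Alternatively, if a metaclass feels heavyweight, the same effect can be had without one by computing the bases tuple at import time and building the class with type(). A rough sketch (Python 2, to match the question; common.Singleton as in the original code, with the class body reduced to just __init__ for brevity):

import common

try:
    import gtk
    _bases = (common.Singleton, gtk.Window)
except ImportError:
    gtk = None
    _bases = (common.Singleton,)

def _tool_window_init(self):
    # Only run the gtk initializer when gtk is actually available.
    if gtk is not None:
        gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL)

# Build the class dynamically with whatever bases are available.
ToolWindow = type('ToolWindow', _bases, {'__init__': _tool_window_init})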

Python - how do I force the use of a factory method to instantiate an object?

I have a set of related classes that all inherit from one base class. I would like to use a factory method to instantiate objects for these classes. I want to do this because then I can store the objects in a dictionary keyed by the class name before returning the object to the caller. Then if there is a request for an object of a particular class, I can check to see whether one already exists in my dictionary. If not, I'll instantiate it and add it to the dictionary. If so, then I'll return the existing object from the dictionary. This will essentially turn all the classes in my module into singletons.
I want to do this because the base class that they all inherit from does some automatic wrapping of the functions in the subclasses, and I don't want the functions to get wrapped more than once, which is what happens currently if two objects of the same class are created.
The only way I can think of doing this is to check the stack trace in the __init__() method of the base class, which will always be called, and to throw an exception if the stack trace does not show that the request to make the object is coming from the factory function.
Is this a good idea?
Edit: Here is the source code for my base class. I've been told that I need to figure out metaclasses to accomplish this more elegantly, but this is what I have for now. All Page objects use the same Selenium Webdriver instance, which is in the driver module imported at the top. This driver is very expensive to initialize -- it is initialized the first time a LoginPage is created. After it is initialized the initialize() method will return the existing driver instead of creating a new one. The idea is that the user must begin by creating a LoginPage. There will eventually be dozens of Page classes defined and they will be used by unit testing code to verify that the behavior of a website is correct.
from driver import get_driver, urlpath, initialize
from settings import urlpaths

class DriverPageMismatchException(Exception):
    pass

class URLVerifyingPage(object):
    # we add logic in __init__() to check the expected urlpath for the page
    # against the urlpath that the driver is showing - we only want the page's
    # methods to be invokable if the driver is actually at the appropriate page.
    # If the driver shows a different urlpath than the page is supposed to
    # have, the method should throw a DriverPageMismatchException
    def __init__(self):
        self.driver = get_driver()
        self._adjust_methods(self.__class__)

    def _adjust_methods(self, cls):
        for attr, val in cls.__dict__.iteritems():
            if callable(val) and not attr.startswith("_"):
                print "adjusting:" + str(attr) + " - " + str(val)
                setattr(
                    cls,
                    attr,
                    self._add_wrapper_to_confirm_page_matches_driver(val)
                )
        for base in cls.__bases__:
            if base.__name__ == 'URLVerifyingPage':
                break
            self._adjust_methods(base)

    def _add_wrapper_to_confirm_page_matches_driver(self, page_method):
        def _wrapper(self, *args, **kwargs):
            if urlpath() != urlpaths[self.__class__.__name__]:
                raise DriverPageMismatchException(
                    "path is '" + urlpath() +
                    "' but '" + urlpaths[self.__class__.__name__] + "' expected " +
                    "for " + self.__class__.__name__
                )
            return page_method(self, *args, **kwargs)
        return _wrapper

class LoginPage(URLVerifyingPage):
    def __init__(self, username=username, password=password, baseurl="http://example.com/"):
        self.username = username
        self.password = password
        self.driver = initialize(baseurl)
        super(LoginPage, self).__init__()

    def login(self):
        self.driver.find_element_by_id("username").clear()
        self.driver.find_element_by_id("username").send_keys(self.username)
        self.driver.find_element_by_id("password").clear()
        self.driver.find_element_by_id("password").send_keys(self.password)
        self.driver.find_element_by_id("login_button").click()
        return HomePage()

class HomePage(URLVerifyingPage):
    def some_method(self):
        ...
        return SomePage()

    def many_more_methods(self):
        ...
        return ManyMorePages()
It's no big deal if a page gets instantiated a handful of times -- the methods will just get wrapped a handful of times and a handful of unnecessary checks will take place, but everything will still work. But it would be bad if a page was instantiated dozens or hundreds or tens of thousands of times. I could just put a flag in the class definition for each page and check to see if the methods have already been wrapped, but I like the idea of keeping the class definitions pure and clean and shoving all the hocus-pocus into a deep corner of my system where no one can see it and it just works.
In Python, it's almost never worth trying to "force" anything. Whatever you come up with, someone can get around it by monkeypatching your class, copying and editing the source, fooling around with bytecode, etc.
So, just write your factory, and document that as the right way to get an instance of your class, and expect anyone who writes code using your classes to understand TOOWTDI, and not violate it unless she really knows what she's doing and is willing to figure out and deal with the consequences.
If you're just trying to prevent accidents, rather than intentional "misuse", that's a different story. In fact, it's just standard design-by-contract: check the invariant. Of course at this point, SillyBaseClass is already screwed up, and it's too late to repair it, and all you can do is assert, raise, log, or whatever else is appropriate. But that's what you want: it's a logic error in the application, and the only thing to do is get the programmer to fix it, so assert is probably exactly what you want.
So:
class SillyBaseClass:
    singletons = {}

class Foo(SillyBaseClass):
    def __init__(self):
        assert self.__class__ not in SillyBaseClass.singletons

def get_foo():
    if Foo not in SillyBaseClass.singletons:
        SillyBaseClass.singletons[Foo] = Foo()
    return SillyBaseClass.singletons[Foo]
If you really do want to stop things from getting this far, you can check the invariant earlier, in the __new__ method, but unless "SillyBaseClass got screwed up" is equivalent to "launch the nukes", why bother?
It sounds like you want to provide a __new__ implementation. Something like:
class MySingletonBase(object):
    instance_cache = {}

    def __new__(cls, arg1, arg2):
        if cls in MySingletonBase.instance_cache:
            return MySingletonBase.instance_cache[cls]
        self = super(MySingletonBase, cls).__new__(cls)
        MySingletonBase.instance_cache[cls] = self
        return self
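One caveat with this sketch: even when __new__ returns a cached instance, Python still calls __init__ on it for every construction, so the expensive wrapping the question wants to run once would need its own guard. A hypothetical usage example:

class HomePage(MySingletonBase):
    def __init__(self, arg1, arg2):
        # __init__ runs on every call, even when __new__ returned the
        # cached instance, so guard against repeating expensive setup.
        if getattr(self, '_initialized', False):
            return
        self._initialized = True
        self.args = (arg1, arg2)

a = HomePage(1, 2)
b = HomePage(3, 4)
assert a is b            # both names refer to the single cached instance
assert a.args == (1, 2)  # the second __init__ returned early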
Rather than adding complex code to catch mistakes at runtime, I'd first try to use convention to guide users of your module to do the right thing on their own.
Give your classes "private" names (prefixed by an underscore), give them names that suggest they shouldn't be instantiated (eg _Internal...) and make your factory function "public".
That is, something like this:
class _InternalSubClassOne(_BaseClass):
    ...

class _InternalSubClassTwo(_BaseClass):
    ...

# An example factory function.
def new_object(arg):
    return _InternalSubClassOne() if arg == 'one' else _InternalSubClassTwo()
I'd also add docstrings or comments to each class, like "Don't instantiate this class by hand, use the factory method new_object."
You can also just nest classes in factory method, as described here:
https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Factory.html#preventing-direct-creation
Working example from mentioned source:
# Factory/shapefact1/NestedShapeFactory.py
import random

class Shape(object):
    types = []

def factory(type):
    class Circle(Shape):
        def draw(self): print("Circle.draw")
        def erase(self): print("Circle.erase")

    class Square(Shape):
        def draw(self): print("Square.draw")
        def erase(self): print("Square.erase")

    if type == "Circle": return Circle()
    if type == "Square": return Square()
    assert 0, "Bad shape creation: " + type

def shapeNameGen(n):
    for i in range(n):
        yield factory(random.choice(["Circle", "Square"]))

# Circle() # Not defined
for shape in shapeNameGen(7):
    shape.draw()
    shape.erase()
I'm not a fan of this solution; I just want to add it as one more option.

Why aren't superclass __init__ methods automatically invoked?

Why did the Python designers decide that subclasses' __init__() methods don't automatically call the __init__() methods of their superclasses, as in some other languages? Is the Pythonic and recommended idiom really like the following?
class Superclass(object):
    def __init__(self):
        print 'Do something'

class Subclass(Superclass):
    def __init__(self):
        super(Subclass, self).__init__()
        print 'Do something else'
The crucial distinction between Python's __init__ and those other languages constructors is that __init__ is not a constructor: it's an initializer (the actual constructor (if any, but, see later;-) is __new__ and works completely differently again). While constructing all superclasses (and, no doubt, doing so "before" you continue constructing downwards) is obviously part of saying you're constructing a subclass's instance, that is clearly not the case for initializing, since there are many use cases in which superclasses' initialization needs to be skipped, altered, controlled -- happening, if at all, "in the middle" of the subclass initialization, and so forth.
Basically, super-class delegation of the initializer is not automatic in Python for exactly the same reasons such delegation is also not automatic for any other methods -- and note that those "other languages" don't do automatic super-class delegation for any other method either... just for the constructor (and if applicable, destructor), which, as I mentioned, is not what Python's __init__ is. (Behavior of __new__ is also quite peculiar, though really not directly related to your question, since __new__ is such a peculiar constructor that it doesn't actually necessarily need to construct anything -- could perfectly well return an existing instance, or even a non-instance... clearly Python offers you a lot more control of the mechanics than the "other languages" you have in mind, which also includes having no automatic delegation in __new__ itself!-).
I'm somewhat embarrassed when people parrot the "Zen of Python", as if it's a justification for anything. It's a design philosophy; particular design decisions can always be explained in more specific terms--and they must be, or else the "Zen of Python" becomes an excuse for doing anything.
The reason is simple: you don't necessarily construct a derived class in a way similar at all to how you construct the base class. You may have more parameters, fewer, they may be in a different order or not related at all.
class myFile(object):
    def __init__(self, filename, mode):
        self.f = open(filename, mode)

class readFile(myFile):
    def __init__(self, filename):
        super(readFile, self).__init__(filename, "r")

class tempFile(myFile):
    def __init__(self, mode):
        super(tempFile, self).__init__("/tmp/file", mode)

class wordsFile(myFile):
    def __init__(self, language):
        super(wordsFile, self).__init__("/usr/share/dict/%s" % language, "r")
This applies to all derived methods, not just __init__.
Java and C++ require that a base class constructor is called because of memory layout.
If you have a class BaseClass with a member field1, and you create a new class SubClass that adds a member field2, then an instance of SubClass contains space for field1 and field2. You need a constructor of BaseClass to fill in field1, unless you require all inheriting classes to repeat BaseClass's initialization in their own constructors. And if field1 is private, then inheriting classes can't initialise field1.
Python is not Java or C++. All instances of all user-defined classes have the same 'shape'. They're basically just dictionaries in which attributes can be inserted. Before any initialisation has been done, all instances of all user-defined classes are almost exactly the same; they're just places to store attributes that aren't storing any yet.
So it makes perfect sense for a Python subclass not to call its base class constructor. It could just add the attributes itself if it wanted to. There's no space reserved for a given number of fields for each class in the hierarchy, and there's no difference between an attribute added by code from a BaseClass method and an attribute added by code from a SubClass method.
If, as is common, SubClass actually does want to have all of BaseClass's invariants set up before it goes on to do its own customisation, then yes you can just call BaseClass.__init__() (or use super, but that's complicated and has its own problems sometimes). But you don't have to. And you can do it before, or after, or with different arguments. Hell, if you wanted you could call the BaseClass.__init__ from another method entirely than __init__; maybe you have some bizarre lazy initialization thing going.
Python achieves this flexibility by keeping things simple. You initialise objects by writing an __init__ method that sets attributes on self. That's it. It behaves exactly like a method, because it is exactly a method. There are no other strange and unintuitive rules about things having to be done first, or things that will automatically happen if you don't do other things. The only purpose it needs to serve is to be a hook to execute during object initialisation to set initial attribute values, and it does just that. If you want it to do something else, you explicitly write that in your code.
To avoid confusion, it is useful to know that you can invoke the base_class's __init__() method if the child_class does not have its own __init__() method.
Example:
class parent:
    def __init__(self, a=1, b=0):
        self.a = a
        self.b = b

class child(parent):
    def me(self):
        pass

p = child(5, 4)
q = child(7)
z = child()
print p.a  # prints 5
print q.b  # prints 0
print z.a  # prints 1
In fact the MRO in Python will look for __init__() in the parent class when it cannot find it in the child class. You need to invoke the parent class's constructor directly if you already have an __init__() method in the child class.
For example the following code will return an error:
class parent:
    def __init__(self, a=1, b=0):
        self.a = a
        self.b = b

class child(parent):
    def __init__(self):
        pass
    def me(self):
        pass

p = child(5, 4)  # Error: __init__() takes 1 argument, 3 were given
q = child(7)     # Error: __init__() takes 1 argument, 2 were given
z = child()
print z.a        # Error: no attribute named 'a' can be found
"Explicit is better than implicit." It's the same reasoning that indicates we should explicitly write 'self'.
I think in the end it is a benefit -- can you recite all of the rules Java has regarding calling superclasses' constructors?
Right now, we have a rather long page describing the method resolution order in case of multiple inheritance: http://www.python.org/download/releases/2.3/mro/
If constructors were called automatically, you'd need another page of at least the same length explaining the order of that happening. That would be hell...
Often the subclass has extra parameters which can't be passed to the superclass.
Maybe __init__ is the method that the subclass needs to override. Sometimes subclasses need the parent's function to run before they add class-specific code, and other times they need to set up instance variables before calling the parent's function. Since there's no way Python could possibly know when it would be most appropriate to call those functions, it shouldn't guess.
If those don't sway you, consider that __init__ is Just Another Function. If the function in question were dostuff instead, would you still want Python to automatically call the corresponding function in the parent class?
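To make that concrete, a small sketch: overriding any ordinary method behaves exactly the same way, and the upcall is always opt-in:

class A(object):
    def dostuff(self):
        print 'A stuff'

class B(A):
    def dostuff(self):
        A.dostuff(self)  # explicit, just as with __init__; omit it to skip
        print 'B stuff'

B().dostuff()  # prints 'A stuff' then 'B stuff'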
I believe the one very important consideration here is that with an automatic call to super.__init__(), you proscribe, by design, when that initialization method is called, and with what arguments. Eschewing automatically calling it, and requiring the programmer to explicitly make that call, entails a lot of flexibility.
After all, just because class B is derived from class A does not mean A.__init__() can or should be called with the same arguments as B.__init__(). Making the call explicit means a programmer can, e.g., define B.__init__() with completely different parameters, do some computation with that data, call A.__init__() with arguments as appropriate for that method, and then do some postprocessing. This kind of flexibility would be awkward to attain if A.__init__() were called from B.__init__() implicitly, either before B.__init__() executes or right after it.
As Sergey Orshanskiy pointed out in the comments, it is also convenient to write a decorator to inherit the __init__ method.
You can write a decorator to inherit the __init__ method, and even perhaps automatically search for subclasses and decorate them. – Sergey Orshanskiy Jun 9 '15 at 23:17
Part 1/3: The implementation
Note: actually this is only useful if you want to call both the base and the derived class's __init__ since __init__ is inherited automatically. See the previous answers for this question.
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        super(type(self), self).__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child(42)
Outputs:
Base: 42
Part 2/3: A warning
Warning: this doesn't work if base itself called super(type(self), self).
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        '''Warning: recursive calls.'''
        super(type(self), self).__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

class child2(child):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child2(42)
RecursionError: maximum recursion depth exceeded while calling a Python object.
Part 3/3: Why not just use plain super()?
But why not just use the safe, plain super()? Because it doesn't work: the newly rebound __init__ is defined outside the class body, so the zero-argument form has no __class__ cell to rely on, and super(type(self), self) is required.
def default_init(func):
    def wrapper(self, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
    return wrapper

class base():
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

class child(base):
    @default_init
    def __init__(self, n: int) -> None:
        pass

child(42)
Errors:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-6f580b3839cd> in <module>
13 pass
14
---> 15 child(42)
<ipython-input-9-6f580b3839cd> in wrapper(self, *args, **kwargs)
1 def default_init(func):
2 def wrapper(self, *args, **kwargs) -> None:
----> 3 super().__init__(*args, **kwargs)
4 return wrapper
5
RuntimeError: super(): __class__ cell not found
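One way around both the recursion trap of Part 2 and the __class__ cell problem of Part 3 (a sketch, not part of the answer above) is a class decorator rather than a method decorator, since it can close over the class itself instead of computing type(self) at call time:

def default_init(cls):
    # Give cls an __init__ that defers to the next class in the MRO.
    # Closing over cls avoids the super(type(self), self) recursion pitfall.
    def __init__(self, *args, **kwargs) -> None:
        super(cls, self).__init__(*args, **kwargs)
    cls.__init__ = __init__
    return cls

class base:
    def __init__(self, n: int) -> None:
        print(f'Base: {n}')

@default_init
class child(base):
    pass

@default_init
class child2(child):
    pass

child2(42)  # prints 'Base: 42' - no recursion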
Background - We CAN AUTO init a parent AND child class!
A lot of answers here say "This is not the Python way, use super().__init__() from the subclass". The question is not asking for the Pythonic way; it's comparing the expected behavior from other languages to Python's obviously different one.
The MRO document is pretty and colorful but it's really a TLDR situation and still doesn't quite answer the question, as is often the case in these types of comparisons - "Do it the Python way, because.".
Inherited methods can be overloaded by later declarations in subclasses. A pattern building on keyvanrm's answer (https://stackoverflow.com/a/46943772/1112676) solves the case where I want to AUTOMATICALLY init a parent class as part of calling a class without explicitly calling super().__init__() in every child class.
In my case, a new team member might be asked to use a boilerplate module template (for making extensions to our application without touching the core application source) which we want to make as bare and easy to adopt as possible, without them needing to know or understand the underlying machinery - they only need to know of and use what is provided by the application's base interface, which is well documented.
For those who will say "Explicit is better than implicit." I generally agree, however, when coming from many other popular languages inherited automatic initialization is the expected behavior and it is very useful if it can be leveraged for projects where some work on a core application and others work on extending it.
This technique can even pass args/keyword args for init which means pretty much any object can be pushed to the parent and used by the parent class or its relatives.
Example:
class Parent:
    def __init__(self, *args, **kwargs):
        self.somevar = "test"
        self.anothervar = "anothertest"
        # important part, call the init surrogate, passing through args:
        self._init(*args, **kwargs)

    # important part, a placeholder init surrogate:
    def _init(self, *args, **kwargs):
        print("Parent class _init; ", self, args, kwargs)

    def some_base_method(self):
        print("some base method in Parent")
        self.a_new_dict = {}

class Child1(Parent):
    # when omitted, the parent class's __init__() is run
    #def __init__(self):
    #    pass

    # overloading the parent class's _init() surrogate
    def _init(self, *args, **kwargs):
        print(f"Child1 class _init() overload; ", self, args, kwargs)
        self.a_var_set_from_child = "This is a new var!"

class Child2(Parent):
    def __init__(self, onevar, twovar, akeyword):
        print(f"Child2 class __init__() overload; ", self)
        # call some_base_method from parent
        self.some_base_method()
        # the parent's base method set a_new_dict
        print(self.a_new_dict)

class Child3(Parent):
    pass

print("\nRunning Parent()")
Parent()
Parent("a string", "something else", akeyword="a kwarg")

print("\nRunning Child1(), keep Parent.__init__(), overload surrogate Parent._init()")
Child1()
Child1("a string", "something else", akeyword="a kwarg")

print("\nRunning Child2(), overload Parent.__init__()")
#Child2() # __init__() requires arguments
Child2("a string", "something else", akeyword="a kwarg")

print("\nRunning Child3(), empty class, inherits everything")
Child3().some_base_method()
Output:
Running Parent()
Parent class _init;  <__main__.Parent object at 0x7f84a721fdc0> () {}
Parent class _init;  <__main__.Parent object at 0x7f84a721fdc0> ('a string', 'something else') {'akeyword': 'a kwarg'}

Running Child1(), keep Parent.__init__(), overload surrogate Parent._init()
Child1 class _init() overload;  <__main__.Child1 object at 0x7f84a721fdc0> () {}
Child1 class _init() overload;  <__main__.Child1 object at 0x7f84a721fdc0> ('a string', 'something else') {'akeyword': 'a kwarg'}

Running Child2(), overload Parent.__init__()
Child2 class __init__() overload;  <__main__.Child2 object at 0x7f84a721fdc0>
some base method in Parent
{}

Running Child3(), empty class, inherits everything
Parent class _init;  <__main__.Child3 object at 0x7f84a721fdc0> () {}
some base method in Parent
As one can see, the overloaded definitions take the place of those declared in the Parent class, but can still be called BY the Parent class, thereby allowing one to emulate the classical implicit inheritance initialization behavior: Parent and Child classes both initialize without needing to explicitly invoke the Parent's init() from the Child class.
Personally, I call the surrogate _init() method main() because it makes sense to me when switching between C++ and Python for example since it is a function that will be automatically run for any subclass of Parent (the last declared definition of main(), that is).
