setattr for parent class to use in children - python

I have a library with one parent and a dozen children:
# mylib1.py:
#
class Foo(object):
    def __init__(self, a):
        self.a = a

class FooChild(Foo):
    def __init__(self, a, b):
        super(FooChild, self).__init__(a)
        self.b = b

# more children here...
Now I want to extend that library with a simple (but somewhat specific, for use in another approach) method. So I would like to change the parent class and use it through its children.
# mylib2.py:
#
import mylib1

def fooMethod(self):
    print 'a={}, b={}'.format(self.a, self.b)

setattr(mylib1.Foo, 'fooMethod', fooMethod)
And now I can use it like this:
# way ONE:
import mylib2
fc = mylib2.mylib1.FooChild(3, 4)
fc.fooMethod()
or like this:
# way TWO:
# order doesn't matter here:
import mylib1
import mylib2
fc = mylib1.FooChild(3, 4)
fc.fooMethod()
So, my questions are:
Is this a good thing to do?
How could this be done in a better way?

A common approach is to use a mixin.
If you want, you could even add the mixin dynamically; see How do I dynamically add mixins as base classes without getting MRO errors?.
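A minimal sketch of the static version of that idea, reusing the mylib1 names from the question (FooMethodMixin and MyFooChild are made-up names):

import mylib1

class FooMethodMixin(object):
    def fooMethod(self):
        print 'a={}, b={}'.format(self.a, self.b)

class MyFooChild(FooMethodMixin, mylib1.FooChild):
    pass

fc = MyFooChild(3, 4)
fc.fooMethod()  # prints: a=3, b=4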

There is a general rule in programming that you should avoid depending on global state. In other words, your globals should, if possible, be constant. Classes are (mostly) globals.
Your approach is called monkey patching, and unless you have a really, really good reason for it, you should avoid it, because monkey patching violates the above rule.
Imagine you have two separate modules, and both of them use this approach. One of them sets Foo.fooMethod to some method; the other sets it to a different one. Then you somehow switch control between these modules. The result is that it becomes hard to determine which fooMethod is used where, which means hard-to-debug problems.
There are people (e.g. Brandon Craig Rhodes) who believe that patching is bad even in tests.
What I would suggest is to set some attribute when instantiating instances of your Foo class (and its children) that controls the behaviour of fooMethod. Then the behaviour of the method depends on how you instantiated the object, not on global state.
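A minimal sketch of that suggestion; the report_style flag is a made-up name for illustration:

class Foo(object):
    def __init__(self, a, report_style='plain'):
        self.a = a
        self.report_style = report_style  # per-instance flag, not global state

    def fooMethod(self):
        if self.report_style == 'plain':
            print 'a={}'.format(self.a)
        else:
            print 'Foo(a={!r})'.format(self.a)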

Related

Calling Class Method From Another Class in Python 3.x

So I came across this answer for calling a class method from another class. Now my question is: why have they done things with so much complexity when this can be achieved simply by the following code?
class a(object):
    def __init__(self):
        pass

    def foo(self):
        print('Testing')

class b(object):
    def __init__(self, c):
        c.foo()

A = a()
B = b(A)
And the output is:
Testing
So what is wrong in my approach? Am I missing something?
Basically, because of the Zen of Python, which says:
Explicit is better than implicit.
Complex is better than complicated.
From the OOD (object-oriented design) perspective, you have a strong dependency between the two classes, since you cannot initialize b without calling a specific method of class a. That may be OK for now, but moving forward such a dependency can lead to problems in the long run. To get a deeper understanding of this, make sure you are familiar with the Single responsibility and Separation of concerns principles.
Generally speaking, if some method of another class must always be called during initialization, maybe that method should be moved out of the other class. As an alternative, you can create a utility function which handles that without introducing a hard dependency between the classes, as sketched below.
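A minimal sketch of the utility-function alternative (run_foo is a made-up name):

class a(object):
    def foo(self):
        print('Testing')

class b(object):
    pass  # no longer depends on a

def run_foo(obj):
    # works with any object that has a foo() method
    obj.foo()

A = a()
B = b()
run_foo(A)  # prints: Testing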
Also, the solution provided in the linked SO question differs, since it calls the method dynamically, so the method name isn't hardcoded like in your sample.
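In Python, that kind of dynamic lookup is typically done with getattr; a minimal illustration reusing the question's class a:

method_name = 'foo'  # the name can come from data instead of being hardcoded
obj = a()
getattr(obj, method_name)()  # resolves the method by name at runtime, prints: Testing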

python memory consumption and performance related to classes

I am curious about memory consumption / performance of python related to nested classes vs class attributes.
If I have classes called OtherClass, ClassA, ClassB, and ClassC, where OtherClass needs access to a limited set of attributes of ClassA-C, and assuming ClassA-C are large classes with many attributes, methods, and properties, which one of these scenarios is more efficient?
Option 1:
class OtherClass(object):
    def __init__(self, classa, classb, classc):
        self.classa = classa
        self.classb = classb
        self.classc = classc
Option 2:
class OtherClass(object):
    def __init__(self, classa_id, classa_atr1, classa_atr2,
                 classb_id, classb_atr1, classb_atr2,
                 classc_id, classc_atr1, classc_atr2):
        self.classa_id = classa_id
        self.classb_id = classb_id
        self.classc_id = classc_id
        self.classa_atr1 = classa_atr1
        self.classb_atr1 = classb_atr1
        self.classc_atr1 = classc_atr1
        self.classa_atr2 = classa_atr2
        self.classb_atr2 = classb_atr2
        self.classc_atr2 = classc_atr2
I imagine option 1 is better, since the 3 attributes will simply refer to the class instances already existing in memory, whereas option 2 adds 6 additional attributes per instance to memory. Is this correct?
TL;DR
My answer is that you should prefer option 1 for its simplicity and better OOP design, and avoid premature optimization.
The Rest
I think the efficiency question here is dwarfed by how difficult it will be in the future to maintain your second option. If one object needs to use attributes of another object (your example code uses a form of composition), then it should have those objects as attributes, rather than creating extra references directly to the object attributes it needs. Your first option is the way to go. The first option supports encapsulation; option 2 very clearly violates it. (Granted, encapsulation isn't as strongly enforced in Python as in some languages, like Java, but it's still a good principle to follow.)
The only efficiency-related reason to prefer option 2 is if you find your code is slow, you profile it, and your profiling shows that these extra lookups are indeed your bottleneck. Then you could consider sacrificing things like ease of maintenance for the speed you need. It is possible that the extra layer of references (foo = self.classa.bar() vs. foo = self.bar()) could slow things down if you're using them in tight loops, but it's not likely.
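If you do want to measure, a minimal sketch with the standard timeit module (Inner and Outer are made-up stand-ins):

import timeit

setup = '''
class Inner(object):
    bar = 1

class Outer(object):
    def __init__(self):
        self.inner = Inner()

o = Outer()
'''
# compare one attribute hop against two; the difference is real but tiny
print(timeit.timeit('o.inner', setup=setup))
print(timeit.timeit('o.inner.bar', setup=setup))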
In fact, I would go one step further and say you should modify your code so that OtherClass actually instantiates the objects it needs, rather than having them passed in. With option 1, if I want to use OtherClass, I have to do this:
classa_obj = ClassA(classa_init_args)
classb_obj = ClassB(classb_init_args)
classc_obj = ClassC(classc_init_args)
otherclass_obj = OtherClass(classa_obj, classb_obj, classc_obj)
That's too much setup required just to instantiate OtherClass. Instead, change OtherClass to this:
class OtherClass(object):
    def __init__(self, classa_init_args, classb_init_args, classc_init_args):
        self.classa = ClassA(classa_init_args)
        self.classb = ClassB(classb_init_args)
        self.classc = ClassC(classc_init_args)
Now instantiating an OtherClass object is simply this:
otherclass_obj = OtherClass(classa_init_args, classb_init_args, classc_init_args)
Another option, if possible, is to reconfigure your classes so that you don't even have to instantiate the other classes! Have a look at class attributes and the classmethod decorator. They allow you to do things like this:
class foo(object):
    bar = 2

    @classmethod
    def frobble(cls):
        return "I didn't even have to be instantiated!"

print(foo.bar)
print(foo.frobble())
This code prints this:
2
I didn't even have to be instantiated!
If your OtherClass uses attributes or methods of classa, classb, and classc that don't need to be tied to an instance of those classes, consider using them directly via class methods and attributes instead of instantiating the objects. That would actually save you the most memory by avoiding the creation of entire objects.

Can Python do DI seamlessly without relying on a service locator?

I'm coming from the C# world, so my views may be a little skewed. I'm looking to do DI in Python, however I'm noticing a trend with libraries where they all appear to rely on a service locator. That is, you must tie your object creation to the framework, such as injectlib.build(MyClass) in order to get an instance of MyClass.
Here is an example of what I mean -
from injector import Injector, inject

class Inner(object):
    def __init__(self):
        self.foo = 'foo'

class Outer(object):
    @inject(inner=Inner)
    def __init__(self, inner=None):
        if inner is None:
            print('inner not provided')
            self.inner = Inner()
        else:
            print('inner provided')
            self.inner = inner

injector = Injector()

outer = Outer()
print(outer.inner.foo)

outer = injector.get(Outer)
print(outer.inner.foo)
Is there a way in Python to create a class while automatically inferring dependency types based on parameter names? So if I have a constructor parameter called my_class, then an instance of MyClass will be injected. Reason I ask is that I don't see how I could inject a dependency into a class that gets created automatically via a third party library.
To answer the question you explicitly asked: no, there's no built-in way in Python to automatically get a MyClass object from a parameter named my_class.
That said, neither "tying your object creation to the framework" nor the example code you gave seem terribly Pythonic, and this question in general is kind of confusing because DI in dynamic languages isn't really a big deal.
For general thoughts about DI in Python I'd say this presentation gives a pretty good overview of different approaches. For your specific question, I'll give two options based on what you might be trying to do.
If you're trying to add DI to your own classes, I would use parameters with default values in the constructor, as that presentation shows. E.g.:
import time

class Example(object):
    def __init__(self, sleep_func=time.sleep):
        self.sleep_func = sleep_func

    def foo(self):
        self.sleep_func(10)
        print('Done!')
And then you could just pass in a dummy sleep function for testing or whatever.
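For instance, a minimal sketch of such a dummy, reusing the Example class above (it records the requested delay instead of sleeping):

calls = []
e = Example(sleep_func=calls.append)  # the dummy just records its argument
e.foo()        # prints 'Done!' immediately
print(calls)   # [10]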
If you're trying to manipulate a library's classes through DI (not something I can really imagine a use case for, but it seems like what you're asking), then I would probably just monkey patch those classes to change whatever needed changing. E.g.:
import test_module

def dummy_sleep(*args, **kwargs):
    pass

test_module.time.sleep = dummy_sleep
e = test_module.Example()
e.foo()

Python 2.7: infinite loop when super __init__ creates an instance of its own subclass

I have the sense that this must be kind of a dumb question (newbie here), so I'm open to an answer of the sort "This is ass-backwards, don't do it, please try this: [proper way]".
I'm using Python 2.7.5.
General Form of the Problem
This causes an infinite loop, unless Thesaurus (an app-wide singleton) avoids calling Baseclass.__init__():
class Baseclass():
    def __init__(self):
        thes = Thesaurus()
        # do stuff

class Thesaurus(Baseclass):
    def __init__(self):
        Baseclass.__init__(self)
        # do stuff
My Specific Case
I have a base class that virtually every other class in my app extends (just some basic conventions for functionality within the app; perhaps it should just be an interface). This base class is meant to house a singleton of a Thesaurus class that grants some flexibility with user input by inferring synonyms (e.g. treating 'yep' and 'ok' as synonyms of 'yes').
But since the subclass calls the superclass's __init__(), which in turn creates another instance of the subclass, loops ensue. Not calling the superclass's __init__() works just fine, but I'm concerned that's merely a lucky coincidence, and that my Thesaurus class may eventually be modified to require its parent's __init__().
Advice?
Well, I'll stop looking at your code and just base my answer on what you say:
I have a base class that virtually every other class in my app extends (just some basic conventions for functionality within the app; perhaps should just be an interface).
this would be ThesaurusBase in the code below
This base class is meant to house a singleton of a Thesaurus class that grants some flexibility with user input by inferring some synonyms (ie. {'yes':'yep', 'ok'}).
That would be ThesaurusSingleton, which you can give a better name and make actually useful.
class ThesaurusBase():
    def __init__(self, singleton=None):
        self.singleton = singleton

    def mymethod1(self):
        raise NotImplementedError

    def mymethod2(self):
        raise NotImplementedError

class ThesaurusSingleton(ThesaurusBase):
    def mymethod1(self):
        return "meaw!"

class Thesaurus(ThesaurusBase):
    def __init__(self, singleton=None):
        ThesaurusBase.__init__(self, singleton)

    def mymethod1(self):
        return "quack!"

    def mymethod2(self):
        return "\\_o<"
now you can create your objects as follows:
singleton = ThesaurusSingleton()
thesaurus = Thesaurus(singleton)
edit:
Basically, what I've done here is build a "Base" class that is just an interface defining the expected behavior of all its child classes. The ThesaurusSingleton class (I know that's a terrible name) also implements that interface, because you said it had to, and I did not want to discuss your design; you may always have good reasons for weird constraints.
And finally, do you really need to instantiate your singleton inside the class that is defining the singleton object? Though there may be some hackish way to do so, there's often a better design that avoids the "hackish" part.
What I think is that however you create your singleton, you should do it explicitly. That's in the Zen of Python: explicit is better than implicit. Why? Because then people reading your code (and that might be you in six months) will be able to understand what's happening and what you were thinking when you wrote that code. If you try to make things more implicit (like using sophisticated metaclasses and weird self-inheritance), you may wonder what this code does in less than three weeks!
I'm not telling you to avoid that kind of option, but to use sophisticated stuff only when you're out of simple ones!
Based on what you said, I think the solution I gave can be a starting point. But as you focus on some obscure, not very useful hackish stuff instead of talking about your design, I can't be sure whether my example is appropriate, or give you hints on the design.
edit2:
There's another way to achieve what you say you want (but be sure that's really the design you want). You may want to use a class method that acts on the class itself (instead of the instances), and thus enables you to store a class-wide instance of itself:
>>> class ThesaurusBase:
...     @classmethod
...     def initClassWide(cls):
...         cls._shared = cls()
...
>>> class T(ThesaurusBase):
...     def foo(self):
...         print self._shared
...
>>> ThesaurusBase.initClassWide()
>>> t = T()
>>> t.foo()
<__main__.ThesaurusBase instance at 0x7ff299a7def0>
And you can call the initClassWide method at the module level where you declare ThesaurusBase, so whenever you import that module, it will have the singleton loaded (the import mechanism ensures that Python modules are run only once).
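A minimal sketch of that module-level arrangement (the thesaurus.py file name is assumed):

# thesaurus.py
class ThesaurusBase:
    @classmethod
    def initClassWide(cls):
        cls._shared = cls()

# runs once, at first import; every importer sees the same _shared
ThesaurusBase.initClassWide()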
The short answer is:
do not instantiate an instance of a subclass from the superclass constructor.
Longer answer:
if your motive for doing this is the fact that Thesaurus is a singleton, then you'll be better off exposing the singleton through a static method on the Thesaurus class and calling that method whenever you need the singleton.
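A minimal sketch of that suggestion; note that in this sketch Thesaurus no longer extends Baseclass, which is what removes the cycle:

class Thesaurus(object):
    _instance = None

    @staticmethod
    def instance():
        # lazily create the singleton on first use
        if Thesaurus._instance is None:
            Thesaurus._instance = Thesaurus()
        return Thesaurus._instance

class Baseclass(object):
    def __init__(self):
        self.thes = Thesaurus.instance()  # looked up, never re-created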

extending all subclasses of superclass

I have a one to many class inheritance structure as follows:
class SuperClass:
    def func1():
        print 'hello'

    def func2():
        print 'ow'

class SubClass1(SuperClass):
    def func1():
        print 'hi'

class SubClass2(SuperClass):
    def func1():
        print 'howdy'
...
I want to add functionality to SuperClass so that I can use it when I create the subclasses (SubClass1, SubClass2, etc.), but I cannot edit the code for SuperClass directly. My current solution is:
def func3():
    print 'yes!'

SuperClass.func3 = func3
Is there a better and/or more pythonic way to achieve this?
This is called "monkeypatching", and is perfectly reasonable in some cases.
For example if you have to use someone else's code (that you can't modify) that depends on SuperClass, and you need to change that code's behavior, your only real choice is to replace methods on SuperClass.
However, in your case, there doesn't seem to be any good reason to do this. You're defining all of the subclasses of SuperClass, so why not just add another class in between?
class Intermediate(SuperClass):
    def func3():
        pass

class SubClass1(Intermediate):
    def func1():
        print 'hi'
This isn't good enough for "functionality that should have been in SuperClass but wasn't" if other code you can't control needs that functionality… but when it's only your code that needs that functionality, it's just as good, and a lot simpler.
If even the subclasses aren't under your control, often you can just derive a new class from each one that is. For example:
class Func3Mixin(object):
    def func3():
        pass

class F3SubClass1(SubClass1, Func3Mixin):
    pass

class F3SubClass2(SubClass2, Func3Mixin):
    pass
Now you just construct instances of F3SubClass1 instead of SubClass1. Code that was expecting a SubClass1 instance can use an F3SubClass1 just fine. And Python's duck typing makes this kind of "mixin-oriented programming" especially simple: inside the implementation of Func3Mixin.func3, you can use attributes and methods of SuperClass, despite the fact that Func3Mixin itself isn't statically related to SuperClass in any way, because you know that any runtime object that is a Func3Mixin will also be a SuperClass.
Meanwhile, even when monkeypatching is appropriate, it isn't necessarily the best answer. For example, if you're patching to work around a bug in some third-party code, and that code has a nice license and a source repository that makes it easy to maintain your own patches, you can just fork it, create a fixed copy, and use that instead of the original.
Also, it's worth pointing out that none of your classes are actually usable as written—any attempt to call any of the methods will raise a TypeError because they're missing the self argument. But the way you've monkeypatched in func3, it will fail in exactly the same way as func1. (And the same is true for the alternatives I sketched above.)
Finally, all of your classes here are classic classes rather than new-style, because you forgot to make SuperClass inherit from object. If you can't change SuperClass, of course, that's not your fault—but you may want to fix it anyway by making your subclasses (or Intermediate) multiply inherit from object and SuperClass. (If you've been paying attention: yes, this means you can mix-in new-style-classness. Although under the covers you have to understand metaclasses to understand why.)
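For instance, a minimal sketch of that new-style fix, building on the SuperClass from the question (with the missing self parameter added this time):

class Intermediate(SuperClass, object):  # inheriting from object mixes in new-style behaviour
    def func3(self):
        print 'yes!'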
