Override method of a class in Python

I'd like to override a class method, not creating subclass/extending from a class.
An example:
from django.contrib import admin

class NewModelAdmin(admin.ModelAdmin):
    def formfield_for_dbfield(self, db_field, **kwargs):
        # some custom stuff
Now I don't want to change all the classes (i.e. my admin classes) which extend from admin.ModelAdmin so that they extend NewModelAdmin instead. But I don't want to modify the original Django code either.
Is there some way to accomplish this?

I'm not 100% clear on what you want to do, or why you don't want to create a new subclass or a method with a different name.
But in general, in Python you can do something like this:
class MyClass(object):
    def print_hello(self):
        print("not hello")

def real_print_hello():
    print("hello")

x = MyClass()
x.print_hello()  # "not hello"
setattr(x, "print_hello", real_print_hello)
x.print_hello()  # "hello"

Are you trying to do 'monkey patching'?
http://mail.python.org/pipermail/python-dev/2008-January/076194.html

In order to keep your code maintainable, it's best to go ahead and have your individual ModelAdmin classes inherit from NewModelAdmin. This way, other developers who look at your code (and you, perhaps a year or two later) can clearly see where the custom formfield_for_dbfield behavior originates from so that it can be updated if needed. If you monkey-patch admin.ModelAdmin, it will make it much more difficult to track down issues or change the behavior if needed later.
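For example (FooModel and FooModelAdmin are hypothetical names here, and the method signature follows the question's snippet), each admin inherits from NewModelAdmin so the origin of the override stays visible at the registration site:
from django.contrib import admin
from myapp.models import FooModel  # hypothetical model

class NewModelAdmin(admin.ModelAdmin):
    def formfield_for_dbfield(self, db_field, **kwargs):
        # custom behavior shared by every admin that inherits from this class
        return super(NewModelAdmin, self).formfield_for_dbfield(db_field, **kwargs)

class FooModelAdmin(NewModelAdmin):
    pass  # picks up the custom formfield_for_dbfield via inheritance

admin.site.register(FooModel, FooModelAdmin)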

Chances are good that your problem is solvable without monkey-patching, which often can have unintended consequences.
How are you registering models with the django admin?
If you are using this approach:
admin.site.register(FooModel)  # uses the generic ModelAdmin
You have the problem of needing to change this to many boilerplate instances of subclasses of NewModelAdmin, which would look like this:
class FooModelAdmin(NewModelAdmin):
    pass  # does nothing except set up inheritance

admin.site.register(FooModel, FooModelAdmin)
This is really wordy and might take a lot of time to implement if you have a lot of models, so do it programmatically by writing a wrapper function:
def my_admin_register(model):
    class _newmodeladmin(admin.ModelAdmin):
        def your_overridden_method(self, *args, **kwargs):
            # do whatever here
            pass
    admin.site.register(model, _newmodeladmin)
Then you can use it like this:
my_admin_register(FooModel)

You can replace a method on a class using setattr() on the class itself - a.k.a. monkey patching.
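A minimal, framework-free sketch of that idea (Greeter and excited_greet are made-up names):
class Greeter(object):
    def greet(self):
        return "hello"

def excited_greet(self):
    return "hello!!!"

# Patch the class itself: every existing and future instance now
# resolves greet to the replacement function.
setattr(Greeter, "greet", excited_greet)
print(Greeter().greet())  # prints "hello!!!"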

If you modify a method in a class, you modify the behavior for:
- all instances which resolve their method to that class
- all derived classes which resolve their method to that class
Your requirements are mutually exclusive. You cannot modify the behavior of a class without impacting those objects which resolve their methods to that class.
In order not to modify the behavior of these other objects, you would want to create the method on your instance, so that it doesn't resolve its method in the class.
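For instance, binding a replacement to a single instance with types.MethodType leaves the class and its other instances untouched (Widget and custom_label are illustrative names):
import types

class Widget(object):
    def label(self):
        return "default"

def custom_label(self):
    return "custom"

w = Widget()
# Bind the replacement to this one instance only; the class and all
# other instances keep the original behavior.
w.label = types.MethodType(custom_label, w)
print(w.label())         # prints "custom"
print(Widget().label())  # prints "default"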
Another alternative is to rely on Python's duck-typing. You don't need the object to be directly related to the one currently used. You could reimplement the interface and in the code swap out the calls to the old class for your new one.
These techniques have tradeoffs in maintainability and design. In other words, don't use them unless you have no other options.

I'm not 100% sure what you are trying to achieve, but I suppose you want to change the behavior of all admins that inherit from admin.ModelAdmin without having to change their declarations. The only way to achieve this is to monkey-patch the original ModelAdmin class, e.g. something like:
setattr(admin.ModelAdmin, 'formfield_for_dbfield', mymethod)
This is for sure not the most recommendable way, because the code will be harder to maintain, among other things.

Related

Python class that inherits, does nothing else

I'm creating a package where I create about ~30 classes that all inherit from a common generic class. Generally, they add or modify methods, but every now and then, the subclass can inherit from the generic and make no modification.
What is the most pythonic way to do this? SubClass = GenericClass works, but it offends my aesthetic sensibilities.
You could probably just...
class Subclass(GenericClass):
    '''this is a subclass'''

How to tell if a class is abstract in Python 3?

I wrote a metaclass that automatically registers its classes in a dict at runtime. In order for it to work properly, it must be able to ignore abstract classes.
The code works really well in Python 2, but I've run into a wall trying to make it compatible with Python 3.
Here's what the code looks like currently:
def AutoRegister(registry, base_type=ABCMeta):
    class _metaclass(base_type):
        def __init__(self, what, bases=None, attrs=None):
            super(_metaclass, self).__init__(what, bases, attrs)
            # Do not register abstract classes.
            # Note that we do not use `inspect.isabstract` here, as
            # that only detects classes with unimplemented abstract
            # methods - which is a valid approach, but not what we
            # want here.
            # :see: http://stackoverflow.com/a/14410942/
            metaclass = attrs.get('__metaclass__')
            if not (metaclass and issubclass(metaclass, ABCMeta)):
                registry.register(self)
    return _metaclass
Usage in Python 2 looks like this:
# Abstract classes; these are not registered.
class BaseWidget(object): __metaclass__ = AutoRegister(widget_registry)
class BaseGizmo(BaseWidget): __metaclass__ = ABCMeta
# Concrete classes; these get registered.
class AlphaWidget(BaseWidget): pass
class BravoGizmo(BaseGizmo): pass
What I can't figure out, though, is how to make this work in Python 3.
How can a metaclass determine if it is initializing an abstract class in Python 3?
PEP3119 describes how the ABCMeta metaclass "marks" abstract methods and creates an __abstractmethods__ frozenset that contains all methods of a class that are still abstract. So, to check if a class cls is abstract, check if cls.__abstractmethods__ is empty or not.
I also found this relevant post on abstract classes useful.
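As a small illustration of that check (Base, Concrete and is_abstract are made-up names, Python 3):
from abc import ABC, abstractmethod

class Base(ABC):
    @abstractmethod
    def run(self):
        ...

class Concrete(Base):
    def run(self):
        return "ok"

def is_abstract(cls):
    # A non-empty __abstractmethods__ frozenset means the class still has
    # unimplemented abstract methods and therefore cannot be instantiated.
    return bool(getattr(cls, "__abstractmethods__", frozenset()))

print(is_abstract(Base))      # True
print(is_abstract(Concrete))  # False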
I couldn't shake the feeling as I was posting this question that I was dealing with an XY Problem. As it turns out, that's exactly what was going on.
The real issue here is that the AutoRegister metaclass, as implemented, relies on a flawed understanding of what an abstract class is. Python or not, one of the most important criteria of an abstract class is that it is not instantiable.
In the example posted in the question, BaseWidget and BaseGizmo are instantiable, so they are not abstract.
Aren't we just bifurcating rabbits here?
Well, why was I having so much trouble getting AutoRegister to work in Python 3? Because I was trying to build something whose behavior contradicts the way classes work in Python.
The fact that inspect.isabstract wasn't returning the result I wanted should have been a major red flag: AutoRegister is a warranty-voider.
So what's the real solution then?
First, we have to recognize that BaseWidget and BaseGizmo have no reason to exist. They do not provide enough functionality to be instantiable, nor do they declare abstract methods that describe the functionality that they are missing.
One could argue that they could be used to "categorize" their sub-classes, but a) that's clearly not what's going on in this case, and b) quack.
Instead, we could embrace Python's definition of "abstract":
1. Modify BaseWidget and BaseGizmo so that they define one or more abstract methods.
   - If we can't come up with any abstract methods, then can we remove them entirely?
   - If we can't remove them but also can't make them properly abstract, it might be worthwhile to take a step back and see if there are other ways we might solve this problem.
2. Modify the definition of AutoRegister so that it uses inspect.isabstract to decide if a class is abstract (a rough sketch follows below).
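A rough sketch of that second option, reusing the registry/base_type interface from the question (not a drop-in final implementation):
import inspect
from abc import ABCMeta

def AutoRegister(registry, base_type=ABCMeta):
    class _metaclass(base_type):
        def __init__(self, what, bases=None, attrs=None):
            super(_metaclass, self).__init__(what, bases, attrs)
            # Let Python decide what "abstract" means: a class that still
            # has unimplemented abstract methods is not registered.
            if not inspect.isabstract(self):
                registry.register(self)
    return _metaclass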
That's cool and all, but what if I can't change the base classes?
Or, if you have to maintain backwards compatibility with existing code (as was the case for me), a decorator is probably easier:
@widget_registry.register
class AlphaWidget(object):
    pass

@widget_registry.register
class BravoGizmo(object):
    pass

Get PyCharm to know what classes are mixin for

Our application has a set of complex form wizards. To avoid code duplication I created several mixins.
The problem is that PyCharm highlights mixin methods with an "Unresolved attribute reference" error.
This is correct, as object does not have such methods. But I know that this mixin will be used only with special classes. Is there any way to tell PyCharm this?
For now I use this approach:
class MyMixin(object):
    def get_context_data(self, **kwargs):
        assert isinstance(self, (ClassToBeExtended, MyMixin))
        # super().get_context_data is still highlighted,
        # as super is considered to be object
        context = super(MyMixin, self).get_context_data(**kwargs)
        context.update(self.get_preview_context())
        return context

    def get_preview_context(self):
        # without this line PyCharm highlights self.initial_data
        assert isinstance(self, (ClassToBeExtended, MyMixin))
        return {'needs': (self.initial_data['needs']
                          if 'type' not in self.initial_data
                          else '%(needs)s %(type)s' % self.initial_data)}
While this works for some cases like autocomplete for self., it fails for other cases like super. Is there a better approach to achieve the desired behavior?
P.S.: I know that I can disable reference check for specific name or whole class, but I don't want to do this as it will not help in typo checks and autocomplete.
You can type-hint to PyCharm what kind of classes to expect.
class DictMixin(object):
    def megamethod(
        self,  # type: dict
        key
    ):
        return self.get(key)
It's still not quite comparable to other type handling.
PyCharm is lazy in evaluating it, and only does so when first working on self.
Things are a bit tricky when accessing attributes of the mixin as well - self, # type: dict | DictMixin works for one of my classes, but not in my test code.
In Python 3.5, you should be able to use # type: typing.Union[dict, DictMixin].
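A sketch of that variant on the same DictMixin example (assuming the typing module is available):
from typing import Union

class DictMixin(object):
    def megamethod(
        self,  # type: Union[dict, "DictMixin"]
        key
    ):
        return self.get(key)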
If you are creating a mixin for, let's say, ClassSub, which is a subclass of ClassSuper, you can implement your mixins this way:
class Mixin1(ClassSuper):
    pass

class Mixin2(ClassSuper):
    pass
and then use them like:
class ClassSub(Mixin1, Mixin2):
    pass
That's the way I use some mixins for models in Django. Also, django-extensions uses a similar pattern (it provides models that are actually mixins). Basically, this way you don't have to inherit from ClassSuper directly, because it's "included" in each of your mixins.
Most importantly, PyCharm works like a charm this way.

Re-initializing parent of a class

I have become stuck on a problem with a class that I am writing where I need to be able to reinitialize the parents of that class after having created an instance of the class. The problem is that the parent class has a read and a write mode that is determined by passing a string to the init function. I want to be able to switch between these modes without destroying the object and re-initialising. Here is an example of my problem:
from parent import Parent

class Child(Parent):
    def __init__(self, mode="w"):
        super.__init__(mode=mode)

    def switch_mode(self):
        # need to change the mode called in the super function here somehow
The idea is to extend a class that I have imported from a module to offer extended functionality. The problem is that I still need to be able to access the original class methods from the new extended object. This has all worked smoothly so far, with me simply adding and overwriting methods as needed. As far as I can see, the alternative is to use composition rather than inheritance, so that the object I want to extend is created as a member of the new class. The problem with this is that it requires me to make methods for accessing each of the object's methods,
i.e. lots of this sort of thing:
def read_frames(self):
    return self.memberObject.read_frames()

def seek(self):
    return self.memberObject.seek()
which doesn't seem all that fantastic, and comes with the problem that if any new methods are added to the base class in the future, I have to create new wrapper methods manually in order to access them. Is this perhaps the only option?
Thanks in advance for any help!
This should work. super is a function.
super(Child, self).__init__(mode=mode)
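Applied to the question's example, a minimal sketch might look like this (it assumes Parent.__init__ tolerates being called again on an existing instance):
from parent import Parent  # the asker's module

class Child(Parent):
    def __init__(self, mode="w"):
        super(Child, self).__init__(mode=mode)

    def switch_mode(self, mode):
        # Re-run the parent initializer with the new mode. Whether this is
        # safe depends on how Parent is written.
        super(Child, self).__init__(mode=mode)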

Where is the best place to put support functions in a class?

In Python, I have a class that I've built.
However, there is one method where I apply a rather specific type of substring-search procedure. This procedure could be a standalone function by itself (it just requires a needle and a haystack string), but it feels odd to have the function outside the class, because my class depends on it.
What is the typical design paradigm for this? Is it typical to just have myClassName.py with the main class, as well as all the support functions outside the class itself, in the same file? Or is it better to have the support function embedded within the class at the expense of modularity?
You can create a staticmethod, like so:
class yo:
    @staticmethod
    def say_hi():
        print("Hi there!")
Then, you can do this:
>>> yo.say_hi()
Hi there!
>>> a = yo()
>>> a.say_hi()
Hi there!
They can be used non-statically, and statically (if that makes sense).
About where to put your functions...
If a method is required by a class, and it is appropriate for the method to operate on data that is specific to an instance of the class, then make it an instance method. This is what you would want:
class yo:
    def __init__(self):
        self.message = "Hello there!"

    def say_message(self):
        print(self.message)
My say_message relies on data that is particular to an instance of the class.
If you feel the need to have a function, in addition to the class method, by all means go ahead. Use whichever one is more appropriate in your script. There are many examples of this, including in the python built-ins. Take generator objects for example:
a = my_new_generator()
a.next()
Can also be done as:
a = my_new_generator()
next(a)
Use whichever is more appropriate, and obviously whichever one is more readable. :)
If you can think of any reason to override this function one day, make it a staticmethod; otherwise a plain function is just fine - FWIW, your class probably depends on much more than this simple function. And if you cannot think of any reason for anyone else to ever use this function, keep it in the same module as your class.
As a side note: "myClassName.py" is definitely unpythonic. First because module names should be all_lower, then because the one-module-per-class approach is nonsense in Python - we group related classes and functions (and exceptions and whatnot) together.
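For instance, a minimal sketch of that layout (module, class and function names are all made up here):
# textutils.py - hypothetical module holding both the class and its helper

def find_needle(needle, haystack):
    # Plain module-level function: specific to this module, but it does not
    # touch any instance state, so it does not need to live on the class.
    return haystack.find(needle)

class TextScanner(object):
    def first_match(self, needle, haystack):
        return find_needle(needle, haystack)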
If the search method you are talking about is really so specific and you will never need to reuse it somewhere else, I do not see any reason to make it static. The fact that it doesn't require access to instance variables doesn't make it static by definition.
If there is a possibility, that this method is going to be reused, refactor it into a helper/utility class (no static again).
ADDED:
Just wanted to add that when you consider whether something should be static or not, think about how the method name relates to the class name. Does this method name make more sense when used in class context or in object context?
