Trying to change the __unicode__ method on an instance after it's created produces different results on Python 2.5 and 2.6.
Here's a test script:
class Dummy(object):
    def __unicode__(self):
        return u'one'
    def two(self):
        return u'two'

d = Dummy()
print unicode(d)
d.__unicode__ = d.two
print unicode(d)
print d.__unicode__()
On Python 2.5, this produces
one
two
two
That is, changing the instance's __unicode__ also changes unicode(instance).
On Python 2.6, this produces
one
one
two
So, after a change, unicode(instance) and instance.__unicode__() return different results.
Why? How can I get this working on Python 2.6?
(For what it's worth, the use case here is that I want to append something to the output of __unicode__ for all subclasses of a given class, without having to modify the code of the subclasses.)
Edit to make the use case a little clearer
I have Class A, which has many subclasses. Those subclasses define simple __unicode__ methods. I want to add logic so that, for instances of a Class A subclass, unicode(instance) gets something tacked on to the end. To keep the code simple, and because there are many subclasses I don't want to change, I'd prefer to avoid editing subclass code.
This is actually existing code that works in Python 2.5. It's something like this:
class A(object):
    def __init__(self):
        self._original_unicode = self.__unicode__
        self.__unicode__ = self.augmented_unicode

    def augmented_unicode(self):
        return self._original_unicode() + u' EXTRA'
It's this code that no longer works on 2.6. Any suggestions on how to achieve this without modifying subclass code? (If the answer involves metaclasses, note that class A is itself a subclass of another class -- django.db.models.Model -- with a pretty elaborate metaclass.)
It appears that you are not allowed to monkey-patch protocol methods (those that begin and end with double underscores):
Note: In practise there is another exception that we haven't handled here. Although you can override methods with instance attributes (very useful for monkey patching methods for test purposes) you can't do this with the Python protocol methods. These are the 'magic methods' whose names begin and end with double underscores. When invoked by the Python interpreter they are looked up directly on the class and not on the instance (however if you look them up directly - e.g. x.__repr__ - normal attribute lookup rules apply).
That being the case, you may be stuck unless you can go with ~unutbu's answer.
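To see the class-level lookup in action, here's a minimal sketch (illustrative names, Python 2.6) using __repr__, which follows the same protocol rules:

class Example(object):
    def __repr__(self):
        return 'class repr'

e = Example()
e.__repr__ = lambda: 'instance repr'
print repr(e)       # class repr    -- the interpreter looks __repr__ up on the class
print e.__repr__()  # instance repr -- explicit lookup uses normal attribute rules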
EDIT: Or, you can have the base class's __unicode__ method search the instance's __dict__ for a __unicode__ attribute. If it's present, __unicode__ has been defined on the instance, and the class method calls the instance method. Otherwise, we fall back to the class definition of __unicode__.
I think that this could allow your existing subclass code to work without any changes. However, it gets ugly if the derived class wants to invoke the class implementation -- you need to be careful to avoid infinite loops. I haven't implemented such hacks in this example; merely commented about them.
import types

class Dummy(object):
    def __unicode__(self):
        func = self.__dict__.get("__unicode__", None)
        if func:
            # WARNING: if func() invokes this __unicode__ method directly,
            # an infinite loop could result. You may need an ugly hack to guard
            # against this. (E.g., set a flag on entry / unset the flag on exit,
            # using a try/finally to protect against exceptions.)
            return func()
        return u'one'
    def two(self):
        return u'two'

d = Dummy()
print unicode(d)
d.__unicode__ = types.MethodType(Dummy.two, d)
print unicode(d)
print d.__unicode__()
Testing with Python 2.6 produces the following output:
> python dummy.py
one
two
two
Edit: In response to the OP's comment: Adding a layer of indirection can allow you to change the behavior of unicode on a per-instance basis:
class Dummy(object):
    def __unicode__(self):
        return self._unicode()
    def _unicode(self):
        return u'one'
    def two(self):
        return u'two'

d = Dummy()
print unicode(d)
# one
d._unicode = d.two
print unicode(d)
# two
print d.__unicode__()
# two
Looks like Dan is correct about monkey-patching protocol methods, and that this was a change between Python 2.5 and Python 2.6.
My fix ended up being making the change on the classes rather than the instances:
class A(object):
    def __init__(self):
        self.__class__.__unicode__ = self.__class__.augmented_unicode
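Fleshed out slightly (a sketch, not the exact production code): the patch needs to save each subclass's original __unicode__ before overwriting it, and should run at most once per class:

class A(object):
    def __init__(self):
        cls = self.__class__
        # Patch each subclass at most once; keep the original function
        # around so augmented_unicode can still delegate to it.
        if '_original_unicode' not in cls.__dict__:
            cls._original_unicode = cls.__unicode__.im_func
            cls.__unicode__ = cls.augmented_unicode

    def augmented_unicode(self):
        return self._original_unicode() + u' EXTRA'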
Related
I'm trying to write my own Python ANSI terminal colorizer, and eventually came to the point where I want to define a bunch of public properties with similar bodies, so I want to ask: is there a way in Python to define multiple similar methods with slight differences between them?
Actual code I'm stuck with:
class ANSIColors:
    @classmethod
    def __wrap(cls, _code: str):
        return cls.PREFIX + _code + cls.POSTFIX

    @classproperty
    def reset(cls: Type['ANSIColors']):
        return cls.__wrap(cls.CODES['reset'])

    @classproperty
    def black(cls: Type['ANSIColors']):
        return cls.__wrap(cls.CODES['black'])

    ...
If it actually matters, here is the code for classproperty decorator:
def classproperty(func):
    return classmethod(property(func))
I will be happy to see answers with some Python-intended solutions, rather than code generation programs.
Edit 1: it will be great to preserve given properties names.
I don't think you need (or really want) to use properties to do what you want to accomplish, which is good because doing so would require a lot of repetitive code if you have many entries in the class' CODES attribute (which I'm assuming is a dictionary mapping).
You could instead use __getattr__() to dynamically look up the strings associated with the names in the class' CODES attribute, because then you wouldn't need to explicitly create a property for each of them. However, in this case it needs to be applied to the class-of-the-class — in other words, the class' metaclass.
The code below show how to define one that does this:
class ANSIColorsMeta(type):
    def __getattr__(cls, key):
        """Call (mangled) private class method __wrap() with the key's code."""
        return getattr(cls, f'_{cls.__name__}__wrap')(cls.CODES[key])

class ANSIColors(metaclass=ANSIColorsMeta):
    @classmethod
    def __wrap(cls, code: str):
        return cls.PREFIX + code + cls.POSTFIX

    PREFIX = '<prefix>'
    POSTFIX = '<postfix>'
    CODES = {'reset': '|reset|', 'black': '|black|'}

if __name__ == '__main__':
    print(ANSIColors.reset)   # -> <prefix>|reset|<postfix>
    print(ANSIColors.black)   # -> <prefix>|black|<postfix>
    print(ANSIColors.foobar)  # -> KeyError: 'foobar'
It's also important to note that this could be made much faster by having the metaclass' __getattr__() assign the result of the lookup to an actual class attribute, thereby avoiding the need to repeat the whole process if the same key is ever used again (because __getattr__() is only called when the default attribute access fails). This effectively caches looked-up values, auto-optimizing the class based on how it's actually being used.
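A sketch of that caching variant (one added line in the metaclass shown above):

class ANSIColorsMeta(type):
    def __getattr__(cls, key):
        value = getattr(cls, f'_{cls.__name__}__wrap')(cls.CODES[key])
        setattr(cls, key, value)  # cache it; future lookups never reach __getattr__
        return value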
Context
I've been working on a Python project recently, and found modularity very important. For example, say you made a class with some attributes and a line of code that uses those attributes, like:
a = A()
print("hi"+a.imA)
If you were to modify imA of class A to another type, you would have to modify the print statement. In my case I had to do this so many times. It was annoying and time consuming. get/set methods would've solved this, but I heard that get/set are not 'good python'. So how would you solve this problem without using get and set methods?
First point: you would have saved yourself quite some hassle by using string formatting instead of string concatenation, ie:
print("hi {}".format(a.imA))
Granted, the final result may or may not be what you'd expect depending on how a.imA's type implements __str__() and __repr__(), but at least this will not break the code.
wrt/ getters and setters, they are indeed considered rather unpythonic, because python has a strong support for computed attributes, and a simple generic implementation is available as the builtin property type.
NB: actually what's considered unpythonic is to systematically use implementation attributes and getters/setters (either explicit, or implicit as is the case with computed attributes) when a plain public attribute is enough. This is considered unpythonic because you can always turn a plain attribute into a computed one without breaking the client code (assuming, of course, you don't change the type or semantics of the attribute) - something that was not possible with early OOPLs like Smalltalk, C++ or Java (Smalltalk being a bit of a special case actually, but that's another topic).
In your case, if the point was to change the stored value's type without breaking the API, the simple obvious canonical solution was to use a property delegating to an implementation attribute:
before:
class Foo(object):
    def __init__(self, bar):
        # `bar` is expected to be the string representation of an int.
        self.bar = bar

    def frobnicate(self, val):
        return (int(self.bar) + val) / 2
after:
class Foo(object):
    def __init__(self, bar):
        # `bar` is expected to be the string representation of an int,
        # but we want to store it as an int.
        self.bar = bar

    @property
    def bar(self):
        return str(self._bar)

    @bar.setter
    def bar(self, value):
        self._bar = int(value)

    def frobnicate(self, val):
        # internally we use the implementation attribute `_bar`
        return (self._bar + val) / 2
And you now have the value stored internally as an int, but the public interface is (almost) exactly the same - the only difference being that passing something that cannot be handled by int() will raise at the expected place (when you set it) instead of breaking at the most unexpected one (when you call .frobnicate()).
Now note that changing the type of a public attribute is just like changing the return type of a getter (or the type of a setter argument) - in both cases you are breaking the contract - so if what you wanted was really to change the type of A.imA, neither getters nor properties would have solved your issue - getters and setters (or, in Python, computed attributes) can only protect you from implementation changes.
EDIT: oh and yes: this has nothing to do with modularity (which is about writing decoupled, self-contained code that's easier to read, test, maintain and eventually reuse), but with encapsulation (whose aim is to make the public interface resilient to implementation changes).
First, use
print(f"hi {a.imA}") # Python 3.6+
or
print("hi {}".format(a.imA)) # all Python 3
instead of
print("hi"+a.imA)
That way, str will be called automatically on each argument.
Then define a __str__ method in all your classes, so that printing any instance always works.
class A:
    def __init__(self):
        self._member_1 = "spam"

    def __str__(self):
        return f"A(member 1: {self._member_1})"
I have two child classes which inherit from the same base class. I have a method in a different script that returns one of the child class objects, depending on some condition. Is it correct in Python to return different child objects from the same method? I think yes, since their type is the same and they inherit from the same base class. Or should type casting be done? Please guide me; the example below is just for explaining the question in simple terms.
class A:
    ...

class B(A):
    # different methods
    ...

class C(A):
    # different methods
    ...
Other Script:
def test_func():
    if <some-condition>:
        new_obj = B()
    else:
        new_obj = C()
    return new_obj
Python is a dynamically typed language. One does not declare types. So, from that side, it is perfectly fine to pass arguments and return values of any type.
On the other hand, you want your objects to be usable, so some interface has to be adhered to. For example, you can often pass any object with read and readline methods instead of an opened file. That is not only acceptable, but actually one of the strong advantages of Python over some other languages.
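For instance, a minimal sketch (the names here are invented) of the file-like-object idea:

class FakeFile:
    """Not a file, but quacks like one for code that only reads."""
    def __init__(self, text):
        self._lines = text.splitlines(True)  # keep line endings

    def read(self):
        return ''.join(self._lines)

    def readline(self):
        return self._lines.pop(0) if self._lines else ''

def first_line(f):
    # accepts a real open() file or anything with a readline() method
    return f.readline()

print(first_line(FakeFile("spam\neggs\n")))  # spam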
In this question, the case is even cleaner than what is usually done in Python. This pattern is valid even in e.g. much stricter C++ (see this question).
TL;DR:
Yes, it is fine. It would even be fine without inheriting from A, as long as B and C looked and behaved (and quacked) similarly enough for the code using test_func to work.
I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  # >>> 4

class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  # >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by the trick of accepting a self argument that it never uses.
So what is the benefit of using @staticmethod? Does it improve performance? Or is it just due to the Zen of Python, which states that "explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
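A quick sketch of that comparison (illustrative names):

class test1:
    @staticmethod
    def static_add_one(value):
        return value + 1

def add_one(value):  # the standalone-function rival
    return value + 1

print(test1.static_add_one(1))  # 2 -- callable on the class, no instance needed
print(add_one(1))               # 2 -- just as valid outside any class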
Today I suddenly found a benefit of using @staticmethod.
If you create a staticmethod within a class, you don't need to create an instance of the class before using it.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        # ...parsing work...
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        # ...parsing work...
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: unbound method parse() ...
    File2.parse(path)  # Goal!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be useful in other classes under some circumstances. If you want to do that with File1, you must create an instance of File1 before calling parse. With the staticmethod in the class File2, you can call the method directly with File2.parse(path).
This makes your work more convenient and natural.
I will add something other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
1. Put it outside the class. But we just decided against this.
2. Do nothing new: while unused, still keep the self parameter.
3. Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you may still prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply a function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
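Here's a sketch of option 3's payoff (invented names): because other methods call f through self, a subclass can later override it with a version that does use instance state:

class C:
    @staticmethod
    def f(value):             # no instance state needed -- yet
        return value

    def run(self, value):
        return self.f(value)  # dispatched through self, so still overridable

class D(C):
    def __init__(self, factor):
        self.factor = factor

    def f(self, value):       # the override now depends on instance state
        return value * self.factor

print(C().run(5))   # 5
print(D(3).run(5))  # 15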
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
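A sketch of what such a staticmethod might look like (the Article class and the regex are illustrative, not the actual project code):

import re

class Article:
    def __init__(self, title):
        self.title = title
        self.slug = self.slugify(title)

    @staticmethod
    def slugify(title):
        # lowercase, then collapse anything URL-unsafe into single hyphens
        return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')

print(Article.slugify("Hello, World!"))  # hello-world -- no instance required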
In Java you can define a new class inline using anonymous inner classes. This is useful when you need to rewrite only a single method of the class.
Suppose that you want to create a subclass of OptionParser that overrides only a single method (for example exit()). In Java you can write something like this:
new OptionParser() {
    public void exit() {
        // body of the method
    }
};
This piece of code creates an anonymous class that extends OptionParser and overrides only the exit() method.
There is a similar idiom in Python? Which idiom is used in these circumstances?
You can use the type(name, bases, dict) builtin function to create classes on the fly. For example:
op = type("MyOptionParser", (OptionParser, object), {"foo": lambda self: "foo"})
op().foo()
Since OptionParser isn't a new-style class, you have to explicitly include object in the list of base classes.
Java uses anonymous classes mostly to imitate closures or simply code blocks. Since in Python you can easily pass around methods there's no need for a construct as clunky as anonymous inner classes:
def printStuff():
    print "hello"

def doit(what):
    what()

doit(printStuff)
Edit: I'm aware that this is not what is needed in this special case. I just described the most common Python solution to the problem most commonly solved with anonymous inner classes in Java.
You can accomplish this in three ways:
1. A proper subclass (of course)
2. A custom method that you invoke with the object as an argument
3. (What you probably want) adding a new method to an object (or replacing an existing one)
Example of option 3 (edited to remove use of the "new" module -- it's deprecated, I did not know):
import types

class someclass(object):
    val = "Value"

    def some_method(self):
        print self.val

def some_method_upper(self):
    print self.val.upper()

obj = someclass()
obj.some_method()
obj.some_method = types.MethodType(some_method_upper, obj)
obj.some_method()
Well, classes are first class objects, so you can create them in methods if you want. e.g.
from optparse import OptionParser

def make_custom_op(i):
    class MyOP(OptionParser):
        def exit(self):
            print 'custom exit called', i
    return MyOP

custom_op_class = make_custom_op(3)
custom_op = custom_op_class()
custom_op.exit()  # prints 'custom exit called 3'
dir(custom_op)    # shows all the regular attributes of an OptionParser
But, really, why not just define the class at the normal level? If you need to customise it, put the customisation in as arguments to __init__.
(edit: fixed typing errors in code)
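For example, a sketch of that last suggestion (the exit_msg parameter is invented):

from optparse import OptionParser

class CustomOP(OptionParser):
    def __init__(self, exit_msg, *args, **kwargs):
        OptionParser.__init__(self, *args, **kwargs)
        self.exit_msg = exit_msg

    def exit(self, status=0, msg=None):
        print self.exit_msg

op = CustomOP('custom exit called 3')
op.exit()  # prints 'custom exit called 3'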
Python doesn't support this directly (anonymous classes) but because of its terse syntax it isn't really necessary:
class MyOptionParser(OptionParser):
    def exit(self, status=0, msg=None):
        pass  # body of method

p = MyOptionParser()
The only downside is you add MyOptionParser to your namespace, but as John Fouhy pointed out, you can hide that inside a function if you are going to do it multiple times.
Python probably has better ways to solve your problem. If you could provide more specific details of what you want to do it would help.
For example, if you need to change the method being called at a specific point in the code, you can do this by passing the function as a parameter (functions are first-class objects in Python; you can pass them to functions, etc). You can also create anonymous lambda functions (but they're restricted to a single expression).
Also, since Python is very dynamic, you can change the methods of an object after it's been created: object.method1 = alternative_impl1. It's actually a bit more complicated, though; see gnud's answer.
In Python you have anonymous functions, declared with the lambda keyword. I do not like them very much - they are not so readable, and have limited functionality.
However, what you are talking about may be implemented in python with a completely different approach:
class a(object):
    def meth_a(self):
        print "a"

def meth_b(obj):
    print "b"

b = a()
b.__class__.meth_a = meth_b
You can always hide a class behind a variable:
class var(...):
    pass

var = var()
instead of
var = new ...() {};
This is what you would do in Python 3.7
#!/usr/bin/env python3

class ExampleClass:
    def exit(self):
        print('this should NOT print since we are going to override')

ExampleClass = type('', (ExampleClass,), {'exit': lambda self: print('you should see this printed only')})()
ExampleClass.exit()
In Python 3, I usually do this with inner classes:
class SomeSerializer():
    class __Paginator(Paginator):
        page_size = 10

    # defining it for e.g. Rest:
    pagination_class = __Paginator

    # you could also access it to e.g. create an instance via a method:
    def get_paginator(self):
        return self.__Paginator()
As I used a double underscore, this mixes the idea of name "mangling" with inner classes: from outside you can still access the inner class with SomeSerializer._SomeSerializer__Paginator, and so can subclasses, but SomeSerializer.__Paginator will not work, which might or might not be your wish if you want it a bit more "anonymous".
However, I suggest using the "private" notation with a single underscore if you do not need the mangling.
In my case, all I need is a quick subclass to set some class attributes, followed by assigning it to the class attribute of my RestSerializer class, so the double underscore denotes "not to be used further at all" and might change to no underscores if I start reusing it elsewhere.
Being perverse, you could use the throwaway name _ for the derived class name:
class _(OptionParser):
    def exit(self):
        pass  # your override impl
Here is a fancier way of doing Maciej's method.
I defined the following decorator:
def newinstance(*args, **kwargs):
    def decorator(cls):
        return cls(*args, **kwargs)
    return decorator
The following snippets are roughly equivalent (this also works with args!):
// java
MyClass obj = new MyClass(arg) {
    public void method() {
        // body of the method
    }
};

# python
@newinstance(arg)
class obj(MyClass):
    def method(self):
        pass  # body of the method
You can use this code from within a class/method/function if you want to define an "inner" class instance.
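For instance (a sketch with invented names):

class Greeter:
    def greet(self):
        return "hello"

def make_loud_greeter():
    @newinstance()          # the class statement's result is already an instance
    class loud(Greeter):
        def greet(self):
            return super().greet().upper()
    return loud             # an instance, not a class

print(make_loud_greeter().greet())  # HELLO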