I have a Python class Foo and a more memory-conscious version LightFoo. The information contained in their attributes is ultimately the same but is encoded differently.
A handful of Foo's methods will be completely re-written for LightFoo, but for most of them it will be fine to cast the LightFoo instance to a Foo and call the corresponding Foo method. For example, LightFoo might include:
def total(self):
    return self.fooize().total()
If Foo has 100 methods, though, this gets really tedious. What would be really convenient is to set up LightFoo to inherit from Foo and have the casting step somehow inserted by default for all methods not found in LightFoo. I'm pretty sure this isn't possible, but it seems like there must be a better approach than writing a block like the one above for each of Foo's methods.
If fooize() is doing some custom processing to convert the object to a Foo, you can do this by just defining __getattr__, which is called when you access an attribute that can't be found. You can call fooize() there.
def __getattr__(self, name):
    return getattr(self.fooize(), name)
Otherwise, you can just inherit from Foo and let the nonexistent methods fall back to the superclass.
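Putting the first approach together, a minimal sketch of the delegation pattern might look like this (the toy Foo, the packed string format, and the conversion inside fooize() are all illustrative, since I don't know how LightFoo actually encodes its data):

class Foo:
    def __init__(self, values):
        self.values = values

    def total(self):
        return sum(self.values)

class LightFoo:
    def __init__(self, packed):
        # e.g. a compact string encoding instead of a list of ints
        self.packed = packed

    def fooize(self):
        # Hypothetical conversion back to the rich representation.
        return Foo([int(v) for v in self.packed.split(',')])

    def __getattr__(self, name):
        # Only called when normal lookup fails, so anything defined
        # directly on LightFoo still takes precedence.
        return getattr(self.fooize(), name)

light = LightFoo('1,2,3')
print(light.total())   # 6 -- delegated to Foo.total via __getattr__

Keep in mind that every delegated access builds a temporary Foo, so you pay the conversion cost on each call.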
class Foo:
    def __init__(self, bar):
        self.bar = bar

    def get_new_foo(self, new_bar):
        return type(self)([self.bar, new_bar])  # How should it be documented?
If get_new_foo gets called from a derived class, then it would return an instance of the derived class. If multiple classes use Foo as base class, then get_new_foo will return an instance of the derived class it was called from.
I want to document what type of object get_new_foo returns, and I don't understand what or how to document it. I can't say "Returns an instance of Foo" because this will not always be the case.
Personally, I wouldn't be overly concerned about this. Since any subclass "is-a" Foo anyway, you're at worst mildly misleading in your chosen wording. If you want to be pedantically correct, you can always expand it to "Returns an instance of Foo (a Foo subclass when called on child class instances)".
You can provide type hints in the docstring.
Since you're writing the documentation on a class that will be the super-class for other classes, it makes sense to document the base type it returns. Even if the actual type differs once the base class is inherited, the results will still be instances of the base type, and at this level you cannot document anything more specific anyway.
If you feel sub-classes need more specific documentation for whatever they return, you can simply provide the documentation there.
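If it helps, here is one hedged way to combine both suggestions, using typing.Self (Python 3.11+; on older versions you could annotate with a string or a TypeVar instead) plus a docstring note:

from typing import Self  # Python 3.11+

class Foo:
    def __init__(self, bar):
        self.bar = bar

    def get_new_foo(self, new_bar) -> Self:
        """Return a new instance of the same class as ``self``.

        When called on a subclass of Foo, the result is an instance of
        that subclass.
        """
        return type(self)([self.bar, new_bar])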
Lately, I've been studying Python's class instantiation process to really understand what happens under the hood when creating a class instance. But, while playing around with test code, I came across something I don't understand.
Consider this dummy class
class Foo():
    def test(self):
        print("I'm using test()")
Normally, if I wanted to use the Foo.test instance method, I would create an instance of Foo and call it explicitly, like so:
foo_inst = Foo()
foo_inst.test()
>>>> I'm using test()
But, I found that calling it that way ends up with the same result,
Foo.test(Foo)
>>>> I'm using test()
Here I don't actually create an instance, but I'm still accessing Foo's instance method. Why and how does this work in Python? I mean, self normally refers to the current instance of the class, but I'm not technically creating a class instance in this case.
print(Foo()) #This is a Foo object
>>>><__main__.Foo object at ...>
print(Foo) #This is not
>>>> <class '__main__.Foo'>
Props to everyone that led me there in the comments section.
The answer to this question relies on two fundamentals of Python:
Duck-typing
Everything is an object
Indeed, even though self is Python's idiom for referencing the current class instance, you can technically pass whatever object you want because of how Python handles typing.
Now, the other confusion that brought me here is that I wasn't creating an object in my second example. But, the thing is, Foo is already an object internally.
This can be tested empirically like so,
print(type(Foo))
<class 'type'>
So, we now know that Foo is an instance of class type and therefore can be passed as self even though it is not an instance of itself.
Basically, if I were to manipulate self as if it were a Foo object in my test method, I would run into problems when calling it the way I did in my second example.
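To make that concrete, here is a small sketch (illustrative only): passing the class as self works only as long as the method never touches instance state.

class Foo:
    def __init__(self):
        self.name = "a real instance"

    def test(self):
        print("I'm using test()")

    def who(self):
        print(self.name)

Foo.test(Foo)    # works: self is never used, so any object will do
Foo.who(Foo())   # works: a real instance has a .name attribute
Foo.who(Foo)     # AttributeError: type object 'Foo' has no attribute 'name'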
A few notes on your question (and answer). First, everything really is an object. Even a class is an object, so there is the class of the class (called a metaclass), which is type in this case.
Second, and more relevant to your case: methods are, more or less, class attributes, not instance attributes. In Python, when you have an object obj that is an instance of Class and you access obj.x, Python first looks into obj, and then into Class. That's what happens when you access a method from an instance: methods are just special class attributes, so they can be accessed from both the instance and the class. And, since you are not using any instance attributes of the self that should be passed to the test(self) function, the object that is passed is irrelevant.
To understand that in depth, you should read about the descriptor protocol, if you are not familiar with it. It explains a lot about how things work in Python. It allows Python classes and objects to be essentially dictionaries with some special attributes (very similar to JavaScript objects and methods).
Regarding class instantiation, read about __new__ and metaclasses.
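As a small taste of the descriptor protocol (a simplified sketch, not the full story): a method is just a function stored in the class's __dict__, and accessing it through an instance binds it via __get__.

class Foo:
    def test(self):
        print("I'm using test()")

foo = Foo()
func = Foo.__dict__['test']     # a plain function object
bound = func.__get__(foo, Foo)  # roughly what foo.test does behind the scenes
print(bound)                    # <bound method Foo.test of <...Foo object...>>
bound()                         # I'm using test()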
I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  ## >>> 4
class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  ## >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by using a trick that accepts a self argument but doesn't use it at all.
So what is the benefit of using @staticmethod? Does it improve performance? Or is it just due to the Zen of Python, which states that "explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
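To make the three shapes concrete, a quick sketch (names are purely illustrative):

def add_one(value):                  # standalone function, no class at all
    return value + 1

class WithStatic:
    @staticmethod
    def add_one(value):              # staticmethod: lives in the class namespace
        return value + 1

class WithIgnoredSelf:
    def add_one(self, value):        # regular method that just ignores self
        return value + 1

print(add_one(1))                    # 2
print(WithStatic.add_one(1))         # 2 -- no instance needed
print(WithIgnoredSelf().add_one(1))  # 2 -- but an instance had to be created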
Today I suddenly found a benefit of using @staticmethod.
If you created a staticmethod within a class, you don't need to create an instance of the class before using the staticmethod.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        x = ...  # parsing work happens here
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        x = ...  # parsing work happens here
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: unbound method parse() ....
    File2.parse(path)  # Goal!!!!!!!!!!!!!!!!!!!!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be used in other classes under some circumstances. If you want to do that with File1, you must create an instance of File1 before calling the method parse. With the staticmethod in the class File2, you can directly call the method using the syntax File2.parse.
This makes your work more convenient and natural.
I will add something other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class, you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
Put it outside the class. But we just decided against this.
Do nothing new: while unused, still keep the self parameter.
Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you still may prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply a function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
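Here is a brief sketch of option 3 and the polymorphism point (class and method names are illustrative): because f remains a class attribute and is called through self, a subclass can later override it with a version that does use instance state.

class C:
    @staticmethod
    def f(x):
        return x + 1            # needs no instance state today

    def g(self, x):
        return self.f(x) * 2    # dispatches through self, so overrides apply

class D(C):
    def __init__(self, offset):
        self.offset = offset

    def f(self, x):              # override that *does* use instance state
        return x + self.offset

print(C().g(1))    # 4
print(D(10).g(1))  # 22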
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
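A rough sketch of that pattern (hypothetical class name, deliberately simplified slug logic; a real Django project would more likely use django.utils.text.slugify):

import re

class Article(object):            # stand-in for a Django model class
    def __init__(self, title):
        self.title = title
        self.slug = self.make_slug(title)

    @staticmethod
    def make_slug(title):
        # Lowercase and replace runs of non-alphanumerics with hyphens.
        return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')

print(Article.make_slug("Hello, World!"))  # hello-world
print(Article("Hello, World!").slug)       # hello-world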
Recently I faced a problem in a C-based Python extension while trying to instantiate objects without calling their constructor -- which is a requirement of the extension.
The class to be used to create instances is obtained dynamically: at some point, I have an instance x whose class I wish to use to create other instances, so I store x.__class__ for later use -- let this value be klass.
At a later point, I invoke PyInstance_NewRaw(klass, PyDict_New()) and then, the problem arises. It seems that if klass is an old-style class, the result of that call is the desired new instance. However, if it is a new-style class, the result is NULL and the exception raised is:
SystemError: ../Objects/classobject.c:521: bad argument to internal function
For the record, I'm using Python version 2.7.5. Googling around, I found no more than one other person looking for a solution (and it seemed to me he was using a workaround, but he didn't detail it).
For the record #2: the instances the extension is creating are proxies for these same x instances -- the x.__class__ and x.__dict__'s are known, so the extension is spawning new instances based on __class__ (using the aforementioned C function) and setting the respective __dict__ on the new instance (those __dict__'s have inter-process shared-memory data). Not only is it conceptually problematic to call an instance's __init__ a second time (first: its state is already known; second: the expected behavior for ctors is that they should be called exactly once per instance), it is also impractical, since the extension cannot figure out the arguments and their order to call __init__() for each instance in the system. Also, changing the __init__ of each class in the system whose instances may be proxies, and making them aware of a proxy mechanism they will be subjected to, is conceptually problematic (they shouldn't know about it) and impractical.
So, my question is: how to perform the same behavior of PyInstance_NewRaw regardless of the instance's class style?
The type of a new-style class instance isn't instance, it's the class itself. So the PyInstance_* functions aren't even meaningful for new-style classes.
In fact, the documentation explicitly explains this:
Note that the class objects described here represent old-style classes, which will go away in Python 3. When creating new types for extension modules, you will want to work with type objects (section Type Objects).
So, you will have to write code that checks whether klass is an old-style or new-style class and does the appropriate thing for each case. An old-style class's type is PyClass_Type, while a new-style class's type is either PyType_Type, or a custom metaclass.
Meanwhile, there is no direct equivalent of PyInstance_NewRaw for new-style classes. Or, rather, the direct equivalent—calling its tp_alloc slot and then adding a dict—will give you a non-functional class. You could try to duplicate all the other appropriate work, but that's going to be tricky. Alternatively, you could use tp_new, but that will do the wrong thing if there's a custom __new__ function in the class (or any of its bases). See the rejected patches from #5180 for some ideas.
But really, what you're trying to do is probably not a good idea in the first place. Maybe if you explained why this is a requirement, and what you're trying to do, there would be a better way to do it.
If the goal is to build objects by creating a new uninitialized instance of the class, then copying over its __dict__ from an initialized prototype, there's a much easier solution that I think will work for you:
__class__ is a writeable attribute. So (showing it in Python; the C API is basically the same, just a lot more verbose, and I'd probably screw up the refcounting somewhere):
import types  # for ClassType (old-style classes, Python 2 only)

class NewStyleDummy(object):
    pass

def make_instance(cls, instance_dict):
    if isinstance(cls, types.ClassType):
        # Old-style class: do_old_style_thing stands in for the
        # PyInstance_NewRaw path described in the question.
        obj = do_old_style_thing(cls)
    else:
        obj = NewStyleDummy()
    obj.__class__ = cls
    obj.__dict__ = instance_dict
    return obj
The new object will be an instance of cls—in particular, it will have the same class dictionary, including the MRO, metaclass, etc.
This won't work if cls has a metaclass that's required for its construction, or a custom __new__ method, or __slots__… but then your design of copying over the __dict__ doesn't make any sense in those cases anyway. I believe that in any case where anything could possibly work, this simple solution will work.
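For example, a quick usage sketch of make_instance with a throwaway new-style class (illustrative only):

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def total(self):
        return self.x + self.y

prototype = Point(1, 2)
clone = make_instance(prototype.__class__, dict(prototype.__dict__))
print(isinstance(clone, Point))  # True
print(clone.total())             # 3 -- and __init__ was never called on clone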
Calling cls.__new__ seems like a good solution at first, but it actually isn't. Let me explain the background.
When you do this:
foo = Foo(1, 2)
(where Foo is a new-style class), it gets converted into something like this pseudocode:
foo = Foo.__new__(Foo, 1, 2)
if isinstance(foo, Foo):
    foo.__init__(1, 2)
The problem is that, if Foo or one of its bases has defined a __new__ method, it will expect to get the arguments from the constructor call, just like an __init__ method will.
As you explained in your question, you don't know the constructor call arguments—in fact, that's the main reason you can't call the normal __init__ method in the first place. So, you can't call __new__ either.
The base implementation of __new__ accepts and ignores any arguments it's given. So, if none of your classes has a __new__ override or a __metaclass__, you will happen to get away with this, because of a quirk in object.__new__ (a quirk which works differently in Python 3.x, by the way). But those are the exact same cases the previous solution can handle, and that solution works for much more obvious reason.
Put another way: The previous solution depends on nobody defining __new__ because it never calls __new__. This solution depends on nobody defining __new__ because it calls __new__ with the wrong arguments.
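A short sketch of that failure mode (illustrative class): as soon as some class in the hierarchy defines a __new__ that expects the constructor arguments, calling __new__ without them blows up.

class WithNew(object):
    def __new__(cls, size):
        obj = super(WithNew, cls).__new__(cls)
        obj.buffer = [0] * size
        return obj

    def __init__(self, size):
        self.size = size

w = WithNew(4)                  # normal construction works

raw = WithNew.__new__(WithNew)  # TypeError: __new__() takes exactly 2 arguments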
class Foo(object):
    pass

foo = Foo()

def bar(self):
    print 'bar'

Foo.bar = bar
foo.bar()  # bar
Coming from JavaScript: if a "class" prototype is augmented with a certain attribute, all instances of that "class" will have that attribute in their prototype chain, hence no modification has to be made to any of its instances or "sub-classes".
In that sense, how can a Class-based language like Python achieve Monkey patching?
The real question is, how can it not? In Python, classes are first-class objects in their own right. Attribute access on instances of a class is resolved by looking up attributes on the instance, and then the class, and then the parent classes (in the method resolution order.) These lookups are all done at runtime (as is everything in Python.) If you add an attribute to a class after you create an instance, the instance will still "see" the new attribute, simply because nothing prevents it.
In other words, it works because Python doesn't cache attributes (unless your code does), because it doesn't use negative caching or shadow classes or any of the optimization techniques that would inhibit it (or, when Python implementations do, they take into account that the class might change), and because everything happens at runtime.
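To spell that out with the example from the question (slightly extended), attributes added to the class afterwards are visible from existing instances and from subclasses alike, because the lookup walks the class at access time:

class Foo(object):
    pass

class SubFoo(Foo):
    pass

foo = Foo()
sub = SubFoo()

def bar(self):
    return 'bar'

Foo.bar = bar                 # patch the class after the instances exist

print(foo.bar())              # bar -- the existing instance sees it
print(sub.bar())              # bar -- the subclass instance finds it via the MRO
print('bar' in foo.__dict__)  # False: it lives on the class...
print('bar' in Foo.__dict__)  # True  ...not on the instance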
I just read through a bunch of documentation, and as far as I can tell, the whole story of how foo.bar is resolved, is as follows:
Can we find foo.__getattribute__ by the following process? If so, use the result of foo.__getattribute__('bar').
(Looking up __getattribute__ will not cause infinite recursion, but the implementation of it might.)
(In reality, we will always find __getattribute__ in new-style objects, as a default implementation is provided in object - but that implementation is of the following process. ;) )
(If we define a __getattribute__ method in Foo, and access foo.__getattribute__, foo.__getattribute__('__getattribute__') will be called! But this does not imply infinite recursion - if you are careful ;) )
Is bar a "special" name for an attribute provided by the Python runtime (e.g. __dict__, __class__, __bases__, __mro__)? If so, use that. (As far as I can tell, __getattribute__ falls into this category, which avoids infinite recursion.)
Is bar in the foo.__dict__ dict? If so, use foo.__dict__['bar'].
Does foo.__mro__ exist (i.e., is foo actually a class)? If so,
For each base-class base in foo.__mro__[1:]:
(We skip foo.__mro__[0] because it is foo itself, whose __dict__ we already searched.)
Is bar in base.__dict__? If so:
Let x be base.__dict__['bar'].
Can we find (again, recursively, but it won't cause a problem) x.__get__?
If so, use x.__get__(foo, foo.__class__).
(Note that the function bar is, itself, an object, and the Python compiler automatically gives functions a __get__ attribute which is designed to be used this way.)
Otherwise, use x.
For each base-class base of foo.__class__.__mro__:
(Note that this recursion is not a problem: those attributes should always exist, and fall into the "provided by the Python runtime" case. foo.__class__.__mro__[0] will always be foo.__class__, i.e. Foo in our example.)
(Note that we do this even if foo.__mro__ exists. This is because classes have a class, too: its name is type, and it provides, among other things, the method used to calculate __mro__ attributes in the first place.)
Is bar in base.__dict__? If so:
Let x be base.__dict__['bar'].
Can we find (again, recursively, but it won't cause a problem) x.__get__?
If so, use x.__get__(foo, foo.__class__).
(Note that the function bar is, itself, an object, and the Python compiler automatically gives functions a __get__ attribute which is designed to be used this way.)
Otherwise, use x.
If we still haven't found something to use: can we find foo.__getattr__ by the preceding process? If so, use the result of foo.__getattr__('bar').
If everything failed, raise AttributeError.
bar.__get__ is not really a function - it's a "method-wrapper" - but you can imagine it being implemented vaguely like this:
# Somewhere in the Python internals
class __method_wrapper(object):
    def __init__(self, func):
        self.func = func

    def __call__(self, obj, cls):
        return lambda *args, **kwargs: self.func(obj, *args, **kwargs)
        # Except it actually returns a "bound method" object
        # that uses cls for its __repr__
        # and there is a __repr__ for the method_wrapper that I *think*
        # uses the hashcode of the underlying function, rather than of itself,
        # but I'm not sure.

# Automatically done after compiling bar
bar.__get__ = __method_wrapper(bar)
The "binding" that happens within the __get__ automatically attached to bar (called a descriptor), by the way, is more or less the reason why you have to specify self parameters explicitly for Python methods. In Javascript, this itself is magical; in Python, it is merely the process of binding things to self that is magical. ;)
And yes, you can explicitly define a __get__ method on your own objects and have it do special things when you assign such an object as a class attribute and then access it from an instance of that class. Python is extremely reflective. :) But if you want to learn how to do that, and get a really full understanding of the situation, you have a lot of reading to do. ;)
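As a small taste of that, here is a minimal custom descriptor (purely illustrative names and behavior):

class Shouted(object):
    """Descriptor that upper-cases whatever the wrapped function returns."""
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        # Called when the attribute is looked up on an instance (or the class).
        def wrapper(*args, **kwargs):
            return self.func(obj, *args, **kwargs).upper()
        return wrapper

class Greeter(object):
    @Shouted
    def greet(self, name):
        return 'hello, ' + name

print(Greeter().greet('world'))  # HELLO, WORLD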