Why put a property variable in the constructor - python

From other languages I am used to coding a class property and afterwards being able to access it without having it in the constructor, like:
class MyClass:
    def __init__(self):
        self._value = 0

    @property
    def my_property(self):
        print('I got the value:' & self._value)
In almost every example I worked through, the property variable was in the constructor as self._value, like this:
class MyClass:
    def __init__(self, value=0):
        self._value = value
To me this makes no sense, since you want to set it in the property. Could anyone explain to me what the use is of placing the value variable in the constructor?

Python objects are not struct-based (like C++ or Java), they are dict-based (like Javascript). This means that instance attributes are dynamic (you can add new attributes or delete existing ones at runtime), are not defined at the class level but at the instance level, and are defined quite simply by assigning to them. While they can technically be defined anywhere in the code (even outside the class), the convention (and good practice) is to define them (possibly with default values) in the initializer (the __init__ method - the real constructor is named __new__, but there are very few reasons to override it) to make clear which attributes an instance of a given class is supposed to have.
Note the use of the term "attribute" here - in Python, we don't talk about "member variables" or "member functions" but about "attributes" and "methods". Actually, since Python classes are objects too (instances of the type class or of a subclass of it), they have attributes too, so we have instance attributes (which are per-instance) and class attributes (which belong to the class object itself, and are shared amongst instances). A class attribute can be looked up on an instance, as long as it's not shadowed by an instance attribute of the same name.
Also, since Python functions are objects too (hint: in Python, everything - everything you can put on the RHS of an assignment, that is - is an object), there are no distinct namespaces for "data" attributes and "function" attributes, and Python's "methods" are actually functions defined on the class itself - IOW they are class attributes that happen to be instances of the function type. Since methods need to access the instance to be able to work on it, there's a special mechanism that allows an object to "customize" attribute access: if it implements the proper interface (the descriptor protocol), it can return something else than itself when it's looked up on an instance but resolved on the class. This mechanism is used by functions to turn themselves into methods (callable objects that wrap the function and instance together so you don't have to pass the instance to the function explicitly), but also, more generally, as the support for computed attributes.
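As a minimal sketch of that mechanism (both class names below are invented for illustration): an object stored as a class attribute that implements __get__ gets a chance to return something else when looked up on an instance.
class Doubled:
    # A bare-bones descriptor: only __get__ is defined, so looking up
    # `twice` on an instance returns a computed value instead of Doubled itself.
    def __get__(self, instance, owner):
        if instance is None:          # looked up on the class, not an instance
            return self
        return instance._value * 2

class Holder:
    twice = Doubled()                 # class attribute implementing the protocol
    def __init__(self, value):
        self._value = value

# Holder(3).twice -> 6 : Doubled.__get__ ran during the attribute lookup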
The property class is a generic implementation of computed attributes that wraps a getter function (and optionally a setter and a deleter) - so in Python "property" has a very specific meaning (the property class itself or an instance of it). Also, the @decorator syntax is nothing magical (and isn't specific to properties), it's just syntactic sugar, so that given a "decorator" function:
def decorator(func):
    return something
this:
@decorator
def foo():
    # code here
is just a shortcut for:
def foo():
    # code here
foo = decorator(foo)
Here I defined decorator as a function, but just about any callable object (a "callable" object is an instance of a class that defines the __call__ magic method) can be used instead - and Python classes are callables (it's actually by calling a class that you instantiate it).
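For instance, here is a small sketch of a class used as a decorator (the names are made up): instantiating it wraps the function, and calling the instance delegates to the wrapped function.
class CountCalls:
    # Decorator implemented as a class: `greet = CountCalls(greet)` below.
    def __init__(self, func):
        self.func = func
        self.count = 0
    def __call__(self, *args, **kwargs):
        self.count += 1
        return self.func(*args, **kwargs)

@CountCalls
def greet(name):
    return 'hello ' + name

# greet('world'); greet('again'); greet.count -> 2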
So back to your code:
# in py2, you want to inherit from `object` for
# descriptors and other fancy things to work.
# this is useless in py3 but doesn't break anything either...
class MyClass(object):
    # the `__init__` function will become an attribute
    # of the `MyClass` class object
    def __init__(self, value=0):
        # defines the instance attribute named `_value`
        # the leading underscore denotes an "implementation attribute"
        # - something that is not part of the class public interface
        # and should not be accessed externally (IOW a protected attribute)
        self._value = value

    # this first defines the `my_property` function, then
    # passes it to `property()`, and rebinds the `my_property` name
    # to the newly created `property` instance. The `my_property` function
    # will then become the property's getter (its `fget` instance attribute)
    # and will be called when the `my_property` name is resolved on a `MyClass` instance
    @property
    def my_property(self):
        print('I got the value: {}'.format(self._value))
        # let's at least return something
        return self._value
You may then want to inspect both the class and an instance of it:
>>> print(MyClass.__dict__)
{'__module__': 'oop', '__init__': <function MyClass.__init__ at 0x7f477fc4a158>, 'my_property': <property object at 0x7f477fc639a8>, '__dict__': <attribute '__dict__' of 'MyClass' objects>, '__weakref__': <attribute '__weakref__' of 'MyClass' objects>, '__doc__': None}
>>> print(MyClass.my_property)
<property object at 0x7f477fc639a8>
>>> print(MyClass.my_property.fget)
<function MyClass.my_property at 0x7f477fc4a1e0>
>>> m = MyClass(42)
>>> print(m.__dict__)
{'_value': 42}
>>> print(m.my_property)
I got the value: 42
42
>>>
As a conclusion: if you hope to do anything useful with a language, you have to learn that language - you cannot just expect it to work like other languages you know. While some features are based on common concepts (i.e. functions, classes etc), they can actually be implemented in a totally different way (Python's object model has almost nothing in common with Java's), so just trying to write Java (or C or C++ etc) in Python will not work (just like trying to write Python in Java FWIW).
NB: just for the sake of completeness: Python objects can actually be made "struct-based" by using __slots__ - but the aim here is not to prevent dynamically adding attributes (that's only a side effect) but to make instances of those classes "lighter" in size (which is useful when you know you're going to have thousands or more instances of them at a given time).
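A quick sketch of what that looks like (attribute names chosen arbitrarily):
class Point:
    __slots__ = ('x', 'y')    # fixed storage slots instead of a per-instance __dict__
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# p.z = 3 would raise AttributeError (the side effect mentioned above),
# and p has no __dict__, which is what makes each instance lighter.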

Because @property is not a decorator for a variable, it is a decorator for a function that allows it to behave like a property. You still need to create a class variable to use a function decorated by @property:
The @property decorator turns the voltage() method into a "getter" for a read-only attribute with the same name, and it sets the docstring for voltage to "Get the current voltage."
A property object has getter, setter, and deleter methods usable as decorators that create a copy of the property with the corresponding accessor function set to the decorated function. This is best explained with an example:
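The example the quoted documentation refers to is roughly the following (paraphrased from the Python docs):
class C:
    def __init__(self):
        self._x = None

    @property
    def x(self):
        """I'm the 'x' property."""
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    @x.deleter
    def x(self):
        del self._x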

I'm guessing you're coming from a language like C++ or Java where it is common to make attributes private and then write explicit getters and setters for them? In Python there is no such thing as private other than by convention, and there is no need to write getters and setters for a variable that you only need to read and write as-is. @property and the corresponding setter decorators can be used if you want to add additional behaviour (e.g. logging access) or you want to have pseudo-properties that you can access just like real ones; e.g. you might have a Circle class that is defined by its radius, but you could define a @property for the diameter so you can still write circle.diameter.
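That Circle idea might look like this (a sketch, not code from the original question):
class Circle:
    def __init__(self, radius):
        self.radius = radius

    @property
    def diameter(self):
        # computed on the fly from the radius
        return self.radius * 2

    @diameter.setter
    def diameter(self, value):
        # keep the radius as the single source of truth
        self.radius = value / 2

# c = Circle(3); c.diameter -> 6; c.diameter = 10 makes c.radius 5.0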
More specifically to your question: You want to have the property as an argument of the initializer if you want to set the property at the time when you create the object. You wouldn't want to create an empty object and then immediately fill it with properties as that would create a lot of noise and make the code less readable.
Just an aside: __init__ isn't actually a constructor. The constructor for Python objects is __new__, and you almost never override it.

Related

Why isn't __bases__ accessible in the class body?

Class objects have a __bases__ (and a __base__) attribute:
>>> class Foo(object):
...     pass
...
>>> Foo.__bases__
(<class 'object'>,)
Sadly, these attributes aren't accessible in the class body, which would be very convenient for accessing parent class attributes without having to hard-code the name:
class Foo:
    cls_attr = 3

class Bar(Foo):
    cls_attr = __base__.cls_attr + 2
    # throws NameError: name '__base__' is not defined
Is there a reason why __bases__ and __base__ can't be accessed in the class body?
(To be clear, I'm asking if this is a conscious design decision. I'm not asking about the implementation; I know that __bases__ is a descriptor in type and that this descriptor can't be accessed until a class object has been created. I want to know why python doesn't create __bases__ as a local variable in the class body.)
I want to know why python doesn't create __bases__ as a local variable in the class body
As you know, a class statement is mostly a shortcut for a type() call - when the runtime hits a class statement, it executes all statements at the top level of the class body, collects all resulting bindings in a dedicated namespace dict, calls the concrete metaclass (type by default) with the class name, the base classes and the namespace dict, and binds the resulting class object to the class name in the enclosing scope (usually but not necessarily the module's top-level namespace).
The important point here is that it's the metaclass's responsibility to build the class object, and to allow for class object creation customisations, the metaclass must be free to do whatever it wants with its arguments. Most often a custom metaclass will mainly work on the attrs dict, but it must also be able to mess with the bases argument. Now since the metaclass is only invoked AFTER the class body statements have been executed, there's no way the runtime can reliably expose the bases in the class body scope, since those bases could be modified afterward by the metaclass.
There are also some more philosophical considerations here, notably wrt/ explicit vs implicit, and as shx2 mentions, Python designers try to avoid magic variables popping out of the blue. There are indeed a couple of implementation variables (__module__ and, in py3, __qualname__) that are "automagically" defined in the class body namespace, but those are just names, mostly intended as additional debugging / inspection information for developers, and they have absolutely no impact on the class object creation nor on its properties, behaviour and whatnot.
As always with Python, you have to consider the whole context (the execution model, the object model, how the different parts work together etc) to really understand the design choices. Whether you agree with the whole design and philosophy is another debate (and one that doesn't belong here), but you can be sure that yes, those choices are "conscious design decisions".
I am not answering as to why it was decided to be implemented the way it was, I'm answering why it wasn't implemented as a "local variable in the class body":
Simply because nothing in python is a local variable magically defined in the class body. Python doesn't like names magically appearing out of nowhere.
It's because it simply is not yet created.
Consider the following:
>>> class baselessmeta(type):
...     def __new__(metaclass, class_name, bases, classdict):
...         return type.__new__(
...             metaclass,
...             class_name,
...             (),  # I can just ignore all the bases
...             {}
...         )
...
>>> class Baseless(int, metaclass=baselessmeta):
...     # imaginary print(__bases__, __base__)
...     ...
...
>>> Baseless.__bases__
(<class 'object'>,)
>>> Baseless.__base__
<class 'object'>
>>>
What should the imaginary print result in?
Every python class is created via the type metaclass one way or another.
You pass int in the bases argument to type(), yet you do not know what the resulting bases are going to be: your metaclass may use them directly as bases, or it may return a class with entirely different bases in its own code, as shown above.
Just realized your to be clear part and now my answer is useless haha. Oh welp.

Properties seem to be set to the same value for all objects (Python) [duplicate]

What is the difference between class and instance variables in Python?
class Complex:
    a = 1
and
class Complex:
    def __init__(self):
        self.a = 1
Using the call: x = Complex().a in both cases assigns x to 1.
A more in-depth answer about __init__() and self will be appreciated.
When you write a class block, you create class attributes (or class variables). All the names you assign in the class block, including methods you define with def become class attributes.
After a class instance is created, anything with a reference to the instance can create instance attributes on it. Inside methods, the "current" instance is almost always bound to the name self, which is why you are thinking of these as "self variables". Usually in object-oriented design, the code attached to a class is supposed to have control over the attributes of instances of that class, so almost all instance attribute assignment is done inside methods, using the reference to the instance received in the self parameter of the method.
Class attributes are often compared to static variables (or methods) as found in languages like Java, C#, or C++. However, if you want to aim for deeper understanding I would avoid thinking of class attributes as "the same" as static variables. While they are often used for the same purposes, the underlying concept is quite different. More on this in the "advanced" section below the line.
An example!
class SomeClass:
    def __init__(self):
        self.foo = 'I am an instance attribute called foo'
        self.foo_list = []

    bar = 'I am a class attribute called bar'
    bar_list = []
After executing this block, there is a class SomeClass, with 3 class attributes: __init__, bar, and bar_list.
Then we'll create an instance:
instance = SomeClass()
When this happens, SomeClass's __init__ method is executed, receiving the new instance in its self parameter. This method creates two instance attributes: foo and foo_list. Then this instance is assigned into the instance variable, so it's bound to a thing with those two instance attributes: foo and foo_list.
But:
print instance.bar
gives:
I am a class attribute called bar
How did this happen? When we try to retrieve an attribute through the dot syntax, and the attribute doesn't exist, Python goes through a bunch of steps to try and fulfill your request anyway. The next thing it will try is to look at the class attributes of the class of your instance. In this case, it found an attribute bar in SomeClass, so it returned that.
That's also how method calls work by the way. When you call mylist.append(5), for example, mylist doesn't have an attribute named append. But the class of mylist does, and it's bound to a method object. That method object is returned by the mylist.append bit, and then the (5) bit calls the method with the argument 5.
The way this is useful is that all instances of SomeClass will have access to the same bar attribute. We could create a million instances, but we only need to store that one string in memory, because they can all find it.
But you have to be a bit careful. Have a look at the following operations:
sc1 = SomeClass()
sc1.foo_list.append(1)
sc1.bar_list.append(2)
sc2 = SomeClass()
sc2.foo_list.append(10)
sc2.bar_list.append(20)
print sc1.foo_list
print sc1.bar_list
print sc2.foo_list
print sc2.bar_list
What do you think this prints?
[1]
[2, 20]
[10]
[2, 20]
This is because each instance has its own copy of foo_list, so they were appended to separately. But all instances share access to the same bar_list. So when we did sc1.bar_list.append(2) it affected sc2, even though sc2 didn't exist yet! And likewise sc2.bar_list.append(20) affected the bar_list retrieved through sc1. This is often not what you want.
Advanced study follows. :)
To really grok Python, coming from traditional statically typed OO-languages like Java and C#, you have to learn to rethink classes a little bit.
In Java, a class isn't really a thing in its own right. When you write a class you're more declaring a bunch of things that all instances of that class have in common. At runtime, there's only instances (and static methods/variables, but those are really just global variables and functions in a namespace associated with a class, nothing to do with OO really). Classes are the way you write down in your source code what the instances will be like at runtime; they only "exist" in your source code, not in the running program.
In Python, a class is nothing special. It's an object just like anything else. So "class attributes" are in fact exactly the same thing as "instance attributes"; in reality there's just "attributes". The only reason for drawing a distinction is that we tend to use objects which are classes differently from objects which are not classes. The underlying machinery is all the same. This is why I say it would be a mistake to think of class attributes as static variables from other languages.
But the thing that really makes Python classes different from Java-style classes is that just like any other object each class is an instance of some class!
In Python, most classes are instances of a builtin class called type. It is this class that controls the common behaviour of classes, and makes all the OO stuff the way it does. The default OO way of having instances of classes that have their own attributes, and have common methods/attributes defined by their class, is just a protocol in Python. You can change most aspects of it if you want. If you've ever heard of using a metaclass, all that is is defining a class that is an instance of a different class than type.
The only really "special" thing about classes (aside from all the builtin machinery to make them work the way they do by default) is the class block syntax, to make it easier for you to create instances of type. This:
class Foo(BaseFoo):
    def __init__(self, foo):
        self.foo = foo

    z = 28
is roughly equivalent to the following:
def __init__(self, foo):
    self.foo = foo

classdict = {'__init__': __init__, 'z': 28}
Foo = type('Foo', (BaseFoo,), classdict)
And it will arrange for all the contents of classdict to become attributes of the object that gets created.
So then it becomes almost trivial to see that you can access a class attribute by Class.attribute just as easily as i = Class(); i.attribute. Both i and Class are objects, and objects have attributes. This also makes it easy to understand how you can modify a class after it's been created; just assign its attributes the same way you would with any other object!
In fact, instances have no particular special relationship with the class used to create them. The way Python knows which class to search for attributes that aren't found in the instance is by the hidden __class__ attribute. Which you can read to find out what class this is an instance of, just as with any other attribute: c = some_instance.__class__. Now you have a variable c bound to a class, even though it probably doesn't have the same name as the class. You can use this to access class attributes, or even call it to create more instances of it (even though you don't know what class it is!).
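A small illustrative session (the class name here is invented):
>>> class Greeter:
...     def hello(self):
...         return 'hi'
...
>>> g = Greeter()
>>> c = g.__class__          # read the hidden attribute: the class object itself
>>> c is Greeter
True
>>> c().hello()              # call it to create another instance, without naming Greeter
'hi'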
And you can even assign to i.__class__ to change what class it is an instance of! If you do this, nothing in particular happens immediately. It's not earth-shattering. All that it means is that when you look up attributes that don't exist in the instance, Python will go look at the new contents of __class__. Since that includes most methods, and methods usually expect the instance they're operating on to be in certain states, this usually results in errors if you do it at random, and it's very confusing, but it can be done. If you're very careful, the thing you store in __class__ doesn't even have to be a class object; all Python's going to do with it is look up attributes under certain circumstances, so all you need is an object that has the right kind of attributes (some caveats aside where Python does get picky about things being classes or instances of a particular class).
That's probably enough for now. Hopefully (if you've even read this far) I haven't confused you too much. Python is neat when you learn how it works. :)
What you're calling an "instance" variable isn't actually an instance variable; it's a class variable. See the language reference about classes.
In your example, the a appears to be an instance variable because it is immutable. Its nature as a class variable can be seen when you assign a mutable object:
>>> class Complex:
...     a = []
...
>>> b = Complex()
>>> c = Complex()
>>>
>>> # What do they look like?
>>> b.a
[]
>>> c.a
[]
>>>
>>> # Change b...
>>> b.a.append('Hello')
>>> b.a
['Hello']
>>> # What does c look like?
>>> c.a
['Hello']
If you used self, then it would be a true instance variable, and thus each instance would have its own unique a. An object's __init__ function is called when a new instance is created, and self is a reference to that instance.
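For comparison, the self-based version of the same session would behave like this (a sketch reusing the names above):
>>> class Complex:
...     def __init__(self):
...         self.a = []      # a new list per instance
...
>>> b = Complex()
>>> c = Complex()
>>> b.a.append('Hello')
>>> b.a
['Hello']
>>> c.a                      # unaffected: c has its own list
[]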

How is the __class__ cell value set in class methods?

Looking at the documentation of the super type in Python 3.5, it notes that super(…) is the same as super(__class__, «first argument to function»). To my surprise, I wrote a method that returned __class__ – and it actually worked:
>>> class c:
...     def meth(self): return __class__
...
>>> c().meth()
<class '__main__.c'>
Apparently, __class__ is a free variable assigned by the closure of the function:
>>> c.meth.__code__.co_freevars
('__class__',)
>>> c.meth.__closure__
(<cell at 0x7f6346a91048: type object at 0x55823b17f3a8>,)
I'd like to know under what circumstances that free variable is associated in the closure. I know that if I assign a function to a variable as part of creating a class it doesn't happen.
>>> def meth2(self): return __class__
...
>>> meth2.__code__.co_freevars
()
Even if I create a new class and as part of that creation assign some attribute to meth2, meth2 doesn't somehow magically gain a free variable that gets filled in.
That's unsurprising, because part of this appears to depend on the lexical state of the compiler at the time that the code is compiled.
I'd like to confirm that the conditions necessary for __class__ to be treated as a free variable are simply:
A reference to __class__ in the code block; and
The def containing the __class__ reference is lexically within a class declaration block.
I'd further like to understand what the conditions necessary for that variable getting filled in correctly are. It appears – at least from the Python 3.6 documentation – that something like type.__new__(…) is involved somehow. I haven't been able to understand for sure how type comes into play and how this all interacts with metaclasses that do not ultimately call type.__new__(…).
I'm particularly confused because I didn't think that at the time the namespace's __setattr__ method was used to assign the attribute containing the method to the method function (as it exists on the ultimately-constructed class object). I know that this namespace object exists because it was either constructed implicitly by the use of the class statement, or explicitly by the metaclass's __prepare__ method – but as best I can tell, the metaclass constructs the class object that populates __class__ after the function object is set as a value within the class namespace.
In the docs for Python’s data model, § 3.3.3.6 – “Creating the class object” – you will find the following:
[The] class object is the one that will be referenced by the zero-argument form of super(). __class__ is an implicit closure reference created by the compiler if any methods in a class body refer to either __class__ or super. This allows the zero argument form of super() to correctly identify the class being defined based on lexical scoping, while the class or instance that was used to make the current call is identified based on the first argument passed to the method.
…emphasis is mine. This confirms your two putative criteria for a __class__ closure happening: a “__class__” reference in the method def, which itself is defined inside a class statement.
But then, the next ¶ in “Creating the class object” goes on to say:
CPython implementation detail: In CPython 3.6 and later, the __class__ cell is passed to the metaclass as a __classcell__ entry in the class namespace. If present, this must be propagated up to the type.__new__ call in order for the class to be initialized correctly. Failing to do so will result in a RuntimeError in Python 3.8.
… emphasis is theirs. This means that if you are employing a metaclass with a __new__ method – in order to dictate the terms by which classes so designated are created – for example like e.g.:
class Meta(type):

    def __new__(metacls, name, bases, attributes, **kwargs):
        # Or whatever:
        if '__slots__' not in attributes:
            attributes['__slots__'] = tuple()
        # Call up, creating and returning the new class:
        return super().__new__(metacls, name,
                                        bases,
                                        attributes,
                                        **kwargs)
… that last super(…).__new__(…) call is effectively calling type.__new__(…). In real life, there might be some other ancestral “__new__(…)” methods that get called between here and there, if your metaclass inherits from other metaclasses (like, e.g. abc.ABCMeta). Effectively, though, inside your Meta.__new__(…) method, between the method entry point, the super(…).__new__(…) call, and return-ing the new class object, you can inspect or set the value of the eventual __class__ cell variable through attributes['__classcell__']†.
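For example, a hedged sketch of peeking at that entry inside such a metaclass (all names invented; the print is purely illustrative):
class InspectingMeta(type):
    def __new__(metacls, name, bases, attributes, **kwargs):
        # The compiler only adds '__classcell__' when some method in the
        # class body refers to __class__ or uses zero-argument super().
        cell = attributes.get('__classcell__')
        if cell is not None:
            print(name, 'carries a __classcell__ entry:', cell)
        # The entry must still reach type.__new__ (via super()) for the
        # class to be initialized correctly.
        return super().__new__(metacls, name, bases, attributes, **kwargs)

class Widget(metaclass=InspectingMeta):
    def which(self):
        return __class__         # forces the compiler to create the cell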
Now as for whether this is at all useful: I don’t know. I have been programming in python for ten years; I totally use metaclasses‡, like, absolutely all the time (for better or for worse); and in the course of doing so I have never done any of the following things:
reassigned a __class__ attribute;
inspected the __class__ cell variable of anything; nor
messed around with this supposed __classcell__ namespace entry, in like any capacity
… Naturally, your programming experience will be different from mine; who knows what one does. It is not that any one of those aforementioned stratagems is de facto problematic, necessarily. But I am no stranger to bending Python’s type systems and metaprogramming facilities to my whim, and these particular things have never presented themselves as particularly useful, especially once you are working within the general context of metaclasses and what they do.
By which I suppose I mean, tl;dr: you are on the cusp of figuring out the basics of metaclasses and what they can do – do press on and experiment, but do investigate the topic with depth as well as breadth. Indeed!
† – In reading through code examples of this sort, you’ll often find what my snippet here calls the attributes dictionary referred to as namespace or ns, or similar. It’s all the same stuff.
‡ – …and ABCs and mixins and class decorators and __init_subclass__(…) and the abuse of __mro_entries__(…) for personal gain; et cetera, ad nauseam

Substitute a mock object for the metaclass of a class

How can I override the metaclass of a Python class, with a unittest.mock.MagicMock instance instead?
I have a function whose job involves working with the metaclass of an argument:
# lorem.py
class Foo(object):
    pass

def quux(existing_class):
    …
    metaclass = type(existing_class)
    new_class = metaclass(…)
The unit tests for this function will need to assert that the calls to the metaclass go as expected, without actually calling a real class object.
Note: The test case does not care about the metaclass's behaviour; it cares that quux retrieves that metaclass (using type(existing_class)) and calls the metaclass with the correct arguments.
So to write a unit test for this function, I want to pass a class object whose metaclass is a mock object instead. This will allow, for example, making assertions about how the metaclass was called, and ensuring no unwanted side effects.
# test_lorem.py
import unittest
import unittest.mock

import lorem

class stub_metaclass(type):
    def __new__(metaclass, name, bases, namespace):
        return super().__new__(metaclass, name, bases, namespace)

class quux_TestCase(unittest.TestCase):

    @unittest.mock.patch.object(
            lorem.Foo, '__class__', side_effect=stub_metaclass)
    def test_calls_expected_metaclass_with_class_name(
            self,
            mock_foo_metaclass,
    ):
        expected_name = 'Foo'
        expected_bases = …
        expected_namespace = …

        lorem.quux(lorem.Foo)

        mock_foo_metaclass.assert_called_with(
            expected_name, expected_bases, expected_namespace)
When I try to mock the __class__ attribute of an existing class, though, I get this error:
File "/usr/lib/python3/dist-packages/mock/mock.py", line 1500, in start
result = self.__enter__()
File "/usr/lib/python3/dist-packages/mock/mock.py", line 1460, in __enter__
setattr(self.target, self.attribute, new_attr)
TypeError: __class__ must be set to a class, not 'MagicMock' object
This is telling me that unittest.mock.patch is attempting to set the __class__ attribute temporarily to a MagicMock instance, as I want; but Python is refusing that with a TypeError.
But placing a mock object as the metaclass is exactly what I'm trying to do: put a unittest.mock.MagicMock instance in the __class__ attribute in order that the mock object will do all that it does: record calls, pretend valid behaviour, etc.
How can I set a mock object in place of the Foo class's __class__ attribute, in order to instrument Foo and test that my code uses Foo's metaclass correctly?
You can't do exactly what you want. As you can see, an object's __class__ attribute is very special in Python, and even for ordinary instances there are checks at runtime to verify it is assigned to a proper type.
When you get down to a class's __class__, that is even more strict.
Possible approach:
One thing to do here is not to pass a class to your test, but an object that is an instance of a crafted ordinary class, which will have an artificial __class__ attribute. Even then, you will have to change your code from calling type(existing_class) to reading existing_class.__class__ directly. For an instance object to "falsify" its __class__ anyway, you have to implement __class__ as a property on its class (or override __getattribute__): the class itself will report its true metaclass, but an instance can return whatever is coded on the __class__ property.
class Foo:
    @property
    def __class__(self):
        return stub_metaclass
Actual suggestion:
But then, since you are at it, maybe the simplest thing is to mock type instead on the target module where quux is defined.
class MockType:
    def __init__(self):
        self.mock = mock.Mock()
    def __call__(self, *args):
        return self.mock

...

class ...:
    ...
    def test_calls_expected_metaclass_with_class_name(
            self,
    ):
        try:
            new_type = MockType()
            # This creates "type" on the module "lorem" namespace
            # as a global variable. It will then override the built-in "type"
            lorem.type = new_type
            lorem.quux(lorem.Foo)
        finally:
            del lorem.type  # un-shadows the built-in type on the module
        new_type.mock.assert_called_with(
            'Foo', unittest.mock.ANY, unittest.mock.ANY)
Still another approach
Another thing that can be done is to craft a full "MockMetaclass" in the "old fashion": without unittest.mock.MagicMock at all; instead, with instrumented __new__ and other relevant methods that record the call parameters and function as a true metaclass for a class you pass in as a parameter.
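A rough sketch of that hand-rolled approach (all names here are invented; the recorded tuple layout is just one possible choice):
class RecordingMeta(type):
    # A real metaclass that doubles as a test double: it records how it is called.
    calls = []

    def __new__(metacls, name, bases, namespace, **kwargs):
        RecordingMeta.calls.append((name, bases, dict(namespace)))
        return super().__new__(metacls, name, bases, namespace, **kwargs)

class Fake(metaclass=RecordingMeta):
    pass

# In a test: RecordingMeta.calls.clear(); lorem.quux(Fake)
# type(Fake) is RecordingMeta, so quux's metaclass call lands in __new__
# above, and the arguments can then be asserted against RecordingMeta.calls.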
Considerations on what is being done
People reaching here, please note that one should not test the class creation (and metaclass) mechanisms themselves. One can just assume the Python runtime has these working and tested already.

Python memory allocation, when using bound, static or class functions?

I am curious about this: what actually happens to the Python objects once you create a class that contains each one of these functions?
Looking at some examples, I see that either the bound, static or class function is in fact creating a class object, which is the one that contains all 3 functions.
Is this always true, no matter which function I call? And is the parent class (object in this case, but it can be anything I think) always called, since the constructor in my class is invoking it implicitly?
class myclass(object):
    a = 1
    b = True

    def myfunct(self, b):
        return (self.a + b)

    @staticmethod
    def staticfunct(b):
        print b

    @classmethod
    def classfunct(cls, b):
        cls.a = b
Since it was not clear: what is the lifecycle for this object class, when I use it as follows?
from mymodule import myclass
class1 = myclass()
class1.staticfunct(4)
class1.classfunct(3)
class1.myfunct
In the case of the static method, the myclass object gets allocated, and then the function is run, but the class and bound methods are not generated?
In the case of the class function, is it the same as above?
In the case of the bound function, is everything in the class allocated?
The class statement creates the class. That is an object which has all three functions, but the first (myfunct) is unbound and cannot be called without an instance object of this class.
The instances of this class (in case you create them) will have bound versions of this function and references to the static and the class functions.
So, both the class and the instances have all three functions.
None of these functions create a class object, though. That is done by the class statement. (To be precise: When the interpreter completes the class creation, i. e. the class does not yet exist when the functions inside it are created; mind boggling, but seldom necessary to know.)
If you do not override the __init__() function, it will be inherited and called for each created instance, yes.
Since it was not clear: what is the lifecycle for this object class, when I use it as follows?
from mymodule import myclass
This will create the class, and code for all functions. They will be classmethod, staticmethod, and method (which you can see by using type() on them)
class1 = myclass()
This will create an instance of the class, which has a dictionary and a lot of other stuff. It doesn't do anything to your methods though.
class1.staticfunct(4)
This calls your staticfunct.
class1.classfunct(3)
This calls your classfunct.
class1.myfunct
This will create a new object that is a bound myfunct method of class1. It is often useful to bind this to a variable if you are going to be calling it over and over. But this bound method has normal lifetime.
Here is an example you might find illustrative:
>>> class foo(object):
...     def bar(self):
...         pass
...
>>> x = foo()
>>> x.bar is x.bar
False
Every time you access x.bar, it creates a new bound method object.
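So, as suggested above, if you are going to call the same method over and over, binding it once avoids re-creating the bound-method object each time - a small sketch with an invented list:
>>> mylist = []
>>> append = mylist.append      # one bound-method object, created once
>>> for i in range(3):
...     append(i)               # reuses that same object on every iteration
...
>>> mylist
[0, 1, 2]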
And another example showing class methods:
>>> class foo(object):
...     @classmethod
...     def bar(cls):
...         pass
...
>>> foo.bar
<bound method type.bar of <class '__main__.foo'>>
Your class myclass actually has four methods that are important: the three you explicitly coded and the constructor, __init__ which is inherited from object. Only the constructor creates a new instance. So in your code one instance is created, which you have named class1 (a poor choice of name).
myfunct creates a new integer by adding class1.a to 4. The lifecycle of class1 is not affected, nor are the variables class1.a, class1.b, myclass.a or myclass.b.
staticfunct just prints something, and the attributes of myclass and class1 are irrelevant.
classfunct modifies the variable myclass.a. It has no effect on the lifecycle or state of class1.
The variable myclass.b is never used or accessed at all; the variables named b in the individual functions refer to the values passed in the function's arguments.
Additional info added based on the OP's comments:
Except for the basic data types (int, chars, floats, etc) everything in Python is an object. That includes the class itself (a class object), every method (a method object) and every instance you create. Once created each object remains alive until every reference to it disappears; then it is garbage-collected.
So in your example, when the interpreter reaches the end of the class statement body an object named "myclass" exists, and additional objects exist for each of its members (myclass.a, myclass.b, myclass.myfunct, myclass.staticfunct etc.) There is also some overhead for each object; most objects have a member named __dict__ and a few others. When you instantiate an instance of myclass, named "class1", another new object is created. But there are no new method objects created, and no instance variables since you don't have any of those. class1.a is a pseudonym for myclass.a and similarly for the methods.
If you want to get rid of an object, i.e., have it garbage-collected, you need to eliminate all references to it. In the case of global variables you can use the "del" statement for this purpose:
A = myclass()
del A
Will create a new instance and immediately delete it, releasing its resources for garbage collection. Of course you then cannot subsequently use the object, for example print(A) will now give you an exception.
