What is this Python concept called? Parent class using child-only attributes.

I am writing some code that is an upside-down triangle of inheritance. I have a base Linux class with a CLIENT attribute that holds a connection. I have several APIs that are logically separated (kvm, yum, gdb, dhcp, etc.) and that use CLIENT. I only want the user to create a single instance of the Linux class but still be able to call all the methods from the parent classes, while keeping the nice logical code separation among the parents:
class Linux(
        SSHClient,
        yum.Yum,
        kvm.Kvm,
        ldap.Ldap,
        lshw.Lshw,
        packet_capture.Tcpdump,
        tc.TrafficControl,
        networking.Networking,
        gdb.Gdb,
        dhcp.Dhcp,
        httputil.Http,
        scp.Scp,
        fileutils.FileUtils):
    pass  # (class body omitted in the question)
I made a little example:
class Dad(object):
    def __init__(self):
        raise NotImplementedError("Create Baby instead")
    def dadCallBaby(self):
        print('sup {}'.format(self.babyName))

class Mom(object):
    def __init__(self):
        raise NotImplementedError("Create Baby instead")
    def momCallBaby(self):
        print('goochi goo {}'.format(self.babyName))

class Baby(Mom, Dad):
    def __init__(self, name):
        self.babyName = name
    def greeting(self):
        self.momCallBaby()
        self.dadCallBaby()

x = Baby('Joe')
x.greeting()
What is doing this called? Is this Duck Typing? And is there a better option?

There's really no such thing as "child-only attributes".
The attribute babyName is just stored in each object's namespace and looked up there. Python doesn't care that it happened to be stored by Baby.__init__. In fact, you can store the same attribute on a Mom that isn't a Baby and it will work the same way:
class NotABaby(Mom):
    def __init__(self):
        pass

mom = NotABaby()
mom.babyName = 'Me?'
mom.momCallBaby()
Also, it's hard to suggest a better way to do what you're doing, because what you're doing is inherently confusing and probably shouldn't be done.
Inheritance normally means subtyping—that is, Baby should only be a subclass of Mom if every Baby instance is usable as a Mom.1
But a baby is not a mom and a dad.2 A baby has a mom and a dad. And the way to represent that is by giving Baby attributes for its mom and dad:
class Baby(object):
    def __init__(self, mom, dad, name):
        self.mom, self.dad, self.name = mom, dad, name
    def greeting(self):
        self.mom.momCallBaby(self.name)
        self.dad.dadCallBaby(self.name)
Notice that, e.g., this means that the same woman can be the mom of two babies. Since that's also true of the real-life thing you're modeling here, that's a sign that you're modeling things correctly.
Your "real" example is a little less clear, but I suspect the same thing is going on there.
The only reason you want to use inheritance, as far as I can tell, is:
I only want the user to need to create a single instance of Linux class
You don't need, or want, inheritance for that:
class Linux(object):
    def __init__(self):
        self.ssh_client = SSHClient()
        self.yum = yum.Yum()
        # etc.
… but be able to call all the methods from the Parent classes
If yum.Yum, ldap.Ldap, and dhcp.Dhcp all have methods named lookup, which one would be called by Linux.lookup?
What you probably want is to just leave the attributes as public attributes, and use them explicitly:
system = Linux()
print(system.yum.lookup(package))
print(system.ldap.lookup(name))
print(system.dhcp.lookup(reservation))
Or you'll want to provide a "Linux API" that wraps all the underlying APIs:
    def lookup_package(self, package):
        return self.yum.lookup(package)
    def lookup_ldap_name(self, name):
        return self.ldap.lookup(name)
    def lookup_reservation(self, reservation):
        return self.dhcp.lookup(reservation)
If you really do want to just forward every method of all of your different components, and you're sure that none of them conflict with each other, and there are way too many to write out manually, you can always do it programmatically, by iterating all of the classes, iterating inspect.getmembers of each one, filtering out the ones that start with _ or aren't unbound methods, creating a proxy function, and setattr-ing it onto Linux.
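For example, here is a minimal sketch of that setattr approach, assuming the composition-based Linux above; the _components mapping, the component attribute names, and _make_proxy are all invented for illustration:
import inspect

def _make_proxy(attr_name, method_name):
    # build a forwarding method that calls the same-named method on a component
    def proxy(self, *args, **kwargs):
        return getattr(getattr(self, attr_name), method_name)(*args, **kwargs)
    proxy.__name__ = method_name
    return proxy

# hypothetical mapping from Linux attribute names to component classes
_components = {'yum': yum.Yum, 'dhcp': dhcp.Dhcp, 'ldap': ldap.Ldap}

for attr_name, cls in _components.items():
    for method_name, member in inspect.getmembers(cls, inspect.isroutine):
        if method_name.startswith('_') or hasattr(Linux, method_name):
            continue  # skip private names and anything that would conflict
        setattr(Linux, method_name, _make_proxy(attr_name, method_name))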
Or, alternatively (probably not as good an idea in this case, but very commonly useful in cases that aren't that different), you can proxy dynamically, at method lookup time, by implementing a __getattr__ method (and, often, a __dir__ method).
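A rough sketch of that kind of dynamic proxying, again assuming the components are stored on the instance (the component classes here are just stand-ins):
class Linux(object):
    def __init__(self):
        # hypothetical component instances; substitute your real API objects
        self._components = [yum.Yum(), dhcp.Dhcp(), ldap.Ldap()]

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        for component in self.__dict__.get('_components', ()):
            if hasattr(component, name):
                return getattr(component, name)
        raise AttributeError(name)

    def __dir__(self):
        # advertise the forwarded names to dir() and tab completion
        names = set(dir(type(self))) | set(self.__dict__)
        for component in self.__dict__.get('_components', ()):
            names.update(n for n in dir(component) if not n.startswith('_'))
        return sorted(names)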
I think one of these two kinds of proxying may be what you're really after here.
1. There are some cases where you want to inherit for reasons other than subtyping. For example, you inherit a mixin class to get implementations for a bunch of methods. The question of whether your class is usable wherever that mixin's instances are usable doesn't really make sense, because the mixin isn't usable anywhere (except as a base class). But the subtyping is still the standard that you're bending there.
2. If it is, call Child Protective Services. And also call Professor X, because that shouldn't be physically possible.

Related

Design pattern to allow attaching one class to another

I am designing an RPG and would like to have the ability to attach classes to each other. What I'm looking to do is have, say, an Item class. The weapon class would inherit from it, and a sword would be an instance of the weapon class. I want to then be able to attach properties to the sword; these properties would be other classes. For example, I could attach the container class to it and the sword (only that instance of the sword) would become a container. I could also attach something like an enchantment to that sword.
For a bonus it would be nice to be able to combine instances as well. So instead of having to have a fire_enchantment class I could just make it an instance of Enchantment and attach it to the sword instance.
I've googled around and haven't been able to find a design pattern that fits this. I recall studying one but can't remember what it was called (it was a few years ago).
I'm at a loss as to which design pattern allows this: combining multiple classes dynamically.
I think you seem to understand the idea of inheritance in python (e.g. class Subclass(Superclass): ) so I won't cover that here.
The classes you want to 'attach' can be treated as any other variable within the Weapon class.
class Enchantment(object):
    def __init__(self, name, type):
        self.name = name
        self.type = type
        # can define more member variables here, and set with setter methods
    # more Enchantment methods here...

class Weapon(object):
    def __init__(self, name, type):
        self.name = name
        self.type = type
        self.enchantments = []
        # more Weapon member variables here
    def add_enchantment(self, enchantment):
        # any logic you need to check when adding an enchantment
        self.enchantments.append(enchantment)
Then in wherever your game code is running you could do
sword = Weapon('My sword', 'sword')
fire_enchantment = Enchantment('Fireball', 'fire')
sword.add_enchantment(fire_enchantment)
You can then add methods on the Weapon class to do things with the enchantments/add certain logic.
The enchantment is still an instance of an object, so if you access it in the list (maybe by identifying it by its name, or looping through the list) all its methods and variables are accessible. You just need to build an interface to it via the Weapon class e.g. get_enchantment(self, name), or have other methods in the Weapon class interact with it (e.g. when you attack you might loop through the enchantments and see if they add any extra damage).
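For instance, a couple of possible interface methods you might add to the Weapon class above (total_bonus_damage and the bonus_damage attribute are made up for illustration):
    def get_enchantment(self, name):
        # return the first enchantment whose name matches, or None
        for enchantment in self.enchantments:
            if enchantment.name == name:
                return enchantment
        return None

    def total_bonus_damage(self):
        # hypothetical: sum up a bonus_damage attribute enchantments may carry
        return sum(getattr(e, 'bonus_damage', 0) for e in self.enchantments)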
There are obviously design considerations about how you design your classes (the above was thrown together as an example and doesn't include inheritance). For example, you might only allow one enchantment per weapon, in which case you shouldn't use a list in the weapon object, but could just set self.enchantment = None in the constructor and set self.enchantment = enchantment in the add_enchantment method.
The point I'm making is you can treat instances of Enchantment or other 'attachable' classes as variables. Just make sure you create an instance of the class e.g. fire_enchantment = Enchantment('Fireball', 'fire').
There's plenty of reading out there in terms of inheritance and OOP in general. Hope this helps!
Additional answer from OP
I think the Mixin pattern is what I was looking for. After digging around more I found this post, which has an answer for dynamic mixins.
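One common approach to a dynamic mixin is to swap an instance's class for a freshly built subclass at runtime; this is only a sketch of the general idea (ContainerMixin and attach_mixin are invented names, not taken from the linked post):
class ContainerMixin(object):
    # hypothetical mixin: anything carrying it can hold items
    def store(self, item):
        self.contents = getattr(self, 'contents', [])
        self.contents.append(item)

def attach_mixin(instance, mixin):
    # replace the instance's class with a new subclass that also inherits the mixin
    base = instance.__class__
    instance.__class__ = type(base.__name__ + 'With' + mixin.__name__,
                              (mixin, base), {})

sword = Weapon('My sword', 'sword')
attach_mixin(sword, ContainerMixin)
sword.store('gem')   # only this particular sword instance became a container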

Python 2.7: infinite loop when super __init__ creates an instance of its own subclass

I have the sense that this must be kind of a dumb question—nub here. So I'm open to an answer of the sort "This is ass-backwards, don't do it, please try this: [proper way]".
I'm using Python 2.7.5.
General Form of the Problem
This causes an infinite loop unless Thesaurus (an app-wide singleton) skips the call to Baseclass.__init__():
class Baseclass():
    def __init__(self):
        thes = Thesaurus()
        # do stuff

class Thesaurus(Baseclass):
    def __init__(self):
        Baseclass.__init__(self)
        # do stuff
My Specific Case
I have a base class that virtually every other class in my app extends (just some basic conventions for functionality within the app; perhaps should just be an interface). This base class is meant to house a singleton of a Thesaurus class that grants some flexibility with user input by inferring some synonyms (ie. {'yes':'yep', 'ok'}).
But since the subclass calls the superclass's __init__(), which in turn creates another instance of the subclass, loops ensue. Not calling the superclass's __init__() works just fine, but I'm concerned that's merely a lucky coincidence, and that my Thesaurus class may eventually be modified to require its parent's __init__().
Advice?
Well, rather than looking at your code, I'll just base my answer on what you say:
I have a base class that virtually every other class in my app extends (just some basic conventions for functionality within the app; perhaps should just be an interface).
this would be ThesaurusBase in the code below
This base class is meant to house a singleton of a Thesaurus class that grants some flexibility with user input by inferring some synonyms (ie. {'yes':'yep', 'ok'}).
That would be ThesaurusSingleton, which you can give a better name and make actually useful.
class ThesaurusBase():
    def __init__(self, singleton=None):
        self.singleton = singleton
    def mymethod1(self):
        raise NotImplementedError
    def mymethod2(self):
        raise NotImplementedError

class ThesaurusSingleton(ThesaurusBase):
    def mymethod1(self):
        return "meaw!"

class Thesaurus(ThesaurusBase):
    def __init__(self, singleton=None):
        ThesaurusBase.__init__(self, singleton)
    def mymethod1(self):
        return "quack!"
    def mymethod2(self):
        return "\\_o<"
now you can create your objects as follows:
singleton = ThesaurusSingleton()
thesaurus = Thesaurus(singleton)
edit:
Basically, what I've done here is build a "Base" class that is just an interface defining the expected behavior for all its child classes. The class ThesaurusSingleton (I know, that's a terrible name) also implements that interface, because you said it had to, and I did not want to discuss your design; you may always have good reasons for weird constraints.
And finally, do you really need to instantiate your singleton inside the class that is defining the singleton object? Though there may be some hackish way to do so, there's often a better design that avoids the "hackish" part.
What I think is that however you create your singleton, you'd better do it explicitly. That's in the "Zen of Python": explicit is better than implicit. Why? Because then people reading your code (and that might be you in six months) will be able to understand what's happening and what you were thinking when you wrote that code. If you try to make things more implicit (like using sophisticated metaclasses and weird self-inheritance), you may wonder what that code does in less than three weeks!
I'm not telling you to avoid those kinds of options, but to use sophisticated stuff only when you're out of simple ones!
Based on what you said, I think the solution I gave can be a starting point. But as you focus on some obscure, not very useful hackish stuff instead of talking about your design, I can't be sure whether my example is appropriate, or hint you further on the design.
edit2:
There's an another way to achieve what you say you want (but be sure that's really the design you want). You may want to use a class method that will act on the class itself (instead of the instances) and thus enable you to store a class-wide instance of itself:
>>> class ThesaurusBase:
...     @classmethod
...     def initClassWide(cls):
...         cls._shared = cls()
...
>>> class T(ThesaurusBase):
...     def foo(self):
...         print self._shared
...
>>> ThesaurusBase.initClassWide()
>>> t = T()
>>> t.foo()
<__main__.ThesaurusBase instance at 0x7ff299a7def0>
You can call the initClassWide method at module level where you declare ThesaurusBase, so whenever you import that module the singleton will already be loaded (the import mechanism ensures that Python modules are run only once).
The short answer is:
do not instantiate an instance of a subclass from the superclass's constructor.
The longer answer:
if the motive for trying to do this is that Thesaurus is a singleton, then you'll be better off exposing the singleton through a static method on the class (Thesaurus) and calling that method whenever you need the singleton.
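For example, a minimal sketch of that idea, using a classmethod (the instance() name is invented):
class Thesaurus(object):
    _shared = None

    @classmethod
    def instance(cls):
        # create the shared instance lazily, on first request
        if cls._shared is None:
            cls._shared = cls()
        return cls._shared

# anywhere you need the shared thesaurus:
thes = Thesaurus.instance()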

Does Python require intimate knowledge of all classes in the inheritance chain?

Python classes have no concept of public/private, so we are told to not touch something that starts with an underscore unless we created it. But does this not require complete knowledge of all classes from which we inherit, directly or indirectly? Witness:
class Base(object):
    def __init__(self):
        super(Base, self).__init__()
        self._foo = 0
    def foo(self):
        return self._foo + 1

class Sub(Base):
    def __init__(self):
        super(Sub, self).__init__()
        self._foo = None

Sub().foo()
Expectedly, a TypeError is raised when None + 1 is evaluated. So I have to know that _foo exists in the base class. To get around this, __foo can be used instead, which solves the problem by mangling the name. This seems to be, if not elegant, an acceptable solution. However, what happens if Base inherits from a class (in a separate package) called Sub? Now __foo in my Sub overrides __foo in the grandparent Sub.
This implies that I have to know the entire inheritance chain, including all "private" objects each uses. The fact that Python is dynamically-typed makes this even harder, since there are no declarations to search for. The worst part, however, is probably the fact Base might inherit from object right now, but in some future release, it switches to inheriting from Sub. Clearly if I know Sub is inherited from, I can rename my class, however annoying that is. But I can't see into the future.
Is this not a case where a true private data type would prevent a problem? How, in Python, can I be sure that I'm not accidentally stepping on somebody's toes if those toes might spring into existence at some point in the future?
EDIT: I've apparently not made clear the primary question. I'm familiar with name mangling and the difference between a single and a double underscore. The question is: how do I deal with the fact that I might clash with classes whose existence I don't know of right now? If my parent class (which is in a package I did not write) happens to start inheriting from a class with the same name as my class, even name mangling won't help. Am I wrong in seeing this as a (corner) case that true private members would solve, but that Python has trouble with?
EDIT: As requested, the following is a full example:
File parent.py:
class Sub(object):
    def __init__(self):
        self.__foo = 12
    def foo(self):
        return self.__foo + 1

class Base(Sub):
    pass
File sub.py:
import parent
class Sub(parent.Base):
def __init__(self):
super(Sub, self).__init__()
self.__foo = None
Sub().foo()
The grandparent's foo is called, but my __foo is used.
Obviously you wouldn't write code like this yourself, but parent could easily be provided by a third party, the details of which could change at any time.
Use private names (instead of protected ones), starting with a double underscore:
class Sub(Base):
    def __init__(self):
        super(Sub, self).__init__()
        self.__foo = None
        # ^^
will not conflict with _foo or __foo in Base. This is because Python prefixes the name with an underscore and the class name, so __foo in Sub and __foo in Base end up as different attributes; the following two lines are equivalent:
class Sub(Base):
    def x(self):
        self.__foo = None  # .. is the same as ..
        self._Sub__foo = None
(In response to the edit:) The chance that two classes in a class hierarchy not only have the same name, but that they are both using the same property name, and are both using the private mangled (__) form is so minuscule that it can be safely ignored in practice (I for one haven't heard of a single case so far).
In theory, however, you are correct in that in order to formally verify the correctness of a program, one must know the entire inheritance chain. Luckily, formal verification usually requires a fixed set of libraries in any case.
This is in the spirit of the Zen of Python, which includes
practicality beats purity.
Name mangling includes the class so your Base.__foo and Sub.__foo will have different names. This was the entire reason for adding the name mangling feature to Python in the first place. One will be _Base__foo, the other _Sub__foo.
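A quick demonstration, reusing the Base/Sub shape from the question but with double underscores:
class Base(object):
    def __init__(self):
        self.__foo = 0        # stored as _Base__foo

class Sub(Base):
    def __init__(self):
        super(Sub, self).__init__()
        self.__foo = None     # stored as _Sub__foo; Base's value is untouched

s = Sub()
print(s._Base__foo)   # 0
print(s._Sub__foo)    # None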
Many people prefer to use composition (has-a) instead of inheritance (is-a) for some of these very reasons.
This implies that I have to know the entire inheritance chain...
Yes, you should know the entire inheritance chain, or the docs for the object you are directly sub-classing should tell you what you need to know.
Subclassing is an advanced feature, and should be treated with care.
A good example of docs specifying what should be overridden in a subclass is the threading.Thread class:
This class represents an activity that is run in a separate thread of control. There are two ways to specify the activity: by passing a callable object to the constructor, or by overriding the run() method in a subclass. No other methods (except for the constructor) should be overridden in a subclass. In other words, only override the __init__() and run() methods of this class.
How often do you modify base classes in inheritance chains to introduce inheritance from a class with the same name as a subclass further down the chain???
Less flippantly, yes, you have to know the code you are working with. You certainly have to know the public names being used, after all. Python being python, discovering the public names in use by your ancestor classes takes pretty much the same effort as discovering the private ones.
In years of Python programming, I have never found this to be much of an issue in practice. When you're naming instance variables, you should have a pretty good idea whether (a) a name is generic enough that it's likely to be used in other contexts and (b) the class you're writing is likely to be involved in an inheritance hierarchy with other unknown classes. In such cases, you think a bit more carefully about the names you're using; self.value isn't a great idea for an attribute name, and neither is something like Adaptor a great class name.
In contrast, I have run into difficulties with the overuse of double-underscore names a number of times. Python being Python, even "private" names tend to be accessed by code defined outside the class. You might think that it would always be bad practice to let an external function access "private" attributes, but what about things like getattr and hasattr? The invocation of them can be in the class's own code, so the class is still controlling all access to the private attributes, but they still don't work without you doing the name-mangling manually. If Python had actually-enforced private variables you couldn't use functions like those on them at all. These days I tend to reserve double-underscore names for cases when I'm writing something very generic like a decorator, metaclass, or mixin that needs to add a "secret attribute" to the instances of the (unknown) classes it's applied to.
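A tiny illustration of that annoyance (the Plugin class and the attribute name are invented):
class Plugin(object):
    def __init__(self):
        self.__secret = 42    # the identifier is mangled to _Plugin__secret

    def check(self):
        # string literals are not mangled, so this lookup misses:
        wrong = hasattr(self, '__secret')
        # you have to mangle by hand for getattr/hasattr to find it:
        right = hasattr(self, '_Plugin__secret')
        return wrong, right

print(Plugin().check())   # (False, True)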
And of course there's the standard dynamic language argument: the reality is that you have to test your code thoroughly to have much justification in making the claim "my software works". Such testing will be very unlikely to miss the bugs caused by accidentally clashing names. If you are not doing that testing, then many more uncaught bugs will be introduced by other means than by accidental name clashes.
In summation, the lack of private variables is just not that big a deal in idiomatic Python code in practice, and the addition of true private variables would cause more frequent problems in other ways IMHO.
Mangling happens with double underscores. Single underscores are more of a "please don't".
You don't need to know all the details of all parent classes (note that deep inheritance is usually best avoided), because you can still dir() and help() and any other form of introspection you can come up with.
As noted, you can use name mangling. However, you can stick with a single underscore (or none!) if you document your code adequately - you should not have so many private variables that this proves to be a problem. Just say if a method relies on a private variable, and add either the variable, or the name of the method to the class docstring to alert users.
Further, if you create unit tests, you should create tests that check invariants on members, and accordingly these should be able to show up such name clashes.
If you really want to have "private" variables, and for whatever reason name-mangling doesn't meet your needs, you can factor your private state into another object:
class Foo(object):
    class Stateholder(object):
        pass
    def __init__(self):
        self._state = self.Stateholder()
        self._state.private = 1

In Python, when should I use a meta class?

I have gone through this: What is a metaclass in Python?
But can anyone explain more specifically when I should use the metaclass concept and when it's very handy?
Suppose I have a class like below:
class Book(object):
    CATEGORIES = ['programming', 'literature', 'physics']
    def _get_book_name(self, book):
        return book['title']
    def _get_category(self, book):
        for cat in self.CATEGORIES:
            if book['title'].find(cat) > -1:
                return cat
        return "Other"

if __name__ == '__main__':
    b = Book()
    dummy_book = {'title': 'Python Guide of Programming', 'status': 'available'}
    print b._get_category(dummy_book)
For this class, in which situation should I use a metaclass, and why is it useful?
Thanks in advance.
You use metaclasses when you want to mutate the class as it is being created. Metaclasses are hardly ever needed, they're hard to debug, and they're difficult to understand -- but occasionally they can make frameworks easier to use. In our 600 kloc code base we've used metaclasses 7 times: ABCMeta once, models.SubfieldBase from Django four times, and twice a metaclass that makes classes usable as views in Django. As @Ignacio writes, if you don't know that you need a metaclass (and have considered all other options), you don't need a metaclass.
Conceptually, a class exists to define what a set of objects (the instances of the class) have in common. That's all. It allows you to think about the instances of the class according to that shared pattern defined by the class. If every object was different, we wouldn't bother using classes, we'd just use dictionaries.
A metaclass is an ordinary class, and it exists for the same reason; to define what is common to its instances. The default metaclass type provides all the normal rules that make classes and instances work the way you're used to, such as:
Attribute lookup on an instance checks the instance followed by its class, followed by all superclasses in MRO order
Calling MyClass(*args, **kwargs) invokes i = MyClass.__new__(MyClass, *args, **kwargs) to get an instance, then invokes i.__init__(*args, **kwargs) to initialise it
A class is created from the definitions in a class block by making all the names bound in the class block into attributes of the class
Etc
If you want to have some classes that work differently to normal classes, you can define a metaclass and make your unusual classes instances of the metaclass rather than type. Your metaclass will almost certainly be a subclass of type, because you probably don't want to make your different kind of class completely different; just as you might want to have some sub-set of Books behave a bit differently (say, books that are compilations of other works) and use a subclass of Book rather than a completely different class.
If you're not trying to define a way of making some classes work differently to normal classes, then a metaclass is probably not the most appropriate solution. Note that the "classes define how their instances work" is already a very flexible and abstract paradigm; most of the time you do not need to change how classes work.
If you google around, you'll see a lot of examples of metaclasses that are really just being used to go do a bunch of stuff around class creation; often automatically processing the class attributes, or finding new ones automatically from somewhere. I wouldn't really call those great uses of metaclasses. They're not changing how classes work, they're just processing some classes. A factory function to create the classes, or a class method that you invoke immediately after class creation, or best of all a class decorator, would be a better way to implement this sort of thing, in my opinion.
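For example, the kind of "process the class right after creation" work that often gets pushed into a metaclass can be done with a plain class decorator (REGISTRY and register are invented names for illustration):
REGISTRY = {}

def register(cls):
    # record the class by name, then hand it back unchanged
    REGISTRY[cls.__name__] = cls
    return cls

@register
class Novel(object):
    pass

print(REGISTRY)   # {'Novel': <class ...>}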
But occasionally you find yourself writing complex code to get Python's default behaviour of classes to do something conceptually simple, and it actually helps to step "further out" and implement it at the metaclass level.
A fairly trivial example is the "singleton pattern", where you have a class of which there can only be one instance; calling the class will return an existing instance if one has already been created. Personally I am against singletons and would not advise their use (I think they're just global variables, cunningly disguised to look like newly created instances in order to be even more likely to cause subtle bugs). But people use them, and there are huge numbers of recipes for making singleton classes using __new__ and __init__. Doing it this way can be a little irritating, mainly because Python wants to call __new__ and then call __init__ on the result of that, so you have to find a way of not having your initialisation code re-run every time someone requests access to the singleton. But wouldn't it be easier if we could just tell Python directly what we want to happen when we call the class, rather than trying to set up the things that Python wants to do so that they happen to do what we want in the end?
class Singleton(type):
    def __init__(self, *args, **kwargs):
        super(Singleton, self).__init__(*args, **kwargs)
        self.__instance = None
    def __call__(self, *args, **kwargs):
        if self.__instance is None:
            self.__instance = super(Singleton, self).__call__(*args, **kwargs)
        return self.__instance
Under 10 lines, and it turns normal classes into singletons simply by adding __metaclass__ = Singleton, i.e. nothing more than a declaration that they are a singleton. It's just easier to implement this sort of thing at this level, than to hack something out at the class level directly.
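For instance, a short usage sketch (AppConfig is a made-up class; in Python 3 you would write class AppConfig(metaclass=Singleton) instead of setting __metaclass__):
class AppConfig(object):
    __metaclass__ = Singleton    # Python 2 syntax, as in the answer above
    def __init__(self):
        print('loading configuration...')   # runs only once

a = AppConfig()
b = AppConfig()
print(a is b)   # True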
But for your specific Book class, it doesn't look like you have any need to do anything that would be helped by a metaclass. You really don't need to reach for metaclasses unless you find the normal rules of how classes work are preventing you from doing something that should be simple in a simple way (which is different from "man, I wish I didn't have to type so much for all these classes, I wonder if I could auto-generate the common bits?"). In fact, I have never actually used a metaclass for something real, despite using Python every day at work; all my metaclasses have been toy examples like the above Singleton or else just silly exploration.
A metaclass is used whenever you need to override the default behavior for classes, including their creation.
A class gets created from the name, a tuple of bases, and a class dict. You can intercept the creation process to make changes to any of those inputs.
You can also override any of the services provided by classes:
__call__, which is used to create instances
__getattribute__, which is used to look up attributes and methods on a class
__setattr__, which controls setting attributes
__repr__, which controls how the class is displayed
In summary, metaclasses are used when you need to control how classes are created or when you need to alter any of the services provided by classes.
If you, for whatever reason, want to do stuff like Class[x], x in Class, etc., you have to use metaclasses:
class Meta(type):
    def __getitem__(cls, x):
        return x ** 2
    def __contains__(cls, x):
        return int(x ** 0.5) == x ** 0.5

# Python 2.x
class Class(object):
    __metaclass__ = Meta

# Python 3.x
class Class(metaclass=Meta):
    pass

print Class[2]
print 4 in Class
Check the link Meta Class Made Easy to learn how and when to use metaclasses.

Python: must __init__(self, foo) always be followed by self.foo = foo?

I've been striving mightily for three days to wrap my head around __init__ and "self", starting at Learn Python the Hard Way exercise 42, and moving on to read parts of the Python documentation, Alan Gauld's chapter on Object-Oriented Programming, Stack threads like this one on "self", and this one, and frankly, I'm getting ready to hit myself in the face with a brick until I pass out.
That being said, I've noticed a really common convention in initial __init__ definitions, which is to follow up with (self, foo) and then immediately declare, within that definition, that self.foo = foo.
From LPTHW, ex42:
class Game(object):
    def __init__(self, start):
        self.quips = ["a list", "of phrases", "here"]
        self.start = start
From Alan Gauld:
def __init__(self,val): self.val = val
I'm in that horrible space where I can see that there's just One Big Thing I'm not getting, and it's remaining opaque no matter how much I read about it and try to figure it out. Maybe if somebody can explain this little bit of consistency to me, the light will turn on. Is this because we need to say that "foo," the variable, will always be equal to the (foo) parameter, which is itself contained in the "self" parameter that's automatically assigned to the def it's attached to?
You might want to study up on object-oriented programming.
Loosely speaking, when you say
class Game(object):
    def __init__(self, start):
        self.start = start
you're saying:
I have a type of "thing" named Game
Whenever a new Game is created, it will demand some extra piece of information from me: start. (This is because the Game's initializer, named __init__, asks for this information.)
The initializer (also referred to as the "constructor", although that's a slight misnomer) needs to know which object (which was created just a moment ago) it's initializing. That's the first parameter -- which is usually called self by convention (but which you could call anything else...).
The game probably needs to remember what the start I gave it was. So it stores this information "inside" itself, by creating an instance variable also named start (nothing special, it's just whatever name you want), and assigning the value of the start parameter to the start variable.
If it doesn't store the value of the parameter, it won't have that information available for later use.
Hope this explains what's happening.
I'm not quite sure what you're missing, so let me hit some basic items.
There are two "special" intialization names in a Python class object, one that is relatively rare for users to worry about, called __new__, and one that is much more usual, called __init__.
When you invoke a class-object constructor, e.g. (based on your example) x = Game(args), this first calls Game.__new__ to obtain memory in which to hold the object, and then Game.__init__ to fill in that memory. Most of the time, you can allow the underlying object.__new__ to allocate the memory, and you just need to fill it in. (You can use your own allocator for special weird rare cases like objects that never change and may share identities, the way ordinary integers do for instance. It's also for "metaclasses" that do weird stuff. But that's all a topic for much later.)
Your Game.__init__ function is called with "all the arguments to the constructor" plus one stashed in the front, which is the memory allocated for that object itself. (For "ordinary" objects that's mostly a dictionary of "attributes", plus the magic glue for classes, but for objects with __slots__ the attributes dictionary is omitted.) Naming that first argument self is just a convention—but don't violate it, people will hate you if you do. :-)
There's nothing that requires you to save all the arguments to the constructor. You can set any or all instance attributes you like:
class Weird(object):
    def __init__(self, required_arg1, required_arg2, optional_arg3='spam'):
        self.irrelevant = False
    def __str__(self):
        ...
The thing is that a Weird() instance is pretty useless after initialization, because you're required to pass two arguments that are simply thrown away, plus an optional third argument that is also thrown away:
x = Weird(42, 0.0, 'maybe')
The only point in requiring those thrown-away arguments is for future expansion, as it were (you might have these unused fields during early development). So if you're not immediately using and/or saving arguments to __init__, something is definitely weird in Weird.
Incidentally, the only reason for using (object) in the class definition is to indicate to Python 2.x that this is a "new-style" class (as distinguished from very-old-Python "instance only" classes). But it's generally best to use it—it makes what I said above about object.__new__ true, for instance :-) —until Python 3, where the old-style stuff is gone entirely.
Parameter names should be meaningful, to convey the role they play in the function/method or some information about their content.
You can see parameters of constructors to be even more important because they are often required for the working of the new instance and contain information which is needed in other methods of the class as well.
Imagine you have a Game class which accepts a playerList.
class Game:
    def __init__(self, playerList):
        self.playerList = playerList   # or self.players = playerList
    def printPlayerList(self):
        print self.playerList          # or print self.players
This list is needed in various methods of the class. Hence it makes sense to assign it to self.playerList. You could also assign it to self.players, whatever you feel more comfortable with and you think is understandable. But if you don't assign it to self.<somename> it won't be accessible in other methods.
So there is nothing special about how to name parameters/attributes/etc (there are some special class methods though), but using meaningful names makes the code easier to understand. Or would you understand the meaning of the above class if you had:
class G:
    def __init__(self, x):
        self.y = x
    def ppl(self):
        print self.y
? :) It does exactly the same but is harder to understand...
