First things first, I'm reasonably new to Python, but I have been working hard and doing lots of tutorials and sample projects to get better, so if I'm missing something obvious, I apologize.
I've been trying to figure this out for a while now, and I've done a number of searches here and through Google, but I can't quite figure out how to turn the examples I've found into what I'm looking for, so I was hoping someone here could give me a push in the right direction.
class Super1:
    def __init__(self, atrib1, atrib2, atrib3):
        self.atrib1 = atrib1
        self.atrib2 = atrib2
        self.atrib3 = atrib3

class Sub1(Super1):
    def __init__(self, atrib4, atrib5, atrib6):
        self.atrib4 = atrib4
        self.atrib5 = atrib5
        self.atrib6 = atrib6
Okay, so what I'm having trouble figuring out is this: in the tutorials I've done, they said that I could call the class like this:
spam = Super1("eggs","foo","bar")
and if I input
print spam.atrib1
it would spit out
eggs
What I want to do is make spam an instance of Sub1, but I don't know how to call it so that I can set all the atribs the way I did with Super1.
I looked up a number of multiple-inheritance examples, but I can't seem to reconcile them with my own needs. Most of the tutorials don't have more than one attribute, or they often have the subclass 'override' the attributes of the superclass.
I also checked into composition, and I'm not sure that's exactly what I'm looking for in this part of my project, but I do know that I will need it in later parts.
If anyone can point me in the right direction, that would be great.
You need to call the parent class's constructor: Super1.__init__(self).
You also need to allow Sub1 to take the arguments for the parent class's constructor.
With the modifications above, your code becomes:
class Sub1(Super1):
    def __init__(self, atrib1, atrib2, atrib3, atrib4, atrib5, atrib6):
        Super1.__init__(self, atrib1, atrib2, atrib3)
        self.atrib4 = atrib4
        self.atrib5 = atrib5
        self.atrib6 = atrib6
However, rather than calling the parent class's constructor yourself, you should use the super built-in function:
super(Sub1, self).__init__(atrib1, atrib2, atrib3)
That way, you don't have to hard-code the name of the parent class into each subclass's constructor, which makes the code easier to refactor later. Another benefit of using super is that it automatically deals with the sticky details of multiple-inheritance problems such as "diamond inheritance".
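For illustration, here is a minimal, hypothetical diamond-shaped hierarchy; with cooperative super calls, the shared base's __init__ runs exactly once:

class A(object):
    def __init__(self):
        print "A.__init__"

class B(A):
    def __init__(self):
        super(B, self).__init__()
        print "B.__init__"

class C(A):
    def __init__(self):
        super(C, self).__init__()
        print "C.__init__"

class D(B, C):
    def __init__(self):
        super(D, self).__init__()
        print "D.__init__"

D()  # prints A.__init__, C.__init__, B.__init__, D.__init__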
One more piece of advice: if you don't know ahead of time how many positional arguments the superclass will take, you can use the *args syntax:
class Sub1(Super1):
    def __init__(self, atrib4, atrib5, atrib6, *args):
        super(Sub1, self).__init__(*args)
        self.atrib4 = atrib4
        self.atrib5 = atrib5
        self.atrib6 = atrib6
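Note that with this version the subclass's own arguments come first and everything left over goes to the superclass, so a call would look like this (values made up):

spam = Sub1("x", "y", "z", "eggs", "foo", "bar")
print spam.atrib1  # eggs, set by Super1.__init__
print spam.atrib4  # x, set by Sub1.__init__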
If Sub1 inherits from Super1, that's supposed to mean it is a Super1 (with some extra stuff added, or with some customizations). But you can't remove things, so Sub1 must:
contain everything a Super1 contains, and
initialize the Super1 part of itself by calling super(Sub1, self).__init__(...) in its own constructor.
So, if your superclass has a member a whose value is passed to its constructor, your subclass also has (inherits) a member a, and must somehow pass its value to the superclass constructor.
Whether that means
class Sub1(Super1):
    def __init__(self, a, b, c, d, e, f):
        super(Sub1, self).__init__(a, b, c)
        self.d = d
        self.e = e
        self.f = f
or whether there's some relationship between the super and subclass arguments (or the subclass hard-codes some of the superclass arguments, or ...) depends on your code.
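For instance, a sketch of the hard-coding case (the name Sub2 and the fixed values are made up for illustration):

class Sub2(Super1):
    def __init__(self, atrib1, atrib4):
        # atrib2 and atrib3 are fixed by the subclass itself
        super(Sub2, self).__init__(atrib1, "foo", "bar")
        self.atrib4 = atrib4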
If you call spam = Super1("eggs", "foo", "bar"), it will call the superclass constructor.
The problem is that if you want to create an instance of Sub1, you should call spam = Sub1("eggs", "foo", "bar", atrib4, atrib5, atrib6). You also have to change the constructor for Sub1:
def __init__(self, atrib1, atrib2, atrib3, atrib4, atrib5, atrib6):
    Super1.__init__(self, atrib1, atrib2, atrib3)
    self.atrib4 = atrib4
    self.atrib5 = atrib5
    self.atrib6 = atrib6
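A quick check of that constructor (values made up):

spam = Sub1("eggs", "foo", "bar", "a4", "a5", "a6")
print spam.atrib1  # eggs
print spam.atrib6  # a6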
Related
I have a few classes with almost identical contents, so I tried two methods to copy the classes and their attributes over. The classes copy correctly, but randint is only invoked when the original class body runs, so the same number comes out every time. Is there any way to recalculate the random number for each class?
import random

class a:
    exampleData = random.randint(1,100)

b = type('b', a.__bases__, dict(a.__dict__))

class c(a):
    pass
For example if a.exampleData = 50, b.exampleData and c.exampleData would be the same. Is there any way around this?
Edit -- Part of my program displays characters with random stats each time, and the class contains the stats associated with each character. The random numbers pick the stats out of a list, but the same stats are being chosen, instead of being random in each class. I may not be explaining this right, so basically:
data = [stat1, stat2, stat3]  # etc.
data[random.randint(0, len(data) - 1)]
When you write this:
b = type('b', a.__bases__, dict(a.__dict__))
… you're just copying a.__dict__. Since a.__dict__ is just {'exampleData': 50}, the new copy that ends up as b.__dict__ is also going to be {'exampleData': 50}.
There are many ways you could get a new random number. The simplest is to just create a new random number for b explicitly:
bdict = dict(a.__dict__)
bdict['exampleData'] = random.randint(1,100)
b = type('b', a.__bases__, bdict)
If you want to create a bunch of classes this way, you can wrap that up in a function:
def make_clone(proto, name):
    clonedict = dict(proto.__dict__)
    clonedict['exampleData'] = random.randint(1,100)
    return type(name, proto.__bases__, clonedict)
You can make that factory function as complicated as you want (see namedtuple for a pretty extreme example).
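Usage would then look something like this, assuming the class a and the import random from earlier:

b = make_clone(a, 'b')
c = make_clone(a, 'c')
print(b.exampleData, c.exampleData)  # almost certainly different now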
You could wrap that behavior up in a decorator:
def randomize(cls):
    cls.exampleData = random.randint(1,100)
    return cls

@randomize
class a:
    pass

b = randomize(type('b', a.__bases__, dict(a.__dict__)))
Notice that I had to call the decorator with normal function-call syntax here, because there's no class definition statement to attach an @decorator to.
Or you can wrap it up in a metaclass:
class RandomMeta(type):
    def __new__(mcls, name, bases, namespace):
        d = dict(namespace)
        d['exampleData'] = random.randint(1,100)
        return type.__new__(mcls, name, bases, d)
class a(metaclass=RandomMeta):
    pass

b = type(a)('b', a.__bases__, dict(a.__dict__))
Notice that we have to call type(a) here, the same way a class definition statement does, not the base metaclass type.
Also notice that I'm not taking **kwds in the __new__ method, and I'm calling type.__new__ directly. This means that if you try to use RandomMeta together with another metaclass (besides type), you should get an immediate TypeError, rather than something that may or may not be correct.
Meanwhile, I have a suspicion that what you're really trying to do here is build a prototype-based inheritance system, a la Self or JavaScript on top of Python's class-based system. While you can do that by using a special Prototype metaclass and a bunch of class objects, it's a whole lot simpler to just have a Prototype class and a bunch of instance objects. The only advantage to the metaclass approach is that you can use class statements (misleadingly, but conveniently) to clone prototypes, and you're explicitly not doing that here.
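If that is the goal, a minimal sketch of the instance-based version (all names here are hypothetical):

import random

class Prototype(object):
    def __init__(self):
        self.exampleData = random.randint(1, 100)

    def clone(self):
        return type(self)()  # a fresh instance gets a fresh exampleData

a = Prototype()
b = a.clone()
print(a.exampleData, b.exampleData)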
While my other answer covers the question as asked, I suspect it's all completely unnecessary to the OP's actual problem.
If you just want to create a bunch of separate objects, which each have a separate value for exampleData, you just want a bunch of instances of a single class, not a bunch of separate classes.
A class is a special kind of object that, in addition to doing all the normal object stuff, also works as a factory for other objects, which are instances of that class. You don't need a, b, and c to all be factories for different kinds of objects; you just need them to be different objects of the same type. So:
class RandomThing:
    def __init__(self):
        self.exampleData = random.randint(1,100)

a = RandomThing()
b = RandomThing()
… or, if you want to make sure b is the same type of thing as a but don't know what type that is:
b = type(a)()
That's as fancy as you need to get here.
See the official tutorial on Classes (or maybe search for a friendlier tutorial, because there are probably better ones out there).
This is a best-practices kind of question.
I have a class structure with some methods defined. In some cases I want to override a particular part of a method. My first thought was to split my method into more atomic pieces and override the related parts, like below.
class myTest(object):
    def __init__(self):
        pass

    def myfunc(self):
        self._do_atomic_job()
        ...

    def _do_atomic_job(self):
        print "Hello"
That is a practical-looking way to solve the problem. But there are too many parameters that need to be transferred to and received back from _do_atomic_job(), and I do not want to pass and retrieve tons of parameters. Another option is setting these parameters as instance variables with self.param_var etc., but those parameters are used in only a small part of the code, and using self is not my preferred way of solving this.
The last option I thought of is using inner functions. (I know I will have problems with variable scopes, but as I said, this is a best-practices question, so just ignore them and assume that scoping and everything else about the inner functions works as expected.)
class MyTest2(object):
    mytext = ""

    def myfunc(self):
        def _do_atomic_job():
            mytext = "Hello"
        _do_atomic_job()
        print mytext
Let's assume that works as expected. What I want to do is override the inner function _do_atomic_job():
class MyTest3(MyTest2):
    def __init__(self):
        super(MyTest3, self).__init__()
        self.myfunc._do_atomic_job = self._alt_do_atomic_job  # Of course this does not work!

    def _alt_do_atomic_job(self):
        mytext = "Hollla!"
So what I want to achieve is to override the inherited class's method's inner function _do_atomic_job. Is it possible?
Either factoring _do_atomic_job() into a proper method, or maybe factoring it into its own class, seems like the best approach to take. Overriding an inner function can't work, because you won't have access to the local variables of the containing method.
You say that _do_atomic_job() takes a lot of parameters and returns lots of values. Maybe you can group some of these parameters into reasonable objects:
_do_atomic_job(start_x, start_y, end_x, end_y) # Separate coordinates
_do_atomic_job(start, end) # Better: start/end points
_do_atomic_job(rect) # Even better: rectangle
If you can't do that, and _do_atomic_job() is reasonably self-contained, you could create helper classes AtomicJobParams and AtomicJobResult.
An example using namedtuples instead of classes:
from collections import namedtuple

AtomicJobParams = namedtuple('AtomicJobParams', ['a', 'b', 'c', 'd'])

jobparams = AtomicJobParams(a, b, c, d)
_do_atomic_job(jobparams)  # Returns AtomicJobResult
Finally, if the atomic job is self-contained, you can even factor it into its own class AtomicJob.
class AtomicJob:
    def __init__(self, a, b, c, d):
        self.a = a
        self.b = b
        self.c = c
        self.d = d
        self._do_atomic_job()

    def _do_atomic_job(self):
        ...
        self.result_1 = 42
        self.result_2 = 23
        self.result_3 = 443
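Usage would then be along these lines (values made up, assuming _do_atomic_job is filled in):

job = AtomicJob(1, 2, 3, 4)
print job.result_1, job.result_2, job.result_3  # 42 23 443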
Overall, this seems more like a code-factorization problem. Aim for rather lean classes that delegate work to helpers where appropriate. Follow the single responsibility principle. If values belong together, bundle them up in a value class.
As David Miller (a prominent Linux kernel developer) recently said:
If you write interfaces with more than 4 or 5 function arguments, it's possible that you and I cannot be friends.
Inner functions see the variables of the scope where they are defined, not of the scope where they are executed. This prints "hello":
class MyTest2(object):
    def __init__(self):
        localvariable = "hello"
        def do_atomic_job():
            print localvariable
        self.do_atomic_job = do_atomic_job

    def myfunc(self):
        localvariable = "hollla!"
        self.do_atomic_job()

MyTest2().myfunc()
So I can't see any way you could use the local variables without passing them, which is probably the best way to do it.
Note: passing locals() will get you a dict of the variables, though this is considered quite bad style.
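For completeness, here is a minimal sketch of the refactor both answers point toward: make _do_atomic_job a real method that returns its result, and override it in the subclass:

class MyTest2(object):
    def myfunc(self):
        print self._do_atomic_job()

    def _do_atomic_job(self):
        return "Hello"

class MyTest3(MyTest2):
    def _do_atomic_job(self):
        return "Hollla!"

MyTest2().myfunc()  # Hello
MyTest3().myfunc()  # Hollla!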
I have class Base. I'd like to extend its functionality in a class Derived. I was planning to write:
class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        # ...

    def derived_method1(self):
        # ...
Sometimes I already have a Base instance, and I want to create a Derived instance based on it, i.e., a Derived instance that shares the Base object (doesn't re-create it from scratch). I thought I could write a static method to do that:
b = Base(arg1, arg2) # very large object, expensive to create or copy
d = Derived.from_base(b, derived_arg1, derived_arg2) # reuses existing b object
but it seems impossible. Either I'm missing a way to make this work, or (more likely) I'm missing a very big reason why it can't be allowed to work. Can someone explain which one it is?
[Of course, if I used composition rather than inheritance, this would all be easy to do. But I was hoping to avoid the delegation of all the Base methods to Derived through __getattr__.]
Rely on what your Base class is doing with base_arg1 and base_arg2.
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2
    ...

class Derived(Base):
    def __init__(self, base_arg1, base_arg2, derived_arg1, derived_arg2):
        super().__init__(base_arg1, base_arg2)
        ...

    @classmethod
    def from_base(cls, b, da1, da2):
        return cls(b.base_arg1, b.base_arg2, da1, da2)
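Usage would then look like this (argument values made up):

b = Base(1, 2)
d = Derived.from_base(b, 3, 4)

Note that from_base builds a new Derived by re-running the constructors with b's stored arguments; it does not reuse the b object itself.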
The alternative approach to Alexey's answer (my +1) is to pass the base object in the base_arg1 argument and to check whether it was misused for passing the base object (i.e., whether it is an instance of the base class). The other argument can be made technically optional (say, defaulting to None) and checked explicitly inside the code.
The difference is that only the argument type decides which of the two possible ways of creation is to be used. This is necessary if the creation of the object cannot be explicitly captured in the source code (e.g., some structure contains a mix of argument tuples, some of them with the initial values, some of them with references to existing objects). Then you would probably need to pass the arguments as keyword arguments:
d = Derived(b, derived_arg1=derived_arg1, derived_arg2=derived_arg2)
Update: sharing the internal structures with the initial class is possible with both approaches. However, you must be aware that if one of the objects tries to modify the shared data, the usual funny things can happen.
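A rough sketch of that type-dispatching constructor (attribute names borrowed from Alexey's answer; the optional defaults are an assumption):

class Derived(Base):
    def __init__(self, base_arg1, base_arg2=None, derived_arg1=None, derived_arg2=None):
        if isinstance(base_arg1, Base):
            # base_arg1 actually carries an existing Base object
            super().__init__(base_arg1.base_arg1, base_arg1.base_arg2)
        else:
            super().__init__(base_arg1, base_arg2)
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2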
To be clear here, I'll make an answer with code. pepr talks about this solution, but code is always clearer than English. In this case Base should not be subclassed, but it should be a member of Derived:
class Base(object):
    def __init__(self, base_arg1, base_arg2):
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2

class Derived(object):
    def __init__(self, base, derived_arg1, derived_arg2):
        self.base = base
        self.derived_arg1 = derived_arg1
        self.derived_arg2 = derived_arg2

    def derived_method1(self):
        return self.base.base_arg1 * self.derived_arg1
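Usage, continuing the example (values made up):

b = Base(10, 2)             # the expensive object, created once
d = Derived(b, 3, 4)        # d shares b instead of rebuilding it
print(d.derived_method1())  # 30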
I've been striving mightily for three days to wrap my head around __init__ and "self", starting at Learn Python the Hard Way exercise 42, and moving on to read parts of the Python documentation, Alan Gauld's chapter on Object-Oriented Programming, Stack threads like this one on "self", and this one, and frankly, I'm getting ready to hit myself in the face with a brick until I pass out.
That being said, I've noticed a really common convention in initial __init__ definitions, which is to follow up with (self, foo) and then immediately declare, within that definition, that self.foo = foo.
From LPTHW, ex42:
class Game(object):
    def __init__(self, start):
        self.quips = ["a list", "of phrases", "here"]
        self.start = start
From Alan Gauld:
def __init__(self,val): self.val = val
I'm in that horrible space where I can see that there's just One Big Thing I'm not getting, and it remains opaque no matter how much I read about it and try to figure it out. Maybe if somebody can explain this little bit of consistency to me, the light will turn on. Is it because we need to say that foo, the variable, will always be equal to the foo parameter, which is itself contained in the self parameter that's automatically assigned to the def it's attached to?
You might want to study up on object-oriented programming.
Loosely speaking, when you say
class Game(object):
    def __init__(self, start):
        self.start = start
you're saying:
I have a type of "thing" named Game
Whenever a new Game is created, it will demand an extra piece of information from me: start. (This is because the Game's initializer, named __init__, asks for this information.)
The initializer (also referred to as the "constructor", although that's a slight misnomer) needs to know which object (which was created just a moment ago) it's initializing. That's the first parameter -- which is usually called self by convention (but which you could call anything else...).
The game probably needs to remember what the start I gave it was. So it stores this information "inside" itself, by creating an instance variable also named start (nothing special, it's just whatever name you want), and assigning the value of the start parameter to the start variable.
If it doesn't store the value of the parameter, it won't have that information available for later use.
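For instance, continuing the LPTHW example (the start value is made up):

g = Game("central_corridor")
print g.start     # central_corridor, remembered on the instance
print g.quips[0]  # a list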
Hope this explains what's happening.
I'm not quite sure what you're missing, so let me hit some basic items.
There are two "special" initialization names in a Python class object: one that is relatively rare for users to worry about, called __new__, and one that is much more usual, called __init__.
When you invoke a class-object constructor, e.g. (based on your example) x = Game(args), this first calls Game.__new__ to obtain memory in which to hold the object, and then Game.__init__ to fill in that memory. Most of the time, you can allow the underlying object.__new__ to allocate the memory, and you just need to fill it in. (You can use your own allocator for special weird rare cases like objects that never change and may share identities, the way ordinary integers do for instance. It's also for "metaclasses" that do weird stuff. But that's all a topic for much later.)
Your Game.__init__ function is called with "all the arguments to the constructor" plus one stashed in the front, which is the memory allocated for that object itself. (For "ordinary" objects that's mostly a dictionary of "attributes", plus the magic glue for classes, but for objects with __slots__ the attributes dictionary is omitted.) Naming that first argument self is just a convention—but don't violate it, people will hate you if you do. :-)
There's nothing that requires you to save all the arguments to the constructor. You can set any or all instance attributes you like:
class Weird(object):
    def __init__(self, required_arg1, required_arg2, optional_arg3='spam'):
        self.irrelevant = False

    def __str__(self):
        ...
The thing is that a Weird() instance is pretty useless after initialization, because you're required to pass two arguments that are simply thrown away, and a third, optional argument is thrown away too:
x = Weird(42, 0.0, 'maybe')
The only point in requiring those thrown-away arguments is for future expansion, as it were (you might have these unused fields during early development). So if you're not immediately using and/or saving arguments to __init__, something is definitely weird in Weird.
Incidentally, the only reason for using (object) in the class definition is to indicate to Python 2.x that this is a "new-style" class (as distinguished from very-old-Python "instance only" classes). But it's generally best to use it—it makes what I said above about object.__new__ true, for instance :-) —until Python 3, where the old-style stuff is gone entirely.
Parameter names should be meaningful, to convey the role they play in the function/method or some information about their content.
Constructor parameters can be considered even more important, because they are often required for the new instance to work and contain information which is needed in other methods of the class as well.
Imagine you have a Game class which accepts a playerList.
class Game:
    def __init__(self, playerList):
        self.playerList = playerList   # or self.players = playerList

    def printPlayerList(self):
        print self.playerList          # or print self.players
This list is needed in various methods of the class. Hence it makes sense to assign it to self.playerList. You could also assign it to self.players, whatever you feel more comfortable with and you think is understandable. But if you don't assign it to self.<somename> it won't be accessible in other methods.
So there is nothing special about how to name parameters/attributes/etc (there are some special class methods though), but using meaningful names makes the code easier to understand. Or would you understand the meaning of the above class if you had:
class G:
    def __init__(self, x):
        self.y = x

    def ppl(self):
        print self.y
? :) It does exactly the same but is harder to understand...
In Python, I want to know if it is necessary to include __init__ as the first method while creating a class, as in the example below:
class ExampleClass:
    def __init__(self, some_message):
        self.message = some_message
        print "New Class instance created, with message:"
        print self.message
Also, why do we use self to call methods?
Can someone explain the use of "self" in detail?
Also, why do we use pass statement in Python?
No, it isn't necessary.
For example:
class A(object):
    def f(self):
        print 'foo'
And you can of course use it, in this manner:
a = A()
a.f()
In fact you can even define a class in this manner.
class A:
    pass
However, defining __init__ is a common practice because instances of a class usually store some sort of state information or data and the methods of the class offer a way to manipulate or do something with that state information or data. __init__ allows us to initialize this state information or data while creating an instance of the class.
Here is a complete example.
class BankAccount(object):
    def __init__(self, deposit):
        self.amount = deposit

    def withdraw(self, amount):
        self.amount -= amount

    def deposit(self, amount):
        self.amount += amount

    def balance(self):
        return self.amount
# Let me create an instance of 'BankAccount' class with the initial
# balance as $2000.
myAccount = BankAccount(2000)
# Let me check if the balance is right.
print myAccount.balance()
# Let me deposit my salary
myAccount.deposit(10000)
# Let me withdraw some money to buy dinner.
myAccount.withdraw(15)
# What's the balance left?
print myAccount.balance()
An instance of the class is always passed as the first argument to a method of the class. For example, if there is a class A and you have an instance a = A(), whenever you call a.foo(x, y), Python calls foo(a, x, y) of class A automatically. (Note the first argument.) By convention, we name this first argument self.
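A minimal sketch of that equivalence (names hypothetical):

class A(object):
    def foo(self, x, y):
        return (self, x, y)

a = A()
assert a.foo(1, 2) == A.foo(a, 1, 2)  # Python fills in 'a' as 'self' for you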
In addition to the other answers, one point in your question has not been addressed:
Is it necessary to include __init__ as the first function every time in a class in Python?
The answer is no. If you need a constructor, it can be located at any position in your code, although the conventional and logical place is the beginning.
You don't need to put it in your class; it is the object constructor.
You will need it if you want things to happen automatically to your object when it is instantiated.
No, it is not necessary to use __init__ in a class. It's an object constructor that defines initial values upon instantiation.
If you're programming in an OOP manner and your class needs a basic structure, you will often need it.
I read your other sub-question regarding
Can u explain about the use of "self"??? – harsh Jul 28 '11 at 5:13
Please refer to this post on Stack Overflow. There are a lot of useful links there to help you better understand Python's __init__ method.
Python __init__ and self what do they do?
The __init__ method is not necessary. As for the pass statement, it is just a placeholder that does nothing; it is used where a statement is syntactically required but there is nothing to do, for example in an empty class body or an if branch.
I initially struggled with that question too, then I realized it is just another way to store certain data in your object, and that data can be passed to any object method you define, since instance methods have a self argument that can point back to the data you created in the __init__ method.
No, it is not necessary, but it helps in many ways. People from a Java or OOP background will find it familiar.
For every class instance, there is a chain of constructor calls that needs to complete when we instantiate the class by creating an object.
If we don't define __init__, the interpreter supplies a default one. But when we need some action to be performed while creating an object, we must define it ourselves.
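A minimal illustration of that default (class name hypothetical):

class NoInit:
    pass

n = NoInit()  # still works: the interpreter supplies a default __init__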
first.py
--------
class A:
    def one(self):
        print("something")

second.py
---------
from first import A

class B:
    def test(self):
        a = A()
        x = a.one()
        print(x)

B().test()

output:
something
None
Sure, this is not required.
Please read more about defining Python classes in this tutorial here.
Read more about __init__ in the documentation here and at What do __init__ and self do in Python?.
In general, __init__ is a kind of constructor that is called automatically and lets you perform additional actions when an object is created (adding variables, calling methods, and so on). The idea is to be able to initialize the instance right after it is created, in case you need to do something with it before proceeding, for example remember its creation time or serialize its initial state. So if you don't need any special preparation, you may skip using it.
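For example, a tiny sketch of the "remember creation time" idea mentioned above (class name hypothetical):

import time

class Stamped(object):
    def __init__(self):
        self.created_at = time.time()  # record when this instance was made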
It's not necessary. It's just a function that runs every time an object is created from your class, and it can be helpful if you want every object to have some things in common.