I am defining two classes in the same module and want to use the second one in the first one (as a global variable):
class Class1(object):
    global_c2 = Class2()

    def foo(self):
        local_c2 = Class2()

class Class2(object):
    pass
global_c2 gets an error but local_c2 doesn't. This makes sense, because when the compiler looks through this file it won't know that Class2 is going to exist. Also, if I switch the classes around so that Class2 is defined first, it works.
However I was wondering if there is another way to get around this. Maybe I can somehow tell python that Class2 is going to exist so don't worry about it, or do I just have to put them in the right order?
The compiler doesn't do anything here. In both cases, exactly the same bytecode sequence is generated to look up the class at runtime and instantiate it.
What makes the difference is when the statements are run. All code in a Python module is executed top to bottom -- there is no such thing as a declaration; everything is a definition and every binding is dynamic. Code in a class body is run when the class definition is encountered (and therefore before the second class is brought into existence and bound to the name Class2). Code in a function runs only when the function is called, and because you don't call the function before the definition of the second class, Class2 is available by the time you call that function.
That's basically what every solution boils down to: Delay binding until whatever you're binding to exists.
You can do the following (i.e. backfill the contents of Class1 once Class2 has been defined):
class Class1(object):
    pass

class Class2(object):
    pass

Class1.global_c2 = Class2()
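Another way to delay the binding (a sketch of the same idea, not the only option; note that it stores the instance on self rather than as a class attribute) is to create the Class2 instance inside a method such as __init__, so the name is only looked up when that code actually runs:

class Class1(object):
    def __init__(self):
        self.c2 = Class2()   # looked up at call time, after Class2 exists

class Class2(object):
    pass

c1 = Class1()                # works: Class2 is already defined by now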
I have a question about inner class usage in Python. I know that this is bad practice, but anyway: I am wondering about the scope in which an inner class is defined. I got the error 'global name 'ClassName' is not defined'. My code snippet looks like this:

class ClassWithEnum(object):
    class EnumClass(object):
        ...

    def doSomethingWithEnum(self, m = EnumClass....):
        ...

I discovered that to avoid getting this error I can use:
ClassWithEnum.EnumClass
instead of:
EnumClass
inside the doSomethingWithEnum() function. So I am wondering if there is any other way to make EnumClass visible inside the doSomethingWithEnum() function. And why is there no such error when I declare EnumClass as a default parameter of doSomethingWithEnum()?
Python class construction executes as code. The def statement is really a line of code being executed that creates a function. The class keyword introduces a namespace. Putting these two mechanisms together, it means that class EnumClass really creates an object by that name in the current namespace, not much different from what foo = 'bar' does, so within the same namespace you can refer to it by that name, which is what happens in the def statement.
Also compare:
class Foo:
    bar = 'baz'
    print(bar)
    baz = bar
Every line of code inside a class block is a regular executable line of code.
Once your class definition is done, you're out of the ClassWithEnum namespace and cannot access EnumClass anymore simply by that name; it's now only available as ClassWithEnum.EnumClass; whether from "outside" the class or from within a function (any function, including class methods).
To get access to the class without typing its name from within the method you could do:
type(self).EnumClass
Or simply self.EnumClass, since attribute lookup on an instance falls back to its class.
When you are inside the doSomethingWithEnum function, you are in a different namespace. To access anything defined on your class, like EnumClass, use self.EnumClass. If it were a class method, it would be cls.EnumClass.
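Putting those pieces together, here is a small sketch (the method names other than doSomethingWithEnum are made up for illustration) of the ways to reach the inner class, including why the default-parameter form works: the default is evaluated while the class body is still executing, when EnumClass is a name in that namespace.

class ClassWithEnum(object):
    class EnumClass(object):
        pass

    def do_something(self):
        return self.EnumClass()          # instance attribute lookup falls back to the class

    @classmethod
    def do_something_cls(cls):
        return cls.EnumClass()           # class methods reach it through cls

    def do_something_default(self, m=EnumClass):   # evaluated during the class body, so it works
        return m()

e = ClassWithEnum()
e.do_something()
ClassWithEnum.do_something_cls()
e.do_something_default()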
I am learning OOP in Python and following this and this Stack Overflow answer and this post.
I understand how classes work and how methods are called and all that, but I have some doubts:
Consider this fragment of code:
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def distance(self):
        print(self.x)

    def bye(self):
        print(self.y)

a = Point(1, 2)
a.distance()
a.bye()
As I read in the tutorial:
when we call a method with some arguments, the corresponding class
function is called by placing the method's object before the first
argument. So, anything like obj.meth(args) becomes Class.meth(obj,
args).
when ObjectA.methodA(arg1, arg2) is called, python internally converts
it for you as:
ClassA.methodA(ObjectA, arg1, arg2)
Now my confusion is: why does the program need to call the class with each method?
Class.meth(obj, args) ??
Like when we call a.distance() it becomes Point.distance(a) because of self,
and when we call a.bye() it becomes Point.bye(a) because of self.
Why is the Point class necessary with each method? What would happen if we didn't use the Point class with each method?
Why can't simply meth(obj, args) work?
My main doubt is why it is called as Class.some_method for every method when we call it through the object. Why does it need the class with each one?
(If I am understanding right, it's necessary so that each method can access the instance's data, like variables and stuff?)
The key is
python internally converts it for you
From your standpoint:
meth(self, args) is the syntax you use to define member functions; and
obj.meth(args) is the syntax you use to call member functions.
The meth(obj,args) option is the way procedural languages work. That is often how the implementation works, but expressing the call as obj.meth(args) keeps focus on the object and makes it easier to read which data values (object instances) are being used.
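As a quick check (a minimal sketch using a Point class like the one in the question), both spellings run the same function:

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def distance(self):
        print(self.x)

a = Point(1, 2)
a.distance()         # the spelling you write; prints 1
Point.distance(a)    # the spelling it becomes internally; also prints 1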
Edit 1 If I understand your question correctly, you are asking why Python needs to know the class when it already has the instance available, and instances know their own types. In fact, Python fetches methods based on the instance all the time. I think the point the tutorial is making is that in Python, the class is the primary place the functions are defined. This is different from some object-oriented languages, in which each instance has its own methods, and they may be completely different from each other. So the tutorial is contrasting the usual approach in Python:
class Foo:
    def bar(self):
        pass
with an alternative (possible in Python, but not typical):
class Empty(object): pass     # a bare object() can't take new attributes, so use an empty class
foo = Empty()                 # an empty instance
foo.bar = lambda: None        # a function attached to this one instance only
Edit 2 Python methods normally live in the classes, not in the instances. Even if you create 1000 Point objects, there is only one copy of the actual instruction bytes for Point.distance. Those instruction bytes are executed anytime <some point variable>.distance() is called. You are correct that the self parameter is how those instruction bytes know what instance to work on, and how the method can access other data in the passed instance.
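Reusing the Point class sketched above, you can verify that there is a single function object shared by every instance, while self differs from call to call:

a = Point(1, 2)
b = Point(3, 4)

print(a.distance.__func__ is b.distance.__func__)   # True: one copy of the code
a.distance()   # prints 1 (self is a)
b.distance()   # prints 3 (self is b)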
Edit 3 self isn't exactly a namespace in the way that local vs. global is. However, it is fair to say that self.foo refers to a foo that is indeed accessible to all the methods of this instance of the current class. Given
a = Point(1,2)
b = Point(3,4)
inside a Point.distance call, self refers to a or b, but not both. So when you call a.distance(), the self.x will be a.x, not b.x. But all methods of Point can access self.x to get whatever the current point's x is.
Edit 4 Suppose you weren't using objects, but instead dictionaries:
a = {'x':1, 'y':2} # make a "point"
b = {'x':3, 'y':4} # make another
def point_distance(point):
    print(point['x'])
then you could say:
point_distance(a)
to get the effect of
print (a['x'])
Classes do basically that, with cleaner syntax and some nice benefits. But just as the point parameter to point_distance() refers to one and only one point-like dictionary each time you call point_distance(), the self parameter to Point.distance() refers to one and only one Point instance each time you call <whatever point>.distance().
Because you can have the same method name in different classes, and it needs to call the appropriate one. So if you have
class Class1:
    def meth(self):
        print("This is Class 1")

class Class2:
    def meth(self):
        print("This is Class 2")

c1 = Class1()
c2 = Class2()
c1.meth()  # equivalent to Class1.meth(c1)
c2.meth()  # equivalent to Class2.meth(c2)
If it translated c1.meth() to meth(c1), there's no way for the system to know which meth() function to call.
Classes define what is common to all instances of them. Usually this is the code comprising each of its methods. To apply this code to the correct instance object, the language interprets
instance.method(arg1, arg2, ...)
as
class_of_instance.method(instance, arg1, arg2, ...)
so the code is applied to the proper class instance.
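A rough way to see that translation yourself (a sketch with a made-up class, not how the interpreter is actually written):

class Greeter(object):
    def hello(self, name):
        print("hello, " + name)

g = Greeter()
g.hello("world")             # what you write
type(g).hello(g, "world")    # roughly what gets run: class_of_instance.method(instance, ...)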
name_player = None
health_player = None
inventory_player = []

class engine:
    print(name_player)
I have no idea why this runs without calling it with engine()
The Python interpreter starts by reading your file, one line at a time.
Step 1:
name_player = None binds the name name_player to None in the module's namespace (at module level, locals() is the same dictionary as globals()).
Steps 2 and 3 proceed in the same way.
Step 4: class engine: Python sees a class statement and starts building the class object. It collects the attribute and method definitions into the namespace dictionary that will become the class's __dict__. In order to do that, it executes the statements in the class body, in order.
So normally a class might look like
class Foo():
    def my_method(self):
        return "I'm foo!"
This would define a method and store that definition as part of the class object.
So your definition proceeds as follows. We've started creating the class object and then we come across a statement, so the interpreter executes it. In your case, it's a print statement, so you see it executed.
You'll see that if you now call engine(), another print won't happen.
What you probably want is to have this statement in a constructor like so:
class engine:
    def __init__(self):  # __init__() is a constructor in Python
        print(name_player)
For more information about classes in Python, see https://docs.python.org/2/tutorial/classes.html
When you define a class, python evaluates the statements making up the class's definition. If those statements have side effects, for example sending text to the standard output, then that text will get sent.
If you were to instantiate this, by calling engine(), you would get back an empty object.
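A minimal sketch of the difference (the class is renamed Engine here, and the messages are made up):

class Engine(object):
    print("class body: runs once, when the class statement is executed")

    def __init__(self):
        print("__init__: runs each time an instance is created")

Engine()   # prints the __init__ message
Engine()   # prints it again; the class-body message is not repeated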
I'm looking at the source code for a trie implementation
On lines 80-85:
def keys(self, prefix=[]):
    return self.__keys__(prefix)

def __keys__(self, prefix=[], seen=[]):
    result = []
    etc.
What is def __keys__? Is that a magic object that is self-created? If so, is this poor code? Or does __keys__ exist as a standard Python magic method? I can't find it anywhere in the Python documentation, though.
Why is it legal for the function to call self.__keys__ before def __keys__ is even instantiated? Wouldn't def __keys__ have to go before def keys (since keys calls __keys__)?
For your second question: it is legal. The functions of a class are defined when the class definition is executed, so you can be sure both functions exist before keys() is called. The same logic applies to normal functions; we can do:
>>> def a():
...     b()
...
>>> def b():
...     print("In B()")
...
>>> a()
In B()
This is legal because both a() and b() are defined before a() is called. It would only fail if you tried to call a() before b() had been defined. Please note that defining a function does not automatically call it, and Python does not check at definition time whether the names used inside a function are defined; that happens only at runtime, when the function is called, and an undefined name raises a NameError.
For your first question, I do not know of any magic method called __keys__(), and I cannot find it in the documentation either.
All of the real "magic methods" are in the data model documentation; __keys__ isn't one of them. The style guide says:
Never invent such names; only use them as documented.
so yes, making up a new one is bad form (the convention would have been to call it _keys).
The second part of your question doesn't make sense; even if this wasn't a class, there is no need to define methods and functions in the order they're called. As long as they exist by the time the call actually gets made, it's not a problem. I tend to define public methods before private ones, even though the former may call the latter, simply for the reader's convenience.
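For example, here is a stripped-down sketch (not the linked implementation) using the conventional _keys name, with the public method defined before the private helper it calls:

class Trie(object):
    def keys(self, prefix=None):
        # _keys is defined further down; the name is only looked up
        # when keys() actually runs, so the order doesn't matter
        return self._keys(prefix or [])

    def _keys(self, prefix, seen=None):
        return list(prefix)      # real traversal elided

print(Trie().keys(['a', 'b']))   # ['a', 'b']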
There is no magic method named __keys__(), so as you suspected this is just poor naming.
The code in the class definition can be in any order. All that matters is that the definition exists by the time the actual call is made downstream.
There is no magic method named __keys__, so it's just a poor choice of name. Looking at the code, the author just wanted a private method that is used internally and also called from the public method keys. As you can see, __keys__ accepts an additional argument.
About the second question, there is no need to define the functions in the same order as they are called. They only have to exist by the time the call is actually made.
The body of a class in Python is executed well before the class is instantiated.
Whenever the class type is created, the class block is compiled and executed, and the resulting functions (along with any classmethod/staticmethod objects) are stored in the class's __dict__. Nothing is copied onto individual instances: when you access instance.keys, attribute lookup falls through to the class and a bound method is created on the fly.
Therefore, at the moment instance.keys() is called, both keys and __keys__ already exist on the class and are reachable from the instance.
Also, there is no __keys__ method in the data model, as far as I know.
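A stripped-down sketch (not the linked trie code) makes that lookup visible:

class Trie(object):
    def keys(self):
        return self.__keys__()

    def __keys__(self):
        return []

t = Trie()
print('keys' in t.__dict__)      # False: nothing was copied onto the instance
print('keys' in Trie.__dict__)   # True: the function lives on the class
print(t.keys())                  # []: the bound method is built at lookup time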
class Ball:
    a = []

    def __init__(self):
        pass

    def add(self, thing):
        self.a.append(thing)

    def size(self):
        print(len(self.a))

for i in range(3):
    foo = Ball()
    foo.add(1)
    foo.add(2)
    foo.size()
I would expect a return of :
2
2
2
But I get :
2
4
6
Why is this? I've found that by doing a=[] in the init, I can route around this behavior, but I'm less than clear why.
doh
I just figured out why.
In the above case, a is a class attribute, not a data attribute - those are shared by all Ball instances. Commenting out the a = [] and placing it into the init block means that it's a data attribute instead. (And I couldn't access it then with foo.a, which I shouldn't do anyhow.) It seems that class attributes act like static attributes of the class: they're shared by all instances.
Whoa.
One question though: code completion sucks like this. In the Ball class, I can't do self.(variable), because it's not being defined automatically - it's being defined by a function. Can I define a class variable and replace it with a data variable?
What you probably want to do is:
class Ball:
    def __init__(self):
        self.a = []
If you use just a = [], it creates a local variable in the __init__ function, which disappears when the function returns. Assigning to self.a makes it an instance variable which is what you're after.
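With that change each Ball gets its own list, so the loop from the question prints 2 three times:

class Ball:
    def __init__(self):
        self.a = []              # a fresh list per instance

    def add(self, thing):
        self.a.append(thing)

    def size(self):
        print(len(self.a))

for i in range(3):
    foo = Ball()
    foo.add(1)
    foo.add(2)
    foo.size()                   # 2, then 2, then 2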
For a semi-related gotcha, see how you can change the value of default parameters for future callers.
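That gotcha is the mutable default argument: the default object is created once, when the def statement runs, and then shared across calls. A minimal sketch (not taken from the linked answer):

def append_to(item, bucket=[]):   # this one list is created once, at def time
    bucket.append(item)
    return bucket

print(append_to(1))   # [1]
print(append_to(2))   # [1, 2] -- the same list again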
"Can I define a class variable and replace it with a data variable?"
No. They're separate things. A class variable exists precisely once -- in the class.
You could -- to finesse code completion -- start with some class variables and then delete those lines of code after you've written your class. But every time you forget to do that nothing good will happen.
Better is to try a different IDE. Komodo Edit's code completions seem to be sensible.
If you have so many variables with such long names that code completion is actually helpful, perhaps you should make your classes smaller or use shorter names. Seriously.
I find that when you get to a place where code completion is more helpful than annoying, you've exceeded the "keep it all in my brain" complexity threshold. If the class won't fit in my brain, it's too complex.