I am attempting to write a test that checks if a variable holding the bound method of a class is the same as another reference to that method. Normally this is not a problem, but it does not appear to work when done within another method of the same class. Here is a minimal example:
class TestClass:
    def sample_method(self):
        pass

    def test_method(self, method_reference):
        print(method_reference is self.sample_method)
I am really using an assert instead of print, but that is neither here nor there since the end result is the same. The test is run as follows:
instance = TestClass()
instance.test_method(instance.sample_method)
The result is False even though I am expecting it to be True. The issue manifests itself in both Python 3.5 and Python 2.7 (running under Anaconda).
I understand that bound methods are closures that are acquired by doing something like TestClass.test_method.__get__(instance, type(instance)). However, I would expect that self.sample_method is already a reference to such a closure, so that self.sample_method and instance.sample_method represent the same reference.
Part of what is confusing me here is the output of the real pytest test that I am running (working on a PR for matplotlib):
assert <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>> is <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>>
E + where <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>> = <matplotlib.ticker.TransformFormatter object at 0x7f0101077e10>.transform
E + and <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>> = <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>.transform1
If I understand the output correctly, the actual comparison (the first line) is really comparing the same objects, but somehow turning up False. The only thing I can imagine at this point is that __get__ is in fact being called twice, but I know neither why/where/how, nor how to work around it.
They're not the same reference - the objects representing the two methods occupy different locations in memory:
>>> class TestClass:
...     def sample_method(self):
...         pass
...     def test_method(self, method_reference):
...         print(hex(id(method_reference)))
...         print(hex(id(self.sample_method)))
...
>>> instance = TestClass()
>>> instance.test_method(instance.sample_method)
0x7fed0cc561c8
0x7fed0cc4e688
Changing to method_reference == self.sample_method will make the assert pass, though.
Edit, since the question was expanded: this looks like a flawed test. The actual functionality of the code probably does not require the references to be identical (is), just equal (==), so your change most likely didn't break anything except the test.
While the accepted answer is in no way incorrect, it seems like it should be noted that methods are bound on attribute lookup. Furthermore, the behavior of unbound methods changes between Python 2.X and Python 3.X.
class A:
    def method(self):
        pass

a = A()
print(a.method is a.method)  # False
print(A.method is A.method)  # Python 3.X: True, Python 2.X: False
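To spell out what is created on each lookup: every access of a.method builds a fresh bound-method object, but both wrappers share the same underlying function and instance, which is why == succeeds while is fails. A short illustration (Python 3, reusing the class A above):
m1, m2 = a.method, a.method        # two distinct bound-method objects
print(m1 is m2)                    # False: different wrapper objects
print(m1 == m2)                    # True: same __func__ and same __self__
print(m1.__func__ is A.method)     # True: both wrap the one plain function
print(m1.__self__ is a)            # True: both are bound to the same instance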
Folks,
After much searching and reading I have come to the conclusion that methods use () and attributes don't.
Example using arr = np.arange(25):
To find the size of the array I'd use arr.size. This is an attribute.
To find the max of the array I'd use arr.max(). This is a method.
To me, as an amateur Python coder, I can't for the life of me tell what the actual difference is. (Note: I do understand that an attribute is typically set up in __init__ while methods are not.) Is it really just that the person who wrote the class arbitrarily decided to make size an attribute and max a method? Is there any way, while writing code, to intuitively know when to use () and when not to, without looking up the list of methods and attributes for each class?
Thanks for the help and sorry if I have any of the terms incorrect.
The full story is a little complicated (for all of it, see "Python: the __getattribute__ method and descriptors" and "python __getattribute__ override and @property decorator", and follow the links to the wiki on descriptor protocols), but in short, you write:
somevar.thing()
when you want to call the thing, and you write:
somevar.thing
when you want to use the value of the thing. This usage is the same as with functions and non-functions:
def f(arg):
    print('f called, arg =', arg)
    return 42

x = f('douglas adams')
print('f returned', x)

y = x
print('I just set y to x:', y)

y = f
print('this time I did not call', y)
which, when run (as Python3 or with from __future__ import print_function in Python2) prints:
f called, arg = douglas adams
f returned 42
I just set y to x: 42
this time I did not call <function f at ...>
If we try to do y = x(), it fails because we cannot call 42.
If you are going to define how the thing is to be used, define it as a method if it needs to be called, and as an instance attribute if it's just to be used (and/or set to some value). If you make the wrong decision (you make it a plain used/set instance attribute, and it turns out later you need a function) you can work around it later with @property.
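For instance, here is a minimal sketch of that workaround, using a hypothetical Circle class whose area started out as a plain attribute and later needs to be computed:
class Circle:
    def __init__(self, radius):
        self.radius = radius

    @property
    def area(self):                     # still accessed as c.area, no parentheses
        return 3.14159 * self.radius ** 2

c = Circle(2.0)
print(c.area)                           # computed on access, but reads like an attribute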
What's special about instance methods is that when you call them—or even when you don't—you get an extra self argument. The implementation by which this occurs is different in Python2 and Python3, but:
class K(object):
    def method(self, arg):
        print('method called, arg is', arg)

x = K()
x.method(42)
prints:
method called, arg is 42
Note that if we don't call it, we see this as a "bound method":
print('x.method is', x.method)
produces:
x.method is <bound method K.method of <__main__.K object at ...>>
If we look at K.method directly (say, by adding print('K.method is', K.method)), the difference between Python2 and Python3 shows up:
$ python2 x.py
K.method is <unbound method K.method>
$ python3.6 x.py
K.method is <function K.method at ...>
but in the end it's all just descriptor protocols, with the CPython implementation able to do some short-cutting.
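To make the "it's all descriptors" point concrete, here is a small illustration that spells out by hand the binding that x.method performs (using the K class above):
x = K()
raw = vars(K)['method']       # the plain function stored in the class dict
bound = raw.__get__(x, K)     # functions are descriptors; __get__ does the binding
print(bound)                  # <bound method K.method of <__main__.K object at ...>>
bound(42)                     # prints: method called, arg is 42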
In Python, I'm trying to track a depth of recursion (dor). Before adding all the dor bookkeeping, my code worked more or less fine, but after adding the dor stuff I received an AttributeError. Here is the function where I'm receiving the error:
def parse(json, dor=0):
    parse.index[dor] = 0
    parse.keyList[dor] = []
    parse.jsonDict[dor] = dict()
    parse.json[dor] = remove_white(json)
Disclaimer: what you are doing is most likely The Wrong Thing To Do
Your code worked before because (I assume) you were setting an attribute on a function object:
def foo():
    foo.bar = 4
When run, the function object sets an attribute bar on itself. However, when you added the __setitem__ (with the square brackets):
def foo():
    foo.bar[dor] = 4
You're now saying that you want to modify foo.bar, but foo.bar doesn't exist yet! You can "fix" this by setting up the object manually, before you run it for the first time:
def foo(dor=0):
    foo.bar[dor] = 4

foo.bar = {}   # set up the dict before the first call
foo()
Most likely, you want to avoid this whole mess altogether by using a separate object to keep track of the recursion depth in your code. Just because you can do something doesn't mean you should.
EDIT: Looking at your code, it seems like you should be using a class instead of a function for parse. Using a class makes sense because you're encapsulating mutable state with a set of methods that act on it. Of course, I'm also obligated to point you to the standard library JSON module.
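As a rough sketch of that refactoring (the names here are hypothetical, not taken from your code), the recursion depth and the accumulated state become instance attributes instead of function attributes:
class Parser:
    def __init__(self):
        self.depth = 0           # recursion depth lives on the instance
        self.key_lists = {}      # whatever per-depth state you need

    def parse(self, text, depth=0):
        self.depth = depth
        self.key_lists[depth] = []
        # ... recurse with self.parse(subtext, depth + 1) where needed ...
        return self.key_lists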
I'm writing a Python script to parse some data. At the moment I'm trying to make a class that creates "placeholder" objects. I intend to repeatedly pass values to each "placeholder" object, and finally turn it into a dict, float, list or string. For reasons that would take a while to describe, it would be a lot easier if I could replace the instance by calling a method on it.
Here's a simplified example
class Event(dict):
    def __init__(self):
        self.sumA = 0.0
        self.sumB = 0.0

    def augmentA(self, i):
        self.sumA += i

    def augmentB(self, i):
        self.sumB += i

    def seal(self):
        if self.sumA != 0 and self.sumB != 0:
            self = [self.sumA, self.sumB]
        elif self.sumA != 0:
            self = float(self.sumA)
        elif self.sumB != 0:
            self = float(self.sumB)
And what I want to do is:
e = Event()
e.augmentA( 1 )
e.augmentA( 2 )
e.seal()
...and have 'e' turn into a float.
What I am hoping to avoid is:
e = Event()
e.augmentA( 1 )
e.augmentA( 2 )
e = e.getSealedValue()
I totally understand that "self" in my "seal" method is just a local variable, and rebinding it won't have any effect on the instance outside that scope. I'm unsure, however, how to achieve what I want from within the instance, where it would be most convenient for my code. I also understand I could override all the built-ins (__getitem__, __str__, and so on), but that complicates my code a lot.
I'm a Python noob so I'm unsure if this is even possible. Indulge me, please :)
Under some circumstances, Python allows you to change the class of an object on the fly. However, not every object can be converted to any class, as the example below demonstrates (newlines added for readability):
>>> class A(object):
...     pass
...
>>> class B(object):
...     pass
...
>>> a = A()
>>> type(a)
<class '__main__.A'>
>>> a.__class__ = B
>>> type(a)
<class '__main__.B'>
>>> a.__class__ = int
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __class__ assignment: only for heap types
(I don't know the exact rules off the top of my head, but if your classes use __slots__, for instance, they must be compatible for the conversion to be possible.)
However, as other answerers pointed out, in general it's a very bad idea to do so, even if there was a way to convert every reference to one object to a reference to another. I wouldn't go as far as saying never do that though, there might be legitimate uses of this technique (for instance, I see it as an easy way of implementing the State Design Pattern without creating unnecessary clutter).
Even if there was some sane way to make it work, I would avoid going down this path simply because it changes the type of the object to something completely incompatible with its existing contract.
Now, one could say "but I'm only using it in one place, other code paths won't ever see the old contract", but unfortunately that isn't an argument for this mechanism, since you could simply make the value available to the other code paths only once you have the final object.
In short, don't do this and don't even try.
No, you cannot have a variable's value replace itself by calling a method on it. The normal way to do this would be what you stated: e = e.getSealedValue()
You can make an object change behavior, but that's generally considered a bad idea and is likely to result in highly unmaintainable code.
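For completeness, a minimal sketch of that normal approach (the method name getSealedValue is taken from the question; its body here is an assumption about what it would do):
class Event(object):
    def __init__(self):
        self.sumA = 0.0
        self.sumB = 0.0

    def augmentA(self, i):
        self.sumA += i

    def augmentB(self, i):
        self.sumB += i

    def getSealedValue(self):
        # return a plain value instead of trying to rebind self
        if self.sumA and self.sumB:
            return [self.sumA, self.sumB]
        return float(self.sumA or self.sumB)

e = Event()
e.augmentA(1)
e.augmentA(2)
e = e.getSealedValue()   # e is now the float 3.0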
Given a class C with a function or method f, I use inspect.ismethod(obj.f) (where obj is an instance of C) to find out whether f is a bound method or not. Is there a way to do the same directly at the class level (without creating an object)?
inspect.ismethod does not work here, as this:
class C(object):
    @staticmethod
    def st(x):
        pass

    def me(self):
        pass

obj = C()
results in this (in Python 3):
>>> inspect.ismethod(C.st)
False
>>> inspect.ismethod(C.me)
False
>>> inspect.ismethod(obj.st)
False
>>> inspect.ismethod(obj.me)
True
I guess I need to check whether the function/method is a member of a class and not static, but I was not able to do it easily. I guess it could be done using classify_class_attrs as shown here:
How would you determine where each property and method of a Python class is defined?
but I was hoping there was another more direct way.
There are no unbound methods in Python 3, so you cannot detect them either. All you have is regular functions. At most you can see if they have a qualified name with a dot, indicating that they are nested, and their first argument name is self:
if '.' in method.__qualname__ and inspect.getargspec(method).args[0] == 'self':
    # regular method. *Probably*
This of course fails entirely for static methods and nested functions that happen to have self as a first argument, as well as regular methods that do not use self as a first argument (flying in the face of convention).
For static methods and class methods, you'd have to look at the class dictionary instead:
>>> isinstance(vars(C)['st'], staticmethod)
True
That's because C.__dict__['st'] is the actual staticmethod instance, before binding to the class.
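Putting the pieces together, here is a sketch of a purely class-level check; it combines the class-dictionary lookup above with inspect.isfunction, and is an outline rather than a complete classifier:
import inspect

def kind_of(cls, name):
    raw = vars(cls)[name]                 # the unbound entry from the class dict
    if isinstance(raw, staticmethod):
        return 'static method'
    if isinstance(raw, classmethod):
        return 'class method'
    if inspect.isfunction(raw):
        return 'regular (instance) method'
    return 'other attribute'

print(kind_of(C, 'st'))   # static method
print(kind_of(C, 'me'))   # regular (instance) method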
Could you use inspect.isroutine(...)? Running it with your class C I get:
>>> inspect.isroutine(C.st)
True
>>> inspect.isroutine(C.me)
True
>>> inspect.isroutine(obj.st)
True
>>> inspect.isroutine(obj.me)
True
Combining the results of inspect.isroutine(...) with the results of inspect.ismethod(...) may enable you to infer what you need to know.
Edit: dm03514's answer suggests you might also try inspect.isfunction():
>>> inspect.isfunction(obj.me)
False
>>> inspect.isfunction(obj.st)
True
>>> inspect.isfunction(C.st)
True
>>> inspect.isfunction(C.me)
False
Though as Hernan has pointed out, the results of inspect.isfunction(...) change in python 3.
Since inspect.ismethod returns True for both bound and unbound methods in Python 2.7 (i.e., it is broken for this purpose), I'm using:
def is_bound_method(obj):
    return hasattr(obj, '__self__') and obj.__self__ is not None
It also works for methods of classes implemented in C, e.g., int:
>>> a = 1
>>> is_bound_method(a.__add__)
True
>>> is_bound_method(int.__add__)
False
But is not very useful in that case because inspect.getargspec does not work for functions implemented in C.
is_bound_method works unchanged in Python 3, but it is not necessary there: Python 3 has no unbound methods, so inspect.ismethod already returns True only for bound methods.
I would like to do this:
def foo():
    if <a magical condition>:
        return x
    else:
        poof()

# or...

def foo():
    x = <a magical object>
    return x

def poof():
    print 'poof!'
bar = foo() # bar points to <a magical object> but poof() is not called
foo() # prints 'poof!'
I guess it comes down to what the circumstances are when the returned object's __del__ method is called. But maybe there is a better way, like if the function itself knew its return value was being assigned. I'm worried about relying on the timing of garbage collection, and I also don't like that global at_end_of_program flag.
My solution:
class Magic:
    def __del__(s):
        poof()

def foo():
    x = Magic()
    return x

def poof():
    if not at_end_of_program:
        print 'poof!'
bar = foo() # No poof.
foo() # prints 'poof!'
I'm pretty confused by your question, but I think what you are trying to do is run a function when a value is reassigned.
Instead of doing tricky things with a __del__() method, I suggest you just put your value into a class instance and then overload __setattr__(). You could also overload __delattr__() to make sure you catch del obj.x for your value x.
The very purpose of __setattr__() is to give you a hook to catch when something assigns to a member of your class. And you won't need any strange end_of_program flag. At the end of your program, just get rid of your overloaded function for __delattr__() so it doesn't get called for end-of-program cleanup.
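A minimal sketch of that idea (the Watched class and the attribute name x are hypothetical, and the code uses Python 2 print syntax to match the question):
class Watched(object):
    def __setattr__(self, name, value):
        if name == 'x' and 'x' in self.__dict__:
            poof()                        # the old value of x is being replaced
        object.__setattr__(self, name, value)

    def __delattr__(self, name):
        if name == 'x':
            poof()                        # x is being deleted explicitly
        object.__delattr__(self, name)

def poof():
    print 'poof!'

holder = Watched()
holder.x = 42    # first assignment, no poof
holder.x = 99    # reassignment, prints 'poof!'
del holder.x     # deletion, prints 'poof!'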
A function can't tell what its return value is used for. Your solution will print poof if you re-assign to bar for example.
What's the real problem you are trying to solve?