Given a class C with a function or method f, I use inspect.ismethod(obj.f) (where obj is an instance of C) to find out if f is bound method or not. Is there a way to do the same directly at the class level (without creating an object)?
inspect.ismethod does not work as this:
class C(object):
    @staticmethod
    def st(x):
        pass

    def me(self):
        pass

obj = C()
results in this (in Python 3):
>>> inspect.ismethod(C.st)
False
>>> inspect.ismethod(C.me)
False
>>> inspect.ismethod(obj.st)
False
>>> inspect.ismethod(obj.me)
True
I guess I need to check whether the function/method is a member of a class and is not static, but I was not able to do it easily. I guess it could be done using classify_class_attrs, as shown in "How would you determine where each property and method of a Python class is defined?", but I was hoping there was a more direct way.
There are no unbound methods in Python 3, so you cannot detect them either. All you have is regular functions. At most you can see if the function has a qualified name containing a dot, indicating that it is nested (in a class or another function), and whether its first argument is named self:

if '.' in method.__qualname__ and inspect.getfullargspec(method).args[0] == 'self':
    # regular method. *Probably*
    # (getfullargspec replaces getargspec, which was removed in Python 3.11)
This of course fails entirely for static methods and nested functions that happen to have self as a first argument, as well as regular methods that do not use self as a first argument (flying in the face of convention).
For static methods and class methods, you'd have to look at the class dictionary instead:
>>> isinstance(vars(C)['st'], staticmethod)
True
That's because C.__dict__['st'] is the actual staticmethod instance, before binding to the class.
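Putting the pieces together, a minimal class-level classifier might look like this sketch (the helper name kind_of is mine, not part of inspect):

import inspect

def kind_of(cls, name):
    attr = vars(cls)[name]  # the raw attribute from the class dict, before binding
    if isinstance(attr, staticmethod):
        return 'static method'
    if isinstance(attr, classmethod):
        return 'class method'
    if inspect.isfunction(attr):
        return 'instance method'  # a plain function defined in the class body
    return 'other'

print(kind_of(C, 'st'))  # static method
print(kind_of(C, 'me'))  # instance method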
Could you use inspect.isroutine(...)? Running it with your class C I get:
>>> inspect.isroutine(C.st)
True
>>> inspect.isroutine(C.me)
True
>>> inspect.isroutine(obj.st)
True
>>> inspect.isroutine(obj.me)
True
Combining the results of inspect.isroutine(...) with the results of inspect.ismethod(...) may enable you to infer what you need to know.
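For example, one way to combine them (my own sketch, not a standard recipe) would be:

import inspect

def looks_static_on(obj, name):
    # A routine on an instance that is *not* a bound method is,
    # in Python 3, most likely a staticmethod.
    attr = getattr(obj, name)
    return inspect.isroutine(attr) and not inspect.ismethod(attr)

print(looks_static_on(obj, 'st'))  # True
print(looks_static_on(obj, 'me'))  # False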
Edit: dm03514's answer suggests you might also try inspect.isfunction():
>>> inspect.isfunction(obj.me)
False
>>> inspect.isfunction(obj.st)
True
>>> inspect.isfunction(C.st)
True
>>> inspect.isfunction(C.me)
False
Though as Hernan has pointed out, the results of inspect.isfunction(...) change in Python 3.
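For reference, on Python 3 (where attributes looked up on the class are plain functions) I would expect the unbound lookups to flip:

>>> inspect.isfunction(C.me)   # a plain function in Python 3
True
>>> inspect.isfunction(obj.me) # still a bound method, so still False
False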
Since inspect.ismethod returns True for both bound and unbound methods in Python 2.7 (i.e., it is broken), I'm using:
def is_bound_method(obj):
    return hasattr(obj, '__self__') and obj.__self__ is not None
It also works for methods of classes implemented in C, e.g., int:
>>> a = 1
>>> is_bound_method(a.__add__)
True
>>> is_bound_method(int.__add__)
False
But it is not very useful in that case, because inspect.getargspec does not work for functions implemented in C.
is_bound_method works unchanged in Python 3, but in Python 3, inspect.ismethod properly distinguishes between bound and unbound methods, so it is not necessary.
Related
I am attempting to write a test that checks if a variable holding the bound method of a class is the same as another reference to that method. Normally this is not a problem, but it does not appear to work when done within another method of the same class. Here is a minimal example:
class TestClass:
    def sample_method(self):
        pass

    def test_method(self, method_reference):
        print(method_reference is self.sample_method)
I am really using an assert instead of print, but that is neither here nor there since the end result is the same. The test is run as follows:
instance = TestClass()
instance.test_method(instance.sample_method)
The result is False even though I am expecting it to be True. The issue manifests itself in both Python 3.5 and Python 2.7 (running under Anaconda).
I understand that bound methods are closures that are acquired by doing something like TestClass.test_method.__get__(instance, type(instance)). However, I would expect that self.sample_method is already a reference to such a closure, so that self.sample_method and instance.sample_method represent the same reference.
Part of what is confusing me here is the output of the real pytest test that I am running (working on a PR for matplotlib):
assert <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>> is <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>>
E + where <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>> = <matplotlib.ticker.TransformFormatter object at 0x7f0101077e10>.transform
E + and <bound method TestTransformFormatter.transform1 of <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>> = <matplotlib.tests.test_ticker.TestTransformFormatter object at 0x7f0101077b70>.transform1
If I understand the output correctly, the actual comparison (the first line) is really comparing the same objects, but somehow turning up False. The only thing I can imagine at this point is that __get__ is in fact being called twice, but I know neither why/where/how, nor how to work around it.
They're not the same reference - the objects representing the two methods occupy different locations in memory:
>>> class TestClass:
...     def sample_method(self):
...         pass
...     def test_method(self, method_reference):
...         print(hex(id(method_reference)))
...         print(hex(id(self.sample_method)))
...
>>> instance = TestClass()
>>> instance.test_method(instance.sample_method)
0x7fed0cc561c8
0x7fed0cc4e688
Changing to method_reference == self.sample_method will make the assert pass, though.
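That works because equality on bound methods compares the underlying function and the instance they are bound to, as this sketch shows:

m1 = instance.sample_method
m2 = instance.sample_method
print(m1 is m2)                    # False: each lookup creates a new bound-method object
print(m1 == m2)                    # True: __func__ and __self__ both match
print(m1.__func__ is m2.__func__)  # True: same underlying function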
Edit, since the question was expanded: this seems like a flawed test; the actual functionality of the code probably does not require the references to be the same (is), just equal (==). So your change probably didn't break anything except the test itself.
While the accepted answer is in no way incorrect, it seems like it should be noted that methods are bound on attribute lookup. Furthermore, the behavior of unbound methods changes between Python 2.X and Python 3.X.
class A:
    def method(self):
        pass

a = A()
print(a.method is a.method)  # False
print(A.method is A.method)  # Python 3.X: True, Python 2.X: False
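The binding happens via the descriptor protocol: the lookup a.method is roughly equivalent to calling __get__ on the raw function, producing a fresh bound-method object each time. A sketch using the class A above:

bound = A.__dict__['method'].__get__(a, A)  # what a.method does behind the scenes
print(bound == a.method)  # True: equal by __func__ and __self__
print(bound is a.method)  # False: a freshly created object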
I need to determine if a given Python variable is an instance of a native type: str, int, float, bool, list, dict and so on. Is there an elegant way of doing it?
Or is this the only way:
if myvar in (str, int, float, bool):
    # do something
This is an old question but it seems none of the answers actually answer the specific question: "(How-to) Determine if Python variable is an instance of a built-in type". Note that it's not "[...] of a specific/given built-in type" but of a built-in type in general.
The proper way to determine if a given object is an instance of a built-in type/class is to check if the type of the object happens to be defined in the module __builtin__.
def is_builtin_class_instance(obj):
    return obj.__class__.__module__ == '__builtin__'
Warning: if obj is a class and not an instance, no matter if that class is built-in or not, True will be returned since a class is also an object, an instance of type (i.e. AnyClass.__class__ is type).
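Note that the module was renamed to builtins in Python 3, so a version-tolerant sketch of the same check might read:

def is_builtin_class_instance(obj):
    # '__builtin__' on Python 2, 'builtins' on Python 3
    return obj.__class__.__module__ in ('__builtin__', 'builtins')

print(is_builtin_class_instance(42))      # True
print(is_builtin_class_instance(object))  # True, per the warning above: classes are instances of type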
The best way to achieve this is to collect the types in a tuple called primitiveTypes and:
if isinstance(myvar, primitiveTypes): ...
The types module contains collections of all important types which can help to build the list/tuple.
Works since Python 2.2
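On Python 3 most of the old aliases are gone from types, so explicitly listing the types you care about is a reasonable way to build that tuple (the exact set below is my choice, not canonical):

primitiveTypes = (int, float, complex, bool, str, bytes, list, tuple, dict, set)

print(isinstance(42, primitiveTypes))        # True
print(isinstance(object(), primitiveTypes))  # False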
Not that I know why you would want to do it, as there aren't any "simple" types in Python; it's all objects. But this works:
type(theobject).__name__ in dir(__builtins__)
But explicitly listing the types is probably better as it's clearer. Or even better: Changing the application so you don't need to know the difference.
Update: the problem that needs solving is how to make a serializer for objects, even built-in ones. The best way to do this is not to make a big fat serializer that treats built-ins differently, but to look up serializers based on type.
Something like this:
def IntSerializer(theint):
    return str(theint)

def StringSerializer(thestring):
    return repr(thestring)

def MyOwnSerializer(value):
    return "whatever"

serializers = {
    int: IntSerializer,
    str: StringSerializer,
    mymodel.myclass: MyOwnSerializer,
}

def serialize(ob):
    try:
        return ob.serialize()  # for objects that know they need to be serialized
    except AttributeError:
        # Look up the serializer among the serializers based on type.
        # Default to using "repr" (works for most builtins).
        return serializers.get(type(ob), repr)(ob)
This way you can easily add new serializers, and the code is easy to maintain and clear, as each type has its own serializer. Notice how the fact that some types are builtin became completely irrelevant. :)
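Usage would look something like this (assuming the mymodel.myclass entry is removed or points at a real class):

print(serialize(42))      # '42' via IntSerializer
print(serialize("hi"))    # "'hi'" via StringSerializer
print(serialize([1, 2]))  # '[1, 2]' via the repr fallback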
You appear to be interested in ensuring that simplejson will handle your types. This is done trivially by:
try:
    json.dumps( object )
except TypeError:
    print "Can't convert", object
Which is more reliable than trying to guess which types your JSON implementation handles.
What is a "native type" in Python? Please don't base your code on types, use Duck Typing.
You can access all these types via the types module:

builtin_types = [i for i in types.__dict__.values() if isinstance(i, type)]
As a reminder, import the types module first (note that types.InstanceType exists only in Python 2):

def isBuiltinTypes(var):
    return type(var) in types.__dict__.values() and not isinstance(var, types.InstanceType)
It's 2020, I'm on Python 3.7, and none of the existing answers worked for me. What worked instead is the builtins module. Here's how:
import builtins
type(your_object).__name__ in dir(builtins)
The built-in type function may be helpful:
>>> a = 5
>>> type(a)
<type 'int'>
Building off of S.Lott's answer, you should have something like this:
from simplejson import JSONEncoder

class JSONEncodeAll(JSONEncoder):
    def default(self, obj):
        try:
            return JSONEncoder.default(self, obj)
        except TypeError:
            ## optionally:
            # try:
            #     # you'd have to add this per object, but if an object wants to do
            #     # something special then it can do whatever it wants
            #     return obj.__json__()
            # except AttributeError:
            #     pass
            ##
            # ...do whatever you are doing now...
            # (which should be creating an object simplejson understands)
            return repr(obj)  # placeholder so the sketch runs; replace with your handling
to use:
>>> json = JSONEncodeAll()
>>> json.encode(myObject)
# whatever myObject looks like when it passes through your serialization code
These calls will use your special class, and if simplejson can take care of the object it will. Otherwise your catch-all functionality will be triggered, and possibly (depending on whether you use the optional part) an object can define its own serialization.
For me the best option is:
allowed_modules = set(['numpy'])

def isprimitive(value):
    return not hasattr(value, '__dict__') or \
        value.__class__.__module__ in allowed_modules
This fixes the case where value is a module: a module's value.__class__.__module__ is '__builtin__', so the accepted answer's check would wrongly report it as a built-in instance.
The question asks how to check for non-class types. Such types don't have a __dict__ member (you could also test for a __repr__ member instead of checking for __dict__). Other answers mention checking for membership in types.__dict__.values(), but some of the types in that list are classes.
def isnonclasstype(val):
    return getattr(val, "__dict__", None) is not None

a = 2
print( isnonclasstype(a) )
a = "aaa"
print( isnonclasstype(a) )
a = [1, 2, 3]
print( isnonclasstype(a) )
a = { "1": 1, "2": 2 }
print( isnonclasstype(a) )

class Foo:
    def __init__(self):
        pass

a = Foo()
print( isnonclasstype(a) )
gives me:
> python3 t.py
False
False
False
False
True
> python t.py
False
False
False
False
True
I'm writing a Python script to parse some data. At the moment I'm trying to make a class which creates "placeholder" objects. I intend to repeatedly pass values to each "placeholder" and finally turn it into a dict, float, list or string. For reasons that would take a while to describe, it would be a lot easier if I could replace the instance by calling a method on it.
Here's a simplified example
class Event( dict ):
    def __init__( self ):
        self.sumA = 0.0
        self.sumB = 0.0

    def augmentA( self, i ):
        self.sumA += i

    def augmentB( self, i ):
        self.sumB += i

    def seal( self ):
        if self.sumA != 0 and self.sumB != 0:
            self = [ self.sumA, self.sumB ]
        elif self.sumA != 0:
            self = float( self.sumA )
        elif self.sumB != 0:
            self = float( self.sumB )
And what I want to do is:
e = Event()
e.augmentA( 1 )
e.augmentA( 2 )
e.seal()
...and have 'e' turn into a float.
What I am hoping to avoid is:
e = Event()
e.augmentA( 1 )
e.augmentA( 2 )
e = e.getSealedValue()
I totally understand that "self" in my "seal" method is just a local variable, and won't have any effect on the instance outside that scope. I'm unsure however how to achieve what I want from within the instance, where it would be most convenient for my code. I also understand I could override all the built-ins (__getitem__, __str__, and so on), but that complicates my code a lot.
I'm a Python noob so I'm unsure if this is even possible. Indulge me, please :)
Under some circumstances, Python allows you to change the class of an object on the fly. However, not every object can be converted to any class, as the example below demonstrates (newlines added for readability):
>>> class A(object):
...     pass
...
>>> class B(object):
...     pass
...
>>> a = A()
>>> type(a)
<class '__main__.A'>
>>> a.__class__ = B
>>> type(a)
<class '__main__.B'>
>>> a.__class__ = int
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __class__ assignment: only for heap types
(I don't know the exact rules off the top of my head, but if your classes use __slots__, for instance, they must be compatible for the conversion to be possible.)
However, as other answerers pointed out, in general it's a very bad idea to do so, even if there were a way to convert every reference to one object into a reference to another. I wouldn't go as far as saying "never do that", though; there might be legitimate uses of this technique (for instance, I see it as an easy way of implementing the State design pattern without creating unnecessary clutter, as sketched below).
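For what it's worth, here is a minimal sketch of that State-pattern use; the class names and methods are mine, purely for illustration:

class ClosedConnection(object):
    def send(self, data):
        raise RuntimeError("connection is closed")

    def open(self):
        # Both classes are ordinary heap types with compatible layouts,
        # so this assignment is allowed.
        self.__class__ = OpenConnection

class OpenConnection(ClosedConnection):
    def send(self, data):
        print("sending", data)

conn = ClosedConnection()
conn.open()
conn.send("hello")  # conn now behaves as an OpenConnection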
Even if there was some sane way to make it work, I would avoid going down this path simply because it changes the type of the object to something completely incompatible with its existing contract.
Now, one could say "but I'm only using it in one place, other code paths won't ever see the old contract", but unfortunately that isn't an argument for this mechanism since you could simply only make the value available for the other code paths once you have the final object.
In short, don't do this and don't even try.
No, you cannot have a variable's value replace itself by calling a method on it. The normal way to do this would be what you stated: e = e.getSealedValue()
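A sketch of what that getSealedValue method might look like on the Event class above (the exact sealing rules are my guess at the intent):

def getSealedValue( self ):
    # Return the sealed value; the caller rebinds its own name to it.
    if self.sumA != 0 and self.sumB != 0:
        return [ self.sumA, self.sumB ]
    return float( self.sumA or self.sumB )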
You can make an object change behavior, but that's generally considered a bad idea and is likely to result in highly unmaintainable code.
Is there a way to see if a class responds to a method in Python, like in Ruby:
class Fun
  def hello
    puts 'Hello'
  end
end

fun = Fun.new
puts fun.respond_to? 'hello' # true
Also is there a way to see how many arguments the method requires?
Hmmm .... I'd think that hasattr and callable would be the easiest way to accomplish the same goal:
class Fun:
    def hello(self):
        print 'Hello'

hasattr(Fun, 'hello')   # -> True
callable(Fun.hello)     # -> True
You could, of course, perform the callable check from within an exception-handling suite, to also catch attributes that don't exist:
try:
    callable(Fun.goodbye)
except AttributeError, e:
    return False
As for introspection on the number of required arguments; I think that would be of dubious value to the language (even if it existed in Python) because that would tell you nothing about the required semantics. Given both the ease with which one can define optional/defaulted arguments and variable argument functions and methods in Python it seems that knowing the "required" number of arguments for a function would be of very little value (from a programmatic/introspective perspective).
Has method:
func = getattr(Fun, "hello", None)
if callable(func):
    ...
Arity:
import inspect
args, varargs, varkw, defaults = inspect.getargspec(Fun.hello)
arity = len(args)
Note that arity can be pretty much anything if you have varargs and/or varkw not None.
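On Python 3, inspect.getargspec is deprecated (and removed in 3.11); inspect.signature covers the same ground. A quick sketch, assuming a Python 3 version of the Fun class:

import inspect

sig = inspect.signature(Fun.hello)
arity = len(sig.parameters)  # counts self, since Fun.hello is a plain function here
print(arity)  # 1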
dir(instance) returns a list of an objects attributes.
getattr(instance,"attr") returns an object's attribute.
callable(x) returns True if x is callable.
class Fun(object):
    def hello(self):
        print "Hello"

f = Fun()
callable(getattr(f, 'hello'))
I am no Ruby expert, so I am not sure if this answers your question. I think you want to check whether an object has a method. There are numerous ways to do so. You can use the hasattr() function to see if the object has the method:
hasattr(fun, "hello") #True
Or you can follow the Python guideline "easier to ask forgiveness than permission" and just catch the exception thrown when the object doesn't have the method:
try:
fun.hello2()
except AttributeError:
print("fun does not have the attribute hello2")