python getattr() with multiple params

The construction getattr(obj, 'attr1.attr2', None) does not work.
What is the best practice for replacing this construction?
Splitting it into two getattr calls?

You can use operator.attrgetter() to get multiple attributes at once:
from operator import attrgetter
my_attrs = attrgetter('attr1', 'attr2')(obj)  # a tuple of the two attribute values
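For instance, a quick check with a complex number (whose real and imag attributes are guaranteed to exist) shows that attrgetter with several names returns a tuple:
from operator import attrgetter
real, imag = attrgetter('real', 'imag')(3 + 4j)
print(real, imag)  # 3.0 4.0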

As stated in this answer, the most straightforward solution is to use operator.attrgetter (more info on this Python docs page).
If, for some reason, this solution doesn't make you happy, you can use this code snippet:
def multi_getattr(obj, attr, default=None):
    """
    Get a named attribute from an object; multi_getattr(x, 'a.b.c.d') is
    equivalent to x.a.b.c.d. When a default argument is given, it is
    returned when any attribute in the chain doesn't exist; without
    it, an exception is raised when a missing attribute is encountered.
    """
    attributes = attr.split(".")
    for i in attributes:
        try:
            obj = getattr(obj, i)
        except AttributeError:
            if default is not None:
                return default
            else:
                raise
    return obj

# Example usage
obj = [1, 2, 3]
attr = "append.__doc__.capitalize.__doc__"
multi_getattr(obj, attr)  # returns the docstring for the capitalize
                          # method of the built-in string object
It comes from this page and does work; I tested and used it.
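If you'd rather not write the loop yourself, a shorter variant (just a sketch, not from the linked page; the name multi_getattr_reduce is made up) chains getattr with functools.reduce:
from functools import reduce

def multi_getattr_reduce(obj, attr, default=None):
    # reduce(getattr, ['a', 'b'], obj) evaluates getattr(getattr(obj, 'a'), 'b')
    try:
        return reduce(getattr, attr.split("."), obj)
    except AttributeError:
        return default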

I would suggest using something like this:
from operator import attrgetter
attrgetter('attr0.attr1.attr2.attr3')(obj)
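Note that attrgetter has no default parameter; if any attribute in the chain is missing, it raises AttributeError, so a fallback has to be added by hand (a minimal sketch around the line above):
from operator import attrgetter
try:
    value = attrgetter('attr0.attr1.attr2.attr3')(obj)
except AttributeError:
    value = None  # or whatever default you want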

If you have the attribute names you want to get in a list, you can do the following:
my_attrs = [getattr(obj, attr) for attr in attr_list]
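If some of those attributes may be missing, getattr's third argument supplies a default for each one (attr_list is assumed to already hold the names as strings):
my_attrs = [getattr(obj, attr, None) for attr in attr_list]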

A simple, but not very elegant, way to get multiple attributes would be to use tuples, with or without brackets, something like
aval, bval = getattr(myObj, "a"), getattr(myObj, "b")
but I think you might instead be wanting to get an attribute of a contained object, given the way you are using dot notation. In that case it would be something like
getattr(myObj.contained, "c")
where contained is an object contained within the myObj object and c is an attribute of contained. Let me know if this is not what you want.

Related

How can I call a method from a name string in Python?

I am using an API to call specific information from a website. I need to be able to parse through the list to utilize the functions. Example:
list = ['doThis','doThat']
for item in list:
    sampleobject.item
The issue is when I use this, I get an error saying "sampleobject has no attribute 'item'".
Is there a way that I can pull the quote out of the string to do this?
Try:
methods = ['doThis','doThat']
for method_name in methods:
    method = getattr(sampleobject, method_name)
    method()
Though it would be easier to do:
sampleobject.doThis()
sampleobject.doThat()
You can call getattr(sampleobject, item) to get the content of the attribute whose name equals what is stored in item, an element of your list.
I think the problem is not about quotes at all. The problem is that the syntax object.member means: evaluate the attribute named member on the object stored in the variable object. You expect it to mean: evaluate the attribute whose name is stored in member.
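A slightly more defensive sketch (the doesNotExist entry is made up) skips names the object doesn't actually provide:
method_names = ['doThis', 'doThat', 'doesNotExist']
for name in method_names:
    method = getattr(sampleobject, name, None)
    if callable(method):
        method()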

what's the right way to put *arg in a tuple that can be sorted?

I want a dict or tuple I can sort based on attributes of the objects I'm using as arguments for *arg. The way I've been trying to do it just gives me AttributeErrors, which leads me to believe I'm doing it weird.
def function(*arg):
    items = {}
    for thing in arg:
        items.update({thing.name: thing})
    while True:
        for thing in items:
            ## lots of other code here, basically just a game loop.
            ## Problem is that the 'turn order' is based on whatever
            ## Python decides the order of arguments is inside "items".
            ## I'd like to be able to sort the dict based on each object's
            ## attributes (ie, highest 'thing.speed' goes first in the while loop)
The problem is that when I try to sort "items" based on an attribute of the objects I put into function(), it gives me "AttributeError: 'str' object has no attribute 'attribute'". Which leads me to believe I'm either unpacking *arg in a lousy way, or I'm trying to do something the wrong way.
while True:
    for thing in sorted(items, key=attrgetter('attribute')):
...doesn't work either; it keeps telling me I'm trying to manipulate a 'str' object. What am I not doing here?
arg already is a tuple you can sort by an attribute of each item:
def function(*args):
    for thing in sorted(args, key=attrgetter('attribute')):
When you iterate over a dict, as sorted is doing, you just get the keys, not the values. So, if you want to use a dict, you need to do:
def function(*args):
    # or use a dict comprehension on 2.7+
    items = dict((thing.name, thing) for thing in args)
    # or just items.values on 3+
    for thing in sorted(items.itervalues(), key=attrgetter('attribute')):
to actually sort the args by an attribute. If you want the keys of the dict available as well (not necessary here because the key is also an attribute of the item), use something like:
for name, thing in sorted(items.iteritems(), key=lambda item: item[1].attribute):
Your items is a dict, and you can't meaningfully sort a dict. When you iterate over it, as sorted does, you get only its keys, which here are strings. And you never use your arg after creating the dict.
If you don't need dict lookup, since you just iterate through it, you can replace the dict with a list of 2-tuples (thing.name, thing), sort it by any attribute and iterate through it. You can also use collections.OrderedDict from Python 2.7 (it exists as a separate ordereddict package for earlier versions) if you really want both dict lookup and ordering.
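Putting that together for the game-loop case (a sketch; it assumes each thing has the speed attribute mentioned in the question):
from operator import attrgetter

def function(*args):
    # highest speed acts first on each pass
    for thing in sorted(args, key=attrgetter('speed'), reverse=True):
        ...  # rest of the game loop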
{edit} Thanks to agf, I understood the problem. So what I wrote below is a good answer in itself, but not in relation to the question above... I leave it here for the record.
Looking at the answers, I may not have understood the question. But here's my understanding: as args is a tuple of the arguments you give to your function, it's likely that none of these arguments is an object with a name attribute. And, looking at the errors you report, you're passing string arguments.
Maybe some illustration will help my description:
>>> # defining a function using name attribute
>>> def f(*args):
...     for arg in args:
...         print arg.name
>>> # defining an object with a name attribute
>>> class o(object):
...     def __init__(self, name):
...         self.name = name
>>> # now applying the function on the previous object, and on a string
>>> f( o('arg 1'), 'arg 2' )
arg 1
Traceback (most recent call last):
  File "<pyshell#9>", line 1, in <module>
    f(o('arg 1'), 'ets')
  File "<pyshell#3>", line 3, in f
    print arg.name
AttributeError: 'str' object has no attribute 'name'
This fails because strings have no such attribute.
To me, there is a mistake in your code: you're trying to use the name attribute on your inputs without ever verifying that they have such an attribute. Maybe you should test with hasattr first:
>>> if hasattr(arg, 'name'):
...     print arg.name
... else:
...     print arg
or with some inspection of the input, to verify whether it's an instance of a given class known to have the requested attribute.
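For instance, a quick sketch of that isinstance check, reusing the o class from the session above (Python 3 print syntax here):
for arg in args:
    if isinstance(arg, o):   # only o instances are known to have .name
        print(arg.name)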

Python getattr equivalent for dictionaries?

What's the most succinct way of saying, in Python, "Give me dict['foo'] if it exists, and if not, give me this other value bar"? If I were using an object rather than a dictionary, I'd use getattr:
getattr(obj, 'foo', bar)
but this raises a KeyError if I try using a dictionary instead (a distinction I find unfortunate coming from JavaScript/CoffeeScript). Likewise, in JavaScript/CoffeeScript I'd just write
dict['foo'] || bar
but, again, this yields a KeyError. What to do? Something succinct, please!
dict.get(key, default) returns dict[key] if key is in dict, and otherwise returns default.
Note that the default for default is None, so if you say dict.get(key) and key is not in dict, this will just return None rather than raising a KeyError, as happens when you use the [] access notation.
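A few concrete lines (the keys are made up):
d = {'spam': 1}
d.get('foo', 'bar')   # 'bar': key missing, default returned
d.get('spam', 'bar')  # 1
d.get('foo')          # None: the default default is None
d['foo']              # raises KeyError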
Also take a look at the collections module's defaultdict class. It's a dict for which you can specify what it must return when a key is not found. With it you can do things like:
class MyDefaultObj:
    def __init__(self):
        self.a = 1

from collections import defaultdict
d = defaultdict(MyDefaultObj)
i = d['NonExistentKey']
type(i)
<instance of class MyDefaultObj>
which allows you to use the familiar d[i] convention.
However, as mikej said, .get() also works; here is the form closer to your JavaScript example:
d = {}
i = d.get('NonExistentKey') or MyDefaultObj()
# The reason this is slightly better than d.get('NonExistent', MyDefaultObj())
# is that instantiation of the default value happens only when 'NonExistent' does not exist.
# With d.get('NonExistent', MyDefaultObj()) you spin up a default every time you call .get().
type(i)
<instance of class MyDefaultObj>
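One caveat with the or form: a stored value that is falsy (0, '', [], None) is silently replaced by the default, which the two-argument .get() does not do:
d = {'count': 0}
d.get('count') or 42   # 42, probably not what you want
d.get('count', 42)     # 0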

Determine if Python variable is an instance of a built-in type

I need to determine whether a given Python variable is an instance of a native type: str, int, float, bool, list, dict and so on. Is there an elegant way of doing it?
Or is this the only way:
if myvar in (str, int, float, bool):
    # do something
This is an old question, but it seems none of the answers actually answer the specific question: "(How to) Determine if a Python variable is an instance of a built-in type". Note that it's not "[...] of a specific/given built-in type" but of any built-in type.
The proper way to determine whether a given object is an instance of a built-in type/class is to check whether the type of the object happens to be defined in the module __builtin__:
def is_builtin_class_instance(obj):
    return obj.__class__.__module__ == '__builtin__'
Warning: if obj is a class and not an instance, no matter whether that class is built-in or not, True will be returned, since a class is also an object, an instance of type (i.e. AnyClass.__class__ is type).
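On Python 3 the module is named builtins rather than __builtin__, so the equivalent check there (a small adaptation, not part of the original answer) is:
def is_builtin_class_instance(obj):
    return obj.__class__.__module__ == 'builtins'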
The best way to achieve this is to collect the types in a tuple called primitiveTypes and:
if isinstance(myvar, primitiveTypes): ...
The types module contains collections of all important types, which can help to build the list/tuple.
Works since Python 2.2.
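For example, a sketch with an explicit tuple (pick whichever built-ins you actually care about):
primitiveTypes = (str, int, float, bool, list, dict, tuple, set)

if isinstance(myvar, primitiveTypes):
    ...  # do something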
Not that I know why you would want to do it, as there aren't any "simple" types in Python; it's all objects. But this works:
type(theobject).__name__ in dir(__builtins__)
But explicitly listing the types is probably better, as it's clearer. Or even better: change the application so you don't need to know the difference.
Update: The problem that needs solving is how to make a serializer for objects, even built-in ones. The best way to do this is not to make a big fat serializer that treats builtins differently, but to look up serializers based on type.
Something like this:
def IntSerializer(theint):
    return str(theint)

def StringSerializer(thestring):
    return repr(thestring)

def MyOwnSerializer(value):
    return "whatever"

serializers = {
    int: IntSerializer,
    str: StringSerializer,
    mymodel.myclass: MyOwnSerializer,
}

def serialize(ob):
    try:
        return ob.serialize()  # for objects that know they need to be serialized
    except AttributeError:
        # Look up the serializer among the serializers based on type.
        # Default to using "repr" (works for most builtins).
        return serializers.get(type(ob), repr)(ob)
This way you can easily add new serializers, and the code is easy to maintain and clear, as each type has its own serializer. Notice how the fact that some types are builtin became completely irrelevant. :)
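A quick usage sketch (it assumes the mymodel.myclass entry above is importable or dropped; that line is the author's placeholder):
print(serialize(42))     # '42' via IntSerializer
print(serialize("hi"))   # "'hi'" via StringSerializer
print(serialize(3.5))    # no registered serializer, falls back to repr: '3.5'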
You appear to be interested in ensuring that simplejson will handle your types. This is done trivially by
try:
    json.dumps( object )
except TypeError:
    print "Can't convert", object
which is more reliable than trying to guess which types your JSON implementation handles.
What is a "native type" in Python? Please don't base your code on types, use Duck Typing.
You can access all these types via the types module:
`builtin_types = [i for i in types.__dict__.values() if isinstance(i, type)]`
As a reminder, import the types module first.
def isBuiltinTypes(var):
    return type(var) in types.__dict__.values() and not isinstance(var, types.InstanceType)
It's 2020, I'm on Python 3.7, and none of the existing answers worked for me. What worked instead is the builtins module. Here's how:
import builtins
type(your_object).__name__ in dir(builtins)
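A couple of checks to illustrate (Foo is just a throwaway class for contrast):
import builtins

print(type(3.14).__name__ in dir(builtins))   # True: 'float' is a name in builtins

class Foo:
    pass

print(type(Foo()).__name__ in dir(builtins))  # False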
The built-in type function may be helpful:
>>> a = 5
>>> type(a)
<type 'int'>
Building off of S.Lott's answer, you should have something like this:
from simplejson import JSONEncoder

class JSONEncodeAll(JSONEncoder):
    def default(self, obj):
        try:
            return JSONEncoder.default(self, obj)
        except TypeError:
            ## optionally
            # try:
            #     # you'd have to add this per object, but if an object wants to do something
            #     # special then it can do whatever it wants
            #     return obj.__json__()
            # except AttributeError:
            ##
            # ...do whatever you are doing now...
            # (which should be creating an object simplejson understands)
            raise  # placeholder so the template parses; replace with your fallback
to use:
>>> json = JSONEncodeAll()
>>> json.encode(myObject)
# whatever myObject looks like when it passes through your serialization code
These calls will use your special class, and if simplejson can take care of the object, it will. Otherwise your catch-all functionality will be triggered, and possibly (depending on whether you use the optional part) an object can define its own serialization.
For me the best option is:
allowed_modules = set(['numpy'])

def isprimitive(value):
    return not hasattr(value, '__dict__') or \
        value.__class__.__module__ in allowed_modules
This fixes the case where value is a module, for which the value.__class__.__module__ == '__builtin__' check gives the wrong answer.
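A few quick checks of isprimitive (int and str instances have no __dict__; instances of a user-defined class do):
print(isprimitive(5))       # True
print(isprimitive("abc"))   # True

class Foo:
    pass

print(isprimitive(Foo()))   # False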
The question asks to check for non-class types. These types don't have a __dict__ member. (You could also test for a __repr__ member instead of checking for __dict__.) Other answers mention checking for membership in types.__dict__.values(), but some of the types in that list are classes.
def isnonclasstype(val):
    return getattr(val, "__dict__", None) != None

a = 2
print(isnonclasstype(a))

a = "aaa"
print(isnonclasstype(a))

a = [1, 2, 3]
print(isnonclasstype(a))

a = {"1": 1, "2": 2}
print(isnonclasstype(a))

class Foo:
    def __init__(self):
        pass

a = Foo()
print(isnonclasstype(a))
gives me:
> python3 t.py
False
False
False
False
True
> python t.py
False
False
False
False
True
