I'm trying to set up a simple test example of setattr() in Python, but it fails to assign a new value to the member.
class Foo(object):
    __bar = 0

    def modify_bar(self):
        print(self.__bar)
        setattr(self, "__bar", 1)
        print(self.__bar)
Here I tried variable assignment with setattr(self, "__bar", 1), but it was unsuccessful:
>>> foo = Foo()
>>> foo.modify_bar()
0
0
Can someone explain what is happening under the hood? I'm new to Python, so please forgive the elementary question.
A leading double underscore invokes Python name mangling.
So:
class Foo(object):
    __bar = 0  # actually `_Foo__bar`

    def modify_bar(self):
        print(self.__bar)  # actually self._Foo__bar
        setattr(self, "__bar", 1)
        print(self.__bar)  # actually self._Foo__bar
Name mangling only applies to identifiers, not strings, which is why the __bar in the setattr function call is unaffected.
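Since the mangling happens when the class body is compiled, you can still reach the attribute through setattr by spelling out the mangled name yourself. A minimal sketch reusing the question's Foo:

```python
class Foo(object):
    __bar = 0  # stored on the class as _Foo__bar

    def modify_bar(self):
        # The string must use the already-mangled spelling; plain "__bar"
        # would just create an unrelated attribute named __bar.
        setattr(self, "_Foo__bar", 1)
        print(self.__bar)  # compiled as self._Foo__bar, prints 1

foo = Foo()
foo.modify_bar()
```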
class Foo(object):
    _bar = 0

    def modify_bar(self):
        print(self._bar)
        setattr(self, "_bar", 1)
        print(self._bar)
should work as expected.
Leading double underscores are not used very often in Python code, because their use is typically discouraged. There are a few valid use cases (mainly avoiding name clashes when subclassing), but those are rare enough that name mangling is generally avoided in the wild.
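For completeness, the subclassing use case mentioned above looks like this; because each class body mangles __x to its own prefix, the subclass cannot accidentally overwrite the base class's attribute (a small illustrative sketch, not from the question):

```python
class Base(object):
    def __init__(self):
        self.__x = "base"   # stored as self._Base__x

    def get_base_x(self):
        return self.__x     # always reads _Base__x

class Child(Base):
    def __init__(self):
        super(Child, self).__init__()
        self.__x = "child"  # stored as self._Child__x; no clash

c = Child()
print(c.get_base_x())  # base
print(c._Child__x)     # child
```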
Related
I have a general question about Python best practices.
Tried googling for an answer, but didn't find anything relevant.
The question is : is it best practice to explicitly pass a globally known parameter to a function?
So, is this best practice?
a = 1
def add_one(a):
    b = a + 1
or this?
a = 1
def add_one():
    b = a + 1
Here is an answer to expand on my comment regarding setting a as a class attribute, rather than using a global.
The Short Answer:
Use globals with controlled caution; or the safe play ... just don't. There's usually a better way.
Ref 1: PEP-8
Ref 2: This interesting post on SO, emphasising the need for caution
Passing variables is OK
Use a class at any (applicable) opportunity
Class Example:
Here is a simple, stripped down class structure with no frills. You'll notice the absence of any global variables, along with no passed variables.
This is not saying that passed variables are discouraged, they are useful, if not necessary. However, this is to show that globals are not needed, and there's usually a better way. For what it's worth ... (personally speaking, I've never seen the use for them in Python).
class Demo():
    """Class attribute demonstration class."""

    def __init__(self):
        """Demo class initialiser."""
        self._a = 73

    def add_one(self):
        """Add one to the `a` attribute."""
        self._a += 1

    def times_two(self):
        """Multiply the `a` attribute by two."""
        self._a *= 2
Use Case:
I'll be the first to point out that this example is all but useless in the real world due to 1) externally accessing a 'private' class attribute and 2) the continued updating of _a; this is only an example specific to your question.
Note that each call to a different method has a different, yet cumulative effect on _a; note also that the variable (class attribute) is neither global nor passed.
demo = Demo()
print(f'Value of _a: {demo._a}')
# Value of _a: 73

demo.add_one()
print(f'Value of _a plus one: {demo._a}')
# Value of _a plus one: 74

demo.times_two()
print(f'Value of _a plus one, times two: {demo._a}')
# Value of _a plus one, times two: 148
This is a bit of a silly thing, but I want to know if there is concise way in Python to define class variables that contain string representations of their own names. For example, one can define:
class foo(object):
    bar = 'bar'
    baz = 'baz'
    baf = 'baf'
Probably a more concise way to write it in terms of lines consumed is:
class foo(object):
    bar, baz, baf = 'bar', 'baz', 'baf'
Even there, though, I still have to type each identifier twice, once on each side of the assignment, and the opportunity for typos is rife.
What I want is something like what sympy provides in its var method:
sympy.var('a,b,c')
The above injects into the namespace the variables a, b, and c, defined as the corresponding sympy symbolic variables.
Is there something comparable that would do this for plain strings?
class foo(object):
    [nifty thing]('bar', 'baz', 'baf')
EDIT: To note, I want to be able to access these as separate identifiers in code that uses foo:
>>> f = foo(); print(f.bar)
bar
ADDENDUM: Given the interest in the question, I thought I'd provide more context on why I want to do this. I have two use-cases at present: (1) typecodes for a set of custom exceptions (each Exception subclass has a distinct typecode set); and (2) lightweight enum. My desired feature set is:
Only having to type the typecode / enum name (or value) once in the source definition. class foo(object): bar = 'bar' works fine but means I have to type it out twice in-source, which gets annoying for longer names and exposes a typo risk.
Valid typecodes / enum values exposed for IDE autocomplete.
Values stored internally as comprehensible strings:
For the Exception subclasses, I want to be able to define myError.__str__ as just something like return self.typecode + ": " + self.message + " (" + self.source + ")", without having to do a whole lot of dict-fu to back-reference an int value of self.typecode to a comprehensible and meaningful string.
For the enums, I want to just be able to obtain widget as output from e = myEnum.widget; print(e), again without a lot of dict-fu.
I recognize this will increase overhead. My application is not speed-sensitive (GUI-based tool for driving a separate program), so I don't think this will matter at all.
Straightforward membership testing, by also including (say) a frozenset containing all of the typecodes / enum string values as myError.typecodes/myEnum.E classes. This addresses potential problems from accidental (or intentional.. but why?!) use of an invalid typecode / enum string via simple sanity checks like if not enumVal in myEnum.E: raise(ValueError('Invalid enum value: ' + str(enumVal))).
Ability to import individual enum / exception subclasses via, say, from errmodule import squirrelerror, to avoid cluttering the namespace of the usage environment with non-relevant exception subclasses. I believe this prohibits any solutions requiring post-twiddling on the module level like what Sinux proposed.
For the enum use case, I would rather avoid introducing an additional package dependency since I don't (think I) care about any extra functionality available in the official enum class. In any event, it still wouldn't resolve #1.
I've already figured out implementation I'm satisfied with for all of the above but #1. My interest in a solution to #1 (without breaking the others) is partly a desire to typo-proof entry of the typecode / enum values into source, and partly plain ol' laziness. (Says the guy who just typed up a gigantic SO question on the topic.)
I recommend using collections.namedtuple:
Example:
>>> from collections import namedtuple as nifty_thing
>>> Data = nifty_thing("Data", ["foo", "bar", "baz"])
>>> data = Data(foo=1, bar=2, baz=3)
>>> data.foo
1
>>> data.bar
2
>>> data.baz
3
Side Note: If you are using Python 3.x I'd recommend Enum as per @user2357112's comment. This is the standardized approach going forward for Python 3+.
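For reference, the stdlib Enum's functional API also satisfies the "type each name only once" wish, since .name hands the string back (sketch assumes Python 3.4+ or the enum34 backport):

```python
from enum import Enum

# Member names are typed exactly once; no `bar = 'bar'` repetition.
Foo = Enum('Foo', ['bar', 'baz', 'baf'])

print(Foo.bar.name)  # bar
print(Foo['baz'])    # Foo.baz -- lookup by string
```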
Update: Okay so if I understand the OP's exact requirement(s) here I think the only way to do this (and presumably sympy does this too) is to inject the names/variables into the globals() or locals() namespaces. Example:
#!/usr/bin/env python

def nifty_thing(*names):
    d = globals()
    for name in names:
        d[name] = None

nifty_thing("foo", "bar", "baz")
print foo, bar, baz
Output:
$ python foo.py
None None None
NB: I don't really recommend this! :)
Update #2: The other example you showed in your question is implemented like this:
#!/usr/bin/env python

import sys

def nifty_thing(*names):
    frame = sys._getframe(1)
    locals = frame.f_locals
    for name in names:
        locals[name] = None

class foo(object):
    nifty_thing("foo", "bar", "baz")

f = foo()
print f.foo, f.bar, f.baz
Output:
$ python foo.py
None None None
NB: This is inspired by zope.interface.implements().
current_list = ['bar', 'baz', 'baf']

class foo(object):
    """to be added"""

for i in current_list:
    setattr(foo, i, i)
then run this:
>>> f = foo()
>>> print(f.bar)
bar
>>> print(f.baz)
baz
This doesn't work exactly like what you asked for, but it seems like it should do the job:
class AutoNamespace(object):
    def __init__(self, names):
        try:
            # Support space-separated name strings
            names = names.split()
        except AttributeError:
            pass
        for name in names:
            setattr(self, name, name)
Demo:
>>> x = AutoNamespace('a b c')
>>> x.a
'a'
If you want to do what SymPy does with var, you can, but I would strongly recommend against it. That said, here's a function based on the source code of sympy.var:
def var(names):
    from inspect import currentframe
    frame = currentframe().f_back
    try:
        names = names.split()
    except AttributeError:
        pass
    for name in names:
        frame.f_globals[name] = name
Demo:
>>> var('foo bar baz')
>>> bar
'bar'
It'll always create global variables, even if you call it from inside a function or class. inspect is used to get at the caller's globals, whereas globals() would get var's own globals.
How about defining __getitem__ so that every lookup simply returns its own key:
class foo(object):
    def __getitem__(self, item):
        return item

foo = foo()
print foo['test']
Here's an extension of bman's idea. This has its advantages and disadvantages, but at least it does work with some autocompleters.
class FooMeta(type):
    def __getattr__(self, attr):
        return attr

    def __dir__(self):
        return ['bar', 'baz', 'baf']

class foo:
    __metaclass__ = FooMeta
This allows access like foo.xxx → 'xxx' for all xxx, but also guides autocomplete through __dir__.
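Note that the __metaclass__ attribute is Python 2 syntax and is silently ignored on Python 3, where the metaclass goes in the class header instead. A sketch of the Python 3 spelling:

```python
class FooMeta(type):
    def __getattr__(cls, attr):
        # Any attribute not found normally resolves to its own name.
        return attr

    def __dir__(cls):
        return ['bar', 'baz', 'baf']

class foo(metaclass=FooMeta):  # Python 3 metaclass syntax
    pass

print(foo.bar)  # bar
```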
Figured out what I was looking for:
>>> class tester:
...     E = frozenset(['this', 'that', 'the', 'other'])
...     for s in E:
...         exec(str(s) + "='" + str(s) + "'")  # <--- THIS
...
>>> tester()
<__main__.tester instance at 0x03018BE8>
>>> t = tester()
>>> t.this
'this'
>>> t.that in tester.E
True
Only have to define the element strings once, and I'm pretty sure it will work for all of my requirements listed in the question. In actual implementation, I plan to encapsulate the str(s) + "='" + str(s) + "'" in a helper function, so that I can just call exec(helper(s)) in the for loop. (I'm pretty sure that the exec has to be placed in the body of the class, not in the helper function, or else the new variables would be injected into the (transitory) scope of the helper function, not that of the class.)
EDIT: Upon detailed testing, this DOES NOT WORK -- the use of exec prevents the introspection of the IDE from knowing of the existence of the created variables.
I think you could achieve a rather beautiful solution using metaclasses, but I'm not fluent enough in those to present one as an answer. I do have an option which seems to work rather nicely, though:
def new_enum(name, *class_members):
    """Builds a class <name> with <class_members> having the name as value."""
    return type(name, (object,), {val: val for val in class_members})
Foo = new_enum('Foo', 'bar', 'baz', 'baf')
This should recreate the class you've given as an example, and if you want you can change the inheritance by changing the second parameter of the call to type(name, bases, dict).
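Usage then matches the handwritten class from the question; as a sketch, the membership frozenset (called E here, following the question's wording) can be packed into the same call:

```python
def new_enum(name, *class_members):
    """Build a class whose members' values equal their own names."""
    members = {val: val for val in class_members}
    members['E'] = frozenset(class_members)  # for membership checks
    return type(name, (object,), members)

Foo = new_enum('Foo', 'bar', 'baz', 'baf')
print(Foo.bar)         # bar
print('baz' in Foo.E)  # True
```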
I'm working on a class describing an object that can be expressed in several "units", I'll say, to keep things simple. Let's say we're talking about length. (It's actually something more complicated.) What I would like is for the user to be able to input 1 and "inch", for example, and automatically get member variables in feet, meters, furlongs, what have you as well. I want the user to be able to input any of the units I am dealing in, and get member variables in all the other units. My thought was to do something like this:
class length:
    @classmethod
    def inch_to_foot(cls, inch):
        # etc.

    @classmethod
    def inch_to_meter(cls, inch):
        # etc.
I guess you get the idea. Then I would define a dictionary in the class:
from_to = {'inch': {'foot': inch_to_foot, 'meter': inch_to_meter, ...},
           'furlong': {'foot': furlong_to_foot, ...},
           # etc.
           }
So then I think I can write an __init__ method
def __init__(self, num, unit):
    cls = self.__class__
    setattr(self, unit, num)
    for k in cls.from_to[unit].keys():
        setattr(self, k, cls.from_to[unit][k](num))
But no go. I get the error "class method not callable". Any ideas how I can make this work? Any ideas for scrapping the whole thing and trying a different approach? Thanks.
If you move the from_to variable into __init__ and modify it to something like:
cls.from_to={'inch':{'foot':cls.inch_to_foot,'meter':cls.inch_to_meter, }}
then I think it works as you expect.
Unfortunately I can't answer why, because I haven't used classmethods much myself, but I think it is something to do with bound vs unbound methods. Anyway, if you print the functions stored in from_to in your code vs the ones with my modification, you'll see they are different (mine are bound, yours are classmethod objects).
Hope that helps somewhat!
EDIT: I've thought about it a bit more, I think the problem is because you are storing a reference to the functions before they have been bound to the class (not surprising that the binding happens once the rest of the class has been parsed). My advice would be to forget about storing a dictionary of function references, but to store (in some representation of your choice) strings that indicate the units you can change between. For instance you might choose a similar format, such as:
from_to = {'inch':['foot','meter']}
and then look up the functions during __init__ using getattr
E.G.:
class length:
    from_to = {'inch': ['foot', 'meter']}

    def __init__(self, num, unit):
        if unit not in self.from_to:
            raise RuntimeError('unit %s not supported' % unit)
        cls = self.__class__
        setattr(self, unit, num)
        for k in cls.from_to[unit]:
            f = getattr(cls, '%s_to_%s' % (unit, k))
            setattr(self, k, f(num))

    @classmethod
    def inch_to_foot(cls, inch):
        return inch / 12.0

    @classmethod
    def inch_to_meter(cls, inch):
        return inch * 2.54 / 100
a = length(3, 'inch')
print a.meter
print a.foot
print length.inch_to_foot(3)
I don't think doing this with an __init__() method would be a good idea. I once saw an interesting way to do it in the Overriding the __new__ method section of the classic document titled Unifying types and classes in Python 2.2 by Guido van Rossum.
Here's some examples:
class inch_to_foot(float):
    "Convert from inch to feet"
    def __new__(cls, arg=0.0):
        return float.__new__(cls, float(arg) / 12)

class inch_to_meter(float):
    "Convert from inch to meter"
    def __new__(cls, arg=0.0):
        return float.__new__(cls, arg * 0.0254)

print inch_to_meter(5)  # 0.127
Here's a completely different answer that uses a metaclass and requires the conversion functions to be staticmethods rather than classmethods, which it turns into properties based on the target unit's name. It searches for the names of any conversion functions itself, eliminating the need to manually define from_to-style tables.
One thing about this approach is that the conversion functions aren't even called unless indirect references are made to the units associated with them. Another is that they're dynamic, in the sense that the results returned will reflect the current value of the instance (unlike instances of three_pineapples' length class, which stores the results of calling them on the numeric value of the instance when it's initially constructed).
You've never said what version of Python you're using, so the following code is for Python 2.2 - 2.x.
import re

class MetaUnit(type):
    def __new__(metaclass, classname, bases, classdict):
        cls = type.__new__(metaclass, classname, bases, classdict)
        # add a constructor
        setattr(cls, '__init__',
                lambda self, value=0: setattr(self, '_value', value))
        # add a property for getting and setting the underlying value
        setattr(cls, 'value',
                property(lambda self: self._value,
                         lambda self, value: setattr(self, '_value', value)))
        # add an identity property that just returns the value unchanged
        unitname = classname.lower()  # lowercase classname becomes name of unit
        setattr(cls, unitname, property(lambda self: self._value))
        # find conversion methods and create properties that use them
        matcher = re.compile(unitname + r'''_to_(?P<target_unitname>\w+)''')
        for name in cls.__dict__.keys():
            match = matcher.match(name)
            if match:
                target_unitname = match.group('target_unitname').lower()
                fget = (lambda self, conversion_method=getattr(cls, name):
                            conversion_method(self._value))
                setattr(cls, target_unitname, property(fget))
        return cls
Sample usage:
scalar_conversion_staticmethod = (
    lambda scale_factor: staticmethod(lambda value: value * scale_factor))

class Inch(object):
    __metaclass__ = MetaUnit
    inch_to_foot = scalar_conversion_staticmethod(1./12.)
    inch_to_meter = scalar_conversion_staticmethod(0.0254)

a = Inch(3)
print a.inch   # 3
print a.meter  # 0.0762
print a.foot   # 0.25

a.value = 6
print a.inch   # 6
print a.meter  # 0.1524
print a.foot   # 0.5
I know what double underscore means for Python class attributes/methods, but does it mean something for method argument?
It looks like you cannot pass an argument starting with a double underscore to methods. It is confusing because you can do that for normal functions.
Consider this script:
def egg(__a=None):
    return __a

print "egg(1) =",
print egg(1)
print

class Spam(object):
    def egg(self, __a=None):
        return __a

print "Spam().egg(__a=1) =",
print Spam().egg(__a=1)
Running this script yields:
egg(1) = 1
Spam().egg(__a=1) =
Traceback (most recent call last):
File "/....py", line 15, in <module>
print Spam().egg(__a=1)
TypeError: egg() got an unexpected keyword argument '__a'
I checked this with Python 2.7.2.
Some other examples
This works:
def egg(self, __a):
    return __a

class Spam(object):
    egg = egg

Spam().egg(__a=1)
This does not:
class Spam(object):
    def _egg(self, __a=None):
        return __a

    def egg(self, a):
        return self._egg(__a=a)

Spam().egg(1)
Name mangling applies to all identifiers with leading double underscores, regardless of where they occur (second to last sentence in that section):
This transformation is independent of the syntactical context in which the identifier is used.
This is simpler to implement and to define, and more consistent. It may seem stupid, but the whole name mangling deal is an ugly little hack IMHO; and you're not expected to use names like that for anything except attributes/methods anyway.
Spam().egg(_Spam__a=1), as well as Spam().egg(1), does work. But even though you can make it work, leading underscores (any number of them) have no place in parameter names. Or in any local variable (exception: _) for that matter.
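A minimal sketch of both working spellings on Python 3:

```python
class Spam(object):
    def egg(self, __a=None):  # parameter name is compiled as _Spam__a
        return __a

print(Spam().egg(_Spam__a=1))  # 1 -- keyword with the mangled name
print(Spam().egg(1))           # 1 -- positional calls are unaffected
```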
Edit: You appear to have found a corner case nobody ever considered. The documentation is imprecise here, or the implementation is flawed. It appears keyword argument names are not mangled. Look at the bytecode (Python 3.2):
>>> dis.dis(Spam.egg)
3 0 LOAD_FAST 0 (self)
3 LOAD_ATTR 0 (_egg)
6 LOAD_CONST 1 ('__a') # keyword argument not mangled
9 LOAD_FAST 1 (a)
12 CALL_FUNCTION 256
15 RETURN_VALUE
>>> dis.dis(Spam._egg)
2 0 LOAD_FAST 1 (_Spam__a) # parameter mangled
3 RETURN_VALUE
This may be rooted in the fact that keyword arguments are equivalent to passing a dict (in this case {'__a': 1}) whose keys wouldn't be mangled either. But honestly, I'd just call it an ugly corner case in an already ugly special case and move on. It's not important because you shouldn't use identifiers like that anyway.
It gets converted to _Spam__a:
In [20]: class Spam(object):
....:
....: def egg(self, __a=None):
....: return __a
....:
In [21]: Spam.egg.__func__.__code__.co_varnames
Out[21]: ('self', '_Spam__a')
A double underscore in a class context triggers name mangling, producing a private identifier. In your example, inspect Spam.egg.__func__.__code__.co_varnames and you will see that the parameter __a is now _Spam__a.
You can now use:
Spam().egg(_Spam__a=1)
This question already has answers here: Python: Reference to a class from a string?
So I have a set of classes and a string with one of the class names. How do I instantiate a class based on that string?
class foo:
    def __init__(self, left, right):
        self.left = left
        self.right = right

str = "foo"
x = Init(str, A, B)
I want x to be an instantiation of class foo.
In your case you can use something like:
get_class = lambda x: globals()[x]
c = get_class("foo")
And it's even easier to get the class from the module:
import somemodule
getattr(somemodule, "SomeClass")
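When the module name is also only known as a string, importlib.import_module pairs naturally with getattr. A sketch (collections/OrderedDict stand in for your own module and class):

```python
import importlib

module = importlib.import_module("collections")  # module name as a string
cls = getattr(module, "OrderedDict")             # class name as a string
instance = cls(a=1)
print(type(instance).__name__)  # OrderedDict
```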
If you know the namespace involved, you can use it directly -- for example, if all classes are in module zap, the dictionary vars(zap) is that namespace; if they're all in the current module, globals() is probably the handiest way to get that dictionary.
If the classes are not all in the same namespace, then building an "artificial" namespace (a dedicated dict with class names as keys and class objects as values), as #Ignacio suggests, is probably the simplest approach.
classdict = {'foo': foo}
x = classdict['foo'](A, B)
classname = "Foo"
foo = vars()[classname](Bar, 0, 4)
Or perhaps
def mkinst(cls, *args, **kwargs):
    try:
        return globals()[cls](*args, **kwargs)
    except KeyError:
        raise NameError("Class %s is not defined" % cls)

x = mkinst("Foo", bar, 0, 4, disc="bust")
y = mkinst("Bar", foo, batman="robin")
Miscellaneous notes on the snippet:
*args and **kwargs are special parameters in Python, they mean «an array of non-keyword args» and «a dict of keyword args» accordingly.
PEP-8 (the official Python style guide) recommends the name cls for variables that refer to classes.
vars() returns a dict of variables defined in the local scope.
globals() returns a dict of variables currently present in the environment outside of local scope.
Try this:

cls = __import__('cls_name')

(note that __import__ returns a module, not a class, so you would still need getattr on the result). This article may also be helpful: http://effbot.org/zone/import-confusion.htm
You might consider using type (the default metaclass) directly as well:

Cls = type('foo', (), dict(foo.__dict__))
x = Cls(A, B)

Yet this creates another, similar class, not a handle on the original.