How to get instance variables in Python?

Is there a built-in method in Python to get an array of all a class' instance variables? For example, if I have this code:
class hi:
    def __init__(self):
        self.ii = "foo"
        self.kk = "bar"
Is there a way for me to do this:
>>> mystery_method(hi)
["ii", "kk"]
Edit: I originally had asked for class variables erroneously.

Every object has a __dict__ attribute containing all of its instance variables and their values.
Try this
>>> hi_obj = hi()
>>> hi_obj.__dict__.keys()
Output
dict_keys(['ii', 'kk'])

Use vars()
class Foo(object):
    def __init__(self):
        self.a = 1
        self.b = 2

vars(Foo())        #==> {'a': 1, 'b': 2}
vars(Foo()).keys() #==> ['a', 'b']

You normally can't get instance attributes given just a class, at least not without instantiating the class. You can get instance attributes given an instance, though, or class attributes given a class. See the 'inspect' module. You can't get a list of instance attributes because instances really can have anything as attribute, and -- as in your example -- the normal way to create them is to just assign to them in the __init__ method.
An exception is if your class uses slots, which is a fixed list of attributes that the class allows instances to have. Slots are explained in http://www.python.org/2.2.3/descrintro.html, but there are various pitfalls with slots; they affect memory layout, so multiple inheritance may be problematic, and inheritance in general has to take slots into account, too.
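To illustrate, here is a minimal sketch (the Point class is hypothetical) showing how a slotted class exposes its allowed instance attribute names on the class itself:
class Point:
    # Only these two instance attributes are allowed.
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

# With slots, the attribute names are known from the class alone:
print(Point.__slots__)         # ('x', 'y')

# Slotted instances have no __dict__, and new attributes are rejected:
p = Point(1, 2)
print(hasattr(p, "__dict__"))  # False
p.z = 3                        # AttributeError: 'Point' object has no attribute 'z'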

Both the vars() and __dict__ methods will work for the example the OP posted, but they won't work for "loosely" defined objects like:
class foo:
    a = 'foo'
    b = 'bar'
To print all non-callable attributes, you can use the following function:
def printVars(object):
    for i in [v for v in dir(object) if not callable(getattr(object, v))]:
        print '\n%s:' % i
        exec('print object.%s\n\n' % i)
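For Python 3, a rough equivalent that avoids exec by using getattr might look like this (the name print_vars is just for illustration):
def print_vars(obj):
    # Print every non-callable attribute of obj along with its value.
    for name in (v for v in dir(obj) if not callable(getattr(obj, v))):
        print('\n%s:' % name)
        print(getattr(obj, name))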

You can also test if an object has a specific variable with:
>>> hi_obj = hi()
>>> hasattr(hi_obj, "some attribute")
False
>>> hasattr(hi_obj, "ii")
True
>>> hasattr(hi_obj, "kk")
True

Your example shows "instance variables", not really class variables.
Look in hi_obj.__class__.__dict__.items() for the class variables, along with other class members like member functions and the containing module.
class Hi(object):
    class_var = (23, 'skidoo')  # class variable
    def __init__(self):
        self.ii = "foo"  # instance variable
        self.jj = "bar"
Class variables are shared by all instances of the class.

Suggest
>>> print vars.__doc__
vars([object]) -> dictionary
Without arguments, equivalent to locals().
With an argument, equivalent to object.__dict__.
In other words, it essentially just wraps __dict__.

Although not directly an answer to the OP's question, there is a pretty sweet way of finding out what variables are in scope in a function. Take a look at this code:
>>> def f(x, y):
...     z = x**2 + y**2
...     sqrt_z = z**.5
...     return sqrt_z
...
>>> f.func_code.co_varnames
('x', 'y', 'z', 'sqrt_z')
>>>
The func_code attribute (renamed to __code__ in Python 3) has all kinds of interesting things in it. It allows you to do some cool stuff. Here is an example of how I have used this:
def exec_command(self, cmd, msg, sig):
    def message(msg):
        a = self.link.process(self.link.recieved_message(msg))
        self.exec_command(*a)
    def error(msg):
        self.printer.printInfo(msg)
    def set_usrlist(msg):
        self.client.connected_users = msg
    def chatmessage(msg):
        self.printer.printInfo(msg)
    if not locals().has_key(cmd): return
    cmd = locals()[cmd]
    try:
        if 'sig' in cmd.func_code.co_varnames and \
           'msg' in cmd.func_code.co_varnames:
            cmd(msg, sig)
        elif 'msg' in cmd.func_code.co_varnames:
            cmd(msg)
        else:
            cmd()
    except Exception, e:
        print '\n-----------ERROR-----------'
        print 'error: ', e
        print 'Error processing: ', cmd.__name__
        print 'Message: ', msg
        print 'Sig: ', sig
        print '-----------ERROR-----------\n'

Sometimes you want to filter the list based on public/private vars. E.g.
def pub_vars(self):
    """Gives the variable names of our instance we want to expose
    """
    return [k for k in vars(self) if not k.startswith('_')]

Built on dmark's answer to get the following, which is useful if you want the equivalent of sprintf, and hopefully will help someone:
def sprint(object):
    result = ''
    for i in [v for v in dir(object) if not callable(getattr(object, v)) and v[0] != '_']:
        result += '\n%s:' % i + str(getattr(object, i, ''))
    return result

You will need to first examine the class, then examine the bytecode of its functions, copy that bytecode, and finally use __code__.co_varnames. This is tricky because some classes create their methods using constructors such as those in the types module. I will provide code for it on GitHub.

Based on Ethan Joffe's answer:
def print_inspect(obj):
    print(f"{type(obj)}\n")
    var_names = [attr for attr in dir(obj) if not callable(getattr(obj, attr)) and not attr.startswith("__")]
    for v in var_names:
        print(f"\tself.{v} = {getattr(obj, v)}\n")

How do I list the available dataset loaders in sklearn to see all the available functions? [duplicate]

I want to iterate through the methods in a class, or handle class or instance objects differently based on the methods present. How do I get a list of class methods?
Also see:
How can I list the methods in a Python 2.5 module?
Looping over a Python / IronPython Object Methods
Finding the methods an object has
How do I look inside a Python object?
How Do I Perform Introspection on an Object in Python 2.x?
How to get a complete list of object's methods and attributes?
Finding out which functions are available from a class instance in python?
An example (listing the methods of the optparse.OptionParser class):
>>> from optparse import OptionParser
>>> import inspect
#python2
>>> inspect.getmembers(OptionParser, predicate=inspect.ismethod)
[('__init__', <unbound method OptionParser.__init__>),
...
('add_option', <unbound method OptionParser.add_option>),
('add_option_group', <unbound method OptionParser.add_option_group>),
('add_options', <unbound method OptionParser.add_options>),
('check_values', <unbound method OptionParser.check_values>),
('destroy', <unbound method OptionParser.destroy>),
('disable_interspersed_args',
<unbound method OptionParser.disable_interspersed_args>),
('enable_interspersed_args',
<unbound method OptionParser.enable_interspersed_args>),
('error', <unbound method OptionParser.error>),
('exit', <unbound method OptionParser.exit>),
('expand_prog_name', <unbound method OptionParser.expand_prog_name>),
...
]
# python3
>>> inspect.getmembers(OptionParser, predicate=inspect.isfunction)
...
Notice that getmembers returns a list of 2-tuples. The first item is the name of the member, the second item is the value.
You can also pass an instance to getmembers:
>>> parser = OptionParser()
>>> inspect.getmembers(parser, predicate=inspect.ismethod)
...
There is the dir(theobject) function to list all the fields and methods of your object (as a list), and the inspect module (as codeape writes) to list the fields and methods with their docs (in """).
Because everything (even fields) might be callable in Python, I'm not sure there is a built-in function to list only methods. You might want to test whether the object you get through dir is callable or not.
Python 3.x answer without external libraries
method_list = [func for func in dir(Foo) if callable(getattr(Foo, func))]
dunder-excluded result:
method_list = [func for func in dir(Foo) if callable(getattr(Foo, func)) and not func.startswith("__")]
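As a quick sanity check, here is a small sketch with a hypothetical Foo class showing what the dunder-excluded comprehension returns:
class Foo:
    class_attr = 42          # not callable, so it is filtered out

    def method_a(self):
        pass

    def method_b(self):
        pass

method_list = [func for func in dir(Foo)
               if callable(getattr(Foo, func)) and not func.startswith("__")]
print(method_list)           # ['method_a', 'method_b']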
Say you want to know all the methods associated with the list class. Just type the following:
print(dir(list))
The above will give you all the methods of the list class.
Try the __dict__ attribute.
You can also import FunctionType from types and test it against the values in the class's __dict__:
from types import FunctionType

class Foo:
    def bar(self): pass
    def baz(self): pass

def methods(cls):
    return [x for x, y in cls.__dict__.items() if type(y) == FunctionType]

methods(Foo)  # ['bar', 'baz']
You can list all the methods in a Python class by using the following code:
dir(className)
This will return a list of the names of all the attributes of the class, including its methods.
Note that you need to consider whether you want methods from base classes which are inherited (but not overridden) included in the result. The dir() and inspect.getmembers() operations do include base class methods, but use of the __dict__ attribute does not.
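A small sketch of that difference, using hypothetical Base and Child classes:
class Base:
    def inherited(self):
        pass

class Child(Base):
    def own(self):
        pass

# dir() walks the whole inheritance chain:
print([m for m in dir(Child) if not m.startswith("__")])      # ['inherited', 'own']

# __dict__ only contains what is defined directly on Child:
print([m for m in Child.__dict__ if not m.startswith("__")])  # ['own']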
If your method is a "regular" method and not a staticmethod, classmethod, etc., there is a little hack I came up with:
for k, v in your_class.__dict__.items():
    if "function" in str(v):
        print(k)
This can be extended to other type of methods by changing "function" in the if condition correspondingly.
Tested in Python 2.7 and Python 3.5.
Try
help(ClassName)
It prints out the methods of the class.
I'll just leave this here, because the top-rated answers are not clear.
This is a simple test with a somewhat unusual class based on Enum.
# -*- coding: utf-8 -*-
import sys, inspect
from enum import Enum

class my_enum(Enum):
    """Enum base class my_enum"""
    M_ONE = -1
    ZERO = 0
    ONE = 1
    TWO = 2
    THREE = 3
    def is_natural(self):
        return (self.value > 0)
    def is_negative(self):
        return (self.value < 0)

def is_clean_name(name):
    return not name.startswith('_') and not name.endswith('_')

def clean_names(lst):
    return [ n for n in lst if is_clean_name(n) ]

def get_items(cls, lst):
    try:
        res = [ getattr(cls, n) for n in lst ]
    except Exception as e:
        res = (Exception, type(e), e)
    return res

print( sys.version )

dir_res = clean_names( dir(my_enum) )
inspect_res = clean_names( [ x[0] for x in inspect.getmembers(my_enum) ] )
dict_res = clean_names( my_enum.__dict__.keys() )

print( '## names ##' )
print( dir_res )
print( inspect_res )
print( dict_res )
print( '## items ##' )
print( get_items(my_enum, dir_res) )
print( get_items(my_enum, inspect_res) )
print( get_items(my_enum, dict_res) )
And this is the output:
3.7.7 (default, Mar 10 2020, 13:18:53)
[GCC 9.2.1 20200306]
## names ##
['M_ONE', 'ONE', 'THREE', 'TWO', 'ZERO']
['M_ONE', 'ONE', 'THREE', 'TWO', 'ZERO', 'name', 'value']
['is_natural', 'is_negative', 'M_ONE', 'ZERO', 'ONE', 'TWO', 'THREE']
## items ##
[<my_enum.M_ONE: -1>, <my_enum.ONE: 1>, <my_enum.THREE: 3>, <my_enum.TWO: 2>, <my_enum.ZERO: 0>]
(<class 'Exception'>, <class 'AttributeError'>, AttributeError('name'))
[<function my_enum.is_natural at 0xb78a1fa4>, <function my_enum.is_negative at 0xb78ae854>, <my_enum.M_ONE: -1>, <my_enum.ZERO: 0>, <my_enum.ONE: 1>, <my_enum.TWO: 2>, <my_enum.THREE: 3>]
So what we have:
dir provides incomplete data
inspect.getmembers provides incomplete data, and it exposes internal keys that are not accessible with getattr()
__dict__.keys() provides a complete and reliable result
Why are the votes so misleading? Where am I wrong? And where are the other people wrong whose answers have so few votes?
There's this approach:
[getattr(obj, m) for m in dir(obj) if not m.startswith('__')]
When dealing with a class instance, perhaps it'd be better to return a list with the method references instead of just names¹. If that's your goal, as well as
Using no import
Excluding private methods (e.g. __init__) from the list
It may be of use. You might also want to assure it's callable(getattr(obj, m)), since dir returns all attributes within obj, not just methods.
In a nutshell, for a class like
class Ghost:
    def boo(self, who):
        return f'Who you gonna call? {who}'
We could check instance retrieval with
>>> g = Ghost()
>>> methods = [getattr(g, m) for m in dir(g) if not m.startswith('__')]
>>> print(methods)
[<bound method Ghost.boo of <__main__.Ghost object at ...>>]
So you can call it right away:
>>> for method in methods:
...     print(method('GHOSTBUSTERS'))
...
Who you gonna call? GHOSTBUSTERS
¹ A use case:
I used this for unit testing. Had a class where all methods performed variations of the same process - which led to lengthy tests, each only a tweak away from the others. DRY was a far away dream.
Thought I should have a single test for all methods, so I made the above iteration.
Although I realized I should instead refactor the code itself to be DRY-compliant anyway... this may still serve a random nitpicky soul in the future.
This also works:
In mymodule.py:
def foo(x):
    return 'foo'

def bar():
    return 'bar'
In another file:
import inspect
import mymodule
method_list = [ func[0] for func in inspect.getmembers(mymodule, predicate=inspect.isroutine) if callable(getattr(mymodule, func[0])) ]
Output:
['bar', 'foo']
From the Python docs:
inspect.isroutine(object)
Return true if the object is a user-defined or built-in function or method.
def find_defining_class(obj, meth_name):
    for ty in type(obj).mro():
        if meth_name in ty.__dict__:
            return ty
So
print find_defining_class(car, 'speedometer')
Think Python page 210
You can use a function which I have created.
def method_finder(classname):
    non_magic_class = []
    class_methods = dir(classname)
    for m in class_methods:
        if m.startswith('__'):
            continue
        else:
            non_magic_class.append(m)
    return non_magic_class

method_finder(list)
method_finder(list)
Output:
['append',
'clear',
'copy',
'count',
'extend',
'index',
'insert',
'pop',
'remove',
'reverse',
'sort']
methods = [(func, getattr(o, func)) for func in dir(o) if callable(getattr(o, func))]
gives an identical list as
methods = inspect.getmembers(o, predicate=inspect.ismethod)
does.
I know this is an old post, but I just wrote this function and will leave it here in case someone stumbles across it looking for an answer:
def classMethods(the_class, class_only=False, instance_only=False, exclude_internal=True):
    def acceptMethod(tup):
        # internal function that analyzes the tuples returned by getmembers;
        # tup[1] is the actual member object
        is_method = inspect.ismethod(tup[1])
        if is_method:
            bound_to = tup[1].im_self
            internal = tup[1].im_func.func_name[:2] == '__' and tup[1].im_func.func_name[-2:] == '__'
            if internal and exclude_internal:
                include = False
            else:
                include = (bound_to == the_class and not instance_only) or (bound_to == None and not class_only)
        else:
            include = False
        return include
    # uses filter to return results according to the internal function and arguments
    return filter(acceptMethod, inspect.getmembers(the_class))
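The function above is Python 2 only (im_self, im_func and func_name are gone in Python 3). A rough Python 3 sketch of the same idea, without the class-only/instance-only switches, might look like this:
import inspect

def class_methods_py3(the_class, exclude_internal=True):
    # Plain methods appear as functions on the class; classmethods appear as
    # bound methods (their __self__ is the class); staticmethods appear as
    # plain functions as well.
    def accept(name, member):
        if not (inspect.isfunction(member) or inspect.ismethod(member)):
            return False
        if exclude_internal and name.startswith('__') and name.endswith('__'):
            return False
        return True

    return [name for name, member in inspect.getmembers(the_class)
            if accept(name, member)]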
Use inspect.ismethod together with dir and getattr:
import inspect

class ClassWithMethods:
    def method1(self):
        print('method1')
    def method2(self):
        print('method2')

obj = ClassWithMethods()

method_names = [attr for attr in dir(obj) if inspect.ismethod(getattr(obj, attr))]
print(method_names)
output:
['method1', 'method2']
None of the above worked for me.
I've encountered this problem while writing pytests.
The only workaround I found was to:
1- create another directory and place all my .py files there
2- create a separate directory for my pytests and then import the classes I'm interested in
This allowed me to get up-to-date methods within the class - you can change the method names and then use print(dir(class)) to confirm it.
For my use case, I needed to distinguish between class methods, static methods, properties, and instance methods. The inspect module confuses the issue a bit (particularly with class methods and instance methods), so I used vars based on a comment on this SO question. The basic gist is to use vars to get the __dict__ attribute of the class, then filter based on various isinstance checks. For instance methods, I check that it is callable and not a class method. One caveat: this approach of using vars (or __dict__ for that matter) won't work with __slots__. Using Python 3.6.9 (because it's what the Docker image I'm using as my interpreter has):
class MethodAnalyzer:
    class_under_test = None

    @classmethod
    def get_static_methods(cls):
        if cls.class_under_test:
            return {
                k for k, v in vars(cls.class_under_test).items()
                if isinstance(v, staticmethod)
            }
        return {}

    @classmethod
    def get_class_methods(cls):
        if cls.class_under_test:
            return {
                k for k, v in vars(cls.class_under_test).items()
                if isinstance(v, classmethod)
            }
        return {}

    @classmethod
    def get_instance_methods(cls):
        if cls.class_under_test:
            return {
                k for k, v in vars(cls.class_under_test).items()
                if callable(v) and not isinstance(v, classmethod)
            }
        return {}

    @classmethod
    def get_properties(cls):
        if cls.class_under_test:
            return {
                k for k, v in vars(cls.class_under_test).items()
                if isinstance(v, property)
            }
        return {}
To see it in action, I created this little test class:
class Foo:
    @staticmethod
    def bar(baz):
        print(baz)

    @property
    def bleep(self):
        return 'bloop'

    @classmethod
    def bork(cls):
        return cls.__name__

    def flank(self):
        return 'on your six'
then did:
MethodAnalyzer.class_under_test = Foo
print(MethodAnalyzer.get_instance_methods())
print(MethodAnalyzer.get_class_methods())
print(MethodAnalyzer.get_static_methods())
print(MethodAnalyzer.get_properties())
which output
{'flank'}
{'bork'}
{'bar'}
{'bleep'}
In this example I'm discarding the actual methods, but if you needed to keep them you could just use a dict comprehension instead of a set comprehension:
{
    k: v for k, v in vars(cls.class_under_test).items()
    if callable(v) and not isinstance(v, classmethod)
}
This is just an observation. "encode" seems to be a method for string objects
str_1 = 'a'
str_1.encode('utf-8')
>>> b'a'
However, if str_1 is inspected for methods, an empty list is returned:
inspect.getmembers(str_1, predicate=inspect.ismethod)
>>> []
So, maybe I am wrong, but the issue does not seem to be simple.
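The likely explanation is that str methods are implemented in C, so inspect.ismethod (which only matches Python-level bound methods) rejects them; the broader inspect.isroutine predicate does pick them up. A small sketch:
import inspect

str_1 = 'a'

# Built-in methods are not Python-level bound methods...
print(inspect.getmembers(str_1, predicate=inspect.ismethod))   # []

# ...but they are routines, so isroutine finds them:
routines = dict(inspect.getmembers(str_1, predicate=inspect.isroutine))
print('encode' in routines)                                    # True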
To produce a list of methods, put the name of each method in a list without the usual parentheses. To call a method, take it from the list and attach the parentheses.
def methodA():
    print("# MethodA")

def methodB():
    print("# methodB")

a = []
a.append(methodA)
a.append(methodB)

for item in a:
    item()
Just like this:
import pprint
pprint.pprint([x for x in dir(list) if not x.startswith("_")])
class CPerson:
    def __init__(self, age):
        self._age = age

    def run(self):
        pass

    @property
    def age(self): return self._age

    @staticmethod
    def my_static_method(): print("Life is short, you need Python")

    @classmethod
    def say(cls, msg): return msg

test_class = CPerson
# print(dir(test_class))  # list all the fields and methods of your object
print([(name, t) for name, t in test_class.__dict__.items() if type(t).__name__ == 'function' and not name.startswith('__')])
print([(name, t) for name, t in test_class.__dict__.items() if type(t).__name__ != 'function' and not name.startswith('__')])
output
[('run', <function CPerson.run at 0x0000000002AD3268>)]
[('age', <property object at 0x0000000002368688>), ('my_static_method', <staticmethod object at 0x0000000002ACBD68>), ('say', <classmethod object at 0x0000000002ACF0B8>)]
If you want to list the public names exported by a module (note that numpy.random is a module, not a class), you can print its __all__ attribute:
import numpy as np
print(np.random.__all__)

python get all class level attributes on an instance, including parent class [duplicate]

Suppose we have the following class hierarchy:
class ClassA:
    @property
    def foo(self): return "hello"

class ClassB(ClassA):
    @property
    def bar(self): return "world"
If I explore __dict__ on ClassB like so, I only see the bar attribute:
for name, _ in ClassB.__dict__.items():
    if name.startswith("__"):
        continue
    print(name)
Output is bar
I can roll my own means to get attributes on not only the specified type but its ancestors. However, my question is whether there's already a way in Python for me to do this without reinventing the wheel.
def return_attributes_including_inherited(type):
    results = []
    return_attributes_including_inherited_helper(type, results)
    return results

def return_attributes_including_inherited_helper(type, attributes):
    for name, attribute_as_object in type.__dict__.items():
        if name.startswith("__"):
            continue
        attributes.append(name)
    for base_type in type.__bases__:
        return_attributes_including_inherited_helper(base_type, attributes)
Running my code as follows...
for attribute_name in return_attributes_including_inherited(ClassB):
print(attribute_name)
... gives back both bar and foo.
Note that I'm simplifying some things: name collisions, using items() when for this example I could use dict, skipping over anything that starts with __, ignoring the possibility that two ancestors themselves have a common ancestor, etc.
EDIT1 - I tried to keep the example simple. But I really want both the attribute name and the attribute reference for each class and ancestor class. One of the answers below has me on a better track, I'll post some better code when I get it to work.
EDIT2 - This does what I want and is very succinct. It's based on Eli's answer below.
def get_attributes(type):
    attributes = set(type.__dict__.items())
    for type in type.__mro__:
        attributes.update(type.__dict__.items())
    return attributes
It gives back both the attribute names and their references.
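For example, a quick usage sketch against the ClassA/ClassB hierarchy above (the result is a set, so the order may vary):
attrs = get_attributes(ClassB)
names = sorted(name for name, value in attrs if not name.startswith("__"))
print(names)  # ['bar', 'foo'] - includes the property inherited from ClassA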
EDIT3 - One of the answers below suggested using inspect.getmembers. This appears very useful because it's like __dict__, only it operates on ancestor classes as well.
Since a large part of what I was trying to do was find attributes marked with a particular descriptor, including those on ancestor classes, here is some code that would help do that in case it helps anyone:
import inspect

class MyCustomDescriptor:
    # This is greatly oversimplified
    def __init__(self, foo, bar):
        self._foo = foo
        self._bar = bar
    def __call__(self, decorated_function):
        return self
    def __get__(self, instance, type):
        if not instance:
            return self
        return 10

class ClassA:
    @property
    def foo(self): return "hello"
    @MyCustomDescriptor(foo="a", bar="b")
    def bar(self): pass
    @MyCustomDescriptor(foo="c", bar="d")
    def baz(self): pass

class ClassB(ClassA):
    @property
    def something_we_dont_care_about(self): return "world"
    @MyCustomDescriptor(foo="e", bar="f")
    def blah(self): pass

# This will get attributes on the specified type (class) that are of matching_attribute_type.
# It just returns the attributes themselves, not their names.
def get_attributes_of_matching_type(type, matching_attribute_type):
    return_value = []
    for member in inspect.getmembers(type):
        member_name = member[0]
        member_instance = member[1]
        if isinstance(member_instance, matching_attribute_type):
            return_value.append(member_instance)
    return return_value

# This will return a dictionary of name & instance of attributes on type that are of matching_attribute_type
# (useful when you're looking for attributes marked with a particular descriptor)
def get_attribute_name_and_instance_of_matching_type(type, matching_attribute_type):
    return_value = {}
    for member in inspect.getmembers(type):
        member_name = member[0]
        member_instance = member[1]
        if isinstance(member_instance, matching_attribute_type):
            return_value[member_name] = member_instance
    return return_value
You should use python's inspect module for any such introspective capabilities.
...
>>> class ClassC(ClassB):
...     def baz(self):
...         return "hiya"
...
>>> import inspect
>>> for attr in inspect.getmembers(ClassC):
...     print attr
...
('__doc__', None)
('__module__', '__main__')
('bar', <property object at 0x10046bf70>)
('baz', <unbound method ClassC.baz>)
('foo', <property object at 0x10046bf18>)
Read more about the inspect module here.
You want to use dir:
for attr in dir(ClassB):
    print attr
Sadly there isn't a single composite object. Every attribute access for a (normal) python object first checks obj.__dict__, then the attributes of all its base classes; while there are some internal caches and optimizations, there isn't a single object you can access.
That said, one thing that could improve your code is to use cls.__mro__ instead of cls.__bases__... instead of the class's immediate parents, cls.__mro__ contains ALL the ancestors of the class, in the exact order Python would search, with all common ancestors occurring only once. That would also allow your type-searching method to be non-recursive. Loosely...
def get_attrs(obj):
    attrs = set(obj.__dict__)
    for cls in obj.__class__.__mro__:
        attrs.update(cls.__dict__)
    return sorted(attrs)
... does a fair approximation of the default dir(obj) implementation.
Here is a function I wrote, back in the day. The best answer is using the inspect module, as using __dict__ gives us ALL functions (ours + inherited) and (ALL?) data members AND properties. Where inspect gives us enough information to weed out what we don't want.
def _inspect(a, skipFunctionsAlways=True, skipMagic=True):
    """inspects object attributes, removing all the standard methods from 'object',
    and (optionally) __magic__ cruft.

    By default this routine skips __magic__ functions, but if you want these on
    pass False in as the skipMagic parameter.

    By default this routine skips functions, but if you want to see all the functions,
    pass in False to the skipFunctionsAlways function. This works together with the
    skipMagic parameter: if the latter is True, you won't see __magic__ methods.

    If skipFunctionsAlways = False and skipMagic = False, you'll see all the __magic__
    methods declared for the object - including __magic__ functions declared by Object

    NOT meant to be a comprehensive list of every object attribute - instead, a
    list of every object attribute WE (not Python) defined. For a complete list
    of everything call inspect.getmembers directly"""
    objType = type(object)

    def weWantIt(obj):
        # return type(a) != objType
        output = True
        if (skipFunctionsAlways):
            output = not (inspect.isbuiltin(obj))  # not a built in
        asStr = ""
        if isinstance(obj, types.MethodType):
            if skipFunctionsAlways:  # never mind, we don't want it, get out.
                return False
            else:
                asStr = obj.__name__
                # get just the name of the function; we don't want the whole name, because we don't want to take something like:
                # bound method LotsOfThings.bob of <__main__.LotsOfThings object at 0x103dc70>
                # to be a special method because its module name is special
                # WD-rpw 02-23-2008
                # TODO: it would be great to be able to separate out superclass methods
                # maybe by getting the class out of the method then seeing if that attribute is in that class?
        else:
            asStr = str(obj)
        if (skipMagic):
            output = (asStr.find("__") == -1)  # not a __something__
        return (output)

    for value in inspect.getmembers(a, weWantIt):
        yield value
{k: getattr(ClassB, k) for k in dir(ClassB)}
Proper values (instead of <property object ...>) will be presented when using a ClassB instance.
And of course you can filter this by adding something like if not k.startswith('__') at the end.

There is no constant in Python, so what can be done to make a variable constant? [duplicate]

How do I declare a constant in Python?
In Java, we do:
public static final String CONST_NAME = "Name";
You cannot declare a variable or value as constant in Python.
To indicate to programmers that a variable is a constant, one usually writes it in upper case:
CONST_NAME = "Name"
To raise exceptions when constants are changed, see Constants in Python by Alex Martelli. Note that this is not commonly used in practice.
As of Python 3.8, there's a typing.Final variable annotation that will tell static type checkers (like mypy) that your variable shouldn't be reassigned. This is the closest equivalent to Java's final. However, it does not actually prevent reassignment:
from typing import Final
a: Final[int] = 1
# Executes fine, but mypy will report an error if you run mypy on this:
a = 2
There's no const keyword as in other languages; however, it is possible to create a property that has a "getter function" to read the data, but no "setter function" to re-write the data. This essentially protects the identifier from being changed.
Here is an alternative implementation using class property:
Note that the code is far from easy for a reader wondering about constants. See explanation below.
def constant(f):
    def fset(self, value):
        raise TypeError
    def fget(self):
        return f()
    return property(fget, fset)
class _Const(object):
    @constant
    def FOO():
        return 0xBAADFACE
    @constant
    def BAR():
        return 0xDEADBEEF

CONST = _Const()

print(hex(CONST.FOO))  # -> '0xbaadfaceL'
CONST.FOO = 0
## Traceback (most recent call last):
##   File "example1.py", line 22, in <module>
##     CONST.FOO = 0
##   File "example1.py", line 5, in fset
##     raise TypeError
## TypeError
Code Explanation:
Define a function constant that takes an expression, and uses it to construct a "getter" - a function that solely returns the value of the expression.
The setter function raises a TypeError so it's read-only
Use the constant function we just created as a decorator to quickly define read-only properties.
And in some other more old-fashioned way:
(The code is quite tricky, more explanations below)
class _Const(object):
    def FOO():
        def fset(self, value):
            raise TypeError
        def fget(self):
            return 0xBAADFACE
        return property(**locals())
    FOO = FOO()  # Define property.

CONST = _Const()

print(hex(CONST.FOO))  # -> '0xbaadfaceL'
CONST.FOO = 0
## Traceback (most recent call last):
##   File "example2.py", line 16, in <module>
##     CONST.FOO = 0
##   File "example2.py", line 6, in fset
##     raise TypeError
## TypeError
To define the identifier FOO, first define two functions (fset, fget - the names are my choice).
Then use the built-in property function to construct an object that can be "set" or "get".
Note that the property function's first two parameters are named fset and fget.
Use the fact that we chose these very names for our own getter and setter, and create a keyword dictionary using the ** (double asterisk) applied to all the local definitions of that scope, to pass the parameters to the property function.
In Python, instead of the language enforcing something, people use naming conventions, e.g. __method for private methods and _method for protected methods.
So in the same manner you can simply declare the constant as all caps, e.g.:
MY_CONSTANT = "one"
If you want this constant to never change, you can hook into attribute access and do tricks, but a simpler approach is to declare a function:
def MY_CONSTANT():
    return "one"
Only problem is everywhere you will have to do MY_CONSTANT(), but again MY_CONSTANT = "one" is the correct way in Python (usually).
You can also use namedtuple() to create constants:
>>> from collections import namedtuple
>>> Constants = namedtuple('Constants', ['pi', 'e'])
>>> constants = Constants(3.14, 2.718)
>>> constants.pi
3.14
>>> constants.pi = 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
I've recently found a very succinct update to this which automatically raises meaningful error messages and prevents access via __dict__:
class CONST(object):
    __slots__ = ()
    FOO = 1234

CONST = CONST()

# ----------
print(CONST.FOO)              # 1234
CONST.FOO = 4321              # AttributeError: 'CONST' object attribute 'FOO' is read-only
CONST.__dict__['FOO'] = 4321  # AttributeError: 'CONST' object has no attribute '__dict__'
CONST.BAR = 5678              # AttributeError: 'CONST' object has no attribute 'BAR'
We redefine CONST as an instance of itself and then use __slots__ to ensure that no additional attributes can be added. This also removes the __dict__ access route. Of course, the whole object can still be redefined.
Edit - Original solution
I'm probably missing a trick here, but this seems to work for me:
class CONST(object):
    FOO = 1234
    def __setattr__(self, *_):
        pass

CONST = CONST()

# ----------
print CONST.FOO    # 1234
CONST.FOO = 4321
CONST.BAR = 5678
print CONST.FOO    # Still 1234!
print CONST.BAR    # Oops AttributeError
Creating the instance allows the magic __setattr__ method to kick in and intercept attempts to set the FOO variable. You could throw an exception here if you wanted to. Instantiating the instance over the class name prevents access directly via the class.
It's a total pain for one value, but you could attach lots of values to your CONST object. Having an all-uppercase class name also seems a bit grotty, but I think it's quite succinct overall.
Python doesn't have constants.
Perhaps the easiest alternative is to define a function for it:
def MY_CONSTANT():
return 42
MY_CONSTANT() now has all the functionality of a constant (plus some annoying braces).
Properties are one way to create constants. You can do it by declaring a getter property, but ignoring the setter. For example:
class MyFinalProperty(object):
    @property
    def name(self):
        return "John"
You can have a look at an article I've written to find more ways to use Python properties.
In addition to the two top answers (just use variables with UPPERCASE names, or use properties to make the values read-only), I want to mention that it's possible to use metaclasses in order to implement named constants. I provide a very simple solution using metaclasses at GitHub which may be helpful if you want the values to be more informative about their type/name:
>>> from named_constants import Constants
>>> class Colors(Constants):
... black = 0
... red = 1
... white = 15
...
>>> c = Colors.black
>>> c == 0
True
>>> c
Colors.black
>>> c.name()
'black'
>>> Colors(0) is c
True
This is slightly more advanced Python, but still very easy to use and handy. (The module has some more features, including constants being read-only, see its README.)
There are similar solutions floating around in various repositories, but to the best of my knowledge they either lack one of the fundamental features that I would expect from constants (like being constant, or being of arbitrary type), or they have esoteric features added that make them less generally applicable. But YMMV, I would be grateful for feedback. :-)
Edit: Added sample code for Python 3
Note: this other answer looks like it provides a much more complete implementation similar to the following (with more features).
First, make a metaclass:
class MetaConst(type):
    def __getattr__(cls, key):
        return cls[key]
    def __setattr__(cls, key, value):
        raise TypeError
This prevents static properties from being changed. Then make another class that uses that metaclass:
class Const(object):
    __metaclass__ = MetaConst
    def __getattr__(self, name):
        return self[name]
    def __setattr__(self, name, value):
        raise TypeError
Or, if you're using Python 3:
class Const(object, metaclass=MetaConst):
    def __getattr__(self, name):
        return self[name]
    def __setattr__(self, name, value):
        raise TypeError
This should prevent instance props from being changed. To use it, inherit:
class MyConst(Const):
    A = 1
    B = 2
Now the props, accessed directly or via an instance, should be constant:
MyConst.A
# 1
my_const = MyConst()
my_const.A
# 1
MyConst.A = 'changed'
# TypeError
my_const.A = 'changed'
# TypeError
Here's an example of above in action. Here's another example for Python 3.
PEP 591 has the 'final' qualifier. Enforcement is down to the type checker.
So you can do:
from typing import Final

MY_CONSTANT: Final = 12407
Note: the Final qualifier is only available from Python 3.8 onwards (in the typing module).
from enum import Enum

class StringConsts(str, Enum):
    ONE = 'one'
    TWO = 'two'

print(f'Truth is {StringConsts.ONE == "one"}')  # Truth is True
StringConsts.ONE = "one"  # Error: Cannot reassign
This mixin of Enum and str gives you the power of not having to reimplement setattr (through Enum) and comparison to other str objects (through str).
This might deprecate http://code.activestate.com/recipes/65207-constants-in-python/?in=user-97991 completely.
I declare constant values using frozen data class like this:
from dataclasses import dataclass

@dataclass(frozen=True)
class _Const:
    SOME_STRING = 'some_string'
    SOME_INT = 5

Const = _Const()

# In another file, import Const and try
print(Const.SOME_STRING)  # ITS OK!
Const.SOME_INT = 6  # dataclasses.FrozenInstanceError: cannot assign to field 'SOME_INT'
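One caveat worth noting: without type annotations those names are plain class attributes rather than real dataclass fields. If you want actual fields, a hedged variant with annotations (the class name _TypedConst is made up for illustration) would be:
from dataclasses import dataclass

@dataclass(frozen=True)
class _TypedConst:
    SOME_STRING: str = 'some_string'
    SOME_INT: int = 5

TypedConst = _TypedConst()
print(TypedConst.SOME_INT)  # 5
TypedConst.SOME_INT = 6     # dataclasses.FrozenInstanceError: cannot assign to field 'SOME_INT'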
You can use a namedtuple as a workaround to effectively create a constant that works the same way as a static final variable in Java (a Java "constant"). As workarounds go, it's sort of elegant. (A more elegant approach would be to simply improve the Python language --- what sort of language lets you redefine math.pi? -- but I digress.)
(As I write this, I realize another answer to this question mentioned namedtuple, but I'll continue here because I'll show a syntax that more closely parallels what you'd expect in Java, as there is no need to create a named type as namedtuple forces you to do.)
Following your example, you'll remember that in Java we must define the constant inside some class; because you didn't mention a class name, let's call it Foo. Here's the Java class:
public class Foo {
    public static final String CONST_NAME = "Name";
}
Here's the equivalent Python.
from collections import namedtuple
Foo = namedtuple('_Foo', 'CONST_NAME')('Name')
The key point I want to add here is that you don't need a separate Foo type (an "anonymous named tuple" would be nice, even though that sounds like an oxymoron), so we name our namedtuple _Foo so that hopefully it won't escape to importing modules.
The second point here is that we immediately create an instance of the nametuple, calling it Foo; there's no need to do this in a separate step (unless you want to). Now you can do what you can do in Java:
>>> Foo.CONST_NAME
'Name'
But you can't assign to it:
>>> Foo.CONST_NAME = 'bar'
…
AttributeError: can't set attribute
Acknowledgement: I thought I invented the namedtuple approach, but then I see that someone else gave a similar (although less compact) answer. Then I also noticed What are "named tuples" in Python?, which points out that sys.version_info is now a namedtuple, so perhaps the Python standard library already came up with this idea much earlier.
Note that unfortunately (this still being Python), you can erase the entire Foo assignment altogether:
>>> Foo = 'bar'
(facepalm)
But at least we're preventing the Foo.CONST_NAME value from being changed, and that's better than nothing. Good luck.
Here is an implementation of a "Constants" class, which creates instances with read-only (constant) attributes. E.g. can use Nums.PI to get a value that has been initialized as 3.14159, and Nums.PI = 22 raises an exception.
# ---------- Constants.py ----------
class Constants(object):
    """
    Create objects with read-only (constant) attributes.
    Example:
        Nums = Constants(ONE=1, PI=3.14159, DefaultWidth=100.0)
        print 10 + Nums.PI
        print '----- Following line is deliberate ValueError -----'
        Nums.PI = 22
    """
    def __init__(self, *args, **kwargs):
        self._d = dict(*args, **kwargs)

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)

    # NOTE: This is only called if self lacks the attribute.
    # So it does not interfere with get of 'self._d', etc.
    def __getattr__(self, name):
        return self._d[name]

    # ASSUMES '_..' attribute is OK to set. Need this to initialize 'self._d', etc.
    # If used as keys, they won't be constant.
    def __setattr__(self, name, value):
        if (name[0] == '_'):
            super(Constants, self).__setattr__(name, value)
        else:
            raise ValueError("setattr while locked", self)

if (__name__ == "__main__"):
    # Usage example.
    Nums = Constants(ONE=1, PI=3.14159, DefaultWidth=100.0)
    print 10 + Nums.PI
    print '----- Following line is deliberate ValueError -----'
    Nums.PI = 22
Thanks to @MikeGraham's FrozenDict, which I used as a starting point. Changed, so instead of Nums['ONE'] the usage syntax is Nums.ONE.
And thanks to @Raufio's answer, for the idea to override __setattr__.
Or for an implementation with more functionality, see @Hans_meine's named_constants at GitHub.
A tuple technically qualifies as a constant, as a tuple will raise an error if you try to change one of its values. If you want to declare a tuple with one value, then place a comma after its only value, like this:
my_tuple = (0,)  # or any other value
To check this variable's value, use something similar to this:
if my_tuple[0] == 0:
    # Code goes here
If you attempt to change this value, an error will be raised.
Here is a collection of idioms that I created as an attempt to improve on some of the already available answers.
I know the use of constants is not Pythonic, and you should not do this at home!
However, Python is such a dynamic language! This forum shows how it is possible to create constructs that look and feel like constants. This answer has as its primary purpose to explore what can be expressed by the language.
Please do not be too harsh with me :-).
For more details I wrote an accompanying blog post about these idioms.
In this post, I will call a constant variable a constant reference to a value (immutable or otherwise). Moreover, I say that a variable has a frozen value when it references a mutable object whose value(s) client code cannot update.
A space of constants (SpaceConstants)
This idiom creates what looks like a namespace of constant variables (a.k.a. SpaceConstants). It is a modification of a code snippet by Alex Martelli to avoid the use of module objects. In particular, this modification uses what I call a class factory because within SpaceConstants function, a class called SpaceConstants is defined, and an instance of it is returned.
I explored the use of class factory to implement a policy-based design look-alike in Python in stackoverflow and also in a blogpost.
def SpaceConstants():
    def setattr(self, name, value):
        if hasattr(self, name):
            raise AttributeError(
                "Cannot reassign members"
            )
        self.__dict__[name] = value
    cls = type('SpaceConstants', (), {
        '__setattr__': setattr
    })
    return cls()

sc = SpaceConstants()
print(sc.x)  # raise "AttributeError: 'SpaceConstants' object has no attribute 'x'"
sc.x = 2  # bind attribute x
print(sc.x)  # print "2"
sc.x = 3  # raise "AttributeError: Cannot reassign members"
sc.y = {'name': 'y', 'value': 2}  # bind attribute y
print(sc.y)  # print "{'name': 'y', 'value': 2}"
sc.y['name'] = 'yprime'  # mutable object can be changed
print(sc.y)  # print "{'name': 'yprime', 'value': 2}"
sc.y = {}  # raise "AttributeError: Cannot reassign members"
A space of frozen values (SpaceFrozenValues)
This next idiom is a modification of the SpaceConstants in where referenced mutable objects are frozen. This implementation exploits what I call shared closure between setattr and getattr functions. The value of the mutable object is copied and referenced by variable cache define inside of the function shared closure. It forms what I call a closure protected copy of a mutable object.
You must be careful in using this idiom because getattr returns the value of cache by doing a deep copy. This operation could have a significant performance impact on large objects!
from copy import deepcopy

def SpaceFrozenValues():
    cache = {}
    def setattr(self, name, value):
        nonlocal cache
        if name in cache:
            raise AttributeError(
                "Cannot reassign members"
            )
        cache[name] = deepcopy(value)
    def getattr(self, name):
        nonlocal cache
        if name not in cache:
            raise AttributeError(
                "Object has no attribute '{}'".format(name)
            )
        return deepcopy(cache[name])
    cls = type('SpaceFrozenValues', (), {
        '__getattr__': getattr,
        '__setattr__': setattr
    })
    return cls()

fv = SpaceFrozenValues()
print(fv.x)  # AttributeError: Object has no attribute 'x'
fv.x = 2  # bind attribute x
print(fv.x)  # print "2"
fv.x = 3  # raise "AttributeError: Cannot reassign members"
fv.y = {'name': 'y', 'value': 2}  # bind attribute y
print(fv.y)  # print "{'name': 'y', 'value': 2}"
fv.y['name'] = 'yprime'  # you can try to change mutable objects
print(fv.y)  # print "{'name': 'y', 'value': 2}"
fv.y = {}  # raise "AttributeError: Cannot reassign members"
A constant space (ConstantSpace)
This idiom is an immutable namespace of constant variables or ConstantSpace. It is a combination of awesomely simple Jon Betts' answer in stackoverflow with a class factory.
def ConstantSpace(**args):
    args['__slots__'] = ()
    cls = type('ConstantSpace', (), args)
    return cls()

cs = ConstantSpace(
    x = 2,
    y = {'name': 'y', 'value': 2}
)
print(cs.x)  # print "2"
cs.x = 3  # raise "AttributeError: 'ConstantSpace' object attribute 'x' is read-only"
print(cs.y)  # print "{'name': 'y', 'value': 2}"
cs.y['name'] = 'yprime'  # mutable object can be changed
print(cs.y)  # print "{'name': 'yprime', 'value': 2}"
cs.y = {}  # raise "AttributeError: 'ConstantSpace' object attribute 'x' is read-only"
cs.z = 3  # raise "AttributeError: 'ConstantSpace' object has no attribute 'z'"
A frozen space (FrozenSpace)
This idiom is an immutable namespace of frozen variables or FrozenSpace. It is derived from the previous pattern by making each variable a protected property by closure of the generated FrozenSpace class.
from copy import deepcopy

def FreezeProperty(value):
    cache = deepcopy(value)
    return property(
        lambda self: deepcopy(cache)
    )

def FrozenSpace(**args):
    args = {k: FreezeProperty(v) for k, v in args.items()}
    args['__slots__'] = ()
    cls = type('FrozenSpace', (), args)
    return cls()

fs = FrozenSpace(
    x = 2,
    y = {'name': 'y', 'value': 2}
)
print(fs.x)  # print "2"
fs.x = 3  # raise "AttributeError: 'FrozenSpace' object attribute 'x' is read-only"
print(fs.y)  # print "{'name': 'y', 'value': 2}"
fs.y['name'] = 'yprime'  # try to change mutable object
print(fs.y)  # print "{'name': 'y', 'value': 2}"
fs.y = {}  # raise "AttributeError: 'FrozenSpace' object attribute 'x' is read-only"
fs.z = 3  # raise "AttributeError: 'FrozenSpace' object has no attribute 'z'"
I would make a class that overrides the __setattr__ method of the base object class and wrap my constants with that, note that I'm using python 2.7:
class const(object):
    def __init__(self, val):
        super(const, self).__setattr__("value", val)
    def __setattr__(self, name, val):
        raise ValueError("Trying to change a constant value", self)
To wrap a string:
>>> constObj = const("Try to change me")
>>> constObj.value
'Try to change me'
>>> constObj.value = "Changed"
Traceback (most recent call last):
...
ValueError: Trying to change a constant value
>>> constObj2 = const(" or not")
>>> mutableObj = constObj.value + constObj2.value
>>> mutableObj #just a string
'Try to change me or not'
It's pretty simple, but if you want to use your constants the same as you would a non-constant object (without using constObj.value), it will be a bit more intensive. It's possible that this could cause problems, so it might be best to keep the .value to show and know that you are doing operations with constants (maybe not the most 'pythonic' way though).
Unfortunately, Python has no constants yet, and it is a shame. ES6 already added support for constants to JavaScript (https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Statements/const), since it is a very useful thing in any programming language.
As stated in other answers, the Python community uses the convention of uppercase variable names for constants, but it does not protect against arbitrary errors in code.
If you like, you may find the following single-file solution useful
(see the docstrings for how to use it).
file constants.py
import collections

__all__ = ('const', )

class Constant(object):
    """
    Implementation strict constants in Python 3.

    A constant can be set up, but can not be changed or deleted.
    Value of constant may any immutable type, as well as list or set.
    Besides if value of a constant is list or set, it will be converted in an immutable type as next:
        list -> tuple
        set -> frozenset
    Dict as value of a constant has no support.

    >>> const = Constant()
    >>> del const.temp
    Traceback (most recent call last):
    NameError: name 'temp' is not defined
    >>> const.temp = 1
    >>> const.temp = 88
    Traceback (most recent call last):
    ...
    TypeError: Constanst can not be changed
    >>> del const.temp
    Traceback (most recent call last):
    ...
    TypeError: Constanst can not be deleted
    >>> const.I = ['a', 1, 1.2]
    >>> print(const.I)
    ('a', 1, 1.2)
    >>> const.F = {1.2}
    >>> print(const.F)
    frozenset([1.2])
    >>> const.D = dict()
    Traceback (most recent call last):
    ...
    TypeError: dict can not be used as constant
    >>> del const.UNDEFINED
    Traceback (most recent call last):
    ...
    NameError: name 'UNDEFINED' is not defined
    >>> const()
    {'I': ('a', 1, 1.2), 'temp': 1, 'F': frozenset([1.2])}
    """
    def __setattr__(self, name, value):
        """Declaration a constant with value. If mutable - it will be converted to immutable, if possible.
        If the constant already exists, then made prevent againt change it."""
        if name in self.__dict__:
            raise TypeError('Constanst can not be changed')
        if not isinstance(value, collections.Hashable):
            if isinstance(value, list):
                value = tuple(value)
            elif isinstance(value, set):
                value = frozenset(value)
            elif isinstance(value, dict):
                raise TypeError('dict can not be used as constant')
            else:
                raise ValueError('Muttable or custom type is not supported')
        self.__dict__[name] = value

    def __delattr__(self, name):
        """Deny against deleting a declared constant."""
        if name in self.__dict__:
            raise TypeError('Constanst can not be deleted')
        raise NameError("name '%s' is not defined" % name)

    def __call__(self):
        """Return all constans."""
        return self.__dict__

const = Constant()

if __name__ == '__main__':
    import doctest
    doctest.testmod()
If this is not enough, see full testcase for it.
import decimal
import uuid
import datetime
import unittest

from ..constants import Constant

class TestConstant(unittest.TestCase):
    """
    Test for implementation constants in the Python
    """
    def setUp(self):
        self.const = Constant()

    def tearDown(self):
        del self.const

    def test_create_constant_with_different_variants_of_name(self):
        self.const.CONSTANT = 1
        self.assertEqual(self.const.CONSTANT, 1)
        self.const.Constant = 2
        self.assertEqual(self.const.Constant, 2)
        self.const.ConStAnT = 3
        self.assertEqual(self.const.ConStAnT, 3)
        self.const.constant = 4
        self.assertEqual(self.const.constant, 4)
        self.const.co_ns_ta_nt = 5
        self.assertEqual(self.const.co_ns_ta_nt, 5)
        self.const.constant1111 = 6
        self.assertEqual(self.const.constant1111, 6)

    def test_create_and_change_integer_constant(self):
        self.const.INT = 1234
        self.assertEqual(self.const.INT, 1234)
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.INT = .211

    def test_create_and_change_float_constant(self):
        self.const.FLOAT = .1234
        self.assertEqual(self.const.FLOAT, .1234)
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.FLOAT = .211

    def test_create_and_change_list_constant_but_saved_as_tuple(self):
        self.const.LIST = [1, .2, None, True, datetime.date.today(), [], {}]
        self.assertEqual(self.const.LIST, (1, .2, None, True, datetime.date.today(), [], {}))
        self.assertTrue(isinstance(self.const.LIST, tuple))
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.LIST = .211

    def test_create_and_change_none_constant(self):
        self.const.NONE = None
        self.assertEqual(self.const.NONE, None)
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.NONE = .211

    def test_create_and_change_boolean_constant(self):
        self.const.BOOLEAN = True
        self.assertEqual(self.const.BOOLEAN, True)
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.BOOLEAN = False

    def test_create_and_change_string_constant(self):
        self.const.STRING = "Text"
        self.assertEqual(self.const.STRING, "Text")
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.STRING += '...'
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.STRING = 'TEst1'

    def test_create_dict_constant(self):
        with self.assertRaisesRegexp(TypeError, 'dict can not be used as constant'):
            self.const.DICT = {}

    def test_create_and_change_tuple_constant(self):
        self.const.TUPLE = (1, .2, None, True, datetime.date.today(), [], {})
        self.assertEqual(self.const.TUPLE, (1, .2, None, True, datetime.date.today(), [], {}))
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.TUPLE = 'TEst1'

    def test_create_and_change_set_constant(self):
        self.const.SET = {1, .2, None, True, datetime.date.today()}
        self.assertEqual(self.const.SET, {1, .2, None, True, datetime.date.today()})
        self.assertTrue(isinstance(self.const.SET, frozenset))
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.SET = 3212

    def test_create_and_change_frozenset_constant(self):
        self.const.FROZENSET = frozenset({1, .2, None, True, datetime.date.today()})
        self.assertEqual(self.const.FROZENSET, frozenset({1, .2, None, True, datetime.date.today()}))
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.FROZENSET = True

    def test_create_and_change_date_constant(self):
        self.const.DATE = datetime.date(1111, 11, 11)
        self.assertEqual(self.const.DATE, datetime.date(1111, 11, 11))
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.DATE = True

    def test_create_and_change_datetime_constant(self):
        self.const.DATETIME = datetime.datetime(2000, 10, 10, 10, 10)
        self.assertEqual(self.const.DATETIME, datetime.datetime(2000, 10, 10, 10, 10))
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.DATETIME = None

    def test_create_and_change_decimal_constant(self):
        self.const.DECIMAL = decimal.Decimal(13123.12312312321)
        self.assertEqual(self.const.DECIMAL, decimal.Decimal(13123.12312312321))
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.DECIMAL = None

    def test_create_and_change_timedelta_constant(self):
        self.const.TIMEDELTA = datetime.timedelta(days=45)
        self.assertEqual(self.const.TIMEDELTA, datetime.timedelta(days=45))
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.TIMEDELTA = 1

    def test_create_and_change_uuid_constant(self):
        value = uuid.uuid4()
        self.const.UUID = value
        self.assertEqual(self.const.UUID, value)
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be changed'):
            self.const.UUID = []

    def test_try_delete_defined_const(self):
        self.const.VERSION = '0.0.1'
        with self.assertRaisesRegexp(TypeError, 'Constanst can not be deleted'):
            del self.const.VERSION

    def test_try_delete_undefined_const(self):
        with self.assertRaisesRegexp(NameError, "name 'UNDEFINED' is not defined"):
            del self.const.UNDEFINED

    def test_get_all_defined_constants(self):
        self.assertDictEqual(self.const(), {})
        self.const.A = 1
        self.assertDictEqual(self.const(), {'A': 1})
        self.const.B = "Text"
        self.assertDictEqual(self.const(), {'A': 1, 'B': "Text"})
Advantages:
1. Access to all constants for the whole project
2. Strict control of the values of constants
Drawbacks:
1. No support for custom types or the type 'dict'
Notes:
Tested with Python 3.4 and Python 3.5 (I use 'tox' for it)
Testing environment:
$ uname -a
Linux wlysenko-Aspire 3.13.0-37-generic #64-Ubuntu SMP Mon Sep 22 21:28:38 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
We can create a descriptor object.
class Constant:
    def __init__(self, value=None):
        self.value = value
    def __get__(self, instance, owner):
        return self.value
    def __set__(self, instance, value):
        raise ValueError("You can't change a constant")
1) If we wanted to work with constants at the instance level then:
class A:
    NULL = Constant()
    NUM = Constant(0xFF)

class B:
    NAME = Constant('bar')
    LISTA = Constant([0, 1, 'INFINITY'])
>>> obj=A()
>>> print(obj.NUM) #=> 255
>>> obj.NUM =100
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: You can't change a constant
2) If we wanted to create constants only at the class level, we could use a metaclass that serves as a container for our constants (our descriptor objects); all the classes that descend from it will inherit our constants (our descriptor objects) without any risk that they can be modified.
# metaclass of my class Foo
class FooMeta(type): pass
# class Foo
class Foo(metaclass=FooMeta): pass
# I create constants in my metaclass
FooMeta.NUM = Constant(0xff)
FooMeta.NAME = Constant('FOO')
>>> Foo.NUM #=> 255
>>> Foo.NAME #=> 'FOO'
>>> Foo.NUM = 0 #=> ValueError: You can't change a constant
If I create a subclass of Foo, this class will inherit the constants without the possibility of modifying them.
class Bar(Foo): pass
>>> Bar.NUM #=> 255
>>> Bar.NUM = 0 #=> ValueError: You can't change a constant
The Pythonic way of declaring "constants" is basically a module level variable:
RED = 1
GREEN = 2
BLUE = 3
And then write your classes or functions. Since constants are almost always integers, and they are also immutable in Python, you have very little chance of altering them.
Unless, of course, if you explicitly set RED = 2.
There is a cleaner way to do this with namedtuple:
from collections import namedtuple

def make_consts(name, **kwargs):
    return namedtuple(name, kwargs.keys())(**kwargs)
Usage Example
CONSTS = make_consts("baz1",
                     foo=1,
                     bar=2)
With exactly this approach you can namespace your constants.
Here's a trick if you want constants and don't care about their values:
Just define empty classes.
e.g.:
class RED:
    pass

class BLUE:
    pass
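A tiny usage sketch of that trick: the empty classes act as unique sentinels that you compare by identity (the set_color function is made up for illustration):
def set_color(color):
    if color is RED:
        return "stop"
    if color is BLUE:
        return "calm"
    raise ValueError("unknown color")

print(set_color(RED))   # stop
print(set_color(BLUE))  # calm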
There's no perfect way to do this. As I understand it most programmers will just capitalize the identifier, so PI = 3.142 can be readily understood to be a constant.
On the other hand, if you want something that actually acts like a constant, I'm not sure you'll find it. With anything you do there will always be some way of editing the "constant", so it won't really be a constant. Here's a very simple, dirty example:
def define(name, value):
    if (name + str(id(name))) not in globals():
        globals()[name + str(id(name))] = value

def constant(name):
    return globals()[name + str(id(name))]

define("PI", 3.142)
print(constant("PI"))
This looks like it will make a PHP-style constant.
In reality all it takes for someone to change the value is this:
globals()["PI"+str(id("PI"))] = 3.1415
This is the same for all the other solutions you'll find on here - even the clever ones that make a class and redefine the __setattr__ method - there will always be a way around them. That's just how Python is.
My recommendation is to avoid all the hassle and just capitalize your identifiers. It wouldn't really be a proper constant, but then again, nothing would be.
I tried different ways to create a real constant in Python, and perhaps I found a pretty good solution.
Example:
Create container for constants
>>> DAYS = Constants(
... MON=0,
... TUE=1,
... WED=2,
... THU=3,
... FRI=4,
... SAT=5,
... SUN=6
... )
Get value from container
>>> DAYS.MON
0
>>> DAYS['MON']
0
Represent with pure python data structures
>>> list(DAYS)
['WED', 'SUN', 'FRI', 'THU', 'MON', 'TUE', 'SAT']
>>> dict(DAYS)
{'WED': 2, 'SUN': 6, 'FRI': 4, 'THU': 3, 'MON': 0, 'TUE': 1, 'SAT': 5}
All constants are immutable
>>> DAYS.MON = 7
...
AttributeError: Immutable attribute
>>> del DAYS.MON
...
AttributeError: Immutable attribute
Autocomplete only for constants
>>> dir(DAYS)
['FRI', 'MON', 'SAT', 'SUN', 'THU', 'TUE', 'WED']
Sorting like list.sort
>>> DAYS.sort(key=lambda kv: kv[1], reverse=True)
>>> list(DAYS)
['SUN', 'SAT', 'FRI', 'THU', 'WED', 'TUE', 'MON']
Compatible with Python 2 and Python 3
Simple container for constants
from collections import OrderedDict
from copy import deepcopy
class Constants(object):
    """Container of constants"""
    __slots__ = ('__dict__',)
    def __init__(self, **kwargs):
        if list(filter(lambda x: not x.isupper(), kwargs)):
            raise AttributeError('Constant name should be uppercase.')
        super(Constants, self).__setattr__(
            '__dict__',
            OrderedDict(map(lambda x: (x[0], deepcopy(x[1])), kwargs.items()))
        )
    def sort(self, key=None, reverse=False):
        super(Constants, self).__setattr__(
            '__dict__',
            OrderedDict(sorted(self.__dict__.items(), key=key, reverse=reverse))
        )
    def __getitem__(self, name):
        return self.__dict__[name]
    def __len__(self):
        return len(self.__dict__)
    def __iter__(self):
        for name in self.__dict__:
            yield name
    def keys(self):
        return list(self)
    def __str__(self):
        return str(list(self))
    def __repr__(self):
        return '<%s: %s>' % (self.__class__.__name__, str(self.__dict__))
    def __dir__(self):
        return list(self)
    def __setattr__(self, name, value):
        raise AttributeError("Immutable attribute")
    def __delattr__(*_):
        raise AttributeError("Immutable attribute")
Python dictionaries are mutable, so they don't seem like a good way to declare constants:
>>> constants = {"foo":1, "bar":2}
>>> print constants
{'foo': 1, 'bar': 2}
>>> constants["bar"] = 3
>>> print constants
{'foo': 1, 'bar': 3}
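If a read-only, dict-like container is what you are after, the standard library's types.MappingProxyType (Python 3; a different technique from the plain dict above, sketched here) wraps a dict in a view that rejects item assignment, though whoever holds the underlying dict can still change it:
>>> import types
>>> _data = {"foo": 1, "bar": 2}
>>> constants = types.MappingProxyType(_data)
>>> constants["foo"]
1
>>> constants["bar"] = 3
Traceback (most recent call last):
  ...
TypeError: 'mappingproxy' object does not support item assignment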
In python, a constant is simply a variable with a name in all capitals, with words separated by the underscore character,
e.g
DAYS_IN_WEEK = 7
The value is mutable, as in you can change it. But given that the naming convention tells you it is a constant, why would you? I mean, it is your program after all!
This is the approach taken throughout Python. There is no private keyword for the same reason. Prefix the name with an underscore and you know it is intended to be private. Code can break the rule... just as a programmer could remove a private keyword anyway.
Python could have added a const keyword... but a programmer could remove the keyword and then change the constant if they wanted to. But why do that? If you want to break the rule, you could change the rule anyway. And why bother breaking the rule if the name makes the intention clear?
Maybe there is some unit test where it makes sense to change the value, to see what happens for an 8-day week, even though in the real world the number of days in the week cannot be changed. If the language stopped you from making an exception in that one case where you need to break the rule, you would have to stop declaring it as a constant, even though it still is a constant in the application and there is just this one test case that sees what happens if it is changed.
The all upper case name tells you it is intended to be a constant. That is what is important. Not a language forcing constraints on code you have the power to change anyway.
That is the philosophy of python.
(This was meant to be a comment on the answers here and there which mentioned namedtuple, but it got too long to fit into a comment, so here it goes.)
The namedtuple approach mentioned above is definitely innovative. For the sake of completeness, though, at the end of the NamedTuple section of its official documentation, it reads:
enumerated constants can be implemented with named tuples, but it is simpler and more efficient to use a simple class declaration:
class Status:
    open, pending, closed = range(3)
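A quick sketch of that class-declaration style in use (the values are simply what range(3) unpacks to):
>>> Status.open, Status.pending, Status.closed
(0, 1, 2)
>>> Status.pending == 1
True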
In other words, the official documentation leans toward the practical approach rather than actually enforcing read-only behavior. I guess it becomes yet another example of the Zen of Python:
Simple is better than complex.
Practicality beats purity.
Maybe the pconst library will help you (see its GitHub page).
$ pip install pconst
from pconst import const
const.APPLE_PRICE = 100
const.APPLE_PRICE = 200
[Out] Constant value of "APPLE_PRICE" is not editable.
You can use tkinter's StringVar or IntVar, etc.; your "constant" is const_val, and a write trace resets it whenever someone tries to change it:
val = 'Stackoverflow'

def reverse(*args):
    const_val.set(val)

const_val = StringVar(value=val)
const_val.trace('w', reverse)
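A self-contained sketch of that idea, assuming tkinter is what the answer means by StringVar (names are illustrative; a Tk root and a display are required, trace_add is the Python 3 spelling of trace, and Tk suspends a variable's traces while its callback runs, so the reset does not recurse):
import tkinter as tk

root = tk.Tk()                       # a Tk root must exist before creating variables
val = 'Stackoverflow'
const_val = tk.StringVar(value=val)

def reset(*args):
    # Any write to const_val is immediately undone by restoring the original value.
    const_val.set(val)

const_val.trace_add('write', reset)
const_val.set('changed')
print(const_val.get())               # -> Stackoverflow (the trace put the value back)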
You can do it with collections.namedtuple and itertools:
import collections
import itertools
def Constants(Name, *Args, **Kwargs):
    t = collections.namedtuple(Name, itertools.chain(Args, Kwargs.keys()))
    return t(*itertools.chain(Args, Kwargs.values()))
>>> myConstants = Constants('MyConstants', 'One', 'Two', Three = 'Four')
>>> print myConstants.One
One
>>> print myConstants.Two
Two
>>> print myConstants.Three
Four
>>> myConstants.One = 'Two'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
In Python, constants do not exist, but you can indicate that a variable is a constant and must not be changed by adding CONST_ to the start of the variable name and stating that it is a constant in a comment:
myVariable = 0
CONST_daysInWeek = 7 # This is a constant - do not change its value.
CONSTANT_daysInMonth = 30 # This is also a constant - do not change this value.
Alternatively, you may create a function that acts like a constant:
def CONST_daysInWeek():
    return 7
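Calling the function then stands in for reading the constant (a small sketch):
>>> CONST_daysInWeek()
7
>>> CONST_daysInWeek() * 2
14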

Is there a way to access __dict__ (or something like it) that includes base classes?

Suppose we have the following class hierarchy:
class ClassA:
    @property
    def foo(self): return "hello"

class ClassB(ClassA):
    @property
    def bar(self): return "world"
If I explore __dict__ on ClassB like so, I only see the bar attribute:
for name, _ in ClassB.__dict__.items():
    if name.startswith("__"):
        continue
    print(name)
Output is bar
I can roll my own means to get attributes on not only the specified type but its ancestors. However, my question is whether there's already a way in Python for me to do this without reinventing the wheel.
def return_attributes_including_inherited(type):
    results = []
    return_attributes_including_inherited_helper(type, results)
    return results

def return_attributes_including_inherited_helper(type, attributes):
    for name, attribute_as_object in type.__dict__.items():
        if name.startswith("__"):
            continue
        attributes.append(name)
    for base_type in type.__bases__:
        return_attributes_including_inherited_helper(base_type, attributes)
Running my code as follows...
for attribute_name in return_attributes_including_inherited(ClassB):
    print(attribute_name)
... gives back both bar and foo.
Note that I'm simplifying some things: name collisions, using items() when keys() would do for this example, skipping over anything that starts with __, ignoring the possibility that two ancestors themselves have a common ancestor, etc.
EDIT1 - I tried to keep the example simple. But I really want both the attribute name and the attribute reference for each class and ancestor class. One of the answers below has me on a better track, I'll post some better code when I get it to work.
EDIT2 - This does what I want and is very succinct. It's based on Eli's answer below.
def get_attributes(type):
    attributes = set(type.__dict__.items())
    for cls in type.__mro__:
        attributes.update(cls.__dict__.items())
    return attributes
It gives back both the attribute names and their references.
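For instance, applied to the ClassB hierarchy from the question and filtered down to non-dunder names (a sketch):
>>> attrs = get_attributes(ClassB)
>>> sorted(name for name, value in attrs if not name.startswith("__"))
['bar', 'foo']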
EDIT3 - One of the answers below suggested using inspect.getmembers. This appears very useful because it's like __dict__, only it operates on ancestor classes as well.
Since a large part of what I was trying to do was find attributes marked with a particular descriptor, including those on ancestor classes, here is some code that would help do that in case it helps anyone:
class MyCustomDescriptor:
    # This is greatly oversimplified
    def __init__(self, foo, bar):
        self._foo = foo
        self._bar = bar
    def __call__(self, decorated_function):
        return self
    def __get__(self, instance, type):
        if not instance:
            return self
        return 10

class ClassA:
    @property
    def foo(self): return "hello"
    @MyCustomDescriptor(foo="a", bar="b")
    def bar(self): pass
    @MyCustomDescriptor(foo="c", bar="d")
    def baz(self): pass

class ClassB(ClassA):
    @property
    def something_we_dont_care_about(self): return "world"
    @MyCustomDescriptor(foo="e", bar="f")
    def blah(self): pass
# This will get attributes on the specified type (class) that are of matching_attribute_type. It just returns the attributes themselves, not their names.
def get_attributes_of_matching_type(type, matching_attribute_type):
    return_value = []
    for member in inspect.getmembers(type):
        member_name = member[0]
        member_instance = member[1]
        if isinstance(member_instance, matching_attribute_type):
            return_value.append(member_instance)
    return return_value
# This will return a dictionary of name & instance of attributes on type that are of matching_attribute_type (useful when you're looking for attributes marked with a particular descriptor)
def get_attribute_name_and_instance_of_matching_type(type, matching_attribute_type):
    return_value = {}
    for member in inspect.getmembers(type):
        member_name = member[0]
        member_instance = member[1]
        if isinstance(member_instance, matching_attribute_type):
            return_value[member_name] = member_instance
    return return_value
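A sketch of calling these helpers on the classes above (they assume import inspect at the top of the module; the object addresses will differ per run):
>>> import inspect
>>> get_attribute_name_and_instance_of_matching_type(ClassB, MyCustomDescriptor)
{'bar': <__main__.MyCustomDescriptor object at 0x...>,
 'baz': <__main__.MyCustomDescriptor object at 0x...>,
 'blah': <__main__.MyCustomDescriptor object at 0x...>}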
You should use Python's inspect module for any such introspective capabilities.
>>> class ClassC(ClassB):
...     def baz(self):
...         return "hiya"
...
>>> import inspect
>>> for attr in inspect.getmembers(ClassC):
...     print attr
...
('__doc__', None)
('__module__', '__main__')
('bar', <property object at 0x10046bf70>)
('baz', <unbound method ClassC.baz>)
('foo', <property object at 0x10046bf18>)
Read more about the inspect module here.
You want to use dir:
for attr in dir(ClassB):
    print attr
Sadly there isn't a single composite object. Every attribute access for a (normal) Python object first checks obj.__dict__, then the attributes of all its base classes; while there are some internal caches and optimizations, there isn't a single object you can access.
That said, one thing that could improve your code is to use cls.__mro__ instead of cls.__bases__... instead of the class's immediate parents, cls.__mro__ contains ALL the ancestors of the class, in the exact order Python would search, with all common ancestors occurring only once. That would also allow your type-searching method to be non-recursive. Loosely...
def get_attrs(obj):
    attrs = set(obj.__dict__)
    for cls in obj.__class__.__mro__:
        attrs.update(cls.__dict__)
    return sorted(attrs)
... does a fair approximation of the default dir(obj) implementation.
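For example, on an instance of the original ClassB from the question (the one with just the foo and bar properties), the non-dunder part of the result is (a sketch):
>>> b = ClassB()
>>> [name for name in get_attrs(b) if not name.startswith("__")]
['bar', 'foo']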
Here is a function I wrote back in the day. The best answer is to use the inspect module: using __dict__ gives us ALL functions (ours + inherited) and (ALL?) data members AND properties, where inspect gives us enough information to weed out what we don't want.
import inspect
import types

def _inspect(a, skipFunctionsAlways=True, skipMagic=True):
    """inspects object attributes, removing all the standard methods from 'object',
    and (optionally) __magic__ cruft.

    By default this routine skips __magic__ functions, but if you want these on
    pass False in as the skipMagic parameter.

    By default this routine skips functions, but if you want to see all the functions,
    pass in False as the skipFunctionsAlways parameter. This works together with the
    skipMagic parameter: if the latter is True, you won't see __magic__ methods.

    If skipFunctionsAlways = False and skipMagic = False, you'll see all the __magic__
    methods declared for the object - including __magic__ functions declared by object.

    NOT meant to be a comprehensive list of every object attribute - instead, a
    list of every object attribute WE (not Python) defined. For a complete list
    of everything call inspect.getmembers directly"""
    objType = type(object)

    def weWantIt(obj):
        #return type(a) != objType
        output = True
        if skipFunctionsAlways:
            output = not inspect.isbuiltin(obj)  # not a built-in
        asStr = ""
        if isinstance(obj, types.MethodType):
            if skipFunctionsAlways:  # never mind, we don't want it, get out.
                return False
            else:
                asStr = obj.__name__
                # get just the name of the function; we don't want the whole repr, because we don't want something like:
                # bound method LotsOfThings.bob of <__main__.LotsOfThings object at 0x103dc70>
                # to count as a special method just because its module name is special
                # WD-rpw 02-23-2008
                # TODO: it would be great to be able to separate out superclass methods
                # maybe by getting the class out of the method then seeing if that attribute is in that class?
        else:
            asStr = str(obj)
        if skipMagic:
            output = (asStr.find("__") == -1)  # not a __something__
        return output

    for value in inspect.getmembers(a, weWantIt):
        yield value
{k: getattr(ClassB, k) for k in dir(ClassB)}
Proper values (instead of <property object...>) will be presented when using a ClassB instance.
And of course you can filter this by adding things like if not k.startswith('__') at the end.
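For instance, filtering on the ClassB from the question gives roughly this (a sketch; the addresses will differ):
>>> {k: getattr(ClassB, k) for k in dir(ClassB) if not k.startswith('__')}
{'bar': <property object at 0x...>, 'foo': <property object at 0x...>}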

How do you make and access a hash in Python?

I know this question sounds very basic. But I can't find it using Google. I know that there are dictionaries that look like
o = {
    'a': 'b'
}
and they're accessed with o['a']. But what are the ones that are accessed as o.a called? I know they exist because I'm using optparse library and it returns an object accessible like that.
You cannot access dicts using the '.' unless you implement your own dict type by deriving from dict and providing access through the '.' notation by implementing the __getattribute__() API, which is in charge of performing attribute-style access.
See http://docs.python.org/reference/datamodel.html#object.__getattribute__
Looks to me like optparse simulates an attribute interface to a hidden dict, but there's a standard library type that does something sort of similar, if not really a dictionary: collections.namedtuple. I can't speak to the other mechanisms presented here.
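A quick sketch of that namedtuple idea (the Options name and field are illustrative):
>>> from collections import namedtuple
>>> Options = namedtuple('Options', ['a'])
>>> o = Options(a='b')
>>> o.a
'b'
>>> o._asdict()['a']   # dictionary-style view, if needed
'b'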
You can access the variables as a dictionary in "new style classes". I think you have to be a little careful though.
>>> class C(object): # python 2 code, inheriting from 'object' is automatic in 3
...     a = 5
...     def __init__(self):
...         self.j = 90
>>> c = C()
>>> c.__dict__
{'j': 90}
>>> print c.j
90
>>> print c.a
5
Notice that 'j' showed up in the dictionary and 'a' didn't. It looks to have something to do with how they are initialized. I'm a little confused by that behavior and wouldn't mind an explanation from a guru :D
Edit:
Doing a little more playing, it is apparent why they decided to go with the behavior above (but it is still a little strange):
>>> class C(object):
...     a = 5
...     def __init__(self):
...         self.j = 90
...         self.funct = range  # assigning a built-in function to an instance attribute
>>> c = C()
>>> c.__dict__
{'j': 90, 'funct': <built-in function range>}
I think it separates the class attributes (which would be the same for every instance) from the instance members (such as those initialized inside __init__). If I now do
>>> c.newvar = 234
>>> c.__dict__
{'j': 90, 'newvar': 234, 'funct': <built-in function range>}
You can see it is building a dictionary! Hopefully this helps :D
name.attribute is object access in Python, not dict access
__getattr__ is the method you want to implement, along with __setattr__. Here is a very simplistic example (no error checking, etc) that shows an object which allows both dictionary-style and attribute-style access:
Missing = object()
class AttrElem(object):
    def __init__(self, **kwds):
        self.items = kwds.copy()
    def __getitem__(self, name):
        result = self.items.get(name, Missing)
        if result is not Missing:
            return result
        raise KeyError("key %r not found" % name)
    def __setitem__(self, name, value):
        self.items[name] = value
    def __getattr__(self, name):
        result = self.items.get(name, Missing)
        if result is not Missing:
            return result
        raise AttributeError("attribute %r not found" % name)
    def __setattr__(self, name, value):
        if name == 'items':
            object.__setattr__(self, name, value)
        else:
            self.items[name] = value
    def __repr__(self):
        return 'AttrElem(%s)' % ', '.join(
            ["%s:%s" % (k, v) for k, v in self.items.items()]
        )
In use it looks like this:
example = AttrElem(this=7, that=9)
print(example)
example.those = 'these'
example['who'] = 'Guido'
print(example)
As you can see from the code, __getitem__ and __getattr__ are both very similar, differing only in the exception that gets raised when the target cannot be found.
