I've started to use constructs like these:
class DictObj(object):
    def __init__(self):
        self.d = {}
    def __getattr__(self, m):
        return self.d.get(m, None)
    def __setattr__(self, m, v):
        super(DictObj, self).__setattr__(m, v)
Update: based on this thread, I've revised the DictObj implementation to:
class dotdict(dict):
    def __getattr__(self, attr):
        return self.get(attr, None)
    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__
class AutoEnum(object):
    def __init__(self):
        self.counter = 0
        self.d = {}
    def __getattr__(self, c):
        if c not in self.d:
            self.d[c] = self.counter
            self.counter += 1
        return self.d[c]
where DictObj is a dictionary that can be accessed via dot notation:
d = DictObj()
d.something = 'one'
I find it more aesthetically pleasing than d['something']. Note that accessing an undefined key returns None instead of raising an exception, which is also nice.
Update: Smashery makes a good point, which mhawke expands on for an easier solution. I'm wondering if there are any undesirable side effects of using dict instead of defining a new dictionary; if not, I like mhawke's solution a lot.
AutoEnum is an auto-incrementing Enum, used like this:
CMD = AutoEnum()
cmds = {
    "peek":   CMD.PEEK,
    "look":   CMD.PEEK,
    "help":   CMD.HELP,
    "poke":   CMD.POKE,
    "modify": CMD.POKE,
}
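For illustration, values are handed out in first-access order, and repeated access returns the same value (a quick sketch using the class above):
>>> CMD = AutoEnum()
>>> CMD.PEEK, CMD.HELP, CMD.POKE
(0, 1, 2)
>>> CMD.PEEK
0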
Both are working well for me, but I'm feeling unpythonic about them.
Are these in fact bad constructs?
Your DictObj example is actually quite common. Object-style dot-notation access can be a win if you are dealing with 'things that resemble objects', i.e. they have fixed property names containing only characters valid in Python identifiers. Stuff like database rows or form submissions can be usefully stored in this kind of object, making code a little more readable without the excess of ['item access'].
The implementation is a bit limited - you don't get the nice constructor syntax of dict, len(), comparisons, 'in', iteration or nice reprs. You can of course implement those things yourself, but in the new-style-classes world you can get them for free by simply subclassing dict:
class AttrDict(dict):
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__
To get the default-to-None behaviour, simply subclass Python 2.5's collections.defaultdict class instead of dict.
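A minimal sketch of that combination (the class name is hypothetical; the lambda: None factory restores the return-None-for-missing-keys behaviour):

from collections import defaultdict

class NoneAttrDict(defaultdict):
    def __init__(self, *args, **kwargs):
        super(NoneAttrDict, self).__init__(lambda: None, *args, **kwargs)
    __getattr__ = defaultdict.__getitem__
    __setattr__ = defaultdict.__setitem__
    __delattr__ = defaultdict.__delitem__

d = NoneAttrDict()
d.something = 'one'
print(d.something)   # one
print(d.missing)     # None -- the default factory kicks in (and inserts the key)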
With regards to the DictObj, would the following work for you? A blank class will allow you to arbitrarily add to or replace stuff in a container object.
class Container(object):
    pass
>>> myContainer = Container()
>>> myContainer.spam = "in a can"
>>> myContainer.eggs = "in a shell"
If you want to not throw an AttributeError when there is no attribute, what do you think about the following? Personally, I'd prefer to use a dict for clarity, or to use a try/except clause.
class QuietContainer(object):
    def __getattr__(self, attribute):
        try:
            return object.__getattribute__(self, attribute)
        except AttributeError:
            return None
>>> cont = QuietContainer()
>>> print cont.me
None
Right?
This is a simpler version of your DictObj class:
class DictObj(object):
    def __getattr__(self, attr):
        return self.__dict__.get(attr)
>>> d = DictObj()
>>> d.something = 'one'
>>> print d.something
one
>>> print d.somethingelse
None
>>>
As far as I know, Python classes use a dictionary to store their attributes anyway (it's just hidden from the programmer), so it looks to me that what you've done there is effectively emulate a Python class... using a Python class.
It's not "wrong" to do this, and it can be nicer if your dictionaries have a strong possibility of turning into objects at some point, but be wary of the reasons for having bracket access in the first place:
Dot access can't use keywords as keys.
Dot access has to use Python-identifier-valid characters in the keys.
Dictionaries can hold any hashable element -- not just strings.
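For instance (a quick illustration using the AttrDict above):

d = AttrDict()
d['class'] = 'reptilia'     # fine as a dict key
# d.class = 'reptilia'      # SyntaxError: 'class' is a keyword
d['not an identifier'] = 1  # fine as a dict key, impossible as dot access
d[42] = 'the answer'        # dict keys need not be strings at all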
Also keep in mind that you can always make your objects support dictionary-style access if you decide to switch to objects later on.
For a case like this I would default to the "readability counts" mantra: presumably other Python programmers will be reading your code and they probably won't be expecting dictionary/object hybrids everywhere. If it's a good design decision for a particular situation, use it, but I wouldn't use it without necessity to do so.
The one major disadvantage of using something like your DictObj is that you either have to limit the allowable keys, or you can't have methods on your DictObj such as .keys(), .values(), .items(), etc.
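For example, with the dotdict from your update, a stored 'keys' entry is only reachable through bracket access, because ordinary attribute lookup finds the dict method first and __getattr__ is never consulted:

d = dotdict()
d['keys'] = 'my value'
print(d['keys'])   # my value
print(d.keys)      # <built-in method keys ...> -- the method shadows the entry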
There's a symmetry between this and this answer:
class dotdict(dict):
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__
The same interface, just implemented the other way round...
class container(object):
    __getitem__ = object.__getattribute__
    __setitem__ = object.__setattr__
    __delitem__ = object.__delattr__
Don't overlook Bunch.
It is a child of the dictionary class and can import YAML or JSON, or convert any existing dictionary to a Bunch and vice versa. Once "bunchified", a dictionary gains dot notation without losing any other dictionary methods.
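A rough sketch of what that looks like; the package and function names are from memory of the third-party bunch library, so treat them as assumptions:

# pip install bunch  (third-party package)
from bunch import bunchify

b = bunchify({'spam': {'eggs': 1}})
print(b.spam.eggs)   # 1 -- nested dicts gain dot access too
print(b.keys())      # ordinary dict methods still work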
I like dot notation a lot better than dictionary fields personally. The reason being that it makes autocompletion work a lot better.
It's not bad if it serves your purpose. "Practicality beats purity".
I've seen this approach elsewhere (e.g., in Paver), so it can be considered a common need (or desire).
Because you ask for undesirable side-effects:
A disadvantage is that in visual editors like Eclipse+PyDev, you will see many undefined-variable errors on lines using the dot notation. PyDev is not able to see such runtime "object" definitions, whereas in the case of a normal dictionary it knows that you are just getting a dictionary entry.
You would need to 1) ignore those errors and live with red crosses; 2) suppress those warnings on a line-by-line basis using #@UndefinedVariable; or 3) disable the undefined-variable error entirely, causing you to miss real undefined-variable errors.
If you're looking for an alternative that handles nested dicts:
Recursively transform a dict to instances of the desired class
import json
from collections import namedtuple

class DictTransformer():
    @classmethod
    def constantize(cls, d):
        return cls.transform(d, klass=namedtuple, klassname='namedtuple')

    @classmethod
    def transform(cls, d, klass, klassname):
        return cls._from_json(cls._to_json(d), klass=klass, klassname=klassname)

    @classmethod
    def _to_json(cls, d, access_method='__dict__'):
        return json.dumps(d, default=lambda o: getattr(o, access_method, str(o)))

    @classmethod
    def _from_json(cls, jsonstr, klass, klassname):
        return json.loads(jsonstr, object_hook=lambda d: klass(klassname, d.keys())(*d.values()))
Ex:
constants = {
    'A': {
        'B': {
            'C': 'D'
        }
    }
}
CONSTANTS = DictTransformer.transform(constants, klass=namedtuple, klassname='namedtuple')
CONSTANTS.A.B.C == 'D'
Pros:
handles nested dicts
can potentially generate other classes
namedtuples provide immutability for constants
Cons:
may not respond to .keys and .values if those are not provided on your klass (though you can sometimes mimic them with ._fields and list(A.B.C))
Thoughts?
h/t to @hlzr for the original class idea
Related
I have a class which is essentially a collection/list of things, but I want to add some extra functions to this list. What I would like is the following:
I have an instance li = MyFancyList(). Variable li should behave like a list whenever I use it as a list: [e for e in li], li.extend(...), for e in li.
Plus it should have some special functions like li.fancyPrint(), li.getAMetric(), li.getName().
I currently use the following approach:
class MyFancyList:
    def __init__(self, li=None):
        self.li = li or []
    def __iter__(self):
        return iter(self.li)
    def fancyFunc(self):
        # do something fancy
        pass
This is OK for usage as an iterator, like [e for e in li], but I do not get the full list behavior, like li.extend(...).
A first guess is to inherit from list in MyFancyList. But is that the recommended Pythonic way to do it? If yes, what is there to consider? If no, what would be a better approach?
If you want only part of the list behavior, use composition (i.e. your instances hold a reference to an actual list) and implement only the methods necessary for the behavior you desire. These methods should delegate the work to the actual list any instance of your class holds a reference to, for example:
def __getitem__(self, item):
    return self.li[item]  # delegate to li.__getitem__
Implementing __getitem__ alone will give you a surprising amount of features, for example iteration and slicing.
>>> class WrappedList:
...     def __init__(self, lst):
...         self._lst = lst
...     def __getitem__(self, item):
...         return self._lst[item]
...
>>> w = WrappedList([1, 2, 3])
>>> for x in w:
...     x
...
1
2
3
>>> w[1:]
[2, 3]
If you want the full behavior of a list, inherit from collections.UserList. UserList is a full Python implementation of the list datatype.
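A minimal sketch (fancyPrint stands in for whatever extra behaviour you need; UserList keeps the wrapped items in self.data):

from collections import UserList

class MyFancyList(UserList):
    def fancyPrint(self):
        for i, e in enumerate(self.data):
            print('%d: %r' % (i, e))

li = MyFancyList([1, 2, 3])
li.append(4)      # full list behaviour for free
li.fancyPrint()
print(li + [5])   # operators return MyFancyList instances, not plain lists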
So why not inherit from list directly?
One major problem with inheriting directly from list (or any other builtin written in C) is that the code of the builtins may or may not call special methods overridden in classes defined by the user. Here's a relevant excerpt from the pypy docs:
Officially, CPython has no rule at all for when exactly overridden method of subclasses of built-in types get implicitly called or not. As an approximation, these methods are never called by other built-in methods of the same object. For example, an overridden __getitem__ in a subclass of dict will not be called by e.g. the built-in get method.
Another quote, from Luciano Ramalho's Fluent Python, page 351:
Subclassing built-in types like dict or list or str directly is error-prone because the built-in methods mostly ignore user-defined overrides. Instead of subclassing the built-ins, derive your classes from UserDict, UserList and UserString from the collections module, which are designed to be easily extended.
... and more, page 370+:
Misbehaving built-ins: bug or feature?
The built-in dict, list and str types are essential building blocks of Python itself, so they must be fast — any performance issues in them would severely impact pretty much everything else. That's why CPython adopted the shortcuts that cause their built-in methods to misbehave by not cooperating with methods overridden by subclasses.
After playing around a bit, the issues with the list builtin seem to be less critical (I tried to break it in Python 3.4 for a while but did not find a really obvious unexpected behavior), but I still wanted to post a demonstration of what can happen in principle, so here's one with a dict and a UserDict:
>>> class MyDict(dict):
...     def __setitem__(self, key, value):
...         super().__setitem__(key, [value])
...
>>> d = MyDict(a=1)
>>> d
{'a': 1}
>>> from collections import UserDict
>>> class MyUserDict(UserDict):
...     def __setitem__(self, key, value):
...         super().__setitem__(key, [value])
...
>>> m = MyUserDict(a=1)
>>> m
{'a': [1]}
As you can see, the __init__ method from dict ignored the overridden __setitem__ method, while the __init__ method from our UserDict did not.
The simplest solution here is to inherit from the list class:
class MyFancyList(list):
    def fancyFunc(self):
        # do something fancy
        pass
You can then use MyFancyList type as a list, and use its specific methods.
Inheritance introduces a strong coupling between your object and list. The approach you implement is basically a proxy object.
Which way to go depends heavily on how you will use the object. If it has to be a list, then inheritance is probably a good choice.
EDIT: as pointed out by @acdr, some methods that return a list copy should be overridden so that they return a MyFancyList instead of a list.
A simple way to implement that:
class MyFancyList(list):
    def fancyFunc(self):
        # do something fancy
        pass
    def __add__(self, *args, **kwargs):
        return MyFancyList(super().__add__(*args, **kwargs))
If you don't want to redefine every method of list, I suggest you the following approach:
class MyList:
    def __init__(self, list_):
        self.li = list_
    def __getattr__(self, method):
        return getattr(self.li, method)
This would make methods like append, extend and so on, work out of the box. Beware, however, that magic methods (e.g. __len__, __getitem__ etc.) are not going to work in this case, so you should at least redeclare them like this:
class MyList:
    def __init__(self, list_):
        self.li = list_
    def __getattr__(self, method):
        return getattr(self.li, method)
    def __len__(self):
        return len(self.li)
    def __getitem__(self, item):
        return self.li[item]
    def fancyPrint(self):
        # do whatever you want...
        pass
Please note, that in this case if you want to override a method of list (extend, for instance), you can just declare your own so that the call won't pass through the __getattr__ method. For instance:
class MyList:
    def __init__(self, list_):
        self.li = list_
    def __getattr__(self, method):
        return getattr(self.li, method)
    def __len__(self):
        return len(self.li)
    def __getitem__(self, item):
        return self.li[item]
    def fancyPrint(self):
        # do whatever you want...
        pass
    def extend(self, list_):
        # your own version of extend
        pass
Based on the two example methods you included in your post (fancyPrint, getAMetric), it doesn't seem that you need to store any extra state in your lists. If this is the case, you're best off simply declaring these as free functions and ignoring subtyping altogether; this completely avoids problems like list vs UserList, fragile edge cases like return types for __add__, unexpected Liskov issues, etc. Instead, you can write your functions, write your unit tests for their output, and rest assured that everything will work exactly as intended.
As an added benefit, this means your functions will work with any iterable types (such as generator expressions) without any extra effort.
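A sketch of the free-function approach (the bodies are placeholders for your real logic):

def fancy_print(items):
    for item in items:
        print('*** %r ***' % (item,))

def get_a_metric(items):
    return sum(1 for _ in items)   # placeholder metric: just counts the items

fancy_print([1, 2, 3])
fancy_print(x * x for x in range(3))   # generator expressions work too
print(get_a_metric([1, 2, 3]))         # 3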
I've been switching from Matlab to NumPy/Scipy, and I think NumPy is great in many aspects.
But one thing that I don't feel comfortable is that I cannot find a data structure similar to struct in C/C++.
For example, I may want to do the following thing:
struct Parameters {
    double frame_size_sec;
    double frame_step_sec;
};
The simplest way is to use a dictionary, as follows.
parameters = {"frame_size_sec": 0.0, "frame_step_sec": 0.0}
But in the case of a dictionary, unlike a struct, any key may be added. I'd like to restrict the keys.
The other option might be using a class as follows, but it has the same kind of problem.
class Parameters:
    frame_size_sec = 0.0
    frame_step_sec = 0.0
From another thread I saw that there is a data structure called a named tuple, which looks great, but the biggest problem with it is that its fields are immutable. So it is still different from what I want.
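For reference, the namedtuple behaviour in question looks like this:

from collections import namedtuple

Parameters = namedtuple('Parameters', ['frame_size_sec', 'frame_step_sec'])
p = Parameters(0.025, 0.010)
print(p.frame_size_sec)               # 0.025
# p.frame_size_sec = 0.05             # AttributeError: can't set attribute
p2 = p._replace(frame_size_sec=0.05)  # the usual workaround: build a modified copy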
In sum, what would be the best way to use a struct-like object in python?
If you don't need actual memory layout guarantees, user-defined classes can restrict their set of instance members to a fixed list using __slots__. So for example:
class Parameters:  # on Python 2, use class Parameters(object):, as __slots__ only applies to new-style classes
    __slots__ = 'frame_size_sec', 'frame_step_sec'

    def __init__(self, frame_size_sec=0., frame_step_sec=0.):
        self.frame_size_sec = float(frame_size_sec)
        self.frame_step_sec = float(frame_step_sec)
gets you a class that is guaranteed to assign two float members on initialization, and that prevents anyone from adding new instance attributes (accidentally or on purpose) to any instance of the class.
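A quick demonstration with the class above:

p = Parameters(0.025, 0.010)
p.frame_size_sec = 0.05   # fine: slot attributes themselves stay mutable
p.not_a_field = 1         # AttributeError: 'Parameters' object has no attribute 'not_a_field'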
Please read the caveats at the __slots__ documentation; in inheritance cases for instance, if a superclass doesn't define __slots__, then the subclass will still have __dict__ and therefore can have arbitrary attributes defined on it.
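For example, a subclass that doesn't declare __slots__ of its own silently reopens the hole:

class LooseParameters(Parameters):   # note: no __slots__ here
    pass

lp = LooseParameters()
lp.anything = 'goes'   # allowed again: LooseParameters instances have a __dict__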
If you need memory layout guarantees and stricter (C) types for variables, you'll want to look at ctypes Structures, but from what you're saying, it sounds like you're just trying to enforce a fixed, limited set of attributes, not specific types or memory layouts.
While taking the risk of not being very Pythonic, you can create an immutable dictionary by subclassing the dict class and overwriting some of its methods:
def not_supported(*args, **kwargs):
    raise NotImplementedError('ImmutableDict is immutable')

class ImmutableDict(dict):
    __delitem__ = not_supported
    __setattr__ = not_supported
    update = not_supported
    clear = not_supported
    pop = not_supported
    popitem = not_supported

    def __getattr__(self, item):
        return self[item]

    def __setitem__(self, key, value):
        if key in self.keys():
            dict.__setitem__(self, key, value)
        else:
            raise NotImplementedError('ImmutableDict is immutable')
Some usage examples:
my_dict = ImmutableDict(a=1, b=2)
print my_dict['a']
>> 1
my_dict['a'] = 3 # will work, can modify existing key
my_dict['c'] = 1 # will raise an exception, can't add a new key
print my_dict.a # also works because we overwrote __getattr__ method
>> 3
I saw one of my colleagues write his code like:
class a(dict):
    # something
    pass
Is this a common technique? What purpose does it serve?
This can be done when you want a class with the default behaviour of a dictionary (getting and setting keys), but whose instances are going to be used in highly specific circumstances, where you anticipate the need to provide custom methods or constructors specific to those.
For example, you may want to have a dynamic KeyStorage that starts as an in-memory store, and later adapt the class to keep the data on disk.
You can also mangle the keys and values as needed - for storage of Unicode data in a database with a specific encoding, for example.
In some cases it makes sense. For example you could create a dict that allows case insensitive lookup:
class case_insensitive_dict(dict):
    def __getitem__(self, key):
        return super(case_insensitive_dict, self).__getitem__(key.lower())

    def __setitem__(self, key, value):
        return super(case_insensitive_dict, self).__setitem__(key.lower(), value)
d = case_insensitive_dict()
d["AbCd"] = 1
print d["abcd"]
(this might require additional error handling)
Extending the built-in dict class can be useful to create dict "supersets" (e.g. "bunch" class where keys can be accessed object-style, as in javascript) without having to reimplement MutableMapping's 5 methods by hand.
But if your colleague literally writes
class MyDict(dict):
    pass
without any customisation, I can only see evil uses for it, such as adding attributes to the dict:
>>> a = {}
>>> a.foo = 3
AttributeError: 'dict' object has no attribute 'foo'
>>> b = MyDict()
>>> b.foo = 3
>>>
I'm trying to move away from Matlab to Python. While the magic ? in IPython is nice, one very nice feature of Matlab is that you can see on the command line (by omitting the ;) the instance variables (called properties in Matlab) of the object in question. Is this possible in Python (I guess via IPython)?
Ideally a class like this:
class MyClass(object):
    _x = 5

    @property
    def x(self):
        return self._x + 100

    @x.setter
    def x(self, value):
        self._x = value + 1

    def myFunction(self, y):
        return self.x ** 2 + y
Would display something like:
mc = MyClass()
mc

<package.MyClass> <superclass1> <superclass2>
Attributes:
    _x: 5
    x: 105
Method Attributes:
    myFunction(self, y)
Is that possible via overriding the print method (if such a thing exists) of the class? Or via a magic method in IPython?
The short answer is that there is no way to get a list of all attributes of an object in Python, because the attributes could be generated dynamically. For an extreme example, consider this class:
>>> class Spam(object):
...     def __getattr__(self, attr):
...         if attr.startswith('x'):
...             return attr[1:]
...
>>> spam = Spam()
>>> spam.xeggs
'eggs'
Even if the interpreter could somehow figure out a list of all attributes, that list would be infinite.
For simple classes, spam.__dict__ is often good enough. It doesn't handle dynamic attributes, __slots__-based attributes, class attributes, C extension classes, attributes inherited from most of the above, and all kinds of other things. But it's at least something—and sometimes, it's the something you want. To a first approximation, it's exactly the stuff you explicitly assigned in __init__ or later, and nothing else.
For a best effort at "everything", aimed at human readability, use dir(spam).
For a best effort aimed at "everything" for programmatic use, use inspect.getmembers(spam). (Although in fact the implementation is just a wrapper around dir in CPython 2.x, it could do more—and in fact does in CPython 3.2+.)
These will both handle a wide range of things that __dict__ cannot, and may skip things that are in the __dict__ but that you don't want to see. But they're still inherently incomplete.
Whatever you use, to get the values as well as the keys is easy. If you're using __dict__ or getmembers, it's trivial; the __dict__ is, normally, either a dict, or something that acts close enough to a dict for your purposes, and getmembers explicitly returns key-value pairs. If you're using dir, you can get a dict very easily:
{key: getattr(spam, key) for key in dir(spam)}
One last thing: "object" is a bit of an ambiguous term. It can mean "any instance of a class derived from object", "any instance of a class", "any instance of a new-style class", or "any value of any type at all" (modules, classes, functions, etc.). You can use dir and getmembers on just about anything; the exact details of what that means are described in the docs.
One even-last-er thing: You may notice that getmembers returns things like ('__str__', <method-wrapper '__str__' of Spam object at 0x1066be790>), which you probably aren't interested in. Since the results are just name-value pairs, if you just want to remove __dunder__ methods, _private variables, etc., that's easy. But often, you want to filter on the "kind of member". The getmembers function takes a filter parameter, but the docs don't do a great job explaining how to use it (and, on top of that, expect that you understand how descriptors work). Basically, if you want a filter, it's usually callable, lambda x: not callable(x), or a lambda made up of a combination of inspect.isfoo functions.
So, this is common enough you may want to write it up as a function:
import inspect

def get_public_variables(obj):
    return [(name, value) for name, value
            in inspect.getmembers(obj, lambda x: not callable(x))
            if not name.startswith('_')]
You can turn that into a custom IPython %magic function, or just make a %macro out of it, or just leave it as a regular function and call it explicitly.
In a comment, you asked whether you can just package this up into a __repr__ function instead of trying to create a %magic function or whatever.
If you've already got all of your classes inheriting from a single root class, this is a great idea. You can write a single __repr__ that works for all of your classes (or, if it works for 99% of them, you can override that __repr__ in the other 1%), and then every time you evaluate any of your objects in the interpreter or print them out, you'll get what you want.
However, a few things to keep in mind:
Python has both __str__ (what you get if you print an object) and __repr__ (what you get if you just evaluate an object at the interactive prompt) for a reason. Usually, the former is a nice human-readable representation, while the latter is something that's either eval-able (or typable-into-the-interactive-prompt), or the concise angle-bracket form that gives you just enough to distinguish the type and identity of an object.
That's just a convention rather than a rule, so you can feel free to break it. However, if you are going to break it, you may still want to make use of the str/repr distinction—e.g., make repr give you a complete dump of all the internals, while str shows just the useful public values.
More seriously, you have to consider how repr values are composed. For example, if you print or repr a list, you get, effectively, '[' + ', '.join(map(repr, items)) + ']'. This is going to look pretty odd with a multi-line repr. And it'll be even worse if you use any kind of pretty-printer that tries to indent nested collections, like the one that's built into IPython. The result probably won't be unreadable; it'll just defeat the benefits that the pretty-printer is meant to provide.
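A quick illustration of the problem, using a hypothetical class with a multi-line repr:

class Point(object):
    def __repr__(self):
        return 'Point:\n  x: 1\n  y: 2'

print([Point(), Point()])
# [Point:
#   x: 1
#   y: 2, Point:
#   x: 1
#   y: 2]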
As for the specific stuff you want to display: That's all pretty easy. Something like this:
def __repr__(self):
    lines = []
    classes = inspect.getmro(type(self))
    lines.append(' '.join(repr(cls) for cls in classes))
    lines.append('')
    lines.append('Attributes:')
    attributes = inspect.getmembers(self, lambda x: not callable(x))
    longest = max(len(name) for name, value in attributes)
    fmt = '{:>%s}: {}' % (longest,)
    for name, value in attributes:
        if not name.startswith('__'):
            lines.append(fmt.format(name, value))
    lines.append('')
    lines.append('Methods:')
    methods = inspect.getmembers(self, callable)
    for name, value in methods:
        if not name.startswith('__'):
            lines.append(name)
    return '\n'.join(lines)
Right-justifying the attribute names is the hardest part here. (And I probably got it wrong, since this is untested code…) Everything else is either easy, or fun (playing with different filters to getmembers to see what they do).
I was able to achieve what I wanted with IPython (at least roughly) by implementing _repr_pretty_:
def get_public_variables(obj):
    from inspect import getmembers
    return [(name, value) for name, value in
            getmembers(obj, lambda x: not callable(x)) if
            not name.startswith('__')]
class MySuperClass(object):
    def _repr_pretty_(self, p, cycle):
        for (name, value) in get_public_variables(self):
            f = '{:>12}{} {:<} \n'
            line = f.format(str(name), ':', str(value))
            # p.text(str(name) + ': ' + str(value) + '\n')
            p.text(line)
class MyClass(MySuperClass):
    _x = 5

    @property
    def x(self):
        return self._x + 100
gives me, for:
mc = MyClass()
mc

the output:
Out[15]:
          _x: 5
           x: 105
Clearly there is some fine-tuning to be done in terms of whitespace, etc., but this is roughly what I was trying to accomplish.
You can get at an object's instance variables using obj.__dict__, e.g.:
class Dog:
    def __init__(self, name, age):
        self.name = name
        self.age = age

d1 = Dog("Fido", 7)

for key, val in d1.__dict__.items():
    print(key, ": ", val)
Output:
age : 7
name : Fido
You might find this doesn't work well with real objects that might have a large number of instance variables and methods though.
The following code
import types
class A:
    class D:
        pass

    class C:
        pass

for d in dir(A):
    if type(eval('A.' + d)) is types.ClassType:
        print d
outputs
C
D
How do I get it to output in the order in which these classes were defined in the code? I.e.
D
C
Is there any way other than using inspect.getsource(A) and parsing that?
Note that that parsing is already done for you in inspect - take a look at inspect.findsource, which searches the module for the class definition and returns the source and line number. Sorting on that line number (you may also need to split out classes defined in separate modules) should give the right order.
However, this function doesn't seem to be documented, and is just using a regular expression to find the line, so it may not be too reliable.
Another option is to use metaclasses, or some other way to attach ordering information to the object, either implicitly or explicitly. For example:
import itertools, operator

next_id = itertools.count().next

class OrderedMeta(type):
    def __init__(cls, name, bases, dct):
        super(OrderedMeta, cls).__init__(name, bases, dct)
        cls._order = next_id()

# Set the default metaclass
__metaclass__ = OrderedMeta

class A:
    class D:
        pass

    class C:
        pass

print sorted([cls for cls in [getattr(A, name) for name in dir(A)]
              if isinstance(cls, OrderedMeta)], key=operator.attrgetter("_order"))
However this is a fairly intrusive change (requires setting the metaclass of any classes you're interested in to OrderedMeta)
The inspect module also has the findsource function. It returns a tuple of source lines and line number where the object is defined.
>>> import inspect
>>> import StringIO
>>> inspect.findsource(StringIO.StringIO)[1]
41
>>>
The findsource function actually searches through the source file and looks for likely candidates if it is given a class object.
Given a method-, function-, traceback-, frame-, or code-object, it simply looks at the co_firstlineno attribute of the (contained) code-object.
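For example:
>>> def f():
...     pass
...
>>> f.__code__.co_firstlineno   # f.func_code.co_firstlineno on older Python 2
1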
No, you can't get those attributes in the order you're looking for. Python attributes are stored in a dict (read: hashmap), which (in the Python versions this answer targets) has no awareness of insertion order; also note that dir() sorts names alphabetically, which is why you see C before D.
Also, I would avoid the use of eval by simply saying
if type(getattr(A, d)) is types.ClassType:
    print d
in your loop. Note that you can also just iterate through key/value pairs in A.__dict__
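For example, to avoid eval and getattr entirely (though, as noted above, this still won't give you definition order):

for name, value in A.__dict__.items():
    if isinstance(value, type):   # use types.ClassType for old-style Python 2 classes
        print name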
AFAIK, no -- there isn't*. This is because all of a class's attributes are stored in a dictionary (which is, as you know, unordered).
*: it might actually be possible, but that would require either decorators or possibly metaclass hacking. Do either of those interest you?
import inspect

class ExampleObject:
    def example2():
        pass

    def example1():
        pass

context = ExampleObject

def sort_key(item):
    return inspect.findsource(item)[1]

properties = [
    getattr(context, attribute) for attribute in dir(context)
    if callable(getattr(context, attribute)) and
    not attribute.startswith('__')
]
properties.sort(key=sort_key)
print(properties)
Should print out:
[<function ExampleObject.example2 at 0x7fc2baf9e940>, <function ExampleObject.example1 at 0x7fc2bae5e790>]
I needed to use this as well for a compiler I'm building, and it proved very useful.
I'm not trying to be glib here, but would it be feasible for you to organize the classes in your source alphabetically? I find that when there are lots of classes in one file, this can be useful in its own right.