Hide deprecated methods from tab completion - python

I would like to control which methods appear when a user uses tab-completion on a custom object in ipython - in particular, I want to hide functions that I have deprecated. I still want these methods to be callable, but I don't want users to see them and start using them if they are inspecting the object. Is this something that is possible?

Partial answer for you. I'll post the example code and then explain why it's only a partial answer.
Code:
class hidden(object): # or whatever its parent class is
    def __init__(self):
        self.value = 4
    def show(self):
        return self.value
    def change(self, n):
        self.value = n
    def __getattr__(self, attrname):
        # put the deprecated method/attribute names here
        deprecateds = ['dep_show', 'dep_change']
        if attrname in deprecateds:
            print("These aren't the methods you're looking for.")
            def dep_change(n):
                self.value = n
            def dep_show():
                return self.value
            return eval(attrname)
        else:
            raise AttributeError(attrname)
So now the caveat: they're not methods (note the lack of self as the first variable). If you need your users (or your code) to be able to call im_class, im_func, or im_self on any of your deprecated methods, then this hack won't work. Also, I'm pretty sure there's going to be a performance hit because you're defining each deprecated function inside __getattr__. This won't affect your other attribute lookups (had I put them in __getattribute__, that would be a different matter), but it will slow down access to those deprecated methods. This can be (largely, but not entirely) negated by putting each function definition inside its own if block instead of doing a list-membership check, but, depending on how many functions you have, that could be really annoying to maintain.
UPDATE:
1) If you want to make the deprecated functions methods (and you do), just use
import types
return types.MethodType(eval(attrname), self)
instead of
return eval(attrname)
in the above snippet, and add self as the first argument to the function defs. It turns them into instance methods (so on Python 2 you can use im_class, im_func, and im_self to your heart's content; on Python 3 the bound methods expose __func__ and __self__ instead).
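Putting the original snippet and point 1) together, here is a minimal self-contained sketch of the __getattr__ + types.MethodType variant (same example names as above, with each deprecated method in its own if block as suggested):
import types

class hidden(object):
    def __init__(self):
        self.value = 4
    def show(self):
        return self.value
    def change(self, n):
        self.value = n
    def __getattr__(self, attrname):
        # Only reached when normal lookup fails, so ordinary attributes are unaffected.
        if attrname == 'dep_show':
            def dep_show(self):
                return self.value
            return types.MethodType(dep_show, self)  # a real bound method
        if attrname == 'dep_change':
            def dep_change(self, n):
                self.value = n
            return types.MethodType(dep_change, self)
        raise AttributeError(attrname)
Since dep_show and dep_change never appear in the class or instance __dict__, dir() and IPython's tab completion won't list them, but hidden().dep_show() still returns 4.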
2) If the __getattr__ hook didn't thrill you, there's another option (that I know of) (albeit with its own caveats, and we'll get to those): put the deprecated function definitions inside __init__, and hide them with a custom __dir__. Here's what the above code would look like done this way:
class hidden(object):
    def __init__(self):
        self.value = 4
        from types import MethodType
        def dep_show(self):
            return self.value
        self.__setattr__('dep_show', MethodType(dep_show, self))
        def dep_change(self, n):
            self.value = n
        self.__setattr__('dep_change', MethodType(dep_change, self))
    def show(self):
        return self.value
    def change(self, n):
        self.value = n
    def __dir__(self):
        heritage = dir(super(self.__class__, self)) # inherited attributes
        hide = ['dep_show', 'dep_change']
        show = [k for k in list(self.__class__.__dict__) + list(self.__dict__)
                if k not in heritage + hide]
        return sorted(heritage + show)
The advantage here is that you're not defining the functions anew every lookup, which nets you speed. The disadvantage here is that because you're not defining functions anew each lookup, they have to 'persist' (if you will). So, while the custom __dir__ method hides your deprecateds from dir(hiddenObj) and, therefore, IPython's tab-completion, they still exist in the instance's __dict__ attribute, where users can discover them.
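To make that trade-off concrete, a quick check against the class above (purely illustrative):
obj = hidden()
print('dep_show' in dir(obj))      # False -- hidden from dir() and tab completion
print('dep_show' in obj.__dict__)  # True  -- still discoverable on the instance
print(obj.dep_show())              # 4     -- and still callable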

Seems like there is a special magic method for the introspection that is done by dir(): __dir__(). Isn't that what you are looking for?

The DeprecationWarning isn't emitted until the method is called, so you'd have to have a separate attribute on the class that stores the names of deprecated methods, then check that before suggesting a completion.
Alternatively, you could walk the AST for the method looking for DeprecationWarning, but that will fail if either the class is defined in C, or if the method may emit a DeprecationWarning based on the type or value of the arguments.
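For what it's worth, a rough sketch of that AST idea; deprecated_method_names is a hypothetical helper, and the caveats above apply (it needs Python-level source and misses warnings raised conditionally or indirectly):
import ast
import inspect
import textwrap

def deprecated_method_names(cls):
    # Best-effort heuristic: collect method names whose source mentions DeprecationWarning.
    names = []
    for name, func in inspect.getmembers(cls, inspect.isfunction):
        try:
            tree = ast.parse(textwrap.dedent(inspect.getsource(func)))
        except (OSError, TypeError, SyntaxError):
            continue
        if any(isinstance(node, ast.Name) and node.id == 'DeprecationWarning'
               for node in ast.walk(tree)):
            names.append(name)
    return names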

About the completion mechanism in IPython, it is documented here:
http://ipython.scipy.org/doc/manual/html/api/generated/IPython.core.completer.html#ipcompleter
But a really interesting example for you is the traits completer, which does precisely what you want to do: it hides some methods (based on their names) from autocompletion.
Here is the code:
http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/IPython/Extensions/ipy_traits_completer.py

Related

Is there a way of defining multiple methods with similar bodies?

I'm trying to write my own Python ANSI terminal colorizer, and eventually came to the point where I want to define a bunch of public properties with similar bodies, so I want to ask: is there a way in Python to define multiple similar methods with slight differences between them?
Actual code I'm stuck with:
class ANSIColors:
    @classmethod
    def __wrap(cls, _code: str):
        return cls.PREFIX + _code + cls.POSTFIX

    @classproperty
    def reset(cls: Type['ANSIColors']):
        return cls.__wrap(cls.CODES['reset'])

    @classproperty
    def black(cls: Type['ANSIColors']):
        return cls.__wrap(cls.CODES['black'])
    ...
If it actually matters, here is the code for classproperty decorator:
def classproperty(func):
    return classmethod(property(func))
I will be happy to see answers with some Python-intended solutions, rather than code generation programs.
Edit 1: it would be great to preserve the given property names.
I don't think you need (or really want) to use properties to do what you want to accomplish, which is good because doing so would require a lot of repetitive code if you have many entries in the class' CODES attribute (which I'm assuming is a dictionary mapping).
You could instead use __getattr__() to dynamically look up the strings associated with the names in the class' CODES attribute, because then you wouldn't need to explicitly create a property for each of them. However, in this case it needs to be applied to the class-of-the-class, in other words the class' metaclass.
The code below shows how to define one that does this:
class ANSIColorsMeta(type):
    def __getattr__(cls, key):
        """Call (mangled) private class method __wrap() with the key's code."""
        return getattr(cls, f'_{cls.__name__}__wrap')(cls.CODES[key])

class ANSIColors(metaclass=ANSIColorsMeta):
    @classmethod
    def __wrap(cls, code: str):
        return cls.PREFIX + code + cls.POSTFIX

    PREFIX = '<prefix>'
    POSTFIX = '<postfix>'
    CODES = {'reset': '|reset|', 'black': '|black|'}

if __name__ == '__main__':
    print(ANSIColors.reset)   # -> <prefix>|reset|<postfix>
    print(ANSIColors.black)   # -> <prefix>|black|<postfix>
    print(ANSIColors.foobar)  # -> KeyError: 'foobar'
It's also worth noting that this could be made much faster by having the metaclass' __getattr__() assign the result of the lookup to an actual cls attribute. Because __getattr__() is only called when the default attribute access fails, the whole process is skipped if the same key is ever used again: looked-up values are effectively cached, and the class auto-optimizes itself based on how it's actually being used.
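A minimal sketch of that caching idea (reusing the class names from above; the cached value is simply stored under the key itself):
class CachingANSIColorsMeta(type):
    def __getattr__(cls, key):
        # Compute the value once, then store it on the class so that the next
        # access succeeds normally and __getattr__ is never called for it again.
        value = getattr(cls, f'_{cls.__name__}__wrap')(cls.CODES[key])
        setattr(cls, key, value)
        return value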

Parametric classes in python

I want to define a pair of classes that are almost identical, except that the class methods are decorated in two different ways. Currently, I just have a factory function that takes the decorator as an argument, constructs the class using that decorator, and returns the class. Greatly simplified, something like this works:
# Defined in mymodule.py
def class_factory(decorator):
    class C:
        @decorator
        def fancy_func(self, x):
            # some fanciness
            return x
    return C

C1 = class_factory(decorator1)
C2 = class_factory(decorator2)
And I can use these as usual:
import mymodule
c1 = mymodule.C1()
c2 = mymodule.C2()
I'm not entirely comfortable with this, for a number of reasons. First, a purely aesthetic reason: the types of both objects display as mymodule.class_factory.<locals>.C. They're not actually identical, but they look like it, and it causes problems with the documentation. Second, my class is pretty complicated. I'd actually like to use inheritance and mixins and so on, but in any case, those other classes also need access to the decorators. So currently, I make several factories, and call the parent class factories inside the child class factory, and the child inherits from the parents created in this way. But this means I can't really use the resulting parents as classes outside the factory.
So my questions are
Is there a better design pattern for this sort of thing? It would be really convenient if there were some way to use inheritance, where the decorators are actually methods in a class, and I inherit in two different ways.
Is there anything wrong with changing the <locals> part of the class name by just altering C.__qualname__ before returning?
To be a bit more specific: I want one version of the class to work extremely quickly with numpy arrays, and I want another version of the class to work with arbitrary python objects — especially sympy expressions. So for the first, I decorate with @numba.guvectorize (and relatives). This means I actually need to pass numba some signatures, so I can't just rely on numba falling back to object mode for the second case. But for simplicity, I think we can ignore the issue of signatures here. For the second case, I basically make a no-op decorator that ignores signatures and does nothing to the function.
Here's an approach using __init_subclass__. I use keyword arguments here, but you could easily change it so the decorators are defined as methods on C1 and C2 and are applied in __init_subclass__.
def passthru(f):
    return f

class BaseC:
    def __init_subclass__(cls, /, decorator=passthru, **kwargs):
        super().__init_subclass__(**kwargs)
        # if you also have class attributes or methods you don't want to decorate,
        # you might need to maintain an explicit list of decoratable methods
        for attr in dir(cls):
            if not attr.startswith('__'):
                setattr(cls, attr, decorator(getattr(cls, attr)))

    def fancy_func(self, x):
        # some fanciness
        return x

def two(f):
    return lambda self, x: "surprise"

class C1(BaseC):
    pass

class C2(BaseC, decorator=two):
    pass

print(C1().fancy_func(42))
print(C2().fancy_func(42))

# further subclassing
class C3(C2):
    pass

print(C3().fancy_func(42))
I took @Jasmijn's suggestion of using __init_subclass__. But since I really need multiple decorators (jit, guvectorize, and sometimes neither even when using numba with other methods), I tweaked it a little. Rather than jitting every public method, I use decorators to flag methods with attributes explaining how to compile them.
I decorate the individual methods much like I would have originally, indicating whether to jit or whatnot. But these decorators don't actually do any compilation; they just add hidden attributes to the functions indicating whether and how to apply the actual decorators. Then, when a subclass is created, __init_subclass__ loops through, looking for these attributes on all the subclass's methods, and applying any requested compilation.
I turn this into a pretty general class, named Jitter below. Any class that wants the option of jitting in multiple ways can just inherit from this class and decorate methods with Jitter.jit or Jitter.guvectorize. By default, nothing much happens to those functions, so the first child class of Jitter can be used with sympy, for example. But I can also inherit from such a class while adding the relevant keyword(s) to the class definition, enabling jitting in the subclass. Here's the Jitter class:
class Jitter:
    def jit(f):
        f._jit = True
        return f

    def guvectorize(*args, **kwargs):
        def wrapper(f):
            f._guvectorize = (args, kwargs)
            return f
        return wrapper

    def __init_subclass__(cls, /, jit=None, guvectorize=None, **kwargs):
        super().__init_subclass__(**kwargs)
        for attr_name in dir(cls):
            attr = getattr(cls, attr_name)
            if jit is not None and hasattr(attr, '_jit'):
                setattr(cls, attr_name, jit(attr))
            elif guvectorize is not None and hasattr(attr, '_guvectorize'):
                args, kwargs = getattr(attr, '_guvectorize')
                setattr(cls, attr_name, guvectorize(*args, **kwargs)(attr))
Now, I can inherit from this class very conveniently:
import numba as nb

class Adder(Jitter):
    @Jitter.jit
    def add(x, y):
        return x + y

class NumbaAdder(Adder, jit=nb.njit):
    pass
Here, Adder.add is a regular python function that just happens to have a _jit attribute, but NumbaAdder.add is a numba jit function. For more realistic code, I would use the same Jitter class and the same NumbaAdder class, but would put all the complexity into the Adder class.
Note that we could decorate with Adder.jit, but this would be precisely the same as decorating with Jitter.jit, because Adder.jit doesn't get changed (if at all) until after the decorators in the class definition have already been applied, so we still need to loop through and apply the jit functions with __init_subclass__.
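To see the mechanism without installing numba, any callable can be passed as the jit keyword; tracing_jit and TracedAdder below are made-up names, purely for illustration:
def tracing_jit(f):
    # Stand-in for nb.njit: wraps the flagged function so the effect is visible.
    def wrapped(*args, **kwargs):
        print(f'calling compiled {f.__name__}')
        return f(*args, **kwargs)
    return wrapped

class TracedAdder(Adder, jit=tracing_jit):
    pass

print(Adder.add(1, 2))        # 3, plain function that merely carries a _jit flag
print(TracedAdder.add(1, 2))  # 3, but routed through tracing_jit's wrapper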

Using a metaclass on a class derived from another class implemented in C

I am working on a ctypes drop-in-replacement / extension and ran into an issue I do not fully understand.
I am trying to build a class factory for call-back function decorators similar to CFUNCTYPE and WINFUNCTYPE. Both factories produce classes derived from ctypes._CFuncPtr. Like every ctypes function interface, they have properties like argtypes and restype. I want to extend the classes allowing an additional property named some_param and I thought, why not, let's try this with "getter" and "setter" methods - how hard can it be ...
Because I am trying to use "getter" and "setter" methods (@property) on a property of a class (NOT a property of objects), I ended up writing a metaclass. Because my class is derived from ctypes._CFuncPtr, I think my metaclass must be derived from ctypes._CFuncPtr.__class__ (I could be wrong here).
The example below works, sort of:
import ctypes

class a_class:
    def b_function(self, some_param_arg):
        class c_class_meta(ctypes._CFuncPtr.__class__):
            def __init__(cls, *args):
                super().__init__(*args)  # no idea if this is good ...
                cls._some_param_ = some_param_arg
            @property
            def some_param(cls):
                return cls._some_param_
            @some_param.setter
            def some_param(cls, value):
                if not isinstance(value, list):
                    raise TypeError('some_param must be a list')
                cls._some_param_ = value
        class c_class(ctypes._CFuncPtr, metaclass=c_class_meta):
            _argtypes_ = ()
            _restype_ = None
            _flags_ = ctypes._FUNCFLAG_STDCALL  # change for CFUNCTYPE or WINFUNCTYPE etc ...
        return c_class

d_class = a_class().b_function([1, 2, 3])
print(d_class.some_param)
d_class.some_param = [2, 6]
print(d_class.some_param)
d_class.some_param = {}  # Raises an error - as expected
So far so good - but using the above any further does NOT work. The following pseudo-code (if used on an actual function from a DLL or shared object) will fail - in fact, it will cause the CPython interpreter to segfault ...
some_routine = ctypes.windll.LoadLibrary('some.dll').some_routine
func_type = d_class(ctypes.c_int16, ctypes.c_int16)  # similar to CFUNCTYPE/WINFUNCTYPE
func_type.some_param = [4, 5, 6]  # my "special" property
some_routine.argtypes = (ctypes.c_int16, func_type)

@func_type
def demo(x):
    return x - 1

some_routine(4, demo)  # segfaults HERE!
I am not entirely sure what goes wrong. ctypes._CFuncPtr is implemented in C, which could be a relevant limitation ... I could also have made a mistake in the implementation of the metaclass. Can someone enlighten me?
(For additional context, I am working on this function.)
Maybe the ctypes metaclass simply won't work nicely when subclassed - since it is itself written in C, it may bypass the routes inheritance imposes, take some shortcuts, and end up failing.
Ideally this "bad behavior" would have to be properly documented, filed as bugs against CPython's ctypes, and fixed - to my knowledge there are not many people who can fix ctypes bugs.
On the other hand, having a metaclass just because you want a property-like attribute at class level is overkill.
Python's property itself is just a pre-made, very useful builtin class that implements the descriptor protocol. Any class you create yourself that implements proper __get__ and __set__ methods can replace property (and often, when logic is shared across property-attributes, this leads to shorter, non-duplicated code).
On second thought, unfortunately, descriptor setters will only work for instances, not for classes (which makes sense, since doing cls.attr will already get you the special code-guarded value, and there is no way a __set__ method could be called on it).
So, if you could work with "manually" setting the values in the cls.__dict__ and putting your logic in the __get__ attribute, you could do:
PREFIX = "_cls_prop_"

class ClsProperty:
    def __set_name__(self, owner, name):
        self.name = name
    def __get__(self, instance, owner):
        value = owner.__dict__.get(PREFIX + self.name)
        # Logic to transform/check value goes here:
        if not isinstance(value, list):
            raise TypeError('some_param must be a list')
        return value

def b_function(some_param_arg):
    class c_class(ctypes._CFuncPtr):
        _argtypes_ = ()
        _restype_ = None
        _flags_ = 0  # ctypes._FUNCFLAG_STDCALL  # change for CFUNCTYPE or WINFUNCTYPE etc ...
        _some_param_ = ClsProperty()
    setattr(c_class, PREFIX + "_some_param_", some_param_arg)
    return c_class

d_class = b_function([1, 2, 3])
print(d_class._some_param_)
d_class._some_param_ = [1, 2]
print(d_class._some_param_)
If that does not work, I don't think other approaches trying to extend the ctypes metaclass will work anyway, but if you want a try, instead of a "meta-property" you might customize the metaclass' __setattr__ to do your parameter checking, rather than using property.
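A sketch of what that __setattr__ variant could look like, shown here against plain type for clarity (in the question's setting the base would be ctypes._CFuncPtr.__class__, with the same caveats about subclassing it; the attribute name is illustrative):
class ParamCheckingMeta(type):
    def __setattr__(cls, name, value):
        # Validate assignments to the special class attribute; everything else passes through.
        if name == 'some_param' and not isinstance(value, list):
            raise TypeError('some_param must be a list')
        super().__setattr__(name, value)

class CClass(metaclass=ParamCheckingMeta):
    some_param = []

CClass.some_param = [4, 5, 6]  # accepted
CClass.some_param = {}         # raises TypeError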

Late (runtime) addition of additional parent class possible?

This is about multiple inheritance. Parent class A provides a few methods, and parent class B a few additional ones. By creating a class inheriting from A and B I could instantiate an object having both method sets.
Now my problem is that I only detect after having instantiated A that the methods from B would be helpful too (or, more strictly stated, that my object is also of class B).
While
aInstance.bMethod = types.MethodType(localFunction, aInstance)
works in principle, it has to be repeated for every bMethod, and looks unnecessarily complicated. It also requires stand-alone (local) functions instead of a conceptually cleaner class B. Is there a more streamlined approach?
Update:
I tried an abstract base class with some success, but that way only the methods of one additional class could be added.
What I finally achieved is a little routine, which adds all top-level procedures of a given module:
from types import MethodType
from inspect import ismodule, isfunction, getmembers

# adds all functions found in module as methods to given obj
def classMagic(obj, module):
    assert ismodule(module)
    for name, fn in getmembers(module, isfunction):
        if not name.startswith("__"):
            setattr(obj, name, MethodType(fn, obj))
Functionally this is sufficient, and I'm also pleased with the automatic behaviour: all functions are processed, and I don't have separate places for defining a function and adding it as a method, so maintenance is easy. The only remaining issue is reflected by the startswith line, an example of a necessary naming convention for functions that shall not be added.
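For reference, a quick self-contained way to try classMagic out is to build a throwaway module object; the module and function names below are made up for the example:
from types import ModuleType

helpers = ModuleType('helpers')

def greet(self):
    return 'hello from %s' % self.__class__.__name__

def __internal(self):
    return 'should not be attached'

helpers.greet = greet
helpers.__internal = __internal

class A:
    pass

a = A()
classMagic(a, helpers)
print(a.greet())                 # hello from A
print(hasattr(a, '__internal'))  # False -- filtered out by the naming convention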
If I understand correctly, you want to add mixins to your class at run time. A very common way of adding mixins in Python is through decorators (rather than inheritance), so we can borrow this idea and do something similar at runtime to the object (instead of to the class).
I used functools.partial to freeze the self parameter, to emulate the process of binding a function to an object (i.e. turn a function into a method).
from functools import partial

class SimpleObject():
    pass

def MixinA(obj):
    def funcA1(self):
        print('A1 - propertyA is equal to %s' % self.propertyA)
    def funcA2(self):
        print('A2 - propertyA is equal to %s' % self.propertyA)
    obj.propertyA = 0
    obj.funcA1 = partial(funcA1, self=obj)
    obj.funcA2 = partial(funcA2, self=obj)
    return obj

def MixinB(obj):
    def funcB1(self):
        print('B1')
    obj.funcB1 = partial(funcB1, self=obj)
    return obj

o = SimpleObject()
# need A characteristics?
o = MixinA(o)
# need B characteristics?
o = MixinB(o)
Instead of functools.partial, you can also use types.MethodType as you did in your question; I think that is a better/cleaner solution.
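For completeness, here is the same MixinA written with types.MethodType instead of partial, mirroring the code above:
from types import MethodType

def MixinA(obj):
    def funcA1(self):
        print('A1 - propertyA is equal to %s' % self.propertyA)
    def funcA2(self):
        print('A2 - propertyA is equal to %s' % self.propertyA)
    obj.propertyA = 0
    obj.funcA1 = MethodType(funcA1, obj)
    obj.funcA2 = MethodType(funcA2, obj)
    return obj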

Why do we use @staticmethod?

I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  ## >>> 4

class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  ## >>> 4
In the example above, the method static_add_one in the two classes does not require the instance of the class (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by using a trick that provides a self argument but doesn't use it at all.
So what is the benefit of using @staticmethod? Does it improve performance? Or is it just due to the zen of python, which states that "Explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
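A quick check, reusing test1 and test2 from the question, makes the difference visible:
print(test1.static_add_one(1))  # 2 -- callable directly on the class
b = test2(3)
print(b.static_add_one(1))      # 2 -- works, but only via an instance
test2.static_add_one(1)         # TypeError: 1 is bound to 'self' and 'value' is missing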
Today I suddenly found a benefit of using @staticmethod.
If you created a staticmethod within a class, you don't need to create an instance of the class before using the staticmethod.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        # ..parsing works..
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        # ..parsing works..
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: parse() missing 1 required positional argument: 'path'
    File2.parse(path)  # Goal!!!!!!!!!!!!!!!!!!!!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be used in other classes under some circumstances. If you want to do so using File1, you must create an instance of File1 before calling the method parse. While using staticmethod in the class File2, you may directly call the method by using the syntax File2.parse.
This makes your work more convenient and natural.
I will add something the other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at another point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
1. Put it outside the class. But we just decided against this.
2. Do nothing new: while unused, still keep the self parameter.
3. Declare that you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you may still prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply a function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
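A small sketch of the polymorphism point: keeping f on the class lets a subclass override it with a version that does use instance state (the names here are illustrative):
class C:
    @staticmethod
    def f(x):
        return x + 1        # needs no instance state

    def g(self, x):
        return self.f(x)    # always dispatches through self

class D(C):
    def __init__(self, offset):
        self.offset = offset

    def f(self, x):         # override that does use instance state
        return x + self.offset

print(C().g(1))    # 2
print(D(10).g(1))  # 11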
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
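As a hedged sketch of what such a staticmethod might look like (illustrative only, not the actual project code; Django also ships a ready-made django.utils.text.slugify):
import re

class Article:                  # stands in for the Django model class
    @staticmethod
    def slugify(title):
        # Lowercase, drop characters not allowed in a URL slug,
        # and collapse whitespace/hyphens into single hyphens.
        slug = re.sub(r'[^a-z0-9\s-]', '', title.lower())
        return re.sub(r'[\s-]+', '-', slug).strip('-')

print(Article.slugify('Hello, World!'))  # hello-world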
