Is there a way of defining multiple methods with similar bodies? - python

I'm trying to write my own Python ANSI terminal colorizer, and eventually came to the point where I want to define a bunch of public properties with similar bodies, so I want to ask: is there a way in Python to define multiple similar methods with slight differences between them?
Actual code I'm stuck with:
class ANSIColors:
    @classmethod
    def __wrap(cls, _code: str):
        return cls.PREFIX + _code + cls.POSTFIX

    @classproperty
    def reset(cls: Type['ANSIColors']):
        return cls.__wrap(cls.CODES['reset'])

    @classproperty
    def black(cls: Type['ANSIColors']):
        return cls.__wrap(cls.CODES['black'])
    ...
If it actually matters, here is the code for the classproperty decorator:
def classproperty(func):
    return classmethod(property(func))
I will be happy to see answers with some Python-intended solutions, rather than code generation programs.
Edit 1: it would be great to preserve the given property names.

I don't think you need (or really want) to use properties to do what you want to accomplish, which is good because doing so would require a lot of repetitive code if you have many entries in the class' CODES attribute (which I'm assuming is a dictionary mapping).
You could instead use __getattr__() to dynamically look up the strings associated with the names in the class' CODES attribute, because then you wouldn't need to explicitly create a property for each of them. However, in this case it needs to be applied to the class-of-the-class — in other words, the class' metaclass.
The code below shows how to define one that does this:
class ANSIColorsMeta(type):
    def __getattr__(cls, key):
        """Call (mangled) private class method __wrap() with the key's code."""
        return getattr(cls, f'_{cls.__name__}__wrap')(cls.CODES[key])

class ANSIColors(metaclass=ANSIColorsMeta):
    @classmethod
    def __wrap(cls, code: str):
        return cls.PREFIX + code + cls.POSTFIX

    PREFIX = '<prefix>'
    POSTFIX = '<postfix>'
    CODES = {'reset': '|reset|', 'black': '|black|'}
if __name__ == '__main__':
    print(ANSIColors.reset)   # -> <prefix>|reset|<postfix>
    print(ANSIColors.black)   # -> <prefix>|black|<postfix>
    print(ANSIColors.foobar)  # -> KeyError: 'foobar'
✶ It's also important to note that this could be made much faster by having the metaclass' __getattr__() assign the result of the lookup to an actual cls attribute. Since __getattr__() is only called when default attribute access fails, the whole process is never repeated if the same key is used again: looked-up values are effectively cached, auto-optimizing the class based on how it's actually used.
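A minimal sketch of that caching variant, reusing the ANSIColorsMeta defined above:
import types  # not required; shown standalone for clarity

class ANSIColorsMeta(type):
    def __getattr__(cls, key):
        # Only reached when normal attribute lookup fails.
        value = getattr(cls, f'_{cls.__name__}__wrap')(cls.CODES[key])
        setattr(cls, key, value)  # cache on the class: the next access skips __getattr__
        return value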

Related

How to determine if a method was called from within the class where it's defined?

I'm trying to implement an (admittedly unPythonic) way of encapsulating a lot of instance variables.
I have these variables' names mapped to their respective values inside a dictionary, so instead of writing a lot of boilerplate (i.e. self.var = val, maybe fifty times over), I'm iterating over the dictionary while calling __setattr__(), this way:
class MyClass:
    __slots__ = ("var1", "var2", "var3")

    def __init__(self, data):
        for k, v in data.items():
            self.__setattr__(k, v)
Then I would override __setattr__() in a way that controls access to these properties.
From within __setattr__(), I'd check if the object has the property first, in order to allow setattr calls inside __init__():
def __setattr__(self, k, v):
    if k in self.__class__.__slots__:
        if hasattr(self, k):
            return print("Read-only property")
    super().__setattr__(k, v)
The problem is, I also need some of these properties to be writable elsewhere in MyClass, even if they were already initialized in __init__(). So I'm looking for some way to determine whether setattr was called inside the class scope or outside of it, e.g.:
class MyClass:
    __slots__ = ("var",)

    def __init__(self):
        self.__setattr__("var", 0)

    def increase_val(self):
        self.var += 1  # THIS SHOULD BE ALLOWED

my_obj = MyClass()
my_obj.var += 1  # THIS SHOULD BE FORBIDDEN
My pseudo definition would be like:
# pseudocode
def setattr:
    if attribute in slots and scope(setattr) != MyClass:
        return print("Read-only property")
    super().setattr
Also, I'd rather not store the entire dictionary in one instance variable, as I need properties to be immutable.
Answering my own question to share with anyone with the same issue.
Thanks to @DeepSpace in the comments, I've delved a bit into the frame inspection topic, which I'd totally ignored before.
Since the well-known inspect library relies on sys._getframe() in some parts, namely the ones I'm mainly interested in, I've decided to use sys directly instead.
sys._getframe() returns the current frame object in the execution stack, which comes with some useful properties.
E.g., f_back allows you to locate the immediate outer frame, which, in case __setattr__() was called within the class, is the class itself.
On the outer frame, f_locals returns a dictionary with the variables in the frame's local scope and their respective values.
One can look for self inside f_locals to determine whether the context is a class, although it's a bit 'dirty', since any non-class context could have a self variable too.
However, if self is mapped to an object of type MyClass, then there shouldn't be ambiguities.
Here's my final definition of __setattr__():
def __setattr__(self, k, v):
    if k in self.__class__.__slots__:
        # Frame 1 is the immediate caller of __setattr__()
        self_object = sys._getframe(1).f_locals.get("self")
        if self_object is None or self_object.__class__ != MyClass:
            return print(k, "is a read-only property")
    super().__setattr__(k, v)
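With that definition, the earlier example behaves as intended. A quick sketch, assuming sys is imported at module level:
my_obj = MyClass()     # allowed: __init__'s frame has a MyClass `self`
my_obj.increase_val()  # allowed: increase_val's frame has a MyClass `self`
my_obj.var += 1        # prints "var is a read-only property"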
As a conclusion, I feel like pursuing variable privacy in Python is kind of going against the tide; it's definitely a cleaner solution to label variables as 'protected' according to the recognized standard, without bothering too much about the actual accessibility.
Another side note is that frame inspection doesn't look like a very reliable approach for applications meant for production, but more like a debugging tool. As a matter of fact, some inspect functions do not work with some Python implementations, e.g. those lacking stack frame support.

How to keep a Python project modular?

Context
I've been working on a Python project recently, and found modularity very important. For example, say you made a class with some attributes and a line of code that uses those attributes, like
a = A()
print("hi" + a.imA)
If you were to change imA of class A to another type, you would have to modify the print statement. In my case I had to do this many times, which was annoying and time-consuming. Get/set methods would've solved this, but I heard that getters and setters are not 'good Python'. So how would you solve this problem without using get and set methods?
First point: you would have saved yourself quite some hassle by using string formatting instead of string concatenation, ie:
print("hi {}".format(a.imA))
Granted, the final result may or may not be what you'd expect, depending on how a.imA's type implements __str__() and __repr__(), but at least this will not break the code.
wrt/ getters and setters: they are indeed considered rather unpythonic, because Python has strong support for computed attributes, and a simple generic implementation is available as the builtin property type.
NB: actually, what's considered unpythonic is to systematically use implementation attributes and getters/setters (either explicit or, as with computed attributes, implicit) when a plain public attribute is enough. The reason is that you can always turn a plain attribute into a computed one without breaking the client code (assuming, of course, you don't change the type nor the semantics of the attribute), something that was not possible with early OOPLs like Smalltalk, C++ or Java (Smalltalk being a bit of a special case actually, but that's another topic).
In your case, if the point was to change the stored value's type without breaking the API, the simple obvious canonical solution was to use a property delegating to an implementation attribute:
before:
class Foo(object):
    def __init__(self, bar):
        # `bar` is expected to be the string representation of an int.
        self.bar = bar

    def frobnicate(self, val):
        return (int(self.bar) + val) / 2
after:
class Foo(object):
    def __init__(self, bar):
        # `bar` is expected to be the string representation of an int,
        # but we want to store it as an int.
        self.bar = bar

    @property
    def bar(self):
        return str(self._bar)

    @bar.setter
    def bar(self, value):
        self._bar = int(value)

    def frobnicate(self, val):
        # internally we use the implementation attribute `_bar`
        return (self._bar + val) / 2
And you now have the value stored internally as an int, but the public interface is (almost) exactly the same; the only difference is that passing something that cannot be converted by int() will raise at the expected place (when you set it) instead of breaking at the most unexpected one (when you call .frobnicate()).
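A quick usage sketch of the 'after' version:
f = Foo("21")
print(f.bar)             # -> '21': the public interface still returns a string
print(f.frobnicate(21))  # -> 21.0: computed from the internal int
f.bar = "oops"           # -> ValueError raised here, at assignment time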
Now note that changing the type of a public attribute is just like changing the return type of a getter (or the type of a setter argument): in both cases you are breaking the contract. So if what you really wanted was to change the type of A.imA, neither getters nor properties would have solved your issue; getters and setters (or, in Python, computed attributes) can only protect you from implementation changes.
EDIT: oh and yes: this has nothing to do with modularity (which is about writing decoupled, self-contained code that's easier to read, test, maintain and eventually reuse), but with encapsulation (whose aim is to make the public interface resilient to implementation changes).
First, use
print(f"hi {a.imA}") # Python 3.6+
or
print("hi {}".format(a.imA)) # all Python 3
instead of
print("hi"+a.imA)
That way, str will be called automatically on each argument.
Then define a __str__ method in all your classes, so that printing any instance always works.
class A:
    def __init__(self):
        self._member_1 = "spam"

    def __str__(self):
        return f"A(member 1: {self._member_1})"

Why do we use @staticmethod?

I just can't see why we need to use @staticmethod. Let's start with an example.
class test1:
    def __init__(self, value):
        self.value = value

    @staticmethod
    def static_add_one(value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

a = test1(3)
print(a.new_val)  ## >>> 4
class test2:
    def __init__(self, value):
        self.value = value

    def static_add_one(self, value):
        return value + 1

    @property
    def new_val(self):
        self.value = self.static_add_one(self.value)
        return self.value

b = test2(3)
print(b.new_val)  ## >>> 4
In the example above, the method static_add_one in each of the two classes does not require the class instance (self) in its calculation.
The method static_add_one in the class test1 is decorated with @staticmethod and works properly.
But at the same time, the method static_add_one in the class test2, which has no @staticmethod decoration, also works properly, by the trick of accepting a self argument that it doesn't use at all.
So what is the benefit of using @staticmethod? Does it improve performance? Or is it just due to the Zen of Python, which states that "explicit is better than implicit"?
The reason to use staticmethod is if you have something that could be written as a standalone function (not part of any class), but you want to keep it within the class because it's somehow semantically related to the class. (For instance, it could be a function that doesn't require any information from the class, but whose behavior is specific to the class, so that subclasses might want to override it.) In many cases, it could make just as much sense to write something as a standalone function instead of a staticmethod.
Your example isn't really the same. A key difference is that, even though you don't use self, you still need an instance to call static_add_one --- you can't call it directly on the class with test2.static_add_one(1). So there is a genuine difference in behavior there. The most serious "rival" to a staticmethod isn't a regular method that ignores self, but a standalone function.
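To make the comparison concrete, here is a small sketch contrasting the two genuine alternatives (the Counter class is illustrative):
def add_one(value):              # standalone function: no class involved
    return value + 1

class Counter:
    @staticmethod
    def add_one(value):          # same logic, kept in the class namespace
        return value + 1

print(add_one(3))                # -> 4
print(Counter.add_one(3))        # -> 4: callable on the class itself
print(Counter().add_one(3))      # -> 4: and on instances too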
Today I suddenly found a benefit of using @staticmethod.
If you created a staticmethod within a class, you don't need to create an instance of the class before using the staticmethod.
For example,
class File1:
    def __init__(self, path):
        out = self.parse(path)

    def parse(self, path):
        # ...parsing work...
        return x

class File2:
    def __init__(self, path):
        out = self.parse(path)

    @staticmethod
    def parse(path):
        # ...parsing work...
        return x

if __name__ == '__main__':
    path = 'abc.txt'
    File1.parse(path)  # TypeError: parse() missing 1 required positional argument: 'path'
    File2.parse(path)  # Goal!!!!!!!!!!!!!!!!!!!!
Since the method parse is strongly related to the classes File1 and File2, it is more natural to put it inside the class. However, sometimes this parse method may also be used in other classes under some circumstances. If you want to do that with File1, you must create an instance of File1 before calling the method parse, whereas with the staticmethod in class File2, you may call the method directly using the syntax File2.parse.
This makes your work more convenient and natural.
I will add something other answers didn't mention. It's not only a matter of modularity, of putting something next to other logically related parts. It's also that the method could be non-static at some other point of the hierarchy (i.e. in a subclass or superclass) and thus participate in polymorphism (type-based dispatching). So if you put that function outside the class, you will be precluding subclasses from effectively overriding it. Now, say you realize you don't need self in function C.f of class C; you have three options:
Put it outside the class. But we just decided against this.
Do nothing new: while unused, still keep the self parameter.
Declare you are not using the self parameter, while still letting other C methods call f as self.f, which is required if you wish to keep open the possibility of further overrides of f that do depend on some instance state.
Option 2 demands less conceptual baggage (you already have to know about self and methods-as-bound-functions, because it's the more general case). But you may still prefer to be explicit about self not being used (and the interpreter could even reward you with some optimization, not having to partially apply the function to self). In that case, you pick option 3 and add @staticmethod on top of your function.
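Here is a minimal sketch of option 3 (class names are illustrative): the base class marks f as static yet calls it through self.f, so a subclass override that does use instance state still participates in dispatch:
class C:
    @staticmethod
    def f():
        return "stateless default"

    def report(self):
        return self.f()  # dispatch through the instance: overrides participate

class D(C):
    def __init__(self, state):
        self.state = state

    def f(self):  # override that does depend on instance state
        return f"stateful: {self.state}"

print(C().report())    # -> stateless default
print(D(42).report())  # -> stateful: 42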
Use @staticmethod for methods that don't need to operate on a specific object, but that you still want located in the scope of the class (as opposed to module scope).
Your example in test2.static_add_one wastes its time passing an unused self parameter, but otherwise works the same as test1.static_add_one. Note that this extraneous parameter can't be optimized away.
One example I can think of is in a Django project I have, where a model class represents a database table, and an object of that class represents a record. There are some functions used by the class that are stand-alone and do not need an object to operate on, for example a function that converts a title into a "slug", which is a representation of the title that follows the character set limits imposed by URL syntax. The function that converts a title to a slug is declared as a staticmethod precisely to strongly associate it with the class that uses it.
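That pattern might look roughly like the following sketch; Article, slugify and the regex are illustrative, not the actual project code:
import re

class Article:
    def __init__(self, title):
        self.title = title
        self.slug = self.slugify(title)  # also usable without any instance

    @staticmethod
    def slugify(title):
        # lowercase, collapse runs of non-URL-safe characters into hyphens
        return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')

print(Article.slugify("Hello, World!"))  # -> hello-world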

Python, executing extra code at method definition

I am writing a Python API/server to allow an external device (a microcontroller) to remotely call methods of an object by sending a string with the name of the method. These methods would be stored in a dictionary, e.g.:
class Server:
    ...
    functions = {}

    def register(self, func):
        self.functions[func.__name__] = func

    def call(self, func_name, args):
        self.functions[func_name](*args)
    ...
I know that I could define functions externally to the class definition and register them manually, but I would really like the registering step to be done automatically. Consider the following class:
class MyServer(Server):
    ...
    def add(self, a, b):
        print(a + b)

    def sub(self, a, b):
        print(a - b)
    ...
It would work by subclassing a server class and by defining methods to be called. How could I get the methods to be automatically registered in the functions dictionary?
One way I thought it could be done is with a metaclass that looks for a pattern in the method names and, if a match is found, adds that method to the functions dictionary. It seems overkill, though...
Would it be possible to decorate the methods to be registered? Can someone give me a hint to the simplest solution to this problem?
There is no need to construct a dictionary, just use the getattr() built-in function:
def call(self, func_name, args):
    getattr(self, func_name)(*args)
Python actually uses a dictionary to access attributes on objects anyway (it's called __dict__, but using getattr() is better than accessing it directly).
If you really want to construct that dict for some reason, then look at the inspect module:
def __init__(self, ...):
    self.functions = dict(inspect.getmembers(self, inspect.ismethod))
If you want to pick specific methods, you could use a decorator to do that, but as BrenBarn points out, the instance doesn't exist at the time the methods are decorated, so you need to use the mark and recapture technique to do what you want.
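A minimal sketch of that mark-and-recapture technique; the register decorator and the _registered flag are illustrative names:
import inspect

def register(func):
    func._registered = True  # mark: runs at definition time, no instance yet
    return func

class Server:
    def __init__(self):
        # recapture: once the instance exists, collect the marked methods
        self.functions = {
            name: getattr(self, name)
            for name, member in inspect.getmembers(type(self), inspect.isfunction)
            if getattr(member, '_registered', False)
        }

    def call(self, func_name, args):
        self.functions[func_name](*args)

class MyServer(Server):
    @register
    def add(self, a, b):
        print(a + b)

MyServer().call('add', (1, 2))  # -> 3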

Hide deprecated methods from tab completion

I would like to control which methods appear when a user uses tab-completion on a custom object in ipython - in particular, I want to hide functions that I have deprecated. I still want these methods to be callable, but I don't want users to see them and start using them if they are inspecting the object. Is this something that is possible?
Partial answer for you. I'll post the example code and then explain why it's only a partial answer.
Code:
class hidden(object):  # or whatever its parent class is
    def __init__(self):
        self.value = 4

    def show(self):
        return self.value

    def change(self, n):
        self.value = n

    def __getattr__(self, attrname):
        # put the dep'd method/attribute names here
        deprecateds = ['dep_show', 'dep_change']
        if attrname in deprecateds:
            print("These aren't the methods you're looking for.")
            def dep_change(n):
                self.value = n
            def dep_show():
                return self.value
            return eval(attrname)
        else:
            raise AttributeError(attrname)
So now the caveat: they're not methods (note the lack of self as the first variable). If you need your users (or your code) to be able to call im_class, im_func, or im_self on any of your deprecated methods, then this hack won't work. Also, I'm pretty sure there's going to be a performance hit, because you're defining each dep'd function inside __getattr__. This won't affect your other attribute lookups (had I put them in __getattribute__, that would be a different matter), but it will slow down access to those deprecated methods. This can be (largely, but not entirely) negated by putting each function definition inside its own if block, instead of doing a list-membership check, but, depending on how big your functions are, that could be really annoying to maintain.
UPDATE:
1) If you want to make the deprecated functions methods (and you do), just use
import types
return types.MethodType(eval(attrname), self)
instead of
return eval(attrname)
in the above snippet, and add self as the first argument to the function defs. It turns them into instancemethods (so you can use im_class, im_func, and im_self to your heart's content).
2) If the __getattr__ hook didn't thrill you, there's another option (that I know of), albeit with its own caveats, and we'll get to those: put the deprecated function definitions inside __init__, and hide them with a custom __dir__. Here's what the above code would look like done this way:
class hidden(object):
    def __init__(self):
        self.value = 4
        from types import MethodType

        def dep_show(self):
            return self.value
        self.__setattr__('dep_show', MethodType(dep_show, self))

        def dep_change(self, n):
            self.value = n
        self.__setattr__('dep_change', MethodType(dep_change, self))

    def show(self):
        return self.value

    def change(self, n):
        self.value = n

    def __dir__(self):
        heritage = dir(super(self.__class__, self))  # inherited attributes
        hide = ['dep_show', 'dep_change']
        show = [k for k in list(self.__class__.__dict__) + list(self.__dict__)
                if k not in heritage + hide]
        return sorted(heritage + show)
The advantage here is that you're not defining the functions anew every lookup, which nets you speed. The disadvantage here is that because you're not defining functions anew each lookup, they have to 'persist' (if you will). So, while the custom __dir__ method hides your deprecateds from dir(hiddenObj) and, therefore, IPython's tab-completion, they still exist in the instance's __dict__ attribute, where users can discover them.
Seems like there is a special magic method for the introspection that is called by dir(): __dir__(). Isn't that what you are looking for?
The DeprecationWarning isn't emitted until the method is called, so you'd have to have a separate attribute on the class that stores the names of deprecated methods, then check that before suggesting a completion.
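A minimal sketch of that separate-attribute idea, combined with the __dir__ hook mentioned above (the _deprecated name is illustrative):
class MyObject:
    _deprecated = ('old_method',)  # names to hide from completion

    def old_method(self):  # still callable, just not advertised
        return "still works"

    def new_method(self):
        return "use me instead"

    def __dir__(self):
        return [n for n in super().__dir__() if n not in self._deprecated]

obj = MyObject()
print('old_method' in dir(obj))  # -> False: hidden from tab completion
print(obj.old_method())          # -> still works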
Alternatively, you could walk the AST for the method looking for DeprecationWarning, but that will fail if either the class is defined in C, or if the method may emit a DeprecationWarning based on the type or value of the arguments.
About the completion mechanism in IPython, it is documented here:
http://ipython.scipy.org/doc/manual/html/api/generated/IPython.core.completer.html#ipcompleter
But a really interesting example for you is the traits completer, which does precisely what you want: it hides some methods (based on their names) from the autocompletion.
Here is the code:
http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/IPython/Extensions/ipy_traits_completer.py
