Python class function return super()

So I was messing around with a readonly/modifiable class pattern which is pretty common in Java. It involves creating a base class containing read-only properties, and extending that class for a modifiable version. Usually there is a function readonly() or something similar to revert a modifiable version back to a read-only version of itself.
Unfortunately you cannot directly define a setter for a property defined in a superclass in Python, but you can simply redefine it as shown below. More interestingly, in Python the 'magic function' super returns a proxy object of the parent/super class, which allows for a cheeky readonly() implementation that feels really hacky, but as far as I can test it just works.
class Readonly:
    _t: int

    def __init__(self, t: int):
        self._t = t

    @property
    def t(self) -> int:
        return self._t

class Modifyable(Readonly):
    def __init__(self, t: int):
        super().__init__(t)

    @property
    def t(self) -> int:
        return self._t  # can also be super().t

    @t.setter
    def t(self, t):
        self._t = t

    def readonly(self) -> Readonly:
        return super()
In the above pattern I can call the readonly function and obtain a proxy to the parent object without having to instantiate a read-only version. The only problem is that when setting the read-only attribute, instead of throwing AttributeError: can't set attribute, it throws AttributeError: 'super' object has no attribute '<param-name>'.
So here are the questions for this:
Can this cause problems (exposing the super proxy object outside the class itself)?
I tested this on Python 3.8.5, but I'm not sure whether it working is an accident that goes against the 'Python semantics', so to speak.
Is there a better/more desirable way to achieve this? (I have no idea if it's even worth the ambiguous error message, in regards to performance for example.)
I would love to hear opinions and/or insights into this.
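For comparison, here is a runnable sketch of one alternative (my own assumption, not part of the question): readonly() returns a separate Readonly instance that shares the modifiable object's __dict__, so writes through the view fail with the ordinary property error instead of the ambiguous 'super' message.

```python
class Readonly:
    def __init__(self, t: int):
        self._t = t

    @property
    def t(self) -> int:
        return self._t

class Modifyable(Readonly):
    # Redefine the property via the parent's descriptor to add a setter.
    @Readonly.t.setter
    def t(self, t: int):
        self._t = t

    def readonly(self) -> Readonly:
        # Hypothetical alternative to returning super(): a lightweight
        # Readonly view that shares state with self instead of copying it.
        view = Readonly.__new__(Readonly)
        view.__dict__ = self.__dict__  # shared state, not a copy
        return view

m = Modifyable(1)
r = m.readonly()
m.t = 2          # writing through the modifiable object still works
print(r.t)       # the view sees the update: 2
```

Setting r.t now raises the usual "can't set attribute" AttributeError from the setter-less Readonly property, and isinstance(r, Readonly) holds, which the super proxy does not offer.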

Related

pylint giving not-callable error for object property that is callable

Not sure if I am doing something wrong or if this is a problem with pylint. In the code below I get a linting error that self.type is not callable (E1102).
Although I could just ignore it and keep working, it seems like this kind of thing should be easy to fix... I just can't figure out how to fix it.
from typing import Callable

class Thing:
    def __init__(self, thing_type: Callable):
        self._type = thing_type
        self._value = None

    @property
    def type(self) -> Callable:
        return self._type

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, value):
        self._value = self.type(value)
Will leave this answer unaccepted for a while, but I did figure out that it has something to do with the use of the @property decorator. In the process of creating the new object property, the type hint information is lost to pylint, and VSCode IntelliSense doesn't seem to work. Using the old-style property syntax I get proper linting and type hinting (see image below).
Seems a bit of a shame, but I am hoping someone comes up with a better answer.
If self._type is a class (instead of an instance), you might want to annotate it with type instead; type is the default metaclass in Python.
pylint should handle that better: it knows that you can call a class to create an instance.
class Thing:
    def __init__(self, thing_type: type):
        self._type: type = thing_type
        self._value = None

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, value):
        self._value = self._type(value)
Note that I also included a variable annotation in there, which requires Python 3.6.
Also note that if you do not include a type.setter anywhere, Python will conclude on its own that type cannot be set. If you then refer to _type in your constructor (which you're doing right now), you already bypass it not being settable.
Edits:
I changed the value.setter to use self._type instead. Since it's a private variable to the instance, this is perfectly fine to use. This stops the pylint E1102 from appearing.
Removing the type property will remove E0601. This is because your property is, in the class namespace, shadowing the global variable named type. If it is defined, pylint thinks you intend to use the property (which, at that point, is an instance of the property class, instead of a property on a class).
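As a quick sanity check, the annotated version from the answer can be exercised like this (the choice of int as the stored class is my own example, not from the post):

```python
class Thing:
    def __init__(self, thing_type: type):
        self._type: type = thing_type
        self._value = None

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, value):
        # Calling self._type (annotated as a class) avoids pylint E1102.
        self._value = self._type(value)

t = Thing(int)
t.value = "42"        # the setter coerces through the stored class
print(t.value)        # 42
print(type(t.value))  # <class 'int'>
```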

Creating a python class with ONLY read-only instance attributes

When developing code for test automation, I often transform responses from the SUT from XML / JSON / whatever to a Python object model to make working with it afterwards easier.
Since the client should not alter the information stored in the object model, it would make sense to have all instance attributes read-only.
For simple cases, this can be achieved by using a namedtuple from the collections module. But in most cases, a simple namedtuple won't do.
I know that the probably most pythonic way would be to use properties:
class MyReadOnlyClass(object):
    def __init__(self, a):
        self.__a = a

    @property
    def a(self):
        return self.__a
This is OK if I'm dealing only with a few attributes, but it gets lengthy pretty soon.
So I was wondering if there would be any other acceptable approach? What I came up with was this:
MODE_RO = "ro"
MODE_RW = "rw"

class ReadOnlyBaseClass(object):
    __mode = MODE_RW

    def __init__(self):
        self.__mode = MODE_RO

    def __setattr__(self, key, value):
        if self.__mode != MODE_RW:
            raise AttributeError("May not set attribute")
        else:
            self.__dict__[key] = value
I could then subclass it and use it like this:
class MyObjectModel(ReadOnlyBaseClass):
    def __init__(self, a):
        self.a = a
        super(MyObjectModel, self).__init__()
After the super call, adding or modifying instance attributes is not possible (... that easily, at least).
A possible caveat I came to think about is that if someone were to modify the __mode attribute and set it to MODE_RO, no new instances could be created. But that seems acceptable since it's clearly marked as "private" (in the Python way).
I would be interested if you see any more problems with this solution, or have completely different and better approaches.
Or maybe discourage this at all (with explanation, please)?
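The pattern above can be exercised end to end; this is a direct, runnable restatement of the question's own code plus a small demo:

```python
MODE_RO = "ro"
MODE_RW = "rw"

class ReadOnlyBaseClass(object):
    __mode = MODE_RW

    def __init__(self):
        self.__mode = MODE_RO  # flips this instance into read-only mode

    def __setattr__(self, key, value):
        if self.__mode != MODE_RW:
            raise AttributeError("May not set attribute")
        self.__dict__[key] = value

class MyObjectModel(ReadOnlyBaseClass):
    def __init__(self, a):
        self.a = a                             # still writable here
        super(MyObjectModel, self).__init__()  # now frozen

m = MyObjectModel(1)
print(m.a)  # 1
try:
    m.a = 2
except AttributeError as e:
    print(e)  # May not set attribute
```

Note that the check in __setattr__ works because name mangling makes the class-level __mode ("rw") visible until __init__ stores the instance-level "ro".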

Is calling __new__ from a classmethod a Pythonic way of doing things?

I want to build a class that handles and processes some data. I want properties to handle all the data processing silently when new data is passed, and I want to overload __init__ with a classmethod, to have flexibility in the parameters passed at instance creation. So I came up with the following solution:
class Cooper():
    def __init__(self):
        ...  # create all 'private' attributes

    @classmethod
    def Dougie(cls, data, datatype):
        inst = cls.__new__(cls)
        setattr(inst, datatype, data)
        return inst

    @property
    def datatype1(self):
        return self._datatype1

    @datatype1.setter
    def datatype1(self, newdata):
        self._datatype1, self._datatype2, ... = updatedata1(newdata)

    @property
    def datatype2(self):
        return self._datatype2

    @datatype2.setter
    def datatype2(self, newdata):
        self._datatype1, self._datatype2, ... = updatedata2(newdata)

    # ... to be continued ...
Is this a Pythonic way? Or should I really create a metaclass (I get a little afraid there)? What are the caveats? Is there a better way?
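For comparison, a common alternative (my own sketch, not from the question) is to let the classmethod go through cls() instead of cls.__new__(cls), so __init__ still creates every private attribute before the property setter runs. The updatedata logic here is a hypothetical stand-in:

```python
class Cooper:
    def __init__(self):
        # All 'private' attributes exist before any setter fires.
        self._datatype1 = None
        self._datatype2 = None

    @classmethod
    def dougie(cls, data, datatype):
        # Calling cls() runs __init__ normally; the data is then
        # routed through the matching property setter.
        inst = cls()
        setattr(inst, datatype, data)
        return inst

    @property
    def datatype1(self):
        return self._datatype1

    @datatype1.setter
    def datatype1(self, newdata):
        # Hypothetical stand-in for the asker's updatedata1().
        self._datatype1 = newdata
        self._datatype2 = len(newdata)

c = Cooper.dougie([1, 2, 3], "datatype1")
```

This avoids the main caveat of cls.__new__(cls): instances whose attributes may not exist because __init__ was skipped.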

Calling static method using class name - Good or Bad?

I am getting more and more familiar with static methods and class methods and I know the difference between them. But I ran into a problem today where I would kind of like to reference both self and cls in a method.
The only way I know how to accomplish this is to make a normal method (not with @classmethod, but simply with def) and call the class by its name explicitly, like so:
class myClass:
    def __init__(self):
        self._ser.connect('COM5')  # self._ser set up elsewhere

    def ask(self, message: str) -> str:
        return myClass.clean_output(self._ser.query(message))

    @staticmethod
    def clean_output(dirty_string: str):
        clean_string = dirty_string.strip().replace(chr(4), '')
        return clean_string
This example is an over-simplified version of the philosophy. I would like to call a clean or parse function on data I get back, like from a serial device. Is there any way to implement the ask method like so?:
def ask(self, message: str) -> str:
    return cls.clean_output(self._ser.query(message))
Or is it OK that I'm calling it with myClass explicitly like that? If it is, when should programmers use @classmethod, and when is it permissible to use the class name itself? Is the @classmethod decorator only really needed when you expect subclassing to happen?
Just call the static method on self:
def ask(self, message: str) -> str:
    return self.clean_output(self._ser.query(message))
It is available there too.
Attributes on a class are always available on the instances too (provided there is no attribute on the instance itself with the same name masking it). Methods (be they regular, static or class methods) are no exception, they too are just attributes. Their binding behaviour doesn't matter here.
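A minimal runnable illustration of the answer; the Device stub standing in for the serial connection is my own assumption, since the question's hardware is not available:

```python
class Device:
    # Stand-in for the serial device from the question: echoes the
    # message back with a trailing EOT byte and whitespace.
    def query(self, message: str) -> str:
        return message + chr(4) + "  "

class MyClass:
    def __init__(self):
        self._ser = Device()

    def ask(self, message: str) -> str:
        # Looking the static method up on self means a subclass that
        # overrides clean_output is picked up automatically, which
        # hard-coding MyClass.clean_output would not allow.
        return self.clean_output(self._ser.query(message))

    @staticmethod
    def clean_output(dirty_string: str) -> str:
        return dirty_string.strip().replace(chr(4), '')

print(MyClass().ask("hello"))  # hello
```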

Mimic Python's NoneType

I'm creating several classes to use them as state flags (this is more of an exercise, though I'm going to use them in a real project), just like we use None in Python, i.e.
... some_var is None ...
NoneType has several special properties. Most importantly, it's a singleton, that is, there can't be more than one NoneType instance during any interpreter session, and its instances (None objects) are immutable. I've come up with two possible ways to implement somewhat similar behaviour in pure Python, and I'm eager to know which one looks better from the architectural standpoint.
1. Don't use instances at all.
The idea is to have a metaclass, that produces immutable classes. The classes are prohibited to have instances.
class FlagMetaClass(type):
    def __setattr__(self, *args, **kwargs):
        raise TypeError("{} class is immutable".format(self))

    def __delattr__(self, *args, **kwargs):
        self.__setattr__()

    def __repr__(self):
        return self.__name__

class BaseFlag(object):
    __metaclass__ = FlagMetaClass

    def __init__(self):
        raise TypeError("Can't create {} instances".format(type(self)))

    def __repr__(self):
        return str(type(self))

class SomeFlag(BaseFlag):
    pass
And we get the desired behaviour
a = BaseFlag
a is BaseFlag # -> True
a is SomeFlag # -> False
Obviously any attempt to set attributes on these classes will fail (of course there are several hacks to overcome this, but the direct way is closed). And the classes themselves are unique objects loaded in a namespace.
2. A proper singleton class
class FlagMetaClass(type):
    _instances = {}

    def __call__(cls):
        if cls not in cls._instances:
            cls._instances[cls] = super(FlagMetaClass, cls).__call__()
        return cls._instances[cls]
        # This may be slightly modified to raise an error instead of
        # returning the same object, e.g.
        # def __call__(cls):
        #     if cls in cls._instances:
        #         raise TypeError("Can't have more than one {} instance".format(cls))
        #     cls._instances[cls] = super(FlagMetaClass, cls).__call__()
        #     return cls._instances[cls]

    def __setattr__(self, *args, **kwargs):
        raise TypeError("{} class is immutable".format(self))

    def __delattr__(self, *args, **kwargs):
        self.__setattr__()

    def __repr__(self):
        return self.__name__

class BaseFlag(object):
    __metaclass__ = FlagMetaClass
    __slots__ = []

    def __repr__(self):
        return str(type(self))

class SomeFlag(BaseFlag):
    pass
Here the Flag classes are real singletons. This particular implementation doesn't raise an error when we try to create another instance, but returns the same object (though it's easy to alter this behaviour). Both classes and instances can't be directly modified. The point is to create an instance of each class upon import like it's done with None.
Both approaches give me somewhat immutable, unique objects that can be used for comparison just like None. To me the second one looks more NoneType-like, since None is an instance, but I'm not sure it's worth the increase in ideological complexity. Looking forward to hearing from you.
Theoretically, it's an interesting exercise. But when you say "though I'm going to use them in a real project" then you lose me.
If the real project is highly unPythonic (using traits or some other package to emulate static typing, using __slots__ to keep people from falling on sharp objects, etc.) -- well, I've got nothing for you, because I've got no use for that, but others do.
If the real project is Pythonic, then do the simplest thing possible.
Your "not use instances at all" answer is the correct one here, but you don't need to do a lot of class definition, either.
For example, if you have a function that could accept None as a real parameter, and you want to tell if the parameter has been defaulted, then just do this:
class NoParameterGiven:
    pass

def my_function(my_parameter=NoParameterGiven):
    if my_parameter is NoParameterGiven:
        <do all my default stuff>
That class is so cheap, there's no reason even to share it between files. Just create it where you need it.
Your state classes are a different story, and you might want to use something like the enum module that @Dunes mentioned -- it has some nice features.
OTOH, if you want to keep it really simple, you could just do something like this:
class MyStates:
    class State1: pass
    class State2: pass
    class State3: pass
You don't need to instantiate anything, and you can refer to them like this: MyStates.State1.
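The enum module mentioned above covers most of this ground already; here is a small sketch with hypothetical state names. Enum members are interpreter-wide singletons, compare cleanly with is, and reject mutation, which is essentially the behaviour both hand-rolled approaches were after:

```python
from enum import Enum, auto

class State(Enum):
    # Hypothetical state names for illustration.
    IDLE = auto()
    RUNNING = auto()
    DONE = auto()

current = State.IDLE
print(current is State.IDLE)  # True
print(current is State.DONE)  # False
```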
