Python: creating a class instance via static method vs class method

Let's say I have a class and would like to implement a method which creates an instance of that class. As I see it, I have two options:
a static method,
a class method.
An example:
class DummyClass:
    def __init__(self, json):
        self.dict = json

    @staticmethod
    def from_json_static(json):
        return DummyClass(json)

    @classmethod
    def from_json_class(cls, json):
        return cls(json)
Both of the methods work:
dummy_dict = {"dummy_var": 124}
dummy_instance = DummyClass({"test": "abc"})
dummy_instance_from_static = dummy_instance.from_json_static(dummy_dict)
print(dummy_instance_from_static.dict)
> {'dummy_var': 124}
dummy_instance_from_class = DummyClass.from_json_class(dummy_dict)
print(dummy_instance_from_class.dict)
> {'dummy_var': 124}
What I often see in other people's code is the classmethod design instead of the staticmethod one. Why is this the case?
Or, rephrasing the question to possibly get a more comprehensive answer: what are the pros and cons of creating a class instance via a classmethod vs a staticmethod in Python?

Two big advantages of the @classmethod approach:
First, you don't hard-code the class name. Given modern refactoring tools in IDEs, this isn't as big of a deal, but it is nice that your code doesn't break if you rename your Foo class to Bar:
class Bar:
    @staticmethod
    def make_me():
        return Foo()
Another advantage (at least, you should understand the difference!) is how this behaves with inheritance:
class Foo:
    @classmethod
    def make_me_cm(cls):
        return cls()

    @staticmethod
    def make_me_sm():
        return Foo()

class Bar(Foo):
    pass

print(Bar.make_me_cm())  # it's a Bar instance
print(Bar.make_me_sm())  # it's a Foo instance


Python class multiple constructors

There is a class that I want to be constructed from a string in 2 different ways. Here is what I mean:
class ParsedString():
    def __init__(self, str):
        ...  # parse string and init some fields
    def __init__2(self, str):
        ...  # parse string in another way and init the same fields
In Java I would provide a private constructor with 2 static factory methods, each of which defines a way of parsing the string and then calls the private constructor.
What is the common way to solve such problem in Python?
Just like in Java:
class ParsedString():
    def __init__(self, x):
        print('init from', x)

    @classmethod
    def from_foo(cls, foo):
        return cls('foo' + foo)

    @classmethod
    def from_bar(cls, bar):
        return cls('bar' + bar)

one = ParsedString.from_foo('!')
two = ParsedString.from_bar('!')
docs: https://docs.python.org/3/library/functions.html?highlight=classmethod#classmethod
There's no way, however, to make the constructor private. You can take measures, like a hidden parameter, to prevent it from being called directly, but that wouldn't be considered "pythonic".
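For illustration, here is a minimal sketch of such a hidden-parameter guard; the _token parameter and _SENTINEL attribute are my own names, not part of the original answer, and anyone can still bypass the check by passing the token explicitly:
class ParsedString:
    _SENTINEL = object()  # shared token; a convention, not real access control

    def __init__(self, x, _token=None):
        # discourage calling the constructor directly; the factories pass the sentinel
        if _token is not ParsedString._SENTINEL:
            raise TypeError("use ParsedString.from_foo() or ParsedString.from_bar()")
        print('init from', x)

    @classmethod
    def from_foo(cls, foo):
        return cls('foo' + foo, _token=cls._SENTINEL)

    @classmethod
    def from_bar(cls, bar):
        return cls('bar' + bar, _token=cls._SENTINEL)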

Mock class instances without calling `__init__` and mock their respective attributes

I have a class MyClass with a complex __init__ function.
This class has a method my_method(self) which I would like to test.
my_method only needs attribute my_attribute from the class instance.
Is there a way I can mock class instances without calling __init__ and by setting the attributes of each class instance instead?
What I have:
# my_class.py
from utils import do_something

class MyClass(object):
    def __init__(self, *args, **kwargs):
        # complicated function which I would like to bypass when initiating a mocked instance class
        pass

    def my_method(self):
        return do_something(self.my_attribute)
What I tried
@mock.patch("my_class.MyClass")
def test_my_method(class_mock, attribute):
    instance = class_mock.return_value
    instance.my_attribute = attribute
    example_instance = my_class.MyClass()
    out_my_method = example_instance.my_method()
    # then perform some assertions on `out_my_method`
However, this still makes use of __init__, which I was hoping to bypass or mock.
As I mentioned in the comments, one way to test a single method without having to create an instance is:
MyClass.my_method(any_object_with_my_attribute)
The problem with this, as with both options in quamrana's answer, is that we have now expanded the scope of any future change just because of the tests. If a change to my_method requires access to an additional attribute, we now have to change both the implementation and something else (the SuperClass, the MockMyClass, or in this case any_object_with_my_attribute_and_another_one).
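As a minimal sketch of that call style (types.SimpleNamespace as the stand-in object and 42 as a dummy attribute value are my own choices, assuming the my_class module from the question is importable):
from types import SimpleNamespace

from my_class import MyClass

def test_my_method():
    # any object exposing my_attribute will do; MyClass.__init__ is never called
    stand_in = SimpleNamespace(my_attribute=42)
    out_my_method = MyClass.my_method(stand_in)
    # perform assertions on out_my_method (depends on what do_something returns)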
Let's have a more concrete example:
import json

class MyClass:
    def __init__(self, filename):
        with open(filename) as f:
            data = json.load(f)
        self.foo = data["foo"]
        self.bar = data["bar"]
        self.baz = data["baz"]

    def my_method(self):
        return self.foo ** 2
Here any test that requires an instance of MyClass is painful because of the file access in __init__. A more testable implementation would split apart the detail of how the data is accessed and the initialisation of a valid instance:
class MyClass:
    def __init__(self, foo, bar, baz):
        self.foo = foo
        self.bar = bar
        self.baz = baz

    def my_method(self):
        return self.foo ** 2

    @classmethod
    def from_json(cls, filename):
        with open(filename) as f:
            data = json.load(f)
        return cls(data["foo"], data["bar"], data["baz"])
You have to refactor MyClass("path/to/file") to MyClass.from_json("path/to/file"), but wherever you already have the data (e.g. in your tests) you can use e.g. MyClass(1, 2, 3) to create the instance without requiring a file (you only need to consider the file in the tests of from_json itself). This makes it clearer what the instance actually needs, and allows the introduction of other ways to construct an instance without changing the interface.
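For example (a sketch against the refactored class above; the concrete values are arbitrary):
def test_my_method():
    instance = MyClass(foo=3, bar=None, baz=None)
    assert instance.my_method() == 9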
There are at least two options I can see:
Extract a super class:
class SuperClass:
    def __init__(self, attribute):
        self.my_attribute = attribute

    def my_method(self):
        return do_something(self.my_attribute)

class MyClass(SuperClass):
    def __init__(self, *args, **kwargs):
        super().__init__(attribute)  # I don't know where attribute comes from
        # complicated function which I would like to bypass when initiating a mocked instance class
Your tests can instantiate SuperClass and call my_method().
Inherit from MyClass as is and make your own simple __init__():
class MockMyClass(MyClass):
    def __init__(self, attribute):
        self.my_attribute = attribute
Now your test code can instantiate MockMyClass with the required attribute and call my_method().
For instance, you could write the test as follows
def test_my_method(attribute):
    class MockMyClass(MyClass):
        def __init__(self, attribute):
            self.my_attribute = attribute
    out_my_method = MockMyClass(attribute).my_method()
    # perform assertions on out_my_method

Multiple Inheritance Dependency - Base requires AbstractBaseClass

The gist of the question: when inheriting from multiple classes, how can I guarantee that if one class is inherited, a complementary Abstract Base Class (ABC) is also used by the child object?
I've been messing around with Python's inheritance trying to see what kind of cool stuff I can do, and I came up with this pattern, which is kind of interesting.
I've been trying to use this to make implementing and testing objects that interface with my cache easier. I've got three modules:
ICacheable.py
Cacheable.py
SomeClass.py
ICacheable.py
import abc

class ICacheable(abc.ABC):
    @property
    @abc.abstractmethod
    def CacheItemIns(self):
        return self.__CacheItemIns

    @CacheItemIns.setter
    @abc.abstractmethod
    def CacheItemIns(self, value):
        self.__CacheItemIns = value
        return

    @abc.abstractmethod
    def Load(self):
        """docstring"""
        return

    @abc.abstractmethod
    def _deserializeCacheItem(self):
        """docstring"""
        return

    @abc.abstractmethod
    def _deserializeNonCacheItem(self):
        """docstring"""
        return
Cacheable.py
class Cacheable:
    def _getFromCache(self, itemName, cacheType,
                      cachePath=None):
        """docstring"""
        kwargs = {"itemName": itemName,
                  "cacheType": cacheType,
                  "cachePath": cachePath}
        lstSearchResult = CacheManager.SearchCache(**kwargs)
        if lstSearchResult[0]:
            self.CacheItemIns = lstSearchResult[1]
            self._deserializeCacheItem()
        else:
            cacheItem = CacheManager.NewItem(**kwargs)
            self.CacheItemIns = cacheItem
            self._deserializeNonCacheItem()
        return
SomeClass.py
import ICacheable
import Cacheable

class SomeClass(Cacheable, ICacheable):
    __valueFromCache1: str = ""
    __valueFromCache2: str = ""
    __CacheItemIns: dict = {}

    @property
    def CacheItemIns(self):
        return self.__CacheItemIns

    @CacheItemIns.setter
    def CacheItemIns(self, value):
        self.__CacheItemIns = value
        return

    def __init__(self, itemName, cacheType):
        # Call method from Cacheable
        self.__valueFromCache1
        self.__valueFromCache2
        self.__getItemFromCache(itemName, cacheType)
        return

    def _deserializeCacheItem(self):
        """docstring"""
        self.__valueFromCache1 = self.CacheItemIns["val1"]
        self.__valueFromCache2 = self.CacheItemIns["val2"]
        return

    def _deserializeNonCacheItem(self):
        """docstring"""
        self.__valueFromCache1 = ...  # some external function
        self.__valueFromCache2 = ...  # some external function
        return
So this example works, but the scary thing is that there is no guarantee that a class inheriting Cacheable also inherits ICacheable. That seems like a design flaw, as Cacheable is useless on its own. However, the ability to abstract things away from my subclass/child class with this is powerful. Is there a way to guarantee Cacheable's dependency on ICacheable?
If you explicitly do not want inheritance, you can register classes as virtual subclasses of an ABC.
@ICacheable.register
class Cacheable:
    ...
That means every subclass of Cacheable is automatically treated as subclass of ICacheable as well. This is mostly useful if you have an efficient implementation that would be slowed down by having non-functional Abstract Base Classes to traverse, e.g. for super calls.
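A small, self-contained sketch of what registration does (and does not) give you, using simplified stand-ins for the classes above:
import abc

class ICacheable(abc.ABC):
    @abc.abstractmethod
    def Load(self):
        ...

@ICacheable.register
class Cacheable:
    pass

print(issubclass(Cacheable, ICacheable))    # True: Cacheable is a virtual subclass
print(isinstance(Cacheable(), ICacheable))  # True as well
# Note: abstract methods are NOT enforced for virtual subclasses,
# so Cacheable() instantiates fine even though Load() is missing.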
However, ABCs are not just interfaces, and it is fine to inherit from them. In fact, part of the benefit of an ABC is that it forces subclasses to implement all abstract methods. An intermediate helper class, such as Cacheable, is fine not implementing all the methods as long as it is never instantiated. However, any non-virtual subclass that is instantiated must be concrete.
>>> class FailClass(Cacheable, ICacheable):
...     ...
...
>>> FailClass()
TypeError: Can't instantiate abstract class FailClass with abstract methods CacheItemIns, Load, _deserializeCacheItem, _deserializeNonCacheItem
Note that if you
always subclass as class AnyClass(Cacheable, ICacheable), and
never instantiate Cacheable,
that is functionally equivalent to Cacheable inheriting from ICacheable. The Method Resolution Order (i.e. the inheritance diamond) is the same.
>>> AnyClass.__mro__
(__main__.AnyClass, __main__.Cacheable, __main__.ICacheable, abc.ABC, object)

Class instance as static attribute

Python 3 doesn't allow you to reference a class inside its body (except in methods):
class A:
    static_attribute = A()

    def __init__(self):
        ...
This raises a NameError in the second line because 'A' is not defined.
Alternatives
I have quickly found one workaround:
class A:
    @property
    @classmethod
    def static_property(cls):
        return A()

    def __init__(self):
        ...
Although this isn't exactly the same since it returns a different instance every time (you could prevent this by saving the instance to a static variable the first time).
Are there simpler and/or more elegant alternatives?
EDIT:
I have moved the question about the reasons for this restriction to a separate question
The expression A() can't be run until the class A has been defined. In your first block of code, the definition of A is not complete at the point you are trying to execute A().
Here is a simpler alternative:
class A:
    def __init__(self):
        ...

A.static_attribute = A()
When you define a class, Python immediately executes the code within the definition. Note that this is different from defining a function, where Python compiles the code but doesn't execute it.
That's why this will create an error:
class MyClass(object):
    a = 1 / 0
But this won't:
def my_func():
    a = 1 / 0
In the body of A's class definition, A is not yet defined, so you can't reference it until after it's been defined.
There are several ways you can accomplish what you're asking, but it's not clear to me why this would be useful in the first place, so if you can provide more details about your use case, it'll be easier to recommend which path to go down.
The simplest would be what khelwood posted:
class A(object):
    pass

A.static_attribute = A()
Because this is modifying class creation, using a metaclass could be appropriate:
class MetaA(type):
    def __new__(mcs, name, bases, attrs):
        cls = super(MetaA, mcs).__new__(mcs, name, bases, attrs)
        cls.static_attribute = cls()
        return cls

class A(object, metaclass=MetaA):  # Python 3 syntax; Python 2 used __metaclass__ = MetaA in the body
    pass
Or you could use a descriptor to have the instance created lazily, or to customize access to it further:
class MyDescriptor(object):
    def __get__(self, instance, owner):
        owner.static_attribute = owner()
        return owner.static_attribute

class A(object):
    static_attribute = MyDescriptor()
Using the property decorator is a viable approach, but it would need to be done something like this:
class A:
    _static_attribute = None

    @property
    def static_attribute(self):
        if A._static_attribute is None:
            A._static_attribute = A()
        return A._static_attribute

    def __init__(self):
        pass

a = A()
print(a.static_attribute)  # -> <__main__.A object at 0x004859D0>
b = A()
print(b.static_attribute)  # -> <__main__.A object at 0x004859D0>
You can use a class decorator:
def set_static_attribute(cls):
    cls.static_attribute = cls()
    return cls

@set_static_attribute
class A:
    pass

Now:
>>> A.static_attribute
<__main__.A at 0x10713a0f0>
Applying the decorator on top of the class makes it more explicit than setting static_attribute after a potentially long class definition. The applied decorator "belongs" to the class definition, so if you move the class around in your source code you are more likely to move the decorator along with it than a separate attribute assignment outside the class.

class __init__ (not instance __init__)

Here's a very simple example of what I'm trying to get around:
class Test(object):
    some_dict = {Test: True}
The problem is that I cannot refer to Test while it's still being defined.
Normally, I'd just do this:
class Test(object):
    some_dict = {}

    def __init__(self):
        if self.__class__.some_dict == {}:
            self.__class__.some_dict = {Test: True}
But I never create an instance of this class. It's really just a container to hold a group of related functions and data (I have several of these classes, and I pass around references to them, so it is necessary for Test to be its own class).
So my question is: how could I refer to Test while it's being defined, or is there something similar to __init__ that gets called as soon as the class is defined? If possible, I want self.some_dict = {Test: True} to remain inside the class definition. This is the only way I know how to do this so far:
class Test(object):
    @classmethod
    def class_init(cls):
        cls.some_dict = {Test: True}

Test.class_init()
The class does in fact not exist while it is being defined. The way the class statement works is that the body of the statement is executed, as a block of code, in a separate namespace. At the end of the execution, that namespace is passed to the metaclass (such as type) and the metaclass creates the class using the namespace as the attributespace.
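A rough illustration of that mechanism (simplified; real class creation also goes through the metaclass's __prepare__ and __call__ machinery):
# Roughly what happens for:
#     class Test:
#         x = 1
namespace = {}
exec("x = 1", globals(), namespace)        # the class body runs in its own namespace
Test = type("Test", (object,), namespace)  # the metaclass (type here) builds the class from it
print(Test.x)                              # 1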
From your description, it does not sound necessary for Test to be a class. It sounds like it should be a module instead. some_dict is a global -- even if it's a class attribute, there's only one such attribute in your program, so it's not any better than having a global -- and any classmethods you have in the class can just be functions.
If you really want it to be a class, you have three options: set the dict after defining the class:
class Test:
    some_dict = {}

Test.some_dict[Test] = True
Use a class decorator (in Python 2.6 or later):
def set_some_dict(cls):
    cls.some_dict[cls] = True
    return cls

@set_some_dict
class Test:
    some_dict = {}
Or use a metaclass:
class SomeDictSetterType(type):
    def __init__(self, name, bases, attrs):
        self.some_dict[self] = True
        super(SomeDictSetterType, self).__init__(name, bases, attrs)

class Test(object):
    __metaclass__ = SomeDictSetterType
    some_dict = {}
You could add the some_dict attribute after the main class definition.
class Test(object):
    pass

Test.some_dict = {Test: True}
I've tried to use classes in this way in the past, and it gets ugly pretty quickly (for example, all the methods will need to be class methods or static methods, and you will probably realise eventually that you want to define certain special methods, for which you will have to start using metaclasses). It could make things a lot easier if you just use class instances instead - there aren't really any downsides.
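For what it's worth, a tiny sketch of that instance-based alternative (my own illustration; the dict ends up keyed by the instance rather than the class):
class Test:
    def __init__(self):
        # 'self' is available here, unlike 'Test' inside the class body
        self.some_dict = {self: True}

    def do_stuff(self):
        return len(self.some_dict)

test = Test()  # pass 'test' around instead of the Test class itself
print(test.some_dict)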
A (weird-looking) alternative to what others have suggested: you could use __new__:
class Test(object):
    def __new__(cls):
        cls.some_dict = {cls: True}

Test()
You could even have __new__ return a reference to the class and use a decorator to call it:
def instantiate(cls):
    return cls()

@instantiate
class Test(object):
    def __new__(cls):
        cls.some_dict = {cls: True}
        return cls
You can also use a metaclass (a function here but there are other ways):
def Meta(name, bases, ns):
    klass = type(name, bases, ns)
    setattr(klass, 'some_dict', {klass: True})
    return klass

class Test(object):
    __metaclass__ = Meta

print Test.some_dict
Thomas's first example is very good, but here's a more Pythonic way of doing the same thing.
class Test:
    x = {}

    @classmethod
    def init(cls):
        # do whatever setup you need here
        cls.x[cls] = True

Test.init()
