I am learning Python.
I'm wondering if there is a mechanism to "inject" an object (a fake object, in my case) into the class under test without explicitly adding it through the constructor or a setter.
## source file
class MyBusinessClass:
    def __init__(self):
        self.__engine = RepperEngine()

    def doSomething(self):
        ## bla bla ...
        success
## test file
## fake I'd like to inject
class MyBusinessClassFake:
    def __init__(self):
        pass

    def myPrint(self):
        print("Hello from Mock !!!!")

class Test(unittest.TestCase):
    ## is there an automatic mechanism to inject MyBusinessClassFake
    ## into MyBusinessClass without constructor/setter?
    def test_XXXXX_whenYYYYYY(self):
        unit = MyBusinessClass()
        unit.doSomething()
        self.assertTrue(.....)
In my test I'd like to "inject" the "engine" object without passing it in the constructor. I've tried a few options (e.g. @patch ...) without success.
IoC is not needed in Python. Here's a Pythonic approach.
class MyBusinessClass(object):
    def __init__(self, engine=None):
        # Note: _engine doesn't exist until the constructor has run.
        self._engine = engine or RepperEngine()

    def doSomething(self):
        ## bla bla ...
        success
class Test(unittest.TestCase):
    def test_XXXXX_whenYYYYYY(self):
        mock_engine = mock.create_autospec(RepperEngine)
        unit = MyBusinessClass(mock_engine)
        unit.doSomething()
        self.assertTrue(.....)
You could also stub out the class to bypass the constructor:
class MyBusinessClassFake(MyBusinessClass):
    def __init__(self):  # we bypass super's __init__ here
        self._engine = None
Then in your setup:

def setUp(self):
    self.helper = MyBusinessClassFake()
Now in your test you can use a context manager.
def test_XXXXX_whenYYYYYY(self):
    # Plain patch.object here: autospec=True would build the spec from the
    # attribute's current value, which is None at this point.
    with mock.patch.object(self.helper, '_engine') as mock_eng:
        ...
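Filled in, such a test could look like the sketch below. The `Engine` class and the `doSomething` body are hypothetical stand-ins, since the real `RepperEngine` isn't shown:

```python
import unittest
from unittest import mock

class Engine:                        # hypothetical stand-in for RepperEngine
    def run(self):
        return "real"

class MyBusinessClass:
    def __init__(self):
        self._engine = Engine()

    def doSomething(self):
        return self._engine.run()

class MyBusinessClassFake(MyBusinessClass):
    def __init__(self):              # bypass super().__init__ entirely
        self._engine = None

class Test(unittest.TestCase):
    def setUp(self):
        self.helper = MyBusinessClassFake()

    def test_doSomething_uses_engine(self):
        # The attribute is replaced only for the duration of the with-block.
        with mock.patch.object(self.helper, '_engine') as mock_eng:
            mock_eng.run.return_value = "stubbed"
            self.assertEqual(self.helper.doSomething(), "stubbed")
            mock_eng.run.assert_called_once_with()
```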
If you want to inject it without using the constructor, then you can add it as a class attribute.
class MyBusinessClass:
    _engine = None

    def __init__(self):
        self._engine = RepperEngine()
Now stub to bypass __init__:
class MyBusinessClassFake(MyBusinessClass):
    def __init__(self):
        pass
Now you can simply assign the value:
unit = MyBusinessClassFake()
unit._engine = mock.create_autospec(RepperEngine)
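Put together, the class-attribute approach runs like this sketch; `RepperEngine` here is a hypothetical stand-in with one `ping` method, since the real class isn't shown:

```python
from unittest import mock

class RepperEngine:              # hypothetical stand-in for the real engine
    def ping(self):
        return "pong"

class MyBusinessClass:
    _engine = None               # class attribute acts as a default slot

    def __init__(self):
        self._engine = RepperEngine()

    def doSomething(self):
        return self._engine.ping()

class MyBusinessClassFake(MyBusinessClass):
    def __init__(self):          # bypass __init__, so no real engine is built
        pass

unit = MyBusinessClassFake()
unit._engine = mock.create_autospec(RepperEngine, instance=True)
unit._engine.ping.return_value = "stubbed"
print(unit.doSomething())  # -> stubbed
```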
After years of using Python without any DI autowiring framework, and Java with Spring, I've come to realize that plain, simple Python code often doesn't need a framework for dependency injection without autowiring (autowiring is what Guice and Spring both do in Java). That is, just doing something like this is enough:
class Foo:
    def __init__(self, dep=None):  # great for unit testing!
        self.dep = dep or Dep()    # callers need not care about this either
        ...
This is pure dependency injection (quite simple), but without magical frameworks that automatically inject the dependencies for you (i.e., autowiring) and without Inversion of Control.
Unlike @Dan, I disagree that Python doesn't need IoC: Inversion of Control is a simple concept in which a framework takes control of something away from you, often to provide abstraction and remove boilerplate code. When you use template classes, this is IoC. Whether IoC is good or bad depends entirely on how the framework implements it.
That said, Dependency Injection is a simple concept that doesn't require IoC. Autowiring DI does.
As I dealt with bigger applications, the simplistic approach wasn't cutting it anymore: there was too much boilerplate code, and the key advantage of DI was missing: changing the implementation of something once and having the change reflected in all classes that depend on it. If many pieces of your application care about how a certain dependency is initialized, and you change that initialization or want to swap classes, you have to go piece by piece changing it. With a DI framework that is much easier.
So I came up with injectable, a micro-framework that wouldn't feel non-Pythonic and yet would provide first-class dependency injection autowiring.
Under the motto Dependency Injection for Humans™ this is what it looks like:
# some_service.py
class SomeService:
    @autowired
    def __init__(
        self,
        database: Autowired(Database),
        message_brokers: Autowired(List[Broker]),
    ):
        pending = database.retrieve_pending_messages()
        for broker in message_brokers:
            broker.send_pending(pending)

# database.py
@injectable
class Database:
    ...

# message_broker.py
class MessageBroker(ABC):
    def send_pending(self, messages):
        ...

# kafka_producer.py
@injectable
class KafkaProducer(MessageBroker):
    ...

# sqs_producer.py
@injectable
class SQSProducer(MessageBroker):
    ...
Be explicit, use a setter. Why not?
class MyBusinessClass:
def __init__(self):
self._engine = RepperEngine()
from unittest.mock import create_autospec

def test_something():
    mock_engine = create_autospec(RepperEngine, instance=True)
    object_under_test = MyBusinessClass()
    object_under_test._engine = mock_engine
Your question seems somewhat unclear, but there is nothing preventing you from using class inheritance to override the original methods. In that case the derived class would just look like this:
class MyBusinessClassFake(MyBusinessClass):
    def __init__(self):
        pass
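For instance, assuming a `doSomething` that delegates to an engine (both classes below are hypothetical stand-ins, since the originals aren't shown), the fake can override just the methods it cares about:

```python
class RepperEngine:              # hypothetical stand-in; real engine not shown
    def compute(self):
        return "expensive real work"

class MyBusinessClass:
    def __init__(self):
        self._engine = RepperEngine()

    def doSomething(self):
        return self._engine.compute()

class MyBusinessClassFake(MyBusinessClass):
    def __init__(self):          # skip building the real engine
        pass

    def doSomething(self):       # override with a canned result
        return "fake result"

print(MyBusinessClassFake().doSomething())  # -> fake result
```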
I have an abstract class that I am using with the template pattern and some children that implement specific methods.
class TemplateClass(ABC):
    @abstractmethod
    def special_process_1(self):
        pass

    def common_process(self):
        do_something()

    def common_filter(self):
        filter_something()

    def __call__(self):
        self.common_filter()
        self.special_process_1()
        self.common_process()

class classA(TemplateClass):
    def special_process_1(self):
        something_A_needs()

class classB(TemplateClass):
    def special_process_1(self):
        something_B_needs()
Now, I would like to test the __call__ method, but I am not sure of the best way. I think it would be best if I could test the template itself, so that I don't need to replicate the test for classA and classB. However, I am not sure how to do it.
I have tried to test the template as follows:
@fixture
def template_mock():
    with patch("TemplateClass.__abstractmethods__", set()):
        t = TemplateClass()
        t.special_process_1 = MagicMock(return_value=False)
        yield t
The problem with the above is that in the tests, mypy complains about template_mock.special_process_1 being a callable instead of a Mock, so it does not have any return_value attribute.
I would be open to other alternatives, or to hearing whether it makes sense at all to test this on the base class.
I have a class called resources, and I have defined one method called get_connect. I want to use the data that get_connect returns in other classes. I need at least three classes that use the data from get_connect, and I have to parse that data. To implement this I have written the code below:
class resources:
    @staticmethod
    def get_connect():
        return 1 + 2

class Source1(resources):
    def __init__(self):
        self.response = resources.get_connect()

    def get__details1(self):
        print(self.response)

class Source2(resources):
    def __init__(self):
        self.response = resources.get_connect()

    def get_details2(self):
        print(self.response)

class Source3(resources):
    def __init__(self):
        self.response = resources.get_connect()

    def get__detail3(self):
        print(self.response)
source1 = Source1()
source2 = Source2()
source3 = Source3()
source1.get__details1()
source2.get_details2()
source3.get__detail3()
But the problem with the code is that in every class's __init__ method I am calling the get_connect method. I don't want to repeat that code, so I am asking for help avoiding the redundancy below:
Is there any way I can call get_connect in one place and use it for the other classes, maybe with a decorator or anything? If yes, how can I?
While creating the objects I am also calling each class and each method every time. Is there a way to use a design pattern here?
If anyone can help me with these OOP concepts, it would be useful.
First of all, is there any reason why you are making get_connect a static method?
Because what you can do here is declare it in the parent class:
class resources:
    def __init__(self):
        self.response = self.get_connect()

    def get_connect(self):
        return 1 + 2
This way you do not need to define the __init__ method on every class, as it will be automatically inherited from the parent.
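As a quick illustration of the inherited __init__, and of self.get_connect() dispatching to a subclass override (the MockedSource class below is made up for the demo):

```python
class resources:
    def __init__(self):
        self.response = self.get_connect()

    def get_connect(self):
        return 1 + 2

# No __init__ needed here: it is inherited from resources, and
# self.get_connect() resolves to this override when instantiating.
class MockedSource(resources):
    def get_connect(self):
        return 42

print(resources().response)     # -> 3
print(MockedSource().response)  # -> 42
```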
Regarding the second question, it really depends on the context, but you can use the strategy pattern to retrieve the class you need to call. For this, rename the get-details methods to the same name in each class, as they're basically used for the same purpose but vary with each class's implementation:
class Source1(resources):
    def get_details(self):
        print(self.response)

class Source2(resources):
    def get_details(self):
        print(self.response)

class Source3(resources):
    def get_details(self):
        print(self.response)

classes = {
    "source_1": Source1,
    "source_2": Source2,
    "source_3": Source3,
}

source_class = classes["source_1"]
source = source_class()
source.get_details()
Hope this helped!
TL;DR: What's the difference between Dependency Injection and the Singleton Pattern if the injected object is a Singleton?
I'm getting mixed results from my searches on how to resolve the design problem I am currently facing.
I would like to have a configuration that is application-wide, so that different objects can alter it.
I thought to resolve this using a Singleton:
class ConfigMeta(type):
    _instance = None

    def __call__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = super().__call__(*args, **kwargs)
        return cls._instance

class Config(metaclass=ConfigMeta):
    def __init__(self) -> None:
        pass
But searching has shown this to be error-prone and considered bad practice (when managing class state). Just about every other post suggests using Dependency Injection, but they confuse me on how they do it. They all state "your implementation can be a Singleton, but inject it into other objects in their constructors".
That would be something along the lines of:
# foo.py
from config import Config

class Foo:
    def __init__(self):
        self.config = Config()

# bar.py
from config import Config

class Bar:
    def __init__(self):
        self.config = Config()
However, each one of those self.config refers to the same instance. Hence my confusion...
How is this considered Dependency Injection and not Singleton Pattern?
If it is considered Dependency Injection, what would it look like as just Singleton Pattern?
With Dependency Injection (DI) you leave it to the DI system to resolve how to get a specific object; you just declare what kind of object you require. This is complementary to the Singleton Pattern, where the whole application is served by a single instance of a specific type. So for example:
class Config:
    pass

config = Config()  # singleton

class Foo:
    def __init__(self):
        self.config = config  # Foo itself reaches for the global instance
Here the Foo class itself handles the logic of how to get a Config object. Imagine this object had dependencies of its own; then Foo would have to sort those out, too.
With Dependency Injection, on the other hand, there is a central unit to handle these sorts of things. The user class just declares what object it requires. For example:
class DI:
    config = Config()

    @classmethod
    def get_config_singleton(cls):
        return cls.config

    @classmethod
    def get_config(cls):
        return Config()

    @classmethod
    def inject(cls, func):
        from functools import partialmethod
        # The DI system chooses what to use here:
        return partialmethod(func, config=cls.get_config())

class Foo:
    @DI.inject  # it's up to the DI system to resolve the declared dependencies
    def __init__(self, config: Config):  # declare one dependency, `config`
        self.config = config
Dependency injection means providing the constructor with the already-initialized object, in this case the config.
The code in your question isn't using dependency injection, since the constructor __init__ doesn't receive the config as an argument, so you're only using the singleton pattern here.
See more info here about dependency injection in Python.
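For contrast, a constructor-injected version of the Config example might look like the sketch below (the debug field is made up for illustration):

```python
class Config:
    def __init__(self, debug=False):
        self.debug = debug

class Foo:
    # The dependency is handed in; Foo no longer decides how Config is built.
    def __init__(self, config):
        self.config = config

shared_config = Config(debug=True)   # a single, application-wide instance

foo = Foo(shared_config)  # inject the singleton...
bar = Foo(Config())       # ...or any other instance, e.g. a fresh one in tests

print(foo.config is shared_config)  # -> True
print(bar.config is shared_config)  # -> False
```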
The gist of the question: when inheriting from multiple classes, how can I guarantee that if one class is inherited, a complementary Abstract Base Class (ABC) is also used by the child object?
I've been messing around with Python's inheritance, trying to see what kind of cool stuff I can do, and I came up with this pattern, which is kind of interesting.
I've been trying to use this to make implementing and testing objects that interface with my cache easier. I've got three modules:
ICacheable.py
Cacheable.py
SomeClass.py
ICacheable.py
import abc

class ICacheable(abc.ABC):
    @property
    @abc.abstractmethod
    def CacheItemIns(self):
        return self.__CacheItemIns

    @CacheItemIns.setter
    @abc.abstractmethod
    def CacheItemIns(self, value):
        self.__CacheItemIns = value
        return

    @abc.abstractmethod
    def Load(self):
        """docstring"""
        return

    @abc.abstractmethod
    def _deserializeCacheItem(self):
        """docstring"""
        return

    @abc.abstractmethod
    def _deserializeNonCacheItem(self):
        """docstring"""
        return
Cacheable.py
class Cacheable:
    def _getFromCache(self, itemName, cacheType, cachePath=None):
        """docstring"""
        kwargs = {"itemName": itemName,
                  "cacheType": cacheType,
                  "cachePath": cachePath}
        lstSearchResult = CacheManager.SearchCache(**kwargs)
        if lstSearchResult[0]:
            self.CacheItemIns = lstSearchResult[1]
            self._deserializeCacheItem()
        else:
            cacheItem = CacheManager.NewItem(**kwargs)
            self.CacheItemIns = cacheItem
            self._deserializeNonCacheItem()
        return
SomeClass.py
from ICacheable import ICacheable
from Cacheable import Cacheable

class SomeClass(Cacheable, ICacheable):
    __valueFromCache1: str = ""
    __valueFromCache2: str = ""
    __CacheItemIns: dict = {}

    @property
    def CacheItemIns(self):
        return self.__CacheItemIns

    @CacheItemIns.setter
    def CacheItemIns(self, value):
        self.__CacheItemIns = value
        return

    def __init__(self, itemName, cacheType):
        self.__valueFromCache1
        self.__valueFromCache2
        # Call method from Cacheable
        self._getFromCache(itemName, cacheType)
        return

    def _deserializeCacheItem(self):
        """docstring"""
        self.__valueFromCache1 = self.CacheItemIns["val1"]
        self.__valueFromCache2 = self.CacheItemIns["val2"]
        return

    def _deserializeNonCacheItem(self):
        """docstring"""
        self.__valueFromCache1 = ...  # some external function
        self.__valueFromCache2 = ...  # some external function
        return
So this example works, but the scary thing is that there is no guarantee that a class inheriting Cacheable also inherits ICacheable. That seems like a design flaw, as Cacheable is useless on its own. However, the ability to abstract things away from my subclass/child class with this is powerful. Is there a way to guarantee Cacheable's dependency on ICacheable?
If you explicitly do not want inheritance, you can register classes as virtual subclasses of an ABC.
@ICacheable.register
class Cacheable:
...
That means every subclass of Cacheable is automatically treated as a subclass of ICacheable as well. This is mostly useful if you have an efficient implementation that would be slowed down by having non-functional Abstract Base Classes to traverse, e.g. for super calls.
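A minimal, self-contained sketch of the register approach, with a simplified ICacheable:

```python
import abc

class ICacheable(abc.ABC):   # simplified version of the interface
    @abc.abstractmethod
    def Load(self):
        ...

@ICacheable.register         # register as a virtual subclass, don't inherit
class Cacheable:
    def Load(self):
        return "loaded"

class SomeClass(Cacheable):  # subclasses of Cacheable count as well
    pass

print(issubclass(SomeClass, ICacheable))    # -> True
print(isinstance(SomeClass(), ICacheable))  # -> True
```

Note that virtual subclasses do not get the abstract-method enforcement shown below; Python will happily instantiate them even if Load is missing.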
However, ABCs are not just interfaces, and it is fine to inherit from them. In fact, part of the benefit of an ABC is that it forces subclasses to implement all abstract methods. An intermediate helper class, such as Cacheable, is fine not implementing all methods when it is never instantiated. However, any non-virtual subclass that is instantiated must be concrete.
>>> class FailClass(Cacheable, ICacheable):
...     ...
...
>>> FailClass()
TypeError: Can't instantiate abstract class FailClass with abstract methods CacheItemIns, Load, _deserializeCacheItem, _deserializeNonCacheItem
Note that if you
- always subclass as class AnyClass(Cacheable, ICacheable):, and
- never instantiate Cacheable,
then that is functionally equivalent to Cacheable inheriting from ICacheable. The Method Resolution Order (i.e., the inheritance diamond) is the same.
>>> AnyClass.__mro__
(__main__.AnyClass, __main__.Cacheable, __main__.ICacheable, abc.ABC, object)
I have a question regarding unittest with Python. Let's say I have a Docker container set up that handles a specific API endpoint (let's say users, e.g. my_site/users/etc/etc/etc). There are quite a few layers that are broken up and handled for this container: classes that handle the actual call and response, a logic layer, and a data layer. I want to write tests around the specific calls (just checking for status codes).
There are a lot of different classes that act as handlers for the given endpoints. There are a few things I would have to set up differently for each one; however, each one inherits from Application and uses some of its methods. I want to write a setUp class for my unittest so I don't have to re-establish this each time. Any advice will help. So far I've mainly seen that inheritance is a bad idea with testing; however, I only want to use it for setUp. Here's an example:
class SetUpClass(unittest.TestCase):
    def setUp(self):
        self._some_data = data_set.FirstOne()
        self._another_data_set = data_set.SecondOne()

    def get_app(self):
        config = Config()
        return Application(config,
                           first_one=self._some_data,
                           second_one=self._another_data_set)

class TestFirstHandler(SetUpClass, unittest.TestCase):
    def setUp(self):
        new_var = something

    def tearDown(self):
        pass

    def test_this_handler(self):
        # This specific handler needs the application to function
        # but I don't want to define it in this test class
        res = self.fetch('some_url/users')
        self.assertEqual(res.code, 200)

class TestSecondHandler(SetUpClass, unittest.TestCase):
    def setUp(self):
        different_var_thats_specific_to_this_handler = something_else

    def tearDown(self):
        pass

    def test_this_handler(self):
        # This specific handler needs the application to function
        # but I don't want to define it in this test class
        res = self.fetch('some_url/users/account/?something_custom={}'.format('WOW'))
        self.assertEqual(res.code, 200)
Thanks again!!
As mentioned in the comments, you just need to learn how to use super(). You also don't need to repeat TestCase in the list of base classes.
Here's the simple version for Python 3:
class TestFirstHandler(SetUpClass):
    def setUp(self):
        super().setUp()
        new_var = something

    def tearDown(self):  # Easier not to declare this if it's empty.
        super().tearDown()

    def test_this_handler(self):
        # This specific handler needs the application to function
        # but I don't want to define it in this test class
        res = self.fetch('some_url/users')
        self.assertEqual(res.code, 200)