My Situation
I'm currently working on a project in Python which I want to use to learn a bit more about software architecture. I've read a few texts and watched a couple of talks about dependency injection, and I've learned to love how clearly constructor injection shows the dependencies of an object.
However, I'm struggling with how to get a dependency passed to an object. I decided NOT to use a DI framework since:
I don't have enough knowledge of DI to specify my requirements and thus cannot choose a framework.
I want to keep the code free of more "magical" stuff, since I have the feeling that introducing a seldom-used framework drastically decreases readability (more code to read, of which only a small part is used).
Thus, I'm using custom factory functions to create objects and explicitly pass their dependencies:
# Business and Data Objects
class Foo:
    def __init__(self, bar):
        self.bar = bar

    def do_stuff(self):
        print(self.bar)

class Bar:
    def __init__(self, prefix):
        self.prefix = prefix

    def __str__(self):
        return str(self.prefix) + "Hello"

# Wiring up dependencies
def create_bar():
    return Bar("Bar says: ")

def create_foo():
    return Foo(create_bar())

# Starting the application
f = create_foo()
f.do_stuff()
Alternatively, if Foo has to create a number of Bars itself, it gets the creator function passed through its constructor:
# Business and Data Objects
class Foo:
    def __init__(self, create_bar):
        self.create_bar = create_bar

    def do_stuff(self, times):
        for _ in range(times):
            bar = self.create_bar()
            print(bar)

class Bar:
    def __init__(self, greeting):
        self.greeting = greeting

    def __str__(self):
        return self.greeting

# Wiring up dependencies
def create_bar():
    return Bar("Hello World")

def create_foo():
    return Foo(create_bar)

# Starting the application
f = create_foo()
f.do_stuff(3)
While I'd love to hear improvement suggestions on the code, this is not really the point of this post. However, I feel that this introduction is required to understand
My Question
While the above looks rather clear, readable and understandable to me, I run into a problem when the prefix dependency of Bar is required to be identical in the context of each Foo object and thus is coupled to the Foo object lifetime. As an example consider a prefix which implements a counter (See code examples below for implementation details).
I have two ideas for how to realize this; however, neither of them seems perfect to me:
1) Pass Prefix through Foo
The first idea is to add a constructor parameter to Foo and make it store the prefix in each Foo instance.
The obvious drawback is that it mixes up the responsibilities of Foo: it controls the business logic AND provides one of the dependencies to Bar. Once Bar does not require the dependency any more, Foo has to be modified. That seems like a no-go to me. Since I don't really think this should be a solution, I did not post the full code here, but provided it on pastebin for the very interested reader ;)
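Roughly, the idea would look like this sketch (untested, just to illustrate the drawback):

# Sketch of option 1: Foo stores the prefix and feeds it to every Bar it creates.
class Foo:
    def __init__(self, create_bar, prefix):
        self.create_bar = create_bar
        self.prefix = prefix  # Foo now carries Bar's dependency

    def do_stuff(self, times):
        for _ in range(times):
            bar = self.create_bar(self.prefix)
            print(bar)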
2) Use Functions with State
Instead of placing the Prefix object inside Foo, this approach tries to encapsulate it inside the create_foo function. By creating one Prefix for each Foo object and referencing it in a nameless function using lambda, I keep the details (a.k.a. there-is-a-prefix-object) away from Foo and inside my wiring logic. Of course a named function would work, too (but lambda is shorter).
# Business and Data Objects
class Foo:
    def __init__(self, create_bar):
        self.create_bar = create_bar

    def do_stuff(self, times):
        for _ in range(times):
            bar = self.create_bar()
            print(bar)

class Bar:
    def __init__(self, prefix):
        self.prefix = prefix

    def __str__(self):
        return str(self.prefix) + "Hello"

class Prefix:
    def __init__(self, name):
        self.name = name
        self.count = 0

    def __str__(self):
        self.count += 1
        return self.name + " " + str(self.count) + ": "

# Wiring up dependencies
def create_bar(prefix):
    return Bar(prefix)

def create_prefix(name):
    return Prefix(name)

def create_foo(name):
    prefix = create_prefix(name)
    return Foo(lambda: create_bar(prefix))

# Starting the application
f1 = create_foo("foo1")
f2 = create_foo("foo2")
f1.do_stuff(3)
f2.do_stuff(2)
f1.do_stuff(2)
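For reference, running this prints:

foo1 1: Hello
foo1 2: Hello
foo1 3: Hello
foo2 1: Hello
foo2 2: Hello
foo1 4: Hello
foo1 5: Hello

so each Foo keeps its own counter, as intended.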
This approach seems much more useful to me. However, I'm not sure about common practices and thus fear that having state inside functions is not really recommended. Coming from a Java/C++ background, I'd expect a function to depend on its parameters, its class members (if it's a method), or some global state. Thus, a parameterless function that does not use global state would have to return exactly the same value every time it is called. This is not the case here: once the returned object is modified (which means that the counter in prefix has been increased), the function returns an object which has a different state than it had when being returned the first time.
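To make that concrete, here is a minimal stateful function of the same kind (purely illustrative):

def make_counter():
    count = [0]  # mutable cell captured by the closure

    def counter():
        count[0] += 1
        return count[0]

    return counter

tick = make_counter()
print(tick())  # 1
print(tick())  # 2 - same parameterless call, different result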
Is this assumption just caused by my limited experience in Python, and do I have to change my mindset, i.e. think not of functions but of something callable? Or is supplying functions with state an unintended misuse of lambda?
3) Using a Callable Class
To overcome my doubts about stateful functions, I could use callable classes, where the create_foo function of approach 2 would be replaced by this:
class BarCreator:
    def __init__(self, prefix):
        self.prefix = prefix

    def __call__(self):
        return create_bar(self.prefix)

def create_foo(name):
    return Foo(BarCreator(create_prefix(name)))
While this seems a usable solution for me, it is sooo much more verbose.
Summary
I'm not absolutely sure how to handle the situation. Although I prefer number 2, I still have my doubts. Furthermore, I still hope that someone comes up with a more elegant way.
Please comment if there is anything you think is too vague or could be misunderstood. I will improve the question as far as my abilities allow me to :)
All examples should run under Python 2.7 and Python 3 - if you experience any problems, please report them in the comments and I'll try to fix my code.
If you want to inject a callable object but don't want it to have a complex setup -- if, as in your example, it's really just binding to a single input value -- you could try using functools.partial to pair a function with a value:
import functools

def factory_function(arg):
    # processing here
    return configured_object_based_on_arg  # placeholder: whatever object you build from arg

class Consumer(object):
    def __init__(self, injection):
        self._injected = injection

    def use_injected_value(self):
        print(self._injected())

injectable = functools.partial(factory_function, 'this is the configuration argument')
example = Consumer(injectable)
example.use_injected_value()  # prints the result of your factory function and argument
As an aside, if you're creating a dependency injection setup like your option 3, you probably want to put the knowledge about how to do the configuration into a factory class rather than doing it inline as you're doing here. That way you can swap out factories if you want to choose between strategies. It's not functionally very different (unless the creation is more complex than this example and involves persistent state), but it's more flexible down the road if the code looks like:
factory = FooBarFactory()
bar1 = factory.create_bar()
alt_factory = FooBlahFactory(extra_info)
bar2 = alt_factory.create_bar()
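(FooBarFactory and FooBlahFactory are hypothetical names; a minimal sketch of the first, reusing the Foo/Bar classes from the question, might be:)

class FooBarFactory:
    # hypothetical factory: the configuration knowledge lives here
    def create_bar(self):
        return Bar("Bar says: ")

    def create_foo(self):
        return Foo(self.create_bar)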
Related
I'm trying to add flexibility to a Python class, so that it notices when one of the init arguments is already an instance of that class. Skip "Initial situation" if you don't care how I got here.
Initial situation
I have this class:
class Pet:
    def __init__(self, animal):
        self._animal = animal

    @property
    def present(self):
        return "This pet is a " + self._animal
    ...
and there are many functions which accept an instance of this class as an argument (def f(pet, ...)). Everything worked as expected.
I then wanted to add some flexibility to the usage of these functions: if the caller passes a Pet instance, everything keeps on working as before. In all other cases, a Pet instance is created. One way to achieve that, is like this:
def f(pet_or_animal, ...):
    if isinstance(pet_or_animal, Pet):  # Pet instance was passed
        pet = pet_or_animal
    else:  # animal string was passed
        pet = Pet(pet_or_animal)
    ...
This also works as expected, but these lines are repeated in every function. Not DRY, not good.
Goal
So, I'd like to extract the if/else from each of the functions, and integrate it into the Pet class itself. I tried changing its __init__ method to
class PetA:  # I've changed the name to facilitate discussion here.
    def __init__(self, pet_or_animal):
        if isinstance(pet_or_animal, PetA):
            self = pet_or_animal
        else:
            self._animal = pet_or_animal
    ...
and start each function with
def f(pet_or_animal, ...):
    pet = PetA(pet_or_animal)
    ...
However, that is not working. If a Pet instance is passed, everything is good, but if a string is passed, a Pet instance is not correctly created.
Current (ugly) solution
What does work is to add a class method to the class, like so:
class PetB:  # I've changed the name to facilitate discussion here.
    @classmethod
    def init(cls, pet_or_animal):
        if isinstance(pet_or_animal, PetB):
            return pet_or_animal
        else:
            return cls(pet_or_animal)

    def __init__(self, animal):
        self._animal = animal
    ...
and also change the functions to
def f(pet_or_animal, ...):
    pet = PetB.init(pet_or_animal)  # ugly
    ...
Questions
Does anyone know how to change class PetA so that it has the intended behavior? To be sure, here is the quick test:
pb1 = PetB.init('dog')
pb2 = PetB.init(pb1)  # correctly initialized; points to same instance as pb1 (as desired)

pa1 = PetA('cat')
pa2 = PetA(pa1)  # incorrectly initialized; pa1 != pa2
More generally, is this the right way to go about adding this flexibility? Another option I considered was writing a separate function to just do the checking, but this too is rather ugly and yet another thing to keep track of. I'd rather keep everything neat and wrapped in the class itself.
And one final remark: I realize that some people might find the added class method (PetB) a more elegant solution. The reason I prefer to add to the __init__ method (PetA) is that, in my real-world use, I already allow for many different types of initialization arguments. So, there is already a list of if/elif/elif/... statements that check just which of the possibilities is used by the creator. I'd like to extend that by one more case, namely, if an initialized instance is passed.
Many thanks
I believe your current "ugly" solution is actually the correct approach.
This pushes the messy flexibility up as far as possible. Even though Python allows arbitrary types and values to float around, your users and yourself will thank you for keeping that constrained to the outermost levels.
I would think of it as follows (you don't need to implement it this way):
class Pet:
    @classmethod
    def from_animal(cls, ...):
        ...

    @classmethod
    def from_pet(cls, ...):
        ...

    @classmethod
    def auto(cls, ...):
        if is_pet(...):
            return cls.from_pet(...)
        return cls.from_animal(...)

    def __init__(self, internal_rep):
        ...

etc.
It is a code smell if you don't know whether your function is taking an object or an initializer. See if you can do processing as up-front as possible with user input and standardize everything beyond there.
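For concreteness, here is one way that sketch could be filled in (the bodies are my own assumptions, not the only possible ones):

class Pet:
    def __init__(self, animal):
        self._animal = animal  # the internal representation

    @classmethod
    def from_animal(cls, animal):
        return cls(animal)

    @classmethod
    def from_pet(cls, pet):
        return pet  # reuse the existing instance

    @classmethod
    def auto(cls, pet_or_animal):
        if isinstance(pet_or_animal, cls):
            return cls.from_pet(pet_or_animal)
        return cls.from_animal(pet_or_animal)

p1 = Pet.auto('dog')
p2 = Pet.auto(p1)
assert p1 is p2  # the same instance is reused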
You could use a function instead to get the same behaviour you want:
def make_pet_if_required(pet_or_animal):
    if isinstance(pet_or_animal, Pet):
        return pet_or_animal
    else:
        return Pet(pet_or_animal)
And then:
def f(pet_or_animal, ...):
    pet = make_pet_if_required(pet_or_animal)
    ...
For more "beauty" you can try turning that function call into a decorator.
Learning Python, I just encountered something I do not really understand. Let us take this example:
class CV_Test:
    classVar = 'First'

cv = CV_Test()
print(cv.classVar)
CV_Test.classVar = 'Second'
cv2 = CV_Test()
print(cv2.classVar)
print(CV_Test.classVar)
Output:
First
Second
Second
Can anyone tell me why this is possible, and what it is good for? Isn't this contradictory to defining a class as a blueprint, if I can change possibly crucial values within a class from the outside? And isn't this a conflict with the OOP paradigm of encapsulation? Coming from .NET, I actually only know accessing variables via a getter and setter, not just like this. So I am curious what important purpose there can be for this to be allowed.
Why is it possible? Python does not follow a restrictive programming paradigm, meaning that if something can make sense in some scenario, the interpreter should not stand in the way of the programmer willing to do that.
That being said, this approach requires a higher level of discipline and responsibility on the programmer's side, but also allows for a greater degree of flexibility in its meta-programming capabilities.
So, in the end this is a design choice. The advantage is that you do not have to explicitly use getters/setters.
For protected/private members/methods it is customary to prepend a _ or __, respectively. Additionally, one can fake getter/setter-protected behavior (which also allows the execution of additional code) via the decorators @property and @<name>.setter, e.g.:
class MyClass():
    _an_attribute = False

    @property
    def an_attribute(self):
        return self._an_attribute

    @an_attribute.setter
    def an_attribute(self, value):
        self._an_attribute = value
This can be used like this:
x = MyClass()
x.an_attribute
# False
x.an_attribute = 1
# sets the internal `_an_attribute` to 1.
x.an_attribute
# 1
and you can leave out the @an_attribute.setter part if you want a read-only (sort of) property, so that the following code works:
x = MyClass()
x.an_attribute
# False
but, attempting to change its value would result in:
x.an_attribute = 1
# AttributeError: can't set attribute
Of course you can still do:
x._an_attribute = 2
x.an_attribute
# 2
(EDIT: added some more code to better show the usage)
EDIT: On monkey patching
Additionally, in your code, you are also modifying the class after its definition, and the changes have retrospective (sort of) effects.
This is typically called monkey patching and can again be useful in some scenarios where you want to trigger a certain behavior in some portion of code while keeping most of its logic, e.g.:
class Number():
    value = '0'

    def numerify(self):
        return float(self.value)

x = Number()
x.numerify()
# 0.0
Number.numerify = lambda self: int(self.value)
x.numerify()
# 0
But this is certainly not an encouraged programming style if cleaner options are available.
Suppose that I have a function in my Python application that defines some kind of context - a user_id, for example. This function calls other functions that do not take this context as a function argument. For example:
def f1(user, operation):
    user_id = user.id
    # somehow define user_id as a global/context variable for any function call inside this scope
    f2(operation)

def f2(operation):
    # do something, not important, and then call another function
    f3(operation)

def f3(operation):
    # get user_id if there is a variable user_id in the context, get `None` otherwise
    user_id = getcontext("user_id")
    # do something with user_id and operation
My questions are:
Can the Context Variables of Python 3.7 be used for this? How?
Is this what these Context Variables are intended for?
How to do this with Python v3.6 or earlier?
EDIT
For multiple reasons (architectural legacy, libraries, etc.) I can't/won't change the signature of intermediary functions like f2, so I can't just pass user_id as an argument, nor place all those functions inside the same class.
You can use contextvars in Python 3.7 for what you're asking about. It's usually really easy:
import contextvars

user_id = contextvars.ContextVar("user_id")

def f1(user, operation):
    user_id.set(user.id)
    f2()

def f2():
    f3()

def f3():
    print(user_id.get(None))  # gets the user_id value, or None if no value is set
The set method on the ContextVar returns a Token instance, which you can use to reset the variable to the value it had before the set operation took place. So if you wanted f1 to restore things the way they were (not really useful for a user_id context variable, but more relevant for something like setting the precision in the decimal module), you can do:
token = some_context_var.set(value)
try:
    do_stuff()  # can use value from some_context_var with some_context_var.get()
finally:
    some_context_var.reset(token)
There's more to the contextvars module than this, but you almost certainly don't need to deal with the other stuff. You probably only need to be creating your own contexts and running code in other contexts if you're writing your own asynchronous framework from scratch.
If you're just using an existing framework (or writing a library that you want to play nice with asynchronous code), you don't need to deal with that stuff. Just create a global ContextVar (or look up one already defined by your framework) and get and set values on it as shown above, and you should be good to go.
A lot of contextvars use is probably going to be in the background, as an implementation detail of various libraries that want to have a "global" state that doesn't leak changes between threads or between separate asynchronous tasks within a single thread. The example above might make more sense in this kind of situation: f1 and f3 are part of the same library, and f2 is a user-supplied callback passed into the library somewhere else.
Essentially what you're looking for is a way to share state between a set of functions. The canonical way to do so in an object-oriented language is to use a class:
class Foo(object):
    def __init__(self, operation, user=None):
        self._operation = operation
        self._user_id = user.id if user else None

    def f1(self):
        print("in f1 : {}".format(self._user_id))
        self.f2()

    def f2(self):
        print("in f2 : {}".format(self._user_id))
        self.f3()

    def f3(self):
        print("in f3 : {}".format(self._user_id))

f = Foo(operation, user)
f.f1()
With this solution, your class instances (here f) are "the context" in which the functions are executed - each instance having its own dedicated context.
The functional programming equivalent would be to use closures. Since Python, while supporting closures, is still first and mainly an object language, the OO solution is the most obvious, but a rough closure sketch is shown below for illustration.
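(A minimal closure-based sketch, mirroring the class above; illustrative only:)

def make_context(operation, user=None):
    user_id = user.id if user else None

    def f3():
        print("in f3 : {}".format(user_id))

    def f2():
        f3()

    def f1():
        print("in f1 : {}".format(user_id))
        f2()

    return f1

f1 = make_context(operation, user)
f1()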
And finally, the clean procedural solution is to pass this context (which can be expressed as a dict or any similar datatype) all along the call chain, as shown in DFE's answer.
As a general rule: relying on global variables or some "magic" context that may - or may not - be set by you-don't-know-who-nor-where-nor-when makes for code that is hard, if not impossible, to reason about, and that can break in the most unpredictable ways (googling for "globals evil" will yield an awful lot of literature on the topic).
You can use kwargs in your function calls in order to pass the context down the call chain:
def f1(user, operation):
    user_id = user.id
    # user_id is forwarded explicitly as a keyword argument
    f2(operation, user_id=user_id)

def f2(operation, **kwargs):
    # do something, not important, and then call another function
    f3(operation, **kwargs)

def f3(operation, **kwargs):
    # get user_id if present in the context, `None` otherwise
    user_id = kwargs.get("user_id")
    # do something with user_id and operation
The kwargs dict is the equivalent of what you are looking at with context variables, but limited to the call stack. The same dict object is passed (by reference) to each function, so the variables are not duplicated in memory.
In my opinion (but I would like to see what you all think), context variables are an elegant way to permit global-like variables while keeping them under control.
I have a function foo that takes a parameter stuff.
Stuff can be something in a database, and I'd like to create a function that takes a stuff_id, gets the stuff from the db, and executes foo.
Here's my attempt to solve it:
1/ Create a second function with suffix from_stuff_id
def foo(stuff):
    ...  # do something

def foo_from_stuff_id(stuff_id):
    stuff = get_stuff(stuff_id)
    foo(stuff)
2/ Modify the first function
def foo(stuff=None, stuff_id=None):
    if stuff_id:
        stuff = get_stuff(stuff_id)
    ...  # do something
I don't like either way.
What's the most Pythonic way to do it?
Assuming foo is the main component of your application, go with your first way. Each function should have a different purpose. The moment you combine multiple purposes into a single function, you can easily get lost in long streams of code.
If, however, some other function can also provide stuff, then go with the second.
The only thing I would add is make sure you add docstrings (PEP-257) to each function to explain in words the role of the function. If necessary, you can also add comments to your code.
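For example, a one-line docstring for the first variant might read:

def foo_from_stuff_id(stuff_id):
    """Fetch the stuff with the given id from the db and run foo on it."""
    stuff = get_stuff(stuff_id)
    foo(stuff)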
I'm not a big fan of type overloading in Python, but this is one of the cases where I might go for it if there's really a need:
def foo(stuff):
    if isinstance(stuff, int):
        stuff = get_stuff(stuff)
    ...
With type annotations it would look like this:
from typing import Union

def foo(stuff: Union[int, Stuff]):
    if isinstance(stuff, int):
        stuff = get_stuff(stuff)
    ...
It basically depends on how you've defined all these functions. If you're importing get_stuff from another module, the second approach is more Pythonic, because from an OOP perspective you create functions for one particular purpose, and in this case, when you've already defined get_stuff, you don't need to call it within another function.
If get_stuff is not defined in another module, then it depends on whether you are using classes or not. If you're using a class and you want to use all these pieces together, you can use a method for either accessing or connecting to the database, and use that method within other methods like foo.
Example:
from some_module import get_stuff

class MyClass:
    def __init__(self, *args, **kwargs):
        # ...
        self.stuff_id = kwargs['stuff_id']

    def foo(self):
        stuff = get_stuff(self.stuff_id)
        # do stuff
Or, if the functionality of foo depends on the existence of stuff, you can keep stuff on the instance and simply check whether it is valid:
class MyClass:
    def __init__(self, *args, **kwargs):
        # ...
        _stuff_id = kwargs['stuff_id']
        self.stuff = get_stuff(_stuff_id)  # can return None

    def foo(self):
        if self.stuff:
            ...  # do stuff
        else:
            ...  # do other stuff
Or another neat design pattern for such situations might be using a dispatcher function (or method in class) that delegates the execution to different functions based on the state of stuff.
def delegator(stuff, stuff_id):
    if stuff:  # or other condition
        foo(stuff)
    else:
        foo(get_stuff(stuff_id))  # fetch the stuff first, then run foo on it
I'm coming from the C# world, so my views may be a little skewed. I'm looking to do DI in Python, however I'm noticing a trend with libraries where they all appear to rely on a service locator. That is, you must tie your object creation to the framework, such as injectlib.build(MyClass) in order to get an instance of MyClass.
Here is an example of what I mean -
from injector import Injector, inject

class Inner(object):
    def __init__(self):
        self.foo = 'foo'

class Outer(object):
    @inject(inner=Inner)
    def __init__(self, inner=None):
        if inner is None:
            print('inner not provided')
            self.inner = Inner()
        else:
            print('inner provided')
            self.inner = inner

injector = Injector()

outer = Outer()
print(outer.inner.foo)

outer = injector.get(Outer)
print(outer.inner.foo)
Is there a way in Python to create a class while automatically inferring dependency types based on parameter names? So if I have a constructor parameter called my_class, then an instance of MyClass would be injected. The reason I ask is that I don't see how I could inject a dependency into a class that gets created automatically via a third-party library.
To answer the question you explicitly asked: no, there's no built-in way in Python to automatically get a MyClass object from a parameter named my_class.
That said, neither "tying your object creation to the framework" nor the example code you gave seems terribly Pythonic, and this question in general is kind of confusing because DI in dynamic languages isn't really a big deal.
For general thoughts about DI in Python I'd say this presentation gives a pretty good overview of different approaches. For your specific question, I'll give two options based on what you might be trying to do.
If you're trying to add DI to your own classes, I would use parameters with default values in the constructor, as that presentation shows. E.g.:
import time

class Example(object):
    def __init__(self, sleep_func=time.sleep):
        self.sleep_func = sleep_func

    def foo(self):
        self.sleep_func(10)
        print('Done!')
And then you could just pass in a dummy sleep function for testing or whatever.
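For instance, a minimal sketch of such a test:

# inject a no-op sleep so foo() returns immediately in tests
example = Example(sleep_func=lambda seconds: None)
example.foo()  # prints 'Done!' without actually sleeping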
If you're trying to manipulate a library's classes through DI (not something I can really imagine a use case for, but it seems like what you're asking), then I would probably just monkey patch those classes to change whatever needed changing. E.g.:
import test_module  # hypothetical module containing an Example class that uses time.sleep

def dummy_sleep(*args, **kwargs):
    pass

test_module.time.sleep = dummy_sleep  # works because test_module did `import time`
e = test_module.Example()
e.foo()