I'd like to write a function that also acts as a context manager when called in a with statement. Example usage would be:
# Use as function
set_active_language("en")

# Use as context manager
with set_active_language("en"):
    ...
This is very similar to how the standard function open is used.
Here's the solution I came up with:
active_language = None  # global variable to store the active language

class set_active_language(object):
    def __init__(self, language):
        global active_language
        self.previous_language = active_language
        active_language = language

    def __enter__(self):
        pass

    def __exit__(self, *args):
        global active_language
        active_language = self.previous_language
This code is not thread-safe, but this is not related to the problem.
What I don't like about this solution is that the class constructor pretends to be a simple function and is called only for its side effects.
Is there a better way to do this?
Note that I haven't tested this solution.
Update: the reason why I don't want to split the function and the context manager into separate entities is naming. The function and the context manager do basically the same thing, so it seems reasonable to use one name for both. Naming the context manager would be problematic if I wanted to keep it separate. What should it be? active_language? This name may (and will) collide with the variable name. override_active_language might work, though.
Technically no, you cannot do this. But you can fake it well enough that people (who didn't overthink it) wouldn't notice.
def set_active_language(language):
    global active_language
    previous_language = active_language
    active_language = language

    class ActiveScope(object):
        def __enter__(self):
            pass

        def __exit__(self, *args):
            global active_language
            active_language = previous_language

    return ActiveScope()
When used as a plain function, the returned ActiveScope instance is just a slightly wasteful no-op.
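For illustration, assuming everything lives in one module (including the active_language global from the question), both call styles then behave as hoped:

active_language = None

set_active_language("en")        # used as a plain function
print(active_language)           # -> en

with set_active_language("fr"):  # used as a context manager
    print(active_language)       # -> fr
print(active_language)           # -> en, restored on exit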
Hopefully someone will prove me wrong, but I think the answer is no: there is no other way. Another shortcoming of the method you chose is that it might misbehave when used along with other context managers in a with a, b, c: statement: the intended side effect of the CM is executed on object construction, and not in the __enter__ method as would be expected.
To be able to do what you want, you would have to know from inside the class constructor whether it was initialized as a context manager in a with statement, or simply called as a function. As far as I can tell, there is no way to gather that, not even with the inspect module.
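For contrast, here is a sketch of the well-behaved variant the previous paragraph alludes to: moving the side effect into __enter__ makes the object play nicely in a with statement, but then a bare set_active_language("en") call has no effect on its own, which defeats the original goal:

active_language = None

class set_active_language(object):
    def __init__(self, language):
        self.language = language  # no side effect yet

    def __enter__(self):
        global active_language
        self.previous_language = active_language
        active_language = self.language  # side effect happens here
        return self

    def __exit__(self, *args):
        global active_language
        active_language = self.previous_language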
Related
I am trying to provide wrappers that shortcut everyday commands, and Python environments (context managers) are very useful for that.
Is it possible to provide all methods of an object to the local namespace within a new environment?
class my_object:
    def method_a(self):
        ...

class my_environment:
    ...
    def __enter__(self):
        some_object = my_object()
        # something like `from some_object import *` ??
        return some_object
    ...

with my_environment() as some_object:
    # standard syntax:
    some_object.method_a()
    # shortcut:
    method_a()  # how to make this possible?
It will be rather complex and IMHO will not be worth it. The problem is that in Python, local variables are local to a function, not to a block. So what you are asking for would require that:
__enter__ declares nonlocal variables for all of the methods of some_object and saves their previous values, if any
__exit__ restores the previous values of those variables, or deletes them if they did not previously exist
Possible, but not really Pythonic IMHO; the sketch below shows the kind of gymnastics involved. After all, inside a method Python requires the object to be explicitly passed, and requires it to be prepended to any internal method call or attribute access. So my advice is to stick to the standard syntax here...
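For the curious, a rough sketch of those gymnastics (an illustration, not a recommendation; it only works when the with statement runs at module scope, since true function locals cannot be injected from outside):

import inspect

class my_object:
    def method_a(self):
        print("method_a called")

class my_environment:
    def __enter__(self):
        obj = my_object()
        # grab the caller's global namespace -- this is why the trick
        # only works at module scope
        self._namespace = inspect.currentframe().f_back.f_globals
        self._shadowed = {}
        self._injected = []
        for name in dir(obj):
            if name.startswith('_'):
                continue
            if name in self._namespace:
                self._shadowed[name] = self._namespace[name]
            self._namespace[name] = getattr(obj, name)
            self._injected.append(name)
        return obj

    def __exit__(self, *args):
        # restore shadowed names, delete the ones we created
        for name in self._injected:
            if name in self._shadowed:
                self._namespace[name] = self._shadowed[name]
            else:
                del self._namespace[name]

with my_environment() as some_object:
    method_a()  # works here, but only by mutating module globals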
What you are looking for is a class hierarchy. Along the way, please be careful with the naming conventions for classes (CapWords, per PEP 8).
class MyObject:
    def method_a(self):
        ...

class MyEnvironment(MyObject):
    ...
    def __enter__(self):
        return self
    ...

with MyEnvironment() as some_object:
    # standard syntax:
    some_object.method_a()
The shortcut you are looking for doesn't make much sense, because method_a() was defined as a method and therefore should be called on an instance.
Maybe @staticmethod can serve your case better.
class MyEnvironment:
    @staticmethod
    def method_a():
        ...

MyEnvironment.method_a()
Suppose that I have a function in my Python application that defines some kind of context - a user_id, for example. This function calls other functions that do not take this context as an argument. For example:
def f1(user, operation):
    user_id = user.id
    # somehow define user_id as a global/context variable
    # for any function call inside this scope
    f2(operation)

def f2(operation):
    # do something, not important, and then call another function
    f3(operation)

def f3(operation):
    # get user_id if there is a variable user_id in the context, get `None` otherwise
    user_id = getcontext("user_id")
    # do something with user_id and operation
My questions are:
Can the Context Variables of Python 3.7 be used for this? How?
Is this what these Context Variables are intended for?
How to do this with Python v3.6 or earlier?
EDIT
For multiple reasons (architectural legacy, libraries, etc.) I can't/won't change the signature of intermediary functions like f2, so I can't just pass user_id as an argument, nor place all those functions inside the same class.
You can use contextvars in Python 3.7 for what you're asking about. It's usually really easy:
import contextvars

user_id = contextvars.ContextVar("user_id")

def f1(user, operation):
    user_id.set(user.id)
    f2()

def f2():
    f3()

def f3():
    print(user_id.get(None))  # gets the user_id value, or None if no value is set
The set method on the ContextVar returns a Token instance, which you can use to reset the variable to the value it had before the set operation took place. So if you wanted f1 to restore things the way they were (not really useful for a user_id context variable, but more relevant for something like setting the precision in the decimal module), you can do:
token = some_context_var.set(value)
try:
    do_stuff()  # can use the value from some_context_var with some_context_var.get()
finally:
    some_context_var.reset(token)
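If you do this kind of temporary override often, the set/reset dance can be wrapped in a small context manager of your own. A minimal sketch (var_set_to is a made-up helper name, not part of the contextvars module):

import contextlib
import contextvars

some_context_var = contextvars.ContextVar("some_context_var")

@contextlib.contextmanager
def var_set_to(var, value):
    # temporarily set var to value, restoring the previous state on exit
    token = var.set(value)
    try:
        yield value
    finally:
        var.reset(token)

with var_set_to(some_context_var, 42):
    print(some_context_var.get())  # 42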
There's more to the contextvars module than this, but you almost certainly don't need to deal with the other stuff. You probably only need to be creating your own contexts and running code in other contexts if you're writing your own asynchronous framework from scratch.
If you're just using an existing framework (or writing a library that you want to play nice with asynchronous code), you don't need to deal with that stuff. Just create a global ContextVar (or look up one already defined by your framework) and get and set values on it as shown above, and you should be good to go.
A lot of contextvars use is probably going to be in the background, as an implementation detail of various libraries that want to have a "global" state that doesn't leak changes between threads or between separate asynchronous tasks within a single thread. The example above might make more sense in this kind of situation: f1 and f3 are part of the same library, and f2 is a user-supplied callback passed into the library somewhere else.
Essentially what you're looking for is a way to share state between a set of functions. The canonical way to do so in an object-oriented language is to use a class:
class Foo(object):
    def __init__(self, operation, user=None):
        self._operation = operation
        self._user_id = user.id if user else None

    def f1(self):
        print("in f1 : {}".format(self._user_id))
        self.f2()

    def f2(self):
        print("in f2 : {}".format(self._user_id))
        self.f3()

    def f3(self):
        print("in f3 : {}".format(self._user_id))

f = Foo(operation, user)
f.f1()
With this solution, your class instances (here f) are "the context" in which the functions are executed, each instance having its own dedicated context.
The functional programming equivalent would be to use closures. While Python supports closures, it is first and foremost an object-oriented language, so the OO solution is the most obvious; a minimal closure sketch follows anyway.
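A rough closure-based sketch (make_context is an illustrative name; the inner functions share user_id through their enclosing scope):

def make_context(user, operation):
    user_id = user.id  # state shared by the inner functions

    def f3():
        print("in f3 : {}".format(user_id))

    def f2():
        f3()

    def f1():
        f2()

    return f1

# run = make_context(user, operation)
# run()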
And finally, the clean procedural solution is to pass this context (which can be expressed as a dict or any similar datatype) all along the call chain, as shown in DFE's answer.
As a general rule: relying on global variables or some "magic" context that could - or not - be set by you-don't-know-who, you-don't-know-where, you-don't-know-when makes for code that is hard if not impossible to reason about, and that can break in the most unpredictable ways (googling for "globals evil" will yield an awful lot of literature on the topic).
You can use kwargs in your function calls in order to pass the context down the call stack:
def f1(user, operation):
    user_id = user.id
    # pass user_id down explicitly as a keyword argument
    f2(operation, user_id=user_id)

def f2(operation, **kwargs):
    # do something, not important, and then call another function
    f3(operation, **kwargs)

def f3(operation, **kwargs):
    # get user_id if there is a user_id in the context, `None` otherwise
    user_id = kwargs.get("user_id")
    # do something with user_id and operation
The kwargs dict is the equivalent of what you are looking for in context variables, but limited to the call stack. The values are passed down by reference at each call rather than being copied in memory.
In my opinion (though I would like to hear what you all think), context variables are an elegant way to allow global-like variables while keeping them under control.
I have a function foo that takes a parameter stuff.
Stuff can be something in a database, and I'd like to create a function that takes a stuff_id, gets the stuff from the db, and executes foo.
Here's my attempt to solve it:
1/ Create a second function with suffix from_stuff_id
def foo(stuff):
    ...  # do something

def foo_from_stuff_id(stuff_id):
    stuff = get_stuff(stuff_id)
    foo(stuff)
2/ Modify the first function
def foo(stuff=None, stuff_id=None):
    if stuff_id:
        stuff = get_stuff(stuff_id)
    ...  # do something
I don't like either way.
What's the most pythonic way to do it?
Assuming foo is the main component of your application, go with your first way. Each function should have a single purpose; the moment you combine multiple purposes into one function, you can easily get lost in long streams of code.
If, however, some other function can also provide stuff, then go with the second.
The only thing I would add is make sure you add docstrings (PEP-257) to each function to explain in words the role of the function. If necessary, you can also add comments to your code.
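For instance, a minimal sketch of what those docstrings could look like on the first variant:

def foo(stuff):
    """Do something with a stuff object."""
    ...

def foo_from_stuff_id(stuff_id):
    """Fetch the stuff with the given id from the db, then run foo on it."""
    foo(get_stuff(stuff_id))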
I'm not a big fan of type overloading in Python, but this is one of the cases where I might go for it if there's really a need:
def foo(stuff):
    if isinstance(stuff, int):
        stuff = get_stuff(stuff)
    ...
With type annotations it would look like this:
from typing import Union

def foo(stuff: Union[int, Stuff]):
    if isinstance(stuff, int):
        stuff = get_stuff(stuff)
    ...
It basically depends on how you've defined all these functions. If you're importing get_stuff from another module, the second approach is more Pythonic: from an OOP perspective you create functions for one particular purpose, and since get_stuff is already defined, you don't need to wrap it in another function.
If get_stuff is not defined in another module, then it depends on whether you are using classes or not. If you're using a class and you want to use all these functions together, you can use a method for accessing or connecting to the database and use that method within other methods like foo.
Example:
from some_module import get_stuff

class MyClass:
    def __init__(self, *args, **kwargs):
        # ...
        self.stuff_id = kwargs['stuff_id']

    def foo(self):
        stuff = get_stuff(self.stuff_id)
        # do stuff
Or, if the functionality of foo depends on the existence of stuff, you can store stuff on the instance and simply check its validity:
class MyClass:
    def __init__(self, *args, **kwargs):
        # ...
        _stuff_id = kwargs['stuff_id']
        self.stuff = get_stuff(_stuff_id)  # can return None

    def foo(self):
        if self.stuff:
            ...  # do stuff
        else:
            ...  # do other stuff
Or another neat design pattern for such situations might be using a dispatcher function (or a method, in a class) that delegates the execution to different functions based on the state of stuff.
def delegator(stuff, stuff_id):
    if stuff:  # or other condition
        foo(stuff)
    else:
        get_stuff(stuff_id)
I have a class that will only ever have one object at a time. I'm just starting OOP in Python and I was wondering which is the better approach: to assign an instance of this class to a variable and operate on that variable, or rather to have the instance referenced in a class variable instead. Here is an example of what I mean:
Referenced instance:
class Transaction(object):
    current_transaction = None
    in_progress = False

    def __init__(self):
        self.__class__.current_transaction = self
        self.__class__.in_progress = True
        self.name = 'abc'
        self.value = 50

    def update(self):
        do_smth()

Transaction()

if Transaction.in_progress:
    Transaction.current_transaction.update()
    print(Transaction.current_transaction.name)
    print(Transaction.current_transaction.value)
Instance in a variable:
class Transaction(object):
    def __init__(self):
        self.name = 'abc'
        self.value = 50

    def update(self):
        do_smth()

current_transaction = Transaction()
in_progress = True

if in_progress:
    current_transaction.update()
    print(current_transaction.name)
    print(current_transaction.value)
It's possible to see that you've encapsulated too much in the first case just by comparing the overall readability of the code: the second is much cleaner.
A better way to implement the first option is to use class methods: decorate all your methods with @classmethod and then call them as Transaction.method().
There's no practical difference in code quality between these two options. However, assuming that the class is final, that is, without derived classes, I would go for a third choice: use the module as a singleton and kill the class. This would be the most compact and most readable choice. You don't need classes to create singletons.
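A sketch of that third choice, assuming the code is saved as a module named transaction.py (names mirror the question; do_smth is the question's placeholder):

# transaction.py -- the module itself is the single "instance"
name = 'abc'
value = 50
in_progress = False

def update():
    do_smth()

Client code then simply does:

import transaction

if transaction.in_progress:
    transaction.update()
    print(transaction.name)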
I think the first version doesn't make much sense, and the second version of your code would be better in almost all situations. It can sometimes be useful to write a Singleton class (where only one instance ever exists) by overriding __new__ to always return the saved instance (after it's been created the first time). But usually you don't need that unless you're wrapping some external resource that really only ever makes sense to exist once.
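A minimal sketch of that __new__-based singleton (the class name is illustrative):

class Singleton(object):
    _instance = None

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
assert a is b  # both names refer to the one saved instance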
If your other code needs to share a single instance, there are other ways to do so (e.g. a global variable in some module or a constructor argument for each other object that needs a reference).
Note that if your instances have a very well defined life cycle, with specific events that should happen when they're created and destroyed, and unknown code running and using the object in between, the context manager protocol may be something you should look at, as it lets you use your instances in with statements:
with Transaction() as trans:
    trans.whatever()  # the Transaction will be notified if anything raises
    other_stuff()     # an exception that is not caught within the with block
    trans.foo()       # (so it can do a rollback if it wants to)
foo()  # the Transaction will be cleaned up (e.g. committed) when the indented with block ends
Implementing the context manager protocol requires an __enter__ and __exit__ method.
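A minimal sketch for the Transaction example (commit and rollback are hypothetical methods standing in for whatever your cleanup logic is):

class Transaction(object):
    def __enter__(self):
        # begin the transaction here
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:
            self.commit()    # hypothetical: the with block finished normally
        else:
            self.rollback()  # hypothetical: something raised inside the block
        return False  # don't suppress the exception, if any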
My Situation
I'm currently working on a project in Python which I want to use to learn a bit more about software architecture. I've read a few texts and watched a couple of talks about dependency injection, and learned to love how clearly constructor injection shows the dependencies of an object.
However, I'm kind of struggling with how to get a dependency passed to an object. I decided NOT to use a DI framework since:
I don't have enough knowledge of DI to specify my requirements and thus cannot choose a framework.
I want to keep the code free of more "magical" stuff since I have the feeling that introducing a seldom used framework drastically decreases readability. (More code to read of which only a small part is used).
Thus, I'm using custom factory functions to create objects and explicitly pass their dependencies:
# Business and Data Objects
class Foo:
    def __init__(self, bar):
        self.bar = bar

    def do_stuff(self):
        print(self.bar)

class Bar:
    def __init__(self, prefix):
        self.prefix = prefix

    def __str__(self):
        return str(self.prefix) + "Hello"

# Wiring up dependencies
def create_bar():
    return Bar("Bar says: ")

def create_foo():
    return Foo(create_bar())

# Starting the application
f = create_foo()
f.do_stuff()
Alternatively, if Foo has to create a number of Bars itself, it gets the creator function passed through its constructor:
# Business and Data Objects
class Foo:
    def __init__(self, create_bar):
        self.create_bar = create_bar

    def do_stuff(self, times):
        for _ in range(times):
            bar = self.create_bar()
            print(bar)

class Bar:
    def __init__(self, greeting):
        self.greeting = greeting

    def __str__(self):
        return self.greeting

# Wiring up dependencies
def create_bar():
    return Bar("Hello World")

def create_foo():
    return Foo(create_bar)

# Starting the application
f = create_foo()
f.do_stuff(3)
While I'd love to hear improvement suggestions on the code, this is not really the point of this post. However, I feel that this introduction is required to understand
My Question
While the above looks rather clear, readable and understandable to me, I run into a problem when the prefix dependency of Bar has to be identical for every Bar created within one Foo object, and is thus coupled to that Foo object's lifetime. As an example, consider a prefix which implements a counter (see the code examples below for implementation details).
I have two ideas for how to realize this; however, neither of them seems perfect to me:
1) Pass Prefix through Foo
The first idea is to add a constructor parameter to Foo and make it store the prefix in each Foo instance.
The obvious drawback is that it mixes up the responsibilities of Foo: it controls the business logic AND provides one of the dependencies of Bar. Once Bar no longer requires the dependency, Foo has to be modified. Seems like a no-go for me. Since I don't really think this should be a solution, I did not post the code here, but provided it on pastebin for the very interested reader ;)
2) Use Functions with State
Instead of placing the Prefix object inside Foo, this approach encapsulates it inside the create_foo function. By creating one Prefix for each Foo object and referencing it in a nameless function using lambda, I keep the details (a.k.a. there-is-a-prefix-object) away from Foo and inside my wiring logic. Of course a named function would work too (but lambda is shorter).
# Business and Data Objects
class Foo:
    def __init__(self, create_bar):
        self.create_bar = create_bar

    def do_stuff(self, times):
        for _ in range(times):
            bar = self.create_bar()
            print(bar)

class Bar:
    def __init__(self, prefix):
        self.prefix = prefix

    def __str__(self):
        return str(self.prefix) + "Hello"

class Prefix:
    def __init__(self, name):
        self.name = name
        self.count = 0

    def __str__(self):
        self.count += 1
        return self.name + " " + str(self.count) + ": "

# Wiring up dependencies
def create_bar(prefix):
    return Bar(prefix)

def create_prefix(name):
    return Prefix(name)

def create_foo(name):
    prefix = create_prefix(name)
    return Foo(lambda: create_bar(prefix))

# Starting the application
f1 = create_foo("foo1")
f2 = create_foo("foo2")
f1.do_stuff(3)
f2.do_stuff(2)
f1.do_stuff(2)
This approach seems much more useful to me. However, I'm not sure about common practices, and thus fear that having state inside functions is not really recommended. Coming from a Java/C++ background, I'd expect a function to depend only on its parameters, its class members (if it's a method), or some global state. Thus, a parameterless function that does not use global state would have to return exactly the same value every time it is called. This is not the case here: once the returned object is modified (which means that the counter in prefix has been increased), the function returns an object whose state differs from the one it had when being returned the first time.
Is this assumption just caused by my restricted experience in python and do I have to change my mindset, i.e. don't think of functions but of something callable? Or is supplying functions with state an unintended misuse of lambda?
3) Using a Callable Class
To overcome my doubts on stateful functions I could use callable classes where the create_foo function of approach 2 would be replaced by this:
class BarCreator:
    def __init__(self, prefix):
        self.prefix = prefix

    def __call__(self):
        return create_bar(self.prefix)

def create_foo(name):
    return Foo(BarCreator(create_prefix(name)))
While this seems a usable solution for me, it is sooo much more verbose.
Summary
I'm not absolutely sure how to handle the situation. Although I prefer number 2, I still have my doubts. Furthermore, I still hope that someone comes up with a more elegant way.
Please comment, if there is anything you think is too vague or can be possibly misunderstood. I will improve the question as far as my abilities allow me to do :)
All examples should run under python2.7 and python3 - if you experience any problems, please report them in the comments and I'll try to fix my code.
If you want to inject a callable object but don't want it to have a complex setup - if, as in your example, it's really just binding to a single input value - you could try using functools.partial to provide a function/value pair:
import functools

def factory_function(arg):
    # processing here
    return configured_object_based_on_arg

class Consumer(object):
    def __init__(self, injection):
        self._injected = injection

    def use_injected_value(self):
        print(self._injected())

injectable = functools.partial(factory_function, 'this is the configuration argument')
example = Consumer(injectable)
example.use_injected_value()  # prints the result of your factory function called with the bound argument
As an aside, if you're creating a dependency injection setup like your option 3, you probably want to put the knowledge about how to do the configuration into a factory class rather than doing it inline as you're doing here. That way you can swap out factories if you want to choose between strategies. It's not functionally very different (unless the creation is more complex than this example and involves persistent state), but it's more flexible down the road if the code looks like:
factory = FooBarFactory()
bar1 = factory.create_bar()
alt_factory = FooBlahFactory(extra_info)
bar2 = alt_factory.create_bar()
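For concreteness, a rough sketch of such a factory class (FooBarFactory is an assumed name; Foo, Bar and Prefix refer to the earlier examples):

class FooBarFactory(object):
    def __init__(self, name):
        self._prefix = Prefix(name)  # one counter per factory

    def create_bar(self):
        return Bar(self._prefix)

    def create_foo(self):
        return Foo(self.create_bar)

factory = FooBarFactory("foo1")
f = factory.create_foo()
f.do_stuff(3)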