I have a Python 3 script similar to the following:
def transaction(write=False):
    def _transaction_deco(fn):
        @wraps(fn)
        def _wrapper(*args, **kwargs):
            env.timestamp = arrow.utcnow()
            with TxnManager(write=write) as txn:
                ret = fn(*args, **kwargs)
            delattr(env, 'timestamp')
            return ret
        return _wrapper
    return _transaction_deco
@transaction(True)
def fn1(x=False):
    if x:
        return fn2()
    do_something()

@transaction(True)
def fn2():
    do_something_else()
fn1 and fn2 can be called independently, and both are wrapped in the transaction decorator.
Also, fn1 can call fn2 under certain circumstances. This is where I have a problem, because the decorator code is called twice, transactions are nested, and the timestamp is set twice and deleted twice (actually, the second time delattr(env, 'timestamp') is called, an exception is raised because timestamp is no longer there).
I want whatever is decorated with transaction to only be decorated if not already so.
I tried to modify my code based on hints from a separate Q&A:
def transaction(write=False):
    def _transaction_deco(fn):
        # Avoid double wrap.
        if getattr(fn, 'in_txn', False):
            return fn
        @wraps(fn)
        def _wrapper(*args, **kwargs):
            # Mark transaction begin timestamp.
            env.timestamp = arrow.utcnow()
            with TxnManager(write=write) as txn:
                ret = fn(*args, **kwargs)
            delattr(env, 'timestamp')
            return ret
        _wrapper.in_txn = True
        return _wrapper
    return _transaction_deco
I.e. if the in_txn attribute is set on the function, return the bare function, because we are already in a transaction.
However, if I run fn1(True) with a debugger, I see that the check for in_txn is never true, because _wrapper wraps fn1 for the outer call and fn2 for the inner call, so the two functions are each decorated separately.
I could set the timestamp variables as properties of the context manager, but I still end up with two nested context managers, and if both transactions are R/W, Bad Things will happen.
Can someone suggest a better solution?
Edit: Based on the replies received, I believe I omitted a key element: fn1 and fn2 are API methods and must always run within a transaction. The reason for using a decorator is to prevent the API implementer from having to make a decision about the transaction, or having to wrap the methods manually in a context manager or a decorator. However, I would be in favor of a non-decorator approach as long as it keeps the implementation of fn1 and fn2 plain.
Rather than a decorator, I'd just make another context manager to use explicitly.
@contextlib.contextmanager
def MyTxnManager(write):
    env.timestamp = arrow.utcnow()
    with TxnManager(write=write) as txn:
        yield txn
    delattr(env, 'timestamp')

with MyTxnManager(True) as txn:
    fn1()
Here, instead of hiding the transaction inside fn1 and fn2, we wrap just the timestamp management inside a new transaction manager. This way, calls to fn1 and fn2 occur in a transaction only if we explicitly cause them to do so.
There's no need to handle this in the decorator. It's much simpler to separate out the wrapped and unwrapped versions of fn2, e.g.:
def _fn2():
    do_something_else()

@transaction(True)
def fn1(x=False):
    if x:
        return _fn2()
    do_something()

fn2 = transaction(True)(_fn2)
The decorator syntax is just syntactic sugar, so the result will be exactly the same.
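To spell that out (just an illustration, using the same transaction decorator from the question), the following two forms produce the same wrapped function:

# Using the @ syntax:
@transaction(True)
def fn2():
    do_something_else()

# is the same as wrapping by hand after the plain definition:
def fn2():
    do_something_else()

fn2 = transaction(True)(fn2)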
You can always just check whether the environment already has a timestamp and not create one in that case. This uses the already-existing attribute as the flag instead of adding a new one:
def transaction(write=False):
    def _transaction_deco(fn):
        @wraps(fn)
        def _wrapper(*args, **kwargs):
            if hasattr(env, 'timestamp'):
                ret = fn(*args, **kwargs)
            else:
                env.timestamp = arrow.utcnow()
                with TxnManager(write=write) as txn:
                    ret = fn(*args, **kwargs)
                delattr(env, 'timestamp')
            return ret
        return _wrapper
    return _transaction_deco
This will let you nest an arbitrary number of transacted calls, and only use the timestamp and context manager of the outermost one.
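As a quick sketch of the resulting behaviour (assuming env, arrow and TxnManager work as in the question):

@transaction(write=True)
def fn2():
    do_something_else()

@transaction(write=True)
def fn1(x=False):
    if x:
        return fn2()   # env.timestamp already exists here, so no second transaction is opened
    do_something()

fn1(True)   # only the outermost wrapper sets and deletes env.timestamp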
Related
I'm quite new to decorators and I'm trying to build a decorator with an argument that should work both as a decorator and a stand-alone function. The basic idea is to raise an error if some condition is not satisfied. Example:
ensure_condition("fail") # exception should be raised
ensure_condition("pass") # nothing should happen
#ensure_condition("fail") # check condition before every `func` call
def f1():
return 1
I thought about doing this:
def ensure_condition(arg: str):
    if not _validate(arg):
        raise Exception("failed")

    def ensure_condition_decorator(f=lambda *_: None):
        def wrapper(*args, **kwargs):
            return f(*args, **kwargs)
        return wrapper

    return ensure_condition_decorator
But the above results in the _validate function being called also when the f1 function is declared (not only executed).
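To see why, note that the decorator line is evaluated as soon as f1 is defined; roughly speaking (an illustrative expansion, not new code):

# @ensure_condition("fail") above `def f1` is roughly equivalent to:
def f1():
    return 1

f1 = ensure_condition("fail")(f1)   # ensure_condition("fail") runs here, at definition time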
Any other ideas?
Thanks!
I have a function in Python that's used in many views. Specifically, it's in a Django app running under uWSGI. The function just fires tracking data into our database. I wanted to create a decorator that would disable that function for a specific call to the view that contains the function. Essentially:
@disable_tracking
def view(request):
    track(request)  # disabled by decorator
The decorator works by replacing the global definition of track with a void function that does nothing. Since we're running this under uWSGI, which is multithreaded, replacing a global definition would replace the function for all threads running under the process, so I defined the decorator to only activate for a matching tid and pid. Here:
def disable_tracking(func):
    # decorator
    def inner(*args, **kwargs):
        original_tracker = pascalservice.track.track
        anon = lambda *args, **kwargs: None
        tid = lambda: str(current_thread().ident)
        pid = lambda: str(getpid())
        uid = lambda: tid() + pid()
        current_uid = uid()
        cache.set(current_uid, True)
        switcher = lambda *args, **kwargs: anon(*args, **kwargs) if cache.get(uid()) else original_tracker(*args, **kwargs)
        pascalservice.track.track = switcher
        result = func(*args, **kwargs)
        cache.delete(current_uid)
        pascalservice.track.track = original_tracker
        return result
    return inner
The weird thing about this decorated function is that I'm getting occasional crashes, and I want to verify whether this style of coding is correct, as it's a little unconventional.
What you are doing is called monkey patching. While not a totally bad practice, it often leads to hard-to-pinpoint bugs, so use it with caution.
If the decorator is mandatory for some reason, I would suggest setting a flag on the request object in your decorator and adding a check for that flag in your track function.
The decorator:
def disable_tracking(func):
    def wrapper(*args, **kwargs):
        kwargs["request"].pascalservice_do_not_track = True
        return func(*args, **kwargs)
    return wrapper
Beginning of the track function:
if hasattr(request, "pascalservice_do_not_track"):
    return
# do the tracking ...
You may also just comment out the line calling track in your view.
I need a decorator (or any other neat design pattern) for functions which deal with files. The main purpose is to keep the file pointer at the same position it was at before the function acted on the file.
Here is my code, including some dummy tests. The problem is that the decorator doesn't work on instance methods, even if I pass the args and kwargs to it. I could not figure out how to design the code...
import unittest
from cStringIO import StringIO

def remain_file_pointer(file_obj):
    def wrap(f):
        def wrapped_f(*args, **kwargs):
            old_position = file_obj.tell()
            f(*args, **kwargs)
            file_obj.seek(old_position, 0)
        return wrapped_f
    return wrap
class TestRemainFilepointer(unittest.TestCase):

    def test_remains_filepointer(self):
        dummy_file = StringIO('abcdefg')

        @remain_file_pointer(dummy_file)
        def seek_into_file(dummy_file):
            dummy_file.seek(1, 0)

        self.assertEqual(0, dummy_file.tell())
        seek_into_file(dummy_file)
        self.assertEqual(0, dummy_file.tell())

    def test_remains_filepointer_in_class_method(self):

        class FileSeekerClass(object):
            def __init__(self):
                self.dummy_file = StringIO('abcdefg')

            @remain_file_pointer(self.dummy_file)
            def seek_into_file(self):
                self.dummy_file.seek(1, 0)

        fileseeker = FileSeekerClass()
        self.assertEqual(0, fileseeker.dummy_file.tell())
        fileseeker.seek_into_file()
        self.assertEqual(0, fileseeker.dummy_file.tell())
UPDATE:
Just to clarify the basic idea:
The decorator should take an argument, which is a file handler, and store the position before the actual function manipulates the file. Afterwards, the pointer should be set back to the old position. And this should work both for standalone functions and for methods.
My answer below fixes the problem by assuming that the last argument in the function is the file handler.
I'll not mark this as the final answer, since I'm pretty sure there is a better design.
I came up with a solution which assumes that the last argument of the data-manipulating function is the file handler and accesses it via args[-1]:
def remain_file_pointer(f):
    """Restore the file pointer position after calling the decorated function.

    This decorator assumes that the last argument is the file handler.
    """
    def wrapper(*args, **kwargs):
        file_obj = args[-1]
        old_position = file_obj.tell()
        return_value = f(*args, **kwargs)
        file_obj.seek(old_position, 0)
        return return_value
    return wrapper
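A hypothetical usage sketch under that assumption: the instance method takes the file object as its last argument instead of reaching for self.dummy_file, so args[-1] is the file in both the standalone and the method case:

class FileSeekerClass(object):
    def __init__(self):
        self.dummy_file = StringIO('abcdefg')

    @remain_file_pointer
    def seek_into_file(self, file_obj):
        file_obj.seek(1, 0)

fileseeker = FileSeekerClass()
fileseeker.seek_into_file(fileseeker.dummy_file)   # args[-1] is the file; pointer is restored afterwards
assert fileseeker.dummy_file.tell() == 0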
I've got some code in a decorator that I only want run once. Many other functions (utility and otherwise) will be called later down the line, and I want to ensure that other functions that may have this decorator aren't accidentally used way down in the nest of function calls.
I also want to be able to check, at any point, whether or not the current code has been wrapped in the decorator or not.
I've written this, but I just wanted to see if anyone else can think of a better/more elegant solution than checking for the (hopefully!) unique function name in the stack.
import inspect

def my_special_wrapper(fn):
    def my_special_wrapper(*args, **kwargs):
        """ Do some magic, only once! """
        # Check we've not done this before
        for frame in inspect.stack()[1:]:  # get stack, ignoring current!
            if frame[3] == 'my_special_wrapper':
                raise StandardError('Special wrapper cannot be nested')

        # Do magic then call fn
        # ...
        fn(*args, **kwargs)

    return my_special_wrapper

def within_special_wrapper():
    """ Helper to check that the function has been specially wrapped """
    for frame in inspect.stack():
        if frame[3] == 'my_special_wrapper':
            return True
    return False

@my_special_wrapper
def foo():
    print within_special_wrapper()
    bar()
    print 'Success!'

@my_special_wrapper
def bar():
    pass
foo()
Here is an example of using a global for this task - in what I believe is a relatively safe way:
from contextlib import contextmanager
from functools import wraps

_within_special_context = False

@contextmanager
def flag():
    global _within_special_context
    _within_special_context = True
    try:
        yield
    finally:
        _within_special_context = False

# I'd argue this would be best replaced by just checking the variable, but
# included for completeness.
def within_special_wrapper():
    return _within_special_context

def my_special_wrapper(f):
    @wraps(f)
    def internal(*args, **kwargs):
        if not _within_special_context:
            with flag():
                ...
                f(*args, **kwargs)
        else:
            raise Exception("No nested calls!")
    return internal

@my_special_wrapper
def foo():
    print(within_special_wrapper())
    bar()
    print('Success!')

@my_special_wrapper
def bar():
    pass

foo()
Which results in:
True
Traceback (most recent call last):
  File "/Users/gareth/Development/so/test.py", line 39, in <module>
    foo()
  File "/Users/gareth/Development/so/test.py", line 24, in internal
    f(*args, **kwargs)
  File "/Users/gareth/Development/so/test.py", line 32, in foo
    bar()
  File "/Users/gareth/Development/so/test.py", line 26, in internal
    raise Exception("No nested calls!")
Exception: No nested calls!
Using a context manager ensures that the variable is unset. You could just use try/finally, but if you want to modify the behaviour for different situations, the context manager can be made to be flexible and reusable.
The obvious solution is to have special_wrapper set a global flag, and just skip its magic if the flag is set.
This is about the only good use of a global variable - to allow a single piece of code to store information that is only used within that code, but which needs to persist for as long as that code is executing.
It doesn't need to be set in global scope. The function could set the flag on itself, for example, or on any object or class, as long as nothing else will touch it.
As noted by Lattyware in comments, you'll want to use either a try/except, or perhaps even better, a context manager to ensure the variable is unset.
Update: If you need the wrapped code to be able to check if it is wrapped, then provide a function which returns the value of the flag. You might want to wrap it all up with a class for neatness.
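A minimal sketch of that idea, keeping the flag on the decorator object itself rather than in module-global scope (the RuntimeError and the active attribute name are my choices, not from the question):

from functools import wraps

def my_special_wrapper(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        if my_special_wrapper.active:            # some specially wrapped function is already running
            raise RuntimeError('Special wrapper cannot be nested')
        my_special_wrapper.active = True
        try:
            # ... do the one-time magic, then call fn ...
            return fn(*args, **kwargs)
        finally:
            my_special_wrapper.active = False
    return wrapper

my_special_wrapper.active = False                # shared flag lives on the decorator itself

def within_special_wrapper():
    return my_special_wrapper.active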
Update 2: I see you're doing this for transaction management. There are probably already libraries which do this. I strongly recommend that you at least look at their code.
While my solution technically works, it requires a manual reset of the decorator. You could very well modify things such that the outermost function is instead a class (with the instances being the wrappers of the decorated functions passed to it in __init__) and have reset() called in __exit__(), which would then allow you to use the with statement to make the decorator usable only once within that context; a rough sketch of that variant follows the testing transcript below. Also note that this requires Python 3 due to the nonlocal keyword, but it can easily be adapted to 2.7 with a dict in place of the flag variable.
def once_usable(decorator):
    "Apply this decorator function to the decorator you want to be usable only once until it is reset."
    def outer_wrapper():
        flag = False
        def inner_wrapper(*args, **kwargs):
            nonlocal flag
            if not flag:
                flag = True
                return decorator(*args, **kwargs)
            else:
                print("Decorator currently unusable.")  # raising an Error also works
        def decorator_reset():
            nonlocal flag
            flag = False
        return (inner_wrapper, decorator_reset)
    return outer_wrapper()
Testing:
>>> def a(aa):
        return aa*2

>>> def b(bb):
        def wrapper(*args, **kwargs):
            print("Decorated.")
            return bb(*args, **kwargs)
        return wrapper

>>> dec, reset = once_usable(b)
>>> aa = dec(a)
>>> aa(22)
Decorated.
44
>>> aaa = dec(a)
Decorator currently unusable.
>>> reset()
>>> aaa = dec(a)
>>> aaa(11)
Decorated.
22
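A rough sketch of the class-based variant mentioned above (names and error type are illustrative): the instance is the once-usable decorator, and __exit__() calls reset() so a with block bounds the single use:

class OnceUsable(object):
    """Wrap a decorator so it can be applied only once until reset() is called."""

    def __init__(self, decorator):
        self.decorator = decorator
        self.used = False

    def __call__(self, *args, **kwargs):
        if self.used:
            raise RuntimeError("Decorator currently unusable.")
        self.used = True
        return self.decorator(*args, **kwargs)

    def reset(self):
        self.used = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.reset()          # usable again once the with block ends
        return False

# Usage sketch:
# with OnceUsable(b) as dec:
#     aa = dec(a)    # works; calling aa(22) prints "Decorated." and returns 44
#     aaa = dec(a)   # raises RuntimeError until reset()/__exit__ runs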
I just wrote a class decorator like the one below, trying to add debug support for every method in the target class:
import unittest
import inspect

def Debug(targetCls):
    for name, func in inspect.getmembers(targetCls, inspect.ismethod):
        def wrapper(*args, **kwargs):
            print ("Start debug support for %s.%s()" % (targetCls.__name__, name));
            result = func(*args, **kwargs)
            return result
        setattr(targetCls, name, wrapper)
    return targetCls

@Debug
class MyTestClass:
    def TestMethod1(self):
        print 'TestMethod1'

    def TestMethod2(self):
        print 'TestMethod2'

class Test(unittest.TestCase):
    def testName(self):
        for name, func in inspect.getmembers(MyTestClass, inspect.ismethod):
            print name, func
        print '~~~~~~~~~~~~~~~~~~~~~~~~~~'
        testCls = MyTestClass()
        testCls.TestMethod1()
        testCls.TestMethod2()

if __name__ == "__main__":
    #import sys;sys.argv = ['', 'Test.testName']
    unittest.main()
Running the above code, the result is:
Finding files... done.
Importing test modules ... done.
TestMethod1 <unbound method MyTestClass.wrapper>
TestMethod2 <unbound method MyTestClass.wrapper>
~~~~~~~~~~~~~~~~~~~~~~~~~~
Start debug support for MyTestClass.TestMethod2()
TestMethod2
Start debug support for MyTestClass.TestMethod2()
TestMethod2
----------------------------------------------------------------------
Ran 1 test in 0.004s
OK
You can see that 'TestMethod2' is printed twice.
Is there a problem? Is my understanding of decorators in Python correct?
Is there any workaround?
BTW, I don't want to add a decorator to every method in the class.
Consider this loop:
for name, func in inspect.getmembers(targetCls, inspect.ismethod):
    def wrapper(*args, **kwargs):
        print ("Start debug support for %s.%s()" % (targetCls.__name__, name))
When wrapper is eventually called, it looks up the value of name. Not finding it in locals(), it looks for it (and finds it) in the extended scope of the for-loop. But by then the for-loop has ended, and name refers to the last value in the loop, i.e. TestMethod2.
So both times the wrapper is called, name evaluates to TestMethod2.
The solution is to create an extended scope where name is bound to the right value. That can be done with a closure that uses default argument values. The default argument values are evaluated and fixed at definition time, and bound to the variables of the same name.
def Debug(targetCls):
    for name, func in inspect.getmembers(targetCls, inspect.ismethod):
        def closure(name=name, func=func):
            def wrapper(*args, **kwargs):
                print ("Start debug support for %s.%s()" % (targetCls.__name__, name))
                result = func(*args, **kwargs)
                return result
            return wrapper
        setattr(targetCls, name, closure())
    return targetCls
In the comments eryksun suggests an even better solution:
def Debug(targetCls):
    def closure(name, func):
        def wrapper(*args, **kwargs):
            print ("Start debug support for %s.%s()" % (targetCls.__name__, name));
            result = func(*args, **kwargs)
            return result
        return wrapper
    for name, func in inspect.getmembers(targetCls, inspect.ismethod):
        setattr(targetCls, name, closure(name, func))
    return targetCls
Now closure only has to be parsed once. Each call to closure(name,func) creates its own function scope with the distinct values for name and func bound correctly.
The problem isn't writing a valid class decorator as such; the class is obviously being decorated and doesn't just raise exceptions, and you do reach the code you intended to add to the class. So clearly you need to be looking for a bug in your decorator, not questioning whether you're managing to write a valid decorator.
In this case, the problem is with closures. In your Debug decorator, you loop over name and func, and for each loop iteration you define a function wrapper, which is a closure that has access to the loop variables. The problem is that as soon as the next loop iteration starts, the things referred to by the loop variables have changed. But you only ever call any of these wrapper functions after the entire loop is done. So every decorated method ends up calling out to the last values from the loop: in this case, TestMethod2.
What I would do in this case is make a method-level decorator, but as you don't want to explicitly decorate every method, you then make a class decorator that goes through all the methods and passes them to the method decorator. This works because you're not giving the wrapper access to your loop variable through a closure; you're instead passing a reference to the thing the loop variable referred to into a function (the decorator function which constructs and returns a wrapper); once that's done, rebinding the loop variable on the next iteration no longer affects the wrapper function. A rough sketch of this approach is shown below.
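A rough sketch of that two-level approach (same debug-print behaviour as the question; like the question, it assumes Python 2, where inspect.ismethod matches unbound methods on the class):

import inspect

def debug_method(cls_name, name, func):
    # Method-level decorator: cls_name, name and func are bound per call, not per loop iteration.
    def wrapper(*args, **kwargs):
        print ("Start debug support for %s.%s()" % (cls_name, name))
        result = func(*args, **kwargs)
        return result
    return wrapper

def Debug(targetCls):
    # Class-level decorator: hand every method to the method-level decorator.
    for name, func in inspect.getmembers(targetCls, inspect.ismethod):
        setattr(targetCls, name, debug_method(targetCls.__name__, name, func))
    return targetCls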
This is a very common problem. You think wrapper is a closure that captures the current func argument, but that is not the case. If you don't pass the current func value to the wrapper, its value is only looked up after the loop, so you get the last value.
You can do this:
def Debug(targetCls):
    def wrap(name, func):  # use the current func
        def wrapper(*args, **kwargs):
            print ("Start debug support for %s.%s()" % (targetCls.__name__, name));
            result = func(*args, **kwargs)
            return result
        return wrapper
    for name, func in inspect.getmembers(targetCls, inspect.ismethod):
        setattr(targetCls, name, wrap(name, func))
    return targetCls