I have some code that looks like this.
condition = <expression>
while condition:
<some code>
I would like to be able to write that without having to write a separate statement to create the condition. E.g.,
while <create_condition(<expression>)>:
<some code>
Here are two possibilities that don't work, but that would scratch my itch.
with <expression> as condition:
<some code>
The problem with that is that it doesn't loop. If I embed a while inside the with I'm back where I started.
Define my own function to do this.
def looping_with(<expression>, <some code>):
<define looping_with>
The problem with this is that if <some code> is passed as a lambda expression it is limited to a single expression. None of the workarounds I've seen are attractive.
If <some code> is passed as an actual def one gets a syntax error. You can't pass a function definition as an argument to another function.
One could define the function elsewhere and then pass the function. But the point of with, while, and lambda is that the code itself, not a reference to the code, is embedded in context. (The original version of my code, which is not terrible, is better than that.)
Any suggestions would be appreciated.
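On Python 3.8+, an assignment expression (the "walrus" operator) gives exactly this shape. A minimal sketch, where expression() is a hypothetical stand-in for <expression>; note that, unlike the two-statement original, it is re-evaluated on every pass:

import random

def expression():
    # hypothetical stand-in for <expression>
    return random.random() < 0.9

while condition := expression():
    print('looping; condition is', condition)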
UPDATE: (As Dave Beazley likes to say: You're going to hate this.)
I hesitate to offer this example, but this is something like what I'm trying to do.
class Container:
def __init__(self):
self.value = None
class Get_Next:
def __init__(self, gen):
self.gen = gen
def __call__(self, limit, container):
self.runnable_gen = self.gen(limit, container)
return self
def get_next(self):
try:
next(self.runnable_gen)
return True
except StopIteration:
return False
@Get_Next
def a_generator(limit, container):
i = 0
while i < limit:
container.value = i
yield
i += 1
container = Container()
gen = a_generator(5, container)
while gen.get_next():
print(container.value)
print('Done')
When run, the output is:
0
1
2
3
4
Done
P.S. Lest you think this is too far out, there is a very easy way to produce the same result. Remove the decorator from a_generator and then run:
for _ in a_generator(5, container):
print(container.value)
print('Done')
The result is the same.
The problem is that for _ in <something> is too ugly for me.
So, what I'm really looking for is a way to get the functionality of for _ in <something> with nicer syntax. The syntax should indicate (a) that we are establishing a context and (b) that we are looping within that context. Hence the request for a combination of with and while.
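One shape that comes close to that with-plus-while reading, using the undecorated a_generator and container from the update above (a sketch; _DONE is a hypothetical sentinel, not part of the original code):

from contextlib import closing

_DONE = object()   # sentinel, distinguishable from the generator's yielded None

with closing(a_generator(5, container)) as gen:   # context: gen is closed on exit
    while next(gen, _DONE) is not _DONE:          # looping within that context
        print(container.value)
print('Done')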
You could write a context manager class that would help with something like that:
class Condition:
def __init__(self, cond):
self.cond = cond
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
pass
def __call__(self, *args, **kwargs):
return self.cond(*args, **kwargs)
with Condition(lambda x: x != 3) as condition:
a = 0
while condition(a):
print('a:', a)
a += 1
Output:
a: 0
a: 1
a: 2
I have a generator that must perform a clean up step even if it was never iterated through:
def gen(data):
while True:
item = data.get()
if item is None:
break
# ...
try:
yield transformed_item
except GeneratorExit:
break
# clean up; must happen if gen was called
# ...
Everything works fine (i.e., clean up happens) when I call it like this:
for x in gen(data):
# ...
or like this:
g = gen(data)
r = next(g)
# ...
But when the generator goes out of scope without anyone calling next on it, then of course it never executes any code at all, so GeneratorExit isn't raised inside it, and the clean-up doesn't happen:
g = gen(data)
# g was never used before going out of scope
del g
How can I refactor the code to guarantee the cleanup step occurs even if the generator goes out of scope before it ever had a chance to yield anything?
You could use a context manager for this. It depends how long you need the generator to live.
class Gen(object):
def __init__(self, data):
self.data = data
def __enter__(self):
return self._gen(self.data)
def __exit__(self, exc_type, exc_val, exc_tb):
# Cleanup
        print('Cleaning up')
def _gen(self, data):
for i in data:
yield i
Then it would look like:
with Gen(data) as g:
r = next(g)
EDIT:
Given the limitation that you can't force end users to use context managers, could you just wrap the generator creation in another function and "seed" the generator?
def gen(data):
g = _gen(data)
next(g)
return g
def _gen(data):
yield None
while True:
... # Rest of generator
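Why seeding works, in a nutshell: once a generator has started executing, discarding it makes Python call its close(), which raises GeneratorExit at the paused yield, so try/finally (or except GeneratorExit) cleanup runs. A small self-contained demo (a sketch, not the asker's gen; in CPython, del triggers this immediately via reference counting):

def _gen():
    try:
        yield None            # the "seed" yield, consumed by next(g) below
        while True:
            yield 'item'
    finally:
        print('cleaning up')  # runs on close() or garbage collection

g = _gen()
next(g)    # start the generator so the cleanup code is armed
del g      # prints 'cleaning up'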
I'm creating a program that uses the Twisted module and callbacks.
However, I keep running into problems because the asynchronous parts misbehave.
I have learned (also from previous questions..) that the callbacks will be executed at a certain point, but this is unpredictable.
However, I have a certain program that goes like
j = calc(a)
i = calc2(b)
f = calc3(c)
if s:
combine(i, j, f)
Now the boolean s is set by a callback added by calc3. Obviously, this leads to an undefined-name error, because the callback has not run by the time s is needed.
However, I'm unsure how you SHOULD do if statements with asynchronous programming using Twisted. I've been trying many different things, but can't find anything that works.
Is there some way to use conditionals that require callback values?
Also, I'm using VIFF (which uses Twisted) for secure computations.
Maybe what you're looking for is twisted.internet.defer.gatherResults:
d = gatherResults([calc(a), calc2(b), calc3(c)])
def calculated((j, i, f)):
if s:
return combine(i, j, f)
d.addCallback(calculated)
However, this still has the problem that s is undefined. I can't quite tell how you expect s to be defined. If it is a local variable in calc3, then you need to return it so the caller can use it.
Perhaps calc3 looks something like this:
def calc3(argument):
s = bool(argument % 2)
return argument + 1
So, instead, consider making it look like this:
from collections import namedtuple

Calc3Result = namedtuple("Calc3Result", "condition value")
def calc3(argument):
s = bool(argument % 2)
return Calc3Result(s, argument + 1)
It's sort of unclear what you're asking here; it sounds like you know what callbacks are, but if so you should be able to arrive at this answer yourself. Now you can rewrite the calling code so it actually works:
d = gatherResults([calc(a), calc2(b), calc3(c)])
def calculated((j, i, calc3result)):
if calc3result.condition:
return combine(i, j, calc3result.value)
d.addCallback(calculated)
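Note that tuple parameters in a function signature, as in calculated((j, i, calc3result)), are Python 2 only; a Python 3 equivalent unpacks inside the function (a sketch):

d = gatherResults([calc(a), calc2(b), calc3(c)])

def calculated(results):
    # gatherResults fires with a list of the individual results, in order
    j, i, calc3result = results
    if calc3result.condition:
        return combine(i, j, calc3result.value)

d.addCallback(calculated)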
Or, based on your comment below, maybe calc3 looks more like this (this is the last guess I'm going to make; if it's wrong and you'd like more input, then please actually share the definition of calc3):
def _calc3Result(result, argument):
if result == "250":
# SMTP Success response, yay
return Calc3Result(True, argument)
# Anything else is bad
return Calc3Result(False, argument)
def calc3(argument):
d = emailObserver("The argument was %s" % (argument,))
    d.addCallback(_calc3Result, argument)
return d
Fortunately, this definition of calc3 will work just fine with the gatherResults / calculated code block immediately above.
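An alternative worth noting: Twisted's inlineCallbacks decorator lets this kind of conditional flow be written sequentially. A sketch, reusing the question's calc, calc2, calc3 and combine, and assuming calc3's Deferred fires with a (value, flag) pair:

from twisted.internet import defer

@defer.inlineCallbacks
def compute(a, b, c):
    j = yield calc(a)        # each yield resumes when the Deferred fires
    i = yield calc2(b)
    f, s = yield calc3(c)    # assumption: calc3 fires with (value, flag)
    if s:
        # on Python 3 a plain "return" works here as well
        defer.returnValue(combine(i, j, f))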
You have to put the if inside the callback. You can use a Deferred to structure your callbacks.
As stated in the previous answer, the processing logic should be handled in a callback chain; below is a simple demonstration of how this could work. C{DelayedTask} is a dummy implementation of a task that happens in the future and fires the supplied deferred.
So we first construct a special object, C{ConditionalTask}, which takes care of storing the multiple results and servicing the callbacks.
calc, calc2 and calc3 return deferreds whose callbacks point to C{ConditionalTask}.x_callback.
Every C{ConditionalTask}.x_callback calls C{ConditionalTask}.process, which checks whether all of the results have been registered and fires on a full set.
Additionally, C{ConditionalTask}.c_callback sets a flag indicating whether the data should be processed at all.
from twisted.internet import reactor, defer
class DelayedTask(object):
"""
Delayed async task dummy implementation
"""
def __init__(self,delay,deferred,retVal):
self.deferred = deferred
self.retVal = retVal
reactor.callLater(delay, self.on_completed)
def on_completed(self):
self.deferred.callback(self.retVal)
class ConditionalTask(object):
def __init__(self):
self.resultA=None
self.resultB=None
self.resultC=None
self.should_process=False
def a_callback(self,result):
self.resultA = result
self.process()
def b_callback(self,result):
self.resultB=result
self.process()
def c_callback(self,result):
self.resultC=result
"""
Here is an abstraction for your "s" boolean flag, obviously the logic
normally would go further than just setting the flag, you could
inspect the result variable and do other strange stuff
"""
self.should_process = True
self.process()
def process(self):
if None not in (self.resultA,self.resultB,self.resultC):
if self.should_process:
print 'We will now call the processor function and stop reactor'
reactor.stop()
def calc(a):
deferred = defer.Deferred()
DelayedTask(5,deferred,a)
return deferred
def calc2(a):
deferred = defer.Deferred()
DelayedTask(5,deferred,a*2)
return deferred
def calc3(a):
deferred = defer.Deferred()
DelayedTask(5,deferred,a*3)
return deferred
def main():
conditional_task = ConditionalTask()
dFA = calc(1)
dFB = calc2(2)
dFC = calc3(3)
dFA.addCallback(conditional_task.a_callback)
dFB.addCallback(conditional_task.b_callback)
dFC.addCallback(conditional_task.c_callback)
    reactor.run()

main()
I'm interacting with a lot of deeply nested JSON I didn't write, and would like to make my Python script more 'forgiving' of invalid input. I find myself writing involved try-except blocks, and would rather just wrap the dubious function.
I understand it's bad policy to swallow exceptions, but I'd rather have them printed and analysed later than have them halt execution. In my use case, continuing to execute over the loop is more valuable than getting every key.
Here's what I'm doing now:
try:
item['a'] = myobject.get('key').METHOD_THAT_DOESNT_EXIST()
except:
item['a'] = ''
try:
item['b'] = OBJECT_THAT_DOESNT_EXIST.get('key2')
except:
item['b'] = ''
try:
item['c'] = func1(ARGUMENT_THAT_DOESNT_EXIST)
except:
item['c'] = ''
...
try:
item['z'] = FUNCTION_THAT_DOESNT_EXIST(myobject.method())
except:
item['z'] = ''
Here's what I'd like, (1):
item['a'] = f(myobject.get('key').get('subkey'))
item['b'] = f(myobject.get('key2'))
item['c'] = f(func1(myobject))
...
or (2):
@f
def get_stuff():
item={}
item['a'] = myobject.get('key').get('subkey')
item['b'] = myobject.get('key2')
item['c'] = func1(myobject)
...
return(item)
...where I can wrap either the single data item (1), or a master function (2), in some function that turns execution-halting exceptions into empty fields, printed to stdout. The former would be sort of an item-wise skip - where that key isn't available, it logs blank and moves on - the latter is a row-skip, where if any of the fields don't work, the entire record is skipped.
My understanding is that some kind of wrapper should be able to fix this. Here's what I tried, with a wrapper:
def f(func):
def silenceit():
try:
func(*args,**kwargs)
except:
print('Error')
return(silenceit)
Here's why it doesn't work. If I call it on a function that doesn't exist, it doesn't try-catch it away:
>>> f(meow())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'meow' is not defined
Before I even add a blank return value, I'd like to get it to try-catch correctly. If the function had worked, this would have printed "Error", right?
Is a wrapper function the correct approach here?
UPDATE
I've had a lot of really useful, helpful answers below, and thank you for them. But I've edited the examples above to illustrate that I'm trying to catch more than nested key errors; specifically, I'm looking for a function that wraps a try-catch for...
When a method doesn't exist.
When an object doesn't exist, and is getting a method called on it.
When an object that does not exist is being called as an argument to a function.
Any combination of any of these things.
Bonus, when a function doesn't exist.
There are lots of good answers here, but I didn't see any that address the question of whether you can accomplish this via decorators.
The short answer is "no," at least not without structural changes to your code. Decorators operate at the function level, not on individual statements. Therefore, in order to use decorators, you would need to move each of the statements to be decorated into its own function.
But note that you can't just put the assignment itself inside the decorated function. You need to return the rhs expression (the value to be assigned) from the decorated function, then do the assignment outside.
To put this in terms of your example code, one might write code with the following pattern:
@return_on_failure('')
def computeA():
    return myobject.get('key').METHOD_THAT_DOESNT_EXIST()
item["a"] = computeA()
return_on_failure could be something like:
def return_on_failure(value):
def decorate(f):
def applicator(*args, **kwargs):
try:
return f(*args,**kwargs)
except:
print('Error')
return value
return applicator
return decorate
You could use a defaultdict and the context manager approach as outlined in Raymond Hettinger's PyCon 2013 presentation:
from collections import defaultdict
from contextlib import contextmanager
@contextmanager
def ignored(*exceptions):
try:
yield
except exceptions:
pass
item = defaultdict(str)
obj = dict()
with ignored(Exception):
item['a'] = obj.get(2).get(3)
print(item['a'])
obj[2] = dict()
obj[2][3] = 4
with ignored(Exception):
item['a'] = obj.get(2).get(3)
print(item['a'])
It's very easy to achieve using a configurable decorator.
def get_decorator(errors=(Exception, ), default_value=''):
def decorator(func):
def new_func(*args, **kwargs):
try:
return func(*args, **kwargs)
except errors, e:
print "Got error! ", repr(e)
return default_value
return new_func
return decorator
f = get_decorator((KeyError, NameError), default_value='default')
a = {}
@f
def example1(a):
return a['b']
@f
def example2(a):
return doesnt_exist()
print example1(a)
print example2(a)
Just pass get_decorator a tuple of the error types you want to silence and the default value to return.
Output will be
Got error! KeyError('b',)
default
Got error! NameError("global name 'doesnt_exist' is not defined",)
default
Edit: thanks to martineau, I changed the default value of errors to a tuple containing the base Exception, to prevent errors.
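For reference, a sketch of the same decorator in Python 3 syntax (only the except clause and the prints change):

def get_decorator(errors=(Exception,), default_value=''):
    def decorator(func):
        def new_func(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except errors as e:          # Python 3 exception syntax
                print('Got error!', repr(e))
                return default_value
        return new_func
    return decorator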
It depends on what exceptions you expect.
If your only use case is get(), you could do
item['b'] = myobject.get('key2', '')
For the other cases, your decorator approach might be useful, but not in the way you do it.
I'll try to show you:
def f(func):
def silenceit(*args, **kwargs): # takes all kinds of arguments
try:
return func(*args, **kwargs) # returns func's result
        except Exception as e:
print('Error:', e)
return e # not the best way, maybe we'd better return None
# or a wrapper object containing e.
return silenceit # on the correct level
Nevertheless, f(some_undefined_function()) won't work, because
a) f() isn't yet active at the execution time and
b) it is used wrong. The right way would be to wrap the function and then call it: f(function_to_wrap)().
A "layer of lambda" would help here:
wrapped_f = f(lambda: my_function())
wraps a lambda function which in turn calls a non-existing function. Calling wrapped_f() leads to calling the wrapper which calls the lambda which tries to call my_function(). If this doesn't exist, the lambda raises an exception which is caught by the wrapper.
This works because the name my_function is not executed at the time the lambda is defined, but when it is executed. And this execution is protected and wrapped by the function f() then. So the exception occurs inside the lambda and is propagated to the wrapping function provided by the decorator, which handles it gracefully.
This trick of moving the call inside the lambda doesn't work if you try to replace the lambda with a generic wrapper like
g = lambda function: lambda *a, **k: function(*a, **k)
followed by a
f(g(my_function))(arguments)
because here the name resolution is "back at the surface": my_function cannot be resolved and this happens before g() or even f() are called. So it doesn't work.
And if you try to do something like
g(print)(x.get('fail'))
it won't work either if there is no x, because g() protects print, not x.
If you want to protect x here, you'll have to do
value = f(lambda: x.get('fail'))()
because the wrapper provided by f() calls that lambda function which raises an exception which is then silenced.
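Putting this together, a compact runnable demo of the lambda trick (a sketch; x is deliberately left undefined, and the wrapper returns '' on failure, per the asker's wish):

def f(func):
    def silenceit(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            print('Error:', e)
            return ''
    return silenceit

value = f(lambda: x.get('fail'))()   # the name x only resolves inside the lambda
print(repr(value))                   # prints the NameError message, then ''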
Extending @iruvar's answer: starting with Python 3.4 there is an existing context manager for this in the Python standard library: https://docs.python.org/3/library/contextlib.html#contextlib.suppress
import os
from contextlib import suppress
with suppress(FileNotFoundError):
os.remove('somefile.tmp')
with suppress(FileNotFoundError):
os.remove('someotherfile.tmp')
In your case, you first evaluate the meow() call (which doesn't exist) and then wrap the result in the decorator; it doesn't work that way.
The exception is raised before anything is wrapped, and the wrapper is also wrongly indented (silenceit should not return itself). You might want to do something like:
def hardfail():
return meow() # meow doesn't exist
def f(func):
def wrapper():
try:
func()
except:
print 'error'
return wrapper
softfail = f(hardfail)
output:
>>> softfail()
error
>>> hardfail()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in hardfail
NameError: global name 'meow' is not defined
Anyway, in your case I don't understand why you don't use a simple helper such as:
def get_subkey(obj, key, subkey):
try:
return obj.get(key).get(subkey, '')
except AttributeError:
return ''
and in the code:
item['a'] = get_subkey(myobject, 'key', 'subkey')
Edited:
In case you want something that will work at any depth, you can do something like:
def get_from_object(obj, *keys):
try:
value = obj
for k in keys:
value = value.get(k)
return value
except AttributeError:
return ''
You'd call it like this:
>>> d = {1:{2:{3:{4:5}}}}
>>> get_from_object(d, 1, 2, 3, 4)
5
>>> get_from_object(d, 1, 2, 7)
''
>>> get_from_object(d, 1, 2, 3, 4, 5, 6, 7)
''
>>> get_from_object(d, 1, 2, 3)
{4: 5}
And using your code
item['a'] = get_from_object(obj, 2, 3)
By the way, from a personal point of view, I also like @cravoori's solution using a context manager. But this would mean having three lines of code each time:
item['a'] = ''
with ignored(AttributeError):
item['a'] = obj.get(2).get(3)
Why not just use a loop?
for dst_key, src_key in (('a', 'key'), ('b', 'key2')):
try:
item[dst_key] = myobject.get(src_key).get('subkey')
except Exception: # or KeyError?
item[dst_key] = ''
Or, if you wish, write a little helper:
def get_value(obj, key):
try:
return obj.get(key).get('subkey')
except Exception:
return ''
You can also combine both solutions if you have several places where you need to get a value; there, a helper function is more reasonable.
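Combining both (a sketch; myobject here is a hypothetical stand-in for the asker's nested JSON object):

myobject = {}   # stand-in: the real object would come from parsed JSON

def get_value(obj, key):
    try:
        return obj.get(key).get('subkey')
    except Exception:
        return ''

item = {}
for dst_key, src_key in (('a', 'key'), ('b', 'key2')):
    item[dst_key] = get_value(myobject, src_key)
print(item)   # {'a': '', 'b': ''} when the keys are missing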
Not sure that you actually need a decorator for your problem.
Since you're dealing with lots of broken code, it may be excusable to use eval in this case.
def my_eval(code):
try:
return eval(code)
except: # Can catch more specific exceptions here.
return ''
Then wrap all your potentially broken statements:
item['a'] = my_eval("""myobject.get('key').get('subkey')""")
item['b'] = my_eval("""myobject.get('key2')""")
item['c'] = my_eval("""func1(myobject)""")
How about something like this:
def exception_handler(func):
    def inner_function(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:  # TypeError alone would miss the ZeroDivisionError below
            print(f"{func.__name__} error")
    return inner_function
then
@exception_handler
def doSomethingExceptional():
a=2/0
All credit goes to: https://medium.com/swlh/handling-exceptions-in-python-a-cleaner-way-using-decorators-fae22aa0abec
Try Except Decorator for sync and async functions
Note: logger.error can be replaced with print
Latest version can be found here.
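The linked code isn't reproduced above; a minimal sketch of what such a decorator might look like, handling both sync and async functions (all names here are illustrative, not the linked implementation):

import asyncio
import functools

def try_except(func):
    """Wrap a sync or async function, printing exceptions instead of raising."""
    if asyncio.iscoroutinefunction(func):
        @functools.wraps(func)
        async def async_wrapper(*args, **kwargs):
            try:
                return await func(*args, **kwargs)
            except Exception as e:
                print(f"{func.__name__} failed: {e!r}")
        return async_wrapper

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            print(f"{func.__name__} failed: {e!r}")
    return wrapper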
In essence, I want to put a variable on the stack that will be reachable by all calls below that point on the stack until the block exits. In Java I would solve this using a static thread-local with support methods, which could then be accessed from methods.
Typical example: you get a request, and open a database connection. Until the request is complete, you want all code to use this database connection. After finishing and closing the request, you close the database connection.
What I need this for is a report generator. Each report consists of multiple parts, each part can rely on different calculations, and sometimes different parts rely in part on the same calculation. As I don't want to repeat heavy calculations, I need to cache them. My idea is to decorate methods with a cache decorator. The cache creates an id based on the method name and module and its arguments, checks whether it has already calculated this in a stack variable, and executes the method if not.
I will try to clarify by showing my current implementation. What I want to do is simplify the code for those implementing calculations.
First, I have the central cache access object, which I call MathContext:
class MathContext(object):
def __init__(self, fn):
self.fn = fn
self.cache = dict()
def get(self, calc_config):
id = create_id(calc_config)
if id not in self.cache:
self.cache[id] = calc_config.exec(self)
return self.cache[id]
The fn argument is the filename the context is created in relation to, from which data can be read for the calculations.
Then we have the Calculation class:
class CalcBase(object):
def exec(self, math_context):
raise NotImplementedError
And here is a stupid Fibonacci example. None of the methods is actually recursive (they work on large sets of data instead), but it serves to demonstrate how you would depend on other calculations:
class Fibonacci(CalcBase):
def __init__(self, n): self.n = n
def exec(self, math_context):
if self.n < 2: return 1
a = math_context.get(Fibonacci(self.n-1))
b = math_context.get(Fibonacci(self.n-2))
return a+b
What I want Fibonacci to be instead is just a decorated method:
@cache
def fib(n):
if n<2: return 1
return fib(n-1)+fib(n-2)
With the math_context example, when math_context goes out of scope, so do all its cached values. I want the same thing for the decorator: i.e., at point X, everything cached by @cache is dereferenced and garbage collected.
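For comparison, here is a sketch of how this block-scoped cache could be approximated today with contextvars (Python 3.7+); the names and the choice not to cache outside an active context are illustrative assumptions, not from the original question:

import contextvars
from contextlib import contextmanager
from functools import wraps

_cache_var = contextvars.ContextVar('math_cache', default=None)

@contextmanager
def math_context():
    """Open a cache scope; everything memoized inside is dropped on exit."""
    token = _cache_var.set({})
    try:
        yield
    finally:
        _cache_var.reset(token)

def cache(fn):
    @wraps(fn)
    def wrapper(*args):                 # positional args only, for brevity
        store = _cache_var.get()
        if store is None:               # no active context: don't cache
            return fn(*args)
        key = (fn.__module__, fn.__qualname__, args)
        if key not in store:
            store[key] = fn(*args)
        return store[key]
    return wrapper

@cache
def fib(n):
    if n < 2:
        return 1
    return fib(n - 1) + fib(n - 2)

with math_context():
    print(fib(30))   # memoized within this block; the cache is freed afterwards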
I went ahead and made something that might just do what you want. It can be used as both a decorator and a context manager:
from __future__ import with_statement
try:
import cPickle as pickle
except ImportError:
import pickle
class cached(object):
"""Decorator/context manager for caching function call results.
All results are cached in one dictionary that is shared by all cached
functions.
To use this as a decorator:
        @cached
def function(...):
...
The results returned by a decorated function are not cleared from the
cache until decorated_function.clear_my_cache() or cached.clear_cache()
is called
To use this as a context manager:
with cached(function) as function:
...
function(...)
...
The function's return values will be cleared from the cache when the
with block ends
To clear all cached results, call the cached.clear_cache() class method
"""
_CACHE = {}
def __init__(self, fn):
self._fn = fn
def __call__(self, *args, **kwds):
key = self._cache_key(*args, **kwds)
function_cache = self._CACHE.setdefault(self._fn, {})
try:
return function_cache[key]
except KeyError:
function_cache[key] = result = self._fn(*args, **kwds)
return result
def clear_my_cache(self):
"""Clear the cache for a decorated function
"""
try:
del self._CACHE[self._fn]
except KeyError:
pass # no cached results
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
self.clear_my_cache()
def _cache_key(self, *args, **kwds):
"""Create a cache key for the given positional and keyword
arguments. pickle.dumps() is used because there could be
unhashable objects in the arguments, but passing them to
pickle.dumps() will result in a string, which is always hashable.
I used this to make the cached class as generic as possible. Depending
on your requirements, other key generating techniques may be more
efficient
"""
return pickle.dumps((args, sorted(kwds.items())), pickle.HIGHEST_PROTOCOL)
    @classmethod
def clear_cache(cls):
"""Clear everything from all functions from the cache
"""
cls._CACHE = {}
if __name__ == '__main__':
# used as decorator
    @cached
def fibonacci(n):
print "calculating fibonacci(%d)" % n
if n == 0:
return 0
if n == 1:
return 1
return fibonacci(n - 1) + fibonacci(n - 2)
for n in xrange(10):
print 'fibonacci(%d) = %d' % (n, fibonacci(n))
def lucas(n):
print "calculating lucas(%d)" % n
if n == 0:
return 2
if n == 1:
return 1
return lucas(n - 1) + lucas(n - 2)
# used as context manager
with cached(lucas) as lucas:
for i in xrange(10):
print 'lucas(%d) = %d' % (i, lucas(i))
for n in xrange(9, -1, -1):
print 'fibonacci(%d) = %d' % (n, fibonacci(n))
cached.clear_cache()
for n in xrange(9, -1, -1):
print 'fibonacci(%d) = %d' % (n, fibonacci(n))
This question seems to be two questions:
a) sharing a db connection
b) caching/memoizing
b) you have answered yourself.
a) I don't seem to understand why you need to put it on the stack. You can do one of these:
- use a class; the connection could be an attribute of it
- decorate all your functions so that they get a connection from a central location
- have each function explicitly use a global connection method
- create a connection and pass it around, or create a context object and pass the context around; the connection can be part of the context
- etc., etc.
You could use a global variable wrapped in a getter function:
connection = None

def getConnection():
    global connection
    if connection is None:
        connection = createConnection()
    return connection
"you get a request, and open a database connection.... you close the database connection."
This is what objects are for. Create the connection object, pass it to other objects, and then close it when you're done. Globals are not appropriate. Simply pass the value around as a parameter to the other objects that are doing the work.
"Each report consist of multiple parts, each part can rely on different calculations, sometimes different parts relies in part on the same calculation.... I need to cache them"
This is what objects are for. Create a dictionary with useful calculation results and pass that around from report part to report part.
You don't need to mess with "stack variables", "static thread local" or anything like that.
Just pass ordinary variable arguments to ordinary method functions. You'll be a lot happier.
class MemoizedCalculation( object ):
pass
class Fibonacci( MemoizedCalculation ):
def __init__( self ):
self.cache= { 0: 1, 1: 1 }
def __call__( self, arg ):
if arg not in self.cache:
self.cache[arg]= self(arg-1) + self(arg-2)
return self.cache[arg]
class MathContext( object ):
def __init__( self ):
self.fibonacci = Fibonacci()
You can use it like this
>>> mc= MathContext()
>>> mc.fibonacci( 4 )
5
You can define any number of calculations and fold them all into a single container object.
If you want, you can make the MathContext into a formal context manager so that it works with the with statement. Add these two methods to MathContext:
def __enter__( self ):
print "Initialize"
return self
def __exit__( self, type_, value, traceback ):
print "Release"
Then you can do this.
with MathContext() as mc:
print mc.fibonacci( 4 )
At the end of the with statement, you are guaranteed that the __exit__ method was called.