Python functional evaluation efficiency

If I do this:
x=[(t,some_very_complex_computation(y)) for t in z]
Apparently some_very_complex_computation(y) is not dependent on t. So it should be evaluated only once. Is there any way to make Python aware of this, so it won't evaluate some_very_complex_computation(y) for every iteration?
Edit: I really want to do that in one line...

Usually you should follow San4ez's advice and just use a temporary variable here. I will still present a few techniques that might prove useful under certain circumstances:
In general, if you want to bind a name just for a sub-expression (which is usually why you need a temporary variable), you can use a lambda:
x = (lambda result=some_very_complex_computation(y): [(t, result) for t in z])()
In this particular case, the following is a quite clean and readable solution:
import itertools
x = zip(z, itertools.repeat(some_very_complex_computation(y)))
A general note about automatic optimizations like these
In a dynamic language like Python, an implementation would have a very hard time figuring out that some_very_complex_computation is referentially transparent, that is, that it will always return the same result for the same arguments. You might want to look into a functional language like Haskell if you want magic like that.
"Explicit" pureness: Memoization
What you can do however is make some_very_complex_computation explicitly cache its return values for recent arguments:
from functools import lru_cache

@lru_cache()
def some_very_complex_computation(y):
    # ...
This is Python 3. In Python 2, you'd have to write the decorator yourself:
from functools import wraps

def memoize(f):
    cache = {}
    @wraps(f)
    def memoized(*args):
        if args in cache:
            return cache[args]
        res = cache[args] = f(*args)
        return res
    return memoized

@memoize
def some_very_complex_computation(y):
    # ...
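For illustration, here is a small self-contained sketch of the cached version in action (the sleeping function body and the values of y and z are made up): the expensive work runs once, and every later call inside the comprehension is a cache hit.
import time
from functools import lru_cache

@lru_cache()
def some_very_complex_computation(y):
    time.sleep(1)  # stand-in for the expensive work
    return y * y

z = range(5)
y = 3
# Only the first call actually sleeps; the other four are cache hits.
x = [(t, some_very_complex_computation(y)) for t in z]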

No, you should save the value in a variable:
result = some_very_complex_computation(y)
x = [(t, result) for t in z]

I understand the sometimes perverse urge to get everything into one line, but at the same time it is good to keep things readable. You may consider this more readable than the lambda version:
x = [(t, s) for s in [some_very_complex_computation(y)] for t in z]
However, you are probably better going for the answer by San4ez as being simple, readable (and possibly faster than creating and iterating through a one element list).

You can either:
Move the call out of the list comprehension
or
Use memoization: when some_very_complex_computation(y) gets called, store the result in a dictionary, and if it gets called again with the same argument, return the stored value (a rough sketch follows below).
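A rough sketch of that second option, with a hand-rolled cache dict (the stand-in computation is illustrative):
_cache = {}

def some_very_complex_computation(y):
    # Compute only on a cache miss; repeated calls with the same y become
    # a plain dictionary lookup.
    if y not in _cache:
        _cache[y] = y ** 2  # stand-in for the real expensive work
    return _cache[y]
The list comprehension from the question then stays exactly as written; only the first call per distinct y does real work.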

TL;DR version
zip(z, [long_computation(y)] * len(z))
Original answer:
As a rule of thumb, if you have some computation with a long execution time, it would be a good idea to cache the result directly in the function like this:
_cached_results = {}

def computation(v):
    if v in _cached_results:
        return _cached_results[v]
    # otherwise do the computation here and bind it to `result`...
    _cached_results[v] = result
    return result
This would solve your problem too.
On one-liners
Doing one-liners for the sake of them is poor coding, yet... if you really wanted to do it in one line:
>>> def func(v):
...     print 'executing func'
...     return v * 2
...
>>> z = [1, 2, 3]
>>> zip(z, [func(10)] * len(z))
executing func
[(1, 20), (2, 20), (3, 20)]

@San4ez has given the traditional, correct, simple, and beautiful answer.
In the spirit of the one-liner though, here's a technique for putting it all in one statement. The core idea is to use a nested for-loop to pre-evaluate subexpressions:
result = [(t, result) for result in [some_very_complex_computation(y)] for t in z]
If that blows your mind, you could just use a semicolon to put multiple statements on one line:
result = some_very_complex_computation(y); x = [(t, result) for t in z]

Python can't know whether the function has side effects or changes from run to run, so you have to move the call out of the list comprehension manually.

Related

Can I implement a function or better a decorator that makes func(a1)(a2)(a3)...(an) == func(a1, a2, a3,...,an)? [duplicate]

On Codewars.com I encountered the following task:
Create a function add that adds numbers together when called in succession. So add(1) should return 1, add(1)(2) should return 1+2, ...
While I'm familiar with the basics of Python, I've never encountered a function that is able to be called in such succession, i.e. a function f(x) that can be called as f(x)(y)(z).... Thus far, I'm not even sure how to interpret this notation.
As a mathematician, I'd suspect that f(x)(y) is a function that assigns to every x a function g_{x} and then returns g_{x}(y) and likewise for f(x)(y)(z).
Should this interpretation be correct, Python would allow me to dynamically create functions which seems very interesting to me. I've searched the web for the past hour, but wasn't able to find a lead in the right direction. Since I don't know how this programming concept is called, however, this may not be too surprising.
How do you call this concept and where can I read more about it?
I don't know whether this is function chaining as much as it's callable chaining, but, since functions are callables, I guess there's no harm done. Either way, there are two ways I can think of doing this:
Sub-classing int and defining __call__:
The first way would be with a custom int subclass that defines __call__ which returns a new instance of itself with the updated value:
class CustomInt(int):
    def __call__(self, v):
        return CustomInt(self + v)
Function add can now be defined to return a CustomInt instance, which, as a callable that returns an updated value of itself, can be called in succession:
>>> def add(v):
...     return CustomInt(v)
>>> add(1)
1
>>> add(1)(2)
3
>>> add(1)(2)(3)(44)  # and so on..
50
In addition, as an int subclass, the returned value retains the __repr__ and __str__ behavior of ints. For more complex operations though, you should define other dunders appropriately.
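For example (purely illustrative), __add__ could be made to return CustomInt as well, so the result of arithmetic stays chainable:
class CustomInt(int):
    def __call__(self, v):
        return CustomInt(self + v)

    # Without this, CustomInt(1) + 2 falls back to int.__add__ and returns
    # a plain int, which is no longer callable.
    def __add__(self, other):
        return CustomInt(int(self) + other)

print((CustomInt(1) + 2)(3))  # 6, still chainable after +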
As @Caridorc noted in a comment, add could also be simply written as:
add = CustomInt
Renaming the class to add instead of CustomInt also works similarly.
Define a closure, requires extra call to yield value:
The only other way I can think of involves a nested function that requires an extra empty argument call in order to return the result. I'm not using nonlocal and opt for attaching attributes to the function objects to make it portable between Pythons:
def add(v):
    def _inner_adder(val=None):
        """
        if val is None we return _inner_adder.v
        else we increment and return ourselves
        """
        if val is None:
            return _inner_adder.v
        _inner_adder.v += val
        return _inner_adder
    _inner_adder.v = v  # save value
    return _inner_adder
This continuously returns itself (_inner_adder) which, if a val is supplied, increments it (_inner_adder.v += val) and, if not, returns the value as it is. As mentioned, it requires an extra () call in order to return the incremented value:
>>> add(1)(2)()
3
>>> add(1)(2)(3)() # and so on..
6
You can hate me, but here is a one-liner :)
add = lambda v: type("", (int,), {"__call__": lambda self, v: self.__class__(self + v)})(v)
Edit: OK, how does this work? The code is identical to @Jim's answer, but everything happens on a single line.
type can be used to construct new types: type(name, bases, dict) -> a new type. For name we provide an empty string, as a name is not really needed in this case. For bases (a tuple) we provide (int,), which is identical to inheriting from int. dict holds the class attributes, where we attach the __call__ lambda.
self.__class__(self + v) is identical to return CustomInt(self + v)
The new type is constructed and returned within the outer lambda.
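Spelled out without the outer lambda, the one-liner corresponds roughly to this (the helper name cls is arbitrary):
def add(v):
    # Build an anonymous int subclass whose __call__ returns a new
    # instance of the same class, then instantiate it with the start value.
    cls = type("", (int,), {
        "__call__": lambda self, v: self.__class__(self + v),
    })
    return cls(v)

print(add(1)(2)(3))  # 6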
If you want a function that can be called multiple times in succession, it needs to return a callable object each time (for example, a function); otherwise you have to create your own object that defines __call__ so that it is callable.
The next point is that you need to preserve all the arguments, which in this case means you might want to use coroutines or a recursive function. Note that coroutines are much more optimized/flexible than recursive functions, especially for tasks like this.
Here is a sample using a coroutine that preserves its latest state. Note that it can't be called in succession, since the value it yields is an integer, which is not callable, but you might think about turning this into your expected object ;-).
def add():
    current = yield
    while True:
        value = yield current
        current = value + current

it = add()
next(it)
print(it.send(10))
print(it.send(2))
print(it.send(4))
10
12
16
Simply:
class add(int):
    def __call__(self, n):
        return add(self + n)
If you are willing to accept an additional () in order to retrieve the result you can use functools.partial:
from functools import partial

def add(*args, result=0):
    return partial(add, result=sum(args) + result) if args else result
For example:
>>> add(1)
functools.partial(<function add at 0x7ffbcf3ff430>, result=1)
>>> add(1)(2)
functools.partial(<function add at 0x7ffbcf3ff430>, result=3)
>>> add(1)(2)()
3
This also allows specifying multiple numbers at once:
>>> add(1, 2, 3)(4, 5)(6)()
21
If you want to restrict it to a single number you can do the following:
def add(x=None, *, result=0):
    return partial(add, result=x + result) if x is not None else result
If you want add(x)(y)(z) to readily return the result and be further callable then sub-classing int is the way to go.
The pythonic way to do this would be to use dynamic arguments:
def add(*args):
    return sum(args)
This is not the answer you're looking for, and you may know this, but I thought I would give it anyway, because if someone is wondering about doing this not out of curiosity but for real work, they should probably have the "right thing to do" answer.

Generate python function with different arguments

Background
I have a function that takes a number of parameters and returns an error measure which I then want to minimize (using scipy.optimize.leastsq, but that is beside the point right now).
As a toy example, let's assume my function to optimize takes the four parameters a, b, c, d:
def f(a, b, c, d):
    err = a*b - c*d
    return err
The optimizer then wants a function with the signature func(x, *args) where x is the parameter vector.
That is, my function is currently written like:
def f_opt(x, *args):
    a, b, c, d = x
    err = a*b - c*d
    return err
But, now I want to do a number of experiments where I fix some parameters while keeping some parameters free in the optimization step.
I could of course do something like:
def f_ad_free(x, b, c):
    a, d = x
    return f(a, b, c, d)
But this will be cumbersome since I have over 10 parameters which means the combinations of different numbers of free-vs-fixed parameters will potentially be quite large.
First approach using dicts
One solution I had was to write my inner function f with keyword args instead of positional args and then wrap the solution like this:
def generate(func, all_param, fixed_param):
    param_dict = {k: None for k in all_param}
    free_param = [param for param in all_param if param not in fixed_param]
    def wrapped(x, *args):
        param_dict.update({k: v for k, v in zip(fixed_param, args)})
        param_dict.update({k: v for k, v in zip(free_param, x)})
        return func(**param_dict)
    return wrapped
Creating a function that fixes 'b' and 'c' then turns into the following:
all_params = ['a', 'b', 'c', 'd']
f_bc_fixed = generate(f_inner, all_params, ['b', 'c'])

a = 1
b = 2
c = 3
d = 4
f_bc_fixed((a, d), b, c)
Question time!
My question is whether anyone can think of a neater way to solve this. Since the final function is going to be run in an optimization step, I can't accept too much overhead for each function call.
The time it takes to generate the optimization function is irrelevant.
I can think of several ways to avoid using a closure as you do above, though after doing some testing, I'm not sure either of these will be faster. One approach might be to skip the wrapper and just write a function that accepts
A vector
A list of free names
A dictionary mapping names to values.
Then do something very like what you do above, but in the function itself:
def f(free_vals, free_names, params):
    params.update(zip(free_names, free_vals))
    err = params['a'] * params['b'] - params['c'] * params['d']
    return err
For code that uses variable names multiple times, make vars local up front, e.g.
a = params['a']
b = params['b']
and so on. This might seem cumbersome, but it has the advantage of making everything explicit, avoiding the kinds of namespace searches that could make closures slow.
Then pass a list of free names and a dictionary of fixed params via the args parameter to optimize.leastsq. (Note that the params dictionary is mutable, which means that there could be side effects in theory; but in this case it shouldn't matter because only the free params are being overwritten by update, so I omitted the copy step for the sake of speed.)
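As a self-contained sketch of that calling convention (the data, the initial guess, and the fixed value below are made up), the free-name list and the fixed-parameter dict simply travel through leastsq's args tuple:
import numpy as np
from scipy.optimize import leastsq

xdata = np.linspace(0, 1, 20)
ydata = 2.0 * xdata + 1.0  # made-up data for y = a*x + b

def residuals(free_vals, free_names, params):
    params.update(zip(free_names, free_vals))
    return params['a'] * xdata + params['b'] - ydata

# Fit 'a' while keeping 'b' fixed; leastsq forwards args unchanged on every call.
best = leastsq(residuals, np.array([0.5]), args=(['a'], {'b': 1.0}))[0]
print(best)  # approximately [2.0]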
The main downsides of this approach are that it shifts some complexity into the call to optimize.leastsq, and it makes your code less reusable. A second approach avoids those problems though it might not be quite as fast: using a callable class.
class OptWrapper(object):
    def __init__(self, func, free_names, **fixed_params):
        self.func = func
        self.free_names = free_names
        self.params = fixed_params

    def __call__(self, x, *args):
        self.params.update(zip(self.free_names, x))
        return self.func(**self.params)
You can see that I simplified the parameter structure for __init__; the fixed params are passed here as keyword arguments, and the user must ensure that free_names and fixed_params don't have overlapping names. I think the simplicity is worth the tradeoff but you can easily enforce the separation between the two just as you did in your wrapper code.
I like this second approach best; it has the flexibility of your closure-based approach, but I find it more readable. All the names are in (or can be accessed through) the local namespace, which I thought would speed things up -- but after some testing I think there's reason to believe that the closure approach will still be faster than this; accessing the __call__ method seems to add about 100 ns of overhead per call. I would strongly recommend testing if performance is a real issue.
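For completeness, usage of the wrapper might look like this (using the toy objective and values from the question):
def f(a, b, c, d):
    return a * b - c * d

# Fix b and c at construction time; the free names map onto x at call time.
f_bc_fixed = OptWrapper(f, ['a', 'd'], b=2, c=3)
print(f_bc_fixed([1, 4]))  # a=1, d=4 -> 1*2 - 3*4 = -10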
Your generate function is basically the same as functools.partial, which is what I would use here.
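A minimal sketch of that suggestion, assuming the keyword-argument form of f from the question:
from functools import partial

def f(a, b, c, d):
    return a * b - c * d

f_bc = partial(f, b=2, c=3)  # b and c are now fixed

def f_opt(x, *args):
    a, d = x  # only the free parameters come from x
    return f_bc(a=a, d=d)

print(f_opt([1, 4]))  # 1*2 - 3*4 = -10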

Is there a way to get the return value of a function and test its "nonzero" at the same time?

I have code that looks like this:
if(func_cliche_start(line)):
    a = func_cliche_start(line)
    #... do stuff with 'a' and line here
elif(func_test_start(line)):
    a = func_test_start(line)
    #... do stuff with a and line here
elif(func_macro_start(line)):
    a = func_macro_start(line)
    #... do stuff with a and line here
...
Each of the func_blah_start functions either return None or a string (based on the input line). I don't like the redundant call to func_blah_start as it seems like a waste (func_blah_start is "pure", so we can assume no side effects). Is there a better idiom for this type of thing, or is there a better way to do it?
Perhaps I'm wrong (my C is rusty), but I thought that you could do something like this in C:
int a;
if(a=myfunc(input)){ /*do something with a and input here*/ }
is there a python equivalent?
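On Python 3.8 and later, the assignment expression (the := "walrus" operator) provides essentially this pattern; a minimal sketch using the question's function names:
if (a := func_cliche_start(line)):
    ...  # do stuff with 'a' and line here
elif (a := func_test_start(line)):
    ...  # do stuff with 'a' and line here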
Why don't you assign the result of func_cliche_start(line) to the variable a before the if statement?
a = func_cliche_start(line)
if a:
    pass  # do stuff with 'a' and line here
The if statement will fail if func_cliche_start(line) returns None.
You can create a wrapper function to make this work.
def assign(value, lst):
    lst[0] = value
    return value

a = [None]
if assign(func_cliche_start(line), a):
    pass  #... do stuff with 'a[0]' and line here
elif assign(func_test_start(line), a):
    pass  #...
You can just loop through your processing functions; that would be easier and fewer lines :). If you want to do something different in each case, wrap that in a function and call it, e.g.
for func, proc in [(func_cliche_start, cliche_proc),
                   (func_test_start, test_proc),
                   (func_macro_start, macro_proc)]:
    a = func(line)
    if a:
        proc(a, line)
        break
I think you should put those blocks of code in functions. That way you can use a dispatcher-style approach. If you need to modify a lot of local state, use a class and methods. (If not, just use functions; but I'll assume the class case here.) So something like this:
class LineHandler(object):
    def __init__(self, state):
        self.state = state

    def handle_cliche_start(self, line):
        pass  # modify state

    def handle_test_start(self, line):
        pass  # modify state

    def handle_macro_start(self, line):
        pass  # modify state

line_handler = LineHandler(initial_state)
handlers = [line_handler.handle_cliche_start,
            line_handler.handle_test_start,
            line_handler.handle_macro_start]
tests = [func_cliche_start,
         func_test_start,
         func_macro_start]
# Materialize the pairs so they can be reused for every line
# (zip returns a one-shot iterator on Python 3).
handlers_tests = list(zip(handlers, tests))

for line in lines:
    handler_iter = ((h, t(line)) for h, t in handlers_tests)
    handler_filter = ((h, l) for h, l in handler_iter if l is not None)
    handler, line = next(handler_filter, (None, None))
    if handler:
        handler(line)
This is a bit more complex than your original code, but I think it compartmentalizes things in a much more scalable way. It does require you to maintain separate parallel lists of functions, but the payoff is that you can add as many as you want without having to write long if statements -- or calling your function twice! There are probably more sophisticated ways of organizing the above too -- this is really just a roughed-out example of what you could do. For example, you might be able to create a sorted container full of (priority, test_func, handler_func) tuples and iterate over it.
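For instance, that last idea might look roughly like this (the priority numbers are invented for the sketch):
# (priority, test_func, handler_func) triples; the lowest priority is tried
# first and the first test that matches wins.
dispatch_table = sorted([
    (10, func_cliche_start, line_handler.handle_cliche_start),
    (20, func_test_start, line_handler.handle_test_start),
    (30, func_macro_start, line_handler.handle_macro_start),
])

for line in lines:
    for priority, test, handle in dispatch_table:
        result = test(line)
        if result is not None:
            handle(result)
            break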
In any case, I think you should consider refactoring this long list of if/elif clauses.
You could take a list of functions, make it a generator and return the first truthy one:
functions = [func_cliche_start, func_test_start, func_macro_start]
functions_gen = (f(line) for f in functions)
a = next((x for x in functions_gen if x), None)
Still seems a little strange, but much less repetition.

Switch in Python [duplicate]

This question already has answers here:
Replacements for switch statement in Python?
(44 answers)
I have tried making a switch-like statement in Python, instead of having a lot of if statements.
The code looks like this:
def findStuff(cds):
    L = []
    c = 0
    for i in range(0, len(cds), 3):
        a = differencesTo(cds[i:i+3])
        result = {
            a[2][0]==1: c=i+1,
            a[2][1]==1: c=i+2,
            a[2][2]==1: c=i+3,
            a[1]==1: L.append((cds[i:i+3], a[0], c))
        }
    return L
My problem is that this does not work. (It works with if statements, but this would in my opinion be prettier.)
I have found some examples of switches in Python, and they follow this structure. Can anyone help me?
(a) I fail to see what is wrong with if...elif...else
(b) I assume that python does not have a switch statement for the same reason that Smalltalk doesn't: it's almost completely redundant, and in the case where you want to switch on types, you can add an appropriate method to your classes; and likewise switching on values should be largely redundant.
Note: I am informed in the comments that whatever Guido's reason for not creating a switch in the first place, PEPs to have it added were rejected on the basis that support for adding such a statement is extremely limited. See: http://www.python.org/dev/peps/pep-3103/
(c) If you really need switching behaviour, use a hashtable (dict) to store callables. The structure is:
switch_dict = {
    Foo: self.doFoo,
    Bar: self.doBar,
}

func = switch_dict[switch_var]
result = func()  # or if they take args, pass args
There's nothing wrong with a long if:
if switch == 'case0':
    do_case0()
elif switch == 'case1':
    do_case1()
elif switch == 'case2':
    do_case2()
...
If that's too long winded, or if you have a lot of cases, put them in a dictionary:
switch = {'case0': do_case0, 'case1': do_case1, 'case2': do_case2, ...}
switch[case_variable]()
# Alternative:
(switch[case_variable]).__call__()
If your conditions are a bit more complex, you need to think a little about your data structures. e.g.:
switch = {
    (0, 21): 'never have a pension',
    (21, 50): 'might have a pension',
    (50, 65): 'definitely have a pension',
    (65, 200): 'already collecting pension',
}

for key, value in switch.items():
    if key[0] <= case_var < key[1]:
        print(value)
The other answers are suitable for older versions of Python. For Python 3.10+ you can use match/case, which is more powerful than a general switch/case construct.
def something(val):
    match val:
        case "A":
            return "A"
        case "B":
            return "B"
        case "C":
            return "C"
        case _:
            return "Default"

something("A")
Assignment in Python is a statement and cannot be part of an expression. Also, using a dict literal in this way evaluates everything at once, which is probably not what you want. Just use ifs; you won't gain any readability by using this.
I don't know which article you found that does something like this, but it is really messy: the whole result dictionary will always be evaluated, and instead of doing only part of the work (as a switch / if does), you'll do all of the work every time (even if you use only part of the result).
Really, a fast switch statement in Python is using "if":
if case == 1:
    pass
elif case == 2:
    pass
elif case == 3:
    pass
else:
    # default case
    pass
With the dict get() method, you can have the same effect as "switch..case" in C, including a default case.
Adapting Marcin's example:
switch_dict = {
    Foo: self.doFoo,
    Bar: self.doBar,
}

func = switch_dict.get(switch_var, self.dodefault)
result = func()  # or if they take args, pass args
You can do something like what you want, but you shouldn't. That said, here's how; you can see how it does not improve things.
The biggest problem with the way you have it is that Python will evaluate your tests and results once, at the time you declare the dictionary. What you'd have to do instead is make all conditions and the resulting statements functions; this way, evaluation is deferred until you call them. Fortunately there is a way to do this inline for simple functions using the lambda keyword. Secondly, the assignment statement can't be used as a value in Python, so our action functions (which are executed if the corresponding condition function returns a truthy value) have to return a value that will be used to increment c; they can't assign to c themselves.
Also, the items in a dictionary aren't ordered, so your tests won't necessarily be performed in the order you define them, meaning you should probably use something that preserves order, such as a tuple or a list, rather than a dictionary. I am assuming you only ever want one case to execute.
So, here we go:
def findStuff(cds):
    cases = [(lambda: a[2][0] == 1, lambda: i + 1),
             (lambda: a[2][1] == 1, lambda: i + 2),
             (lambda: a[2][2] == 1, lambda: i + 3),
             (lambda: a[1] == 1, lambda: L.append((cds[i:i+3], a[0], c)) or 0)]
    L = []
    c = 0
    for i in range(0, len(cds), 3):
        a = differencesTo(cds[i:i+3])
        for condition, action in cases:
            if condition():
                c += action()
                break
    return L
Is this more readable than a sequence of if/elif statements? Nooooooooooooo. In particular, the fourth case is far less comprehensible than it should be because we are having to rely on a function that returns the increment for c to modify a completely different variable, and then we have to figure out how to get it to return a 0 so that c won't actually be modified. Uuuuuugly.
Don't do this. In fact this code probably won't even run as-is, as I deemed it too ugly to test.
While there is nothing wrong with if..else, I find "switch in Python" still an intriguing problem statement. On that, I think Marcin's (deprecated) option (c) and/or Snim2's second variant can be written in a more readable way.
For this we can declare a switch class, exploiting __init__() to declare the case we want to switch on, while __call__() takes a dict listing the (case, function) pairs:
class switch(object):
    def __init__(self, case):
        self._case = case

    def __call__(self, dict_):
        try:
            return dict_[self._case]()
        except KeyError:
            if 'else' in dict_:
                return dict_['else']()
            raise Exception('Given case wasn\'t found.')
Or, respectively, since a class with only two methods, of which one is __init__(), isn't really a class:
def switch(case):
    def cases(dict_):
        try:
            return dict_[case]()
        except KeyError:
            if 'else' in dict_:
                return dict_['else']()
            raise Exception('Given case wasn\'t found.')
    return cases
(note: choose something smarter than Exception)
With for example
def case_a():
    print('hello world')

def case_b():
    print('sth other than hello')

def default():
    print('last resort')
you can call
switch('c') ({
    'a': case_a,
    'b': case_b,
    'else': default
})
which, for this particular example would print
last resort
This doesn't behave like a C switch in that there is no break for the different cases, because each case executes only the function declared for that particular case (i.e. break is implicitly always called). Secondly, each case can list exactly one function, which will be executed when that case is found.

Is this a "pythonic" method of executing functions as a python switch statement for tuple values?

I have a situation where I have six possible situations which can relate to four different results. Instead of using an extended if/else statement, I was wondering if it would be more pythonic to use a dictionary to call the functions that I would call inside the if/else as a replacement for a "switch" statement, like one might use in C# or php.
My switch statement depends on two values which I'm using to build a tuple, which I'll in turn use as the key to the dictionary that will function as my "switch". I will be getting the values for the tuple from two other functions (database calls), which is why I have the example one() and zero() functions.
This is the code pattern I'm thinking of using, which I stumbled on while playing around in the Python shell:
def one():
    # Simulated database value
    return 1

def zero():
    return 0

def run():
    # Shows the correct function ran
    print "RUN"
    return 1

def walk():
    print "WALK"
    return 1

def main():
    switch_dictionary = {}
    # These are the values that I will want to use to decide
    # which functions to use
    switch_dictionary[(0,0)] = run
    switch_dictionary[(1,1)] = walk
    # These are the tuples that I will build from the database
    zero_tuple = (zero(), zero())
    one_tuple = (one(), one())
    # These actually run the functions. In practice I will simply
    # have the one tuple which is dependent on the database information
    # to run the function that I defined before
    switch_dictionary[zero_tuple]()
    switch_dictionary[one_tuple]()
I don't have the actual code written or I would post it here, as I would like to know if this method is considered a python best practice. I'm still a python learner in university, and if this is a method that's a bad habit, then I would like to kick it now before I get out into the real world.
Note, the result of executing the code above is as expected, simply "RUN" and "WALK".
edit
For those of you who are interested, this is how the relevant code turned out. It's being used on a google app engine application. You should find the code is considerably tidier than my rough example pattern. It works much better than my prior convoluted if/else tree.
def GetAssignedAgent(self):
    tPaypal = PaypalOrder()  # Parent class for this function
    tAgents = []
    Switch = {}
    # These are the different methods for the actions to take
    Switch[(0,0)] = tPaypal.AssignNoAgent
    Switch[(0,1)] = tPaypal.UseBackupAgents
    Switch[(0,2)] = tPaypal.UseBackupAgents
    Switch[(1,0)] = tPaypal.UseFullAgents
    Switch[(1,1)] = tPaypal.UseFullAndBackupAgents
    Switch[(1,2)] = tPaypal.UseFullAndBackupAgents
    Switch[(2,0)] = tPaypal.UseFullAgents
    Switch[(2,1)] = tPaypal.UseFullAgents
    Switch[(2,2)] = tPaypal.UseFullAgents
    # I'm only interested in the number up to 2, which is why
    # I can consider the Switch dictionary to be all options available.
    # The "state" is the current status of the customer agent system
    tCurrentState = (tPaypal.GetNumberofAvailableAgents(),
                     tPaypal.GetNumberofBackupAgents())
    tAgents = Switch[tCurrentState]()
Consider this idiom instead:
>>> def run():
...     print 'run'
...
>>> def walk():
...     print 'walk'
...
>>> def talk():
...     print 'talk'
...
>>> switch = {'run': run, 'walk': walk, 'talk': talk}
>>> switch['run']()
run
I think it is a little more readable than the direction you are heading.
edit
And this works as well:
>>> switch={0:run,1:walk}
>>> switch[0]()
run
>>> switch[max(0,1)]()
walk
You can even use this idiom for a switch / default type structure:
>>> default_value = 1
>>> try:
...     switch[49]()
... except KeyError:
...     switch[default_value]()
...
Or (the less readable, more terse):
>>> switch[switch.get(49,default_value)]()
walk
edit 2
Same idiom, extended to your comment:
>>> def get_t1():
...     return 0
...
>>> def get_t2():
...     return 1
...
>>> switch = {(get_t1(), get_t2()): run}
>>> switch
{(0, 1): <function run at 0x100492d70>}
Readability matters
It is a reasonably common python practice to dispatch to functions based on a dictionary or sequence lookup.
Given your use of indices for lookup, a list of lists would also work:
switch_list = [[run, None], [None, walk]]
...
switch_list[zero_tuple[0]][zero_tuple[1]]()
What is considered most Pythonic is whatever maximizes clarity while meeting the other operational requirements. In your example, the lookup tuple doesn't appear to have intrinsic meaning, so the operational intent is being lost in a magic constant. Try to make sure the business logic doesn't get lost in your dispatch mechanism. Using meaningful names for the constants would likely help.
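As a small sketch of that last point (the constant names here are invented), even the toy example reads better once the magic tuple entries are named:
NO_AGENTS, HAS_AGENTS = 0, 1

switch_dictionary = {
    (NO_AGENTS, NO_AGENTS): run,
    (HAS_AGENTS, HAS_AGENTS): walk,
}

switch_dictionary[(NO_AGENTS, NO_AGENTS)]()  # RUN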
