Optimal way to store optional additional data in Python

I have a rather easy problem but wish to find an elegant solution that does not immediately come to mind. Let's say I have a function that takes some arguments and performs calculations:
def f(a, b, c):
    # preprocessing of some sort
    d_discarded = ...  # initialized and maybe some computations as well
    for i in range(1000):
        d_discarded = ...
        final_value_update = ...
    return final_value_update
Upon the user's request, I would also like to iteratively store and return the value of d_discarded, but only when requested, not necessarily always. How could I envision an efficient way to do so?
A naive solution would be adding if statements and an additional argument like:
def f(a, b, c, keep_d=False):
    # preprocessing of some sort
    d_discarded = ...  # initialized and maybe some computations as well
    if keep_d:
        l_discarded = []
        l_discarded.append(d_discarded)
    for i in range(1000):
        d_discarded = ...
        final_value_update = ...
        if keep_d:
            l_discarded.append(d_discarded)
    if keep_d:
        return final_value_update, l_discarded
    return final_value_update
But this is neither efficient nor elegant, as it evaluates the if statement 1002 times. I can certainly do it this way, but I wish to learn a cleverer approach.
Any consideration is appreciated. I understand the problem is rather broad, but I chose to leave it as it is because it applies to almost any setting.
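One pattern sometimes suggested for this kind of situation is to decide the collection behaviour once, before the loop, so the loop body itself stays branch-free. A minimal sketch, assuming a no-op collector is acceptable (the keep helper below is invented purely for illustration):
def f(a, b, c, keep_d=False):
    # Choose the collector once up front; when the intermediate values are
    # not requested, a do-nothing stand-in is used instead of a real list.
    l_discarded = [] if keep_d else None
    keep = l_discarded.append if keep_d else (lambda value: None)

    d_discarded = ...  # placeholder, as in the question
    keep(d_discarded)
    for i in range(1000):
        d_discarded = ...
        final_value_update = ...
        keep(d_discarded)

    if keep_d:
        return final_value_update, l_discarded
    return final_value_update
Whether this actually beats the repeated if checks would need measuring; a predictable branch is usually cheap in CPython.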

Related

Python: pipelining functions with multiple return/input values, or use OOP? Best Practices?

We have a data 'processing' function and a 'serializing' function. Currently the processor generates and returns 4 different data structures, all from the same large data source input, and all the outputs are related to each other.
Trying to separate out the 'data processing' from the 'serializing' step has gotten a bit... messy. Looking for the best practice on what to do here.
def process(input):
    ...
    return a, b, c, d

def serialize(a, b, c):
    ...
    # Different serialization patterns for each of a-c.

a, b, c, d = process(input)
serialize(a, b, c)
go_on_to_do_other_things(d)
That feels janky.
Should I instead use a class where a,b,c,d are member variables?
class VeryImportantDataProcessor:
    def process(self, input):
        self.a = ...
        self.b = ...
        ...

    def serialize(self):
        s3.write(self.a)
        convoluted_serialize(self.b)
        ...

vipd = VeryImportantDataProcessor()
vipd.process(input)
vipd.serialize()
Keen to hear your thoughts on what is best here!
Note after processing and serializing, the code goes on to use variable d for further unrelated shenanigans, but a, b, c are all finished once they've been saved. Not sure if that changes anything.

Generate python function with different arguments

Background
I have a function that takes a number of parameters and returns an error measure which I then want to minimize (using scipy.optimize.leastsq, but that is beside the point right now).
As a toy example, let's assume the function I want to optimize takes the four parameters a, b, c, d:
def f(a, b, c, d):
    err = a*b - c*d
    return err
The optimizer then wants a function with the signature func(x, *args), where x is the parameter vector.
That is, my function is currently written like:
def f_opt(x, *args):
    a, b, c, d = x
    err = a*b - c*d
    return err
But, now I want to do a number of experiments where I fix some parameters while keeping some parameters free in the optimization step.
I could of course do something like:
def f_ad_free(x, b, c):
    a, d = x
    return f(a, b, c, d)
But this will be cumbersome since I have over 10 parameters, which means the number of possible combinations of free vs. fixed parameters will potentially be quite large.
First approach using dicts
One solution I had was to write my inner function f with keyword args instead of positional args and then wrap the solution like this:
def generate(func, all_param, fixed_param):
    param_dict = {k: None for k in all_param}
    free_param = [param for param in all_param if param not in fixed_param]
    def wrapped(x, *args):
        param_dict.update({k: v for k, v in zip(fixed_param, args)})
        param_dict.update({k: v for k, v in zip(free_param, x)})
        return func(**param_dict)
    return wrapped
Creating a function that fixes 'b' and 'c' then turns into the following:
all_params = ['a', 'b', 'c', 'd']
f_bc_fixed = generate(f_inner, all_params, ['b', 'c'])
a = 1
b = 2
c = 3
d = 4
f_bc_fixed((a, d), b, c)
Question time!
My question is whether anyone can think of a neater way to solve this. Since the final function is going to be run in an optimization step, I can't accept too much overhead for each function call.
The time it takes to generate the optimization function is irrelevant.
I can think of several ways to avoid using a closure as you do above, though after doing some testing, I'm not sure either of these will be faster. One approach might be to skip the wrapper and just write a function that accepts
A vector
A list of free names
A dictionary mapping names to values.
Then do something very like what you do above, but in the function itself:
def f(free_vals, free_names, params):
    params.update(zip(free_names, free_vals))
    err = params['a'] * params['b'] - params['c'] * params['d']
    return err
For code that uses variable names multiple times, make vars local up front, e.g.
a = params['a']
b = params['b']
and so on. This might seem cumbersome, but it has the advantage of making everything explicit, avoiding the kinds of namespace searches that could make closures slow.
Then pass a list of free names and a dictionary of fixed params via the args parameter to optimize.leastsq. (Note that the params dictionary is mutable, which means that there could be side effects in theory; but in this case it shouldn't matter because only the free params are being overwritten by update, so I omitted the copy step for the sake of speed.)
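A rough sketch of what that call could look like (the names fixed, free_names and x0 are made up here; f is the version above that takes (free_vals, free_names, params)):
from scipy.optimize import leastsq

# Hypothetical setup: fix b, c and d, leaving only a free.
fixed = {'b': 2.0, 'c': 3.0, 'd': 4.0}
free_names = ['a']
x0 = [1.0]  # initial guess for the free parameter(s)

best, flag = leastsq(f, x0, args=(free_names, fixed))
# Note: with more free parameters, the objective must return at least as many
# residuals as there are entries in x; the scalar toy err is only a stand-in.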
The main downsides of this approach are that it shifts some complexity into the call to optimize.leastsq, and it makes your code less reusable. A second approach avoids those problems though it might not be quite as fast: using a callable class.
class OptWrapper(object):
    def __init__(self, func, free_names, **fixed_params):
        self.func = func
        self.free_names = free_names
        self.params = fixed_params

    def __call__(self, x, *args):
        self.params.update(zip(self.free_names, x))
        return self.func(**self.params)
You can see that I simplified the parameter structure for __init__; the fixed params are passed here as keyword arguments, and the user must ensure that free_names and fixed_params don't have overlapping names. I think the simplicity is worth the tradeoff but you can easily enforce the separation between the two just as you did in your wrapper code.
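A hypothetical usage example, assuming f_inner is the keyword-argument version of f from the question:
# Fix b and c; a and d stay free and arrive through the parameter vector x.
f_bc_fixed = OptWrapper(f_inner, ['a', 'd'], b=2, c=3)
f_bc_fixed((1, 4))  # equivalent to f_inner(a=1, b=2, c=3, d=4)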
I like this second approach best; it has the flexibility of your closure-based approach, but I find it more readable. All the names are in (or can be accessed through) the local namespace, which I thought would speed things up -- but after some testing I think there's reason to believe that the closure approach will still be faster than this; accessing the __call__ method seems to add about 100 ns of overhead per call. I would strongly recommend testing if performance is a real issue.
Your generate function is basically the same as functools.partial, which is what I would use here.
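For completeness, a hedged sketch of what that might look like; partial pre-binds the fixed keyword arguments, though a thin adapter is still needed to unpack the optimizer's parameter vector into the free names (f_inner and f_opt below are illustrative, not code from the question):
from functools import partial

def f_inner(a=None, b=None, c=None, d=None):
    return a * b - c * d

f_bc_fixed = partial(f_inner, b=2, c=3)  # b and c are now baked in

def f_opt(x, *args):
    a, d = x  # map the optimizer's parameter vector onto the free names
    return f_bc_fixed(a=a, d=d)

f_opt((1, 4))  # -> 1*2 - 3*4 = -10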

Is there a way to get the return value of a function and test its "nonzero" at the same time?

I have code that looks like this:
if func_cliche_start(line):
    a = func_cliche_start(line)
    # ... do stuff with 'a' and line here
elif func_test_start(line):
    a = func_test_start(line)
    # ... do stuff with a and line here
elif func_macro_start(line):
    a = func_macro_start(line)
    # ... do stuff with a and line here
...
Each of the func_blah_start functions either return None or a string (based on the input line). I don't like the redundant call to func_blah_start as it seems like a waste (func_blah_start is "pure", so we can assume no side effects). Is there a better idiom for this type of thing, or is there a better way to do it?
Perhaps I'm wrong (my C is rusty), but I thought that you could do something like this in C:
int a;
if (a = myfunc(input)) { /* do something with a and input here */ }
is there a python equivalent?
Why don't you assign the result of func_cliche_start(line) to the variable a before the if statement?
a = func_cliche_start(line)
if a:
    pass  # do stuff with 'a' and line here
The if statement will fail if func_cliche_start(line) returns None.
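For what it's worth, Python 3.8 later added assignment expressions (PEP 572), which allow assigning and testing in a single line much like the C idiom in the question; a minimal sketch, reusing the question's names:
# Requires Python 3.8+; func_cliche_start and line come from the question.
if (a := func_cliche_start(line)) is not None:
    ...  # do stuff with 'a' and line here
elif (a := func_test_start(line)) is not None:
    ...  # do stuff with 'a' and line here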
You can create a wrapper function to make this work.
def assign(value, lst):
    lst[0] = value
    return value

a = [None]
if assign(func_cliche_start(line), a):
    pass  # ... do stuff with 'a[0]' and line here
elif assign(func_test_start(line), a):
    pass  # ...
You can just loop through your processing functions; that would be easier and fewer lines :). If you want to do something different in each case, wrap that in a function and call it, e.g.:
for func, proc in [(func_cliche_start, cliche_proc),
                   (func_test_start, test_proc),
                   (func_macro_start, macro_proc)]:
    a = func(line)
    if a:
        proc(a, line)
        break
I think you should put those blocks of code in functions. That way you can use a dispatcher-style approach. If you need to modify a lot of local state, use a class and methods. (If not, just use functions; but I'll assume the class case here.) So something like this:
from itertools import dropwhile

class LineHandler(object):
    def __init__(self, state):
        self.state = state

    def handle_cliche_start(self, line):
        pass  # modify state

    def handle_test_start(self, line):
        pass  # modify state

    def handle_macro_start(self, line):
        pass  # modify state

line_handler = LineHandler(initial_state)
handlers = [line_handler.handle_cliche_start,
            line_handler.handle_test_start,
            line_handler.handle_macro_start]
tests = [func_cliche_start,
         func_test_start,
         func_macro_start]
handlers_tests = zip(handlers, tests)

for line in lines:
    handler_iter = ((h, t(line)) for h, t in handlers_tests)
    handler_filter = ((h, l) for h, l in handler_iter if l is not None)
    handler, line = next(handler_filter, (None, None))
    if handler:
        handler(line)
This is a bit more complex than your original code, but I think it compartmentalizes things in a much more scalable way. It does require you to maintain separate parallel lists of functions, but the payoff is that you can add as many as you want without having to write long if statements -- or calling your function twice! There are probably more sophisticated ways of organizing the above too -- this is really just a roughed-out example of what you could do. For example, you might be able to create a sorted container full of (priority, test_func, handler_func) tuples and iterate over it.
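For instance, one such variation might look like the following (a hedged sketch reusing the names from the code above; the priorities and the dispatch_table name are invented for illustration):
# Hypothetical priority-ordered dispatch table of (priority, test, handler).
dispatch_table = sorted(
    [
        (10, func_cliche_start, line_handler.handle_cliche_start),
        (20, func_test_start, line_handler.handle_test_start),
        (30, func_macro_start, line_handler.handle_macro_start),
    ],
    key=lambda entry: entry[0],
)

for line in lines:
    for _priority, test, handle in dispatch_table:
        result = test(line)
        if result is not None:
            handle(result)
            break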
In any case, I think you should consider refactoring this long list of if/elif clauses.
You could take a list of functions, make it a generator and return the first truthy result:
functions = [func_cliche_start, func_test_start, func_macro_start]
functions_gen = (f(line) for f in functions)
a = next((x for x in functions_gen if x), None)
Still seems a little strange, but much less repetition.

Python functional evaluation efficiency

If I do this:
x = [(t, some_very_complex_computation(y)) for t in z]
Apparently some_very_complex_computation(y) is not dependent on t. So it should be evaluated only once. Is there any way to make Python aware of this, so it won't evaluate some_very_complex_computation(y) for every iteration?
Edit: I really want to do that in one line...
Usually you should follow San4ez's advice and just use a temporary variable here. I will still present a few techniques that might prove useful under certain circumstances:
In general, if you want to bind a name just for a sub-expression (which is usually why you need a temporary variable), you can use a lambda:
x = (lambda result=some_very_complex_computation(y): [(t, result) for t in z])()
In this particular case, the following is a quite clean and readable solution:
x = zip(z, itertools.repeat(some_very_complex_computation(y)))
A general note about automatic optimizations like these
In a dynamic language like Python, an implementation would have a very hard time to figure out that some_very_complex_computation is referentially transparent, that is, that it will always return the same result for the same arguments. You might want to look into a functional language like Haskell if you want magic like that.
"Explicit" pureness: Memoization
What you can do however is make some_very_complex_computation explicitly cache its return values for recent arguments:
from functools import lru_cache

@lru_cache()
def some_very_complex_computation(y):
    ...
This is Python 3. In Python 2, you'd have to write the decorator yourself:
from functools import wraps

def memoize(f):
    cache = {}
    @wraps(f)
    def memoized(*args):
        if args in cache:
            return cache[args]
        res = cache[args] = f(*args)
        return res
    return memoized

@memoize
def some_very_complex_computation(x):
    ...
No, you should save the value in a variable:
result = some_very_complex_computation(y)
x = [(t, result) for t in z]
I understand the sometimes perverse urge to get everything into one line, but at the same time it is good to keep things readable. You may consider this more readable than the lambda version:
x = [(t, s) for s in [some_very_complex_computation(y)] for t in z]
However, you are probably better going for the answer by San4ez as being simple, readable (and possibly faster than creating and iterating through a one element list).
You can either:
Move the call out of the list comprehension
or
Use memoization (i.e. when some_very_complex_computation(y) gets called, store the result in a dictionary, and if it gets called again with the same value, just return the value stored in the dict).
TL;DR version
zip(z, [long_computation(y)] * len(z))
Original answer:
As a rule of thumb, if you have some computation with a long execution time, it would be a good idea to cache the result directly in the function like this:
_cached_results = {}

def computation(v):
    if v in _cached_results:
        return _cached_results[v]
    # otherwise do the computation here...
    _cached_results[v] = result
    return result
This would solve your problem too.
On one-liners
Doing one-liners for the sake of them is poor coding, yet... if you really wanted to do it in one line:
>>> def func(v):
...     print 'executing func'
...     return v * 2
...
>>> z = [1, 2, 3]
>>> zip(z, [func(10)] * len(z))
executing func
[(1, 20), (2, 20), (3, 20)]
@San4ez has given the traditional, correct, simple, and beautiful answer.
In the spirit of the one-liner though, here's a technique for putting it all in one statement. The core idea is to use a nested for-loop to pre-evaluate subexpressions:
result = [(t, result) for result in [some_very_complex_computation(y)] for t in z]
If that blows your mind, you could just use a semicolon to put multiple statements on one line:
result = some_very_complex_computation(y); x = [(t, result) for t in z]
It can't know whether the function has side effects and changes from run to run, so you have to move the call out of the list comprehension manually.

Is this a "pythonic" method of executing functions as a python switch statement for tuple values?

I have a situation where I have six possible situations which can relate to four different results. Instead of using an extended if/else statement, I was wondering if it would be more pythonic to use a dictionary to call the functions that I would call inside the if/else as a replacement for a "switch" statement, like one might use in C# or php.
My switch statement depends on two values which I'm using to build a tuple, which I'll in turn use as the key to the dictionary that will function as my "switch". I will be getting the values for the tuple from two other functions (database calls), which is why I have the example one() and zero() functions.
This is the code pattern I'm thinking of using which I stumbled on with playing around in the python shell:
def one():
    # Simulated database value
    return 1

def zero():
    return 0

def run():
    # Shows the correct function ran
    print "RUN"
    return 1

def walk():
    print "WALK"
    return 1

def main():
    switch_dictionary = {}
    # These are the values that I will want to use to decide
    # which functions to use
    switch_dictionary[(0, 0)] = run
    switch_dictionary[(1, 1)] = walk
    # These are the tuples that I will build from the database
    zero_tuple = (zero(), zero())
    one_tuple = (one(), one())
    # These actually run the functions. In practice I will simply
    # have the one tuple which is dependent on the database information
    # to run the function that I defined before
    switch_dictionary[zero_tuple]()
    switch_dictionary[one_tuple]()
I don't have the actual code written or I would post it here, as I would like to know if this method is considered a python best practice. I'm still a python learner in university, and if this is a method that's a bad habit, then I would like to kick it now before I get out into the real world.
Note, the result of executing the code above is as expected, simply "RUN" and "WALK".
edit
For those of you who are interested, this is how the relevant code turned out. It's being used in a Google App Engine application. You should find the code considerably tidier than my rough example pattern. It works much better than my prior convoluted if/else tree.
def GetAssignedAgent(self):
    tPaypal = PaypalOrder()  # Parent class for this function
    tAgents = []
    Switch = {}
    # These are the different methods for the actions to take
    Switch[(0, 0)] = tPaypal.AssignNoAgent
    Switch[(0, 1)] = tPaypal.UseBackupAgents
    Switch[(0, 2)] = tPaypal.UseBackupAgents
    Switch[(1, 0)] = tPaypal.UseFullAgents
    Switch[(1, 1)] = tPaypal.UseFullAndBackupAgents
    Switch[(1, 2)] = tPaypal.UseFullAndBackupAgents
    Switch[(2, 0)] = tPaypal.UseFullAgents
    Switch[(2, 1)] = tPaypal.UseFullAgents
    Switch[(2, 2)] = tPaypal.UseFullAgents
    # I'm only interested in the number up to 2, which is why
    # I can consider the Switch dictionary to be all options available.
    # The "state" is the current status of the customer agent system
    tCurrentState = (tPaypal.GetNumberofAvailableAgents(),
                     tPaypal.GetNumberofBackupAgents())
    tAgents = Switch[tCurrentState]()
Consider this idiom instead:
>>> def run():
...     print 'run'
...
>>> def walk():
...     print 'walk'
...
>>> def talk():
...     print 'talk'
...
>>> switch = {'run': run, 'walk': walk, 'talk': talk}
>>> switch['run']()
run
I think it is a little more readable than the direction you are heading.
edit
And this works as well:
>>> switch = {0: run, 1: walk}
>>> switch[0]()
run
>>> switch[max(0, 1)]()
walk
You can even use this idiom for a switch / default type structure:
>>> default_value = 1
>>> try:
...     switch[49]()
... except KeyError:
...     switch[default_value]()
Or (the less readable, more terse):
>>> switch.get(49, switch[default_value])()
walk
edit 2
Same idiom, extended to your comment:
>>> def get_t1():
...     return 0
...
>>> def get_t2():
...     return 1
...
>>> switch = {(get_t1(), get_t2()): run}
>>> switch
{(0, 1): <function run at 0x100492d70>}
Readability matters
It is a reasonably common python practice to dispatch to functions based on a dictionary or sequence lookup.
Given your use of indices for lookup, a list of lists would also work:
switch_list = [[run, None], [None, walk]]
...
switch_list[zero_tuple[0]][zero_tuple[1]]()
What is considered most Pythonic is whatever maximizes clarity while meeting the other operational requirements. In your example, the lookup tuple doesn't appear to have intrinsic meaning, so the operational intent gets lost behind a magic constant. Try to make sure the business logic doesn't get lost in your dispatch mechanism. Using meaningful names for the constants would likely help.
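For instance, a hedged sketch along those lines, reusing the run, walk and zero functions from the question (the constant names NO_AGENTS and FULL_AGENTS are invented purely for illustration):
NO_AGENTS, FULL_AGENTS = 0, 1

switch_dictionary = {
    (NO_AGENTS, NO_AGENTS): run,
    (FULL_AGENTS, FULL_AGENTS): walk,
}

current_state = (zero(), zero())    # values that would come from the database
switch_dictionary[current_state]()  # dispatches to run()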
