Checking input values to methods to reduce the number of computations - python

I have a number of methods that are independent of each other but are needed collectively to compute an output. As things stand, when a variable used by any of the methods changes, all of the methods are called again during the computation, which is slow and expensive. Here is some quick pseudo-code of what I have:
# o represents an origin variable
# valueA represents a variable which can change
def a(o, valueA):
    # calculations
    return resultA

def b(o, valueB):
    # calculations
    return resultB

def c(o, valueC1, valueC2):
    # calculations
    return resultC

def compute(A, B, C1, C2):
    one = self.a(o, A)
    two = self.b(one, B)
    three = self.c(two, C1, C2)
    return img
For example, when the value of C1 changes, calling compute recalculates all of the methods, even though a and b have not changed. What I would like is some way of checking which of the values A, B, C1, C2 have changed between each call to compute.
I have considered storing the values in a list and, on the next call, comparing it to the new values being passed to compute. E.g. on the 1st call list = [1, 2, 3, 4], on the 2nd call list = [1, 3, 4, 5], so b and c need recalculating but a is unchanged. However, I am unsure how to go from that comparison to deciding which methods to call.
Some background on my particular application in case it is of use: I have a wxPython window with sliders that determine values for image processing, and the image is redrawn on every change of these sliders.
What is the best way to compare successive calls to compute and avoid these wasted repeated computations?

If I had to solve this, I would use a dictionary, where the key is the input value valueX (or a tuple of values if a function takes more than one, as c does in your example) and the value is the result of the function.
So you would have something like this:
{valueA: resultA, valueB: resultB, (valueC1, valueC2): resultC}
To do that, each function has to store its result:
def a(o, valueA):
    # calculations
    dic[valueA] = resultA
    return resultA

[...]

def c(o, valueC1, valueC2):
    # calculations
    dic[(valueC1, valueC2)] = resultC  # a tuple, not a list, so it can be used as a dict key
    return resultC
And in compute you can try to look up the value for each parameter, and calculate it only if it is not already there:
def compute(A, B, C1, C2):
    one = dic.get(A) if dic.get(A) else self.a(o, A)
    two = dic.get(B) if dic.get(B) else self.b(one, B)
    three = dic.get((C1, C2)) if dic.get((C1, C2)) else self.c(two, C1, C2)
    return img
P.S.: this is a "crude" implementation of the memoized functions that @holdenweb pointed out in his comment.

You could consider making the methods memoizing functions that use a dict to look up the results of previously stored computations (probably best in the class namespace to allow memoizing to optimize across all instances).
The memory requirements could be quite severe, however, if the methods are called with many arguments, in which case you might want to adopt a "publish and subscribe" pattern to try and make your computation more "systolic" (driven by changes in the data, loosely).
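For the memoizing-dict suggestion, here is a minimal sketch, assuming the methods from the question live on a class (the class name ImageProcessor is made up) and that the slider values are hashable:
class ImageProcessor:
    _cache = {}  # class-level, so results are shared across all instances

    def _memo(self, key, func, *args):
        # return a cached result for `key`, computing and storing it on a miss
        if key not in self._cache:
            self._cache[key] = func(*args)
        return self._cache[key]

    def compute(self, o, A, B, C1, C2):
        one = self._memo(('a', A), self.a, o, A)
        two = self._memo(('b', A, B), self.b, one, B)  # key includes A because `one` depends on it
        three = self._memo(('c', A, B, C1, C2), self.c, two, C1, C2)
        return three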
Those are a couple of approaches; I'm sure SO will think of more.


Retrieving a value from a called function

I'm using Python3.
Let's say we have four functions, a, b, c and d. Now assume that the callstack is as following:
a calls b, which calls c, which calls d. Function d calculates a parameter x which is needed later on in function a, but other than that, x is completely irrelevant for b and c.
My question is: what is the best way to "get" the variable x back to function a? Intuitively, I could let all the functions return x as well, so that it eventually becomes accessible in function a. However, this feels bad, because x is completely irrelevant to the other functions. Could I perhaps work with something like pointers? I just want to know the most professional way to solve such a case.
There's no way to magically pass values between functions. A couple of options are:
The intermediate functions receive x from the function they call, just so they can return it again.
The intermediate functions take an extra mutable argument which can hold x, and d fills it in.
x is calculated in a and passed to b, c, and d so that d can use it.
All of those functions are methods of a class, which stores x as an attribute, since attributes are accessible to all of the methods (see the sketch after this list).
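A minimal sketch of that last option (the class name Pipeline and the method bodies are made up for illustration):
class Pipeline:
    def a(self):
        ...  # some work
        self.b()
        print('x =', self.x)  # x was set by d() further down the call chain

    def b(self):
        ...  # some work
        self.c()

    def c(self):
        ...  # some work
        self.d()

    def d(self):
        ...  # some work
        self.x = 5  # store x on the instance so that a() can read it

Pipeline().a()  # prints: x = 5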
All of these, though, are "bad" practice, just as you felt. As many of the comments said, there is probably a better way to separate the code between the functions so that you don't have to share that value. Without a more concrete example, no one can really help you here, but some possibilities are:
Flatten the call graph so that a calls all of the other functions directly; then d can return x directly to a.
If x is easy / cheap to calculate, just calculate it twice for a and d separately (and maybe have a separate function to calculate it).
Move the code from a that uses x into d.
You could have the function d assign the value to an attribute x on the function object a:
def a():
    ...  # some work
    b()
    print('x =', a.x)

def b():
    ...  # some work
    c()

def c():
    ...  # some work
    d()

def d():
    ...  # some work
    a.x = 5  # make the value 5 available to the function `a` through the attribute `x`

a()
Output:
x = 5

How to apply recursion function in land subdivision?

I've written a subdivision function that splits a polygon using the bounding-box method: subdivision(coordinates) returns subblockL and subblockR (left and right). If I want to repeat this subdivision until each block reaches an area of less than 200, I need to use recursion.
For example:
B = subdivision(A)[0], C = subdivision(B)[0], D = subdivision(C)[0]... until it reaches an area close to 200 (in other words,
subdivision(subdivision(subdivision(A)[0])[0])[0]...).
How can I simplify the repetition of subdivision? And how can I apply subdivision to every block instead of a single block?
while area(subdivision(A)[0]) < 200:
    for i in range(A):
        subdivision(i)[0]

def sd_recursion(x):
    if x == subdivision(A):
        return subdivision(A)
    else:
        return
I'm not sure what function to put in
"What function to put in" is the function itself; that's the definition of recursion.
def sd_recursive(coordinates):
    if area(coordinates) < 200:
        return [coordinates]
    else:
        a, b = subdivision(coordinates)
        return sd_recursive(a) + sd_recursive(b)  # list concatenation, not arithmetic addition
To paraphrase, if the area is less than 200, simply return the polygon itself. Otherwise, divide the polygon into two parts, and return ... the result of applying the same logic to each part in turn.
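For example, a minimal driver, assuming subdivision and area are your existing functions and A is the starting polygon:
blocks = sd_recursive(A)  # a flat list of sub-blocks, each with area < 200
for block in blocks:
    print(area(block))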
Recursive functions are challenging because recursive functions are challenging. Until you have wrapped your head around this apparently circular argument, things will be hard to understand. The crucial design point is to have a "base case" which does not recurse, which in other words escapes the otherwise infinite loop of the function calling itself under some well-defined condition. (There's also indirect recursion, where X calls Y which calls X which calls Y ...)
If you are still having trouble, look at one of the many questions about debugging recursive functions. For example, Understanding recursion in Python
I assumed the function should return a list in every case, but there are multiple ways to arrange this, just so long as all parts of the code obey the same convention. Which way to prefer also depends on how the coordinates are represented and what's convenient for your intended caller.
(In Python, ['a'] + ['b'] returns ['a', 'b'] so this is not arithmetic addition of two lists, it's just a convenient way to return a single list from combining two other lists one after the other.)
Recursion can always be unrolled; the above can be refactored to
def sd_unrolled(coordinates):
    # `coordinates` is a work list of polygons still to be processed
    result = []
    while coordinates:
        if area(coordinates[0]) < 200:
            result.append(coordinates[0])  # small enough: keep this block as-is
            coordinates = coordinates[1:]
            continue                       # move on to the next block
        a, b = subdivision(coordinates[0])
        coordinates = [a, b] + coordinates[1:]
    return result
This is tricky in its own right (though it could perhaps be simplified by introducing a few temporary variables), and pretty inefficient, or at least inelegant: we keep copying slices of the coordinates list to maintain the tail while we manipulate the head (the first element of the list) by splitting it until each piece is small enough.

Matrix Implementation in Python

I am trying to implement a Matrix of Complex numbers in Python, but I am stuck at a particular point in the program. I have two modules, Matrix.py and Complex.py, and one test program, test.py. The module implementation is hosted on GitHub at https://github.com/Soumya1234/Math_Repository/tree/dev_branch and my test.py is given below:
from Matrix import *
from Complex import *

C_init = Complex(2, 0)
print C_init
m1 = Matrix(2, 2, C_init)
m1.print_matrix()
C2 = Complex(3, 3)
m1.addValue(1, 1, C2)  # This is where all values of the matrix get changed,
                       # but I want only the (1, 1)th value to be changed to C2
m1.print_matrix()
As mentioned in the comment, addValue(self, i, j, value) in Matrix.py is supposed to change the value at the (i, j)th position only. Then why is the entire matrix getting changed? What am I doing wrong?
If you don't want to implicitly make copies of init_value you could also change Matrix.addValue to this:
def addValue(self, i, j, value):
    self.matrix_list[i][j] = value
This is a little more in line with how your Matrix currently works. It's important to remember that a Complex object can't implicitly make a copy of itself, so matrix_list actually holds many references to one and the same object in memory; if you modify that object in place, the change shows up everywhere.
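A quick illustration of that aliasing, with a stand-in Complex class just for the demonstration:
class Complex:  # stand-in for your Complex class
    def __init__(self, real, imag):
        self.real, self.imag = real, imag

c = Complex(2, 0)
row = [c, c, c]          # three references to one object, not three copies
row[0].real = 99
print(row[1].real)       # 99 -- modifying "one entry" changed them all
print(row[0] is row[2])  # True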
Another tip - try to use the __init__ of Complex meaningfully. You could change this kind of thing:
def __sub__(self, complex_object):
    difference = Complex(0, 0)
    difference.real = self.real - complex_object.real
    difference.imag = self.imag - complex_object.imag
    return difference
To this:
def __sub__(self, other):
    return Complex(self.real - other.real,
                   self.imag - other.imag)
Which is more concise, doesn't use temporary initialisations or variables, and I find more readable. It might also benefit you to add some kind of .copy() method to Complex, which returns a new Complex object with the same values.
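For example, such a copy method could be as simple as this (assuming real and imag are the only state a Complex carries):
def copy(self):
    # return an independent Complex holding the same values
    return Complex(self.real, self.imag)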
On your methods for string representation - I'd recommend displaying the real and imaginary values as floats, not integers, because they should be real numbers. Here I've rounded them to 2 decimal places:
def __repr__(self):
    return "%.2f+j%.2f" % (self.real, self.imag)
Note also that you shouldn't actually need __str__ if it does the same thing as __repr__; and your show method seems to be doing roughly the same thing yet again.
Also, in Python there are no truly private variables, so instead of getReal it's entirely possible to just access .real directly. If you really need getter/setter methods, look into @property.
As you're already doing some operator overloading, I would also recommend implementing addValue via __setitem__, which is how index assignment fits into Python's data model. If you do this:
def __setitem__(self, inds, value):
    i, j = inds
    self.matrix_list[i][j] = value
You could change the addValue in test.py to this:
m1[1, 1] = C2
The problem is that in your matrix initialization method you put the same object C_init into every entry of your matrix. Since you are not storing a copy of the value in each entry but the item itself, you run into a problem later: the item stored at (0, 0) is the same object as in all the other entries, so you change all entries together when you only want to change one.
You have to modify your initialization method like this:
def __init__(self, x, y, init_value):
    self.row = x
    self.column = y
    self.matrix_list = [[Complex(init_value.getReal(), init_value.getComplex())
                         for i in range(y)] for j in range(x)]
In this way you fill your matrix with entries of the same value, but each entry is its own object rather than a reference to the same one every time.
Furthermore: as practice this is a good exercise, but if you want to use the Matrix class to actually compute something, you would be better off using NumPy arrays.

Finding an abstraction for repetitive code: Bootstrap analysis

Intro
There is a pattern that I use all the time in my Python code which analyzes
numerical data. All implementations seem overly redundant or very cumbersome or
just do not play nicely with NumPy functions. I'd like to find a better way to
abstract this pattern.
The Problem / Current State
A method of statistical error propagation is the bootstrap method. It works by
running the same analysis many times with slightly different inputs and looking at
the distribution of the final results.
To compute the actual value of ams_phys, I have the following equation:
ams_phys = (amk_phys**2 - 0.5 * ampi_phys**2) / aB - amcr
All the values that go into that equation have a statistical error associated
with them. These values are in turn computed from other equations. For instance,
amk_phys is computed from this equation, where both numbers also have
uncertainties:
amk_phys_dist = mk_phys / a_inv
The value of mk_phys is given as (494.2 ± 0.3) in a paper. What I do now is
a parametric bootstrap: I generate R samples from a Gaussian distribution
with mean 494.2 and standard deviation 0.3. This is what I store in
mk_phys_dist:
mk_phys_dist = bootstrap.make_dist(494.2, 0.3, R)
The same is done for a_inv, which is also quoted with an error in the
literature. The above equation is then turned into a list comprehension to yield
a new distribution:
amk_phys_dist = [mk_phys / a_inv
                 for a_inv, mk_phys in zip(a_inv_dist, mk_phys_dist)]
The first equation is then also converted into a list comprehension:
ams_phys_dist = [
    (amk_phys**2 - 0.5 * ampi_phys**2) / aB - amcr
    for ampi_phys, amk_phys, aB, amcr
    in zip(ampi_phys_dist, amk_phys_dist, aB_dist, amcr_dist)]
To get the end result in terms of (Value ± Error), I then take the average and
standard deviation of this distribution of numbers:
ams_phys_val, ams_phys_avg, ams_phys_err \
    = bootstrap.average_and_std_arrays(ams_phys_dist)
The actual value is supposed to be computed from the actual input values, not
from the mean of the bootstrap distribution. I used to have the code duplicated
for that; now I keep the original value at the 0th position of the _dist
arrays. The arrays therefore contain 1 + R elements, and the
bootstrap.average_and_std_arrays function separates that element out.
This kind of line occurs for every number that I might want to quote in my
writing. I got annoyed by the writing and created a snippet for it:
$1_val, $1_avg, $1_err = bootstrap.average_and_std_arrays($1_dist)
The need for the snippet strongly told me that I need to do some refactoring.
Also the list comprehensions are always of the following pattern:
foo_dist = [ ... bar ...
for bar in bar_dist]
It feels bad to write bar three times there.
The Class Approach
I have tried to turn those _dist things into a Boot class, such that I would not
write ampi_dist and ampi_val but could just use ampi.val, without having
to explicitly call the average_and_std_arrays function and type a bunch of
names for it.
class Boot(object):
    def __init__(self, dist):
        self.dist = dist

    def __str__(self):
        return str(self.dist)

    @property
    def cen(self):
        return self.dist[0]

    @property
    def val(self):
        x = np.array(self.dist)
        return np.mean(x[1:], axis=0)

    @property
    def err(self):
        x = np.array(self.dist)
        return np.std(x[1:], axis=0)
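With this class, the usage I am aiming for looks roughly like this (np is NumPy, and a_inv_dist and mk_phys_dist contain the original value at position 0 as described above):
amk_phys = Boot([mk_phys / a_inv
                 for a_inv, mk_phys in zip(a_inv_dist, mk_phys_dist)])
print(amk_phys.cen, amk_phys.val, amk_phys.err)  # central value, bootstrap mean, error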
However, this still does not solve the problem of the list comprehensions. I
fear that I still have to repeat myself there three times. I could make the
Boot object inherit from list, such that I could at least write it like
this (without the _dist):
bar = Boot([... foo ... for foo in foo])
Magic Approach
Ideally all those list comprehensions would be gone such that I could just
write
bar = ... foo ...
where the dots stand for some non-trivial operation. That can be simple arithmetic
as above, but it could also be a call to a function that does not support being
called with multiple values at once (in the way NumPy functions do).
For instance the scipy.optimize.curve_fit function needs to be called a bunch of times:
popt_dist = [op.curve_fit(linear, mpi, diff)[0]
             for mpi, diff in zip(mpi_dist, diff_dist)]
One would have to write a wrapper for it because it does not automatically loop over lists of arrays.
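Such a wrapper could look something like this (boot_map is just a name I made up):
def boot_map(func, *dists):
    # apply `func` element-wise across several _dist lists
    return [func(*args) for args in zip(*dists)]

popt_dist = boot_map(lambda mpi, diff: op.curve_fit(linear, mpi, diff)[0],
                     mpi_dist, diff_dist)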
Question
Do you see a way to abstract this process of running every transformation with
1 + R sets of data? I would like to get rid of those patterns and the huge
number of variables in each namespace (_dist, _val, _avg, ...), as this
makes passing things to functions rather tedious.
Still, I need a lot of freedom in the ... foo ... part, where I need to
call arbitrary functions.

Memoized to DP solution - Making Change

Recently I read a problem for DP practice. I wasn't able to come up with a DP solution, so I tried a recursive solution, which I later modified to use memoization. The problem statement is as follows:
Making Change. You are given n types of coin denominations of values
v(1) < v(2) < ... < v(n) (all integers). Assume v(1) = 1, so you can
always make change for any amount of money C. Give an algorithm which
makes change for an amount of money C with as few coins as possible.
[on problem set 4]
I got the question from here
My solution was as follows:
def memoized_make_change(L, index, cost, d):
    if index == 0:
        return cost
    if (index, cost) in d:
        return d[(index, cost)]
    count = cost // L[index]  # integer division (the original `/` assumed Python 2)
    val1 = memoized_make_change(L, index - 1, cost % L[index], d) + count
    val2 = memoized_make_change(L, index - 1, cost, d)
    x = min(val1, val2)
    d[(index, cost)] = x
    return x
This is how I've understood my solution to the problem. Assume that the denominations are stored in L in ascending order. As I iterate from the end to the beginning, I have a choice to either choose a denomination or not choose it. If I choose it, I then recurse to satisfy the remaining amount with lower denominations. If I do not choose it, I recurse to satisfy the current amount with lower denominations.
Either way, at a given function call, I find the best (lowest) count needed to satisfy a given amount.
Could I have some help in bridging the thought process from here onward to reach a DP solution? I'm not doing this as any HW, this is just for fun and practice. I don't really need any code either, just some help in explaining the thought process would be perfect.
[EDIT]
I recall reading that function calls are expensive, and that this is the reason why a bottom-up (iteration-based) approach might be preferred. Is that possible for this problem?
Here is a general approach for converting memoized recursive solutions to "traditional" bottom-up DP ones, in cases where this is possible.
First, let's express our general "memoized recursive solution". Here, x represents all the parameters that change on each recursive call. We want this to be a tuple of positive integers - in your case, (index, cost). I omit anything that's constant across the recursion (in your case, L), and I suppose that I have a global cache. (But FWIW, in Python you should just use the lru_cache decorator from the standard library functools module rather than managing the cache yourself.)
To solve for(x):
    If x in cache: return cache[x]
    Handle base cases, i.e. where one or more components of x is zero
    Otherwise:
        Make one or more recursive calls
        Combine those results into `result`
        cache[x] = result
        return result
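As an aside, a minimal sketch of that lru_cache suggestion applied to the question's recursion; wrapping it in an inner function is my own adjustment so that the constant list L and the cache stay out of the recursive signature:
from functools import lru_cache

def memoized_make_change(L, cost):
    @lru_cache(maxsize=None)  # caches on (index, cost) automatically
    def solve(index, cost):
        if index == 0:
            return cost
        count = cost // L[index]
        val1 = solve(index - 1, cost % L[index]) + count
        val2 = solve(index - 1, cost)
        return min(val1, val2)

    return solve(len(L) - 1, cost)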
The basic idea in dynamic programming is simply to evaluate the base cases first and work upward:
To solve for(x):
    For y starting at (0, 0, ...) and increasing towards x:
        Do all the stuff from above
However, two neat things happen when we arrange the code this way:
As long as the order of y values is chosen properly (this is trivial when there's only one vector component, of course), we can arrange that the results for the recursive call are always in cache (i.e. we already calculated them earlier, because y had that value on a previous iteration of the loop). So instead of actually making the recursive call, we replace it directly with a cache lookup.
Since every component of y will use consecutively increasing values, and will be placed in the cache in order, we can use a multidimensional array (nested lists, or else a Numpy array) to store the values instead of a dictionary.
So we get something like:
To solve for(x):
    cache = multidimensional array sized according to x
    for i in range(first component of x):
        for j in ...:
            (as many loops as needed; better yet use `itertools.product`)
            If this is a base case, write the appropriate value to cache
            Otherwise, compute "recursive" index values to use, look up
            the values, perform the computation and store the result
    return the appropriate ("last") value from cache
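A concrete sketch of that scheme, translating the question's memoized_make_change recursion directly into a bottom-up table (keeping the recursion exactly as given, with L sorted ascending and L[0] == 1):
def bottom_up_make_change(L, C):
    n = len(L)
    # cache[index][cost] plays the role of d[(index, cost)] in the memoized version
    cache = [[0] * (C + 1) for _ in range(n)]
    for cost in range(C + 1):
        cache[0][cost] = cost  # base case: only the 1-unit coin is available
    for index in range(1, n):  # work upward from the base cases
        for cost in range(C + 1):
            count = cost // L[index]
            val1 = cache[index - 1][cost % L[index]] + count
            val2 = cache[index - 1][cost]
            cache[index][cost] = min(val1, val2)
    return cache[n - 1][C]  # the "last" value is the answer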
I suggest considering the relationship between the value you are constructing and the values you need for it.
In this case you are constructing a value for (index, cost) based on:
index-1 and cost
index-1 and cost%L[index]
What you are searching for is a way of iterating over the choices such that you will always have precalculated everything you need.
In this case you can simply change the code to the iterative approach:
for each choice of index, from 0 upwards:
    for each choice of cost:
        compute the value corresponding to (index, cost)
In practice, I find that the iterative approach can be significantly faster (perhaps 4×) for simple problems, as it avoids the overhead of function calls and of checking the cache for preexisting values.
