Factory functions advice/explanation - python

I have started this question regarding bear options; however, I am unsure how to proceed with the factory function part:
a) A bear option has payoff K if S <= K, 2K - S if K < S < 2K, and 0 otherwise (the exact formula is given as an image in the original assignment).
I have to write a python function which returns the value of the payoff.
def bear(S, K):
    if S <= K:
        value = K
    elif K < S and S < 2*K:
        value = 2*K - S
    else:
        value = 0
    return value
The next part... b) "Also write a factory function which returns a bear option payoff function of one variable, with K fixed."
I cannot find a simple explanation of what a factory function is; I am just starting to code and my notes make no mention of factory functions as of yet. Any links to web pages, hints, or explanations will be much appreciated!

What they want you to do is write a function that returns another function, which can be used to compute bear(S, K) for a fixed value of K.
def bear_for_k(K):
    return lambda S: bear(S, K)
Demo:
>>> bear(21, 17)
13
>>> bear_for_k17 = bear_for_k(17)
>>> bear_for_k17(21)
13
>>> bear(112, 81)
50
>>> bear_for_k81 = bear_for_k(81)
>>> bear_for_k81(112)
50
edit in response to comment:
Try the following file:
def bear(S, K):
    if S <= K:
        value = K
    elif K < S and S < 2*K:
        value = 2*K - S
    else:
        value = 0
    return value

def bear_for_k(K):
    return lambda S: bear(S, K)

#test:
print(bear_for_k(17)(21))
This will print 13, without errors, in Python 2 and Python 3.
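For what it's worth, the lambda is not the only way to write the factory; a named inner function, or functools.partial from the standard library, behaves the same way. This is just an alternative sketch, not part of the assignment's expected answer:

from functools import partial

def bear_for_k(K):
    def payoff(S):          # closure over the fixed strike K
        return bear(S, K)
    return payoff

# or, without writing the factory yourself:
bear_for_k17 = partial(bear, K=17)
print(bear_for_k17(21))     # prints 13, same as the lambda version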

Related

modify recursive function without copy-paste python

I found that I want to modify a recursive function's behavior for a specific input. I know I can do this by rewriting the function, but as the function could be long I want to avoid duplicating code.
As an example, let's say I have implemented the function lambda x: max(x, 10) in the following recursive way:
def orig_fun(x):
    if x < 10:
        return orig_fun(x + 1)
    return x
but now I want to return 20 whenever the input is 5, as in the following
def desired_fun(x):
    if x == 5:
        return 20
    if x < 10:
        return desired_fun(x + 1)
    return x
which is the same as adding an if statement at the beginning of orig_fun, or writing a new function that copies the body of orig_fun. I don't want to do that because the body may be many, many lines. Of course, doing new_fun = lambda x: 20 if x == 5 else orig_fun(x) does not work, because new_fun(3) would be 10 instead of 20 (the recursive calls inside orig_fun never see the special case).
Is there a way I can solve this in Python 3?
Note that this is a duplicate of Extend recursive (library) function without code duplication, which has no satisfying answer (one user mentioned "hacky ways" that were never presented).
You can use another function to wrap your main function, like this:
def orig_fun(x):
    if x < 10:
        return orig_fun(x + 1)
    return x

def wrapper(x):
    if x == 5:
        return 20
    return orig_fun(x)
>>> print(wrapper(8)) # output: 10
>>> print(wrapper(5)) # output: 20
>>> print(wrapper(12)) # output: 12
update
So you want to change (extend) the logic of your recursive function without touching it. Then let's rebind the name orig_fun, so that every call, including the recursive ones, goes through the extended version:
# store orig_fun in another location
main_fun = orig_fun

# re-define orig_fun so it will do one more
# step in every call
def orig_fun(x):
    if x == 5:
        return 20
    return main_fun(x)
>>> orig_fun(2) # output: 20
>>> orig_fun(5) # output: 20
>>> orig_fun(7) # output: 10
>>> orig_fun(12) # output: 12
>>> orig_fun(35) # output: 35
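If you need this trick more than once, the same rebinding can be packaged in a small helper. This is only a sketch of the idea above (with_special_case is my name, nothing standard), and it assumes you start again from the original orig_fun definition, which looks its own name up globally on each recursive call:

def with_special_case(fn, trigger, result):
    # return a wrapper that short-circuits one input and otherwise
    # delegates to the original function
    def wrapper(x):
        if x == trigger:
            return result
        return fn(x)
    return wrapper

# rebinding the name makes the recursive calls inside the old
# orig_fun go through the wrapper as well
orig_fun = with_special_case(orig_fun, 5, 20)

print(orig_fun(2))   # 20, because the recursion passes through 5
print(orig_fun(12))  # 12, unchanged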

A memoized function that takes a tuple of strings to return an integer?

Suppose I have arrays of tuples like so:
a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]
I am trying to turn these arrays into numerical vectors with each dimension representing a feature.
So the expected output would be something like:
amod = [1, 0, 1] # or [1, 1, 1]
bmod = [1, 1, 2] # or [1, 2, 2]
So the vector that gets created depends on what it has seen before (i.e. rectangle is still coded as 1, but the new value 'large' gets coded one step up, as 2).
I think I could use some combination of yield and a memoize function to help me with this. This is what I've tried so far:
def memoize(f):
    memo = {}
    def helper(x):
        if x not in memo:
            memo[x] = f(x)
        return memo[x]
    return helper
@memoize
def verbal_to_value(tup):
    u = 1
    if tup[0] == 'shape':
        yield u
    u += 1
    if tup[0] == 'fill':
        yield u
    u += 1
    if tup[0] == 'size':
        yield u
    u += 1
But I keep getting this error:
TypeError: 'NoneType' object is not callable
Is there a way I can create this function that has a memory of what it has seen? Bonus points if it could add keys dynamically so I don't have to hardcode things like 'shape' or 'fill'.
First off: this is my preferred implementation of the memoize
decorator, mostly because of speed ...
def memoize(f):
    class memodict(dict):
        __slots__ = ()
        def __missing__(self, key):
            self[key] = ret = f(key)
            return ret
    return memodict().__getitem__
except for a few edge cases it has the same effect as yours:
def memoize(f):
    memo = {}
    def helper(x):
        if x not in memo:
            memo[x] = f(x)
        #else:
        #    pass
        return memo[x]
    return helper
but is somewhat faster, because the if x not in memo: check happens in native code instead of in Python. To understand it you only need to know that, under normal circumstances, to evaluate adict[key] Python calls adict.__getitem__(key); if adict doesn't contain key, __getitem__() calls adict.__missing__(key), so we can leverage Python's magic-method protocol for our gain...
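If the __getitem__ / __missing__ protocol is new to you, a quick interpreter session (mine, not part of the original answer) shows what happens on a missing key:

>>> class Demo(dict):
...     def __missing__(self, key):
...         print('filling in', key)
...         self[key] = 0
...         return 0
...
>>> d = Demo(a=1)
>>> d['a']          # key present: __missing__ is never called
1
>>> d['b']          # key absent: dict.__getitem__ falls back to __missing__
filling in b
0
>>> d
{'a': 1, 'b': 0}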
#This is the first idea I had for how I would implement your
#verbal_to_value() using memoization:
from collections import defaultdict

work = defaultdict(set)

@memoize
def verbal_to_value(kv):
    k, v = kv
    aset = work[k]   #work creates a new set, if not already created
    aset.add(v)      #add the value, if not already added
    return len(aset)
including the memoize decorator, that's 15 lines of code...
#test suite:
def vectorize(alist):
    return [verbal_to_value(kv) for kv in alist]

a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]

print(vectorize(a))  #shows [1, 1, 1]
print(vectorize(b))  #shows [1, 2, 2]
defaultdict is a powerful object that has almost the same logic as memoize: a standard dictionary in every way, except that when a lookup fails it runs the callback function to create the missing value; in our case, set().
Unfortunately this problem requires access either to the tuple that is being used as the key, or to the dictionary state itself, with the result that we cannot just write a simple function for .default_factory. But we can write a new object based on the memoize/defaultdict pattern:
#This is how I would implement your verbal_to_value without
#memoization, though the worker class is so similar to memoize
#that it's easy to see why memoize is a good pattern to work from:

class sloter(dict):
    __slots__ = ()
    def __missing__(self, key):
        self[key] = ret = len(self) + 1
        #this + 1 bothers me, why can't these vectors be 0 based? ;)
        return ret

from collections import defaultdict

work2 = defaultdict(sloter)

def verbal_to_value2(kv):
    k, v = kv
    return work2[k][v]

#~10 lines of code?
#test suite2:
def vectorize2(alist):
    return [verbal_to_value2(kv) for kv in alist]

print(vectorize2(a))  #shows [1, 1, 1]
print(vectorize2(b))  #shows [1, 2, 2]
You might have seen something like sloter before, because it's sometimes used for exactly this sort of situation: converting member names to numbers and back. Because of this, we have the advantage of being able to reverse things like this:
def unvectorize2(a_vector, pattern=('shape', 'fill', 'size')):
    reverser = [{v: k2 for k2, v in work2[k].items()} for k in pattern]
    for index, vect in enumerate(a_vector):
        yield pattern[index], reverser[index][vect]

print(list(unvectorize2(vectorize2(a))))
print(list(unvectorize2(vectorize2(b))))
But I saw those yields in your original post, and they got me thinking... what if there were a memoize/defaultdict-like object that could take a generator instead of a function, and knew to just advance the generator rather than calling it? Then I realized... that yes, generators come with a callable named __next__(), which means we don't need a new defaultdict implementation, just a careful extraction of the correct member function...
def count(start=0):  #same as: from itertools import count
    while True:
        yield start
        start += 1

#so we could get the exact same behavior as above (except faster)
#by saying:
sloter3 = lambda: defaultdict(count(1).__next__)
#and then
work3 = defaultdict(sloter3)
#or just:
work3 = defaultdict(lambda: defaultdict(count(1).__next__))
#which yes, is a bit of a mindwarp if you've never needed to do that
#before.

#the outer defaultdict interprets the first item. Every time a new
#first item is received, the lambda is called, which creates a new
#count() generator (starting from 1), and passes its .__next__ method
#to a new inner defaultdict.
def verbal_to_value3(kv):
    k, v = kv
    return work3[k][v]
#you *could* call that 8 lines of code, but we managed to use
#defaultdict twice, and didn't need to define it, so I wouldn't call
#it 'less complex' or anything.
#test suite3:
def vectorize3(alist):
    return [verbal_to_value3(kv) for kv in alist]

print(vectorize3(a))  #shows [1, 1, 1]
print(vectorize3(b))  #shows [1, 2, 2]
#so yes, that can also work.

#and since the internal state in `work3` is stored in the exact same
#format, it can be accessed the same way as `work2` to reconstruct input
#from output.
def unvectorize3(a_vector, pattern=('shape', 'fill', 'size')):
    reverser = [{v: k2 for k2, v in work3[k].items()} for k in pattern]
    for index, vect in enumerate(a_vector):
        yield pattern[index], reverser[index][vect]

print(list(unvectorize3(vectorize3(a))))
print(list(unvectorize3(vectorize3(b))))
Final comments:
Each of these implementations suffers from storing state in a global variable, which I find anti-aesthetic, but depending on what you're planning to do with that vector later, that might be a feature, as I demonstrated.
Edit:
After another day of meditating on this, and on the sorts of situations where I might need it, I think I'd encapsulate this feature like this:
from collections import defaultdict
from itertools import count

class slotter4:
    def __init__(self):
        #keep track of what order we expect to see keys
        self.pattern = defaultdict(count(1).__next__)
        #keep track of what values we've seen and what number we've assigned to mean them.
        self.work = defaultdict(lambda: defaultdict(count(1).__next__))

    def slot(self, kv, i=False):
        """used to be named verbal_to_value"""
        k, v = kv
        if i and i != self.pattern[k]:  # keep track of the order we saw the initial keys
            raise ValueError("Input fields out of order")
            #in theory we could ignore this error, and just know
            #that we're going to default to the field order we saw
            #first. Or we could just not keep track, which might be
            #required if our code runs too slow, but then we cannot
            #make pattern optional in .unvectorize()
        return self.work[k][v]

    def vectorize(self, alist):
        return [self.slot(kv, i) for i, kv in enumerate(alist, 1)]
        #if we're not keeping track of the field pattern, we could do this instead:
        #return [self.work[k][v] for k, v in alist]

    def unvectorize(self, a_vector, pattern=None):
        if pattern is None:
            pattern = [k for k, v in sorted(self.pattern.items(), key=lambda a: a[1])]
        reverser = [{v: k2 for k2, v in self.work[k].items()} for k in pattern]
        return [(pattern[index], reverser[index][vect])
                for index, vect in enumerate(a_vector)]

#test suite4:
s = slotter4()
if __name__ == '__main__':
    Av = s.vectorize(a)
    Bv = s.vectorize(b)
    print(Av)  #shows [1, 1, 1]
    print(Bv)  #shows [1, 2, 2]
    print(s.unvectorize(Av))  #shows a
    print(s.unvectorize(Bv))  #shows b
else:
    #run the test silently, and only complain if something has broken
    assert s.unvectorize(s.vectorize(a)) == a
    assert s.unvectorize(s.vectorize(b)) == b
Good luck out there!
Not the best approach, but it may help you figure out a better solution:
class Shape:
    counter = {}

    def to_tuple(self, tuples):
        self.tuples = tuples
        self._add()
        l = []
        for i, v in self.tuples:
            l.append(self.counter[i][v])
        return l

    def _add(self):
        for i, v in self.tuples:
            if i in self.counter.keys():
                if v not in self.counter[i]:
                    self.counter[i][v] = max(self.counter[i].values()) + 1
            else:
                self.counter[i] = {v: 0}
a = [('shape', 'rectangle'), ('fill', 'no'), ('size', 'huge')]
b = [('shape', 'rectangle'), ('fill', 'yes'), ('size', 'large')]
s = Shape()
s.to_tuple(a)
s.to_tuple(b)
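For reference, here is what those two calls return; note that this version numbers values from 0 rather than 1 (the print lines are just added for illustration):

print(s.to_tuple(a))  #shows [0, 0, 0]
print(s.to_tuple(b))  #shows [0, 1, 1]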

How to write a function as range?

I need to use a function as a range, but an error appears saying that n was not set:
NameError: name 'n' is not defined
I'm just learning how to use Python and I do not know if the syntax is correct; I can only find examples of lists used as ranges.
Could someone clear my ideas, give me some suggestions?
[EDIT1] My function z depends on j and f(n).
[EDIT2] I'm using Fibonacci ranges to integrate over a sphere.
The program is something like this:
def f(n):
    a, b = 0, 1
    for i in range(n):
        a, b = b, a+b
    return a

def y(n):
    return f(n) + some_const

def z(j):
    for j in range(0, f(n-1)):
        return j*y(n) + j*f(n-1) + j*f(n)
You have
def z(j):
    for j in range(0, f(n-1)):
        return j*y(n) + j*f(n-1) + j*f(n)
Notice you say this takes something called j while your other functions take n.
Did you mean
def z(n):
    for j in range(0, f(n-1)):
        return j*y(n) + j*f(n-1) + j*f(n)
When you get an error check the line number it refers to.
Also, consider giving your variables longer names - single letters are easy to muddle up!
As pointed out by the comment, once this stops giving the error message it might not do what you want.
Your first function loops and then returns:
def f(n):
    a = something
    for i in range(n):
        a = a + i
    return a
(I presume something is set to, er, something)
Your z function returns as soon as it gets into the loop: perhaps you just want to collect the results and return them?
def z(n):
    stuff = []
    for j in range(0, f(n-1)):
        stuff.append(j*y(n) + j*f(n-1) + j*f(n))
    return stuff
Notice the return is further left - no longer indented inside the for loop.
In fact you could use a list comprehension then:
def z(n):
    return [j*y(n) + j*f(n-1) + j*f(n) for j in range(0, f(n-1))]
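One small refinement, just a sketch: the values that do not depend on j only need to be computed once, so f is not re-run on every iteration. This assumes f, y (and some_const) are defined as in the question:

def z(n):
    yn, fn1, fn = y(n), f(n-1), f(n)   # compute these once, outside the loop
    return [j*yn + j*fn1 + j*fn for j in range(fn1)]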
There are several problems with the snippet that you posted.
It would help if you included the code that calls the functions. It also seems that you should look into the local scope of variables in Python: it does not matter what you call the parameter passed into the function, so you could call the variable in the brackets "n" for every function, but it is preferable to give parameters meaningful names that indicate what they represent - useful for others looking at the code, and good practice!
Lastly, a docstring inside each function makes it very clear what the function does, and can include a description of the parameters passed (type/class).
def range_sum(n):  # instead of f - range_sum seems appropriate
    """
    Sums the range of numbers from 0 to n
    >>> range_sum(4)  # example data
    10
    """
    # no idea what a is meant to be, unless an accumulator to
    # store the total, in which case it must be initialised
    accum = 0
    for i in range(1, n+1):  # iterates from 1 to n
        accum = accum + i
    return accum  # returns the total

def y(m, const):  # use a descriptive func name
    """
    Sums the range of numbers from 0 to m and adds const
    >>> y(4, 7)  # example data
    17
    """
    return range_sum(m) + const

def z(j, n, m, const):  # pass all the vars you need for the function so they have a value
    """
    Something descriptive
    >>> z(4, 2, 5, 7)  # example data
    ?
    """
    total = 0
    for j in range(0, range_sum(n-1)):
        total += j*y(m, const) + j*range_sum(n-1) + j*range_sum(n)
    return total

print("First Func, ", range_sum(4))
print("Second Func, ", y(4, 7))
print("Third Func, ", z(4, 2, 5, 7))
Note that the number of arguments passed to each function matches the number expected by the function. It is possible to set defaults, but get the hang of getting this right first.
Not sure what the last function is meant to do, but as mentioned in the comment above, showing some code to illustrate how you call the code can be useful, as in the sample.

Use output of one function and average it in another function

Please keep in mind, while I showcase my code, that I am fairly new to programming, so please forgive any problems. I am writing a piece of Python code that takes the output of one function and averages it in another function. I am having trouble figuring out how to proceed; this is what I have so far:
def avg(A):
    if not A:
        return 0
    return sum(A) / len(A)
Using the function above, I have to use it to calculate the average of the function produced below:
def SampleFunction():  # Example Function
    A = list(range(300))
    for i in range(300):
        if i%2:
            A[i] = 3.1*(i+1)**1.2 - 7.9*i
        else:
            A[i] = 4.2*(i+2)**.8 - 6.8*i
    return A
Below is a function I have that tries to tie the two together.
def average(SampleFunction):
    if len(SampleFunction) == 0: return 0
    return sum(SampleFunction) / len(SampleFunction)
def avg(A):
    if not A:
        return 0
    return sum(A) / len(A)

def SampleFunction():  # Example Function
    A = list(range(300))
    for i in range(300):
        if i%2:
            A[i] = 3.1*(i+1)**1.2 - 7.9*i
        else:
            A[i] = 4.2*(i+2)**.8 - 6.8*i
    return avg(A)  #Return the avg of A instead of just A
You are right to pass SampleFunction as a parameter, but since it is a function, you have to invoke it inside average():
def average(some_function):
    result = some_function()  # invoke
    return avg(result)  # use the already defined function 'avg'
When you call it, pass the function whose output you want to average:
print(average(SampleFunction))
Note:
I would recommend you to follow Python naming conventions. Names like SomeName are used for classes, whereas names like some_name are used for functions.
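Putting the pieces together (and renaming SampleFunction to sample_function along the lines of that convention), a complete version might look like this; the numbers are just the example data from the question:

def avg(values):
    if not values:
        return 0
    return sum(values) / len(values)

def sample_function():
    A = list(range(300))
    for i in range(300):
        if i % 2:
            A[i] = 3.1*(i+1)**1.2 - 7.9*i
        else:
            A[i] = 4.2*(i+2)**.8 - 6.8*i
    return A

def average(some_function):
    return avg(some_function())  # call the function, then average what it returned

print(average(sample_function))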

how to program functions with alternative return value signatures in python? (next() for alternative iterators)

e.g. so that these would both work - is it possible?
(val,VAL2) = func(args)
val = func(args)
Where val is not a tuple
For example I'd like these to work for my custom object something
for item in something:
    do_item(item)  #where again item - is not a tuple

for (item, key) in something:
    do_more(key, item)
I thought that I needed to implement the next() function in two different ways...
edit: as follows from the answers below, this should not really be done.
If you mean, can the function act differently based on the return types the caller is expecting, the answer is no (bar seriously nasty bytecode inspection). In this case, you should provide two different iterators on your object, and write something like:
for item in something:  # Default iterator: returns non-tuple objects
    do_something(item)

for (item, key) in something.iter_pairs():  # iter_pairs returns different iterator
    do_something_else(item, key)
e.g. see the dictionary object, which uses this pattern: for key in mydict iterates over the dictionary keys, and for k,v in mydict.iteritems() iterates over (key, value) pairs.
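A minimal sketch of that two-iterator pattern, using the iter_pairs name from above (the class name and data here are made up for illustration):

class Pairs(object):
    def __init__(self, pairs):
        self._pairs = list(pairs)   # e.g. [(item, key), ...]
    def __iter__(self):
        # default iteration: plain items, so `for item in something` works
        return (item for item, key in self._pairs)
    def iter_pairs(self):
        # explicit second iterator: (item, key) tuples
        return iter(self._pairs)

something = Pairs([('spam', 1), ('eggs', 2)])
for item in something:
    print(item)                # spam, then eggs
for item, key in something.iter_pairs():
    print(key, item)           # 1 spam, then 2 eggs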
[Edit] Just in case anyone wants to see what I mean by "seriously nasty bytecode inspection", here's a quick implementation:
import inspect, opcode

def num_expected_results():
    """Return the number of items the caller is expecting in a tuple.

    Returns None if a single value is expected, rather than a tuple.
    """
    f = inspect.currentframe(2)
    code = map(ord, f.f_code.co_code)
    pos = f.f_lasti
    if code[pos] == opcode.opmap['GET_ITER']: pos += 1  # Skip this and the FOR_ITER
    if code[pos] > opcode.EXTENDED_ARG: pos += 5
    elif code[pos] > opcode.HAVE_ARGUMENT: pos += 3
    else: pos += 1
    if code[pos] == opcode.opmap['UNPACK_SEQUENCE']:
        return code[pos+1] + (code[pos+2] << 8)
    return None
Usable something like:
class MagicDict(dict):
    def __iter__(self):
        if num_expected_results() == 2:
            for k, v in self.iteritems():
                yield k, v
        else:
            for k in self.iterkeys():
                yield k

d = MagicDict(foo=1, bar=2)

print "Keys:"
for key in d:
    print " ", key

print "Values"
for k, v in d:
    print " ", k, v
Disclaimer: This is incredibly hacky, insanely bad practice, and will cause other programmers to hunt you down and kill you if they ever see it in real code. Only works on cpython (if that). Never use this in production code (or for that matter, probably any code).
Have you tried it? It works.
def myfunction(data):
    datalen = len(data)
    result1 = data[:datalen // 2]
    result2 = data[datalen // 2:]
    return result1, result2

a, b = myfunction('stuff')
print(a)
print(b)

c = myfunction('other stuff')
print(c)
In fact there is no such thing as a "return signature". All functions return a single object. It seems that you are returning more than one, but in fact you wrap them in a single tuple object.
Yes it's doable:
def a(b):
    if b < 5:
        return ("o", "k")
    else:
        return "ko"
and the result:
>>> b = a(4)
>>> b
('o', 'k')
>>> b = a(6)
>>> b
'ko'
I think the thing to be careful about afterwards is how you use the returned values...
>>> def func(a,b):
...     return (a,b)
...
>>> x = func(1,2)
>>> x
(1, 2)
>>> (y,z) = func(1,2)
>>> y
1
>>> z
2
That doesn't really answer your question. The real answer is that the left side of the assignment doesn't affect the returned type of the function and can't be used to distinguish between functions with different return types. As noted in other answers, the function can return different types from different return statements but it doesn't know what's on the other side of the equals sign.
In the case of this function, it returns a tuple. If you assign it to x, x has the value of the tuple. (y, z) on the left side of the assignment is "tuple unpacking". The tuple returned by func() is unpacked into y and z.
Update:
Given the example use case, I'd write different generators to handle the cases:
class Something(object):
    def __init__(self):
        self.d = {'a' : 1,
                  'b' : 2,
                  'c' : 3}
    def items(self):
        for i in self.d.values():
            yield i
    def items_keys(self):
        for k, i in self.d.items():
            yield i, k
something = Something()

for item in something.items():
    print item
1
3
2

for item, key in something.items_keys():
    print key, " : ", item
a : 1
b : 2
c : 3
Or
You can return a tuple:
In [1]: def func(n):
   ...:     return (n, n+1)
   ...:
In [2]: a,b = func(1)
In [3]: a
Out[3]: 1
In [4]: b
Out[4]: 2
In [5]: x = func(1)
In [6]: x
Out[6]: (1, 2)
Yes, both would work. In the first example, val and VAL2 would have the two values. In the second example, val would have a tuple. You can try this in your Python interpreter:
>>> def foo():
...     return ( 1, 2 )
...
>>> x = foo()
>>> (y,z) = foo()
>>> x
(1, 2)
>>> y
1
>>> z
2
It's possible only if you're happy for val to be a 2-item tuple (or if args need not be the same in the two cases). The former is what would happen if the function just ended with something like return 23, 45. Here's an example of the latter idea:
def weirdfunc(how_many_returns):
    assert 1 <= how_many_returns <= 4
    return 'fee fie foo fum'.split()[:how_many_returns]

var1, var2 = weirdfunc(2)  # var1 gets 'fee', var2 gets 'fie'
var, = weirdfunc(1)  # var gets 'fee'
This is asking for major confusion. Instead you can follow dict with separate keys, values, items, etc. methods, or you can use a convention of naming unused variables with a single underscore. Examples:
for k in mydict.keys(): pass
for k, v in mydict.items(): pass
for a, b in myobj.foo(): pass
for a, _ in myobj.foo(): pass
for _, b in myobj.foo(): pass
for _, _, _, d in [("even", "multiple", "underscores", "works")]:
    print(d)

for item in something:  # or something.keys(), etc.
    do_item(item)

for item, key in something.items():
    do_more(key, item)
If this doesn't fit your function, you should refactor it as two or more functions, because it's clearly trying to fulfill two or more different goals.
