Limit to Number of Nested Function Calls in Python

Just a warning: this code is ugly. I know there are better ways of doing this, but this is just an exercise.
I am poking around with the functional programming side of Python, but I keep encountering an error when I nest a number of function calls:
LEN = 4

def calcentropy(i):
    entropy[i] = -1 * reduce(lambda x, y: x + y, map(lambda x: x * np.log2(x), map(lambda x: x * (float(1) / float(NUM)), map(count, range(0, LEN)))))

map(calcentropy, range(0, LEN))
I get an error message about a type mismatch between float and None, pointing at the last call to range(): TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'
When I do something like:
LEN = 4

def calcFreqs(i): do stuff to freqs

map(calcFreqs, range(0, LEN))

def calcentropy(i):
    entropy[i] = -1 * reduce(lambda x, y: x + y, map(lambda x: x * np.log2(x), map(lambda x: x * (float(1) / float(NUM)), freqs)))

map(calcentropy, range(0, LEN))
I don't have any issues.
I think the problem is that LEN is no longer in scope of the call to range(). Is there a way I can fix this, or have I exceeded some sort of limit, and if so, what was it?
Sorry for not adding enough code, my mistake:
import numpy as np

LEN = 4
freqs = np.zeros(4 * LEN, dtype=np.float64)
sites = np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3], dtype=np.int8)
A = np.int8(0)
C = np.int8(1)
G = np.int8(2)
T = np.int8(3)

def count(i):
    freqs[i * LEN + A] = E + reduce(lambda x, y: x + y, map(lambda x: 1 if x == A else 0, sites[i::LEN]))
    freqs[i * LEN + C] = E + reduce(lambda x, y: x + y, map(lambda x: 1 if x == C else 0, sites[i::LEN]))
    freqs[i * LEN + G] = E + reduce(lambda x, y: x + y, map(lambda x: 1 if x == G else 0, sites[i::LEN]))
    freqs[i * LEN + T] = E + reduce(lambda x, y: x + y, map(lambda x: 1 if x == T else 0, sites[i::LEN]))

entropy = np.zeros(LEN, dtype=np.float64)

def calcentropy(i):
    entropy[i] = -1 * reduce(lambda x, y: x + y, map(lambda x: x * np.log2(x), map(lambda x: x * (float(1) / float(NUM)), map(count, range(0, LEN)))))

map(calcentropy, range(0, LEN))
print entropy
info = map(lambda x: 2 - x, entropy)

The issue you're having is that your count function doesn't return anything. In Python, that is the same as returning None.
So when you run your long nested statement, you're getting a list of None values back from the innermost map call: map(count, range(0, LEN)). The first None value then causes an exception when it gets passed to the innermost lambda expression and you try to multiply it by a float.
So, you either need to use something else as the innermost value for your big nested structure, or you need to fix up count to return something. It's not clear to me what you're intending to iterate over, so I can't really offer a solid suggestion. Perhaps freqs?
Also, I suggest avoiding map when you simply want to run a function a bunch of times but don't care about the results; write an explicit for loop instead. This matters in Python 3, where map returns a lazy iterator that doesn't do anything until you iterate over it.
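As a rough sketch of both fixes together (my assumptions, not the original code: NUM is the number of sequences, E is taken as 0, and count is meant to return the frequencies it computes):

```python
import numpy as np

LEN = 4
NUM = 3   # assumed: number of sequences (12 sites / 4 positions)
E = 0.0   # assumed pseudocount
sites = np.array([0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3], dtype=np.int8)
freqs = np.zeros(4 * LEN, dtype=np.float64)
entropy = np.zeros(LEN, dtype=np.float64)

def count(i):
    # Count each of the four bases at position i, then return that slice
    for base in range(4):
        freqs[i * LEN + base] = E + sum(1 for x in sites[i::LEN] if x == base)
    return freqs[i * LEN:(i + 1) * LEN]

for i in range(LEN):            # explicit loop instead of map()
    p = count(i) / NUM          # counts -> probabilities
    p = p[p > 0]                # drop zeros so log2 is defined
    entropy[i] = -np.sum(p * np.log2(p))
```

With this toy sites array the entropy comes out as all zeros, since each position contains only a single base.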

Related

How do I create an accumulate code that runs by iteration?

I am trying to create an "accumulate" function that runs via iteration. In general, a combiner takes in 2 parameters, e.g. combiner(x, y), and returns a combined value, e.g. x + y; term determines the function applied to each value, e.g. term(x) = x^2 means the next value of a will be x^2; and next determines the value after a, e.g. next(x) = x + 1.
I am having issues with my code, as it runs an unnecessary additional loop in certain cases (the null value has to be the last value that the loop processes before exiting the while loop), e.g.:
def accumulate_iter(combiner, null_value, term, a, next, b):
    result = term(a)
    while a <= b:
        a = next(a)
        if a <= b:
            result = combiner(term(a), result)
        else:
            result = combiner(null_value, result)
    return result
An example of the input will be:
accumulate_iter(lambda x, y: x * y, 1, lambda x: x * x, 1, lambda x: x + 1, 5)
and the output will give you: 14400
def accumulate_iter(combiner, term, a, next, b):
    result = term(a)
    while a <= b:
        a = next(a)
        if a <= b:
            result = combiner(term(a), result)
    return result

print(accumulate_iter(lambda x, y: x * y, lambda x: x * x, 1, lambda x: x + 1, 5))
Output:
14400
You could also get rid of the extra loop iteration completely, so you don't need the extra a <= b test inside the loop:
def accumulate_iter(combiner, term, a, next, b):
    result = term(a)
    a = next(a)
    while a <= b:
        result = combiner(term(a), result)
        a = next(a)
    return result
Note that this second version is more true to what's really going on. The loop "combines things", which means you need two things to combine, but you only pick up one new thing on each iteration. So it's natural to have a special case before the loop that deals with the first term and moves past it.

Python Summation Higher Order Function

I'm writing an iterative solution for summation, and it seems to give the correct answer. But I'm told by my tutor that it gives the wrong result for non-commutative combine operations. I googled it, but I'm still unsure what exactly that means...
Here is the recursive code I wrote:
def sum(term, a, next, b):
    # First recursive version
    if a > b:
        return 0
    else:
        return term(a) + sum(term, next(a), next, b)

def accumulate(combiner, base, term, a, next, b):
    # Improved version
    if a > b:
        return base
    else:
        return combiner(term(a), accumulate(combiner, base, term, next(a), next, b))

print(sum(lambda x: x, 1, lambda x: x + 1, 5))
print(accumulate(lambda x, y: x + y, 0, lambda x: x, 1, lambda x: x + 1, 5))
# Both solutions equate to 1 + 2 + 3 + 4 + 5
This is the iterative version I wrote that gives the wrong results for non-commutative combine operations.
Edit: accumulate_iter gives the wrong results when lambda x, y: x - y is used as the combiner:
def accumulate_iter(combiner, null_value, term, a, next, b):
    while a <= b:
        null_value = combiner(term(a), null_value)
        a = next(a)
    return null_value
Hoping if someone could provide a solution for this iterative version of accumulate
Your accumulate_iter works fine when the combiner is commutative, but it gives a different result when the combiner is non-commutative. That's because the recursive accumulate combines elements from back to front, while the iterative version combines them from front to back.
So what we need to do is make accumulate_iter combine from the back; the following is a rewritten accumulate_iter:
def accumulate_iter(a, b, base, combiner, next, term):
    # We want to combine from the back, but that's hard to do
    # while iterating from the front. So we first go through the
    # process and store the elements encountered in a list.
    l = []
    while a <= b:
        l.append(term(a))
        a = next(a)
    l.append(base)
    print(l)
    # Now we can combine from the back!
    while len(l) > 1:
        l[-2] = combiner(l[-2], l[-1])
        l.pop()
    return l[0]
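A quick sanity check (my addition, not part of the original answer) with a non-commutative combiner, using the recursive accumulate from the question as the reference:

```python
def accumulate(combiner, base, term, a, next, b):
    # recursive reference: combines from the back
    if a > b:
        return base
    return combiner(term(a), accumulate(combiner, base, term, next(a), next, b))

def accumulate_iter(a, b, base, combiner, next, term):
    # iterative version that also combines from the back
    l = []
    while a <= b:
        l.append(term(a))
        a = next(a)
    l.append(base)
    while len(l) > 1:
        l[-2] = combiner(l[-2], l[-1])
        l.pop()
    return l[0]

sub = lambda x, y: x - y      # non-commutative
print(accumulate(sub, 0, lambda x: x, 1, lambda x: x + 1, 5))       # 3
print(accumulate_iter(1, 5, 0, sub, lambda x: x + 1, lambda x: x))  # 3
```

Both give 1 - (2 - (3 - (4 - (5 - 0)))) = 3, whereas the original front-to-back iteration would not.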

Sum of n lambda functions

I have a list of lambda functions. Let's say it's this one:
l = [lambda x:x**i for i in range(n)]
For every n I need to be able to sum them so I'd have a function like this:
f = lambda x: x + x**2 + x**3 + ... + x**n
Is there any way?
Edit: I wasn't clear. I don't know anything in advance about those functions.
Is this the solution you're looking for?
Python 3.x:
n = 5
g = lambda y: sum( f(y) for f in (lambda x: x**i for i in range(n)) )
print(g(5)) # 781
Python 2.x:
n = 5
g = lambda y: sum( f(y) for f in (lambda x: x**i for i in xrange(n)) )
print g(5) # 781
If you mean a finite sum, up to x**n, use the mathematical shortcut
f = lambda x: (x**(n+1) - 1) / (x - 1) if x != 1 else n
f = lambda x,n: sum( x**i for i in range(n) )
print f(3,4)
>> 40
The simplest way to do this is to avoid creating the list of lambda functions, and to instead sum over a single function. Assuming you've defined x and n, you can do:
f = lambda x, i: x**i
sum(f(x, i) for i in range(n))
In your original example, you have actually created a closure, so your lambda functions do not do what you think they do. Instead, they are all identical, since they all use the final value of i in the closure. That is certainly not what you intended.
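For illustration, the late-binding behavior (and the usual default-argument fix) looks like this:

```python
n = 4

late = [lambda x: x**i for i in range(n)]        # every lambda sees the final i
print([f(2) for f in late])                      # [8, 8, 8, 8]

bound = [lambda x, i=i: x**i for i in range(n)]  # i captured at definition time
print([f(2) for f in bound])                     # [1, 2, 4, 8]
```

The `i=i` default argument evaluates i when each lambda is defined, rather than when it is called.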
n = 5
xpower = []
for i in range(n):
    xpower.insert(i, i + 1)
    print(i, xpower)

f = lambda x, xpower: sum(x**xpower[i] for i in range(len(xpower)))
print("Example with n=5, x=2:", " ", f(2, xpower))

Finding the Limit of (1+1/n)^n as n->infinity using Python/Numpy

I'm trying to use Python to plot how the limit (1+1/n)^n as n->infinity will go towards e at large n.
Why is the plot going towards 1 instead of e?
n = np.arange(0,10000,1)
f = lambda x: np.power(1 + (1/x), x)
plt.plot(n,f(n))
in this line:
f = lambda x: np.power(1 + (1/x), x)
when x is an int, 1/x uses integer division (in Python 2, and with NumPy integer arrays) and will always be 0 for x > 1, so do
f = lambda x: np.power(1 + (1.0/x), x)
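Equivalently, giving the array a float dtype (and starting at n = 1 to avoid dividing by zero) makes the values approach e — a quick check:

```python
import numpy as np

n = np.arange(1, 10000, dtype=np.float64)  # float dtype, skip n = 0
f = (1 + 1 / n) ** n
print(abs(f[-1] - np.e) < 1e-3)            # True: close to e at large n
```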

Fibonacci numbers, with an one-liner in Python 3?

I know there is nothing wrong with writing with proper function structure, but I would like to know how I can find the nth Fibonacci number in the most Pythonic way, with a one-liner.
I wrote this code, but it didn't seem like the best way to me:
>>> fib = lambda n:reduce(lambda x, y: (x[0]+x[1], x[0]), [(1,1)]*(n-2))[0]
>>> fib(8)
13
How could it be better and simpler?
fib = lambda n:reduce(lambda x,n:[x[1],x[0]+x[1]], range(n),[0,1])[0]
(This maintains a pair [a, b], mapped to [b, a + b] on each step, initialized to [0, 1], iterated n times; then it takes the first element.)
>>> fib(1000)
43466557686937456435688527675040625802564660517371780402481729089536555417949051
89040387984007925516929592259308032263477520968962323987332247116164299644090653
3187938298969649928516003704476137795166849228875L
(note that in this numbering, fib(0) = 0, fib(1) = 1, fib(2) = 1, fib(3) = 2, etc.)
(also note: reduce is a builtin in Python 2.7 but not in Python 3; you'd need to execute from functools import reduce in Python 3.)
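So on Python 3 the same one-liner reads:

```python
from functools import reduce

fib = lambda n: reduce(lambda x, n: [x[1], x[0] + x[1]], range(n), [0, 1])[0]
print(fib(8))  # 21
print(fib(0))  # 0
```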
A rarely seen trick is that a lambda function can refer to itself recursively:
fib = lambda n: n if n < 2 else fib(n-1) + fib(n-2)
By the way, it's rarely seen because it's confusing, and in this case it is also inefficient. It's much better to write it on multiple lines:
def fibs():
    a = 0
    b = 1
    while True:
        yield a
        a, b = b, a + b
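The generator can then be turned into an nth-Fibonacci function with itertools, for example:

```python
from itertools import islice

def fibs():
    a = 0
    b = 1
    while True:
        yield a
        a, b = b, a + b

fib = lambda n: next(islice(fibs(), n, None))
print(fib(8))  # 21
```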
I recently learned about using matrix multiplication to generate Fibonacci numbers, which was pretty cool. You take a base matrix:
[1, 1]
[1, 0]
and multiply it by itself N times to get:
[F(N+1), F(N)]
[F(N), F(N-1)]
This morning, doodling in the steam on the shower wall, I realized that you could cut the running time in half by starting with the second matrix, and multiplying it by itself N/2 times, then using N to pick an index from the first row/column.
With a little squeezing, I got it down to one line:
import numpy

def mm_fib(n):
    return (numpy.matrix([[2, 1], [1, 1]])**(n // 2))[0, (n + 1) % 2]
>>> [mm_fib(i) for i in range(20)]
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181]
This is a closed expression for the Fibonacci series that uses integer arithmetic, and is quite efficient.
fib = lambda n:pow(2<<n,n+1,(4<<2*n)-(2<<n)-1)%(2<<n)
>> fib(1000)
4346655768693745643568852767504062580256466051737178
0402481729089536555417949051890403879840079255169295
9225930803226347752096896232398733224711616429964409
06533187938298969649928516003704476137795166849228875L
It computes the result in O(log n) arithmetic operations, each acting on integers with O(n) bits. Given that the result (the nth Fibonacci number) is O(n) bits, the method is quite reasonable.
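A quick consistency check of the one-liner against a plain iterative version (my addition, not from the original answer):

```python
fib = lambda n: pow(2 << n, n + 1, (4 << 2 * n) - (2 << n) - 1) % (2 << n)

def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(all(fib(n) == fib_iter(n) for n in range(50)))  # True
```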
It's based on genefib4 from http://fare.tunes.org/files/fun/fibonacci.lisp , which in turn was based on a less efficient closed-form integer expression of mine (see: http://paulhankin.github.io/Fibonacci/)
If we consider the "most Pythonic way" to be elegant and effective, then:
def fib(nr):
    return int(((1 + math.sqrt(5)) / 2) ** nr / math.sqrt(5) + 0.5)
wins hands down. Why use an inefficient algorithm (and if you start using memoization we can forget about the one-liner) when you can solve the problem just fine in O(1) by approximating the result with the golden ratio? Though in reality I'd obviously write it in this form:
def fib(nr):
    ratio = (1 + math.sqrt(5)) / 2
    return int(ratio ** nr / math.sqrt(5) + 0.5)
More efficient and much easier to understand.
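As a sanity check (my addition): the rounded golden-ratio formula agrees with an exact iterative computation for small n, though double-precision error eventually makes it drift for large n (around n ≈ 70):

```python
import math

def fib(nr):
    ratio = (1 + math.sqrt(5)) / 2
    return int(ratio ** nr / math.sqrt(5) + 0.5)

def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(all(fib(n) == fib_exact(n) for n in range(40)))  # True
```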
This is a non-recursive (anonymous) memoizing one-liner:
fib = lambda x,y=[1,1]:([(y.append(y[-1]+y[-2]),y[-1])[1] for i in range(1+x-len(y))],y[x])[1]
fib = lambda n, x=0, y=1 : x if not n else fib(n-1, y, x+y)
run time O(n), fib(0) = 0, fib(1) = 1, fib(2) = 1 ...
I'm a Python newcomer, but I did some measurements for learning purposes. I collected some Fibonacci algorithms and took some measurements:
from datetime import datetime
import matplotlib.pyplot as plt
from functools import wraps
from functools import reduce
from functools import lru_cache
import numpy

def time_it(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        start_time = datetime.now()
        f(*args, **kwargs)
        end_time = datetime.now()
        elapsed = end_time - start_time
        elapsed = elapsed.microseconds
        return elapsed
    return wrapper

@time_it
def fibslow(n):
    if n <= 1:
        return n
    else:
        return fibslow(n-1) + fibslow(n-2)

@time_it
@lru_cache(maxsize=10)
def fibslow_2(n):
    if n <= 1:
        return n
    else:
        return fibslow_2(n-1) + fibslow_2(n-2)

@time_it
def fibfast(n):
    if n <= 1:
        return n
    a, b = 0, 1
    for i in range(1, n+1):
        a, b = b, a + b
    return a

@time_it
def fib_reduce(n):
    return reduce(lambda x, n: [x[1], x[0]+x[1]], range(n), [0, 1])[0]

@time_it
def mm_fib(n):
    return (numpy.matrix([[2, 1], [1, 1]])**(n//2))[0, (n+1) % 2]

@time_it
def fib_ia(n):
    return pow(2 << n, n+1, (4 << 2 * n) - (2 << n) - 1) % (2 << n)

if __name__ == '__main__':
    X = range(1, 200)
    # fibslow_times = [fibslow(i) for i in X]
    fibslow_2_times = [fibslow_2(i) for i in X]
    fibfast_times = [fibfast(i) for i in X]
    fib_reduce_times = [fib_reduce(i) for i in X]
    fib_mm_times = [mm_fib(i) for i in X]
    fib_ia_times = [fib_ia(i) for i in X]

    # print(fibslow_times)
    # print(fibfast_times)
    # print(fib_reduce_times)

    plt.figure()
    # plt.plot(X, fibslow_times, label='Slow Fib')
    plt.plot(X, fibslow_2_times, label='Slow Fib w cache')
    plt.plot(X, fibfast_times, label='Fast Fib')
    plt.plot(X, fib_reduce_times, label='Reduce Fib')
    plt.plot(X, fib_mm_times, label='Numpy Fib')
    plt.plot(X, fib_ia_times, label='Fib ia')
    plt.xlabel('n')
    plt.ylabel('time (microseconds)')
    plt.legend()
    plt.show()
The result is usually the same.
fibslow_2 (recursion with a cache), fib_ia (integer arithmetic), and fibfast seem to be the best ones. Maybe my decorator isn't the best way to measure performance, but for an overview it seemed good.
Another example, taking the cue from Mark Byers's answer:
fib = lambda n,a=0,b=1: a if n<=0 else fib(n-1,b,a+b)
I wanted to see if I could create an entire sequence, not just the final value.
The following will generate a list of length 100. It excludes the leading [0, 1] and works for both Python2 and Python3. No other lines besides the one!
(lambda i, x=[0,1]: [(x.append(x[y+1]+x[y]), x[y+1]+x[y])[1] for y in range(i)])(100)
Output
[1,
2,
3,
...
218922995834555169026,
354224848179261915075,
573147844013817084101]
Here's an implementation that doesn't use recursion, and only memoizes the last two values instead of the whole sequence history.
nthfib() below is the direct solution to the original problem (as long as imports are allowed)
It's less elegant than the reduce methods above but, although slightly different than what was asked for, it gains the ability to be used more efficiently as an infinite generator if one needs to output the sequence up to the nth number as well (rewritten slightly as fibgen() below).
from itertools import imap, islice, repeat
nthfib = lambda n: next(islice((lambda x=[0, 1]: imap((lambda x: (lambda setx=x.__setitem__, x0_temp=x[0]: (x[1], setx(0, x[1]), setx(1, x0_temp+x[1]))[0])()), repeat(x)))(), n-1, None))
>>> nthfib(1000)
43466557686937456435688527675040625802564660517371780402481729089536555417949051
89040387984007925516929592259308032263477520968962323987332247116164299644090653
3187938298969649928516003704476137795166849228875L
from itertools import imap, islice, repeat
fibgen = lambda:(lambda x=[0,1]: imap((lambda x: (lambda setx=x.__setitem__, x0_temp=x[0]: (x[1], setx(0, x[1]), setx(1, x0_temp+x[1]))[0])()), repeat(x)))()
>>> list(islice(fibgen(),12))
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
def fib(n):
    x = [0, 1]
    for i in range(n):
        x = [x[1], x[0] + x[1]]
    return x[0]
Taking the cue from Jason S, I think my version is easier to understand.
Starting Python 3.8, and the introduction of assignment expressions (PEP 572) (:= operator), we can use and update a variable within a list comprehension:
fib = lambda n,x=(0,1):[x := (x[1], sum(x)) for i in range(n+1)][-1][0]
This:
Initializes the pair of values n-1 and n-2 as a tuple x = (0, 1)
As part of a list comprehension looping n times, x is updated via an assignment expression (x := (x[1], sum(x))) to the new n-1 and n-2 values
Finally, we return the first element of x from the last iteration
To solve this problem I was inspired by a similar question here on Stack Overflow, Single Statement Fibonacci, and I got this single-line function that can output a list of the Fibonacci sequence. This is a Python 2 script, not tested on Python 3:
(lambda n, fib=[0,1]: fib[:n]+[fib.append(fib[-1] + fib[-2]) or fib[-1] for i in range(n-len(fib))])(10)
Assign this lambda function to a variable to reuse it:
fib = (lambda n, fib=[0,1]: fib[:n]+[fib.append(fib[-1] + fib[-2]) or fib[-1] for i in range(n-len(fib))])
fib(10)
The output is a list of the Fibonacci sequence:
[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
I don't know if this is the most Pythonic method, but this is the best I could come up with:
Fibonacci = lambda x, y=[1, 1]: [1]*x if (x < 2) else ([y.append(y[q-1] + y[q-2]) for q in range(2, x)], y)[1]
The above code doesn't use recursion, just a list to store the values.
My 2 cents
# One-liner
def nthfibonacci(n):
    return long(((((1+5**.5)/2)**n)-(((1-5**.5)/2)**n))/5**.5)
OR
# Steps
def nthfibonacci(nth):
    sq5 = 5**.5
    phi1 = (1+sq5)/2
    phi2 = -1 * (phi1 - 1)
    n1 = phi1**(nth+1)
    n2 = phi2**(nth+1)
    return long((n1 - n2)/sq5)
Why not use a list comprehension?
from math import sqrt, floor
[floor(((1+sqrt(5))**n-(1-sqrt(5))**n)/(2**n*sqrt(5))) for n in range(100)]
Without math imports, but less pretty:
[int(((1+(5**0.5))**n-(1-(5**0.5))**n)/(2**n*(5**0.5))) for n in range(100)]
import math
sqrt_five = math.sqrt(5)
phi = (1 + sqrt_five) / 2
fib = lambda n : int(round(pow(phi, n) / sqrt_five))
print([fib(i) for i in range(1, 26)])
A single-line lambda Fibonacci, but with some extra variables.
Similar:
def fibonacci(n):
    f = [1] + [0]
    for i in range(n):
        f = [sum(f)] + f[:-1]
        print f[1]
A simple Fibonacci number generator using recursion
fib = lambda x: 1-x if x < 2 else fib(x-1)+fib(x-2)
print fib(100)
This takes forever to calculate fib(100) in my computer.
There is also a closed form for Fibonacci numbers:
fib = lambda n: int(1/sqrt(5)*((1+sqrt(5))**n-(1-sqrt(5))**n)/2**n)
print fib(50)
This works accurately only up to about n = 72 due to floating-point precision problems.
Lambda with logical operators
fibonacci_oneline = lambda n = 10, out = []: [ out.append(i) or i if i <= 1 else out.append(out[-1] + out[-2]) or out[-1] for i in range(n)]
Here is how I do it; however, the function returns None for the list-comprehension part, to allow me to insert a loop inside.
So basically what it does is append new elements of the Fibonacci sequence to a list that must already hold two or more elements:
>>> f = lambda lst, x: print('The list must be of 2 or more') if len(lst) < 2 else [lst.append(lst[-1] + lst[-2]) for i in range(x)]
>>> a = [1, 2]
>>> f(a, 7)
You can generate a list of values once and use it as needed:
fib_fix = []
fib = lambda x: 1 if x <=2 else fib_fix[x-3] if x-2 <= len(fib_fix) else (fib_fix.append(fib(x-2) + fib(x-1)) or fib_fix[-1])
fib_x = lambda x: [fib(n) for n in range(1,x+1)]
fib_100 = fib_x(100)
Then, for example:
a = fib_fix[76]
