I tried to write a simple Python function that should return the list of Fibonacci numbers up to some specified maximum, but I am getting this error and can't work out what I am doing wrong.
def fib(a, b, n):
    f = a + b
    if f > n:
        return []
    return [f].extend(fib(b, f, n))
>>> fib(0, 1, 10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "lvl2.py", line 35, in fib
return [f].extend(fib(b,f,n))
File "lvl2.py", line 35, in fib
return [f].extend(fib(b,f,n))
File "lvl2.py", line 35, in fib
return [f].extend(fib(b,f,n))
File "lvl2.py", line 35, in fib
return [f].extend(fib(b,f,n))
TypeError: 'NoneType' object is not iterable
list.extend extends a list in place and returns None, so each recursive call returns None, and the level above then tries to iterate over that None, which raises the TypeError. Use the + operator to concatenate two lists instead.
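A quick REPL illustration of the difference:

>>> data = [1, 2]
>>> print(data.extend([3]))  # extend mutates data in place and returns None
None
>>> data
[1, 2, 3]
>>> [1, 2] + [3]             # + builds and returns a new list
[1, 2, 3]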
However, your code isn't particularly Pythonic. You should use a generator for infinite sequences, or, as a slight improvement over your code:
def fib(a, b, n):
    data = []
    f = a + b
    if f > n:
        return data
    data.append(f)
    data.extend(fib(b, f, n))
    return data
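This now returns the list you were after:

>>> fib(0, 1, 10)
[1, 2, 3, 5, 8]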
An example using generators for infinite sequences:
def fibgen(a, b):
    while True:
        a, b = b, a + b
        yield b
You can create the generator with fibgen(0, 1) and pull off the next value with the built-in next() function (in Python 2 this was the generator's .next() method).
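For example:

>>> g = fibgen(0, 1)
>>> next(g), next(g), next(g), next(g)
(1, 2, 3, 5)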
You may be interested in an especially neat Fibonacci implementation, though it only works in Python 3.2 and higher:
import functools

@functools.lru_cache(maxsize=None)
def fib(n):
    return fib(n-1) + fib(n-2) if n > 1 else n
The point of the decorator is to memoise the recursive calls. In other words, evaluating e.g. fib(20) naively is slow because you repeat a lot of effort, so instead we cache the values as they are computed.
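With the cache in place, even large inputs return immediately (a quick check against the definition above):

>>> fib(100)
354224848179261915075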
It is still probably more efficient to do

from itertools import islice

def nth(iterable, n, default=None):
    "Returns the nth item or a default value"
    return next(islice(iterable, n, None), default)

nth(fibgen(0, 1), n)
as above, because it doesn't have the space overhead of the large cache.
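For instance, the tenth value (0-indexed) of the sequence:

>>> nth(fibgen(0, 1), 10)
144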
I came across this error:

def test_rec():
    import ast
    exec(compile(ast.fix_missing_locations(ast.parse("""
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)
print(fact(5))
""")), "<string>", "exec"))
This yields this error, which is weird:
Traceback (most recent call last):
File "/Users/gecko/.pyenv/versions/3.9.0/envs/lampy/lib/python3.9/site-packages/nose/case.py", line 198, in runTest
self.test(*self.arg)
File "/Users/gecko/code/lampycode/tests/test_let_lang.py", line 6, in test_rec
exec(compile(ast.fix_missing_locations(ast.parse("""
File "<string>", line 4, in <module>
File "<string>", line 3, in fact
NameError: name 'fact' is not defined
If I copy and paste the same code into the REPL, it works fine:
>>> def fact(n):
...     return 1 if n == 0 else n * fact(n - 1)
...
>>> print(fact(5))
120
>>>
Any ideas?
I could reduce the problem further. Here is a minimal example: it ought to just overflow the stack, but instead it gives me the same 'not defined' error.
def test_rec3():
    exec("""
def f():
    f()
f()
""")
--
Second edit: going even further, this only happens inside functions.
This works:

exec("""
def f(n):
    print("end") if n == 1 else f(n-1)
f(10)""")
But this gives me the same error as above:

def foo():
    exec("""
def f(n):
    print("end") if n == 1 else f(n-1)
f(10)""")

foo()
If you use exec with the default locals, then binding local variables is undefined behavior. That includes def, which binds the new function to a local variable.
Also, functions defined inside exec can't access closure variables, which is what fact would be here.
The best way to avoid these problems is to not use exec. The second best way is to provide an explicit namespace:
namespace = {}
exec(whatever, namespace)
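A minimal sketch of that fix applied to the failing foo example above:

def foo():
    namespace = {}
    # An explicit dict serves as both globals and locals for the exec'd code,
    # so `def f` binds f somewhere the recursive lookup can actually find it.
    exec("""
def f(n):
    print("end") if n == 1 else f(n-1)
f(10)
""", namespace)

foo()  # prints: end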
I wrote the following code, which works for calculating Fibonacci sequences:
arr = [0]
i = 1

def get_fib(position):
    if position == 0:
        return arr[0]
    if position > 0:
        global i
        arr.append(i)
        i = i + arr[-2]
        get_fib(position - 1)
        return arr[position]
Is this still recursion, even though I don't use return before get_fib?
Do I need to include return for a function to be recursive?
The function is recursive because it calls itself. So, no, technically you don't need to return the value from that call for it to be recursive.
However, for this function to work, you do. Consider this example:
def factorial(n):
    if n == 0:
        return 1
    else:
        return factorial(n-1) * n
This does the same as:
def factorial(n):
    if n == 0:
        result = 1
    else:
        result = factorial(n-1) * n
    return result
What do you think would happen if we changed the next-to-last line to just:
factorial(n-1) * n
Now there is no longer a result being assigned, and the function will fail with an UnboundLocalError when it reaches return result (for any n other than 0), because result was never assigned. If we change the original in a similar way:
def factorial(n):
    if n == 0:
        return 1
    else:
        factorial(n-1) * n
It would calculate factorial(n-1) * n but simply discard the result, and since there is no statement after it, the function would fall off the end without ever executing a return statement, returning None instead.
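To see this concretely, here is a hypothetical broken_factorial built exactly that way:

>>> def broken_factorial(n):
...     if n == 0:
...         return 1
...     else:
...         broken_factorial(n-1) * n
...
>>> print(broken_factorial(1))
None

(For n >= 2 it gets worse: the inner call returns None, so evaluating None * n raises a TypeError before the function can even fall off the end.)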
An example of a recursive function that does something useful without returning anything:
from pathlib import Path

def list_txt(dir):
    for name in Path(dir).glob('*'):
        if name.is_dir():
            list_txt(name)
        elif name.suffix.lower() == '.txt':
            print(name)
This function is recursive, because it calls itself with list_txt(name), but it doesn't need to return anything, so it just returns None whenever it is done. It goes through a directory and all its subdirectories and lists all the .txt files in all of them. A recursive function isn't necessarily the best choice here, but it's very easy to write, maintain, and read.
Yes, the function is recursive, by definition, because it calls itself. Where the call to return is placed is not what determines whether or not the function is recursive. However, a recursive function must stop recursing at some point (known as the "base case"); if it doesn't, it causes infinite recursion, which raises an exception ("RecursionError: maximum recursion depth exceeded") once you pass the Python interpreter's recursion limit.
Consider two examples:
Example 1:
>>> def fun_a():
...     fun_a()
This is a simple function that calls itself. It has no terminating condition (the condition at which it stops calling itself and starts popping the stack frames that were built up during the calls to itself). This is an example of infinite recursion. If you execute such a function, you'll get an error message similar to this:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in fun_a
File "<stdin>", line 2, in fun_a
File "<stdin>", line 2, in fun_a
[Previous line repeated 995 more times]
RecursionError: maximum recursion depth exceeded
Example 2:
>>> def fun_b(n):
...     if n == 0:
...         return
...     fun_b(n-1)
In this case, even though the function calls itself over and over again, there is a termination condition that stops it from calling itself again. This is an example of finite recursion, and we say that the recursion unfolds on the base case (here, the base case is when the value of n becomes 0).
To conclude, this is the general shape of a finite recursion. The base case must come before the recursive call, as shown here; otherwise the function will never stop calling itself, leading to infinite recursion.

function_name(parameters):
    base case
    recursive call
A function can be recursive even if there is no return statement. A classic example is the inorder traversal of a binary tree. It doesn't have a return statement. The only requirement for a function to be recursive is that it should call itself. Below is the code (in C).
void inorder(struct node* root)
{
    if (root)
    {
        inorder(root->left);
        printf("%d", root->data);
        inorder(root->right);
    }
}
I'm trying out Python's timeit function in my REPL. It can time small pieces of code in two ways: either as a callable, or as a quoted expression. I'd like to know why the following code produces different timing results.
>>> import timeit
>>> timeit.timeit("lambda *args: None")
0.058281898498535156
>>> timeit.timeit(lambda *args: None)
0.0947730541229248
>>>
My intuition tells me that there should be more 'overhead' associated with the quoted string variant because it requires interpretation, but this does not appear to be the case; apparently my intuition is mistaken.
Here's another code snippet. There does not appear to be a huge time difference between invoking the callable function and timing the quoted function statement:
>>> def costly_func():
...     return list(map(lambda x: x^2, range(10)))
...
>>> import timeit
>>> timeit.timeit(costly_func)
2.421797037124634
>>> timeit.timeit("list(map(lambda x: x^2, range(10)))")
2.3588619232177734
Observe:
>>> def costly():
...     return list(map(str, list(range(1_000_000))))
...
>>> timeit.timeit(costly, number=100)
30.65105245400082
>>> timeit.timeit('costly', number=1_000_000_000, globals=globals())
27.45540758000061
Look at the number argument. It took 30 seconds to execute the function costly 100 times. It took almost 30 seconds to execute the expression costly 1'000'000'000 (!) times.
Why? Because the second call does not execute the function costly! The only thing it executes is the expression costly: notice the lack of parentheses, which means it's not a function call. The expression costly is basically a no-op (it just requires checking whether the name "costly" exists in the current scope), which is why it's so fast. If Python were smart enough to optimise it away, executing the expression costly (not costly()!) would be instantaneous!
In your case, saying lambda *args: None is simply defining an anonymous function, right? When you execute this exact code, a new function is created, but not executed (in order to do that, you should call it: (lambda *args: None)()).
So, timing the string "lambda *args: None" with timeit.timeit("lambda *args: None") basically tests how fast Python can spit out new anonymous functions.
Timing the function itself with timeit.timeit(lambda *args: None) tests how fast Python can execute an existing function.
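One way to see the difference on your own machine (a rough sketch; the absolute numbers will of course vary):

>>> import timeit
>>> timeit.timeit("lambda *args: None")      # only *creates* a new function each iteration
>>> timeit.timeit("(lambda *args: None)()")  # creates it and then calls it: strictly more work
>>> timeit.timeit(lambda *args: None)        # repeatedly calls one already-built function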
Spitting out newly created functions is a piece of cake, while actually running them can be really hard.
Take this code for example:
def Ackermann(m, n):
    if m == 0:
        return n + 1
    if m > 0:
        if n == 0:
            return Ackermann(m - 1, 1)
        elif n > 0:
            return Ackermann(m - 1, Ackermann(m, n - 1))
If you put that exact code in a string and timeit it, you'll get something like this:
>>> code = """def Ackermann(m, n):
...     if m == 0:
...         return n + 1
...     if m > 0:
...         if n == 0:
...             return Ackermann(m - 1, 1)
...         elif n > 0:
...             return Ackermann(m - 1, Ackermann(m, n - 1))"""
>>> timeit.timeit(code, number=1_000_000)
0.10481472999890684
Now try to timeit the function itself:
>>> timeit.timeit(lambda : Ackermann(6, 4), number=1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/timeit.py", line 232, in timeit
return Timer(stmt, setup, timer, globals).timeit(number)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/timeit.py", line 176, in timeit
timing = self.inner(it, self.timer)
File "<timeit-src>", line 6, in inner
File "<stdin>", line 1, in <lambda>
File "<stdin>", line 8, in Ackermann
File "<stdin>", line 8, in Ackermann
File "<stdin>", line 8, in Ackermann
[Previous line repeated 1 more time]
File "<stdin>", line 6, in Ackermann
File "<stdin>", line 8, in Ackermann
File "<stdin>", line 6, in Ackermann
File "<stdin>", line 8, in Ackermann
File "<stdin>", line 8, in Ackermann
File "<stdin>", line 8, in Ackermann
[Previous line repeated 983 more times]
File "<stdin>", line 6, in Ackermann
File "<stdin>", line 2, in Ackermann
RecursionError: maximum recursion depth exceeded in comparison
See - you can't even run that! Actually, probably nobody can since it's so much recursion!
Why did the first call succeed, though? Because it didn't execute anything; it just spat out a lot of new functions and got rid of all of them shortly after.
Recently I've been writing a download program, which uses the HTTP Range field to download many blocks at the same time. I wrote a Python class to represent the Range (the HTTP Range header denotes a closed interval):
class ClosedRange:
    def __init__(self, begin, end):
        self.begin = begin
        self.end = end

    def __iter__(self):
        yield self.begin
        yield self.end

    def __str__(self):
        return '[{0.begin}, {0.end}]'.format(self)

    def __len__(self):
        return self.end - self.begin + 1
The __iter__ magic method is to support the tuple unpacking:
header = {'Range': 'bytes={}-{}'.format(*the_range)}
And len(the_range) is how many bytes are in that Range.
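For example (a hypothetical 1 KiB range):

>>> the_range = ClosedRange(0, 1023)
>>> 'bytes={}-{}'.format(*the_range)
'bytes=0-1023'
>>> len(the_range)
1024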
Now I've found that 'bytes={}-{}'.format(*the_range) occasionally causes a MemoryError. After some debugging I found that the CPython interpreter will try to call len(iterable) when executing func(*iterable), and (may) allocate memory based on that length. On my machine, the MemoryError appears when len(the_range) is greater than 1 GB.
This is a simplified one:
class C:
    def __iter__(self):
        yield 5

    def __len__(self):
        print('__len__ called')
        return 1024**3

def f(*args):
    return args
>>> c = C()
>>> f(*c)
__len__ called
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
>>> # BTW, `list(the_range)` have the same problem.
>>> list(c)
__len__ called
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
So my questions are:
Why does CPython call len(iterable)? From this question I see you won't know an iterator's length until you iterate through it. Is this an optimization?
Can the __len__ method return a 'fake' length (i.e. not the real number of elements in memory) of an object?
Why does CPython call len(iterable)? From this question I see you won't know an iterator's length until you iterate through it. Is this an optimization?
When Python (assuming Python 3) executes f(*c), the opcode CALL_FUNCTION_EX is used:
0 LOAD_GLOBAL 0 (f)
2 LOAD_GLOBAL 1 (c)
4 CALL_FUNCTION_EX 0
6 POP_TOP
As c is an iterable, PySequence_Tuple is called to convert it to a tuple, and PyObject_LengthHint is called to determine the new tuple's length. Since a __len__ method is defined on c, it gets called, and its return value is used to allocate memory for the new tuple. When that allocation fails, a MemoryError is raised.
/* Guess result size and allocate space. */
n = PyObject_LengthHint(v, 10);
if (n == -1)
goto Fail;
result = PyTuple_New(n);
Can the __len__ method return a 'fake' length (i.e. not the real number of elements in memory) of an object?
In this scenario, yes.
When the return value of __len__ is smaller than needed, Python will adjust the memory of the new tuple object to fit while filling it. If it is larger than needed, Python will allocate extra memory up front, but _PyTuple_Resize is called at the end to reclaim the over-allocated space.
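A minimal sketch of that behaviour, using a hypothetical class D (analogous to C above) with a deliberately undersized hint:

class D:
    def __iter__(self):
        yield 5
        yield 6

    def __len__(self):
        return 1  # deliberately too small: CPython grows the tuple while filling it

print((lambda *args: args)(*D()))  # (5, 6): the wrong hint does no harm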
I have a function, and when it is called, I'd like to know what the return value is going to be assigned to - specifically when it is unpacked as a tuple. So:
a = func() # n = 1
vs.
a, b, c = func() # n = 3
I want to use the value of n in func. There must be some magic with inspect or _getframe that lets me do this. Any ideas?
Disclaimer (because this seems to be necessary nowadays): I know this is funky and bad practice, and it shouldn't be used in production code. It actually looks like something I'd expect in Perl. I'm not looking for a different way to solve my supposed "actual" problem; I'm curious how to achieve what I asked for above. One cool usage of this trick would be:
ONE, TWO, THREE = count()
ONE, TWO, THREE, FOUR = count()
with
def count():
    n = get_return_count()
    if not n:
        return
    return range(n)
Adapted from http://code.activestate.com/recipes/284742-finding-out-the-number-of-values-the-caller-is-exp/:
import inspect
import dis

def expecting(offset=0):
    """Return how many values the caller is expecting"""
    # Note: this recipe assumes CPython 2 bytecode, where co_code is a str
    # (hence the ord() calls) and the argument of UNPACK_SEQUENCE sits in the
    # bytes immediately after the opcode; on Python 3 you would index the
    # bytes object directly, and the offsets differ.
    f = inspect.currentframe().f_back.f_back
    i = f.f_lasti + offset
    bytecode = f.f_code.co_code
    instruction = ord(bytecode[i])
    if instruction == dis.opmap['UNPACK_SEQUENCE']:
        # the next byte holds the number of names being unpacked
        return ord(bytecode[i + 1])
    elif instruction == dis.opmap['POP_TOP']:
        # the result is discarded: nothing is expected
        return 0
    else:
        # plain assignment to a single name
        return 1
def count():
    # offset = 3 bytecodes from the call op to the unpack op
    return range(expecting(offset=3))
Or as an object that can detect when it is unpacked:
class count(object):
    def __iter__(self):
        # offset = 0 because we are at the unpack op
        return iter(range(expecting(offset=0)))
There is little magic about how Python does this.
Simply put, if you use more than one target name on the left-hand side, the right-hand expression must return a sequence of matching length.
Functions that return more than one value really just return one tuple. That is a standard Python structure, a sequence of a certain length. You can measure that length:
retval = func()
print(len(retval))
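For example:

>>> def func():
...     return 1, 2, 3
...
>>> retval = func()
>>> len(retval)
3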
Assignment unpacking is determined at compile time, you cannot dynamically add more arguments on the left-hand side to suit the function you are calling.
Python 3 lets you use a splat syntax, a wildcard, for capturing the remainder of an unpacked assignment:
a, b, *c = func()
c will now be a list with any remaining values beyond the first 2:
>>> def func(*a): return a
...
>>> a, b, *c = func(1, 2)
>>> a, b, c
(1, 2, [])
>>> a, b, *c = func(1, 2, 3)
>>> a, b, c
(1, 2, [3])
>>> a, b, *c = func(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: need more than 1 value to unpack