Lambda function behavior with and without keyword arguments - python

I am using lambda functions for GUI programming with tkinter.
Recently I got stuck when implementing buttons that open files:
self.file = ""
button = Button(conf_f, text="Tools opt.",
                command=lambda: tktb.helpers.openfile(self.file))
As you see, I want to define a file path that can be updated, and that is not known when creating the GUI.
The issue I had is that earlier my code was :
button = Button(conf_f, text="Tools opt.",
                command=lambda f=self.file: tktb.helpers.openfile(f))
The lambda function had a keyword argument to pass the file path. In this case, the parameter f was not updated when self.file was.
I got the keyword argument from a code snippet and I use it everywhere. Obviously I shouldn't...
This is still not clear to me... Could someone explain the difference between the two lambda forms and when to use one or the other?
PS: The following comment led me to the solution but I'd like a little more explanations:
lambda working oddly with tkinter

I'll try to explain it more in depth.
If you do
i = 0
f = lambda: i
you create a function (lambda is essentially a function) which accesses its enclosing scope's i variable.
Internally, it does so through a so-called closure which contains i. The closure is, loosely speaking, a kind of pointer to the real variable, which can hold different values at different points in time.
def a():
    # first, yield a function to access i
    yield lambda: i
    # now, set i to different values successively
    for i in range(100): yield
g = a() # create generator
f = next(g) # get the function
f() # -> error as i is not set yet
next(g)
f() # -> 0
next(g)
f() # -> 1
# and so on
f.func_closure # -> an object stemming from the local scope of a()
f.func_closure[0].cell_contents # -> the current value of this variable
Here, each value of i is, at its time, stored in that closure. If the function f() needs it, it gets it from there.
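(Note that func_closure is the Python 2 spelling; on Python 3 the attribute is __closure__. The same introspection, as a minimal sketch reusing the a() defined above:)
g = a()
f = next(g)  # get the function
next(g)      # let the loop set i = 0
f.__closure__                   # -> (<cell at 0x...: int object at 0x...>,)
f.__closure__[0].cell_contents  # -> 0, the current value of i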
You can see the difference in the disassembly listings.
The functions a() and f() above disassemble like this:
>>> dis.dis(a)
2 0 LOAD_CLOSURE 0 (i)
3 BUILD_TUPLE 1
6 LOAD_CONST 1 (<code object <lambda> at 0xb72ea650, file "<stdin>", line 2>)
9 MAKE_CLOSURE 0
12 YIELD_VALUE
13 POP_TOP
3 14 SETUP_LOOP 25 (to 42)
17 LOAD_GLOBAL 0 (range)
20 LOAD_CONST 2 (100)
23 CALL_FUNCTION 1
26 GET_ITER
>> 27 FOR_ITER 11 (to 41)
30 STORE_DEREF 0 (i)
33 LOAD_CONST 0 (None)
36 YIELD_VALUE
37 POP_TOP
38 JUMP_ABSOLUTE 27
>> 41 POP_BLOCK
>> 42 LOAD_CONST 0 (None)
45 RETURN_VALUE
>>> dis.dis(f)
2 0 LOAD_DEREF 0 (i)
3 RETURN_VALUE
Compare that to a function b() which looks like
>>> def b():
... for i in range(100): yield
>>> dis.dis(b)
2 0 SETUP_LOOP 25 (to 28)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (100)
9 CALL_FUNCTION 1
12 GET_ITER
>> 13 FOR_ITER 11 (to 27)
16 STORE_FAST 0 (i)
19 LOAD_CONST 0 (None)
22 YIELD_VALUE
23 POP_TOP
24 JUMP_ABSOLUTE 13
>> 27 POP_BLOCK
>> 28 LOAD_CONST 0 (None)
31 RETURN_VALUE
The main difference in the loop is
>> 13 FOR_ITER 11 (to 27)
16 STORE_FAST 0 (i)
in b() vs.
>> 27 FOR_ITER 11 (to 41)
30 STORE_DEREF 0 (i)
in a(): the STORE_DEREF stores in a cell object (closure), while STORE_FAST uses a "normal" variable, which (probably) works a little bit faster.
The lambda as well makes a difference:
>>> dis.dis(lambda: i)
1 0 LOAD_GLOBAL 0 (i)
3 RETURN_VALUE
Here you have a LOAD_GLOBAL, while the one above uses LOAD_DEREF. The latter, as well, is for the closure.
I completely forgot about lambda i=i: i.
If you have the value as a default parameter, it finds its way into the function via a completely different path: the current value of i gets passed to the just created function via a default parameter:
>>> i = 42
>>> f = lambda i=i: i
>>> dis.dis(f)
1 0 LOAD_FAST 0 (i)
3 RETURN_VALUE
This way the function gets called as f(). It detects that there is a missing argument and fills the respective parameter with the default value. All this happens before the function is called; from within the function you just see that the value is taken and returned.
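To see the practical difference between the two forms, here is a small demonstration of my own, contrasting late binding via the closure with early binding via the default parameter:
funcs_late = [lambda: i for i in range(3)]
funcs_bound = [lambda i=i: i for i in range(3)]
[f() for f in funcs_late]   # -> [2, 2, 2]: all three lambdas share one closure cell for i
[f() for f in funcs_bound]  # -> [0, 1, 2]: each default captured i at definition time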
And there is yet another way to accomplish your task: just use the lambda as if it took a value: lambda i: i. If you call it as-is, it complains about a missing argument.
But you can cope with that by using functools.partial:
import functools

ff = [functools.partial(lambda i: i, x) for x in range(100)]
ff[12]() # -> 12
ff[54]() # -> 54
This wrapper takes a callable and a number of arguments to be passed to it. The resulting object is a callable which calls the original callable with these arguments, plus any arguments you give to it at call time. It can be used here to lock in the intended value.


What's the effect of `pass` in Python debug mode

I just found this phenomenon by coincidence.
mylist = [('1',), ('2',), ('3',), ('4',)]
for l in mylist:
    print(l)
    pass # first pass
    pass # second pass
print("end")
If I set a breakpoint (the red dot) at the first pass and debug, the program stops there and the output is:
('1',)
However, if I set the breakpoint at the second pass and debug, the output includes end on the last line. It seems like this pass avoids stopping at that point and just lets the program run further.
I thought pass should have no real meaning, but it seems it does. So how should I understand pass?
Thank you all
pass is just syntactic sugar for the parser to know that a statement is intentionally left empty. It does not generate an opcode, and thus, the debugger can't pause when it gets hit. Instead you're seeing it halt when the next instruction is executed.
You can see this by printing the opcodes generated by an empty function:
>>> def test():
... pass
...
>>> import dis
>>> dis.dis(test)
2 0 LOAD_CONST 0 (None)
3 RETURN_VALUE
pass doesn't do anything. It compiles to no bytecode. However, the bytecode to jump back to the start of the loop is associated with the line of the last statement in the loop, and pass counts. Here's what it looks like if we decompile it, on Python 3.7.3:
import dis
dis.dis(r'''mylist = [('1',), ('2',), ('3',), ('4',)]
for l in mylist:
    print(l)
    pass # first pass
    pass # second pass
print("end")''')
Output:
1 0 LOAD_CONST 0 (('1',))
2 LOAD_CONST 1 (('2',))
4 LOAD_CONST 2 (('3',))
6 LOAD_CONST 3 (('4',))
8 BUILD_LIST 4
10 STORE_NAME 0 (mylist)
2 12 SETUP_LOOP 20 (to 34)
14 LOAD_NAME 0 (mylist)
16 GET_ITER
>> 18 FOR_ITER 12 (to 32)
20 STORE_NAME 1 (l)
3 22 LOAD_NAME 2 (print)
24 LOAD_NAME 1 (l)
26 CALL_FUNCTION 1
28 POP_TOP
4 30 JUMP_ABSOLUTE 18
>> 32 POP_BLOCK
6 >> 34 LOAD_NAME 2 (print)
36 LOAD_CONST 4 ('end')
38 CALL_FUNCTION 1
40 POP_TOP
42 LOAD_CONST 5 (None)
44 RETURN_VALUE
The JUMP_ABSOLUTE and POP_BLOCK get associated with line 4, the first pass.
When you set a breakpoint on the first pass, Python breaks before the JUMP_ABSOLUTE. When you set a breakpoint on the second pass, no bytecode is associated with line 5, so Python breaks on line 6, which does have bytecode.
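You can verify which lines own bytecode with dis.findlinestarts; a small check of my own (the line/offset mapping shown is for CPython 3.7-era bytecode and may differ on newer versions):
import dis

code = compile('''mylist = [('1',), ('2',), ('3',), ('4',)]
for l in mylist:
    print(l)
    pass # first pass
    pass # second pass
print("end")''', '<demo>', 'exec')

# Lines that own at least one bytecode offset:
print(sorted({lineno for _, lineno in dis.findlinestarts(code)}))
# -> [1, 2, 3, 4, 6]; line 5, the second pass, owns no bytecode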
pass is just a null operation; if you're looking to exit the for loop, you need to use break. The reason you see the end of the output from mylist at the second pass is that the first pass just continues the for loop.

PyCharm not hitting Quick and Dirty breakpoint on "pass"

I want to add a quick & dirty breakpoint, e.g when I am interested in stopping in the middle of iterating a long list.
for item in list:
    if item == 'curry':
        pass
I put a breakpoint on pass, and it is not hit(!).
If I add a following (empty) print
for item in list:
    if item == 'curry':
        pass
        print('')
and set breakpoints on both pass and print, only print is hit.
Any idea why? Windows 7, (portable) Python 3.7
[Update] As per the comment from @Adam.Er8 I tried inserting and breakpointing the ellipsis literal, ..., but that was not hit, although the following print('') was.
[Update++] Hmm, it does hit a breakpoint on the pass in
for key, value in dictionary.items():
    pass
The pass doesn't actually make it into the bytecode. The code is exactly the same as if it wasn't there. You can see this using the dis module (examples use Python 3.7 on Linux).
>>> import dis
>>> dis.dis('for i in a:\n\tprint("i")')
1 0 SETUP_LOOP 20 (to 22)
2 LOAD_NAME 0 (a)
4 GET_ITER
>> 6 FOR_ITER 12 (to 20)
8 STORE_NAME 1 (i)
2 10 LOAD_NAME 2 (print)
12 LOAD_CONST 0 ('i')
14 CALL_FUNCTION 1
16 POP_TOP
18 JUMP_ABSOLUTE 6
>> 20 POP_BLOCK
>> 22 LOAD_CONST 1 (None)
24 RETURN_VALUE
>>> dis.dis('for i in a:\n\tpass\n\tprint("i")')
1 0 SETUP_LOOP 20 (to 22)
2 LOAD_NAME 0 (a)
4 GET_ITER
>> 6 FOR_ITER 12 (to 20)
8 STORE_NAME 1 (i)
3 10 LOAD_NAME 2 (print)
12 LOAD_CONST 0 ('i')
14 CALL_FUNCTION 1
16 POP_TOP
18 JUMP_ABSOLUTE 6
>> 20 POP_BLOCK
>> 22 LOAD_CONST 1 (None)
24 RETURN_VALUE
What the bytecode is doing isn't as relevant as the fact that both blocks are identical. The pass is just ignored, so there is nothing for the debugger to break on.
Try replacing pass with ...:
for item in list:
    if item == 'curry':
        ...
You should be able to break-point there.
This is called the ellipsis literal. Unlike pass, it is actually "executed" (well, sort of), and this is why you can break on it like you would on any other statement, but it has zero side effects and reads like "nothing" (before discovering this trick I'd just write _ = 0).
EDIT:
Alternatively, you can just set a conditional breakpoint. In PyCharm this is done by right-clicking the breakpoint and writing the condition.
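If you'd rather not rely on IDE support, Python 3.7's built-in breakpoint() gives a programmatic equivalent (items and 'curry' here are just the question's placeholders):
for item in items:
    if item == 'curry':
        breakpoint()  # enters pdb (or whatever PYTHONBREAKPOINT selects) only when the condition holds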

What are these extra symbols in a comprehension's symtable?

I'm using symtable to get the symbol tables of a piece of code. Curiously, when using a comprehension (listcomp, setcomp, etc.), there are some extra symbols I didn't define.
Reproduction (using CPython 3.6):
import symtable
root = symtable.symtable('[x for x in y]', '?', 'exec')
# Print symtable of the listcomp
print(root.get_children()[0].get_symbols())
Output:
[<symbol '.0'>, <symbol '_[1]'>, <symbol 'x'>]
Symbol x is expected. But what are .0 and _[1]?
Note that with any other non-comprehension construct I'm getting exactly the identifiers I used in the code. E.g., lambda x: y only results in the symbols [<symbol 'x'>, <symbol 'y'>].
Also, the docs say that symtable.Symbol is...
An entry in a SymbolTable corresponding to an identifier in the source.
...although these identifiers evidently don't appear in the source.
The two names are used to implement list comprehensions as a separate scope, and they have the following meaning:
.0 is an implicit argument, used for the iterable (sourced from y in your case).
_[1] is a temporary name in the symbol table, used for the target list. This list eventually ends up on the stack.*
A list comprehension (as well as dict and set comprehension and generator expressions) is executed in a new scope. To achieve this, Python effectively creates a new anonymous function.
Because it is a function, really, you need to pass in the iterable you are looping over as an argument. This is what .0 is for, it is the first implicit argument (so at index 0). The symbol table you produced explicitly lists .0 as an argument:
>>> root = symtable.symtable('[x for x in y]', '?', 'exec')
>>> type(root.get_children()[0])
<class 'symtable.Function'>
>>> root.get_children()[0].get_parameters()
('.0',)
The first child of your table is a function with one argument named .0.
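You can confirm this directly from the symbol table; a quick sketch (output shown is from CPython 3.6, matching the question):
import symtable

root = symtable.symtable('[x for x in y]', '?', 'exec')
listcomp = root.get_children()[0]
for sym in listcomp.get_symbols():
    print(sym.get_name(), sym.is_parameter())
# .0 True
# _[1] False
# x False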
A list comprehension also needs to build the output list, and that list could be seen as a local too. This is the _[1] temporary variable. It never actually becomes a named local variable in the code object that is produced; this temporary variable is kept on the stack instead.
You can see the code object produced when you use compile():
>>> code_object = compile('[x for x in y]', '?', 'exec')
>>> code_object
<code object <module> at 0x11a4f3ed0, file "?", line 1>
>>> code_object.co_consts[0]
<code object <listcomp> at 0x11a4ea8a0, file "?", line 1>
So there is an outer code object, and in the constants, is another, nested code object. That latter one is the actual code object for the loop. It uses .0 and x as local variables. It also takes 1 argument; the names for arguments are the first co_argcount values in the co_varnames tuple:
>>> code_object.co_consts[0].co_varnames
('.0', 'x')
>>> code_object.co_consts[0].co_argcount
1
So .0 is the argument name here.
The _[1] temporary variable is handled on the stack, see the disassembly:
>>> import dis
>>> dis.dis(code_object.co_consts[0])
1 0 BUILD_LIST 0
2 LOAD_FAST 0 (.0)
>> 4 FOR_ITER 8 (to 14)
6 STORE_FAST 1 (x)
8 LOAD_FAST 1 (x)
10 LIST_APPEND 2
12 JUMP_ABSOLUTE 4
>> 14 RETURN_VALUE
Here we see .0 referenced again. _[1] is the list the BUILD_LIST opcode pushes onto the stack; then .0 is put on the stack for the FOR_ITER opcode to iterate over (when the iterator is exhausted, FOR_ITER removes it from the stack again).
Each iteration result is pushed onto the stack by FOR_ITER, popped again and stored in x with STORE_FAST, then loaded onto the stack again with LOAD_FAST. Finally LIST_APPEND takes the top element from the stack, and adds it to the list referenced by the next element on the stack, so to _[1].
JUMP_ABSOLUTE then brings us back to the top of the loop, where we continue to iterate until the iterable is done. Finally, RETURN_VALUE returns the top of the stack, again _[1], to the caller.
The outer code object does the work of loading the nested code object and calling it as a function:
>>> dis.dis(code_object)
1 0 LOAD_CONST 0 (<code object <listcomp> at 0x11a4ea8a0, file "?", line 1>)
2 LOAD_CONST 1 ('<listcomp>')
4 MAKE_FUNCTION 0
6 LOAD_NAME 0 (y)
8 GET_ITER
10 CALL_FUNCTION 1
12 POP_TOP
14 LOAD_CONST 2 (None)
16 RETURN_VALUE
So this makes a function object, with the function named <listcomp> (helpful for tracebacks), loads y, produces an iterator for it (the moral equivalent of iter(y)), and calls the function with that iterator as the argument.
If you wanted to translate that to pseudo-code, it would look like:
def <listcomp>(.0):
    _[1] = []
    for x in .0:
        _[1].append(x)
    return _[1]

<listcomp>(iter(y))
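Since <listcomp>, .0 and _[1] are not legal Python identifiers, here is the same logic with renamed variables so it can actually run (my own sketch):
def listcomp(dot0):       # dot0 stands in for the implicit ".0" argument
    result = []           # result stands in for the "_[1]" temporary
    for x in dot0:
        result.append(x)
    return result

y = range(3)
print(listcomp(iter(y)))  # -> [0, 1, 2]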
The _[1] temporary variable is of course not needed for generator expressions:
>>> symtable.symtable('(x for x in y)', '?', 'exec').get_children()[0].get_symbols()
[<symbol '.0'>, <symbol 'x'>]
Instead of appending to a list, the generator expression function object yields the values:
>>> dis.dis(compile('(x for x in y)', '?', 'exec').co_consts[0])
1 0 LOAD_FAST 0 (.0)
>> 2 FOR_ITER 10 (to 14)
4 STORE_FAST 1 (x)
6 LOAD_FAST 1 (x)
8 YIELD_VALUE
10 POP_TOP
12 JUMP_ABSOLUTE 2
>> 14 LOAD_CONST 0 (None)
16 RETURN_VALUE
Together with the outer bytecode, the generator expression is equivalent to:
def <genexpr>(.0):
    for x in .0:
        yield x

<genexpr>(iter(y))
* The temporary variable is actually no longer needed; they were used in the initial implementation of comprehensions, but this commit from April 2007 moved the compiler to just using the stack, and this has been the norm for all of the 3.x releases, as well as Python 2.7. It still is easier to think of the generated name as a reference to the stack. Because the variable is no longer needed, I filed issue 32836 to have it removed, and Python 3.8 and onwards will no longer include it in the symbol table.
In Python 2.6, you can still see the actual temporary name in the disassembly:
>>> import dis
>>> dis.dis(compile('[x for x in y]', '?', 'exec'))
1 0 BUILD_LIST 0
3 DUP_TOP
4 STORE_NAME 0 (_[1])
7 LOAD_NAME 1 (y)
10 GET_ITER
>> 11 FOR_ITER 13 (to 27)
14 STORE_NAME 2 (x)
17 LOAD_NAME 0 (_[1])
20 LOAD_NAME 2 (x)
23 LIST_APPEND
24 JUMP_ABSOLUTE 11
>> 27 DELETE_NAME 0 (_[1])
30 POP_TOP
31 LOAD_CONST 0 (None)
34 RETURN_VALUE
Note how the name actually has to be deleted again!
So, the way list comprehensions are implemented is by creating a code object; it is sort of like creating a one-time-use anonymous function, for scoping purposes:
>>> import dis
>>> def f(y): [x for x in y]
...
>>> dis.dis(f)
1 0 LOAD_CONST 1 (<code object <listcomp> at 0x101df9db0, file "<stdin>", line 1>)
3 LOAD_CONST 2 ('f.<locals>.<listcomp>')
6 MAKE_FUNCTION 0
9 LOAD_FAST 0 (y)
12 GET_ITER
13 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
16 POP_TOP
17 LOAD_CONST 0 (None)
20 RETURN_VALUE
>>>
Inspecting the code-object, I can find the .0 symbol:
>>> dis.dis(f.__code__.co_consts[1])
1 0 BUILD_LIST 0
3 LOAD_FAST 0 (.0)
>> 6 FOR_ITER 12 (to 21)
9 STORE_FAST 1 (x)
12 LOAD_FAST 1 (x)
15 LIST_APPEND 2
18 JUMP_ABSOLUTE 6
>> 21 RETURN_VALUE
Note: the LOAD_FAST in the list-comp code object is loading the unnamed argument .0, which corresponds to the iterator produced by GET_ITER in the outer code object.

Is this calculation executed in Python?

Disclaimer: I'm not new to programming, but I am new to Python. This may be a pretty basic question.
I have the following block of code:
for x in range(0, 100):
    y = 1 + 1;
Is the calculation of 1 + 1 in the second line executed 100 times?
I have two suspicions why it might not:
1) The compiler sees 1 + 1 as a constant value, and thus compiles this line into y = 2;.
2) The compiler sees that y is only set and never referenced, so it omits this line of code.
Are either/both of these correct, or does it actually get executed each iteration over the loop?
Option 1 is what happens: the CPython compiler simplifies mathematical expressions with constant operands in the peephole optimiser.
Option 2 does not happen, however; Python will not eliminate the loop body.
You can introspect what Python produces by looking at the bytecode; use the dis module to take a look:
>>> import dis
>>> def f():
... for x in range(100):
... y = 1 + 1
...
>>> dis.dis(f)
2 0 SETUP_LOOP 26 (to 29)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (100)
9 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
12 GET_ITER
>> 13 FOR_ITER 12 (to 28)
16 STORE_FAST 0 (x)
3 19 LOAD_CONST 3 (2)
22 STORE_FAST 1 (y)
25 JUMP_ABSOLUTE 13
>> 28 POP_BLOCK
>> 29 LOAD_CONST 0 (None)
32 RETURN_VALUE
The bytecode at position 19, LOAD_CONST, loads the value 2 to store in y.
You can see the constants associated with the code object in the co_consts attribute of a code object; for functions you can find that object under the __code__ attribute:
>>> f.__code__.co_consts
(None, 100, 1, 2)
None is the default return value for any function, 100 is the literal passed to the range() call, 1 is the original literal (left in place by the peephole optimiser), and 2 is the result of the optimisation.
The work is done in peephole.c, in the fold_binops_on_constants() function:
/* Replace LOAD_CONST c1. LOAD_CONST c2 BINOP
with LOAD_CONST binop(c1,c2)
The consts table must still be in list form so that the
new constant can be appended.
Called with codestr pointing to the first LOAD_CONST.
Abandons the transformation if the folding fails (i.e. 1+'a').
If the new constant is a sequence, only folds when the size
is below a threshold value. That keeps pyc files from
becoming large in the presence of code like: (None,)*1000.
*/
Take into account that Python is a highly dynamic language; such optimisations can only be applied to literals and constants that you cannot later dynamically replace.
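You can see the flip side of this by replacing the literals with names; a small sketch of my own (exact opcode names vary slightly across CPython versions):
import dis

ONE = 1

def g():
    for x in range(100):
        y = ONE + ONE  # names, not literals: nothing is folded here

dis.dis(g)  # the loop body now loads ONE twice and performs the addition each iteration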

python `for i in iter` vs `while True; i = next(iter)`

To my understanding, both these approaches work for operating on every item in a generator:
let i be our operator target
let my_iter be our generator
let callable do_something_with return None
While Loop + StopIteration
try:
    while True:
        i = next(my_iter)
        do_something_with(i)
except StopIteration:
    pass
For loop / list comprehension
for i in my_iter:
    do_something_with(i)

[do_something_with(i) for i in my_iter]
Minor edit: print(i) replaced with do_something_with(i) as suggested by @kojiro, to disambiguate the use case from the interpreter mechanics.
As far as I am aware, these are both applicable ways to iterate over a generator. Is there any reason to prefer one over the other?
Right now the for loop is looking superior to me. Due to: less lines/clutter and readability in general, plus single indent.
I really only see the while approach being advantageous if you want to handily break the loop on particular exceptions.
The third option is definitely NOT the same as the first two. The third example creates a list, with one entry for each return value of do_something_with(i), which happens to be None, so not a very interesting list.
The first two are semantically similar. There is a minor, technical difference: the while loop, as presented, does not work if my_iter is not, in fact, an iterator (i.e., has a __next__() method); for instance, if it's a list. The for loop works for all iterables (anything with an __iter__() method) in addition to iterators.
The correct version is thus:
my_iter = iter(my_iterable)
try:
    while True:
        i = next(my_iter)
        print(i)
except StopIteration:
    pass
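To see the iterator/iterable distinction concretely, a quick sketch of my own:
my_list = [1, 2, 3]
it = iter(my_list)   # lists are iterable: iter() returns a list_iterator
print(next(it))      # -> 1
# next(my_list)      # -> TypeError: 'list' object is not an iterator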
Now, aside from readability reasons, there is in fact a technical reason to prefer the for loop: there is a penalty you pay (in CPython, anyhow) for the number of bytecodes executed in tight inner loops. Let's compare:
In [1]: def forloop(my_iter):
...: for i in my_iter:
...: print(i)
...:
In [57]: dis.dis(forloop)
2 0 SETUP_LOOP 24 (to 27)
3 LOAD_FAST 0 (my_iter)
6 GET_ITER
>> 7 FOR_ITER 16 (to 26)
10 STORE_FAST 1 (i)
3 13 LOAD_GLOBAL 0 (print)
16 LOAD_FAST 1 (i)
19 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
22 POP_TOP
23 JUMP_ABSOLUTE 7
>> 26 POP_BLOCK
>> 27 LOAD_CONST 0 (None)
30 RETURN_VALUE
7 bytecodes executed in the inner loop, vs.:
In [55]: def whileloop(my_iterable):
....: my_iter = iter(my_iterable)
....: try:
....: while True:
....: i = next(my_iter)
....: print(i)
....: except StopIteration:
....: pass
....:
In [56]: dis.dis(whileloop)
2 0 LOAD_GLOBAL 0 (iter)
3 LOAD_FAST 0 (my_iterable)
6 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
9 STORE_FAST 1 (my_iter)
3 12 SETUP_EXCEPT 32 (to 47)
4 15 SETUP_LOOP 25 (to 43)
5 >> 18 LOAD_GLOBAL 1 (next)
21 LOAD_FAST 1 (my_iter)
24 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
27 STORE_FAST 2 (i)
6 30 LOAD_GLOBAL 2 (print)
33 LOAD_FAST 2 (i)
36 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
39 POP_TOP
40 JUMP_ABSOLUTE 18
>> 43 POP_BLOCK
44 JUMP_FORWARD 18 (to 65)
7 >> 47 DUP_TOP
48 LOAD_GLOBAL 3 (StopIteration)
51 COMPARE_OP 10 (exception match)
54 POP_JUMP_IF_FALSE 64
57 POP_TOP
58 POP_TOP
59 POP_TOP
8 60 POP_EXCEPT
61 JUMP_FORWARD 1 (to 65)
>> 64 END_FINALLY
>> 65 LOAD_CONST 0 (None)
68 RETURN_VALUE
9 bytecodes in the inner loop.
We can actually do even better, though.
In [58]: from collections import deque
In [59]: def deqloop(my_iter):
....: deque(map(print, my_iter), 0)
....:
In [61]: dis.dis(deqloop)
2 0 LOAD_GLOBAL 0 (deque)
3 LOAD_GLOBAL 1 (map)
6 LOAD_GLOBAL 2 (print)
9 LOAD_FAST 0 (my_iter)
12 CALL_FUNCTION 2 (2 positional, 0 keyword pair)
15 LOAD_CONST 1 (0)
18 CALL_FUNCTION 2 (2 positional, 0 keyword pair)
21 POP_TOP
22 LOAD_CONST 0 (None)
25 RETURN_VALUE
Everything happens in C: collections.deque, map and print are all builtins (in CPython), so in this case no bytecodes are executed for the looping at all. This is only a useful optimization when the iteration step is a C function (as is the case for print); otherwise, the overhead of a Python function call is larger than the JUMP_ABSOLUTE overhead.
The for loop is the most pythonic. Note that you can break out of for loops as well as while loops.
Don't use the list comprehension unless you need the resulting list, otherwise you are needlessly storing all the elements. Your example list comprehension will only work with the print function in Python 3, it won't work with the print statement in Python 2.
I would agree with you that the for loop is superior. As you mentioned, it is less cluttered and a lot easier to read. Programmers like to keep things as simple as possible, and the for loop does that. It is also better for novice Python programmers who might not have learned try/except yet. Also, as Alasdair mentioned, you can break out of for loops. Finally, the while loop raises an error if my_iter is a list, unless you call iter() on it first.
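If you want to measure the difference yourself, here is a quick timeit sketch of my own (absolute numbers depend on your machine and Python version):
import timeit

setup = "data = list(range(1000))"

for_stmt = "for i in data: pass"
while_stmt = """
it = iter(data)
try:
    while True:
        i = next(it)
except StopIteration:
    pass
"""

print(timeit.timeit(for_stmt, setup=setup, number=1000))   # the for loop...
print(timeit.timeit(while_stmt, setup=setup, number=1000)) # ...is typically faster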
