I was recently reading over PEP 572 on assignment expressions and stumbled upon an interesting use case:
# Compute partial sums in a list comprehension
total = 0
partial_sums = [total := total + v for v in values]
print("Total:", total)
I began exploring the snippet on my own and soon discovered that :+= wasn't valid Python syntax.
# Compute partial sums in a list comprehension
total = 0
partial_sums = [total :+= v for v in values]
print("Total:", total)
I suspect there may be some underlying reason in how := is implemented that wisely precludes :+=, but I'm not sure what it could be. If someone wiser in the ways of Python knows why :+= is unfeasible or impractical or otherwise unimplemented, please share your understanding.
The short version: The addition of the walrus operator was incredibly controversial, and they wanted to discourage overuse, so they limited it to only those cases for which a strong motivating use case was put forward, leaving = the convenient tool for all other cases.
There are a lot of things the walrus operator could do but doesn't (assigning to subscripted elements of a sequence or mapping, assigning to attributes, etc.), because allowing them would encourage using it all the time, to the detriment of the readability of typical code. Sure, you could choose to write more readable code, ignoring the opportunity to use weird and terrible punctuation-filled nonsense, but if my (and many people's) experience with Perl is any guide, people will use shortcuts to get it done faster now, even if the resulting code is unreadable, even by them, a month later.
There are other minor hurdles that get in the way (supporting all of the augmented assignment operators with the walrus would add a ton of new bytecodes to the interpreter, expanding the eval loop's switch significantly and potentially inhibiting optimizations/spilling from CPU cache), but fundamentally, your motivating case of using a list comprehension for side effects is a misuse of list comprehensions (a functional construct that, like all functional programming tools, is not intended to have side effects), as are most cases that would rely on augmented assignment expressions. The strong motivations for introducing this feature were things you could reasonably want to do and couldn't without the walrus, e.g.
Regexes, replacing this terrible, verbose arrow pattern:
m = re.match(r'pattern', string)
if m:
    do_thing(m)
else:
    m = re.match(r'anotherpattern', string)
    if m:
        do_another_thing(m)
    else:
        m = re.match(r'athirdpattern', string)
        if m:
            do_a_third_thing(m)
with this clean chain of tests:
if m := re.match(r'pattern', string):
    do_thing(m)
elif m := re.match(r'anotherpattern', string):
    do_another_thing(m)
elif m := re.match(r'athirdpattern', string):
    do_a_third_thing(m)
Reading a file by block, replacing:
while True:
    block = file.read(4096)
    if not block:
        break
    do_thing(block)
with the clean:
while block := file.read(4096):
    do_thing(block)
Those are useful things people really need to do with some frequency, and even the "canonical" versions I posted are often misimplemented in other ways (e.g. applying the regex test twice to avoid the arrow pattern, or duplicating the block = file.read(4096) once before the loop and once at the end so you can run while block:, but in exchange continue no longer works properly, and you risk the size of the block changing in one place but not the other); the walrus operator allowed for better code.
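For instance, the duplicated-read version described there looks something like this (process stands in for hypothetical per-block work):

block = file.read(4096)          # the read is duplicated: once before the loop...
while block:
    process(block)               # hypothetical per-block work
    block = file.read(4096)      # ...and once at the bottom; a `continue` above
                                 # this line would skip the re-read and loop forever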
The listcomp accumulator isn't (much) better code. itertools.accumulate exists, and even if it didn't, the problem can be worked around in other ways with simple generator functions or hand-rolled loops. The PEP does describe it as a benefit (it's why they allowed walrus assignments to "escape" comprehensions), but the discussion relating to this scoping special case was even more divided than the discussion over adding the walrus itself; you can do it, and it's arguably useful, but it's not something where you look at the two options and immediately say "Man, if not for the walrus, this would be awful".
And just to satisfy the folks who want evidence of this, note that they explicitly blocked some use cases that would have worked for free (it took extra work to make the grammar prohibit these usages), specifically to prevent walrus overuse. For example, you can't do:
x := 1
on a line by itself. There's no technical reason you can't, at all, but they intentionally made it a syntax error for the walrus to be used without being wrapped in something. (x := 1) works at top level, but it's annoying enough no one will ever choose it over x = 1, and that was the goal.
Until someone comes up with a common code pattern for which the lack of :+= makes it infeasibly/unnecessarily ugly (and it would have to be really common and really ugly to justify the punctuation-filled monstrosity that is :+=), they won't consider it.
The itertools module provides a lot of the mechanisms to achieve the same result without bloating the language:
from itertools import accumulate

partial_sums = list(accumulate(values))   # running totals of values
total = partial_sums[-1]                  # the final sum
My understanding is that a major part of comprehension efficiency is that the calculation for each element of the original iterable can be performed independently of the others. If you're calculating a running total, each element depends on the previous one, so the work has to be performed in series.
We already have sum(values), so what does this add in this use case?
If you want something for more general use cases, functools has reduce.
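For example, a minimal sketch of a fold with functools.reduce (values is a hypothetical input list):

from functools import reduce

values = [1, 2, 3, 4]
total = reduce(lambda acc, v: acc + v, values, 0)   # 10, equivalent to sum(values)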
Just saw (and enjoyed) the video of Brandon Rhodes talking at PyCon 2015 about bytearrays.
He said that the .extend method is slow, but the += operation is implemented differently and is much more efficient. Yes, indeed:
>>> timeit.timeit(setup='ba=bytearray()', stmt='ba.extend(b"xyz123")', number=1000000)
0.19515220914036036
>>> timeit.timeit(setup='ba=bytearray()', stmt='ba += b"xyz123"', number=1000000)
0.09053478296846151
What is the reason of having two ways of extending a bytearray? Are they performing exactly the same task? If not, what is the difference? Which one should be used when?
What is the reason of having two ways of extending a bytearray?
An operator is not chainable like function calls, whereas a method is.
The += operator cannot be used on a variable from an enclosing scope unless it is declared nonlocal, since augmented assignment rebinds the name (see the sketch after this list).
+= is slightly faster
.extend() may/might/could sometimes possibly be more readable
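A minimal sketch of the scoping limitation mentioned above: inside a nested function, an augmented assignment rebinds the name and so makes it local, whereas a method call does not (the names here are made up):

def make_buffer():
    ba = bytearray()
    def append(data):
        # ba += data  would raise UnboundLocalError here, because the
        # assignment makes `ba` local unless we declare `nonlocal ba`
        ba.extend(data)   # a plain method call, so no scoping issue
    return ba, append

ba, append = make_buffer()
append(b"xyz123")
print(ba)                 # bytearray(b'xyz123')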
Are they performing exactly the same task?
Depends on how you look at it. The implementation is not the same, but the result usually is. For a bunch of examples and explanations, try the SO search, and see for example this question: Concatenating two lists - difference between '+=' and extend()
Which one should be used when?
If you care about the small performance difference, use the operator when you can. Other than that, just use whichever you like to look at, of course given the restrictions I mentioned above.
But the main question is why += and .extend do not share the same internal function to do the actual work of extending a bytearray.
Because one is faster but has limitations, so we need the other for the cases where those limitations apply.
Bonus:
The augmented assignment operator can cause some funny business with tuples:
if you put a list in a tuple and use the += operator to extend the list, the extension succeeds and yet you get a TypeError
Source: https://emptysqua.re/blog/python-increment-is-weird-part-ii/
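A minimal demonstration of that quirk:

t = ([1, 2],)
try:
    t[0] += [3]          # list.__iadd__ extends the list in place, then the
                         # tuple item assignment fails
except TypeError as e:
    print(e)             # 'tuple' object does not support item assignment
print(t)                 # ([1, 2, 3],) -- the increment succeeded anyway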
I'm trying to write a Python function in a functional style. The problem is I don't know how to transform an if conditional into functional style. I have two variables, A and C, which I want to check against the following conditions:
def function():
    if A == 0: return 0
    elif C != 0: return 0
    elif A > 4: return 0
    else: return someOtherFunction()
I looked at the lambda short-circuiting, but I couldn't get it to work.
I thank you in advance for your help!
From the link you posted:
FP either discourages or outright disallows statements,
and instead works with the evaluation of expressions
So instead of if-statements, you could use a conditional expression:
def function():
    return (0 if ((A == 0) or (C != 0) or (A > 4)) else
            someOtherFunction())
or, (especially useful if there were many different values):
def function():
    return (0 if A == 0 else
            0 if C != 0 else
            0 if A > 4 else
            someOtherFunction())
By the way, the linked article proposes
(<cond1> and func1()) or (<cond2> and func2()) or (func3())
as a short-circuiting equivalent to
if <cond1>: func1()
elif <cond2>: func2()
else: func3()
The problem is they are not equivalent! The boolean expression fails to return the right value when <cond1> is Truish but func1() is Falsish (e.g. False or 0 or None). (Or similarly when <cond2> is Truish but func2 is Falsish.)
(<cond1> and func1())
is written with the intention of evaluating to func1() when <cond1> is Truish, but when func1() is Falsish, (<cond1> and func1()) evaluates to False, so the entire expression is passed over and Python goes on to evaluate (<cond2> and func2()) instead of short-circuiting.
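A minimal demonstration of the failure mode, with hypothetical functions:

def func1():
    return 0             # Truish condition, Falsish result

def func3():
    return "fallback"

result = (True and func1()) or (False and None) or func3()
print(result)            # fallback -- func1's return value 0 was silently discarded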
So here is a bit of interesting history: in 2005, Raymond Hettinger found a similar hard-to-find bug in type(z)==types.ComplexType and z.real or z when z = (0+4j), because z.real is Falsish. Motivated by a desire to save us from similar bugs, the idea of using a less error-prone syntax (conditional expressions) was born.
There's nothing non-"functional style" in your current code! Who said conditionals are not functional, anyway? Practically all functional languages have a conditional operator of some sort, for instance the cond special form in Lisp.
I'd take issue with the code if it were using the assignment operator, or mutating state in some way (say, appending to a list) but as it is, the function in the question is already in a "functional style" - there are no state changes.
Perhaps you meant something like this?
return A != 0 and C == 0 and A <= 4 and someOtherFunction()
The above will return False if either A == 0 or C != 0 or A > 4, in all other cases it will return the value of calling someOtherFunction(). And by the way, False can be assumed to evaluate to 0 (for example, 42 + False == 42), so the semantics in the code in the question will be preserved from the caller's point of view.
Notice that you're taking the information in the link out of context. There's absolutely no need to use a lambda for this, the article is only explaining how to get around an inherent limitation of lambdas in Python, which is that you can't return statements inside (like if-elif-else) - only expressions are allowed, but you can fake them with boolean operators. In the context of a normal function by all means, use conditionals.
Although Peter Norvig is a really great guy, his website is pretty hard to search.
I remember reading about Can I do the equivalent of (test ? result : alternative) in Python? on his site a while back during some research before a functional Python talk.
I'm not going to sway you one way or the other in light of my findings, but you should still go and read the section about ternary conditional operators in a functional style.
def if_(test, result, alternative=None):
    "If test is true, 'do' result, else alternative. 'Do' means call if callable."
    if test:
        if callable(result): result = result()
        return result
    else:
        if callable(alternative): alternative = alternative()
        return alternative
Just use it as you have it.
Python does not have the syntax and library built-ins to make it easy to write every function as a single expression. That's not the most important part of functional style anyway; the most important part is making sure that your functions maintain referential transparency. Basically, this means that whenever you supply them with the same input values, they return the same output.
So when trying to program functionally in Python, I do not refrain from using statements entirely. I use a block of local variable assignments as an equivalent of let ... in ... from Haskell. I use an if/elif/else chain as an equivalent of a case expression from Haskell. And often there are built-in types which do not provide an adequate interface for creating new "modified" versions of themselves rather than updating them in place, so you instead have to implement such operations with an explicit copy operation followed by mutations on the new copy.
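For instance, a rough sketch of the let ... in ... analogue, using made-up arithmetic for the bindings:

# Haskell:  let y = x + 1; z = y * 2 in y + z
def compute(x):
    y = x + 1      # each local is assigned exactly once, like a `let` binding
    z = y * 2
    return y + z

print(compute(3))  # y=4, z=8 -> 12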
Python allows you to implement functional-style designs directly. You can easily structure your program as a whole bunch of functions which explicitly pass state around and don't have side effects, so you can design your high level algorithms in a very similar way that you would in a functional programming language. Nearly every programming language supports functional programming in this sense, if you're prepared to stomach the boilerplate necessary to fake first class functions. Since Python has first class functions, you don't even have to put up with boilerplate.
But that's as far as Python goes in supporting functional programming. It doesn't really support implementing functions as single referentially transparent expressions. But that doesn't really matter: in a language that doesn't enforce or track purity, you get pretty much all the benefits of functional programming that you can by simply designing your program as a bunch of referentially transparent functions, and then how you implement those functions doesn't actually matter as long as the interface is kept referentially transparent.
I know Ruby very well. I believe that I may need to learn Python presently. For those who know both, what concepts are similar between the two, and what are different?
I'm looking for a list similar to a primer I wrote for Learning Lua for JavaScripters: simple things like whitespace significance and looping constructs; the name of nil in Python, and what values are considered "truthy"; is it idiomatic to use the equivalent of map and each, or are mumble somethingaboutlistcomprehensions mumble the norm?
If I get a good variety of answers I'm happy to aggregate them into a community wiki. Or else you all can fight and crib from each other to try to create the one true comprehensive list.
Edit: To be clear, my goal is "proper" and idiomatic Python. If there is a Python equivalent of inject, but nobody uses it because there is a better/different way to achieve the common functionality of iterating a list and accumulating a result along the way, I want to know how you do things. Perhaps I'll update this question with a list of common goals, how you achieve them in Ruby, and ask what the equivalent is in Python.
Here are some key differences to me:
Ruby has blocks; Python does not.
Python has functions; Ruby does not. In Python, you can take any function or method and pass it to another function. In Ruby, everything is a method, and methods can't be directly passed. Instead, you have to wrap them in Procs to pass them.
Ruby and Python both support closures, but in different ways. In Python, you can define a function inside another function. The inner function has read access to variables from the outer function, but not write access (unless the variable is declared nonlocal, added in Python 3). In Ruby, you define closures using blocks. The closures have full read and write access to variables from the outer scope.
Python has list comprehensions, which are pretty expressive. For example, if you have a list of numbers, you can write
[x*x for x in values if x > 15]
to get a new list of the squares of all values greater than 15. In Ruby, you'd have to write the following:
values.select {|v| v > 15}.map {|v| v * v}
The Ruby code doesn't feel as compact. It's also not as efficient since it first converts the values array into a shorter intermediate array containing the values greater than 15. Then, it takes the intermediate array and generates a final array containing the squares of the intermediates. The intermediate array is then thrown out. So, Ruby ends up with 3 arrays in memory during the computation; Python only needs the input list and the resulting list.
Python also supplies similar dict and set comprehensions, as well as lazy generator expressions.
Python supports tuples; Ruby doesn't. In Ruby, you have to use arrays to simulate tuples.
Ruby supports switch/case statements; Python historically did not (structural pattern matching with match/case arrived in Python 3.10).
Ruby supports the standard expr ? val1 : val2 ternary operator; Python spells the same thing val1 if expr else val2.
Ruby supports only single inheritance. If you need to mimic multiple inheritance, you can define modules and use mix-ins to pull the module methods into classes. Python supports multiple inheritance rather than module mix-ins.
Python supports only single-expression lambda functions. Ruby blocks, which are kind of/sort of lambda functions, can be arbitrarily big. Because of this, Ruby code is typically written in a more functional style than Python code. For example, to loop over a list in Ruby, you typically do
collection.each do |value|
  ...
end
The block works very much like a function being passed to collection.each. If you were to do the same thing in Python, you'd have to define a named inner function and then pass that to the collection each method (if list supported this method):
def some_operation(value):
    ...

collection.each(some_operation)
That doesn't flow very nicely. So, typically the following non-functional approach would be used in Python:
for value in collection:
    ...
Using resources in a safe way is quite different between the two languages. Here, the problem is that you want to allocate some resource (open a file, obtain a database cursor, etc), perform some arbitrary operation on it, and then close it in a safe manner even if an exception occurs.
In Ruby, because blocks are so easy to use (see the point about blocks above), you would typically code this pattern as a method that takes a block for the arbitrary operation to perform on the resource.
In Python, passing in a function for the arbitrary action is a little clunkier, since you have to write a named inner function (see the point about blocks above). Instead, Python uses a with statement for safe resource handling. See How do I correctly clean up a Python object? for more details.
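For example, where Ruby would pass a block to File.open, Python might write (assuming a data.txt file exists):

# Ruby:   File.open("data.txt") { |f| puts f.read }
with open("data.txt") as f:      # the with statement closes f even on exceptions
    print(f.read())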
I, like you, looked for inject and other functional methods when learning Python. I was disappointed to find that they weren't all there, or that Python favored an imperative approach. That said, most of the constructs are there if you look. In some cases, a library will make things nicer.
A couple of highlights for me:
The functional programming patterns you know from Ruby are available in Python. They just look a little different. For example, there's a map function:
def f(x):
    return x + 1

list(map(f, [1, 2, 3]))  # => [2, 3, 4] (in Python 3, map returns a lazy iterator)
Similarly, there is a reduce function to fold over lists, etc.
That said, Python lacks blocks and doesn't have a streamlined syntax for chaining or composing functions. (For a nice way of doing this without blocks, check out Haskell's rich syntax.)
For one reason or another, the Python community seems to prefer imperative iteration for things that would, in Ruby, be done without mutation. For example, folds (i.e., inject), are often done with an imperative for loop instead of reduce:
running_total = 0
for n in [1, 2, 3]:
    running_total = running_total + n
This isn't just a convention, it's also reinforced by the Python maintainers. For example, the Python 3 release notes explicitly favor for loops over reduce:
Use functools.reduce() if you really need it; however, 99 percent of the time an explicit for loop is more readable.
List comprehensions are a terse way to express complex functional operations (similar to Haskell's list monad). These aren't available in Ruby and may help in some scenarios. For example, a brute-force one-liner to find all the palindromes in a string (assuming you have a function p() that returns true for palindromes) looks like this:
s = 'string-with-palindromes-like-abbalabba'
l = len(s)
[s[x:y] for x in range(l) for y in range(x,l+1) if p(s[x:y])]
Methods in Python can be treated as context-free functions in many cases, which is something you'll have to get used to from Ruby but can be quite powerful.
In case this helps, I wrote up more thoughts here in 2011: The 'ugliness' of Python. They may need updating in light of today's focus on ML.
My suggestion: don't try to learn the differences. Learn how to approach the problem in Python. Just as there's a Ruby approach to each problem (one that works very well given the limitations and strengths of the language), there's a Python approach to the problem. They are both different. To get the best out of each language, you really should learn the language itself, and not just the "translation" from one to the other.
Now, with that said, knowing the differences will help you adapt faster and make one-off modifications to a Python program. And that's fine for a start to get writing. But try to learn from other projects the why behind the architecture and design decisions rather than the how behind the semantics of the language...
I know little Ruby, but here are a few bullet points about the things you mentioned:
nil, the value indicating lack of a value, would be None (note that you check for it like x is None or x is not None, not with == - or by coercion to boolean, see next point).
None, zero-esque numbers (0, 0.0, 0j (complex number)) and empty collections ([], {}, set(), the empty string "", etc.) are considered falsy, everything else is considered truthy.
For side effects, (for-)loop explicitly. For generating a new bunch of stuff without side-effects, use list comprehensions (or their relatives - generator expressions for lazy one-time iterators, dict/set comprehensions for the said collections).
Concerning looping: you have for, which operates on an iterable (no counting!), and while, which does what you would expect. The former is far more powerful, thanks to the extensive support for iterators. Not only is nearly everything that could be a list an iterator instead (at least in Python 3; in Python 2 you have both, and the default is sadly a list), there are also numerous tools for working with iterators: zip iterates over any number of iterables in parallel, enumerate gives you (index, item) pairs (on any iterable, not just lists), and you can even slice arbitrary (possibly large or infinite) iterables! I found that these make many looping tasks much simpler. Needless to say, they integrate just fine with list comprehensions, generator expressions, etc.
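A small sampler of those iterator tools (with made-up data):

from itertools import islice

names = ["ann", "bob", "cyd"]
scores = [3, 1, 2]
for i, (name, score) in enumerate(zip(names, scores)):
    print(i, name, score)                  # (index, item) pairs over parallel iterables

evens = (n for n in range(10**12) if n % 2 == 0)   # lazy, effectively unbounded
print(list(islice(evens, 5)))              # slice an arbitrary iterable: [0, 2, 4, 6, 8]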
In Ruby, instance variables and methods are completely unrelated, except when you explicitly relate them with attr_accessor or something like that.
In Python, methods are just a special class of attribute: one that is executable.
So for example:
>>> class foo:
...     x = 5
...     def y(): pass
...
>>> f = foo()
>>> type(f.x)
<type 'int'>
>>> type(f.y)
<type 'instancemethod'>
That difference has a lot of implications, for example that referring to f.y gives you the method object itself rather than calling it. Also, as you can see, f.x is public by default, whereas in Ruby, instance variables are private by default.
Why are there no ++ and -- operators in Python?
It's not because it doesn't make sense; it makes perfect sense to define "x++" as "x += 1, evaluating to the previous binding of x".
If you want to know the original reason, you'll have to either wade through old Python mailing lists or ask somebody who was there (eg. Guido), but it's easy enough to justify after the fact:
Simple increment and decrement aren't needed as much as in other languages. You don't write things like for(int i = 0; i < 10; ++i) in Python very often; instead you do things like for i in range(0, 10).
Since it's not needed nearly as often, there's much less reason to give it its own special syntax; when you do need to increment, += is usually just fine.
It's not a decision of whether it makes sense, or whether it can be done--it does, and it can. It's a question of whether the benefit is worth adding to the core syntax of the language. Remember, this is four operators--postinc, postdec, preinc, predec, and each of these would need to have its own class overloads; they all need to be specified, and tested; it would add opcodes to the language (implying a larger, and therefore slower, VM engine); every class that supports a logical increment would need to implement them (on top of += and -=).
This is all redundant with += and -=, so it would become a net loss.
The original answer I wrote below repeats a myth from the folklore of computing, debunked by Dennis Ritchie as "historically impossible" in the letters to the editors of Communications of the ACM, July 2012 (doi:10.1145/2209249.2209251):
The C increment/decrement operators were invented at a time when the C compiler wasn't very smart and the authors wanted to be able to specify the direct intent that a machine language operator should be used which saved a handful of cycles for a compiler which might do a
load memory
load 1
add
store memory
instead of
inc memory
and the PDP-11 even supported "autoincrement" and "autoincrement deferred" instructions corresponding to *++p and *p++, respectively. See section 5.3 of the manual if horribly curious.
As compilers are smart enough to handle the high-level optimization tricks built into the syntax of C, they are just a syntactic convenience now.
Python doesn't have tricks to convey intentions to the assembler because it doesn't use one.
I always assumed it had to do with this line of the zen of python:
There should be one — and preferably only one — obvious way to do it.
x++ and x+=1 do the exact same thing, so there is no reason to have both.
Of course, we could say "Guido just decided that way", but I think the question is really about the reasons for that decision. I think there are several reasons:
It mixes together statements and expressions, which is not good practice. See http://norvig.com/python-iaq.html
It generally encourages people to write less readable code
Extra complexity in the language implementation, which is unnecessary in Python, as already mentioned
Because, in Python, integers are immutable (int's += actually returns a different object).
Also, with ++/-- you need to worry about pre- versus post- increment/decrement, and it takes only one more keystroke to write x+=1. In other words, it avoids potential confusion at the expense of very little gain.
Clarity!
Python is a lot about clarity and no programmer is likely to correctly guess the meaning of --a unless s/he's learned a language having that construct.
Python is also a lot about avoiding constructs that invite mistakes and the ++ operators are known to be rich sources of defects.
These two reasons are enough not to have those operators in Python.
The decision that Python uses indentation to mark blocks rather than syntactical means such as some form of begin/end bracketing or mandatory end marking is based largely on the same considerations.
For illustration, have a look at the discussion around introducing a conditional operator (in C: cond ? resultif : resultelse) into Python in 2005.
Read at least the first message and the decision message of that discussion (which had several precursors on the same topic previously).
Trivia:
The PEP frequently mentioned therein is the "Python Enhancement Proposal" PEP 308. LC means list comprehension, GE means generator expression (and don't worry if those confuse you, they are none of the few complicated spots of Python).
My understanding of why Python does not have a ++ operator is the following: when you write a=b=c=1 in Python, you get three variables (labels) pointing at the same object (whose value is 1). You can verify this by using the id function, which returns an object's memory address:
In [19]: id(a)
Out[19]: 34019256
In [20]: id(b)
Out[20]: 34019256
In [21]: id(c)
Out[21]: 34019256
All three variables (labels) point to the same object. Now increment one of the variables and see how it affects the memory addresses:
In [22]: a = a + 1
In [23]: id(a)
Out[23]: 34019232
In [24]: id(b)
Out[24]: 34019256
In [25]: id(c)
Out[25]: 34019256
You can see that variable a now points to a different object than variables b and c. Because you used a = a + 1, this is explicitly clear: in other words, you assigned a completely different object to the label a. If you could write a++, it would suggest that you did not assign a new object to a but rather incremented the old one. All this stuff is, IMHO, about minimizing confusion. For a better understanding, see how Python variables work:
In Python, why can a function modify some arguments as perceived by the caller, but not others?
Is Python call-by-value or call-by-reference? Neither.
Does Python pass by value, or by reference?
Is Python pass-by-reference or pass-by-value?
Python: How do I pass a variable by reference?
Understanding Python variables and Memory Management
Emulating pass-by-value behaviour in python
Python functions call by reference
Code Like a Pythonista: Idiomatic Python
It was just designed that way. Increment and decrement operators are just shortcuts for x = x + 1. Python has typically adopted a design strategy which reduces the number of alternative means of performing an operation. Augmented assignment is the closest thing to increment/decrement operators in Python, and they weren't even added until Python 2.0.
I'm very new to Python, but I suspect the reason is the emphasis the language places on the distinction between mutable and immutable objects. Now, I know that x++ can easily be interpreted as x = x + 1, but it LOOKS like you're incrementing in place an object which could be immutable.
Just my guess/feeling/hunch.
To complete already good answers on that page:
Let's suppose we decided to do this. A prefix ++i would clash with the unary + and - operators.
Today, prefixing with ++ or -- does nothing, because it applies the unary plus operator twice (which does nothing) or the unary minus twice (which cancels itself):
>>> i=12
>>> ++i
12
>>> --i
12
So that would potentially break that logic.
Now, if one needs this behavior in list comprehensions or lambdas, from Python 3.8 onward it's possible with the new := assignment operator (PEP 572).
Pre-incrementing a and assigning it to b:
>>> a = 1
>>> b = (a:=a+1)
>>> b
2
>>> a
2
Post-incrementing just needs to compensate for the premature add by subtracting 1:
>>> a = 1
>>> b = (a:=a+1)-1
>>> b
1
>>> a
2
I believe it stems from the Python creed that "explicit is better than implicit".
First, Python is only indirectly influenced by C; it is heavily influenced by ABC, which apparently does not have these operators, so it should not be any great surprise not to find them in Python either.
Secondly, as others have said, increment and decrement are supported by += and -= already.
Third, full support for a ++ and -- operator set usually includes supporting both the prefix and postfix versions of them. In C and C++, this can lead to all kinds of "lovely" constructs that seem (to me) to be against the spirit of simplicity and straight-forwardness that Python embraces.
For example, while the C statement while(*t++ = *s++); may seem simple and elegant to an experienced programmer, to someone learning it, it is anything but simple. Throw in a mixture of prefix and postfix increments and decrements, and even many pros will have to stop and think a bit.
The ++ class of operators are expressions with side effects. This is something generally not found in Python.
For the same reason an assignment is not an expression in Python, thus preventing the common if (a = f(...)) { /* using a here */ } idiom.
Lastly, I suspect that these operators are not very consistent with Python's reference semantics. Remember, Python does not have variables (or pointers) with the semantics known from C/C++.
As I understood it, it's so you won't think the value in memory has changed.
In C, when you do x++, the value of x in memory changes.
But in Python, all numbers are immutable, so the address that x pointed to still holds x, not x+1. If you could write x++, you would think that x itself changed; what really happens is that the name x is rebound to a location in memory where x+1 is stored, creating that object if it doesn't already exist.
Other answers have described why it's not needed for iterators, but sometimes it is useful when assigning to increment a variable in-line; you can achieve the same effect using tuples and multiple assignment:
b = ++a becomes:
a,b = (a+1,)*2
and b = a++ becomes:
a,b = a+1, a
Python 3.8 introduces the := assignment operator, allowing us to achieve foo(++a) with
foo(a:=a+1)
foo(a++) is still elusive though.
Maybe a better question would be to ask why these operators exist in C. K&R calls the increment and decrement operators 'unusual' (Section 2.8, page 46). The Introduction calls them 'more concise and often more efficient'. I suspect that the fact that these operations always come up in pointer manipulation also played a part in their introduction.
In Python it has been probably decided that it made no sense to try to optimise increments (in fact I just did a test in C, and it seems that the gcc-generated assembly uses addl instead of incl in both cases) and there is no pointer arithmetic; so it would have been just One More Way to Do It and we know Python loathes that.
This may be because @GlennMaynard is looking at the matter in comparison with other languages, but in Python, you do things the Python way. It's not a 'why' question. It's there, and you can do things to the same effect with x += 1. In The Zen of Python, it is given: "There should be one — and preferably only one — obvious way to do it." Multiple choices are great in art (freedom of expression) but lousy in engineering.
I think this relates to the concepts of mutability and immutability of objects. 2, 3, 4, 5 are immutable in Python: 2 has a fixed id for the lifetime of the Python process.
x++ would essentially mean an in-place increment like C's: in C, with x = 3, x++ increments the 3 in memory to 4, unlike Python, where the 3 would still exist in memory.
Thus in Python, you don't need to recreate a value in memory. This may lead to performance optimizations.
This is a hunch-based answer.
I know this is an old thread, but the most common use case for ++i is not covered: manually indexing a collection when there are no provided indices. This situation is why Python provides enumerate().
Example: in any given language, when you use a construct like foreach to iterate over a set (for the sake of the example we'll even say it's an unordered set) and you need a unique index for everything to tell the items apart, say:
i = 0
stuff = {'a': 'b', 'c': 'd', 'e': 'f'}
uniquestuff = {}
for key, val in stuff.items():
    uniquestuff[key] = '{0}{1}'.format(val, i)
    i += 1
In cases like this, Python provides the enumerate function, e.g.

for i, (key, val) in enumerate(stuff.items()):
    uniquestuff[key] = '{0}{1}'.format(val, i)
In addition to the other excellent answers here, ++ and -- are also notorious for undefined behavior. For example, what happens in this code?
foo[bar] = bar++;
It's so innocent-looking, but it's wrong C (and C++), because you don't know whether the first bar will have been incremented or not. One compiler might do it one way, another might do it another way, and a third might make demons fly out of your nose. All would be perfectly conformant with the C and C++ standards.
(EDIT: C++17 has changed the behavior of the given code so that it is defined; it will be equivalent to foo[bar+1] = bar; ++bar; — which nonetheless might not be what the programmer is expecting.)
Undefined behavior is seen as a necessary evil in C and C++, but in Python, it's just evil, and avoided as much as possible.
There seems to be a lot of heated discussion on the net about the changes to the reduce() function in Python 3.0 and how it should be removed. I am having a little difficulty understanding why this is the case; I find it quite reasonable to use it in a variety of cases. If the contempt were simply subjective, I cannot imagine that such a large number of people would care about it.
What am I missing? What is the problem with reduce()?
As Guido says in his The fate of reduce() in Python 3000 post:
So now reduce(). This is actually the one I've always hated most, because, apart from a few examples involving + or *, almost every time I see a reduce() call with a non-trivial function argument, I need to grab pen and paper to diagram what's actually being fed into that function before I understand what the reduce() is supposed to do. So in my mind, the applicability of reduce() is pretty much limited to associative operators, and in all other cases it's better to write out the accumulation loop explicitly.
There is an excellent example of a confusing reduce in the Functional Programming HOWTO article:
Quick, what's the following code doing?
total = reduce(lambda a, b: (0, a[1] + b[1]), items)[1]
You can figure it out, but it takes time to disentangle the expression to figure out what's going on. Using a short nested def statement makes things a little bit better:
def combine(a, b):
    return 0, a[1] + b[1]

total = reduce(combine, items)[1]
But it would be best of all if I had simply used a for loop:
total = 0
for a, b in items:
    total += b
Or the sum() built-in and a generator expression:
total = sum(b for a,b in items)
Many uses of reduce() are clearer when written as for loops.
reduce() is not being removed -- it's simply being moved into the functools module. Guido's reasoning is that except for trivial cases like summation, code written using reduce() is usually clearer when written as an accumulation loop.
People worry it encourages an obfuscated style of programming, doing something that can be achieved with clearer methods.
I'm not against reduce myself, I also find it a useful tool sometimes.
The primary reason for reduce's existence is to avoid writing explicit for loops with accumulators. Even though Python has some facilities to support the functional style, it is not encouraged. If you like the 'real', not 'Pythonic', functional style, use a modern Lisp (Clojure?) or Haskell instead.
Using reduce to compute the value of a polynomial with Horner's method is both compact and expressive.
from functools import reduce

def polynomialValue(a, x):
    """Compute the polynomial's value at x.
    a is the list of coefficients, highest-degree term first."""
    return reduce(lambda value, coef: value * x + coef, a)
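For example, evaluating 2x^2 + 3x + 4 at x = 5:

print(polynomialValue([2, 3, 4], 5))   # ((2*5) + 3)*5 + 4 = 69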