Functional Python programming and conditionals - Python

I'm trying to write a Python function in a functional way. The problem is that I don't know how to transform an if conditional into a functional style. I have two variables, A and C, which I want to check for the following conditions:
def function():
    if(A==0): return 0
    elif(C!=0): return 0
    elif(A > 4): return 0
    else: someOtherFunction()
I looked at lambda short-circuiting, but I couldn't get it to work.
I thank you in advance for your help!

From the link you posted:
FP either discourages or outright disallows statements,
and instead works with the evaluation of expressions
So instead of if-statements, you could use a conditional expression:
def function():
    return (0 if ((A == 0) or (C != 0) or (A > 4)) else
            someOtherFunction())
or, (especially useful if there were many different values):
def function():
    return (0 if A == 0 else
            0 if C != 0 else
            0 if A > 4 else
            someOtherFunction())
By the way, the linked article proposes
(<cond1> and func1()) or (<cond2> and func2()) or (func3())
as a short-circuiting equivalent to
if <cond1>: func1()
elif <cond2>: func2()
else: func3()
The problem is they are not equivalent! The boolean expression fails to return the right value when <cond1> is Truish but func1() is Falsish (e.g. False or 0 or None). (Or similarly when <cond2> is Truish but func2 is Falsish.)
(<cond1> and func1())
is written with the intention of evaluating to func1() when <cond1> is Truish, but when func1() is Falsish, (<cond1> and func1()) evaluates to False, so the entire expression is passed over and Python goes on to evaluate (<cond2> and func2()) instead of short-circuiting.
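To make that failure mode concrete, here is a minimal sketch (the conditions and functions are invented for illustration):

def func1():
    return 0                 # Falsish result, even though cond1 held

def func2():
    return "from func2"

cond1, cond2 = True, True
result = (cond1 and func1()) or (cond2 and func2()) or "from func3"
print(result)                # prints 'from func2', even though cond1 was true and func1() already ran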
So here is a bit of interesting history. In 2005,
Raymond Hettinger found a similar hard-to-find bug in type(z)==types.ComplexType and z.real or z when z = (0+4j) because z.real is Falsish. Motivated by a desire to save us from similar bugs, the idea of using a less error-prone syntax (conditional expressions) was born.

There's nothing non-"functional style" in your current code! Who said conditionals are not functional anyway? Practically all functional languages have a conditional operator of some sort, for instance the cond special form in Lisp.
I'd take issue with the code if it were using the assignment operator, or mutating state in some way (say, appending to a list) but as it is, the function in the question is already in a "functional style" - there are no state changes.
Perhaps you meant something like this?
return A != 0 and C == 0 and A <= 4 and someOtherFunction()
The above will return False if either A == 0 or C != 0 or A > 4; in all other cases it will return the value of calling someOtherFunction(). And by the way, False can be assumed to evaluate to 0 (for example, 42 + False == 42), so the semantics of the code in the question will be preserved from the caller's point of view.
Notice that you're taking the information in the link out of context. There's absolutely no need to use a lambda for this, the article is only explaining how to get around an inherent limitation of lambdas in Python, which is that you can't return statements inside (like if-elif-else) - only expressions are allowed, but you can fake them with boolean operators. In the context of a normal function by all means, use conditionals.

Although Peter Norvig is a really great guy, his website is pretty hard to search.
I remember reading about Can I do the equivalent of (test ? result : alternative) in Python? on his site a while back during some research before a functional Python talk.
I'm not going to sway you one way or the other in light of my findings, but you should still go and read the section about ternary conditional operators in a functional style.
def if_(test, result, alternative=None):
    "If test is true, 'do' result, else alternative. 'Do' means call if callable."
    if test:
        if callable(result): result = result()
        return result
    else:
        if callable(alternative): alternative = alternative()
        return alternative

Just use it as you have it.
Python does not have the syntax and library built-ins to make it easy to write every function as a single, directly returned expression. That's not the most important part of functional style anyway; the most important part is making sure that your functions maintain referential transparency. Basically this means that whenever you supply them with the same input values, they return the same output.
So when trying to program functionally in Python, I do not refrain from using statements entirely. I use a block of local variable assignments as an equivalent of let ... in ... from Haskell. I use an if/elif/else chain as an equivalent of a case expression from Haskell. And often there are built-in types which do not provide an adequate interface for creating new "modified" versions of themselves rather than updating them in place, so you instead have to implement such operations with an explicit copy followed by mutations on the new copy.
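For instance, a sketch of that copy-then-mutate pattern for a "functional" dict update (the names are made up):

def with_key(d, key, value):
    # Return a new dict equal to d except that key maps to value; d itself is untouched.
    new_d = dict(d)        # explicit copy
    new_d[key] = value     # mutate only the fresh copy
    return new_d

settings = {"debug": False}
updated = with_key(settings, "debug", True)
# settings == {'debug': False}, updated == {'debug': True}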
Python allows you to implement functional-style designs directly. You can easily structure your program as a whole bunch of functions which explicitly pass state around and don't have side effects, so you can design your high level algorithms in a very similar way that you would in a functional programming language. Nearly every programming language supports functional programming in this sense, if you're prepared to stomach the boilerplate necessary to fake first class functions. Since Python has first class functions, you don't even have to put up with boilerplate.
But that's as far as Python goes in supporting functional programming. It doesn't really support implementing functions as single referentially transparent expressions. But that doesn't really matter. In a language that doesn't enforce or track purity, you get pretty much all the benefits of functional programming that you can by simply designing your program as a bunch of referentially transparent functions, and then how you implement those functions doesn't actually matter as long as the interface is kept referentially transparent.

Related

Why are assignments not allowed in Python's `lambda` expressions?

This is not a duplicate of Assignment inside lambda expression in Python, i.e., I'm not asking how to trick Python into assigning in a lambda expression.
I have some λ-calculus background. Considering the following code, it
looks like Python is quite willing to perform side-effects in lambda
expressions:
#!/usr/bin/python

def applyTo42(f):
    return f(42)

def double(x):
    return x * 2

class ContainsVal:
    def __init__(self, v):
        self.v = v
    def store(self, v):
        self.v = v

def main():
    print('== functional, no side effects')
    print('-- print the double of 42')
    print(applyTo42(double))
    print('-- print 1000 more than 42')
    print(applyTo42(lambda x: x + 1000))
    print('-- print c\'s value instead of 42')
    c = ContainsVal(23)
    print(applyTo42(lambda x: c.v))
    print('== not functional, side effects')
    print('-- perform IO on 42')
    applyTo42(lambda x: print(x))
    print('-- set c\'s value to 42')
    print(c.v)
    applyTo42(lambda x: c.store(x))
    print(c.v)
    #print('== illegal, but why?')
    #print(applyTo42(lambda x: c.v = 99))

if __name__ == '__main__':
    main()
But if I uncomment the lines
print('== illegal, but why?')
print(applyTo42(lambda x: c.v = 99))
I'll get
SyntaxError: lambda cannot contain assignment
Why not? What is the deeper reason behind this?
As the code demonstrates, it cannot be about “purity” in a
functional sense.
The only explanation I can imagine is that assignments do not return anything, not even None. But that sounds lame and would be easy to fix (one way: make lambda expressions return None if the body is a statement).
Not an answer:
Because it's defined that way (I want to know why it's defined that way).
Because it's in the grammar (see above).
Use def if you need statements (I did not ask for how to get
statements into a function).
“This would change syntax / the language / semantics” would be ok as an answer if you can come up with an example of such a change, and why it would be bad.
The entire reason lambda exists is that it's an expression.1 If you want something that's like lambda but is a statement, that's just def.
Python expressions cannot contain statements. This is, in fact, fundamental to the language, and Python gets a lot of mileage out of that decision. It's the reason indentation for flow control works instead of being clunky as in many other attempts (like CoffeeScript). It's the reason you can read off the state changes by skimming the first object in each line. It's even part of the reason the language is easy to parse, both for the compiler and for human readers.2
Changing Python to have some way to "escape" the statement-expression divide, except maybe in a very careful and limited way, would turn it into a completely different language, and one that no longer had many of the benefits that cause people to choose Python in the first place.
Changing Python to make most statements expressions (like, say, Ruby) would again turn it into a completely different language without Python's current benefits.
And if Python did make either of those changes, then there'd no longer be a reason for lambda in the first place;2,3 you could just use def statements inside an expression.
What about changing Python to instead make assignments expressions? Well, it should be obvious that would break "you can read off the state changes by skimming the first object in each line". Although Guido usually focuses on the fact that if spam=eggs would more often be an error than a useful thing.
The fact that Python does give you ways to get around that when needed, like setattr or even explicitly calling __setitem__ on globals(), doesn't mean it's something that should have direct syntactic support. Something that's very rarely needed doesn't deserve syntactic sugar—and even more so for something that's unusual enough that it should raise eyebrows and/or red flags when it actually is done.
1. I have no idea whether that was Guido's understanding when he originally added lambda back in Python 1.0. But it's definitely the reason lambda wasn't removed in Python 3.0.
2. In fact, Guido has, multiple times, suggested that allowing an LL(1) parser that humans can run in their heads is sufficient reason for the language being statement-based, to the point that other benefits don't even need to be discussed. I wrote about this a few years ago if anyone's interested.
3. If you're wondering why so many languages do have a lambda expression despite already having def: In many languages, ranging from C++ to Ruby, functions aren't first-class objects that can be passed around, so they had to invent a second thing that is first-class but works like a function. In others, from Smalltalk to Java, functions don't even exist, only methods, so again, they had to invent a second thing that's not a method but works like one. Python has neither of those problems.
4. A few languages, like C# and JavaScript, actually had perfectly working inline function definitions, but added some kind of lambda syntax as pure syntactic sugar, to make it more concise and less boilerplatey. That might actually be worth doing in Python (although every attempt at a good syntax so far has fallen flat), but it wouldn't be the current lambda syntax, which is nearly as verbose as def.
There is a syntax problem: an assignment is a statement, and the body of a lambda can only have expressions. Python's syntax is designed this way1. Check it out at https://docs.python.org/3/reference/grammar.html.
There is also a semantics problem: what does each statement return?
I don't think there is interest in changing this, as lambdas are meant for very simple and short code. Moreover, a statement would allow sequences of statements as well, and that's not desirable for lambdas.
It could be also fixed by selectively allowing certain statements in the lambda body, and specifying the semantics (e.g. an assignment returns None, or returns the assigned value; the latter makes more sense to me). But what's the benefit?
Lambdas and functions are interchangeable. If you really have a use-case for a particular statement in the body of a lambda, you can define a function that executes it, and your specific problem is solved.
Perhaps you can create a syntactic macro to allow that with MacroPy3 (I'm just guessing, as I'm a fan of the project, but I still haven't had the time to dive into it).
For example MacroPy would allow you to define a macro that transforms f[_ * _] into lambda a, b: a * b, so it should not be impossible to define the syntax for a lambda that calls a function you defined.
1 A good reason not to change it is that it would cripple the syntax, because a lambda can appear anywhere an expression can, and statements cannot. But that's a very subjective remark of my own.
My answer is based on chepner's comment above and doesn't draw from any other credible or official source; however, I think it will be useful.
If assignment was allowed in lambda expressions, then the error of confusing == (equality test) with = (assignment) would have more chances of escaping into the wild.
Example:
>>> # Correct use of equality test
... list(filter(lambda x: x==1, [0, 1, 0.0, 1.0, 0+0j, 1+0j]))
[1, 1.0, (1+0j)]
>>> # Suppose that assignment is used by mistake instead of equality testing
... # and the return value of an assignment expression is always None
... list(filter(lambda x: None, [0, 1, 0.0, 1.0, 0+0j, 1+0j]))
[]
>>> # Suppose that assignment is used by mistake instead of equality testing
... # and the return value of an assignment expression is the assigned value
... list(filter(lambda x: 1, [0, 1, 0.0, 1.0, 0+0j, 1+0j]))
[0, 1, 0.0, 1.0, 0j, (1+0j)]
As long as exec() (and eval()) is allowed inside lambda, you can do assignments inside lambda:
q = 3

def assign(var_str, val_str):
    exec("global " + var_str + "; " +
         var_str + " = " + val_str)

lambda_assign = lambda var_str, val_str: assign(var_str, val_str)

q                          ## gives: 3
lambda_assign("q", "100")
q                          ## gives: 100

## what would such expression be a win over the direct:
q = 100
## ? `lambda_assign("q", "100")` will be for sure slower than
## `q = 100` isn't it?

q_assign = lambda v: assign("q", v)
q_assign("33")
q                          ## 33

## but do I need lambda for q_assign?
def q_assign(v): assign("q", v)
## would do it, too, isn't it?
But since lambda expressions allow only one expression to be defined inside their body (at least in Python), what would be the point of allowing an assignment inside a lambda? Its net effect would be the same as assigning directly (without any lambda), q = 100, wouldn't it?
It would even be faster than going through a defined lambda, since it skips at least one function lookup and call...
There isn't really any deeper reason. It has nothing to do with lambda or functional language design; it's just to keep programmers from mixing up the = and == operators, which is a very common mistake in other languages.
If there's more to this story, it may be that Python's BDFL, GvR, has expressed his dislike of lambda and other functional features and attempted (then conceded) to remove them from Python 3 altogether: https://www.artima.com/weblogs/viewpost.jsp?thread=98196
At the time of this writing, the core devs were having a heated discussion on whether to include a limited name-binding assignment expression; the debate is still ongoing, so perhaps someday we may see it in lambda (unlikely).
As you said yourself, it is definitely not about side effects or purity; they just don't want lambda to be more than a single expression.
With that said, here's something about multi-expression assignments in lambdas; read on if you're interested.
It is not at all impossible in Python; in fact, it is sometimes necessary to capture a variable and sidestep late binding by (ab)using kwargs (keyword arguments).
edit:
code example
f = lambda x,a=1: (lambda c = a+2, b = a+1: (lambda e = x,d = c+1: print(a,b,c,d,e))())()
f("w")
# output 1 2 3 4 w
# expression assignment through an object's method call
if let(a=1) .a > 0 and let(b=let.a+1) .b != 1 and let(c=let.b+let.a) .c:
    print(let.a, let.b, let.c)
# output 1 2 3
As it stands, Python was designed as a statement-based language. Therefore assignment and other name bindings are statements, and do not have any result.
The Python core developers are currently discussing PEP 572, which would introduce a name-binding expression.
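(That proposal was ultimately accepted: since Python 3.8, the := assignment expression exists and is legal inside a lambda body. A minimal sketch, assuming Python 3.8+:)

# An assignment *expression*, not an assignment statement; y is local to the lambda.
increment_and_double = lambda x: (y := x + 1) * 2
print(increment_and_double(10))   # 22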
I think all the fellows have answered this already. We mostly use lambda functions when we just want to:
- create some simple function that does the work perfectly in a specific place (most of the time hidden inside some other big function);
- write a function that does not need a name;
- use it with some other built-ins such as map, list and so forth ...
>>> Celsius = [39.2, 36.5, 37.3, 37.8]
>>> Fahrenheit = map(lambda x: (float(9)/5)*x + 32, Celsius) # mapping the list here
>>> print Fahrenheit
[102.56, 97.700000000000003, 99.140000000000001, 100.03999999999999]
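For reference, that snippet is Python 2. Under Python 3, where map returns a lazy iterator and print is a function, roughly the same example would read:

Celsius = [39.2, 36.5, 37.3, 37.8]
Fahrenheit = list(map(lambda x: (9 / 5) * x + 32, Celsius))   # list() forces the lazy map
print(Fahrenheit)   # roughly [102.56, 97.7, 99.14, 100.04]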
Please visit this webpage, it could be useful. Keep it up!
https://www.python-course.eu/lambda.php

Is it more efficient to use if-return-return or if-else-return?

Suppose I have an if statement with a return. From the efficiency perspective, should I use
if(A > B):
    return A+1
return A-1
or
if(A > B):
    return A+1
else:
    return A-1
Should I prefer one or another when using a compiled language (C) or a scripted one (Python)?
Since the return statement terminates the execution of the current function, the two forms are equivalent (although the second one is arguably more readable than the first).
The efficiency of both forms is comparable; the underlying machine code has to perform a jump if the if condition is false anyway.
Note that Python supports a syntax that allows you to use only one return statement in your case:
return A+1 if A > B else A-1
From Chromium's style guide:
Don't use else after return:
# Bad
if (foo)
    return 1
else
    return 2

# Good
if (foo)
    return 1
return 2

return 1 if foo else 2
I personally avoid else blocks when possible. See the Anti-if Campaign
Also, they don't charge 'extra' for the line, you know :p
"Simple is better than complex" & "Readability is king"
delta = 1 if (A > B) else -1
return A + delta
Regarding coding style:
Most coding standards, no matter the language, ban multiple return statements from a single function as bad practice.
(Although personally I would say there are several cases where multiple return statements do make sense: text/data protocol parsers, functions with extensive error handling etc)
The consensus from all those industry coding standards is that the expression should be written as:
int result;

if(A > B)
{
    result = A+1;
}
else
{
    result = A-1;
}

return result;
Regarding efficiency:
The above example and the two examples in the question are all completely equivalent in terms of efficiency. The machine code in all these cases has to compare A > B, then branch to either the A+1 or the A-1 calculation, then store the result of that in a CPU register or on the stack.
EDIT :
Sources:
MISRA-C:2004 rule 14.7, which in turn cites...:
IEC 61508-3. Part 3, table B.9.
IEC 61508-7. C.2.9.
With any sensible compiler, you should observe no difference; they should be compiled to identical machine code as they're equivalent.
Version A is simpler and that's why I would use it.
And if you turn on all compiler warnings in Java, you will get a warning on the second version because it is unnecessary and increases code complexity.
This is a question of style (or preference) since the interpreter does not care. Personally, I try not to place the final statement of a value-returning function at an indent level other than the function base. The else in example 1 obscures, if only slightly, where the end of the function is.
By preference I use:
return A+1 if (A > B) else A-1
As it obeys both the good convention of having a single return statement as the last statement in the function (as already mentioned) and the good functional programming paradigm of avoiding imperative style intermediate results.
For more complex functions I prefer to break the function into multiple sub-functions to avoid premature returns if possible. Otherwise I revert to using an imperative style variable called rval. I try not to use multiple return statements unless the function is trivial or the return statement before the end is as a result of an error. Returning prematurely highlights the fact that you cannot go on. For complex functions that are designed to branch off into multiple subfunctions I try to code them as case statements (driven by a dict for instance).
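To make the dict-driven approach concrete, here is a minimal sketch (the handler names and payload shape are made up for illustration):

def handle_add(payload):
    return payload["a"] + payload["b"]

def handle_sub(payload):
    return payload["a"] - payload["b"]

HANDLERS = {"add": handle_add, "sub": handle_sub}   # the dict plays the role of a case statement

def dispatch(kind, payload):
    handler = HANDLERS.get(kind)
    if handler is None:
        raise ValueError("unknown kind: " + kind)
    return handler(payload)

dispatch("add", {"a": 2, "b": 3})   # 5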
Some posters have mentioned speed of operation. Run-time speed is secondary for me, since if you need speed of execution, Python is not the best language to use. I use Python because it's the efficiency of coding (i.e., writing error-free code) that matters to me.
I know the question is tagged python, but it mentions dynamic languages so thought I should mention that in ruby the if statement actually has a return type so you can do something like
def foo
  rv = if (A > B)
         A+1
       else
         A-1
       end
  return rv
end
Or because it also has implicit return simply
def foo
  if (A>B)
    A+1
  else
    A-1
  end
end
which gets around the style issue of not having multiple returns quite nicely.
From a performance point of view, it doesn't matter in Python, and I'd assume the same for every modern language out there.
It really comes down to style and readability. Your second option (if-else block) is more readable than the first, and a lot more readable than the one-liner ternary operation.

How to express conditional execution inside Python lambdas?

What I found out:
In Dive Into Python I read about the peculiar nature of the and and or operators,
and how short-circuit evaluation of boolean operators may be used to express conditionals more succinctly via the and-or trick that works very much like the ternary operator in C.
C:
result = condition ? a : b
Python:
result = condition and a or b
This seems to come in handy since lambda functions are restricted to one-liners in Python, but it uses logical syntax to express control flow.
Since Python 2.5 the inline-if seems to have come to the rescue as a more readable syntax for the and-or trick:
result = a if condition else b
So I guess that this is the pythonic replacement for the less readable and-or-construct.
Even if I want to nest multiple conditions, it still looks quite comprehensible:
result = a if condition1 else b if condition2 else c
But in a world of uncertainty I frequently find myself writing code like this to access a.b.c:
result = a and hasattr(a, 'b') and hasattr(a.b, 'c') and a.b.c or None
So with the help of inline-if I could probably get rid of some ands and ors, resulting in a quite readable piece of code:
result = a.b.c if hasattr(a, 'b') and hasattr(a.b, 'c') else None
I also discovered a somewhat arcane approach for conditionals in this recipe
result = (a, b)[condition]
but this doesn't short-circuit and will result in all kinds of errors if the result of the condition does not return a boolean, 0 or 1.
What I'd like to know:
Now I wonder if it is considered preferable / more pythonic to use the inline-if as much as possible when downwards compatibility is not a concern, or is it all just a matter of taste and how much one feels at home in the world of short-circuit evaluation?
Update
I just realized that inline-if is more than syntactic sugar for the and-or trick, since it will not fail when a is false in a boolean context. So it's probably more fail-proof.
The pythonic thing to do is recognise when you've stretched your code past what is sensible to cram into a single function and just use an ordinary function. Remember, there is nothing you can do with a lambda that couldn't also be done with a named function.
The breaking point will be different for everyone of course, but if you find yourself writing:
return a.b.c if hasattr(a, 'b') and hasattr(a.b, 'c') else None
too much, consider just doing this instead:
try:
    return a.b.c
except AttributeError:
    return None
Since there is a special language construct, the inline if-else, that does what you want and which was introduced to replace ugly workarounds like those you mention, it is a good idea to use it. Especially since hacks like the and-or trick usually have unexpected corner cases/bugs.
The and-or trick for example fails in this case:
a = 0
b = 1
c = True and a or b
c will be 1, which is not what you expect if you are looking for if-else semantics.
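For comparison, the conditional expression handles the same values correctly:

a = 0
b = 1
c = a if True else b
# c is 0, as intended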
So why use buggy workarounds when there is a language construct that does exactly what you want?
I would rather be explicit about what my code is doing. Inline if is very explicitly doing conditional assignment, and readability does count. Inline if would not have made it into the language if and/or side effects were considered preferable.
The pep for conditional expressions goes in to more detail about why that specific syntax was selected and it specifically discusses the and/or hack:
http://www.python.org/dev/peps/pep-0308/
This is for the particular case you mentioned, but I think that mostly every case in which you need chained short-circuit logic can be handled with a more elegant solution. Which is obviously a matter of taste, let me add, so scratch that if you think the above is better than this:
try:
    foo = a.b.c
except AttributeError:
    print "woops"
In other, less straightforward cases, encapsulating all the testing in a function might improve readability a lot.
EDIT: by the way, duck typing.

Learning Python from Ruby; Differences and Similarities

I know Ruby very well. I believe that I may need to learn Python presently. For those who know both, what concepts are similar between the two, and what are different?
I'm looking for a list similar to a primer I wrote for Learning Lua for JavaScripters: simple things like whitespace significance and looping constructs; the name of nil in Python, and what values are considered "truthy"; is it idiomatic to use the equivalent of map and each, or are mumble somethingaboutlistcomprehensions mumble the norm?
If I get a good variety of answers I'm happy to aggregate them into a community wiki. Or else you all can fight and crib from each other to try to create the one true comprehensive list.
Edit: To be clear, my goal is "proper" and idiomatic Python. If there is a Python equivalent of inject, but nobody uses it because there is a better/different way to achieve the common functionality of iterating a list and accumulating a result along the way, I want to know how you do things. Perhaps I'll update this question with a list of common goals, how you achieve them in Ruby, and ask what the equivalent is in Python.
Here are some key differences to me:
Ruby has blocks; Python does not.
Python has functions; Ruby does not. In Python, you can take any function or method and pass it to another function. In Ruby, everything is a method, and methods can't be directly passed. Instead, you have to wrap them in Proc's to pass them.
Ruby and Python both support closures, but in different ways. In Python, you can define a function inside another function. The inner function has read access to variables from the outer function, but not write access. In Ruby, you define closures using blocks. The closures have full read and write access to variables from the outer scope.
Python has list comprehensions, which are pretty expressive. For example, if you have a list of numbers, you can write
[x*x for x in values if x > 15]
to get a new list of the squares of all values greater than 15. In Ruby, you'd have to write the following:
values.select {|v| v > 15}.map {|v| v * v}
The Ruby code doesn't feel as compact. It's also not as efficient since it first converts the values array into a shorter intermediate array containing the values greater than 15. Then, it takes the intermediate array and generates a final array containing the squares of the intermediates. The intermediate array is then thrown out. So, Ruby ends up with 3 arrays in memory during the computation; Python only needs the input list and the resulting list.
Python also supplies similar map comprehensions.
Python supports tuples; Ruby doesn't. In Ruby, you have to use arrays to simulate tuples.
Ruby supports switch/case statements; Python does not.
Ruby supports the standard expr ? val1 : val2 ternary operator; Python does not.
Ruby supports only single inheritance. If you need to mimic multiple inheritance, you can define modules and use mix-ins to pull the module methods into classes. Python supports multiple inheritance rather than module mix-ins.
Python supports only single-line lambda functions. Ruby blocks, which are kind of/sort of lambda functions, can be arbitrarily big. Because of this, Ruby code is typically written in a more functional style than Python code. For example, to loop over a list in Ruby, you typically do
collection.each do |value|
  ...
end
The block works very much like a function being passed to collection.each. If you were to do the same thing in Python, you'd have to define a named inner function and then pass that to the collection each method (if list supported this method):
def some_operation(value):
    ...

collection.each(some_operation)
That doesn't flow very nicely. So, typically the following non-functional approach would be used in Python:
for value in collection:
    ...
Using resources in a safe way is quite different between the two languages. Here, the problem is that you want to allocate some resource (open a file, obtain a database cursor, etc), perform some arbitrary operation on it, and then close it in a safe manner even if an exception occurs.
In Ruby, because blocks are so easy to use (see #9), you would typically code this pattern as a method that takes a block for the arbitrary operation to perform on the resource.
In Python, passing in a function for the arbitrary action is a little clunkier since you have to write a named, inner function (see #9). Instead, Python uses a with statement for safe resource handling. See How do I correctly clean up a Python object? for more details.
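For example, a small sketch of the with-statement pattern for the file case (the filename and the process function are placeholders):

def process(line):               # placeholder for the arbitrary operation
    print(line.strip())

with open("data.txt") as f:      # the file is closed even if process() raises
    for line in f:
        process(line)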
I, like you, looked for inject and other functional methods when learning Python. I was disappointed to find that they weren't all there, or that Python favored an imperative approach. That said, most of the constructs are there if you look. In some cases, a library will make things nicer.
A couple of highlights for me:
The functional programming patterns you know from Ruby are available in Python. They just look a little different. For example, there's a map function:
def f(x):
    return x + 1

map(f, [1, 2, 3]) # => [2, 3, 4]
Similarly, there is a reduce function to fold over lists, etc.
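A small sketch (note that in Python 3, reduce has moved to functools):

from functools import reduce

total = reduce(lambda acc, n: acc + n, [1, 2, 3], 0)   # => 6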
That said, Python lacks blocks and doesn't have a streamlined syntax for chaining or composing functions. (For a nice way of doing this without blocks, check out Haskell's rich syntax.)
For one reason or another, the Python community seems to prefer imperative iteration for things that would, in Ruby, be done without mutation. For example, folds (i.e., inject) are often done with an imperative for loop instead of reduce:
running_total = 0
for n in [1, 2, 3]:
    running_total = running_total + n
This isn't just a convention, it's also reinforced by the Python maintainers. For example, the Python 3 release notes explicitly favor for loops over reduce:
Use functools.reduce() if you really need it; however, 99 percent of the time an explicit for loop is more readable.
List comprehensions are a terse way to express complex functional operations (similar to Haskell's list monad). These aren't available in Ruby and may help in some scenarios. For example, a brute-force one-liner to find all the palindromes in a string (assuming you have a function p() that returns true for palindromes) looks like this:
s = 'string-with-palindromes-like-abbalabba'
l = len(s)
[s[x:y] for x in range(l) for y in range(x,l+1) if p(s[x:y])]
Methods in Python can be treated as context-free functions in many cases, which is something you'll have to get used to from Ruby but can be quite powerful.
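For instance, a method accessed through its class is just a plain function that takes the instance as its first argument (a small sketch):

words = ["foo", "bar"]
print(list(map(str.upper, words)))   # ['FOO', 'BAR']
print(str.upper("baz"))              # 'BAZ', same as "baz".upper()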
In case this helps, I wrote up more thoughts here in 2011: The 'ugliness' of Python. They may need updating in light of today's focus on ML.
My suggestion: Don't try to learn the differences. Learn how to approach the problem in Python. Just as there's a Ruby approach to each problem (that works very well given the limitations and strengths of the language), there's a Python approach to the problem. They are both different. To get the best out of each language, you really should learn the language itself, and not just the "translation" from one to the other.
Now, with that said, knowing the differences will help you adapt faster and make one-off modifications to a Python program. And that's fine for a start to get writing. But try to learn from other projects the why behind the architecture and design decisions rather than the how behind the semantics of the language...
I know little Ruby, but here are a few bullet points about the things you mentioned:
nil, the value indicating lack of a value, would be None (note that you check for it like x is None or x is not None, not with == - or by coercion to boolean, see next point).
None, zero-esque numbers (0, 0.0, 0j (complex number)) and empty collections ([], {}, set(), the empty string "", etc.) are considered falsy, everything else is considered truthy.
For side effects, (for-)loop explicitly. For generating a new bunch of stuff without side-effects, use list comprehensions (or their relatives - generator expressions for lazy one-time iterators, dict/set comprehensions for the said collections).
Concerning looping: you have for, which operates on an iterable (no counting!), and while, which does what you would expect. The former is far more powerful, thanks to the extensive support for iterators. Nearly everything that can be an iterator instead of a list is an iterator (at least in Python 3 - in Python 2 you have both, and the default is a list, sadly). There are numerous tools for working with iterators - zip iterates over any number of iterables in parallel, enumerate gives you (index, item) pairs (on any iterable, not just lists), and you can even slice arbitrary (possibly large or infinite) iterables! I found that these make many looping tasks much simpler. Needless to say, they integrate just fine with list comprehensions, generator expressions, etc.
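A small sketch of those iterator tools together (the data is made up; islice comes from itertools):

from itertools import islice

names = ["ann", "bob", "cy"]
ages = [31, 25, 40]

for i, (name, age) in enumerate(zip(names, ages)):   # parallel iteration plus an index
    print(i, name, age)

first_two = list(islice(zip(names, ages), 2))        # "slicing" an arbitrary iterable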
In Ruby, instance variables and methods are completely unrelated, except when you explicitly relate them with attr_accessor or something like that.
In Python, methods are just a special class of attribute: one that is executable.
So for example:
>>> class foo:
... x = 5
... def y(): pass
...
>>> f = foo()
>>> type(f.x)
<type 'int'>
>>> type(f.y)
<type 'instancemethod'>
That difference has a lot of implications, like for example that referring to f.y refers to the method object, rather than calling it. Also, as you can see, f.x is public by default, whereas in Ruby, instance variables are private by default.

Why are there no ++ and -- operators in Python?

Why are there no ++ and -- operators in Python?
It's not because it doesn't make sense; it makes perfect sense to define "x++" as "x += 1, evaluating to the previous binding of x".
If you want to know the original reason, you'll have to either wade through old Python mailing lists or ask somebody who was there (eg. Guido), but it's easy enough to justify after the fact:
Simple increment and decrement aren't needed as much as in other languages. You don't write things like for(int i = 0; i < 10; ++i) in Python very often; instead you do things like for i in range(0, 10).
Since it's not needed nearly as often, there's much less reason to give it its own special syntax; when you do need to increment, += is usually just fine.
It's not a decision of whether it makes sense, or whether it can be done--it does, and it can. It's a question of whether the benefit is worth adding to the core syntax of the language. Remember, this is four operators--postinc, postdec, preinc, predec, and each of these would need to have its own class overloads; they all need to be specified, and tested; it would add opcodes to the language (implying a larger, and therefore slower, VM engine); every class that supports a logical increment would need to implement them (on top of += and -=).
This is all redundant with += and -=, so it would become a net loss.
This original answer I wrote is a myth from the folklore of computing: debunked by Dennis Ritchie as "historically impossible" as noted in the letters to the editors of Communications of the ACM July 2012 doi:10.1145/2209249.2209251
The C increment/decrement operators were invented at a time when the C compiler wasn't very smart and the authors wanted to be able to specify the direct intent that a machine language operator should be used which saved a handful of cycles for a compiler which might do a
load memory
load 1
add
store memory
instead of
inc memory
and the PDP-11 even supported "autoincrement" and "autoincrement deferred" instructions corresponding to *++p and *p++, respectively. See section 5.3 of the manual if horribly curious.
As compilers are smart enough to handle the high-level optimization tricks built into the syntax of C, they are just a syntactic convenience now.
Python doesn't have tricks to convey intentions to the assembler because it doesn't use one.
I always assumed it had to do with this line of the zen of python:
There should be one — and preferably only one — obvious way to do it.
x++ and x+=1 do the exact same thing, so there is no reason to have both.
Of course, we could say "Guido just decided that way", but I think the question is really about the reasons for that decision. I think there are several reasons:
It mixes together statements and expressions, which is not good practice. See http://norvig.com/python-iaq.html
It generally encourages people to write less readable code
Extra complexity in the language implementation, which is unnecessary in Python, as already mentioned
Because, in Python, integers are immutable (int's += actually returns a different object).
Also, with ++/-- you need to worry about pre- versus post- increment/decrement, and it takes only one more keystroke to write x+=1. In other words, it avoids potential confusion at the expense of very little gain.
Clarity!
Python is a lot about clarity and no programmer is likely to correctly guess the meaning of --a unless s/he's learned a language having that construct.
Python is also a lot about avoiding constructs that invite mistakes and the ++ operators are known to be rich sources of defects.
These two reasons are enough not to have those operators in Python.
The decision that Python uses indentation to mark blocks rather
than syntactical means such as some form of begin/end bracketing
or mandatory end marking is based largely on the same considerations.
For illustration, have a look at the discussion around introducing a conditional operator (in C: cond ? resultif : resultelse) into Python in 2005.
Read at least the first message and the decision message of that discussion (which had several precursors on the same topic previously).
Trivia:
The PEP frequently mentioned therein is the "Python Enhancement Proposal" PEP 308. LC means list comprehension, GE means generator expression (and don't worry if those confuse you, they are none of the few complicated spots of Python).
My understanding of why Python does not have a ++ operator is the following: when you write a=b=c=1 in Python, you get three variables (labels) pointing at the same object (whose value is 1). You can verify this by using the id function, which returns an object's memory address:
In [19]: id(a)
Out[19]: 34019256
In [20]: id(b)
Out[20]: 34019256
In [21]: id(c)
Out[21]: 34019256
All three variables (labels) point to the same object. Now increment one of the variables and see how it affects the memory addresses:
In [22]: a = a + 1
In [23]: id(a)
Out[23]: 34019232
In [24]: id(b)
Out[24]: 34019256
In [25]: id(c)
Out[25]: 34019256
You can see that variable a now points to a different object than variables b and c. Because you've used a = a + 1, it is explicitly clear. In other words, you assigned a completely different object to the label a. Imagine that you could write a++; it would suggest that you did not assign a new object to the variable a but rather incremented the old one. All this stuff is, IMHO, to minimize confusion. For better understanding, see how Python variables work:
In Python, why can a function modify some arguments as perceived by the caller, but not others?
Is Python call-by-value or call-by-reference? Neither.
Does Python pass by value, or by reference?
Is Python pass-by-reference or pass-by-value?
Python: How do I pass a variable by reference?
Understanding Python variables and Memory Management
Emulating pass-by-value behaviour in python
Python functions call by reference
Code Like a Pythonista: Idiomatic Python
It was just designed that way. Increment and decrement operators are just shortcuts for x = x + 1. Python has typically adopted a design strategy which reduces the number of alternative means of performing an operation. Augmented assignment is the closest thing to increment/decrement operators in Python, and they weren't even added until Python 2.0.
I'm very new to python but I suspect the reason is because of the emphasis between mutable and immutable objects within the language. Now, I know that x++ can easily be interpreted as x = x + 1, but it LOOKS like you're incrementing in-place an object which could be immutable.
Just my guess/feeling/hunch.
To complete the already good answers on this page:
Let's suppose we decided to do this; a prefix ++i would break the unary + and - operators.
Today, prefixing with ++ or -- does nothing, because it applies the unary plus operator twice (doing nothing) or the unary minus operator twice (which cancels itself out):
>>> i=12
>>> ++i
12
>>> --i
12
So that would potentially break that logic.
Now if one needs it for list comprehensions or lambdas, from Python 3.8 it's possible with the new := assignment operator (PEP 572).
Pre-incrementing a and assigning it to b:
>>> a = 1
>>> b = (a:=a+1)
>>> b
2
>>> a
2
Post-incrementing just needs to make up for the premature add by subtracting 1:
>>> a = 1
>>> b = (a:=a+1)-1
>>> b
1
>>> a
2
I believe it stems from the Python creed that "explicit is better than implicit".
First, Python is only indirectly influenced by C; it is heavily influenced by ABC, which apparently does not have these operators, so it should not be any great surprise not to find them in Python either.
Secondly, as others have said, increment and decrement are supported by += and -= already.
Third, full support for a ++ and -- operator set usually includes supporting both the prefix and postfix versions of them. In C and C++, this can lead to all kinds of "lovely" constructs that seem (to me) to be against the spirit of simplicity and straight-forwardness that Python embraces.
For example, while the C statement while(*t++ = *s++); may seem simple and elegant to an experienced programmer, to someone learning it, it is anything but simple. Throw in a mixture of prefix and postfix increments and decrements, and even many pros will have to stop and think a bit.
The ++ class of operators are expressions with side effects. This is something generally not found in Python.
For the same reason an assignment is not an expression in Python, thus preventing the common if (a = f(...)) { /* using a here */ } idiom.
Lastly, I suspect that these operators are not very consistent with Python's reference semantics. Remember, Python does not have variables (or pointers) with the semantics known from C/C++.
As I understood it, it's so you won't think the value in memory has changed.
In C, when you do x++, the value of x in memory changes.
But in Python all numbers are immutable, so the address that x pointed to still holds x, not x+1. When you write x++ you would think that x itself changes; what really happens is that the reference x is rebound to a memory location where x+1 is stored (or that object is created if it doesn't exist yet).
Other answers have described why it's not needed for iterators, but it is sometimes useful when assigning to increment a variable inline; you can achieve the same effect using tuples and multiple assignment:
b = ++a becomes:
a,b = (a+1,)*2
and b = a++ becomes:
a,b = a+1, a
Python 3.8 introduces the assignment := operator, allowing us to achieve foo(++a) with
foo(a:=a+1)
foo(a++) is still elusive though.
Maybe a better question would be to ask why these operators exist in C. K&R calls the increment and decrement operators 'unusual' (Section 2.8, page 46). The Introduction calls them 'more concise and often more efficient'. I suspect that the fact that these operations always come up in pointer manipulation also played a part in their introduction.
In Python it has been probably decided that it made no sense to try to optimise increments (in fact I just did a test in C, and it seems that the gcc-generated assembly uses addl instead of incl in both cases) and there is no pointer arithmetic; so it would have been just One More Way to Do It and we know Python loathes that.
This may be because #GlennMaynard is looking at the matter in comparison with other languages, but in Python, you do things the Python way. It's not a 'why' question. It's there, and you can do things to the same effect with x+=. In The Zen of Python, it is given: "there should only be one way to solve a problem." Multiple choices are great in art (freedom of expression) but lousy in engineering.
I think this relates to the concepts of mutability and immutability of objects. 2, 3, 4, 5 are immutable in Python; 2 has a fixed id for the lifetime of the Python process.
x++ would essentially mean an in-place increment like in C. In C, x++ performs in-place increments. So, with x=3, x++ would increment the 3 in memory to 4, unlike Python, where 3 would still exist in memory.
Thus in Python, you don't recreate a value in memory. This may lead to performance optimizations.
This is a hunch-based answer.
I know this is an old thread, but the most common use case for ++i is not covered, namely manually indexing collections when there are no provided indices. This situation is why Python provides enumerate().
Example: in any given language, when you use a construct like foreach to iterate over a collection - for the sake of the example we'll even say it's an unordered collection and you need a unique index for everything to tell the items apart - you might write something like:
i = 0
stuff = {'a': 'b', 'c': 'd', 'e': 'f'}
uniquestuff = {}
for key, val in stuff.items():
    uniquestuff[key] = '{0}{1}'.format(val, i)
    i += 1
In cases like this, Python provides the enumerate function, e.g.
for i, (key, val) in enumerate(stuff.items()) :
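A sketch of the earlier loop rewritten in full with enumerate (same made-up data):

stuff = {'a': 'b', 'c': 'd', 'e': 'f'}
uniquestuff = {}
for i, (key, val) in enumerate(stuff.items()):
    uniquestuff[key] = '{0}{1}'.format(val, i)   # the counter comes from enumerate, no manual i += 1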
In addition to the other excellent answers here, ++ and -- are also notorious for undefined behavior. For example, what happens in this code?
foo[bar] = bar++;
It's so innocent-looking, but it's wrong C (and C++), because you don't know whether the first bar will have been incremented or not. One compiler might do it one way, another might do it another way, and a third might make demons fly out of your nose. All would be perfectly conformant with the C and C++ standards.
(EDIT: C++17 has changed the behavior of the given code so that it is defined; it will be equivalent to foo[bar+1] = bar; ++bar; — which nonetheless might not be what the programmer is expecting.)
Undefined behavior is seen as a necessary evil in C and C++, but in Python, it's just evil, and avoided as much as possible.
