Not able to understand the behaviour of the ** operator in Python

I suddenly came across this, and I am not able to understand why it is happening!
At the Python prompt, using the ** operator on 3 and upwards, as below, gives what looks like the wrong result, i.e.:
>>> 2**2**2
16
>>> 3**3**3
7625597484987L
>>> 4**4**4
13407807929942597099574024998205846127479365820592393377723561443721764030073546976801874298166903427690031858186486050853753882811946569946433649006084096L
Then I thought I must have to use parentheses, so I used them, and they give the result I expected:
>>>(3**3)**3
19683
BUT the // operator gives the results I expect in this kind of chained operation, that is:
>>> 4//4//4
0
>>> 40//4//6
1
Please help me understand.

** is right-associative. Mathematically, this makes sense: 3**3**3 is equal to 3**(3**3) = 3**27, not (3**3)**3 = 27**3.
The documentation states that it is right-associative:
In an unparenthesized sequence of power and unary operators, the operators are evaluated from right to left.

As the docs say:
Operators in the same box group left to right (except for comparisons… and exponentiation, which groups from right to left).
In other words, ** is right-associative, while // (like all other operators except comparisons) is left-associative.
Elsewhere, there's a whole section on The power operator that, after giving a rule (which isn't relevant here) about how power and unary operators interact, clarifies that:
[I]n an unparenthesized sequence of power and unary operators, the operators are evaluated from right to left…
This is actually the way most programming languages with a power operator do it.
Exponentiation isn't written with symmetrical operator syntax in mathematics (a tower of exponents is evaluated from the top down, i.e. right to left), so there's really no reason it should default to the same left associativity as + or *. And left-associative exponentiation is much less useful, because (2**3)**4 is exactly the same thing as 2**(3*4), whereas there's nothing equally simple that's the same thing as 2**(3**4).
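A quick way to double-check both associativities at the prompt (just a sketch restating the rules above):
>>> 3**3**3 == 3**(3**3) == 3**27
True
>>> (3**3)**3 == 27**3
True
>>> 40//4//6 == (40//4)//6
True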

Looks like the ** operator is right-associative, meaning 3**3**3 evaluates as 3**27 and 4**4**4 as 4**256.

When you do stuff like 4**4**4, you should use parentheses to make your intentions explicit. The parser will resolve the ambiguity, as #cHao indicated, but it is confusing to others. You should use (4**4)**4 or 4**(4**4). Explicit here is better than implicit, since taking powers of powers is not exactly a workaday operation we see all of the time.


I'm not able to understand these "or" conditions: when I write 0 or 3 it returns 3, and when I write 3 and 0 the same kind of thing happens (it returns 0) [duplicate]

First, the code:
>>> False or 'hello'
'hello'
This surprising behavior lets you check if x is not None and check the value of x in one line:
>>> from random import randint
>>> x = 10 if randint(0, 2) == 1 else None
>>> (x or 0) > 0
# depends on the value of x...
Explanation: or functions like this:
if x is false, then y, else x
No language that I know lets you do this. So, why does Python?
It sounds like you're combining two issues into one.
First, there's the issue of short-circuiting. Marcin's answer addresses this issue perfectly, so I won't try to do any better.
Second, there's or and and returning the last-evaluated value, rather than converting it to bool. There are arguments to be made both ways, and you can find many languages on either side of the divide.
Returning the last-evaluated value allows the functionCall(x) or defaultValue shortcut, avoids a possibly wasteful conversion (why convert an int 2 into a bool 1 if the only thing you're going to do with it is check whether it's non-zero?), and is generally easier to explain. So, for various combinations of these reasons, languages like C, Lisp, Javascript, Lua, Perl, Ruby, and VB all do things this way, and so does Python.
Always returning a boolean value from an operator helps to catch some errors (especially in languages where the logical operators and the bitwise operators are easy to confuse), and it allows you to design a language where boolean checks are strictly-typed checks for true instead of just checks for nonzero, it makes the type of the operator easier to write out, and it avoids having to deal with conversion for cases where the two operands are different types (see the ?: operator in C-family languages). So, for various combinations of these reasons, languages like C++, Fortran, Smalltalk, and Haskell all do things this way.
In your question (if I understand it correctly), you're using this feature to be able to write something like:
if (x or 0) < 1:
When x could easily be None. This particular use case isn't very useful, both because the more-explicit x if x else 0 (in Python 2.5 and later) is just as easy to write and probably easier to understand (at least Guido thinks so), but also because None < 1 is the same as 0 < 1 anyway (at least in Python 2.x, so you've always got at least one of the two options)… But there are similar examples where it is useful. Compare these two:
return launchMissiles() or -1
return launchMissiles() if launchMissiles() else -1
The second one will waste a lot of missiles blowing up your enemies in Antarctica twice instead of once.
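A small sketch (with a hypothetical counter standing in for the real launcher) that makes the double evaluation visible:
calls = 0

def launch_missiles():
    # hypothetical stand-in with a visible side effect
    global calls
    calls += 1
    return calls  # truthy "number of launches so far"

print(launch_missiles() or -1, calls)                          # 1 1 -> launched once
calls = 0
print(launch_missiles() if launch_missiles() else -1, calls)   # 2 2 -> launched twice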
If you're curious why Python does it this way:
Back in the 1.x days, there was no bool type. You've got falsy values like None, 0, [], (), "", etc., and everything else is true, so who needs explicit False and True? Returning 1 from or would have been silly, because 1 is no more true than [1, 2, 3] or "dsfsdf". By the time bool was added (gradually over two 2.x versions, IIRC), the current logic was already solidly embedded in the language, and changing would have broken a lot of code.
So, why didn't they change it in 3.0? Many Python users, including BDFL Guido, would suggest that you shouldn't use or in this case (at the very least because it's a violation of "TOOWTDI"); you should instead store the result of the expression in a variable, e.g.:
missiles = launchMissiles()
return missiles if missiles else -1
And in fact, Guido has stated that he'd like to ban launchMissiles() or -1, and that's part of the reason he eventually accepted the ternary if-else expression that he'd rejected many times before. But many others disagree, and Guido is a benevolent DFL. Also, making or work the way you'd expect everywhere else, while refusing to do what you want (but Guido doesn't want you to want) here, would actually be pretty complicated.
So, Python will probably always be on the same side as C, Perl, and Lisp here, instead of the same side as Java, Smalltalk, and Haskell.
No language that I know lets you do this. So, why does Python?
Then you don't know many languages. I can't think of one language that I do know that does not exhibit this "short-circuiting" behaviour.
It does it because it is useful to say:
a = b or K
such that a becomes b if b is truthy (i.e. not None or otherwise falsy), and otherwise gets the default value K.
Actually a number of languages do. See Wikipedia about Short-Circuit Evaluation
For the reason why short-circuit evaluation exists, wikipedia writes:
If both expressions used as conditions are simple boolean variables, it can be actually faster to evaluate both conditions used in boolean operation at once, as it always requires a single calculation cycle, as opposed to one or two cycles used in short-circuit evaluation (depending on the value of the first).
This behavior is not surprising, and it's quite straightforward if you consider Python has the following features regarding or, and and not logical operators:
Short-circuit evaluation: it only evaluates operands up to where it needs to.
Non-coercing result: the result is one of the operands, not coerced to bool.
And, additionally:
The Truth Value of an object is False only for None, False, 0, "", [], {}. Everything else has a truth value of True (this is a simplification; the correct definition is in the official docs)
Combine those features, and it leads to:
or : if the first operand evaluates as True, short-circuit there and return it. Or return the 2nd operand.
and: if the first operand evaluates as False, short-circuit there and return it. Or return the 2nd operand.
It's easier to understand if you generalize to a chain of operations:
>>> a or b or c or d
>>> a and b and c and d
Here is the "rule of thumb" I've memorized to help me easily predict the result:
or : returns the first "truthy" operand it finds, or the last one.
and: returns the first "falsy" operand it finds, or the last one.
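A quick check of these rules of thumb at the prompt (the values are arbitrary):
>>> 0 or "" or "fallback"
'fallback'
>>> [] or 0
0
>>> 1 and "x" and []
[]
>>> 1 and 2 and 3
3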
As for your question on why Python behaves like that, well... I think it's because it has some very neat uses and is quite intuitive to understand. A common use is a series of fallback choices, where the first "found" (i.e. non-falsy) one is used. Think about this silly example:
drink = getColdBeer() or pickNiceWine() or random.anySoda or "meh, water :/"
Or this real-world scenario:
username = cmdlineargs.username or configFile['username'] or DEFAULT_USERNAME
Which is much more concise and elegant than the alternative.
As many other answers have pointed out, Python is not alone here: many other languages behave the same way, both for short-circuiting (I believe most current languages do) and for not coercing the result to bool.
"No language that i know lets you do this. So, why Python do?" You seem to assume that all languages should be the same. Wouldn't you expect innovation in programming languages to produce unique features that people value?
You've just pointed out why it's useful, so why wouldn't Python do it? Perhaps you should ask why other languages don't.
You can take advantage of the special features of the Python or operator outside of Boolean contexts. The rule of thumb is still that the result of a Boolean expression is the first true operand or the last one in the line.
Notice that the logical operators (or included) are evaluated before the assignment operator =, so you can assign the result of a Boolean expression to a variable in the same way you do with a common expression:
>>> a = 1
>>> b = 2
>>> var1 = a or b
>>> var1
1
>>> a = None
>>> b = 2
>>> var2 = a or b
>>> var2
2
>>> a = []
>>> b = {}
>>> var3 = a or b
>>> var3
{}
Here, the or operator works as expected, returning the first true operand, or the last operand if both evaluate to false.

Why does integer division round down in many scripting languages?

In the languages I have tested, -(x div y) is not equal to (-x) div y; I have tested // in Python, / in Ruby, div in Perl 6; C has similar behavior.
That behavior is usually according to spec, since div is usually defined as rounding down the result of the division. However, it does not make a lot of sense from an arithmetic point of view, since it makes div behave differently depending on the sign, and it causes confusion such as this post on how it is done in Python.
Is there some specific rationale behind this design decision, or is div just defined that way from scratch? Apparently Guido van Rossum uses a coherency argument in a blog post explaining how it is done in Python, but you can also have coherency if you choose to round up.
(Inspired by this question by PMurias in the #perl6 IRC channel)
Ideally, we would like to have two operations div and mod, satisfying, for each b > 0:
(1) (a div b) * b + (a mod b) = a
(2) 0 <= (a mod b) < b
(3) (-a) div b = -(a div b)
This is, however, a mathematical impossibility. If all the above were true, we would have
1 div 2 = 0
1 mod 2 = 1
since this is the unique integer solution to (1) and (2). Hence, we would also have, by (3),
0 = -0 = -(1 div 2) = (-1) div 2
which, by (1), implies
-1 = ((-1) div 2) * 2 + ((-1) mod 2) = 0 * 2 + ((-1) mod 2) = (-1) mod 2
making (-1) mod 2 < 0 which contradicts (2).
Hence, we need to give up some property among (1), (2), and (3).
Some programming languages give up (3), and make div round down (Python, Ruby).
In some (rare) cases the language offers multiple division operators. For instance, in Haskell we have div,mod satisfying only (1) and (2), similarly to Python, and we also have quot,rem satisfying only (1) and (3). The latter pair of operators rounds division towards zero, at the price of returning negative remainders, e.g., we have (-1) `quot` 2 = 0 and (-1) `rem` 2 = (-1).
C# also gives up (2), and allows % to return a negative remainder. Coherently, integer division rounds towards zero. Java, Scala, Pascal, and C, starting from C99, also adopt this strategy.
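Here is a quick Python sketch of the trade-off between properties (1)-(3); the quot/rem pair is hand-rolled here to mimic C-style truncating division, it is not a built-in:
import math

a, b = -7, 2

# Python's // and % keep (1) and (2) but give up (3):
assert (a // b) * b + a % b == a      # (1) holds: -4*2 + 1 == -7
assert 0 <= a % b < b                 # (2) holds: 0 <= 1 < 2
assert (-a) // b != -(a // b)         # (3) fails: 3 != 4

# A truncating pair keeps (1) and (3) but gives up (2):
quot = math.trunc(a / b)              # -3, rounds towards zero
rem = a - quot * b                    # -1, a negative remainder
assert quot * b + rem == a            # (1) holds by construction
assert math.trunc(-a / b) == -quot    # (3) holds: 3 == 3
assert not (0 <= rem < b)             # (2) fails: rem is negative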
Floating-point operations are defined by IEEE 754 with numeric applications in mind and, by default, round to the nearest representable value in a very strictly defined manner.
Integer operations in computers are not defined by general international standards. The operations provided by languages (especially those of the C family) tend to follow whatever the underlying computer provides. Some languages define certain operations more robustly than others, but, to avoid excessively difficult or slow implementations on the available (and popular) computers of their time, they choose a definition that follows the hardware's behaviour quite closely.
For this reason, integer operations tend to wrap around on overflow (for addition, multiplication, and shifting-left), and round towards negative infinity when producing an inexact result (for division, and shifting-right). Both of these are simple truncation at their respective end of the integer in two's-complement binary arithmetic; the simplest way to handle a corner-case.
Other answers discuss the relationship with the remainder or modulus operator that a language might provide alongside division. Unfortunately they have it backwards. Remainder depends on the definition of division, not the other way around, while modulus can be defined independently of division - if both arguments happen to be positive and division rounds down, they work out to be the same, so people rarely notice.
Most modern languages provide either a remainder operator or a modulus operator, rarely both. A library function may provide the other operation for people who care about the difference, which is that remainder retains the sign of the dividend, while modulus retains the sign of the divisor.
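In Python you can actually see both conventions side by side: % takes the sign of the divisor (modulus-style), while math.fmod takes the sign of the dividend (remainder-style):
>>> import math
>>> -7 % 3, math.fmod(-7, 3)
(2, -1.0)
>>> 7 % -3, math.fmod(7, -3)
(-2, 1.0)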
Because the implication of integer division is that the full answer includes a remainder.
Wikipedia has a great article on this, including history as well as theory.
As long as a language satisfies the Euclidean division property that (a/b) * b + (a%b) == a, both flooring division and truncating division are coherent and arithmetically sensible.
Of course people like to argue that one is obviously correct and the other is obviously wrong, but it has more the character of a holy war than a sensible discussion, and it usually has more to do with the choice of their early preferred language than anything else. They also often tend to argue primarily for their chosen %, even though it probably makes more sense to choose / first and then just pick the % that matches.
Flooring (like Python):
No less an authority than Donald Knuth suggests it.
% following the sign of the divisor is apparently what about 70% of all students guess.
The operator is usually read as mod or modulo rather than remainder.
"C does it"—which isn't even true.1
Truncating (like C++):
Makes integer division more consistent with IEEE float division (in default rounding mode).
More CPUs implement it. (May not be true at different times in history.)
The operator is read modulo rather than remainder (even though this actually argues against their point).
The division property conceptually is more about remainder than modulus.
The operator is read mod rather than modulo, so it should follow Fortran's distinction. (This may sound silly, but may have been the clincher for C99. See this thread.)
"Euclidean" (like Pascal—/ floors or truncates depending on signs, so % is never negative):
Niklaus Wirth argued that nobody is ever surprised by positive mod.
Raymond T. Boute later argued that you can't implement Euclidean division naively with either of the other rules.
A number of languages provide both. Typically—as in Ada, Modula-2, some Lisps, Haskell, and Julia—they use names related to mod for the Python-style operator and rem for the C++-style operator. But not always—Fortran, for example, calls the same things modulo and mod (as mentioned above for C99).
We don't know why Python, Tcl, Perl, and the other influential scripting languages mostly chose flooring. As noted in the question, Guido van Rossum's answer only explains why he had to choose one of the three consistent answers, not why he picked the one he did.
However, I suspect the influence of C was key. Most scripting languages are (at least initially) implemented in C, and borrow their operator inventory from C. C89's implementation-defined % is obviously broken, and not suitable for a "friendly" language like Tcl or Python. And C calls the operator "mod". So they go with modulus, not remainder.
1. Despite what the question says—and many people using it as an argument—C actually doesn't have similar behavior to Python and friends. C99 requires truncating division, not flooring. C89 allowed either, and also allowed either version of mod, so there's no guarantee of the division property, and no way to write portable code doing signed integer division. That's just broken.
As Paula said, it is because of the remainder.
The algorithm is founded on Euclidean division.
In Ruby, you can rebuild the dividend consistently:
puts (10/3)*3 + 10%3
#=> 10
It works the same way in real life: 10 apples and 3 people. Sure, you could cut one apple into thirds, but then you'd be leaving the set of integers.
With negative numbers the consistency is also kept:
puts (-10/3)*3 + -10%3 #=> -10
puts (10/(-3))*(-3) + 10%(-3) #=> 10
puts (-10/(-3))*(-3) + -10%(-3) #=> -10
The quotient is always rounded down (down along the negative axis) and the remainder follows:
puts (-10/3) #=> -4
puts -10%3 #=> 2
puts (10/(-3)) #=> -4
puts 10%(-3) # => -2
puts (-10/(-3)) #=> 3
puts -10%(-3) #=> -1
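For comparison, Python's // and % behave the same way (the quotient floors and the remainder takes the divisor's sign), so the same identity holds:
>>> for a, b in [(10, 3), (-10, 3), (10, -3), (-10, -3)]:
...     print(a // b, a % b, (a // b) * b + a % b == a)
...
3 1 True
-4 2 True
-4 -2 True
3 -1 True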
This answer addresses a sub-part of the question that the other (excellent) answers didn't explicitly address. You noted:
you can have coherency also if you choose to round up.
Other answers addressed the choice between rounding down (towards -∞) and truncating (rounding towards 0) but didn't compare rounding up (towards ∞).
(The accepted answer touches on performance reasons to prefer rounding down on a two's-complement machine, which would also apply in comparison to rounding up. But there are more important semantic reasons to avoid rounding up.)
This answer directly addresses why rounding up is not a great solution.
Rounding up breaks elementary-school expectations
Building on an example from a previous answer, it's common to informally say something like this:
If I evenly divide fourteen marbles among three people, each person gets four marbles and there are two marbles left over.
Indeed, this is how many students are first taught division (before being introduced to fractions/decimals). A student might write 14 ÷ 3 = 4 remainder 2. Since this is introduced so early, we'd really like our div operator to preserve this property.
Or, put a bit more formally, of the three properties discussed in the top-voted answer, the first one ((a div b) × b + (a mod b) = a) is by far the most important.
But rounding up breaks this property. If div rounds up, then 14 div 3 returns 5. This means that the equation above becomes 5 × 3 + (14 mod 3) = 14, i.e. 15 + (14 mod 3) = 14 – and that's not true for any definition of mod that keeps the remainder nonnegative. Similarly, the less-formal/elementary-school approach is also out of luck – or at least requires introducing negative marbles: "Each person gets 5 marbles and there are negative one marbles left over".
(Rounding to the nearest integer also breaks the property when, as in the example above, that means rounding up.)
Thus, if we want to maintain elementary expectations, we cannot round up. And with rounding up off the table, the coherency argument that you linked in the question is sufficient to justify rounding down.
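A quick numeric check of the marble example, with math.ceil standing in for a hypothetical round-up division:
>>> import math
>>> 14 // 3, 14 - (14 // 3) * 3
(4, 2)
>>> math.ceil(14 / 3), 14 - math.ceil(14 / 3) * 3
(5, -1)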

python x**(p/q) unexpected behavior

x**(p/q) produces (x**p)/q. Is this a bug, or is it intentional behavior?
I have searched this site and elsewhere on the internet, but cannot find any discussion of this.
No it doesn't:
>>> 2**(20/2)
1024
>>> (2**20)/2
524288
>>> 2**20/2
524288
No, it's not true. But if you don't have the parentheses, what you describe would happen, because / has lower precedence (binds less tightly) than exponentiation (**).
Also, from the Python docs:
The power operator ** binds less tightly than an arithmetic or bitwise unary operator on its right, that is, 2**-1 is 0.5.
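To make both precedence rules concrete at the prompt (a small sketch; note that under Python 3, / produces floats, so the results print with a decimal point):
>>> x, p, q = 2, 20, 2
>>> x ** (p / q)
1024.0
>>> x ** p / q
524288.0
>>> 2 ** -1
0.5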

What does the ** maths operator do in Python?

What does this mean in Python:
sock.recvfrom(2**16)
I know what sock is, and I get the gist of the recvfrom function, but what the heck is 2**16? Specifically, the two asterisk/double asterisk operator?
(English keywords, because it's hard to search for this: times-times star-star asterisk-asterisk double-times double-star double-asterisk operator)
It is the power operator.
From the Python 3 docs:
The power operator has the same semantics as the built-in pow() function, when called with two arguments: it yields its left argument raised to the power of its right argument. The numeric arguments are first converted to a common type, and the result is of that type.
It is equivalent to 2**16 = 65536, or pow(2, 16).
http://docs.python.org/library/operator.html#mapping-operators-to-functions
a ** b = pow(a,b)
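All three spellings agree; operator.pow is the functional form listed in the mapping linked above:
>>> import operator
>>> 2 ** 16, pow(2, 16), operator.pow(2, 16)
(65536, 65536, 65536)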
2 raised to the 16th power
I believe that's the power operator, such that 2**5 = 32.
It is the awesome power operator which, like complex numbers, is another one of those things you wonder why more programming languages don't have.

Why does "**" bind more tightly than negation?

I was just bitten by the following scenario:
>>> -1 ** 2
-1
Now, digging through the Python docs, it's clear that this is intended behavior, but why? I don't work with any other languages with power as a builtin operator, but not having unary negation bind as tightly as possible seems dangerously counter-intuitive to me.
Is there a reason it was done this way? Do other languages with power operators behave similarly?
That behaviour is the same as in math formulas, so I am not sure what the problem is, or why it is counter-intuitive. Can you explain where you have seen something different? "**" always binds more tightly than "-": -x^2 is not the same as (-x)^2.
Just use (-1) ** 2, exactly as you'd do in math.
Short answer: it's the standard way precedence works in math.
Let's say I want to evaluate the polynomial 3x³ - x² + 5.
def polynomial(x):
    return 3*x**3 - x**2 + 5
It looks better than...
def polynomial(x):
    return 3*x**3 - (x**2) + 5
And the first way is the way mathematicians do it. Other languages with exponentiation work the same way. Note that the negation operator also binds more loosely than multiplication, so
-x*y === -(x*y)
Which is also the way they do it in math.
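A few quick prompt checks of how the two operators group (x is just an arbitrary value):
>>> x = 3
>>> -1 ** 2            # parsed as -(1 ** 2)
-1
>>> (-1) ** 2
1
>>> -x ** 2 + 5        # parsed as -(x ** 2) + 5, matching -x² + 5 on paper
-4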
If I had to guess, it would be because having an exponentiation operator allows programmers to easily raise numbers to fractional powers. Negative numbers raised to fractional powers end up with an imaginary component (usually), so that can be avoided by binding ** more tightly than unary -. Most languages don't like imaginary numbers.
Ultimately, of course, it's just a convention - and to make your code readable by yourself and others down the line, you'll probably want to explicitly group your (-1) so no one else gets caught by the same trap :) Good luck!
It seems intuitive to me.
First, because it's consistent with mathematical notation: -2^2 = -4.
Second, the ** operator was introduced by FORTRAN a long time ago, and in FORTRAN, -2**2 is -4 as well.
OCaml doesn't do the same:
# -12.0**2.0;;
- : float = 144.
That's kind of weird...
# -12.0**0.5;;
- : float = nan
Look at that link though...
order of operations
