Why does integer division round down in many scripting languages?
In the languages I have tested, -(x div y) is not equal to (-x) div y; I have tested // in Python, / in Ruby, and div in Perl 6. C has similar behavior.
That behavior is usually according to spec, since div is usually defined as rounding down the result of the division. However, it does not make much sense from an arithmetic point of view, since it makes div behave differently depending on the sign, and it causes confusion such as this post on how it is done in Python.
Is there some specific rationale behind this design decision, or is div just defined that way from scratch? Apparently Guido van Rossum uses a coherency argument in a blog post that explains how it is done in Python, but you can also have coherency if you choose to round up.
(Inspired by this question by PMurias in the #perl6 IRC channel)
Ideally, we would like to have two operations div and mod satisfying, for each b > 0:

(1) (a div b) * b + (a mod b) = a
(2) 0 <= (a mod b) < b
(3) (-a) div b = -(a div b)
This is, however, a mathematical impossibility. If all the above were true, we would have
1 div 2 = 0
1 mod 2 = 1
since this is the unique integer solution to (1) and (2). Hence, we would also have, by (3),
0 = -0 = -(1 div 2) = (-1) div 2
which, by (1), implies
-1 = ((-1) div 2) * 2 + ((-1) mod 2) = 0 * 2 + ((-1) mod 2) = (-1) mod 2
making (-1) mod 2 < 0 which contradicts (2).
Hence, we need to give up some property among (1), (2), and (3).
Some programming languages give up (3), and make div round down (Python, Ruby).
In some (rare) cases the language offers multiple division operators. For instance, in Haskell we have div,mod satisfying only (1) and (2), similarly to Python, and we also have quot,rem satisfying only (1) and (3). The latter pair of operators rounds division towards zero, at the price of returning negative remainders, e.g., we have (-1) `quot` 2 = 0 and (-1) `rem` 2 = (-1).
C# also gives up (2), and allows % to return a negative remainder. Coherently, integer division rounds towards zero. Java, Scala, Pascal, and C, starting from C99, also adopt this strategy.
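To make the two conventions concrete, here is a small Python sketch: Python's // and divmod are the flooring pair, and the truncating pair (C99's /, Haskell's quot/rem) is emulated with math.trunc, since Python has no built-in for it. The float-based emulation is exact only for modestly sized integers.

    import math

    def trunc_divmod(a, b):
        # Round the quotient towards zero, like C99's / and Haskell's quot.
        q = int(math.trunc(a / b))
        return q, a - q * b  # the remainder takes the sign of the dividend

    print(divmod(-1, 2))        # (-1, 1) -- satisfies (1) and (2), gives up (3)
    print(trunc_divmod(-1, 2))  # (0, -1) -- satisfies (1) and (3), gives up (2)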
Floating-point operations are defined by IEEE 754 with numeric applications in mind and, by default, round to the nearest representable value in a very strictly defined manner.
Integer operations in computers are not defined by general international standards. The operations granted by languages (especially those of the C family) tend to follow whatever the underlying computer provides. Some languages define certain operations more robustly than others, but, to avoid excessively difficult or slow implementations on the popular computers of their time, they choose a definition that follows the hardware's behaviour quite closely.
For this reason, integer operations tend to wrap around on overflow (for addition, multiplication, and shifting left), and round towards negative infinity when producing an inexact result (for division, and shifting right). Both of these are simple truncation at their respective end of the integer in two's-complement binary arithmetic: the simplest way to handle the corner cases.
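For instance, in Python (whose >> is an arithmetic shift), right shift and floor division by a power of two agree exactly; a quick check rather than a proof:

    for a in (7, -7, 1, -1):
        print(a >> 1, a // 2)  # identical pairs: both round towards negative infinity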
Other answers discuss the relationship with the remainder or modulus operator that a language might provide alongside division. Unfortunately they have it backwards. Remainder depends on the definition of division, not the other way around, while modulus can be defined independently of division - if both arguments happen to be positive and division rounds down, they work out to be the same, so people rarely notice.
Most modern languages provide either a remainder operator or a modulus operator, rarely both. A library function may provide the other operation for people who care about the difference, which is that remainder retains the sign of the dividend, while modulus retains the sign of the divisor.
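Python happens to expose both conventions, which makes the difference easy to see; math.fmod returns a float and is used here only to illustrate the sign rule:

    import math

    print(-7 % 3)            # 2    -- modulus: takes the sign of the divisor
    print(math.fmod(-7, 3))  # -1.0 -- remainder: takes the sign of the dividend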
Because the implication of integer division is that the full answer includes a remainder.
Wikipedia has a great article on this, including history as well as theory.
As long as a language satisfies the Euclidean division property that (a/b) * b + (a%b) == a, both flooring division and truncating division are coherent and arithmetically sensible.
Of course people like to argue that one is obviously correct and the other obviously wrong, but the debate has more the character of a holy war than of a sensible discussion, and it usually has more to do with the choice of a person's early preferred language than anything else. People also tend to argue primarily for their chosen %, even though it probably makes more sense to choose / first and then just pick the % that matches.
Flooring (like Python):
No less an authority than Donald Knuth suggests it.
% following the sign of the divisor is apparently what about 70% of all students guess.
The operator is usually read as mod or modulo rather than remainder.
"C does it"—which isn't even true.1
Truncating (like C++):
Makes integer division more consistent with IEEE float division (in default rounding mode).
More CPUs implement it. (May not be true at different times in history.)
The operator is read modulo rather than remainder (even though this actually argues against their point).
The division property conceptually is more about remainder than modulus.
The operator is read mod rather than modulo, so it should follow Fortran's distinction. (This may sound silly, but may have been the clincher for C99. See this thread.)
"Euclidean" (like Pascal—/ floors or truncates depending on signs, so % is never negative):
Niklaus Wirth argued that nobody is ever surprised by positive mod.
Raymond T. Boute later argued that you can't implement Euclidean division naively with either of the other rules.
A number of languages provide both. Typically—as in Ada, Modula-2, some Lisps, Haskell, and Julia—they use names related to mod for the Python-style operator and rem for the C++-style operator. But not always—Fortran, for example, calls the same things modulo and mod (as mentioned above for C99).
We don't know why Python, Tcl, Perl, and the other influential scripting languages mostly chose flooring. As noted in the question, Guido van Rossum's answer only explains why he had to choose one of the three consistent answers, not why he picked the one he did.
However, I suspect the influence of C was key. Most scripting languages are (at least initially) implemented in C, and borrow their operator inventory from C. C89's implementation-defined % is obviously broken, and not suitable for a "friendly" language like Tcl or Python. And C calls the operator "mod". So they go with modulus, not remainder.
[1] Despite what the question says—and many people using it as an argument—C actually doesn't have similar behavior to Python and friends. C99 requires truncating division, not flooring. C89 allowed either, and also allowed either version of mod, so there's no guarantee of the division property, and no way to write portable code doing signed integer division. That's just broken.
As Paula said, it is because of the remainder.
The algorithm is founded on Euclidean division.
In Ruby, you can see this by rebuilding the dividend consistently:
puts (10/3)*3 + 10%3
#=> 10
It works the same way in real life: 10 apples and 3 people. Sure, you can cut one apple in three, but then you are going outside the set of integers.
With negative numbers the consistency is also kept:
puts (-10/3)*3 + -10%3 #=> -10
puts (10/(-3))*(-3) + 10%(-3) #=> 10
puts (-10/(-3))*(-3) + -10%(-3) #=> -10
The quotient is always rounded down (down along the negative axis) and the remainder follows:
puts (-10/3) #=> -4
puts -10%3 #=> 2
puts (10/(-3)) #=> -4
puts 10%(-3) # => -2
puts (-10/(-3)) #=> 3
puts -10%(-3) #=> -1
This answer addresses a sub-part of the question that the other (excellent) answers didn't explicitly address. You noted:
you can have coherency also if you choose to round up.
Other answers addressed the choice between rounding down (towards -∞) and truncating (rounding towards 0) but didn't compare rounding up (towards ∞).
(The accepted answer touches on performance reasons to prefer rounding down on a two's-complement machine, which would also apply in comparison to rounding up. But there are more important semantic reasons to avoid rounding up.)
This answer directly addresses why rounding up is not a great solution.
Rounding up breaks elementary-school expectations
Building on an example from a previous answer, it's common to informally say something like this:
If I evenly divide fourteen marbles among three people, each person gets four marbles and there are two marbles left over.
Indeed, this is how many students are first taught division (before being introduced to fractions/decimals). A student might write 14 ÷ 3 = 4 remainder 2. Since this is introduced so early, we'd really like our div operator to preserve this property.
Or, put a bit more formally, of the three properties discussed in the top-voted answer, the first one ((a div b) × b + (a mod b) = a) is by far the most important.
But rounding up breaks this property. If div rounds up, then 14 div 3 returns 5. This means that the equation above simplifies to 15 + (14 mod 3) = 14 – and that's not true for any definition of mod. Similarly, the less-formal/elementary-school approach is also out of luck – or at least requires introducing negative marbles: "Each person gets 5 marbles and there are negative one marbles left over".
(Rounding to the nearest integer also breaks the property when, as in the example above, that means rounding up.)
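A small sketch makes this concrete; ceil_div below is a hypothetical round-up division, not an operator any of these languages actually provides:

    import math

    def ceil_div(a, b):
        # Hypothetical division that rounds up (towards positive infinity).
        return math.ceil(a / b)

    q = ceil_div(14, 3)  # 5
    print(q * 3)         # 15: already more than the 14 marbles we started with
    print(14 - q * 3)    # -1: the "remainder" would have to be negative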
Thus, if we want to maintain elementary expectations, we cannot round up. And with rounding up off the table, the coherency argument that you linked in the question is sufficient to justify rounding down.
Related
What happens when the function is called inside itself (Python)
One of the topics that seems to come up regularly on mailing lists and online discussions is the merits (or lack thereof) of doing a Computer Science Degree. An argument that seems to come up time and again for the negative party is that they have been coding for some number of years and they have never used recursion. So the question is:

What is recursion?
When would I use recursion?
Why don't people use recursion?
There are a number of good explanations of recursion in this thread; this answer is about why you shouldn't use it in most languages.* In the majority of major imperative language implementations (i.e. every major implementation of C, C++, Basic, Python, Ruby, Java, and C#) iteration is vastly preferable to recursion.

To see why, walk through the steps that the above languages use to call a function:

1. space is carved out on the stack for the function's arguments and local variables
2. the function's arguments are copied into this new space
3. control jumps to the function
4. the function's code runs
5. the function's result is copied into a return value
6. the stack is rewound to its previous position
7. control jumps back to where the function was called

Doing all of these steps takes time, usually a little bit more than it takes to iterate through a loop. However, the real problem is in step #1. When many programs start, they allocate a single chunk of memory for their stack, and when they run out of that memory (often, but not always, due to recursion), the program crashes due to a stack overflow. So in these languages recursion is slower and it makes you vulnerable to crashing.

There are still some arguments for using it, though. In general, code written recursively is shorter and a bit more elegant, once you know how to read it. There is a technique that language implementers can use called tail call optimization which can eliminate some classes of stack overflow. Put succinctly: if a function's return expression is simply the result of a function call, then you don't need to add a new level onto the stack; you can reuse the current one for the function being called. Regrettably, few imperative language implementations have tail-call optimization built in.

* I love recursion. My favorite static language doesn't use loops at all; recursion is the only way to do something repeatedly. I just don't think that recursion is generally a good idea in languages that aren't tuned for it.

** By the way Mario, the typical name for your ArrangeString function is "join", and I'd be surprised if your language of choice doesn't already have an implementation of it.
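To illustrate the stack problem concretely in Python (the roughly-1000-frame default recursion limit is a CPython implementation detail, not a language guarantee):

    import sys

    def rec_sum(n):
        # Recursive sum of 1..n: one stack frame per step.
        return 0 if n == 0 else n + rec_sum(n - 1)

    def iter_sum(n):
        # Iterative sum of 1..n: constant stack usage.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    print(iter_sum(100_000))        # 5000050000, no trouble at all
    print(sys.getrecursionlimit())  # typically 1000
    # rec_sum(100_000) would raise RecursionError long before finishing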
Simple english example of recursion. A child couldn't sleep, so her mother told her a story about a little frog, who couldn't sleep, so the frog's mother told her a story about a little bear, who couldn't sleep, so the bear's mother told her a story about a little weasel... who fell asleep. ...and the little bear fell asleep; ...and the little frog fell asleep; ...and the child fell asleep.
In the most basic computer science sense, recursion is a function that calls itself. Say you have a linked list structure:

    struct Node {
        Node* next;
    };

And you want to find out how long a linked list is, you can do this with recursion:

    int length(const Node* list) {
        if (!list) {
            return 0;  /* empty list; also guards against dereferencing a null pointer */
        } else {
            return 1 + length(list->next);
        }
    }

(This could of course be done with a for loop as well, but it is useful as an illustration of the concept.)
Whenever a function calls itself, creating a loop, then that's recursion. As with anything there are good uses and bad uses for recursion.

The most simple example is tail recursion, where the very last line of the function is a call to itself:

    int FloorByTen(int num)
    {
        if (num % 10 == 0)
            return num;
        else
            return FloorByTen(num - 1);
    }

However, this is a lame, almost pointless example because it can easily be replaced by more efficient iteration. After all, recursion suffers from function call overhead, which in the example above could be substantial compared to the operation inside the function itself.

So the whole reason to do recursion rather than iteration should be to take advantage of the call stack to do some clever stuff. For example, if you call a function multiple times with different parameters inside the same loop then that's a way to accomplish branching. A classic example is the Sierpinski triangle. You can draw one of those very simply with recursion, where the call stack branches in 3 directions:

    private void BuildVertices(double x, double y, double len)
    {
        if (len > 0.002)
        {
            mesh.Positions.Add(new Point3D(x, y + len, -len));
            mesh.Positions.Add(new Point3D(x - len, y - len, -len));
            mesh.Positions.Add(new Point3D(x + len, y - len, -len));
            len *= 0.5;
            BuildVertices(x, y + len, len);
            BuildVertices(x - len, y - len, len);
            BuildVertices(x + len, y - len, len);
        }
    }

If you attempt to do the same thing with iteration, I think you'll find it takes a lot more code to accomplish. Other common use cases might include traversing hierarchies, e.g. website crawlers, directory comparisons, etc.

Conclusion: in practical terms, recursion makes the most sense whenever you need iterative branching.
Recursion is a method of solving problems based on the divide and conquer mentality. The basic idea is that you take the original problem and divide it into smaller (more easily solved) instances of itself, solve those smaller instances (usually by using the same algorithm again) and then reassemble them into the final solution.

The canonical example is a routine to generate the factorial of n. The factorial of n is calculated by multiplying all of the numbers between 1 and n. An iterative solution in C# looks like this:

    public int Fact(int n)
    {
        int fact = 1;
        for (int i = 2; i <= n; i++)
        {
            fact = fact * i;
        }
        return fact;
    }

There's nothing surprising about the iterative solution and it should make sense to anyone familiar with C#.

The recursive solution is found by recognising that the nth factorial is n * Fact(n-1). Or to put it another way, if you know what a particular factorial number is, you can calculate the next one. Here is the recursive solution in C#:

    public int FactRec(int n)
    {
        if (n < 2)
        {
            return 1;
        }
        return n * FactRec(n - 1);
    }

The first part of this function is known as a base case (or sometimes a guard clause) and is what prevents the algorithm from running forever. It just returns the value 1 whenever the function is called with a value of 1 or less. The second part is more interesting and is known as the recursive step. Here we call the same method with a slightly modified parameter (we decrement it by 1) and then multiply the result with our copy of n.

When first encountered this can be kind of confusing, so it's instructive to examine how it works when run. Imagine that we call FactRec(5). We enter the routine, are not picked up by the base case, and so we end up like this:

    // In FactRec(5)
    return 5 * FactRec( 5 - 1 );

    // which is
    return 5 * FactRec(4);

If we re-enter the method with the parameter 4 we are again not stopped by the guard clause, and so we end up at:

    // In FactRec(4)
    return 4 * FactRec(3);

If we substitute this return value into the return value above we get

    // In FactRec(5)
    return 5 * (4 * FactRec(3));

This should give you a clue as to how the final solution is arrived at, so we'll fast track and show each step on the way down:

    return 5 * (4 * FactRec(3));
    return 5 * (4 * (3 * FactRec(2)));
    return 5 * (4 * (3 * (2 * FactRec(1))));
    return 5 * (4 * (3 * (2 * (1))));

That final substitution happens when the base case is triggered. At this point we have a simple algebraic formula to solve, which equates directly to the definition of factorials in the first place.

It's instructive to note that every call into the method results in either a base case being triggered or a call to the same method where the parameters are closer to a base case (often called a recursive call). If this is not the case then the method will run forever.
Recursion is solving a problem with a function that calls itself. A good example of this is a factorial function. Factorial is a math problem where the factorial of 5, for example, is 5 * 4 * 3 * 2 * 1. This function solves this in C# for positive integers (not tested - there may be a bug):

    public int Factorial(int n)
    {
        if (n <= 1)
            return 1;
        return n * Factorial(n - 1);
    }
Recursion refers to a method which solves a problem by solving a smaller version of the problem and then using that result plus some other computation to formulate the answer to the original problem. Often times, in the process of solving the smaller version, the method will solve a yet smaller version of the problem, and so on, until it reaches a "base case" which is trivial to solve. For instance, to calculate a factorial for the number X, one can represent it as X times the factorial of X-1. Thus, the method "recurses" to find the factorial of X-1, and then multiplies whatever it got by X to give a final answer. Of course, to find the factorial of X-1, it'll first calculate the factorial of X-2, and so on. The base case would be when X is 0 or 1, in which case it knows to return 1 since 0! = 1! = 1.
Consider an old, well known problem:

In mathematics, the greatest common divisor (gcd) … of two or more non-zero integers, is the largest positive integer that divides the numbers without a remainder.

The definition of gcd is surprisingly simple:

    gcd(m, n) = m                 if n = 0
    gcd(m, n) = gcd(n, m mod n)   if n > 0

where mod is the modulo operator (that is, the remainder after integer division). In English, this definition says the greatest common divisor of any number and zero is that number, and the greatest common divisor of two numbers m and n is the greatest common divisor of n and the remainder after dividing m by n. If you'd like to know why this works, see the Wikipedia article on the Euclidean algorithm.

Let's compute gcd(10, 8) as an example. Each step is equal to the one just before it:

    gcd(10, 8)
    gcd(8, 10 mod 8)
    gcd(8, 2)
    gcd(2, 8 mod 2)
    gcd(2, 0)
    2

In the first step, 8 does not equal zero, so the second part of the definition applies. 10 mod 8 = 2 because 8 goes into 10 once with a remainder of 2. At step 3, the second part applies again, but this time 8 mod 2 = 0 because 2 divides 8 with no remainder. At step 5, the second argument is 0, so the answer is 2.

Did you notice that gcd appears on both the left and right sides of the equals sign? A mathematician would say this definition is recursive because the expression you're defining recurs inside its definition.

Recursive definitions tend to be elegant. For example, a recursive definition for the sum of a list is

    sum l =
        if empty(l)
            return 0
        else
            return head(l) + sum(tail(l))

where head is the first element in a list and tail is the rest of the list. Note that sum recurs inside its definition at the end.

Maybe you'd prefer the maximum value in a list instead:

    max l =
        if empty(l)
            error
        elsif length(l) = 1
            return head(l)
        else
            tailmax = max(tail(l))
            if head(l) > tailmax
                return head(l)
            else
                return tailmax

You might define multiplication of non-negative integers recursively to turn it into a series of additions:

    a * b =
        if b = 0
            return 0
        else
            return a + (a * (b - 1))

If that bit about transforming multiplication into a series of additions doesn't make sense, try expanding a few simple examples to see how it works.

Merge sort has a lovely recursive definition:

    sort(l) =
        if empty(l) or length(l) = 1
            return l
        else
            (left, right) = split l
            return merge(sort(left), sort(right))

Recursive definitions are all around if you know what to look for. Notice how all of these definitions have very simple base cases, e.g., gcd(m, 0) = m. The recursive cases whittle away at the problem to get down to the easy answers. With this understanding, you can now appreciate the other algorithms in Wikipedia's article on recursion!
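Transcribed directly into Python, as a minimal sketch (the standard library's math.gcd does the same job):

    def gcd(m, n):
        # gcd(m, 0) = m; otherwise gcd(m, n) = gcd(n, m mod n)
        return m if n == 0 else gcd(n, m % n)

    print(gcd(10, 8))  # 2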
What is recursion? A function that calls itself. When would I use it? When a function can be (easily) decomposed into a simple operation plus the same function on some smaller portion of the problem. I should say, rather, that this makes it a good candidate for recursion. And people do use it! The canonical example is the factorial, which looks like:

    int fact(int a)
    {
        if (a <= 1)  /* base case; <= also stops fact(0) from recursing forever */
            return 1;
        return a * fact(a - 1);
    }

In general, recursion isn't necessarily fast (function call overhead tends to be high because recursive functions tend to be small, see above) and can suffer from some problems (stack overflow, anyone?). Some say recursive functions tend to be hard to get 'right' in non-trivial cases, but I don't really buy into that. In some situations, recursion makes the most sense and is the most elegant and clear way to write a particular function. It should be noted that some languages favor recursive solutions and optimize them much more (LISP comes to mind).
A recursive function is one which calls itself. The most common reason I've found to use it is traversing a tree structure. For example, if I have a TreeView with checkboxes (think installation of a new program, "choose features to install" page), I might want a "check all" button, which would be something like this (pseudocode):

    function cmdCheckAllClick {
        checkRecursively(TreeView1.RootNode);
    }

    function checkRecursively(Node n) {
        n.Checked = True;
        foreach (n.Children as child) {
            checkRecursively(child);
        }
    }

So you can see that checkRecursively first checks the node which it is passed, then calls itself for each of that node's children.

You do need to be a bit careful with recursion. If you get into an infinite recursive loop, you will get a Stack Overflow exception :)

I can't think of a reason why people shouldn't use it, when appropriate. It is useful in some circumstances, and not in others. I think that because it's an interesting technique, some coders perhaps end up using it more often than they should, without real justification. This has given recursion a bad name in some circles.
Recursion is an expression directly or indirectly referencing itself.

Consider recursive acronyms as a simple example:

- GNU stands for GNU's Not Unix
- PHP stands for PHP: Hypertext Preprocessor
- YAML stands for YAML Ain't Markup Language
- WINE stands for Wine Is Not an Emulator
- VISA stands for Visa International Service Association

More examples on Wikipedia.
Recursion works best with what I like to call "fractal problems", where you're dealing with a big thing that's made of smaller versions of that big thing, each of which is an even smaller version of the big thing, and so on. If you ever have to traverse or search through something like a tree or nested identical structures, you've got a problem that might be a good candidate for recursion.

People avoid recursion for a number of reasons:

1. Most people (myself included) cut their programming teeth on procedural or object-oriented programming as opposed to functional programming. To such people, the iterative approach (typically using loops) feels more natural.
2. Those of us who cut our programming teeth on procedural or object-oriented programming have often been told to avoid recursion because it's error prone.
3. We're often told that recursion is slow. Calling and returning from a routine repeatedly involves a lot of stack pushing and popping, which is slower than looping. I think some languages handle this better than others, and those languages are most likely not those where the dominant paradigm is procedural or object-oriented.
4. For at least a couple of programming languages I've used, I remember hearing recommendations not to use recursion if it gets beyond a certain depth because its stack isn't that deep.
A recursive statement is one in which you define the process of what to do next as a combination of the inputs and what you have already done. For example, take factorial:

    factorial(6) = 6*5*4*3*2*1

But it's easy to see factorial(6) also is:

    6 * factorial(5) = 6*(5*4*3*2*1)

So generally:

    factorial(n) = n*factorial(n-1)

Of course, the tricky thing about recursion is that if you want to define things in terms of what you have already done, there needs to be some place to start. In this example, we just make a special case by defining factorial(1) = 1.

Now we see it from the bottom up:

    factorial(6) = 6*factorial(5)
                 = 6*5*factorial(4)
                 = 6*5*4*factorial(3)
                 = 6*5*4*3*factorial(2)
                 = 6*5*4*3*2*factorial(1)
                 = 6*5*4*3*2*1

Since we defined factorial(1) = 1, we reach the "bottom".

Generally speaking, recursive procedures have two parts:

1. The recursive part, which defines some procedure in terms of new inputs combined with what you've "already done" via the same procedure (i.e. factorial(n) = n*factorial(n-1)).
2. A base part, which makes sure that the process doesn't repeat forever by giving it some place to start (i.e. factorial(1) = 1).

It can be a bit confusing to get your head around at first, but just look at a bunch of examples and it should all come together. If you want a much deeper understanding of the concept, study mathematical induction. Also, be aware that some languages optimize for recursive calls while others do not. It's pretty easy to make insanely slow recursive functions if you're not careful, but there are also techniques to make them performant in most cases.

Hope this helps...
I like this definition: in recursion, a routine solves a small part of a problem itself, divides the problem into smaller pieces, and then calls itself to solve each of the smaller pieces.

I also like Steve McConnell's discussion of recursion in Code Complete, where he criticises the examples used in computer science books on recursion:

    Don't use recursion for factorials or Fibonacci numbers

    One problem with computer-science textbooks is that they present silly examples
    of recursion. The typical examples are computing a factorial or computing a
    Fibonacci sequence. Recursion is a powerful tool, and it's really dumb to use it
    in either of those cases. If a programmer who worked for me used recursion to
    compute a factorial, I'd hire someone else.

I thought this was a very interesting point to raise and may be a reason why recursion is often misunderstood.

EDIT: This was not a dig at Dav's answer - I had not seen that reply when I posted this.
1.) A method is recursive if it can call itself; either directly:

    void f() {
       ... f() ...
    }

or indirectly:

    void f() {
        ... g() ...
    }

    void g() {
        ... f() ...
    }

2.) When to use recursion?

Q: Does using recursion usually make your code faster?
A: No.

Q: Does using recursion usually use less memory?
A: No.

Q: Then why use recursion?
A: It sometimes makes your code much simpler!

3.) People use recursion only when it is very complex to write iterative code. For example, tree traversal techniques like preorder and postorder can be made both iterative and recursive. But usually we use the recursive version because of its simplicity.
Here's a simple example: how many elements are in a set? (There are better ways to count things, but this is a nice simple recursive example.)

First, we need two rules:

1. if the set is empty, the count of items in the set is zero (duh!).
2. if the set is not empty, the count is one plus the number of items in the set after one item is removed.

Suppose you have a set like this: [x x x]. Let's count how many items there are.

1. The set is [x x x], which is not empty, so we apply rule 2. The number of items is one plus the number of items in [x x] (i.e. we removed an item).
2. The set is [x x], so we apply rule 2 again: one + number of items in [x].
3. The set is [x], which still matches rule 2: one + number of items in [].
4. Now the set is [], which matches rule 1: the count is zero!

Now that we know the answer in step 4 (0), we can solve step 3 (1 + 0). Likewise, now that we know the answer in step 3 (1), we can solve step 2 (1 + 1). And finally, now that we know the answer in step 2 (2), we can solve step 1 (1 + 2) and get the count of items in [x x x], which is 3. Hooray!

We can represent this as:

    count of [x x x] = 1 + count of [x x]
                     = 1 + (1 + count of [x])
                     = 1 + (1 + (1 + count of []))
                     = 1 + (1 + (1 + 0))
                     = 1 + (1 + (1))
                     = 1 + (2)
                     = 3

When applying a recursive solution, you usually have at least 2 rules:

1. the basis, the simple case which states what happens when you have "used up" all of your data. This is usually some variation of "if you are out of data to process, your answer is X".
2. the recursive rule, which states what happens if you still have data. This is usually some kind of rule that says "do something to make your data set smaller, and reapply your rules to the smaller data set."

If we translate the above to pseudocode, we get:

    numberOfItems(set)
        if set is empty
            return 0
        else
            remove 1 item from set
            return 1 + numberOfItems(set)

There's a lot more useful examples (traversing a tree, for example), which I'm sure other people will cover.
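A direct Python rendering of that pseudocode, as a sketch in which a list stands in for the set:

    def number_of_items(items):
        if not items:                          # basis: the empty set counts zero
            return 0
        return 1 + number_of_items(items[1:])  # recursive rule: remove one item, recount

    print(number_of_items(['x', 'x', 'x']))    # 3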
Well, that's a pretty decent definition you have. And Wikipedia has a good definition too. So I'll add another (probably worse) definition for you. When people refer to "recursion", they're usually talking about a function they've written which calls itself repeatedly until it is done with its work. Recursion can be helpful when traversing hierarchies in data structures.
An example: a recursive definition of a staircase is:

A staircase consists of:
- a single step and a staircase (recursion), or
- only a single step (termination)
To recurse on a solved problem: do nothing, you're done. To recurse on an open problem: do the next step, then recurse on the rest.
In plain English: assume you can do 3 things:

1. Take one apple
2. Write down tally marks
3. Count tally marks

You have a lot of apples in front of you on a table and you want to know how many apples there are.

    start
        Is the table empty?
        yes: Count the tally marks and cheer like it's your birthday!
        no:  Take 1 apple and put it aside
             Write down a tally mark
             goto start

The process of repeating the same thing till you are done is called recursion.

I hope this is the "plain english" answer you are looking for!
A recursive function is a function that contains a call to itself. A recursive struct is a struct that contains an instance of itself. You can combine the two as a recursive class. The key part of a recursive item is that it contains an instance/call of itself.

Consider two mirrors facing each other. We've seen the neat infinity effect they make. Each reflection is an instance of a mirror, which is contained within another instance of a mirror, etc. The mirror containing a reflection of itself is recursion.

A binary search tree is a good programming example of recursion. The structure is recursive, with each Node containing 2 instances of a Node. Functions to work on a binary search tree are also recursive.
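A minimal sketch of that idea in Python; the Node class and count function here are illustrative names, not from any particular library:

    class Node:
        # A recursive structure: each Node may contain two more Nodes.
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def count(node):
        # A recursive function over the recursive structure.
        if node is None:
            return 0
        return 1 + count(node.left) + count(node.right)

    tree = Node(2, Node(1), Node(3))
    print(count(tree))  # 3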
This is an old question, but I want to add an answer from a logistical point of view (i.e. not from an algorithm-correctness or performance point of view). I use Java for work, and Java doesn't support nested functions. As such, if I want to do recursion, I might have to define an external function (which exists only because my code bumps against Java's bureaucratic rules), or I might have to refactor the code altogether (which I really hate to do). Thus, I often avoid recursion and use stack operations instead, because recursion itself is essentially a stack operation.
You want to use it anytime you have a tree structure. It is very useful in reading XML.
Recursion as it applies to programming is basically calling a function from inside its own definition (inside itself), with different parameters so as to accomplish a task.
"If I have a hammer, make everything look like a nail." Recursion is a problem-solving strategy for huge problems, where at every step just, "turn 2 small things into one bigger thing," each time with the same hammer. Example Suppose your desk is covered with a disorganized mess of 1024 papers. How do you make one neat, clean stack of papers from the mess, using recursion? Divide: Spread all the sheets out, so you have just one sheet in each "stack". Conquer: Go around, putting each sheet on top of one other sheet. You now have stacks of 2. Go around, putting each 2-stack on top of another 2-stack. You now have stacks of 4. Go around, putting each 4-stack on top of another 4-stack. You now have stacks of 8. ... on and on ... You now have one huge stack of 1024 sheets! Notice that this is pretty intuitive, aside from counting everything (which isn't strictly necessary). You might not go all the way down to 1-sheet stacks, in reality, but you could and it would still work. The important part is the hammer: With your arms, you can always put one stack on top of the other to make a bigger stack, and it doesn't matter (within reason) how big either stack is.
Recursion is the process where a method calls itself in order to perform a certain task. It reduces redundancy of code. Most recursive functions or methods must have a condition to break the recursive call, i.e. stop it from calling itself if a condition is met - this prevents the creation of an infinite loop. Not all functions are suited to be used recursively.
Sorry if my opinion agrees with someone; I'm just trying to explain recursion in plain English.

Suppose you have three managers - Jack, John and Morgan. Jack manages 2 programmers, John 3, and Morgan 5. You are going to give every manager $300 and want to know what it would cost. The answer is obvious - but what if 2 of Morgan's employees are also managers?

HERE comes the recursion. You start from the top of the hierarchy. The summary cost is $0. You start with Jack, then check if he has any managers among his employees. If you find any, check if they have any managers among their employees, and so on. Add $300 to the summary cost every time you find a manager. When you are finished with Jack, go to John, his employees, and then to Morgan.

You'll never know how many cycles you'll go through before getting an answer, though you do know how many managers you have and how much budget you can spend.

Recursion is a tree, with branches and leaves, called parents and children respectively. When you use a recursion algorithm, you more or less consciously are building a tree from the data.
In plain English, recursion means to repeat something again and again. In programming, one example is calling the function within itself. Look at the following example of calculating the factorial of a number:

    public int fact(int n)
    {
        if (n == 0)
            return 1;
        else
            return n * fact(n - 1);
    }
An algorithm exhibits structural recursion on a datatype if it basically consists of a switch statement with a case for each case of the datatype. For example, when you are working on a type

    tree = null
         | leaf(value : integer)
         | node(left : tree, right : tree)

a structural recursive algorithm would have the form

    function computeSomething(x : tree) =
        if x is null: base case
        if x is leaf: do something with x.value
        if x is node: do something with x.left,
                      do something with x.right,
                      combine the results

This is really the most obvious way to write any algorithm that works on a data structure.

Now, when you look at the integers (well, the natural numbers) as defined using the Peano axioms

    integer = 0 | succ(integer)

you see that a structural recursive algorithm on integers looks like this:

    function computeSomething(x : integer) =
        if x is 0: base case
        if x is succ(prev): do something with prev

The too-well-known factorial function is about the most trivial example of this form.
A function that calls itself or uses its own definition.
In PEP 3141, why doesn't Number have an add method?
PEP 3141 defines a numerical hierarchy with Complex.__add__ but no Number.__add__. This seems to be a weird choice, since the other numeric type Decimal that (virtually) derives from Number also implements an add method. So why is it this way? If I want to add type annotations or assertions to my code, should I use x:(Complex, Decimal)? Or x:Number and ignore the fact that this declaration is practically meaningless?
I believe the answer can be found in the Rejected Alternatives:

    The initial version of this PEP defined an algebraic hierarchy inspired by a
    Haskell Numeric Prelude [3] including MonoidUnderPlus, AdditiveGroup, Ring, and
    Field, and mentioned several other possible algebraic types before getting to
    the numbers. We had expected this to be useful to people using vectors and
    matrices, but the NumPy community really wasn't interested ...

There are more complicated number systems where addition is clearly not supported. They could have gone into much more detail with their class hierarchy (and originally intended to), but there was a lack of interest in the community. Hence, it is easier just to leave Number unspecified for anyone who wants to get more complicated.

Note that monoids are an example where only one binary operation is defined.
In numbers.py there is a note on Decimal and Real:

    ## Notes on Decimal
    ## ----------------
    ## Decimal has all of the methods specified by the Real abc, but it should
    ## not be registered as a Real because decimals do not interoperate with
    ## binary floats (i.e. Decimal('3.14') + 2.71828 is undefined). But,
    ## abstract reals are expected to interoperate (i.e. R1 + R2 should be
    ## expected to work if R1 and R2 are both Reals).

I'll also put some related links here. Really a good question - it drove me to dig quite a hole. :P

- A related github issue
- PEP 3119, which is all about ABCs (Abstract Base Classes); PEP 3141 defines the Number part
- cpython/Lib/numbers.py
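That comment is easy to verify interactively; the exact wording of the TypeError varies by Python version:

    from decimal import Decimal

    print(Decimal('3.14') + Decimal('2.71828'))  # fine: Decimal('5.85828')
    try:
        Decimal('3.14') + 2.71828  # mixing with a binary float is refused
    except TypeError as e:
        print(e)  # e.g. unsupported operand type(s) for +: 'decimal.Decimal' and 'float'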
not able to understand behaviour of ** operator
I suddenly came across this, and I am not able to understand why it is happening. On the Python prompt, using the ** operator on 3 onwards like below gives what looks to me like the wrong result:

    >>> 2**2**2
    16
    >>> 3**3**3
    7625597484987L
    >>> 4**4**4
    13407807929942597099574024998205846127479365820592393377723561443721764030073546976801874298166903427690031858186486050853753882811946569946433649006084096L

Then I thought I must have to use parentheses, so I used them and it gives the result I expected:

    >>> (3**3)**3
    19683

BUT the // operator gives the results I expect in this kind of chained operation:

    >>> 4//4//4
    0
    >>> 40//4//6
    1

Please help me to understand.
** is right-associative. Mathematically, this makes sense: 3**3**3 is equal to 3**(3**3) = 3**27, not (3**3)**3 = 27**3.

The documentation states that it is right-associative:

    In an unparenthesized sequence of power and unary operators, the operators are
    evaluated from right to left.
As the docs say:

    Operators in the same box group left to right (except for comparisons… and
    exponentiation, which groups from right to left).

In other words, ** is right-associative, while // (like all other operators except comparisons) is left-associative. Elsewhere, there's a whole section on The power operator that, after giving a rule (which isn't relevant here) about how power and unary operators interact, clarifies that:

    [I]n an unparenthesized sequence of power and unary operators, the operators
    are evaluated from right to left…

This is actually the way most programming languages do it. Exponentiation isn't written with symmetrical operator syntax in mathematics, so there's really no reason it should have the same default associativity. And left-associative exponentiation would be much less useful, because (2**3)**4 is exactly the same thing as 2**(3*4), whereas there's nothing obvious that's the same thing as 2**(3**4).
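A quick demonstration of both points at the prompt:

    >>> 2**3**2       # right-associative: 2**(3**2) == 2**9
    512
    >>> (2**3)**2     # the left-associated form collapses to 2**(3*2)
    64
    >>> 2**(3*2)
    64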
Looks like the ** operator is right-associative, meaning 3**3**3 evaluates as 3**27 and 4**4**4 as 4**256.
When you do stuff like 4**4**4, you should use parentheses to make your intentions explicit. The parser will resolve the ambiguity, as #cHao indicated, but it is confusing to others. You should use (4**4)**4 or 4**(4**4). Explicit here is better than implicit, since taking powers of powers is not exactly a workaday operation we see all of the time.
Python Math module subtleties
I've been experimenting with the standard Python math module and have come across some subtle difficulties. For example, I'm noticing the following behavior concerning indeterminate forms:

    >>> 0**0
    1
    >>> def inf():
    ...     return 1e900  # Will return inf
    ...
    >>> inf()**inf()
    inf

And other anomalies of the sort. I'm writing a calculator, and I'd like to have it be mathematically accurate. Is there something I can do about this? Or is there some way to circumvent this? Thanks in advance.
There's nothing wrong with your first example. 0**0 is often defined to be 1.

The second example is all to do with the precision of doubles. 1E900 exceeds the maximum positive value of a (most likely 64-bit) double. If you want numbers outside of that range, you'll have to look into libraries. Fortunately Python has one built-in: the decimal module. For example:

    from decimal import Decimal
    d = Decimal('1E900')
    f = d + d
    print(f)
    >>> 2E900
According to Wolfram (quoting Knuth), while 0**0 is indeterminate, it's sometimes given as 1. This is because holding the statement 'x**0 = 1' to be true in all cases is in some cases useful. Even more interestingly, Python will consider NaN**0 to be 1 as well: http://mathworld.wolfram.com/Power.html

In the case of infinity**infinity, you're not really dealing with the mathematical concept of infinity here (where that would be undefined), but rather a number that's too large and has overflowed. As such, all that statement is saying is that a number that's huge to the power of another number that's huge is still a number that's huge.

Edit: I do not think it is possible to overload a built-in type (such as float) in Python, so you cannot overload the float.__pow__(x, y) operator directly. What you could possibly do is define your own version of float:

    class myfloat(float):
        def __pow__(x, y):
            if x == y == 0:
                return 'NaN'
            else:
                return float.__pow__(x, y)

    m = myfloat(0)
    m**0

Not sure if that's exactly what you're looking for though.
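For reference, the default behaviour described above is easy to check (math.nan needs Python 3.5+; float('nan') works in older versions):

    import math

    print(0 ** 0)         # 1
    print(math.nan ** 0)  # 1.0 -- Python applies the x**0 == 1 convention
    print(1e900)          # inf -- the literal overflows to IEEE infinity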
Well, returning NaN for 0**0 is almost always useless, and lots of algorithms avoid special cases if we assume 0**0 == 1. So while it may not be mathematically perfect - we're talking about IEEE-754 here; mathematical exactness is really the least of our problems [1].

But if you want to change it, that's rather simple. The following works as expected in Python 3.2:

    def my_pow(x, y):
        if y == 0:
            return 'NaN'
        return float.__pow__(float(x), y)

    pow = my_pow

[1] The following code can theoretically execute the if branch with x86 CPUs (well, at least in C and co):

    float x = sqrt(y);
    if (x != sqrt(y))
        printf("Surprise, surprise!\n");
Why does "**" bind more tightly than negation?
I was just bitten by the following scenario:

    >>> -1 ** 2
    -1

Now, digging through the Python docs, it's clear that this is intended behavior, but why? I don't work with any other languages with power as a built-in operator, but not having unary negation bind as tightly as possible seems dangerously counter-intuitive to me. Is there a reason it was done this way? Do other languages with power operators behave similarly?
That behaviour is the same as in math formulas, so I am not sure what the problem is, or why it is counter-intuitive. Can you explain where you have seen something different? "**" always binds more tightly than "-": -x^2 is not the same as (-x)^2. Just use (-1) ** 2, exactly as you'd do in math.
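At the prompt:

    >>> -1 ** 2     # parsed as -(1 ** 2)
    -1
    >>> (-1) ** 2
    1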
Short answer: it's the standard way precedence works in math.

Let's say I want to evaluate the polynomial 3x^3 - x^2 + 5:

    def polynomial(x):
        return 3*x**3 - x**2 + 5

This looks better than:

    def polynomial(x):
        return 3*x**3 - (x**2) + 5

And the first way is the way mathematicians do it. Other languages with exponentiation work the same way. Note that the negation operator also binds more loosely than multiplication, so

    -x*y == -(x*y)

which is also the way they do it in math.
If I had to guess, it would be because having an exponentiation operator allows programmers to easily raise numbers to fractional powers. Negative numbers raised to fractional powers end up with an imaginary component (usually), so that can be avoided by binding ** more tightly than unary -. Most languages don't like imaginary numbers. Ultimately, of course, it's just a convention - and to make your code readable by yourself and others down the line, you'll probably want to explicitly group your (-1) so no one else gets caught by the same trap :) Good luck!
It seems intuitive to me. First, it's consistent with mathematical notation: -2^2 = -4. Second, the operator ** was widely introduced by FORTRAN a long time ago. In FORTRAN, -2**2 is -4 as well.
OCaml doesn't do the same:

    # -12.0**2.0;;
    - : float = 144.

That's kind of weird...

    # -12.0**0.5;;
    - : float = nan

Look at that link though: order of operations.