I made a simple factorial program:
import sys

sys.set_int_max_str_digits(0)
sys.setrecursionlimit(1000000)

def factorial(x):
    if x == 0 or x == 1:
        return 1
    elif x > 1:
        return x * factorial(x - 1)

i = 0
while 1:
    print(factorial(i), '\n')
    i += 1
But after a while the program halts. I want to know if there's a way to remove the limit on how big it could get.
Recursion is not meant to be infinite. Eventually your program would fail, even on a system with a huge amount of memory.
Also note that the recursion limit given to setrecursionlimit() is not a guarantee that you'll get that recursion depth. To quote from the sys.setrecursionlimit documentation:
The highest possible limit is platform-dependent. A user may need to set the limit higher when they have a program that requires deep recursion and a platform that supports a higher limit. This should be done with care, because a too-high limit can lead to a crash.
I would suggest either limiting the program to calculating reasonably sized factorials, or not using recursion. Some tasks are much better suited to recursion than iteration, but computing factorials is not one of them.
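For reference, an iterative version (a sketch, not the asker's code) sidesteps the recursion limit entirely; only the memory needed for the growing integers bounds it:

```python
# Iterative factorial: no recursion, so no recursion-depth limit.
def factorial(x):
    result = 1
    for n in range(2, x + 1):
        result *= n
    return result

print(factorial(10))  # 3628800
```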
I was practicing writing python code, and I was trying to generate numbers of the Fibonacci series from 1,1,...
I wrote it and it was quite good, but then something odd happened, which I'll write down here:
# Generating Fibonacci Series in a List:
N = 1
M = 1
Fibonacci = []
ChoiceMethod = 0
ChoiceNum = 0
BelowNum = 0

ChoiceMethod = input('Choose Your Method of Work (1/2/3): ')

if int(ChoiceMethod) == 1:
    ChoiceNum = input('Give your Choice Number: ')
    while len(Fibonacci) != int(ChoiceNum):
        Fibonacci.append(N)
        Fibonacci.append(M)
        N += M
        M += N
    print(Fibonacci)
This is the first part of the code. If you run it, enter 1 at the input (the 1st method), and then, when it says "Give your Choice Number", give it an even number like 2, it will work just fine, like any even number. But as soon as you give it an odd number like 5, it will crash the PC, and I can't figure out why.
I found a fix for it too: if you change the "!=" in "len(Fibonacci) != int(ChoiceNum)" to "<", it works fine, but I don't know what's wrong overall.
I'M TELLING YOU, IT WILL MOST PROBABLY CRASH YOUR PC TOO. Be careful with it.
You are causing an infinite loop.
If you enter an odd number like 5, note that you are appending two numbers on each iteration, which means the length of your Fibonacci list can never be odd; it will always be even, so the != condition is never satisfied. Instead, you should validate the number entered and run the loop against that maximum, breaking out of the loop (or appending just one element) once you reach it.
As other answers explain, there is a bug in your program that turns it into an infinite loop. So what it actually does is build an ever-growing list of ever-larger Fibonacci numbers.
An infinitely large list requires an infinite amount of memory. And since the numbers themselves don't have an upper bound, that means even more memory.
But why does it crash your PC?
Well, Python doesn't have any built-in limit on the amount of memory it uses. So a pathological Python script like yours will just keep asking the OS for more and more memory, which is detrimental to the stability of your PC. A couple of things can happen:
When the OS runs out of physical RAM, it will start allocating virtual memory. But virtual memory works by swapping virtual memory pages between physical RAM and your hard disk or SSD. And the OS typically will grab memory pages from other applications.
If there is too much swapping going on, the paging rate exceeds your system's ability to keep up. Eventually some vital system service is impacted and ... the system crashes.
Some operating systems have defenses against programs that consume too much memory. For example, on Linux there is a kernel mechanism called the OOM Killer which tries to identify processes that are using excessive memory. When it finds one, it kills it.
The problem is that the OOM Killer can identify the wrong "culprit". And if the victim it kills is a vital system service ... crash.
Fortunately, this kind of crash doesn't usually do any long term damage.
Try this:
Fibonacci = [1, 1]
...
while len(Fibonacci) != int(ChoiceNum):
    Fibonacci.append(Fibonacci[-2] + Fibonacci[-1])
With this loop you add the sum of the last two numbers of your Fibonacci list to the list itself, so your ChoiceNum can now be odd. However, add one extra check: if ChoiceNum == 1, return just the first value of your Fibonacci list; otherwise run the while loop.
I have tested the following python script on 2 Windows machines, and onlinegdb's python compiler.
On the Windows machines, this code, run as-is, just exits midway, non-deterministically, with no error message or warning. I tested with Python 3.9.6.
It works as expected and does not crash when the function nCr is a separate function outside isPrime. You can comment and uncomment the appropriate lines to switch between the two versions.
On onlinegdb's python compiler, it works as expected in both cases.
import sys
sys.setrecursionlimit(10**6)

# def nCr(n, r):
#     if r == 0:
#         return 1
#     else:
#         return nCr(n, r-1) * (n-r+1) / r

def isPrime(N):
    def nCr(n, r):
        if r == 0:
            return 1
        else:
            return nCr(n, r-1) * (n-r+1) / r
    if nCr(N, N//2) % N == 0:
        return True
    else:
        return False

for i in range(4050, 4060 + 1):
    print(f"{i}: {isPrime(i)}")
else:
    print("Done")
Any clues on what may be causing this? Is it possible to get this to run correctly on Windows? Or should I just avoid inner functions entirely?
Note: I know that the prime checker's logic is incorrect.
Note: You can try a larger range by changing the 4th last line if you are not able to reproduce the crash.
Edit 1:
We found that:
If the recursion depth is sufficiently large, it will most likely cause a crash on all platforms. This number, although large, is still small enough that only a small portion of the machine's memory is in use.
Moving the function to module level does not prevent the crash.
Increasing the interpreter's recursion limit does not affect the crash, as long as it is set above the depth at which the crash occurs.
So, the question now is:
Is there a way to estimate the depth at which the crash will occur? The depth at which the crash occurs is also quite small, and if we use our own stack instead of calling the function recursively, we can keep going until the machine is out of memory. So, should we just avoid recursive function calls in Python?
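As a side note on that last point: this particular nCr only recurses on r-1, so it doesn't even need an explicit stack; a plain loop (a sketch, not the original code) computes the same thing with constant depth:

```python
def nCr(n, r):
    # Same product as the recursive version,
    # (n-k+1)/k for k = 1..r, computed iteratively.
    result = 1
    for k in range(1, r + 1):
        result = result * (n - k + 1) / k
    return result

print(nCr(10, 5))  # 252.0
```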
I know standard CPython has a limit on recursion depth (1000 by default, I think), so the below example code will fail with a "maximum recursion depth exceeded" error.
def rec_add(x):
    if x == 0:
        return x
    else:
        return x + rec_add(x - 1)

print(rec_add(1000))
I heard Stackless Python supports infinite recursion depth, but if I run the above code with Stackless Python, it still reports a "maximum recursion depth exceeded" error. I think maybe I need to modify the code somehow to enable it to use the infinite recursion depth feature of Stackless Python?
Any idea how to do infinite recursions in Stackless Python? Thanks.
Note: I know how to increase standard CPython's recursion depth limit over 1000, and I know how to convert the above code to a simple iteration, or simply use the Gauss formula to calculate the sum, those are not what I'm looking for, and the above code is purely as an example.
EDIT: As I already said in the "Note" above (which I guess no one actually read), I know how to increase CPython's recursion limit, and I know how to convert the example code into an iteration or just the Gauss sum formula n * (n + 1) / 2. I'm asking because I heard one of the great features of Stackless Python is that it enables infinite recursion, and I don't know how to enable it for the example code.
EDIT2: I'm not sure if I got the idea of "Stackless Python supports infinite recursion" wrong, but here are some sources that say (or allude to) it:
What are the drawbacks of Stackless Python?
https://bitbucket.org/stackless-dev/stackless/issues/96
https://stackless.readthedocs.io/en/3.6-slp/whatsnew/stackless.html
After fumbling around, I got the following code, based on an official example from more than a decade ago:
https://bitbucket.org/stackless-dev/stacklessexamples/src/a01959c240e2aeae068e56b86b4c2a84a8d854e0/examples/?at=default
I modified the recursive addition code to look like this:
import stackless

def call_wrapper(f, args, kwargs, result_ch):
    result_ch.send(f(*args, **kwargs))

def call(f, *args, **kwargs):
    result_ch = stackless.channel()
    stackless.tasklet(call_wrapper)(f, args, kwargs, result_ch)
    return result_ch.receive()

def rec_add(n):
    if n <= 1:
        return 1
    return n + call(rec_add, n-1)

print(rec_add(1000000))
It works with a large number like 1,000,000. I guess it is a kind of indirect recursion, since the function calls another function that starts a tasklet, which in turn calls the function itself (or something like that).
Now I'm wondering if this is indeed the supposed way to implement an infinite recursion in Stackless Python, or are there more straight-forward/direct ways of doing it? Thanks.
I am wondering if there is a standard way or a better way to control against infinite recursion than in my code below? I want my recursive function to give up after max attempts. The code below does it by introducing attempt method parameter and incrementing it during the recursive invocation. Is there a better way?
def Rec(attempt=0):
    if attempt == 10:
        return
    else:
        print(attempt)
        Rec(attempt=attempt+1)

Rec()
There is also this way, but it is not recommended for what you want to do. I only post it for reference, since it is useful in other cases...
#!/usr/bin/env python
import sys
sys.setrecursionlimit(5)

def Rec(attempt=0):
    print(attempt)
    Rec(attempt=attempt+1)

try:
    Rec()
except RuntimeError:
    print('maximum recursion depth exceeded')
To be even clearer, here is what the Python docs say sys.setrecursionlimit(limit) does:
Set the maximum depth of the Python interpreter stack to limit. This
limit prevents infinite recursion from causing an overflow of the C
stack and crashing Python.
The highest possible limit is platform-dependent. A user may need to
set the limit higher when she has a program that requires deep
recursion and a platform that supports a higher limit. This should be
done with care, because a too-high limit can lead to a crash.
So, in my opinion, it is not good to mess with the Python interpreter stack unless you know very well what you are doing.
You could make a decorator, and then you can write your proper recursive function, with its usual exit conditions, but also impose a recursion limit:
def limit_recursion(limit):
    def inner(func):
        func.count = 0
        def wrapper(*args, **kwargs):
            # Track the current recursion depth on the function object.
            func.count += 1
            if func.count <= limit:
                result = func(*args, **kwargs)
            else:
                result = None
            func.count -= 1
            return result
        return wrapper
    return inner
Your code would be (with a limit of 3):
@limit_recursion(limit=3)
def Rec():
    print('hi')
    Rec()
Running:
>>> Rec()
hi
hi
hi
Yours is already good; that's the way to go. It is good because it is lightweight: you basically need an int and a conditional branch, and that's it.
Alternatively, you can try to guarantee that the loop breaks via the exit condition itself, without a counter (but that usually varies from case to case).
I wrote this simple code in python to calculate a given number of primes.
The question I want to ask is whether or not it's possible for me to write a script that calculates how long it will take, in terms of processor cycles, to execute this? If yes then how?
primes = [2]
pstep = 3
count = 1

def ifprime(a):
    """Check if the passed number is prime or not."""
    global primes
    for check in primes:
        if (a % check) == 0:
            return False
    return True

while 1000000000 >= count:
    if ifprime(pstep):
        primes.append(pstep)
        print(pstep)
        count += 1
    pstep += 1
The interesting thing about this problem is that whether or not I find primes after x incrementation cycles is nearly impossible to predict. Moreover, there's a compounding effect in this scenario: the larger the 'primes' list grows, the longer each call to this function takes.
Any tips?
I think you would have to use an approximation of the distribution of primes, a la PNT which (I think) states that between 1 and x you'll have approximately x/ln(x) primes (ln being natural log). So given rough estimates of the time taken for a single iteration, you should be able to create an estimate.
You have approximately x/ln(x) primes in your list. Your main code block (inside the while loop) takes effectively constant time, so:
t(x) ~ x/ln(x) * a + b + t(x-1)
where t(x) is the time taken up to and including iteration x, a is the time taken to check each prime in the list (the modulo operation), and b is the 'constant' time of the main loop. I faintly remember there is a way to convert such recurrences to closed form ;)
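A hypothetical sketch of that estimate (the measured constants a and b are machine-dependent, so the absolute numbers are illustrative only):

```python
import math
import timeit

# Roughly measure the cost `a` of one modulo check and the
# overhead `b` of one bare loop iteration.
a = timeit.timeit('x % 7', setup='x = 100003', number=100_000) / 100_000
b = timeit.timeit('pass', number=100_000) / 100_000

def estimated_total(n):
    # Sum the model x/ln(x) * a + b over iterations x = 2..n.
    return sum(x / math.log(x) * a + b for x in range(2, n + 1))

print(estimated_total(10_000))  # predicted seconds, very roughly
```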
If you want to predict the time an arbitrary process needs until it finishes, you can't, in general: that is essentially the Halting Problem. In special cases you can estimate the time your script will take, for example if you know it is written in a way that doesn't allow unbounded loops.
In your special case of finding primes, it is even harder to guess how long it will take before running the process, since there is only a lower bound on the number of primes within an interval, and that doesn't help you find them.
Well, if you are on Linux you can use the 'time' command and then parse its result.
For your problem I would time the computation for thousands of primes of different sizes and draw a chart, so it would be easy to analyze.
Well, there is a large branch of theoretical computer science -- complexity theory -- dedicated to just this sort of problem. The fully general problem you have here (deciding whether arbitrary code will finish for arbitrary input) is the halting problem, which is undecidable, so in general it is very hard.
But in this case you probably have two options.
The first is to use brute force. Run timeit for isprime(a) for a = 1, 2, 3, 4, ..., plot a graph of the times, and see if it looks like something recognizable: a^2, a log a, whatever.
The right -- but harder -- answer is to analyze your algorithm and see if you can work out how many operations it takes for a "typical case".
When you call isprime(pstep) you are looping roughly pstep / ln(pstep) times if pstep is prime, which happens with probability 1/ln(pstep). So the cost of testing a prime pstep is roughly proportional to pstep. Unknown is the cost of testing the composites, because we don't know the average smallest factor of the composites between 2 and N. If we ignore it, assuming it is dominated by the cost of the primes, we get a total cost of SUM(pstep) for pstep = 3 to N+3, which is roughly proportional to N**2.
You can reduce this to roughly N**1.5 by cutting off the loop in isprime() when check > sqrt(a).
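A sketch of that cutoff (not the original code; it assumes, as in the question's loop, that primes already holds every prime below the candidate, and in particular all primes up to sqrt(a)):

```python
import math

def ifprime(a, primes):
    # Trial division, but stop once the divisor exceeds sqrt(a):
    # any composite a has a prime factor no larger than sqrt(a).
    limit = math.isqrt(a)
    for check in primes:
        if check > limit:
            break
        if a % check == 0:
            return False
    return True

print(ifprime(97, [2, 3, 5, 7]))  # True
```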