Non-deterministic crash of a deterministic Python program when inner functions are present

I have tested the following Python script on two Windows machines and on OnlineGDB's online Python interpreter.
On the Windows machines, the code as written simply exits midway, non-deterministically, with no error message or warning. I tested with Python 3.9.6.
It works as expected and does not crash when the function nCr is defined outside isPrime instead of inside it. You can comment and uncomment the appropriate lines to switch between the two versions.
On OnlineGDB, it works as expected in both cases.
import sys
sys.setrecursionlimit(10**6)

# def nCr(n, r):
#     if r == 0:
#         return 1
#     else:
#         return nCr(n, r-1) * (n-r+1) / r

def isPrime(N):
    def nCr(n, r):
        if r == 0:
            return 1
        else:
            return nCr(n, r-1) * (n-r+1) / r
    if nCr(N, N//2) % N == 0:
        return True
    else:
        return False

for i in range(4050, 4060 + 1):
    print(f"{i}: {isPrime(i)}")
else:
    print("Done")
Any clues on what may be causing this? Is it possible to get this to run correctly on Windows, or should I just avoid inner functions entirely?
Note: I know that the prime checker's logic is incorrect.
Note: If you are not able to reproduce the crash, try a larger range by changing the range(...) call near the end.
Edit 1:
We found that:
If the recursion depth is sufficiently large, it will most likely cause a crash on all platforms. This number, although large, is still small enough that only a small portion of the machine's memory is in use.
Moving the function to module level does not prevent the crash.
Increasing the recursion limit via sys.setrecursionlimit does not affect the crash, as long as it is above the depth at which the crash occurs.
So, the question now is:
Is there a way to estimate the depth at which the crash will occur? Also, the depth at which the crash occurs is quite small; if we use our own explicit stack instead of calling the function recursively, we can keep going until the machine is out of memory. So, should we just avoid recursive function calls in Python?
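One possible workaround, assuming the crash is a C-stack overflow (CPython recursion consumes the native stack, and Windows threads default to a roughly 1 MB stack, so deep recursion can overflow long before a generous sys.setrecursionlimit is reached): run the recursive call in a thread whose stack size is set explicitly. This is a sketch, not a guaranteed fix on every platform:

```python
import sys
import threading

sys.setrecursionlimit(10**6)

def nCr(n, r):
    if r == 0:
        return 1
    return nCr(n, r - 1) * (n - r + 1) / r

def run_with_big_stack(target, stack_bytes=64 * 1024 * 1024):
    """Run target() in a thread with a large C stack and return its result.

    threading.stack_size must be set before the thread is created; the
    size applies to threads started afterwards, not to the main thread.
    """
    result = []
    threading.stack_size(stack_bytes)
    t = threading.Thread(target=lambda: result.append(target()))
    t.start()
    t.join()
    return result[0]

print(run_with_big_stack(lambda: nCr(4057, 4057 // 2)))
```

This does not remove the recursion limit; it only moves the point at which the native stack runs out, so an explicit stack (or an iterative rewrite) remains the robust option.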

Related

Factorial function in python is being limited

I made a simple factorial program:
import sys
sys.set_int_max_str_digits(0)
sys.setrecursionlimit(1000000)

def factorial(x):
    if x == 0 or x == 1:
        return 1
    elif x > 1:
        return x * factorial(x - 1)

i = 0
while 1:
    print(factorial(i), '\n')
    i += 1
But after a while the program halts. I want to know if there's a way to remove the limit on how big it could get.
Recursion is not meant to be infinite. Eventually your program would fail, even on a system with a huge amount of memory.
Also note that the recursion limit given to setrecursionlimit() is not a guarantee that you'll get that recursion depth. To quote from the sys.setrecursionlimit documentation:
The highest possible limit is platform-dependent. A user may need to set the limit higher when they have a program that requires deep recursion and a platform that supports a higher limit. This should be done with care, because a too-high limit can lead to a crash.
I would suggest either limiting the program to calculating reasonably sized factorials, or not using recursion. Some tasks are much better suited to recursion than to iteration, but computing factorials is not one of them.
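An iterative version, as a sketch, removes the recursion-depth limit entirely; only time and memory for the (very large) integers bound how far it can go:

```python
def factorial(x):
    """Iterative factorial: no recursion, so no recursion limit applies."""
    result = 1
    for k in range(2, x + 1):
        result *= k
    return result

print(factorial(10))
```

With this version there is no need to touch sys.setrecursionlimit at all.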

Strange multi-processing trouble

Sorry for the vague title, but since I have no clue what the reason for my problem might be, I don't know how to describe it better. Anyway, I have a strange problem in connection with multiprocessing and hope some of you can help me.
I'm currently dealing with convex optimization, especially with parallelizing the tasks. My goal is to utilize as many cores as possible on a multi-core machine (to which I only have temporary access).
Therefore, I took a look at CVXPY's tutorial page and tried to implement an easy example to get into the topic (scroll down; it's the last example on the page). I shortened the example to the parts that are necessary for my question, so my code looks as follows:
import cvxpy as cp
import numpy
from multiprocessing import Pool

# Assign a value to gamma and find the optimal x.
def get_x(gamma_value):
    print("E")
    gamma.value = gamma_value
    result = prob.solve()
    print("F")
    return x.value

# Problem data.
n = 15
m = 10
numpy.random.seed(1)
A = numpy.random.randn(n, m)
b = numpy.random.randn(n)

# gamma must be nonnegative due to DCP rules.
gamma = cp.Parameter(nonneg=True)

# Construct the problem.
x = cp.Variable(m)
error = cp.sum_squares(A @ x - b)
obj = cp.Minimize(error + gamma*cp.norm(x, 1))
prob = cp.Problem(obj)

# Construct a trade-off curve of ||Ax-b||^2 vs. ||x||_1
sq_penalty = []
l1_penalty = []
x_values = []
print("A")
gamma_vals = numpy.logspace(-4, 6, num=6400)
print("B")
for val in gamma_vals:
    gamma.value = val
print("C")

# Parallel computation (set to 1 process here).
pool = Pool(processes=1)
print("D")
x_values = pool.map(get_x, gamma_vals)
print(x_values[-1])
As you might have observed, I added some prints with capital letters; they serve to find out where exactly the issue occurs, so I can refer to them in my problem description.
When I run the code, the letters "A" to "D" are displayed on the screen, so everything is fine until passing "D". But then the program kind of gets stuck. The CPU load is still high, so something is definitely going on, but the code never reaches "E", which would be printed after successfully entering
x_values = pool.map(get_x, gamma_vals)
To my eyes it looks a bit like being stuck in an infinite loop.
Therefore, I guess something with this pool.map function must be fishy. At first I thought that calling the function might simply be time-consuming, but that can't be it: the optimization only starts inside get_x, which is never reached.
Nevertheless, I tried to run my program on the multi-core machine (with multiple cores, as well as with only a single core) and, surprise, it passed this line nearly instantaneously and started with the actual optimization problem (and finally solved it).
So my issue is that I don't know what's going wrong on my computer and how to fix it.
I can't access the machine at any time, so of course I want to try the code on my computer first before uploading it, which isn't possible if even this easy toy example doesn't work.
I'd be grateful for any ideas/help!
Thank you in advance!
FOOTNOTE: I am using Windows 10; the multi-core machine uses Linux.
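That footnote may well be the key. On Windows, multiprocessing starts workers with the "spawn" method, which re-imports the main module in every child; without an `if __name__ == "__main__":` guard, the Pool creation itself re-runs in each child, which can hang or spawn processes endlessly, while on Linux (fork) the same script works. A minimal sketch of the usual structure (the `square` worker here is a stand-in I made up for `get_x`):

```python
from multiprocessing import Pool

def square(v):
    # Stands in for get_x; worker functions must be defined at module
    # level so the spawn start method can import them in child processes.
    return v * v

if __name__ == "__main__":
    # On Windows there is no fork: each worker re-imports this module.
    # Everything that creates processes must therefore sit under this
    # guard, or each child will try to create its own Pool.
    with Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3]))
```

This is a general pattern for spawn-based platforms, not something confirmed by the original poster; but it would explain code that works on the Linux machine and hangs on Windows.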

Python Program Freezes computer

I have been programming in Python for a little while now, and decided to teach my friend as well. I asked him to write a function that would return a list of all the factors of a number. He gave me a script that was a little inefficient, but it still looked to me like it should work. However, when run, the program freezes up both my computer and his (I have a top-of-the-line gaming PC, so I don't think it's a matter of using too many resources). I showed him how to fix it, but I still cannot pinpoint what is causing the problem. Here is the code; thanks for your time!
def factors(numb):
    facs = []
    for i in range(1, int(numb // 2)):
        if numb % i == 0:
            facs.append(i)
    for i in facs:
        facs.append((numb / i))
    return facs.sort()
P.S. It never throws an error, even after being left to run for a while. Also, it is in Python 3.4.
Your problem is here:
for i in facs:
    facs.append((numb / i))
The for loop is iterating over every number in facs, and each time it does it adds a new number to the end. So each time it gets one place closer to the end of the list, the list gets one place longer. This makes an infinite loop and slowly swallows up all your memory.
EDIT: Solving the problem
The loop isn't actually necessary (and neither is the sorting, as the function produces an already sorted list)
def factors(numb):
    facs = []
    for i in range(1, int(numb // 2)):
        if numb % i == 0:
            facs.append(i)
    return facs
Should work fine.
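If the complementary factors (numb / i) were actually wanted, the safe pattern is to iterate over a snapshot of the list so the appends cannot extend the loop. A sketch of that variant:

```python
def factors(numb):
    """All factors of numb, including the complementary ones."""
    facs = [i for i in range(1, numb // 2 + 1) if numb % i == 0]
    # Iterate over a copy: appending to facs can no longer extend the loop.
    for i in list(facs):
        facs.append(numb // i)
    return sorted(set(facs))

print(factors(12))
```

The set() call removes duplicates such as the square root of a perfect square, and sorted() returns a new list rather than None (unlike list.sort()).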
The problem is in this fragment:
for i in facs:
    facs.append((numb / i))
You have a self-incrementing sequence here.
Try analysing lines 7 and 8; the logic there does not do what you expect (it is an infinite loop).
for i in facs:
facs.append((numb / i))
otherwise test it.
def factors(numb):
    l = [1, 2, 3, 4]
    for i in l:
        print(i)
        l.append(numb / i)

factors(10)  # function call

python - assert somehow immediately detects infinite recursion

I wrote a Fibonacci function that recurses infinitely, and while Python couldn't detect that and threw errors upon hitting the max recursion limit, when I used try and assert to check whether fib(30) was equal to some value, it immediately told me it was not. How did it do this? It seems like it didn't even need to run fib(30).
note:
I realize this only works if I do
try:
    assert infiniteFib(30) == 832040
except:
    print "done immediately"
When I do just the assert, it produces many errors about too many recursions, but with the try it stops at the first error.
What I'm curious is how does python manage to produce an error so quickly about infinite recursion? Wouldn't it need to hit the limit (which takes a long time) to tell whether it was infinite?
EDIT:
Some requested code, but just to be clear, I DON'T want a solution to the errors (I know it's wrong because I deliberately excluded the base case), I want to know how python produces errors so quickly when it should take much longer (if you do fib(30), clearly it takes a while to hit the max recursion limit, but somehow python produces errors way before then):
def fib(n):
    return fib(n-1) + fib(n-2)

try: assert(fib(30) == 832040)
except: print "done immediately"
The reason the code you've shown runs quickly is because it catches the exception that is raised by fib when it hits the recursion limit and doesn't print the traceback. Running to the recursion limit doesn't take very long at all, but formatting and printing hundreds of lines of traceback does.
If you inspect the exception you get, you'll see it is the same RuntimeError you get when you run fib normally, not an AssertionError. Try this, to see what is going on better:
try:
    assert(fib(30) == 832040)
except Exception as e:
    print("Got an Exception: %r" % e)
It's not done immediately. Your code runs until Python reaches the maximum recursion depth, which is set to 1000 by default to avoid stack overflow errors.
So your code actually runs until it reaches a recursion depth of 1000 and errors out with RuntimeError: maximum recursion depth exceeded. You can verify this by modifying your code as below:
i = 0

def fib(n):
    global i
    i = i + 1
    print i
    return fib(n-1) + fib(n-2)

assert(fib(30) == 832040)
print i
print "done immediately"
On my machine, the last value of i printed before it errors out is 984.
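The timing claim is easy to check directly. A small sketch (Python 3, where the exception is RecursionError rather than the question's RuntimeError) that measures how long it takes to hit the limit:

```python
import sys
import time

def fib(n):
    return fib(n - 1) + fib(n - 2)

sys.setrecursionlimit(1000)
start = time.perf_counter()
try:
    fib(30)
except RecursionError:  # RuntimeError on Python 2, as in the question
    elapsed = time.perf_counter() - start
print("hit the recursion limit after %.4f seconds" % elapsed)
```

The first branch of the recursion descends one frame per call, so the limit is reached after roughly a thousand calls, in a small fraction of a second; printing the thousand-line traceback is what makes the uncaught case feel slow.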

Forcing a callback to be executed first

I've got the following codesnippet in a program using secure multiparty computation:
c = self.runtime.open(b)  # open result
c.addCallback(self.determine)
j = self.compute(i)
return j
Now the function determine sets a boolean to either False or True, depending on the value of c. This boolean is then used by the function compute.
I thought that callbacks are always executed first, before the rest of the program. However, I'm getting an error from compute that the boolean is undefined.
How can I force the callback to be executed before compute runs?
Because I'm working within a secure multiparty computation framework, I have to work with callbacks, since the value of c is a shared secret. However, I think the problem would also appear without secret shares. The language is Python.
The code for determine and compute would be something like this:
def determine(c):
    global computeB
    computeB = False
    if c == 1:
        computeB = True
    else:
        computeB = False
    return c

def compute(i):
    if computeB:
        ...  # do this
    else:
        ...  # do that
    return result
The callback gets executed when it gets executed. There is no point in trying to "make" it execute earlier.
I guess you are dealing with Twisted, so here's a tutorial: http://krondo.com/?page_id=1327. It is helpful even for understanding async programming in general, which you obviously need.
I'm not a pro in async, but I think you want to yield your first function and tell your routine to wait before it goes on.
yield self.runtime.open(b) # open result
j = self.compute(i)
return j
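Another common fix in callback frameworks is to chain compute onto the same deferred result, so it can only run after determine has fired. A toy model of that idea (this Deferred class is a minimal stand-in I wrote for illustration, not Twisted's real implementation):

```python
class Deferred:
    """Tiny stand-in for a Twisted-style Deferred (illustration only)."""
    def __init__(self):
        self.callbacks = []
        self.result = None
        self.fired = False

    def addCallback(self, fn):
        if self.fired:
            self.result = fn(self.result)
        else:
            self.callbacks.append(fn)
        return self

    def callback(self, value):
        # Fire the chain: each callback's return value feeds the next.
        self.fired = True
        self.result = value
        for fn in self.callbacks:
            self.result = fn(self.result)

def determine(c):
    return c == 1          # the boolean now travels down the chain

def compute(flag):
    return "path A" if flag else "path B"

d = Deferred()
d.addCallback(determine)
d.addCallback(compute)
d.callback(1)              # simulates the secret value becoming available
print(d.result)
```

Because compute is a link in the chain rather than a plain call after addCallback, the ordering problem from the question disappears; the caller then has to treat the returned value as deferred as well, instead of expecting j synchronously.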
