I am trying to write a program in Python to answer the following problem:
A perfect number is a number for which the sum of its proper divisors is exactly equal to the number. For example, the sum of the proper divisors of 28 would be 1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n, and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest number that can be written as the sum of two abundant numbers is 24.
By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis, even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers.
So here is my code which should theoretically work but which is way too slow.
import math
import time

l = 28123
abondant = []

def listNumbers():
    for i in range(1, l):
        s = 0
        for k in range(1, int(i / 2) + 1):
            if i % k == 0:
                s += k
        if s > i:
            abondant.append(i)

def check(nb):
    for a in abondant:
        for b in abondant:
            if a + b == nb:
                return False
    return True

def main():
    abondant_sum = 0
    for i in range(12, l):
        if check(i):
            abondant_sum += i
    return abondant_sum

start = time.time()
listNumbers()
print(main())
end = time.time()
print("the program took", end - start, "seconds")
How can I make my program more efficient?
Checking until half and summing up all passing numbers is very inefficient.
Try to change
for k in range(1, int(i / 2) + 1):
    if i % k == 0:
        s += k
to
for k in range(1, int(i**0.5) + 1):
    if i % k == 0:
        s += k
        if k != i // k:
            s += i // k

(Note that s now also includes i itself, via the divisor pair 1 and i, so either subtract i afterwards or change the abundance test to s > 2 * i.)
The thing is that you make a double iteration over "abondant" in the check function, which you call 28111 times.
It would be much more efficient to only compute a set of all a+b once and then check if your number is inside.
Something like:
def get_combinations():
    return set(a + b for a in abondant for b in abondant)
And then maybe for the main function:
def main():
    combinations = get_combinations()
    non_abondant = filter(lambda nb: nb not in combinations, range(12, l))
    return sum(non_abondant)
Once you have the list of abundant numbers you can make a list result = [False] * 28124 (one extra slot so the clamped index is valid) and then

for a in abondant:
    for b in abondant:
        result[min(a + b, 28123)] = True
Then
l = []
for i in range(len(result)):
    if not result[i]:
        l.append(i)
print(l)
I know how to check if the number can be represented as the sum of two squares with a brute-force approach.
def sumSquare(n):
    i = 1
    while i * i <= n:
        j = 1
        while j * j <= n:
            if i * i + j * j == n:
                print(i, "^2 +", j, "^2")
                return True
            j = j + 1
        i = i + 1
    return False
But how do I do it for n distinct positive integers? So the question would be:
Write a function which checks if the number can be written as a sum of 'n' distinct squares.
I have some examples. For example:
is_sum_of_squares(18, 2) would be false, because 18 can only be written as the sum of two squares as 3^2 + 3^2, and those are not distinct.
is_sum_of_squares(38, 3) would be true, because 5^2 + 3^2 + 2^2 = 38 and 5, 3 and 2 are all distinct.
I can't extend the if condition for more values. I think it could be done with recursion, but I have problems with it.
I found this function very useful since it finds the number of squares the number can be split into.
def findMinSquares(n):
    T = [0] * (n + 1)
    for i in range(n + 1):
        T[i] = i
        j = 1
        while j * j <= i:
            T[i] = min(T[i], 1 + T[i - j * j])
            j += 1
    return T[n]
But again I can't do it with recursion. Sadly I can't wrap my head around it. We started learning it a few weeks ago (I am in high school) and it is so different from the iterative approach.
Recursive approach:
def is_sum_of_squares(x, n, used=None):
    used = used or set()
    x_sqrt = int(x**0.5)
    if n == 1:
        # the last square root must also be distinct from the numbers already used
        if x_sqrt**2 == x and x_sqrt not in used:
            return used.union([x_sqrt])
        return None
    # the smallest of n distinct squares summing to x is at most (x/n)**0.5
    for i in range(max(used, default=0) + 1, int((x / n)**0.5) + 1):
        squares = is_sum_of_squares(x - i**2, n - 1, used.union([i]))
        if squares:
            return squares
    return None
Quite a compelling exercise. I have attempted solving it using recursion in a form of backtracking. Start with an empty list, run a for loop to add numbers to it from 1 to max feasible (square root of target number) and for each added number continue with recursion. Once the list reaches the required size n, validate the result. If the result is incorrect, backtrack by removing the last number.
Not sure if it is 100% correct though. In terms of speed, I tried it on the (1000,13) input and the process finished reasonably fast (3-4s).
def is_sum_of_squares(num, count):
    max_num = int(num ** 0.5)
    return backtrack([], num, max_num, count)

def backtrack(candidates, target, max_num, count):
    """
    candidates = list of ints of max length <count>
    target = sum of squares of <count> nonidentical numbers
    max_num = square root of target, rounded
    count = desired size of candidates list
    """
    result_num = sum(x * x for x in candidates)  # calculate sum of squares
    if result_num > target:  # if sum exceeded target number, stop recursion
        return False
    if len(candidates) == count:  # if candidates reach desired length, check if result is valid and return it
        result = result_num == target
        if result:  # print for result sense check, can be removed
            print("Found: ", candidates)
        return result
    for i in range(1, max_num + 1):  # cycle from 1 to max feasible number
        if candidates and i <= candidates[-1]:
            # for a non-empty list, skip numbers smaller than the last number;
            # allowing only ascending order eliminates duplicates
            continue
        candidates.append(i)  # add number to list
        if backtrack(candidates, target, max_num, count):  # next recursion
            return True
        candidates.pop()  # if combination was not valid then backtrack and remove the last number
    return False
assert(is_sum_of_squares(38, 3))
assert(is_sum_of_squares(30, 3))
assert(is_sum_of_squares(30, 4))
assert(is_sum_of_squares(36, 1))
assert not(is_sum_of_squares(35, 1))
assert not(is_sum_of_squares(18, 2))
assert not(is_sum_of_squares(1000, 13))
I am trying to complete Project Euler question #30, and I decided to verify my code against a known answer. Basically the question is this:
Find the sum of all the numbers that can be written as the sum of fifth powers of their digits.
Here is the known fourth-power answer from the problem statement that I am trying to reproduce with Python:
1634 = 1^4 + 6^4 + 3^4 + 4^4
8208 = 8^4 + 2^4 + 0^4 + 8^4
9474 = 9^4 + 4^4 + 7^4 + 4^4
As 1 = 1^4 is not a sum it is not included.
The sum of these numbers is 1634 + 8208 + 9474 = 19316.
When I run my code I get all three of the values which add up to 19316, great! However among these values there is an incorrect one: 6688
Here is my code:
i = 1
answer = []
while True:
    list = []
    i = i + 1
    digits = [int(x) for x in str(i)]
    for x in digits:
        a = x**4
        list.append(a)
        if sum(list) == i:
            print(sum(list))
            answer.append(sum(list))
The sum of list returns the three correct values, and the value 6688. Can anybody spot something I have missed?
You are checking the sum too early. You check for a matching sum for each individual digit in the number, and 6 ^ 4 + 6 ^ 4 + 8 ^ 4 is 6688. That's three of the digits, not all four.
Move your sum() test out of your for loop:
for x in digits:
    a = x**4
    list.append(a)
if sum(list) == i:
    print(sum(list))
    answer.append(sum(list))
At best you could discard a number early when the sum already exceeds the target:
digitsum = 0
for d in digits:
    digitsum += d ** 4
    if digitsum > i:
        break
else:
    if digitsum == i:
        answer.append(i)
but I'd not bother with that here, and just use a generator expression to combine determining the digits, raising them to the 4th power, and summing:
if sum(int(d) ** 4 for d in str(i)) == i:
    answer.append(i)
You haven't defined an upper bound: the point beyond which numbers are always bigger than the sum of the nth powers of their digits, so you need to stop incrementing i. For the sum of nth powers, you can find such a point by taking 9 ^ n, counting its digits, then taking the number of digits times 9 ^ n. If this creates a number with more digits, continue on until the number of digits no longer changes.
In the same vein, you can start i at max(10, 1 + 2 ** n), because the smallest sum you'll be able to make from digits will be using a single 2 digit plus the minimum number of 1 and 0 digits you can get away with, and at any power greater than 1, the power of digits other than 1 and 0 is always greater than the digit value itself, and you can't use i = 1:
def determine_bounds(n):
    """Given a power n > 1, return the lower and upper bounds in which to search"""
    nine_power, digit_count = 9 ** n, 1
    while True:
        upper = digit_count * nine_power
        new_count = len(str(upper))
        if new_count == digit_count:
            return max(10, 2 ** n), upper
        digit_count = new_count
If you combine the above function with range(*<expression>) variable-length parameter passing to range(), you can use a for loop:
for i in range(*determine_bounds(4)):
    # ...
You can put determining if a number is equal to the sum of its digits raised to a given power n in a function:
def is_digit_power_sum(i, n):
    return sum(int(d) ** n for d in str(i)) == i
then you can put everything into a list comprehension:
>>> n = 4
>>> [i for i in range(*determine_bounds(n)) if is_digit_power_sum(i, n)]
[1634, 8208, 9474]
>>> n = 5
>>> [i for i in range(*determine_bounds(n)) if is_digit_power_sum(i, n)]
[4150, 4151, 54748, 92727, 93084, 194979]
The is_digit_power_sum() could benefit from a cache of powers; adding a cache makes the function more than twice as fast for 4-digit inputs:
def is_digit_power_sum(i, n, _cache={}):
    try:
        powers = _cache[n]
    except KeyError:
        powers = _cache[n] = {str(d): d ** n for d in range(10)}
    return sum(powers[d] for d in str(i)) == i
and of course, the solution to the question is the sum of the numbers:
n = 5
answer = sum(i for i in range(*determine_bounds(n)) if is_digit_power_sum(i, n))
print(answer)
which produces the required output in under half a second on my 2.9 GHz Intel Core i7 MacBook Pro, using Python 3.8.0a3.
Here it is, fixed:
i = 1
answer = []
while True:
    list = []
    i = i + 1
    digits = [int(x) for x in str(i)]
    for x in digits:
        a = x**4
        list.append(a)
        if sum(list) == i and len(list) == 4:
            print(sum(list))
            answer.append(sum(list))
The bug I found:
6^4+6^4+8^4 = 6688
So I just added a check on the length of the list.
My code is very slow when it comes to very large numbers.
def divisors(num):
    divs = 1
    if num == 1:
        return 1
    for i in range(1, int(num / 2)):
        if num % i == 0:
            divs += 1
        elif int(num / 2) == i:
            break
        else:
            pass
    return divs
For 10^9 i get a run time of 381.63s.
Here is an approach that determines the multiplicities of the various distinct prime factors of n. Each such power, k, contributes a factor of k+1 to the total number of divisors.
import math

def smallest_divisor(p, n):
    # returns the smallest divisor of n which is greater than p
    for d in range(p + 1, 1 + math.ceil(math.sqrt(n))):
        if n % d == 0:
            return d
    return n

def divisors(n):
    divs = 1
    p = 1
    while p < n:
        p = smallest_divisor(p, n)
        k = 0
        while n % p == 0:
            k += 1
            n //= p
        divs *= (k + 1)
    return divs - 1
It returns the number of proper divisors (so not counting the number itself). If you want to count the number itself, don't subtract 1 from the result.
It works rapidly with numbers of the size 10**9, though will slow down dramatically with even larger numbers.
Division is expensive, multiplication is cheap.
Factorize the number into primes. (Download a list of primes, and keep dividing by primes <= sqrt(num).)
Then count all the combinations.
If your number is a power of exactly one prime, p^n, you obviously have n divisors for it, excluding 1; 8 = 2^3 has 3 divisors: 8, 4, 2.
In the general case, your number factors into k primes: p0^n0 * p1^n1 * ... * pk^nk. It has (n0 + 1) * (n1 + 1) * ... * (nk + 1) divisors. The "+1" term accounts for the case when all other powers are 0, that is, contribute a 1 to the multiplication.
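As a sketch of counting divisors this way (the trial-division loop and the name `count_divisors` are mine, not from the answer above):

```python
def count_divisors(n):
    # Factorize n by trial division, multiplying (exponent + 1) per prime.
    count = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            k = 0
            while n % p == 0:
                n //= p
                k += 1
            count *= k + 1
        p += 1
    if n > 1:  # one prime factor left over, with exponent 1
        count *= 2
    return count

print(count_divisors(8))   # 4 divisors: 1, 2, 4, 8
print(count_divisors(12))  # 6 divisors: 1, 2, 3, 4, 6, 12
```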
Alternatively, you could just google and RTFM.
Here is an improved version of my code in the question. The running time is better, 0.008s for 10^9 now.
from math import sqrt

def divisors(num):
    if num == 1:
        return [1]
    ceiling = int(sqrt(num))
    divs = []
    for i in range(1, ceiling + 1):
        if num % i == 0:
            divs.append(num // i)
            if i != num // i:
                divs.append(i)
    return divs
It is important for me to also keep the divisors, so if this can still be improved I'd be glad.
Consider this:
import math

def num_of_divisors(n):
    ct = 1
    rest = n
    for i in range(2, int(math.ceil(math.sqrt(n))) + 1):
        while rest % i == 0:
            ct += 1
            rest /= i
            print i  # the factors
        if rest == 1:
            break
    if rest != 1:
        print rest  # the last factor
        ct += 1
    return ct

def main():
    number = 2**31 * 3**13
    print '{} divisors in {}'.format(num_of_divisors(number), number)

if __name__ == '__main__':
    main()
We can stop searching for factors at the square root of n. Multiple factors are found in the while loop. And when a factor is found we divide it out from the number.
edit:
@Mark Ransom is right, the factor count was 1 too small for numbers where one factor was greater than the square root of the number, for instance 3*47*149*6991. The final check for rest != 1 accounts for that.
The number of factors is indeed correct then - you don't have to check beyond sqrt(n) for this.
Both statements where a number is printed can be used to append this number to the number of factors, if desired.
I was working on Project Euler question 23 with Python. For this question, I have to find the sum of all numbers < 28124 that cannot be written as the sum of two abundant numbers. Abundant numbers are numbers that are smaller than their own sum of proper divisors.
My approach was: https://gist.github.com/anonymous/373f23098aeb5fea3b12fdc45142e8f7
from math import sqrt

def dSum(n):  # find sum of proper divisors
    lst = set([])
    if n % 2 == 0:
        step = 1
    else:
        step = 2
    for i in range(1, int(sqrt(n)) + 1, step):
        if n % i == 0:
            lst.add(i)
            lst.add(int(n / i))
    llst = list(lst)
    lst.remove(n)
    sum = 0
    for j in lst:
        sum += j
    return sum
# any numbers greater than 28123 can be written as the sum of two abundant numbers.
# thus, only have to find abundant numbers up to 28124 / 2 = 14062
abnum = []  # list of abundant numbers
sum = 0
can = set([])

for i in range(1, 14062):
    if i < dSum(i):
        abnum.append(i)

for i in abnum:
    for j in abnum:
        can.add(i + j)

print(abnum)
print(can)

cannot = set(range(1, 28124))
cannot = cannot - can
cannot = list(cannot)
cannot.sort()

result = 0
print(cannot)
for i in cannot:
    result += i

print(result)
This gave me an answer of 31531501, which is wrong.
I googled the answer and answer should be 4179871.
There's about a 1 million difference between the answers, which should mean that I'm removing numbers that actually cannot be written as the sum of two abundant numbers. But when I re-read the code it looks fine logically...
Please save me from this despair.
Just for some experience you really should look at comprehensions and leveraging the builtins (vs. hiding them):
Your loops outside of dSum() (which can also be simplified) could look like:
import itertools as it
abnum = [i for i in range(1,28124) if i < dSum(i)]
can = {i+j for i, j in it.product(abnum, repeat=2)}
cannot = set(range(1,28124)) - can
print(sum(cannot)) # 4179871
There are a few ways to improve your code.
Firstly, here's a more compact version of dSum that's fairly close to your code. Operators are generally faster than function calls, so I use ** .5 instead of calling math.sqrt. I use a conditional expression instead of an if...else block to compute the step size. I use the built-in sum function instead of a for loop to add up the divisors; also, I use integer subtraction to remove n from the total because that's more efficient than calling the set.remove method.
def dSum(n):
    lst = set()
    for i in range(1, int(n ** .5) + 1, 2 if n % 2 else 1):
        if n % i == 0:
            lst.add(i)
            lst.add(n // i)
    return sum(lst) - n
However, we don't really need to use a set here. We can just add the divisor pairs as we find them, if we're careful not to add any divisor twice.
def dSum(n):
    total = 0
    for i in range(1, int(n ** .5) + 1, 2 if n % 2 else 1):
        if n % i == 0:
            j = n // i
            if i < j:
                total += i + j
            else:
                if i == j:
                    total += i
                break
    return total - n
This is slightly faster, and uses less RAM, at the expense of added code complexity. However, there's a more efficient approach to this problem.
Instead of finding the divisors (and hence the divisor sum) of each number individually, it's better to use a sieving approach that finds the divisors of all the numbers in the required range. Here's a simple example.
num = 28124

# Build a table of divisor sums.
table = [1] * num
for i in range(2, num):
    for j in range(2 * i, num, i):
        table[j] += i

# Collect abundant numbers
abnum = [i for i in range(2, num) if i < table[i]]
print(len(abnum), abnum[0], abnum[-1])
output
6965 12 28122
If we need to find divisor sums for a very large num a good approach is to find the prime power factors of each number, since there's an efficient way to compute the sum of the divisors from the prime power factorization. However, for numbers this small the minor time saving doesn't warrant the extra code complexity. (But I can add some prime power sieve code if you're curious; for finding divisor sums for all numbers < 28124, the prime power sieve technique is about twice as fast as the above code).
AChampion's answer shows a very compact way to find the sum of the numbers that cannot be written as the sum of two abundant numbers. However, it's a bit slow, mostly because it loops over all pairs of abundant numbers in abnum. Here's a faster way.
def main():
    num = 28124
    # Build a table of divisor sums. table[0] should be 0, but we ignore it.
    table = [1] * num
    for i in range(2, num):
        for j in range(2 * i, num, i):
            table[j] += i
    # Collect abundant numbers
    abnum = [i for i in range(2, num) if i < table[i]]
    del table
    # Create a set for fast searching
    abset = set(abnum)
    print(len(abset), abnum[0], abnum[-1])

    total = 0
    for i in range(1, num):
        # Search for pairs of abundant numbers j <= d: j + d == i
        for j in abnum:
            d = i - j
            if d < j:
                # No pairs were found
                total += i
                break
            if d in abset:
                break
    print(total)

if __name__ == "__main__":
    main()
output
6965 12 28122
4179871
This code runs in around 2.7 seconds on my old 32-bit single-core 2GHz machine running Python 3.6.0. On Python 2, it's about 10% faster; I think that's because list comprehensions have less overhead in Python 2 (they run in the current scope rather than creating a new scope).
I'm a beginner in Python trying to get better, and I stumbled across the following exercise:
Let n be an integer greater than 1 and s(n) the sum of the divisors of n. For example,
s(12) = 1 + 2 + 3 + 4 + 6 + 12 = 28
Also,
s(s(12)) = s(28) = 1 + 2 + 4 + 7 + 14 + 28 = 56
And
s(s(s(12))) = s(56) = 1 + 2 + 4 + 7 + 8 + 14 + 28 + 56 = 120
We use the notations:
s^1(n) = s(n)
s^2(n) = s(s(n))
s^3(n) = s(s(s(n)))
s^m(n) = s(s(...s(n)...)), m times
The integers n for which there exists a positive integer k such that
s^m(n) = k * n
are called (m, k)-perfect; for instance, 12 is (3, 10)-perfect since
s^3(12) = s(s(s(12))) = 120 = 10 * 12
Special categories:
For m = 1 we have multiperfect numbers.
A special case of the above exists for m = 1 and k = 2; these are called perfect numbers.
For m = 2 and k = 2 we have superperfect numbers.
Write a program which finds and prints all (m, k)-perfect numbers for
m <= MAXM, which are less or equal to (<=) MAXNUM. If an integer
belongs to one of the special categories mentioned above the program
should print a relevant message. Also, the program has to print how
many different (m, k)-perfect numbers were found, what percentage of
the tested numbers they were, in how many occurrences for the
different pairs of (m, k), and how many from each special category
were found (perfect numbers are counted as multiperfect as well).
Here's my code:
import time
start_time = time.time()

def s(n):
    tsum = 0
    i = 1
    con = n
    while i < con:
        if n % i == 0:
            temp = n / i
            tsum += i
            if temp != i:
                tsum += temp
            con = temp
        i += 1
    return tsum

#MAXM
#MAXNUM

i = 2
perc = 0
perc1 = 0
perf = 0
multperf = 0
supperf = 0

while i <= MAXNUM:
    pert = perc1
    num = i
    for m in xrange(1, MAXM + 1):
        tsum = s(num)
        if tsum % i == 0:
            perc1 += 1
            k = tsum / i
            mes = "%d is a (%d-%d)-perfect number" % (i, m, k)
            if m == 1:
                multperf += 1
                if k == 2:
                    perf += 1
                    print mes + ", that is a perfect number"
                else:
                    print mes + ", that is a multiperfect number"
            elif m == 2 and k == 2:
                supperf += 1
                print mes + ", that is a superperfect number"
            else:
                print mes
        num = tsum
    i += 1
    if pert != perc1: perc += 1

print "Found %d distinct (m-k)-perfect numbers (%.5f per cent of %d ) in %d occurrences" % (
    perc, float(perc) / MAXNUM * 100, MAXNUM, perc1)
print "Found %d perfect numbers" % perf
print "Found %d multiperfect numbers (including perfect numbers)" % multperf
print "Found %d superperfect numbers" % supperf
print time.time() - start_time, "seconds"
It works fine, but I would like suggestions on how to make it run faster.
For instance, is it faster to use

I = 1
while I <= MAXM:
    ...
    I += 1

instead of

for I in xrange(1, MAXM + 1):
    ...
Would it be better if instead of defining s(n) as a function I put the code into the main program? etc.
If you have anything to suggest for me to read on how to make a program run faster, I'd appreciate it.
And one more thing: originally the exercise required the program to be in C (which I don't know). Having written this in Python, how difficult would it be to translate it into C?
The biggest improvements come from using a better algorithm. Things like
Would it be better if instead of defining s(n) as a function I put the code into the main program?
or whether to use a while loop instead of for i in xrange(1, MAXM + 1): don't make much difference, so should not be considered before one has reached a state where algorithmic improvements are at least very hard to come by.
So let's take a look at your algorithm and how we can drastically improve it without caring about minuscule things like whether a while loop or a for iteration are faster.
def s(n):
    tsum = 0
    i = 1
    con = n
    while i < con:
        if n % i == 0:
            temp = n / i
            tsum += i
            if temp != i:
                tsum += temp
            con = temp
        i += 1
    return tsum
That already contains a good idea, you know that the divisors of n come in pairs and add both divisors once you found the smaller of the pair. You even correctly handle squares.
It works very well for numbers like 120: when you find the divisor 2, you set the stop condition to 60, when you find 3, to 40, ..., when you find 8, you set it to 15, when you find 10, you set it to 12, and then you have only the division by 11, and stop when i is incremented to 12. Not bad.
But it doesn't work so well when n is a prime, then con will never be set to a value smaller than n, and you need to iterate all the way to n before you found the divisor sum. It's also bad for numbers of the form n = 2*p with a prime p, then you loop to n/2, or n = 3*p (n/3, unless p = 2) etc.
By the prime number theorem, the number of primes not exceeding x is asymptotically x/log x (where log is the natural logarithm), and you have a lower bound of
Ω(MAXNUM² / log MAXNUM)
just for computing the divisor sums of the primes. That's not good.
Since you already consider the divisors of n in pairs d and n/d, note that the smaller of the two (ignoring the case d = n/d when n is a square for the moment) is smaller than the square root of n, so once the test divisor has reached the square root, you know that you have found and added all divisors, and you're done. Any further looping is futile wasted work.
So let us consider
def s(n):
    tsum = 0
    root = int(n**0.5)  # floor of the square root of n, at least for small enough n
    i = 1
    while i < root + 1:
        if n % i == 0:
            tsum += i + n/i
        i += 1
    # check whether n is a square, if it is, we have added root twice
    if root*root == n:
        tsum -= root
    return tsum
as a first improvement. Then you always loop to the square root, and computing s(n) for 1 <= n <= MAXNUM is Θ(MAXNUM^1.5). That's already quite an improvement. (Of course, you have to compute the iterated divisor sums, and s(n) can be larger than MAXNUM for some n <= MAXNUM, so you can't infer a complexity bound of O(MAXM * MAXNUM^1.5) for the total algorithm from that. But s(n) cannot be very much larger, so the complexity can't be much worse either.)
But we can still improve on that by using what twalberg suggested, using the prime factorisation of n to compute the divisor sum.
First, if n = p^k is a prime power, the divisors of n are 1, p, p², ..., p^k, and the divisor sum is easily computed (a closed formula for the geometric sum is
(p^(k+1) - 1) / (p - 1)
but whether one uses that or adds the k+1 powers of p dividing n is not important).
Next, if n = p^k * m with a prime p and an m such that p does not divide m, then
s(n) = s(p^k) * s(m)
An easy way to see that decomposition is to write each divisor d of n in the form d = p^a * g where p does not divide g. Then p^a must divide p^k, i.e. a <= k, and g must divide m. Conversely, for every 0 <= a <= k and every g dividing m, p^a * g is a divisor of n. So we can lay out the divisors of n (where 1 = g_1 < g_2 < ... < g_r = m are the divisors of m)
1*g_1 1*g_2 ... 1*g_r
p*g_1 p*g_2 ... p*g_r
: : :
p^k*g_1 p^k*g_2 ... p^k*g_r
and the sum of each row is p^a * s(m).
If we have a list of primes handy, we can then write
def s(n):
    tsum = 1
    for p in primes:
        d = 1
        # divide out all factors p of n
        while n % p == 0:
            n = n//p
            d = p*d + 1
        tsum *= d
        if p*p > n:  # n = 1, or n is prime
            break
    if n > 1:  # one last prime factor to account for
        tsum *= 1 + n
    return tsum
The trial division goes to the second largest prime factor of n [if n is composite] or the square root of the largest prime factor of n, whichever is larger. It has a worst-case bound for the largest divisor tried of n**0.5, which is reached for primes, but for most composites, the division stops much earlier.
If we don't have a list of primes handy, we can replace the line for p in primes: with for p in xrange(2, n): [the upper limit is not important, since it is never reached if it is larger than n**0.5] and get a not too much slower factorisation. (But it can easily be sped up a lot by avoiding even trial divisors larger than 2, that is using a list [2] + [3,5,7...] - best as a generator - for the divisors, even more by also skipping multiples of 3 (except 3), [2,3] + [5,7, 11,13, 17,19, ...] and if you want of a few further small primes.)
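A possible generator for that trial-divisor sequence (a sketch under my own naming, skipping multiples of 2 and 3 via a 6k ± 1 wheel):

```python
def trial_divisors():
    # Yield 2, 3, then numbers of the form 6k - 1 and 6k + 1,
    # which skips all multiples of 2 and 3 (but keeps some composites
    # like 25, which is harmless for trial division).
    yield 2
    yield 3
    d = 5
    while True:
        yield d      # 6k - 1
        yield d + 2  # 6k + 1
        d += 6

import itertools
print(list(itertools.islice(trial_divisors(), 10)))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 25]
```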
Now, that helped, but computing the divisor sums for all n <= MAXNUM still takes Ω(MAXNUM^1.5 / log MAXNUM) time (I haven't analysed, that could be also an upper bound, or the MAXNUM^1.5 could still be a lower bound, anyway, a logarithmic factor rarely makes much of a difference [beyond a constant factor]).
And you compute a lot of divisor sums more than once (in your example, you compute s(56) when investigating 12, again when investigating 28, again when investigating 56). To alleviate the impact of that, memoizing s(n) would be a good idea. Then you need to compute each s(n) only once.
And now we have already traded space for time, so we can use a better algorithm to compute the divisor sums for all 1 <= n <= MAXNUM in one go, with a better time complexity (and also smaller constant factors). Instead of trying out each small enough (prime) number whether it divides n, we can directly mark only multiples, thus avoiding all divisions that leave a remainder - which is the vast majority.
The easy method to do that is
def divisorSums(n):
    dsums = [0] + [1]*n
    for k in xrange(2, n+1):
        for m in xrange(k, n+1, k):
            dsums[m] += k
    return dsums
with an O(n * log n) time complexity. You can do it a bit better (O(n * log log n) complexity) by using the prime factorisation, but that method is somewhat more complicated, I'm not adding it now, maybe later.
Then you can use the list of all divisor sums to look up s(n) if n <= MAXNUM, and the above implementation of s(n) to compute the divisor sum for values larger than MAXNUM [or you may want to memoize the values up to a larger limit].
dsums = divisorSums(MAXNUM)

def memo_s(n):
    if n <= MAXNUM:
        return dsums[n]
    return s(n)
That's not too shabby,
Found 414 distinct (m-k)-perfect numbers (0.10350 per cent of 400000 ) in 496 occurrences
Found 4 perfect numbers
Found 8 multiperfect numbers (including perfect numbers)
Found 7 superperfect numbers
12.709428072 seconds
for
import time
start_time = time.time()

def s(n):
    tsum = 1
    for p in xrange(2, n):
        d = 1
        # divide out all factors p of n
        while n % p == 0:
            n = n//p
            d = p*d + 1
        tsum *= d
        if p*p > n:  # n = 1, or n is prime
            break
    if n > 1:  # one last prime factor to account for
        tsum *= 1 + n
    return tsum

def divisorSums(n):
    dsums = [0] + [1]*n
    for k in xrange(2, n+1):
        for m in xrange(k, n+1, k):
            dsums[m] += k
    return dsums

MAXM = 6
MAXNUM = 400000

dsums = divisorSums(MAXNUM)

def memo_s(n):
    if n <= MAXNUM:
        return dsums[n]
    return s(n)

i = 2
perc = 0
perc1 = 0
perf = 0
multperf = 0
supperf = 0

while i <= MAXNUM:
    pert = perc1
    num = i
    for m in xrange(1, MAXM + 1):
        tsum = memo_s(num)
        if tsum % i == 0:
            perc1 += 1
            k = tsum / i
            mes = "%d is a (%d-%d)-perfect number" % (i, m, k)
            if m == 1:
                multperf += 1
                if k == 2:
                    perf += 1
                    print mes + ", that is a perfect number"
                else:
                    print mes + ", that is a multiperfect number"
            elif m == 2 and k == 2:
                supperf += 1
                print mes + ", that is a superperfect number"
            else:
                print mes
        num = tsum
    i += 1
    if pert != perc1: perc += 1

print "Found %d distinct (m-k)-perfect numbers (%.5f per cent of %d ) in %d occurrences" % (
    perc, float(perc) / MAXNUM * 100, MAXNUM, perc1)
print "Found %d perfect numbers" % perf
print "Found %d multiperfect numbers (including perfect numbers)" % multperf
print "Found %d superperfect numbers" % supperf
print time.time() - start_time, "seconds"
By memoizing also the needed divisor sums for n > MAXNUM:
d = {}
for i in xrange(1, MAXNUM+1):
    d[i] = dsums[i]

def memo_s(n):
    if n in d:
        return d[n]
    else:
        t = s(n)
        d[n] = t
        return t
the time drops to
3.33684396744 seconds
from functools import lru_cache

...

@lru_cache
def s(n):
    ...
should make it significantly faster.
[update] Oh, sorry, that was added in 3.2 according to the docs (and before Python 3.8 the decorator has to be written with parentheses, @lru_cache()). But any cache will do; see Is there a decorator to simply cache function return values?
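For example, a minimal self-contained sketch (the slow divisor-sum body is mine, just to show the decorator; it is not the code from the question):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # parenthesized form works on Python 3.2+
def s(n):
    # Sum of divisors of n (including n itself), the slow O(n) way;
    # the cache makes repeated calls like s(s(s(12))) cheap.
    return sum(d for d in range(1, n + 1) if n % d == 0)

print(s(12))        # 28
print(s(s(12)))     # s(28) = 56
print(s(s(s(12))))  # s(56) = 120
```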