I'm trying to solve Project Euler problem 240:
In how many ways can twenty 12-sided dice (sides numbered 1 to 12) be rolled so that the top ten sum to 70?
I've come up with code to solve this, but it takes a really long time to compute. I know this approach is pretty bad. Can someone suggest how I can fix this code to perform better?
import itertools

def check(a, b):
    # check that all the elements in list a are less than or equal to value b
    chk = 1
    for x in a:
        if x > b:
            chk = 0
            break
    return chk

count = 0
for x in itertools.product(range(1, 13), repeat=20):
    a = sorted(x)
    if sum(a[-10:]) == 70 and check(a[:10], min(a[-10:])):
        count += 1
The code below is for the smaller example given in the description of the problem (five 6-sided dice, top three summing to 15). It works perfectly and gives the exact solution.
import itertools

def check(a, b):
    chk = 1
    for x in a:
        if x > b:
            chk = 0
            break
    return chk

count = 0
for x in itertools.product(range(1, 7), repeat=5):
    a = sorted(x)
    if sum(a[-3:]) == 15 and check(a[:2], min(a[-3:])):
        count += 1
It's no good iterating over all possibilities, because there are 12^20 = 3833759992447475122176 ways to roll 20 twelve-sided dice, and at, say, a million rolls per second, that would take millions of years to complete.
The way to solve this kind of problem is to use dynamic programming. Find some way to split up your problem into the sum of several smaller problems, and build up a table of the answers to these sub-problems until you can compute the result you need.
For example, let T(n, d, k, t) be the number of ways to roll n d-sided dice so that the top k of them sum to t. How can we split this up into sub-problems? Well, we could consider the number of dice, i, that roll d exactly. There are C(n, i) ways to choose these i dice, and T(n − i, d − 1, ...) ways to choose the n − i remaining dice, which must roll at most d − 1. (For some suitable choice of parameters for k and t which I've elided.)
Take the product of these, and sum it up for all suitable values of i and you're done. (Well, not quite done: you have to specify the base cases, but that should be easy.)
The number of sub-problems that you need to compute will be at most (n + 1)(d + 1)(k + 1)(t + 1), which in the Project Euler case (n = 20, d = 12, k = 10, t = 70) is at most 21 × 13 × 11 × 71 = 213213. (In practice, it's much less than this, because many branches of the tree reach base cases quickly: in my implementation it turns out that the answers to just 791 sub-problems are sufficient to compute the answer.)
To write a dynamic program, it's usually easiest to express it recursively and use memoization to avoid re-computing the answer to sub-problems. In Python you could use the @functools.lru_cache decorator.
So the skeleton of your program could look like this. I've replaced the crucial details by ??? so as not to deprive you of the pleasure of working it out for yourself. Work with small examples (e.g. "two 6-sided dice, the top 1 of which sums to 6", whose answer is 11: any pair whose larger roll is a 6) to check that your logic is correct, before trying bigger cases.
def combinations(n, k):
    """Return C(n, k), the number of combinations of k out of n."""
    c = 1
    k = min(k, n - k)
    for i in range(1, k + 1):
        c *= (n - k + i)
        c //= i
    return c
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n, d, k, t):
    """Return the number of ways n distinguishable d-sided dice can be
    rolled so that the top k dice sum to t.
    """
    # Base cases
    if ???: return 1
    if ???: return 0
    # Divide and conquer. Let N be the maximum number of dice that
    # can roll exactly d.
    N = ???
    return sum(combinations(n, i)
               * T(n - i, d - 1, ???)
               for i in range(N + 1))
With appropriate choices for all the ???, this answers the Project Euler problem in a few milliseconds:
>>> from timeit import timeit
>>> timeit(lambda:T(20, 12, 10, 70), number=1)
0.008017531014047563
>>> T.cache_info()
CacheInfo(hits=1844, misses=791, maxsize=None, currsize=791)
This solution should work; I'm not sure how long it will take on your system.
from itertools import product

lg = (p for p in product(xrange(1, 13, 1), repeat=10) if sum(p) == 70)
results = {}
for l in lg:
    results[l] = [p for p in product(xrange(1, min(l), 1), repeat=10)]
What it does is create the "top ten" first, then add to each "top ten" a list of the possible "next ten" items, where the max value is capped at the minimum item in the "top ten".
results is a dict where the key is the "top ten" and the value is a list of the possible "next ten".
The solution (the number of combinations that fit the requirements) would be to count the number of lists across the whole results dict, like this:
count = 0
for k, v in results.items():
    count += len(v)
and then count will be the result.
update
Okay, I have thought of a slightly better way of doing this.
from itertools import product
import math

def calc_ways(dice, sides, top, total):
    top_dice = (p for p in product(xrange(1, sides+1, 1), repeat=top) if sum(p) == total)
    n_count = dict((n, math.pow(n, dice-top)) for n in xrange(1, sides+1, 1))
    count = 0
    for l in top_dice:
        count += n_count[min(l)]
    return count
Since I'm only counting the length of the "next ten", I figured I would just pre-calculate the number of options for each 'lowest' number in the "top ten", so I created a dictionary that does exactly that. The above code will run much more smoothly, as it is comprised only of a small dictionary, a counter, and a generator. As you can imagine, it will probably still take a lot of time... but I ran it for the first 1 million results in under 1 minute, so I'm sure it's within the feasible range.
Good luck :)
update 2
After another comment by you, I understood what I was doing wrong and tried to correct it.
from itertools import product, combinations_with_replacement, permutations
import math

def calc_ways(dice, sides, top, total):
    top_dice = (p for p in product(xrange(1, sides+1, 1), repeat=top) if sum(p) == total)
    n_dice = dice - top
    n_sets = len(set([p for p in permutations(range(n_dice) + ['x']*top)]))
    n_count = dict((n, n_sets * len([p for p in combinations_with_replacement(range(1, n+1, 1), n_dice)])) for n in xrange(1, sides+1, 1))
    count = 0
    for l in top_dice:
        count += n_count[min(l)]
    return count
As you can imagine, it is quite a disaster, and it does not even give the right answer. I think I am going to leave this one to the mathematicians, since my way of solving it would simply be:
def calc_ways1(dice, sides, top, total):
    return len([p for p in product(xrange(1, sides+1, 1), repeat=dice) if sum(sorted(p)[-top:]) == total])
which is an elegant one-line solution that provides the right answer for calc_ways1(5,6,3,15), but takes forever for the calc_ways1(20,12,10,70) problem.
Anyway, math sure seems like the way to go on this, not my silly ideas.
Related
I'm trying to write the fastest algorithm possible to return the number of "magic triples" (i.e. x, y, z where z is a multiple of y and y is a multiple of x) in a list of 3-2000 integers.
(Note: I believe the list was expected to be sorted and unique but one of the test examples given was [1,1,1] with the expected result of 1 - that is a mistake in the challenge itself though because the definition of a magic triple was explicitly noted as x < y < z, which [1,1,1] isn't. In any case, I was trying to optimise an algorithm for sorted lists of unique integers.)
I haven't been able to work out a solution that doesn't involve three nested loops and is therefore O(n^3). I've seen one online that is O(n^2), but I can't get my head around what it's doing, so it doesn't feel right to submit it.
My code is:
def solution(l):
    if len(l) < 3:
        return 0
    elif l == [1, 1, 1]:
        return 1
    else:
        halfway = int(l[-1] / 2)
        quarterway = int(halfway / 2)
        quarterIndex = 0
        halfIndex = 0
        for i in range(len(l)):
            if l[i] >= quarterway:
                quarterIndex = i
                break
        for i in range(len(l)):
            if l[i] >= halfway:
                halfIndex = i
                break
        triples = 0
        for i in l[:quarterIndex + 1]:
            for j in l[:halfIndex + 1]:
                if j != i and j % i == 0:
                    multiple = 2
                    while (j * multiple) <= l[-1]:
                        if j * multiple in l:
                            triples += 1
                        multiple += 1
        return triples
I've spent quite a lot of time going through examples manually and removing loops through unnecessary sections of the lists, but this still completes a list of 2,000 integers in about a second, whereas the O(n^2) solution I found completes the same list in 0.6 seconds - it seems like such a small difference, but it means mine takes about 60% longer.
Am I missing a really obvious way of removing one of the loops?
Also, I saw mention of making a directed graph and I see the promise in that. I can make the list of first nodes from the original list with a built-in function, so in principle I presume that means I can make the overall graph with two for loops and then return the length of the third node list, but I hit a wall with that too. I just can't seem to make progress without that third loop!!
from array import array

def num_triples(l):
    n = len(l)
    lower_counts = array("I", (0 for _ in range(n)))
    upper_counts = lower_counts[:]
    for i in range(n - 1):
        lower = l[i]
        for j in range(i + 1, n):
            upper = l[j]
            if upper % lower == 0:
                lower_counts[i] += 1
                upper_counts[j] += 1
    return sum(nx * nz for nz, nx in zip(lower_counts, upper_counts))
Here, lower_counts[i] is the number of pairs of which the ith number is the y, and z is the other number in the pair (i.e. the number of different z values for this y).
Similarly, upper_counts[i] is the number of pairs of which the ith number is the y, and x is the other number in the pair (i.e. the number of different x values for this y).
So the number of triples in which the ith number is the y value is just the product of those two numbers.
The use of an array here for storing the counts is for scalability of access time. Tests show that up to n=2000 it makes negligible difference in practice, and even up to n=20000 it only made about a 1% difference to the run time (compared to using a list), but it could in principle be the fastest growing term for very large n.
How about using itertools.combinations instead of nested for loops? Combined with list comprehension, it's cleaner and much faster. Let's say l = [your list of integers] and let's assume it's already sorted.
from itertools import combinations

def div(i, j, k):  # this function has the logic
    return l[k] % l[j] == l[j] % l[i] == 0

# combinations already yields index triples with i < j < k
r = sum([div(i, j, k) for i, j, k in combinations(range(len(l)), 3)])
@alaniwi provided a very smart iterative solution.
Here is a recursive solution.
def find_magicals(lst, nplet):
    """Find the number of magical n-plets in a given lst"""
    res = 0
    for i, base in enumerate(lst):
        # find all the multiples of current base
        multiples = [num for num in lst[i + 1:] if not num % base]
        res += len(multiples) if nplet <= 2 else find_magicals(multiples, nplet - 1)
    return res

def solution(lst):
    return find_magicals(lst, 3)
The problem can be divided into selecting each number in the original list as the base (i.e., x) and counting how many duplets we can find among the numbers bigger than the base. Since the method for finding all duplets is the same as the one for finding triplets, we can solve the problem recursively.
From my testing, this recursive solution is comparable to, if not more performant than, the iterative solution.
This answer was the first suggestion by @alaniwi and is the one I've found to be the fastest (at 0.59 seconds for a 2,000-integer list).
def solution(l):
    n = len(l)
    lower_counts = dict((val, 0) for val in l)
    upper_counts = lower_counts.copy()
    for i in range(n - 1):
        lower = l[i]
        for j in range(i + 1, n):
            upper = l[j]
            if upper % lower == 0:
                lower_counts[lower] += 1
                upper_counts[upper] += 1
    return sum((lower_counts[y] * upper_counts[y] for y in l))
I think I've managed to get my head around it. What it is essentially doing is comparing each number in the list with every other number to see whether the larger is divisible by the smaller, and building two dictionaries:
One with, for each number, the count of larger numbers in the list that it divides (lower_counts),
One with the count of smaller numbers in the list that divide it (upper_counts).
You then multiply the two values for each key, because a key with a 0 in either dictionary cannot be the middle number (the y) of a triple.
Example:
l = [1,2,3,4,5,6]
lower_counts = {1:5, 2:2, 3:1, 4:0, 5:0, 6:0}
upper_counts = {1:0, 2:1, 3:1, 4:2, 5:1, 6:3}
triple_tuple = ([1,2,4], [1,2,6], [1,3,6])
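As a quick sanity check (my own snippet, assuming the solution function above is in scope), the example list does produce those three triples:

l = [1, 2, 3, 4, 5, 6]
print(solution(l))  # 3, matching (1,2,4), (1,2,6), (1,3,6)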
I am working on a dice probability program and have been running into some efficiency issues in the permutation section when the numbers get big. For example, the parameters I am required to run are 10 dice, with 10 sides, with an outcome of 50.
I require a total number of permutations to calculate the probability of the specified outcome given the number of dice and number of sides. The final_count(total, dice, faces) function lets the least number of combinations pass from the generator before moving into the perms(x) function.
The following code works, but for the previously mentioned parameters it takes an extremely long time.
The perms(x) function was posted by @Ashish Datta in this thread:
permutations with unique values
Which is where I believe I need help.
import itertools as it

total = 50
dice = 10
faces = 10

# ------------- functions ---------------------

# Checks for lists of ALL the same items
def same(lst):
    return lst[1:] == lst[:-1]

# Generates the number of original permutations (10 digits takes 1.65s)
def perms(x):
    uniq_set = set()
    for out in it.permutations(x, len(x)):
        if out not in uniq_set:
            uniq_set.update([out])
    return len(uniq_set)

# Finds total original dice rolls. "combinations" = (10d, 10f, 50t, takes 0.42s)
def final_count(total, dice, faces):
    combinations = it.combinations_with_replacement(range(1, faces+1), dice)
    count = 0
    for i in combinations:
        if sum(i) == total and same(i) == True:
            count += 1
        elif sum(i) == total and same(i) != True:
            count += perms(i)
        else:
            pass
    return count

# -------------- functions -------------------

answer = final_count(total, dice, faces) / float(faces**dice)
print(round(answer, 4))
I have read the thread How to improve permutation algorithm efficiency with python. I believe my question is different, though a smarter algorithm is my end goal.
I originally posted my first draft of this program in CodeReview. https://codereview.stackexchange.com/questions/212930/calculate-probability-of-dice-total. I realize I am walking a fine line between a question and a code review, but I think in this case, I am more on the question side of things :)
You can use a function that deducts the current die's roll from the total for the recursive calls, and short-circuits the search if the total is less than 1 or greater than the number of dice times the number of faces. Use a cache to avoid redundant calculations of the same parameters:
from functools import lru_cache

@lru_cache(maxsize=None)
def final_count(total, dice, faces):
    if total < 1 or total > dice * faces:
        return 0
    if dice == 1:
        return 1
    return sum(final_count(total - n, dice - 1, faces) for n in range(1, faces + 1))
so that:
final_count(50, 10, 10)
returns within a second: 374894389
I had a similar solution to blhsing's, but he beat me to it, and to be honest, I didn't think of using lru_cache (nice! +1 for that). I'm posting it anyhow, if only to illustrate how storing previously computed counts cuts down on the recursion.
def permutationsTo(target, dices, faces, computed=dict()):
    if target > dices * faces or target < 1: return 0
    if dices == 1: return 1
    if (target, dices) in computed: return computed[(target, dices)]
    result = 0
    for face in range(1, min(target, faces + 1)):
        result += permutationsTo(target - face, dices - 1, faces, computed)
    computed[(target, dices)] = result
    return result
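For the same inputs as in the answer above, this memoized version should agree:

print(permutationsTo(50, 10, 10))  # 374894389, matching final_count(50, 10, 10)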
One way to greatly reduce the time is to mathematically count how many combinations there are for each unique group of numbers in combinations, and increment count by that amount. If you have a list of n objects where x1 of them are all alike, x2 of them are all alike, etc., then the total number of ways to arrange them is n!/(x1! x2! x3! ...). For example, the number of different ways to arrange the letters of "Tennessee" is 9!/(1! 4! 2! 2!) = 3780. So you can make a separate function for this:
import math
import itertools as it

# Count the number of ways to arrange a list of items where
# some of the items may be identical.
def indiv_combos(thelist):
    prod = math.factorial(len(thelist))
    for i in set(thelist):
        icount = thelist.count(i)
        prod //= math.factorial(icount)  # integer division: the product divides exactly
    return prod

def final_count2(total, dice, faces):
    combinations = it.combinations_with_replacement(range(1, faces + 1), dice)
    count = 0
    for i in combinations:
        if sum(i) == total:
            count += indiv_combos(i)
    return count
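As a quick check (my own usage line, using the recursive final_count from the earlier answer as a reference), both approaches count the same thing:

print(final_count2(50, 10, 10))  # 374894389, the same value the recursive solution returns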
I don't know off-hand if there's already some built-in function that does the job of what I wrote as indiv_combos, but you could also use Counter to do the counting and mul to take the product of a list:
import math
from operator import mul
from collections import Counter
from functools import reduce  # reduce is a builtin in Python 2 but must be imported in Python 3

def indiv_combos(thelist):
    return math.factorial(len(thelist)) // reduce(mul, [math.factorial(i) for i in Counter(thelist).values()], 1)
I get mixed results on the times when I try both methods with (25, 10, 10) as the input, but both give me the answer in less than 0.038 seconds every time.
I'm working on solving the problem below from Project Euler, which in short deals with iterating over n dice and updating their values.
A Long Row of Dice - project Euler problem #641
Consider a row of n dice all showing 1.
First turn every second die, (2,4,6,…), so that the number showing is increased by 1. Then turn every third die. The sixth die will now show a 3. Then turn every fourth die and so on until every nth die (only the last die) is turned. If the die to be turned is showing a 6 then it is changed to show a 1.
Let f(n) be the number of dice that are showing a 1 when the process finishes. You are given f(100)=2 and f(10^8)=69.
Find f(10^36).
I've written the code below in Python using numpy, but can't quite figure out what I'm doing wrong, as my function's output doesn't match the values above. Right now f(100) returns 1 (it should be 2); even f(1000) returns 1.
import numpy as np

def f(n):
    # establish dice and the value sets for the dice
    dice = np.arange(1, n + 1)
    dice_values = np.ones(len(dice))
    turns = range(2, len(dice) + 1)
    print("{a} dice, {b} values, {c} runs to process".format(a=len(dice), b=len(dice_values), c=len(turns)))
    # iterate and update the values of each die
    # in our array of dice
    for turn in turns:
        # if die to be processed is 6, update to 1
        dice_values[(dice_values == 6) & (dice % turn == 0)] = 1
        # update dice_values if the die's index has no remainder
        # from the turn we're processing.
        dice_values += dice % turn == 0
        # output status
        print('Processed every {0} dice'.format(turn))
        print('{0}\n\n'.format(dice_values))
    return "f({n}) = {x}".format(n=n, x=len(np.where(dice_values == 1)))
UPDATE 11/12/18
@Prune's guidance has been extremely helpful. My methodology is now as follows:
Find all the squares from 1 to n.
Find all squares whose number of factors leaves a remainder of 1 when divided by 6.
import numpy as np

# brute force to find number of factors for each n
def factors(n):
    result = []
    i = 1
    # This will loop from 1 to int(sqrt(n))
    while i * i <= n:
        # Check if i divides n without leaving a remainder
        if n % i == 0:
            result.append(i)
            if n // i != i:
                result.append(n // i)
        i += 1
    # Return the number of factors of n
    return len(result)
vect_factors = np.vectorize(factors)
# avoid brute forcing all numbers
def f(n):
    # create an array of 1 to n + 1 and
    # find all perfect squares in that range
    dice = np.arange(1, n + 1)[(np.mod(np.sqrt(np.arange(1, n + 1)), 1) == 0)]
    # find all squares whose number of factors,
    # when divided by 6, leaves a remainder of 1.
    dice = dice[np.mod(vect_factors(dice), 6) == 1]
    return len(dice)
Worth noting - on my machine, I'm unable to run larger than 10^10. While solving this would be ideal, I feel that what I've learned (and determined how to apply) in the process is enough for me.
UPDATE 11/13/2018
I'm continuing to spend a small bit of time trying to optimize this to get it processing more quickly. Here's the updated code base. This evaluates f(10**10) in 1 min and 17 seconds.
import time
from datetime import timedelta
import numpy as np
import math
from itertools import chain, cycle, accumulate

def find_squares(n):
    # squares of 1 .. int(sqrt(n))
    return np.array([i ** 2 for i in np.arange(1, int(math.sqrt(n)) + 1)])

# brute force to find number of factors for each n
def find_factors(n):
    def prime_powers(n):
        # c goes through 2, 3, 5, then the infinite (6n+1, 6n+5) series
        for c in accumulate(chain([2, 1, 2], cycle([2, 4]))):
            if c * c > n:
                break
            if n % c:
                continue
            d, p = (), c
            while not n % c:
                n, p, d = n // c, p * c, d + (p,)
            yield (d)
        if n > 1:
            yield ((n,))

    r = [1]
    for e in prime_powers(n):
        r += [a * b for a in r for b in e]
    return len(r)

vect_factors = np.vectorize(find_factors)

# avoid brute forcing all numbers
def f(n):
    # create an array of 1 to n + 1 and
    # find all perfect squares in that range
    start = time.time()
    dice = find_squares(n)
    # find all squares whose number of factors,
    # when divided by 6, leaves a remainder of 1.
    dice = dice[np.mod(vect_factors(dice), 6) == 1]
    diff = (timedelta(seconds=int(time.time() - start))).__str__()
    print("{n} has {remain} dice with a value of 1. Computed in {diff}.".format(n=n, remain=len(dice), diff=diff))
I'm raising an x/y issue. Fixing your 6 => 1 flip will correct your code, but it will not solve the presented problem in reasonable time. To find f(10^36), you're processing 10^36 dice 10^36 times each, even if it's merely a divisibility check in the filter. That's a total of 10^72 checks. I don't know what hardware you have, but even my multi-core monster doesn't loop 10^72 times soon enough for comfort.
Instead, you need to figure out the underlying problem and try to generate a count for integers that fit the description.
The dice are merely a device to count something in mod 6. We're counting divisors of a number, including 1 and the number itself. This is the (in)famous divisor function.
The problem at hand doesn't ask us to find σ0(n) for all numbers; it wants us to count how many integers have σ0(n) ≡ 1 (mod 6). These are numbers with 1, 7, 13, 19, ... divisors.
First of all, note that this is an odd number. The only integers with an odd number of divisors are perfect squares. Look at the divisor function; how can we tell whether the square of a number will have the desired quantity of factors, 1 (mod 6)?
Does that get you moving?
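To make the hint concrete, here is a small brute-force check (my own sketch, not part of the hint itself) that an odd divisor count occurs exactly at the perfect squares:

import math

def divisor_count(n):
    # count divisors in pairs (i, n // i); only i up to sqrt(n) needs checking
    count = 0
    for i in range(1, math.isqrt(n) + 1):
        if n % i == 0:
            count += 1 if i * i == n else 2
    return count

for n in range(1, 1000):
    is_square = math.isqrt(n) ** 2 == n
    assert (divisor_count(n) % 2 == 1) == is_square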
WEEKEND UPDATE
My code to step through 10^18 candidates is still too slow to finish in this calendar year. It did well up to about 10^7 and then bogged down in the O(N log N) checking steps.
However, there are many more restrictions I've noted in my tracing output.
The main one is in characterizing what combinations of prime powers result in a solution. If we reduce each power mod 3, we have the following:
0 values do not affect validity of the result.
1 values make the number invalid.
2 values must be paired.
Also, these conditions are both necessary and sufficient to declare a given number as a solution. Therefore, it's possible to generate the desired solutions without bothering to step through the squares of all integers <= 10^18.
Among other things, we will need only primes up to 10^9: a solution's square root will need at least 2 of any prime factor.
I hope that's enough hints for now ... you'll need to construct an algorithm to generate certain restricted composite combinations with a given upper limit for the product.
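To illustrate the mod-3 rule (my own sketch, with a hypothetical prime_exponents helper, reusing divisor_count from the sketch above), it can be checked against a direct divisor count for small square roots m:

def prime_exponents(m):
    # exponents in the prime factorization of m, found by trial division
    exps = []
    d = 2
    while d * d <= m:
        e = 0
        while m % d == 0:
            m //= d
            e += 1
        if e:
            exps.append(e)
        d += 1
    if m > 1:
        exps.append(1)
    return exps

def is_solution(m):
    # m*m has sigma_0 = product of (2a + 1); this is 1 (mod 6) iff no exponent
    # of m is 1 (mod 3) and the exponents that are 2 (mod 3) come in pairs
    residues = [a % 3 for a in prime_exponents(m)]
    return 1 not in residues and residues.count(2) % 2 == 0

for m in range(1, 300):
    assert is_solution(m) == (divisor_count(m * m) % 6 == 1)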
As mentioned by Thierry in the comments, you are looping back to 2 when you flip dice at a 6. I'd suggest just changing dice_values[(dice_values == 6) & (dice % turn == 0)] = 1 to equal 0.
You also have an issue with return "f({n}) = {x}".format(n=n, x=len(np.where(dice_values == 1))) that I'd fix by replacing x=len(np.where(dice_values == 1)) with x=np.count_nonzero(dice_values == 1)
Making both of these changes gave me an output of f(100) = 2.
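Put together, the changed lines would look like this (a sketch of just the two fixes; the rest of f(n) stays as in the question):

        # inside the loop: a 6 that is about to be turned becomes 0, so the
        # increment below takes it to 1 (setting it to 1 would end up at 2)
        dice_values[(dice_values == 6) & (dice % turn == 0)] = 0
        dice_values += dice % turn == 0

    # and in the return statement, count the matching dice rather than
    # taking len() of the tuple that np.where returns
    return "f({n}) = {x}".format(n=n, x=np.count_nonzero(dice_values == 1))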
I am trying to create a loop in Python with numpy that will give me a variable "times" with 5 numbers generated randomly between 0 and 20. However, I want there to be one condition: that none of the differences between two adjacent elements in that list are less than 1. What is the best way to achieve this? I tried with the last two lines of code, but this is most likely wrong.
import numpy as np

for j in range(1, 6):
    times = np.random.rand(1, 5) * 20
    times.sort()
    print times
    da = np.diff(times)
    if da.sum < 1: break
For instance, for one iteration, this would not be good:
4.25230915 4.36463992 10.35915732 12.39446368 18.46893283
But something like this would be perfect:
1.47166904 6.85610453 10.81431629 12.10176092 15.53569052
Since you are using numpy, you might as well use the built-in functions for uniform random numbers.
import numpy as np

def uniform_min_range(a, b, n, min_dist):
    # keep drawing until a draw satisfies the minimum-distance condition
    while True:
        x = np.random.uniform(a, b, size=n)
        x = np.sort(x)  # np.sort returns a sorted copy rather than sorting in place
        if np.all(np.diff(x) >= min_dist):
            return x
It uses the same trial-and-error approach as the previous answer, so depending on the parameters the time to find a solution can be large.
Use a hit and miss approach to guarantee uniform distribution. Here is a straight-Python implementation which should be tweakable for numpy:
import random

def randSpacedPoints(n, a, b, minDist):
    # draws n random numbers in [a,b]
    # with the property that their distance apart is >= minDist
    # uses a hit-miss approach
    while True:
        nums = [a + (b - a) * random.random() for i in range(n)]
        nums.sort()
        if all(nums[i] + minDist < nums[i + 1] for i in range(n - 1)):
            return nums
For example,
>>> randSpacedPoints(5,0,20,1)
[0.6681336968970486, 6.882374558960349, 9.73325447748434, 11.774594560239493, 16.009157676493903]
If there is no feasible solution this will hang in an infinite loop (so you might want to add a safety parameter which controls the number of trials).
I'm trying to solve problem 12 of Project Euler: What is the value of the first triangle number to have over five hundred divisors? (The 7th triangle number is 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28.) This is my code, but it is not fast enough.
Do you have any optimization tips?
n = 0
a = 0
maxcount = 0
while True:
    n += 1
    a += n
    count = 0
    for x in range(1, int(a + 1)):
        if a % x == 0:
            count += 1
    if count > maxcount:
        maxcount = count
        print a, "has", maxcount, "divisors"
Thank you!
Start by reducing the search space: there is no need to look at numbers that are not triangle numbers. Also, try looking at divisors in range(1, sqrt(n)) instead of range(1, n), as shown in the sketch below.
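A sketch combining both suggestions (my own code, not a reference implementation): iterate over triangle numbers directly and count divisors in pairs up to sqrt(n):

import math

def divisor_count(n):
    # divisors come in pairs (i, n // i), so stopping at sqrt(n) is enough
    count = 0
    for i in range(1, math.isqrt(n) + 1):
        if n % i == 0:
            count += 1 if i * i == n else 2
    return count

n, triangle = 0, 0
while True:
    n += 1
    triangle += n  # the n-th triangle number, built incrementally
    if divisor_count(triangle) > 500:
        print(triangle)
        break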
Grab the code from this question which implements very fast prime factorization:
Fast prime factorization module
Then use the answer to this question to convert your prime factors into a list of all divisors (the length is what you want):
What is the best way to get all the divisors of a number?
For example, you can add the following function (adapted from the second link) to the bottom of the module from the first link:
def alldivisors(n):
factors = list(factorization(n).items())
nfactors = len(factors)
f = [0] * nfactors
while True:
yield reduce(lambda x, y: x*y, [factors[x][0]**f[x] for x in range(nfactors)], 1)
i = 0
while True:
if i >= nfactors:
return
f[i] += 1
if f[i] <= factors[i][1]:
break
f[i] = 0
i += 1
Then in your code to count divisors you would use len(list(alldivisors(a))) which will calculate the number of divisors significantly more quickly than the brute force method you are currently using.
Apart from number theory: try caching, and doing things the other way around. For example: When you already know that 300 has 18 divisors (and what they are), what does that mean for a number which is dividable by 300? Can you cache such information? (sure you can.)
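One way to realize that caching idea (my own sketch, one option among many): sieve the smallest prime factor once, then read off any number's divisor count from its factorization:

def build_spf(limit):
    # smallest prime factor for every number up to limit
    spf = list(range(limit + 1))
    for i in range(2, int(limit ** 0.5) + 1):
        if spf[i] == i:  # i is prime
            for j in range(i * i, limit + 1, i):
                if spf[j] == j:
                    spf[j] = i
    return spf

def divisor_count(n, spf):
    # d(n) is the product of (exponent + 1) over n's prime factorization
    count = 1
    while n > 1:
        p, e = spf[n], 0
        while n % p == 0:
            n //= p
            e += 1
        count *= e + 1
    return count

spf = build_spf(10 ** 6)
print(divisor_count(300, spf))  # 18, as in the example above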
Pure python speedup hacks won't help you, you need a better algorithm.