I am trying to create a generator that returns numbers in a given range that pass a particular test given by a function foo. However I would like the numbers to be tested in a random order. The following code will achieve this:
from random import shuffle

def MyGenerator(foo, num):
    order = list(range(num))
    shuffle(order)
    for i in order:
        if foo(i):
            yield i
The Problem
The problem with this solution is that sometimes the range will be quite large (num might be of the order 10**8 and upwards). The function can become slow with such a large list in memory. I have tried to avoid this problem with the following code:
from random import randint

def MyGenerator(foo, num):
    tried = set()
    while len(tried) <= num - 1:
        i = randint(0, num - 1)
        if i in tried:
            continue
        tried.add(i)
        if foo(i):
            yield i
This works well most of the time, since in most cases num will be quite large, foo will pass a reasonable number of numbers, and the total number of times the __next__ method is called will be relatively small (say, a maximum of 200, often much smaller). Therefore it's reasonably likely that we stumble upon a value that passes the foo test, and the size of tried never gets large. (Even if foo only passes 10% of the time, we wouldn't expect tried to grow beyond roughly 2000.)
However, when num is small (close to the number of times that the __next__ method is called), or when foo fails most of the time, the above solution becomes very inefficient: it randomly guesses numbers until it guesses one that isn't in tried.
My attempted solution...
I was hoping to use some kind of function that maps the numbers 0, 1, 2, ..., n onto themselves in a roughly random way. (This isn't being used for any security purposes, so it doesn't matter if it isn't the most 'random' function in the world.) The function here (Create a random bijective function which has same domain and range) maps signed 32-bit integers onto themselves, but I am not sure how to adapt the mapping to a smaller range. Given num I don't even need a bijection on 0, 1, ..., num, just a value of n larger than and 'close' to num (using whatever definition of close you see fit). Then I can do the following:
def mix_function_factory(num):
    # something here???
    def foo(index):
        # something else here??
    return foo

def MyGenerator(foo, num):
    mix_function = mix_function_factory(num)
    for i in range(num):
        index = mix_function(i)
        if index < num:
            if foo(index):
                yield index
(so long as the bijection isn't on a set of numbers massively larger than num, the number of times index < num fails to hold will be small).
My Question
Can you think of one of the following:
A potential solution for mix_function_factory or even a few other potential functions for mix_function that I could attempt to generalise for different values of num?
A better way of solving the original problem?
Many thanks in advance....
The problem is basically generating a random permutation of the integers in the range 0..n-1.
Luckily for us, these numbers have a very useful property: they all have a distinct value modulo n. If we can apply some mathematical operations to these numbers while taking care to keep each number distinct modulo n, it's easy to generate a permutation that appears random. And the best part is that we don't need any memory to keep track of numbers we've already generated, because each number is calculated with a simple formula.
Examples of operations we can perform on every number x in the range include:
Addition: We can add any integer c to x.
Multiplication: We can multiply x with any number m that shares no prime factors with n.
Applying just these two operations on the range 0..n-1 already gives quite satisfactory results:
>>> n = 7
>>> c = 1
>>> m = 3
>>> [((x+c) * m) % n for x in range(n)]
[3, 6, 2, 5, 1, 4, 0]
Looks random, doesn't it?
If we generate c and m from a random number, it'll actually be random, too. But keep in mind that there is no guarantee that this algorithm will generate all possible permutations, or that each permutation has the same probability of being generated.
Implementation
The difficult part about the implementation is really just generating a suitable random m. I used the prime factorization code from this answer to do so.
import random

# credit for prime factorization code goes
# to https://stackoverflow.com/a/17000452/1222951
def prime_factors(n):
    gaps = [1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6]
    length, cycle = 11, 3
    f, fs, next_ = 2, [], 0
    while f * f <= n:
        while n % f == 0:
            fs.append(f)
            n //= f  # floor division keeps the factors as ints in Python 3
        f += gaps[next_]
        next_ += 1
        if next_ == length:
            next_ = cycle
    if n > 1: fs.append(n)
    return fs
def generate_c_and_m(n, seed=None):
    # we need to know n's prime factors to find a suitable multiplier m
    p_factors = set(prime_factors(n))

    def is_valid_multiplier(m):
        # m must not share any prime factors with n
        factors = prime_factors(m)
        return not p_factors.intersection(factors)

    # if no seed was given, generate random values for c and m
    if seed is None:
        c = random.randint(0, n - 1)  # randint needs both bounds
        m = random.randint(1, 2 * n)
    else:
        c = seed
        m = seed
    # make sure m is valid
    while not is_valid_multiplier(m):
        m += 1
    return c, m
Now that we can generate suitable values for c and m, creating the permutation is trivial:
def random_range(n, seed=None):
    c, m = generate_c_and_m(n, seed)
    for x in range(n):
        yield ((x + c) * m) % n
And your generator function can be implemented as
def MyGenerator(foo, num):
    for x in random_range(num):
        if foo(x):
            yield x
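A quick sanity check of the sketch above (illustrative only; the exact order depends on the c and m that get generated):

print(sorted(random_range(10)))  # always [0, 1, ..., 9], since the mapping is a bijection
print(list(random_range(10)))    # e.g. [3, 6, 9, 2, 5, 8, 1, 4, 7]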
That may be a case where the best algorithm depends on the value of num, so why not use two selectable algorithms wrapped in one generator?
You could mix your shuffle and set solutions with a threshold on the value of num. That's basically assembling your first two solutions in one generator:
from random import shuffle, randint

def MyGenerator(foo, num):
    if num < 100000:  # threshold has to be adjusted by experiments
        order = list(range(num))
        shuffle(order)
        for i in order:
            if foo(i):
                yield i
    else:  # big values, few collisions with random generator
        tried = set()
        while len(tried) < num:
            i = randint(0, num - 1)
            if i in tried:
                continue
            tried.add(i)
            if foo(i):
                yield i
The randint solution (for big values of num) works well because there aren't so many repeats in the random generator.
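A quick empirical check of that claim (my own sketch; output varies run to run):

from random import randint

def draws_needed(n, k):
    # count how many randint draws it takes to collect k distinct values
    seen, draws = set(), 0
    while len(seen) < k:
        seen.add(randint(0, n - 1))
        draws += 1
    return draws

print(draws_needed(10**8, 200))  # almost always exactly 200: repeats are rare when k << n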
Getting the best performance in Python is much trickier than in lower-level languages. For example, in C, you can often save a little bit in hot inner loops by replacing a multiplication by a shift. The overhead of Python's bytecode orientation erases this. Of course, this changes again when you consider which variant of "python" you're targeting (pypy? numpy? cython?) - you really have to write your code based on which one you're using.
But even more important is arranging operations to avoid serialized dependencies, since all CPUs are superscalar these days. Of course, real compilers know about this, but it still matters when choosing an algorithm.
One of the easiest ways to gain a little bit over existing answers would be by generating numbers in chunks using numpy.arange() and applying the ((x + c) * m) % n to the numpy ndarray directly. Every Python-level loop that can be avoided helps.
If the function can be applied directly to numpy ndarrays, that might be even better. Of course, a sufficiently small function in Python will be dominated by function-call overhead anyway.
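For illustration, a minimal sketch of that chunked idea (my own code, reusing the c, m and the ((x + c) * m) % n permutation from the earlier answer; not benchmarked here):

import numpy as np

def random_range_chunked(n, c, m, chunk=2**16):
    # same ((x + c) * m) % n permutation as above, but applied to a whole
    # numpy chunk per iteration; int64 is safe while (n + c) * m < 2**63
    for start in range(0, n, chunk):
        x = np.arange(start, min(start + chunk, n), dtype=np.int64)
        yield from ((x + c) * m) % n

Each chunk replaces tens of thousands of Python-level loop iterations with one vectorized pass.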
The best fast random-number-generator today is PCG. I wrote a pure-python port here but concentrated on flexibility and ease-of-understanding rather than speed.
Xoroshiro128+ is second-best-quality and faster, but less informative to study.
Python's (and many others') default choice of Mersenne Twister is among the worst.
(there's also something called splitmix64 which I don't know enough about to place - some people say it's better than xoroshiro128+, but it has a period problem - of course, you might want that here)
Both default-PCG and xoroshiro128+ use a 2N-bit state to generate N-bit numbers. This is generally desirable, but means numbers will be repeated. PCG has alternate modes that avoid this, however.
Of course, much of this depends on whether num is (close to) a power of 2. In theory, PCG variants can be created for any bit width, but currently only various word sizes are implemented since you'd need explicit masking. I'm not sure exactly how to generate the parameters for new bit sizes (perhaps it's in the paper?), but they can be tested simply by doing a period/2 jump and verifying that the value is different.
Of course, if you're only making 200 calls to the RNG, you probably don't actually need to avoid duplicates on the math side.
Alternatively, you could use an LFSR, which does exist for every bit size (though note that it never generates the all-zeros value (or equivalently, the all-ones value)). LFSRs are serial and (AFAIK) not jumpable, and thus can't be easily split across multiple tasks. Edit: I figured out that this is untrue; simply represent the advance step as a matrix and exponentiate it to jump.
Note that LFSRs do have the same obvious biases as simply generating numbers in sequential order based on a random start point - for example, if rng_outputs[a:b] all fail your foo function, then rng_outputs[b] will be much more likely as a first output regardless of starting point. PCG's "stream" parameter avoids this by not generating numbers in the same order.
Edit2: I have completed what I thought was a "brief project" implementing LFSRs in python, including jumping, fully tested.
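For concreteness, a minimal Galois LFSR sketch in Python (my own illustration, not the ported code mentioned above; the tap mask 0xB8 is a commonly cited maximal-period choice for 8 bits, but verify taps before relying on them):

def lfsr8(seed, taps=0xB8):
    # Galois LFSR: shift right and XOR in the taps whenever a 1 bit falls off.
    # With maximal-period taps it cycles through all 255 nonzero 8-bit states;
    # it never produces the all-zeros state, as noted above.
    state = seed & 0xFF
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps
        yield state

gen = lfsr8(1)
print([next(gen) for _ in range(8)])  # a fixed but scrambled-looking sequence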
The question is available here. My Python code is
def solution(A, B):
    if len(A) == 1:
        return [1]
    ways = [0] * (len(A) + 1)
    ways[1], ways[2] = 1, 2
    for i in xrange(3, len(ways)):
        ways[i] = ways[i-1] + ways[i-2]
    result = [1] * len(A)
    for i in xrange(len(A)):
        result[i] = ways[A[i]] & ((1 << B[i]) - 1)
    return result
The detected time complexity by the system is O(L^2) and I can't see why. Thank you in advance.
First, let's show that the runtime genuinely is O(L^2). I copied a section of your code, and ran it with increasing values of L:
import time
import matplotlib.pyplot as plt

def solution(L):
    if L == 0:
        return
    ways = [0] * (L + 5)
    ways[1], ways[2] = 1, 2
    for i in xrange(3, len(ways)):
        ways[i] = ways[i-1] + ways[i-2]

points = []
for L in xrange(0, 100001, 10000):
    start = time.time()
    solution(L)
    points.append(time.time() - start)

plt.plot(points)
plt.show()
The resulting graph (image omitted) shows clear quadratic growth.
To understand why this is O(L^2) when the obvious "time complexity" calculation suggests O(L), note that "time complexity" is not a well-defined concept on its own, since it depends on which basic operations you're counting. Normally the basic operations are taken for granted, but in some cases you need to be more careful. Here, if you count additions as a basic operation, then the code is O(L). However, if you count bit (or byte) operations, then the code is O(L^2). Here's the reason:
You're building an array of the first L Fibonacci numbers. The length (in digits) of the i'th Fibonacci number is Theta(i): F_i grows like phi^i, so it has about i*log10(phi) ≈ 0.209*i digits. So ways[i] = ways[i-1] + ways[i-2] adds two numbers with approximately i digits, which takes O(i) time if you count bit or byte operations.
This observation gives you an O(L^2) bit operation count for this loop:
for i in xrange(3, len(ways)):
    ways[i] = ways[i-1] + ways[i-2]
In the case of this program, it's quite reasonable to count bit operations: your numbers are unboundedly huge as L increases and addition of huge numbers is linear in clock time rather than O(1).
You can fix the complexity of your code by computing the Fibonacci numbers mod 2^32 -- since 2^32 is a multiple of 2^B[i]. That will keep a finite bound on the numbers you're dealing with:
for i in xrange(3, len(ways)):
    ways[i] = (ways[i-1] + ways[i-2]) & ((1 << 32) - 1)
There are some other issues with the code, but this will fix the slowness.
I've taken the relevant parts of the function:
def solution(A, B):
    for i in xrange(3, len(A) + 1):  # replaced ways for clarity
        # ...
    for i in xrange(len(A)):
        # ...
    return result
Observations:
A is an iterable object (e.g. a list)
You're iterating over the elements of A in sequence
The behavior of your function depends on the number of elements in A, making it O(A)
You're iterating over A twice, meaning 2 O(A) -> O(A)
On point 4, since 2 is a constant factor, 2 O(A) is still in O(A).
I think the page is not correct in its measurement. Had the loops been nested, then it would've been O(A²), but the loops are not nested.
This short sample is O(N²):
def process_list(my_list):
    for i in range(0, len(my_list)):
        for j in range(0, len(my_list)):
            # do something with my_list[i] and my_list[j]
I've not seen the code the page is using to 'detect' the time complexity of the code, but my guess is that the page is counting the number of loops you're using without understanding much of the actual structure of the code.
EDIT1:
Note that, based on this answer, the time complexity of the len function is actually O(1), not O(N), so the page is not incorrectly trying to count its use for the time-complexity. If it were doing that, it would've incorrectly claimed a larger order of growth because it's used 4 separate times.
EDIT2:
As @PaulHankin notes, asymptotic analysis also depends on what's considered a "basic operation". In my analysis, I counted additions and assignments as "basic operations", using the uniform cost method rather than the logarithmic cost method, which I did not mention at first.
Most of the time simple arithmetic operations are always treated as basic operations. This is what I see most commonly being done, unless the algorithm being analysed is for a basic operation itself (e.g. time complexity of a multiplication function), which is not the case here.
The only reason why we have different results appears to be this distinction. I think we're both correct.
EDIT3:
While an algorithm in O(N) is also in O(N²), I think it's reasonable to state that the code is still in O(N), because at the level of abstraction we're using, the computational steps that seem more relevant (i.e. are more influential) are in the loop as a function of the size of the input iterable A, not the number of bits being used to represent each value.
Consider the following algorithm to compute a**n:
def function(a, n):
    r = 1
    for i in range(0, n):
        r *= a
    return r
Under the uniform cost method, this is in O(N), because the loop is executed n times, but under logarithmic cost method, the algorithm above turns out to be in O(N²) instead due to the time complexity of the multiplication at line r *= a being in O(N), since the number of bits to represent each number is dependent on the size of the number itself.
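A small experiment illustrating the distinction (my own sketch): the bit length of r grows linearly with the iteration count, so under the logarithmic cost method the total work is quadratic.

def digit_growth(a, n):
    # track how many bits r occupies after each multiplication
    r = 1
    sizes = []
    for _ in range(n):
        r *= a
        sizes.append(r.bit_length())
    return sizes

print(digit_growth(2, 10))  # [2, 3, 4, ..., 11]: the operand size grows linearly

Since multiplying an i-bit number by a small constant costs at least O(i) bit operations, summing i from 1 to n gives the O(N²) total.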
The Codility Ladder challenge is best solved as follows. It is super tricky.
We first compute the Fibonacci sequence for the first L+2 numbers. The first two numbers are used only as fillers, so we have to index the sequence as A[idx]+1 instead of A[idx]-1. The second step is to replace the modulo operation by removing all but the n lowest bits, as sketched below.
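A hedged sketch of that approach, in the same Python 2 style as the question (the (1 << 32) - 1 mask relies on the earlier observation that 2**B[i] divides 2**32; check the actual Codility bounds on A[i] and B[i] before submitting):

def solution(A, B):
    MASK = (1 << 32) - 1  # keep only the low bits that can ever matter
    fib = [0] * (len(A) + 2)
    fib[1], fib[2] = 1, 1
    for i in xrange(3, len(fib)):
        fib[i] = (fib[i - 1] + fib[i - 2]) & MASK
    # ways to climb A[i] rungs = Fibonacci(A[i] + 1)
    return [fib[A[i] + 1] & ((1 << B[i]) - 1) for i in xrange(len(A))]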
This is a question regarding a code challenge, please don't supply too much code.. I'd like to figure this out myself as much as possible.
I recently started getting into code challenges, and combined it with learning Python (I'm a frontend javascript developer by day ;) ). All is going well so far and I'm convinced that this is the best way to learn a new language (for me at least).
I'm currently stuck on a challenge that requires me to print all prime numbers in a given range, this is all done by simple Stdin and Stdout.
I've tried two approaches so far, both are working but are too slow.. below is a link and the code of my fastest implementation so far. Maybe I'm missing something super obvious that is causing my python script to slow down. Currently it takes 1.76s for a single test case with a range of 1, 100000
http://ideone.com/GT6Xxk (you can debug the script here as well)
from sys import stdin
from math import sqrt, ceil

next(stdin)  # skip unnecessary line that describes the number of test cases

def is_prime(number):
    initial_divider = sqrt(number)
    if number == 2:
        return True
    elif number % 2 == 0 or int(initial_divider) == initial_divider:
        return False
    for divider in range(ceil(initial_divider), 1, -1):
        if number % divider == 0:
            return False
    return True

for line in stdin:
    low, high = [int(x) for x in line.split(' ')]
    primes = [number for number
              in range(max(low, 2), high + 1)
              if is_prime(number)]
    for prime in primes:
        print(prime)
    print('')
The description of the 'assignment' / challenge is as follows:
Input
The input begins with the number t of test cases in a single line (t <= 10). In each of the next t lines there are two numbers m and n (1 <= m <= n <= 1000000000, n-m <= 100000) separated by a space.
Output
For every test case print all prime numbers p such that m <= p <= n, one number per line, test cases separated by an empty line.
Update 1: I cleaned up the logic of the last block, where the gathering of primes and printing is done:
for line in stdin:
    low, high = [int(x) for x in line.split(' ')]
    for number in range(max(low, 2), high + 1):
        if is_prime(number):
            print(number)
    print('')
1) It might be dominated by console IO, printing the output. I changed the output so it uses a generator to collect the primes, convert the numbers to strings, and join the numbers with newlines. This should save some memory in list building and push some Python list iteration down into the Python runtime. That made it ~30% faster in unscientific rushed testing on my PC, doesn't make much difference on ideone. (This might be because I bodged it to run in Python 2, which has some very different iteration/list/generator workings, but used Python 3 on ideone).
2) You run the if number == 2: return True test every time; out of the first 100,000 numbers, most of them aren't 2. I extracted that to print 2 before printing the rest of the primes. Very minor change, not really worth it.
3) Your range counts down - range(ceil(initial_divider), 1, -1) - and that's really weird. It's much more likely that a number will divide by 3 than by, say, 19: every third number divides by 3, but only every 19th number divides by 19. So for a quick return from the function, try the small dividers first, right? I set it to count up. Noticeable speed improvement, and I hope it's still working.
That's ~50% of the original runtime, in a casual and completely uncomparable situation. Code now looks like this:
from sys import stdin
from math import sqrt, ceil

next(stdin)  # skip unnecessary line

def is_prime(number):
    initial_divider = sqrt(number)
    if number % 2 == 0 or int(initial_divider) == initial_divider:
        return False
    for divider in range(2, ceil(initial_divider) + 1):
        if number % divider == 0:
            return False
    return True

for line in stdin:
    low, high = [int(x) for x in line.split(' ')]
    primes = '\n'.join(str(number) for number
                       in range(max(low, 3), high + 1)
                       if is_prime(number))
    if low <= 2: print(2)
    print(primes)
    print('')
Change the list comprehension to a generator; the script will run faster.
for number in range(max(low, 2), high + 1):
    if is_prime(number):
        yield number
In a language like C or C++ the SPOJ PRIME 1 problem can easily be solved by brute force, i.e. by writing code that sieves all numbers up to 1,000,000,000 in less than a second and thus stays below the time limit. Perhaps even in Java, C# or Delphi. But if it is possible in Python at all then it is probably bloody hard and requires some serious fu.
Note, however, that SPOJ PRIME 1 does not ask for a billion numbers to be sieved; it asks for a couple of small ranges to be sieved which are no wider than 100001 numbers, and it likely queries only a handful of ranges. Let's say the number of ranges is 100 (it's probably much less) and the average width is 100000 (it's probably much less) then that's still only 10,000,000 numbers. Sieving the full billion in that situation does two orders of magnitude too much work, which is why SPOJ PRIME 1 manages to weed out the chaff with such precision despite the wide range of languages employed by pundits.
Hence the trick is to do only what's asked - to sieve the ranges provided as input. Even the most simple, straightforward code can do that with lots of time to spare (C++: about a millisecond total). The principle is exactly the same as in my answer to the challenge of drawing 100 random primes from a range with an upper bound of 1,000,000,000, and the same solution applies. The key is writing a function that can sieve a given range (window) without having to sieve all numbers below as well.
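For illustration, a minimal windowed sieve along those lines (a sketch, not SPOJ-tuned code; math.isqrt assumes Python 3.8+):

from math import isqrt

def primes_in_window(m, n):
    # sieve only the range [m, n]; n - m is assumed small (~100000)
    m = max(m, 2)
    root = isqrt(n)
    # ordinary sieve for the base primes up to sqrt(n)
    base = bytearray([1]) * (root + 1)
    base[0:2] = b'\x00\x00'
    for p in range(2, isqrt(root) + 1):
        if base[p]:
            base[p * p::p] = bytearray(len(base[p * p::p]))
    # cross off multiples of each base prime inside the window only
    window = bytearray([1]) * (n - m + 1)
    for p in range(2, root + 1):
        if base[p]:
            start = max(p * p, (m + p - 1) // p * p)
            window[start - m::p] = bytearray(len(window[start - m::p]))
    return [m + i for i, alive in enumerate(window) if alive]

print(primes_in_window(10, 30))  # [11, 13, 17, 19, 23, 29]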
Besides, the question of how to beat SPOJ PRIME 1 has been asked numerous times already and the answers given are still valid. A small selection:
How do I efficiently sieve through a selected range for prime numbers?
Efficient algorithm to get primes between two large numbers
Generating prime numbers between m and n
SPOJ PRIME1 : TLE
Segmented Sieve of Erastothenes C++ SPOJ
Spoj PRIME1 using sieve of eratosthenes (in C)
...
def PrimesBelow(limit):
    np = set()
    p = [2]
    for i in xrange(1, limit + 1, 2):
        if i == 1: continue
        if i in np: continue
        beg = 2 if i % 2 == 0 else 0
        for j in xrange(beg, int(limit) + 1, i):
            np.add(j)
        p.append(i)
    return p
LetzerWille was right. The function above will return the list of prime numbers below limit. I think this function will run faster than checking whether each number is prime, because it removes the multiples of each number and adds them to np.
Side note: this function will test odd numbers only.
A simple improvement. It was embarrassing to see this simple code. Mine was much longer and slower :( ... but I learned a lot :)
I'm also adding a simple function mt() for measuring time.
def PrimesBelow(limit):
    np = set()
    p = [2]
    for i in range(3, limit + 1, 2):
        if i in np: continue
        beg = i * i
        for j in range(beg, int(limit) + 1, i):
            np.add(j)
        p.append(i)
    return p

def mt(n):
    import time
    t = time.time()
    pr = PrimesBelow(n)
    print("#-Primes: {}, Time: {}".format(len(pr), round(time.time() - t, 4)))
    return pr
pr = mt(100000000)

takes about 49 seconds on an i7-3770 with 16 GB.
This optimized code performs far fewer operations; it can calculate and display 10,000 prime numbers within a second.
It uses these properties of prime numbers:
* If a number is not divisible by any number up to its square root, then it is prime.
* Instead of checking all the way up to the number itself (1000 iterations to figure out whether 1000 is prime), we can end the loop within about 32 iterations (sqrt(1000) ≈ 31.6).
* The loop breaks as soon as any divisor is found (for an even number it breaks on the first iteration; for a multiple of 3, on the second), so we only iterate all the way to the square root for the primes themselves.
Remember that you can still reduce the iterations by using the property that a number is prime if it is not divisible by any prime below its square root, but then the code grows larger: we would have to keep track of the primes calculated so far, and testing one particular number in isolation becomes harder. So the code below is a good compromise between simplicity and speed.
import math
number=1
count = 0
while(count<10000):
isprime=1
number+=1
for j in range(2,int(math.sqrt(number))+1):
if(number%j==0):
isprime=0
break
if(isprime==1):
print(number,end=" ")
count+=1
print("\nCount "+str(count))
I am writing a simple Python script that generates 6 numbers at random (from 1 to 100) and a larger number (from 100 to 1000). My goals for this script are to:
Calculate all of the possible combinations using at least 2 numbers and any of the simple math operations (adding, subtracting, multiplying and dividing)
Output all of the combinations whose total is within 10 above or below the larger number as 'matches'
The list of numbers need not be exhausted, but repeating numbers isn't accepted. Plus I don't care too much if the code is efficient or not (if anyone decides to post any - I can post mine so far if anyone needs it - preferably post it in Python); as long as it works, I'm happy to optimize it.
I have attempted this myself, only to fail as the program quickly ended with a RunTime Error. I also tried putting in a counter to stop the loop after x passes (where x is a small number such as 50), but that just makes matters worse as it keeps on going infinitely.
I've also done some research, and I found that this (Computing target number from numbers in a set - the second to last answer) is the closest I found to meet my requirements but hasn't got quite there yet.
Thanks for the help! :-)
EDIT: Here is my code:
import random, time, operator

i = 0
numlist = []
while i != 6:
    number = random.randint(1, 100)
    numlist.append(number)
    i += 1
largenumber = random.randint(100, 1000)
print(numlist)
print(largenumber)

def operationTesting():
    a, c, m, total = 0, 0, 0, 0
    totalnums = 0
    operators = ['+', '-', '*', '/']
    while total != largenumber:
        for a in numlist[m]:
            for c in numlist[m+1]:
                print(a)
                print(c)
                if a == c:
                    operationTesting()
                else:
                    b = random.choice(operators)
                    if b == '+':
                        summednums = operator.add(int(a), int(c))
                        print(summednums)
                        totalnums = totalnums + summednums
                    elif b == '-':
                        summednums = operator.sub(int(a), int(c))
                        print(summednums)
                        totalnums = totalnums + summednums
                    elif b == '*':
                        summednums = operator.mul(int(a), int(c))
                        print(summednums)
                        totalnums = totalnums + summednums
                    elif b == '/':
                        summednums = operator.floordiv(int(a), int(c))
                        print(summednums)
                        totalnums = totalnums + summednums
                print(totalnums)
                SystemExit(None)

operationTesting()
A very neat way to do it is using Reverse Polish Notation or Postfix notation. This notation avoids the need for brackets that you would probably want if you were doing it using conventional arithmetic with operator precedence etc.
You can do this with brute force if you are not too bothered about time efficiency. You need to consider what you want to do with division too - if two numbers do not divide exactly, do you want to return the result as 'invalid' in some way (I guess so), or really return a floored division? Note the latter might give you some invalid answers...
Consider the test case of numlist = [1,2,3,4,5,6]. In RPN, we could do something like this
RPN           Equivalent to
123456+++++   (1+(2+(3+(4+(5+6)))))
123456++++-   (1-(2+(3+(4+(5+6)))))
123456+++-+   (1+(2-(3+(4+(5+6)))))
...
12345+6+-++   (1+(2+(3-((4+5)+6))))
12345+6-+++   (1+(2+(3+((4+5)-6))))
...
And so on. You can probably see that with sufficient combinations, you can get any combination of numbers, operators and brackets. The brackets are important - taking only 3 numbers, obviously
1+2*6
is normally interpreted
(1 + (2*6)) == 13
and is quite different to
((1+2)*6) == 18
In RPN, these would be 126*+ and 12+6* respectively.
So, you've got to generate all your combinations in RPN, then develop an RPN calculator to evaluate them.
Unfortunately, there are quite a lot of permutations with 6 numbers (or any subset thereof). First you can have the numbers in any order; that's 6! == 720 combinations. You will always need n-1 == 5 operators, and each can be any one of the 4 operators, so that's 4**5 == 1024 permutations. Finally those 5 operators can be in any one of 5 positions (after the first pair of numbers, after the first 3, after 4 and so on); you can have at most 1 operator in the first position, two in the second and so on. That's 5! == 120 permutations. So in total you have 720*1024*120 == 88473600 permutations. That's roughly 9 * 10**7. Not beyond the realms of computation at all, but it might take 5 minutes or so to generate them all on a fairly quick computer.
You could significantly improve on this by "chopping" the search tree
Loads of the RPN combinations will be arithmetically identical (e.g. 123456+++++ == 12345+6++++ == 1234+5+6+++ etc) - you could use some prior knowledge to improve generate_RPN_combinations so it didn't generate them
identifying intermediate results that show certain combinations could never satisfy your criterion and not exploring any further combinations down that road.
You then have to send each string to the RPN calculator. These are fairly easy to code and a typical programming exercise - you push values onto a stack and when you come to operators, pop the top two members from the stack, apply the operator and push the result onto the stack; a sketch follows below. If you don't want to implement that - google minimal python rpn calculator and there are resources there to help you.
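A minimal sketch of such a calculator (names are my own; inexact division is treated as invalid, per the caveat above):

import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def eval_rpn(tokens):
    # tokens is a list like [1, 2, '+', 6, '*'] for the RPN string 12+6*
    stack = []
    for tok in tokens:
        if tok == '/':
            b, a = stack.pop(), stack.pop()
            if b == 0 or a % b != 0:
                return None  # division that isn't exact counts as invalid
            stack.append(a // b)
        elif tok in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(tok)  # a number: just push it
    return stack[0]

print(eval_rpn([1, 2, '+', 6, '*']))  # ((1+2)*6) == 18
print(eval_rpn([1, 2, 6, '*', '+']))  # (1+(2*6)) == 13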
Note, you say you don't have to use all 6 numbers. Rather than implementing that separately, I would suggest checking any intermediate results when evaluating the combinations for all 6 numbers, if they satisfy the criterion, keep them too.
I am trying to solve this problem on CodeChef: http://www.codechef.com/problems/COINS
But when I submit my code, it apparently takes too long to execute, and says the time has expired. I am not sure if my code is inefficient (it doesn't seem like it to me) or if I am having trouble with I/O. There is a 9 second time limit to solve a maximum of 10 inputs, 0 <= n <= 1 000 000 000.
In Byteland they have a very strange monetary system.
Each Bytelandian gold coin has an integer number written on it. A coin
n can be exchanged in a bank into three coins: n/2, n/3 and n/4. But
these numbers are all rounded down (the banks have to make a profit).
You can also sell Bytelandian coins for American dollars. The exchange
rate is 1:1. But you can not buy Bytelandian coins.
You have one gold coin. What is the maximum amount of American dollars
you can get for it?
Here is my code: It seems to take too long for an input of 1 000 000 000
def coinProfit(n):
    a = n / 2
    b = n / 3
    c = n / 4
    if a + b + c > n:
        nextProfit = coinProfit(a) + coinProfit(b) + coinProfit(c)
        if nextProfit > a + b + c:
            return nextProfit
        else:
            return a + b + c
    return n

while True:
    try:
        n = input()
        print(coinProfit(n))
    except Exception:
        break
The problem is that your code branches each recursive call into three new ones. This leads to exponential behavior.
The nice thing however is that most calls are duplicates: if you call coinProfit with 40, this will cascade to:
coinProfit(40)
- coinProfit(20)
  - coinProfit(10)
  - coinProfit(6)
  - coinProfit(5)
- coinProfit(13)
- coinProfit(10)
What you see is that a lot of effort is repeated (in this small example, coinProfit is called already twice on 10).
You can use dynamic programming to solve this: store earlier computed results, preventing you from branching again on those parts.
One can implement dynamic programming oneself, but one can also use the @memoize decorator to do this automatically. Without it, the function does a lot of work far too many times.
import math

def memoize(f):
    memo = {}
    def helper(x):
        if x not in memo:
            memo[x] = f(x)
        return memo[x]
    return helper

@memoize
def coinProfit(n):
    a = math.floor(n / 2)
    b = math.floor(n / 3)
    c = math.floor(n / 4)
    if a + b + c > n:
        nextProfit = coinProfit(a) + coinProfit(b) + coinProfit(c)
        if nextProfit > a + b + c:
            return nextProfit
        else:
            return a + b + c
    return n
The @memoize decorator transforms the function so that a cache of already calculated outputs is maintained. If the output for a given input has already been computed, it is stored in the cache and immediately returned. Otherwise it is computed as defined by your method, stored in the cache (for later use), and returned.
As @steveha points out, Python already has a built-in memoization decorator called functools.lru_cache; more info can be found here.
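A minimal sketch with lru_cache (Python 3; the cutoff of 12, the smallest coin where exchanging beats the face value, is my own shortcut):

from functools import lru_cache

@lru_cache(maxsize=None)
def coinProfit(n):
    if n < 12:
        return n  # below 12, n // 2 + n // 3 + n // 4 never exceeds n
    return max(n, coinProfit(n // 2) + coinProfit(n // 3) + coinProfit(n // 4))

print(coinProfit(1000000000))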
A final note is that @memoize and other dynamic programming constructs are not the solution to all efficiency problems. First of all, @memoize can have an impact on side effects: say your function prints something on stdout; then with @memoize this will have an impact on the number of times something is printed. And secondly, there are problems like the SAT problem where @memoize simply doesn't work at all, because the context itself is exponential (as far as we know). Such problems are called NP-hard.
You can optimize the program by storing results in some sort of cache. If the result exists in the cache, then there is no need to perform the calculation; otherwise, calculate and put the value in the cache. This way you avoid recalculating already-calculated values. E.g.
cache = {0: 0}

def coinProfit(num):
    if num in cache:
        return cache[num]
    else:
        a = num / 2
        b = num / 3
        c = num / 4
        tmp = coinProfit(c) + coinProfit(b) + coinProfit(a)
        cache[num] = max(num, tmp)
        return cache[num]

while True:
    try:
        print coinProfit(int(raw_input()))
    except:
        break
I just tried and noticed a few things... This doesn't have to be considered as The answer.
On my (recent) machine, it takes a solid 30 seconds to compute with n = 100 000 000. I imagine that it's pretty normal for the algorithm you just wrote, because it computes the same values time and time again (you didn't optimise your recursion calls with caching, as suggested in other answers).
Also, the problem definition is pretty gentle, because it insists: each Bytelandian gold coin has an integer number written on it, and these numbers are all rounded down. Knowing this, you should turn the first three lines of your function into:
import math

def coinProfit(n):
    a = math.floor(n / 2)
    b = math.floor(n / 3)
    c = math.floor(n / 4)
This will prevent a, b, c from being turned into floats (in Python 3 at least), which would send your computer into a big recursive mess, even with the smallest values of n.
num = input()
fact = 0
while fact != num:
    fact = fact + 1
    rem = num % fact
    if rem == 0:
        print fact
You only need to go to the square root of the input number to get all the factors (not as far as half the number, as suggested elsewhere). For example, 24 has factors 1, 2, 3, 4, 6, 8, 12, 24. sqrt(24) is approx 4.9. Check 1 and also get 24, check 2 and also get 12, check 3 and also get 8, check 4 and also get 6. Since 5 > 4.9, no need to check it. (Yes, I know 24 isn't the best example as all whole numbers less than sqrt(24) are factors of 24.)
import math

factors = set()
for i in xrange(1, int(math.sqrt(x)) + 1):  # start at 1 to avoid dividing by zero
    if x % i == 0:
        factors.add(i)
        factors.add(x / i)
print factors
There are some really complicated ways to do better for large numbers, but this should get you a decent runtime improvement. Depending on your application, caching could also save you a lot of time.
Use for loops, for starters. Then, let Python increment for you, and get rid of the unnecessary rem variable. This code does exactly the same as your code, except in a "Pythonic" way.
num = input()
for x in xrange(1, num + 1):
    if (num % x) == 0:
        print x
xrange(x, y) returns a lazily evaluated sequence of all integers from x up to, but not including, y.
So that prints out all the factors of a number? The first obvious optimisation is that you could quit when fact*2 is greater than num. Anything greater than half of num can't be a factor. That's half the work thrown out instantly.
The second is that you'd be better searching for the prime factorisation and deriving all the possible factors from that. There are a bunch of really smart algorithms for that sort of thing; a simple version is sketched below.
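For illustration, a sketch of that factorisation route (my own code; plain trial division, fine for moderately sized numbers):

def prime_factorization(n):
    # trial division, e.g. 24 -> {2: 3, 3: 1}
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def all_divisors(n):
    # expand every combination of prime powers into a divisor
    divisors = [1]
    for p, e in prime_factorization(n).items():
        divisors = [d * p ** k for d in divisors for k in range(e + 1)]
    return sorted(divisors)

print(all_divisors(24))  # [1, 2, 3, 4, 6, 8, 12, 24]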
Once you get halfway there (once fact > num/2), you're not going to discover any new numbers, since the numbers above num/2 can be discovered by calculating num/fact for each one (this can also be used to easily print each number with its pair).
The following code should cut the time down by a few seconds on every calculation, and cut it in half when num is odd. Hopefully you can follow it; if not, ask.
I'll add more if I think of something later.
def even(num):
    '''Even numbers can be divided by odd numbers, so test them all'''
    fact = 0
    while fact < num / 2:
        fact += 1
        rem = num % fact
        if rem == 0:
            print '%s and %s' % (fact, num / fact)

def odd(num):
    '''Odd numbers can't be divided by even numbers, so why try?'''
    fact = -1
    while fact < num / 2:
        fact += 2
        rem = num % fact
        if rem == 0:
            print '%s and %s' % (fact, num / fact)

while True:
    num = input(':')
    if str(num)[-1] in '13579':
        odd(num)
    else:
        even(num)
Research integer factorization methods.
Unfortunately in Python, the divmod operation is implemented as a built-in function. Despite hardware integer division often producing the quotient and the remainder simultaneously, no non-assembly language that I'm aware of has implemented a /% or //% basic operator.
So: the following is a better brute-force algorithm if you count machine operations. It gets all factors in O(sqrt(N)) time without having to calculate sqrt(N) -- look, Mum, no floating point!
# brute-force factor generator: yields each factor together with its cofactor
def factors(num):
    fact = 0
    while 1:
        fact += 1
        fact2, rem = divmod(num, fact)
        if not rem:
            yield fact
            yield fact2
        if fact >= fact2 - 1:
            # fact >= math.sqrt(num)
            break
Yes. Use a quantum computer
Shor's algorithm, named after mathematician Peter Shor, is a quantum
algorithm (an algorithm which runs on a quantum computer) for integer
factorization formulated in 1994. Informally it solves the following
problem: Given an integer N, find its prime factors.
On a quantum computer, to factor an integer N, Shor's algorithm runs
in polynomial time (the time taken is polynomial in log N, which is
the size of the input). Specifically it takes time O((log N)^3),
demonstrating that the integer factorization problem can be
efficiently solved on a quantum computer and is thus in the complexity
class BQP. This is exponentially faster than the most efficient known
classical factoring algorithm, the general number field sieve, which
works in sub-exponential time - about O(e^(1.9 (log N)^(1/3) (log log
N)^(2/3))). The efficiency of Shor's algorithm is due to the efficiency
of the quantum Fourier transform, and modular exponentiation by
squarings.