Use modulo with numbers greater than 64-bit integers in numpy/numba - python

I'm trying to implement a prime factorization algorithm leveraging the GPU/CUDA for parallelization as a pet project.
I'm using Python with numpy and numba for the parallelization part.
My problem is now that I hit the 64-bit integer boundary quite fast and I am searching for solutions to work around this. Since numpy/numba only supports integers up to 64 bits (compared to arbitrarily large numbers in Python itself), this is where I'm stuck at the moment.
In essence the part on the GPU mainly uses the modulo operation and a bit of iteration.
I found that I can handle a bigger dividend in the modulo by splitting it into multiple operations with smaller dividends.
Example:
Let’s assume we cannot use numbers bigger than 10^3.
I would like to do the following operation on the GPU:
1019 % 17 = 16
I can do so by splitting the dividend 1019 into multiple arbitrarily sized chunks, for example:
1019 = 500 + 519
And then calculating the modulo of each chunk separately and taking the modulo of the sum of the results again.
((500 % 17) + (519 % 17)) % 17 = 16
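A quick sanity check of that idea in plain Python (nothing GPU-specific, just the identity itself):
def chunked_mod(chunks, d):
    # reduce each chunk, then reduce the sum of the partial results once more
    return sum(c % d for c in chunks) % d

assert chunked_mod([500, 519], 17) == 1019 % 17 == 16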
My question is now:
Is there a similar operation I can perform to work with a bigger divisor and quotient (for example 2037 % 1019)? Or even better, a way to use arbitrary sized numbers in numpy/numba without precision loss?
Bear with me if I didn’t use proper math slang.

You can reduce the dividend much more efficiently using modular arithmetic, thanks to basic congruence identities. Indeed:
Assuming:
a ≡ x [d]
b ≡ y [d]
c ≡ z [d]
Then:
a * b + c ≡ x * y + z [d]
This is an interesting property if d is small. Additionally, any number can be decomposed into a sum of powers of two, and it is easy to pre-compute the remainders of 2**n modulo d using modular exponentiation.
For example:
15428794586587458 ≡ 3592296 * 2**32 + 749035842 [2019]
≡ 495 * 1090 + 975 [2019]
≡ 477 + 975 [2019]
≡ 1452 [2019]
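For illustration, here is a pure-Python sketch of that reduction (my own naming, not from any particular library): the dividend is split into 32-bit limbs and 2**32 mod d is pre-computed, so every intermediate value stays far below 64 bits as long as d fits in roughly 32 bits.
def mod_big(limbs, d):
    # limbs: the number split into 32-bit chunks, most significant first
    base_mod = pow(2, 32, d)                 # 2**32 mod d, pre-computed once
    r = 0
    for limb in limbs:                       # Horner's scheme: r = r * 2**32 + limb (mod d)
        r = (r * base_mod + limb % d) % d
    return r

n = 15428794586587458
limbs = [(n >> 32) & 0xFFFFFFFF, n & 0xFFFFFFFF]   # [3592296, 749035842]
assert mod_big(limbs, 2019) == n % 2019 == 1452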
Unfortunately, when it comes to the divisor, AFAIK there is no simple efficient (i.e. polynomial) way to reduce it without special assumptions on both the divisor and the dividend (especially the former). I think prime factorization would be much simpler to solve if such a reduction existed. If the divisor d is a composite number, then the Chinese remainder theorem should certainly help to split the reduction into simpler pieces, but if d is a prime number then AFAIK this is a hard problem.
In that case, using variable-precision numbers (or simply large integers stored in multiple parts) is certainly the simplest solution. A naive approach is a binary search, since a ≡ b [d] amounts to solving a = k * d + b. Indeed, k can be multiplied by two over and over until k * d is bigger than a, and then a binary search on k can be used so that a ≤ k * d < a + d. The multiplications can be computed efficiently using the Karatsuba algorithm (there are faster methods for very large numbers, but this one is faster for the reasonably large ones certainly used in your case). AFAIK, Newton's method can be used to achieve a faster reduction (both are algorithmically fast, but the latter converges faster).
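Here is a rough sketch of one way to implement that doubling/binary-search idea (my own simplified variant: instead of an explicit binary search it walks the doubled multiples back down, subtracting whenever possible). It uses only comparison, subtraction and doubling, the kind of primitives that are easy to implement on numbers stored in multiple 64-bit parts; Python's big integers stand in for those here.
def mod_by_doubling(a, d):
    # compute a % d using only compare, subtract and shift-by-one
    if a < d:
        return a
    m = d
    while (m << 1) <= a:     # double m = k*d until the next doubling would overshoot a
        m <<= 1
    while a >= d:            # walk back down, subtracting the largest multiple that still fits
        if a >= m:
            a -= m
        m >>= 1
    return a

assert mod_by_doubling(2037, 1019) == 2037 % 1019 == 1018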

Related

Time complexity of recursion of multiplication

What is the worst case time complexity (Big O notation) of the following function for positive integers?
def rec_mul(a: int, b: int) -> int:
    if b == 1:
        return a
    if a == 1:
        return b
    else:
        return a + rec_mul(a, b - 1)
I think it's O(n) but my friend claims it's O(2^n)
My argument:
The function recurses b times in any case, therefore the complexity is O(b) = O(n)
His argument:
since there are n bits, the values of a and b can be no more than (2^n) - 1,
therefore the max number of calls will be O(2^n)
You are both right.
If we disregard the time complexity of addition (and you might discuss whether you have reason to do so or not) and count only the number of iterations, then you are both right because you define:
n = b
and your friend defines
n = log_2(b)
so the complexity is O(b) = O(2^log_2(b)).
Both definitions are valid and both can be practical. You look at the input values, your friend at the lengths of the input, in bits.
This is a good demonstration why big-O expressions mean nothing if you don't define the variables used in those expressions.
Your friend and you can both be right, depending on what n is. Another way to say this is that your friend and you are both wrong, since you both forgot to specify what n was.
Your function takes an input that consists of two variables, a and b. These variables are numbers. If we express the complexity as a function of these numbers, it is really O(b log(ab)), because it consists of b iterations, and each iteration requires an addition of numbers of size up to ab, which takes log(ab) operations.
Now, you both chose to express the complexity as a function of n rather than a or b. This is okay; we often do this; but an important question is: what is n?
Sometimes we think it's "obvious" what n is, so we forget to say it.
If you choose n = max(a, b) or n = a + b, then you are right, the complexity is O(n).
If you choose n to be the length of the input, then n is the number of bits needed to represent the two numbers a and b. In other words, n = log(a) + log(b). In that case, your friend is right, the complexity is O(2^n).
Since there is an ambiguity in the meaning of n, I would argue that it's meaningless to express the complexity as a function of n without specifying what n is. So, your friend and you are both wrong.
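To make the contrast concrete, here is a small counting sketch of mine (a loop instead of the recursion, so a large b doesn't hit Python's recursion limit):
def rec_mul_calls(a: int, b: int) -> int:
    # same control flow as rec_mul, but iterative, and counting the calls
    calls = 0
    while True:
        calls += 1
        if b == 1 or a == 1:
            return calls
        b -= 1

for bits in (4, 8, 16, 20):
    b = 2**bits - 1                      # largest b representable in `bits` bits
    print(bits, "bits ->", rec_mul_calls(3, b), "calls")

The call count is b itself, which is linear in the value of b but exponential in the number of bits used to write b down.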
Background
A unary encoding of the input uses an alphabet of size 1: think tally marks. If the input is the number a, you need O(a) symbols.
A binary encoding uses an alphabet of size 2: you get 0s and 1s. If the number is a, you need O(log_2 a) symbols (bits).
A trinary encoding uses an alphabet of size 3: you get 0s, 1s, and 2s. If the number is a, you need O(log_3 a) symbols.
In general, a k-ary encoding uses an alphabet of size k: you get 0s, 1s, 2s, ..., and (k-1)s. If the number is a, you need O(log_k a) symbols.
What does this have to do with complexity?
As you are aware, we ignore multiplicative constants inside big-oh notation. n, 2n, 3n, etc, are all O(n).
The same holds for logarithms. log_2 n, 2 log_2 n, 3 log_2 n, etc, are all O(log_2 n).
The key observation here is that the ratio log_k1 n / log_k2 n is a constant, no matter what k1 and k2 are... as long as they are greater than 1. That means log_k1 n = O(log_k2 n) for all k1, k2 > 1.
This is important when comparing algorithms. As long as you use an "efficient" encoding (i.e., not a unary encoding), it doesn't matter what base you use: you can simply say f(n) = O(lg n) without specifying the base. This allows us to compare the runtime of algorithms without worrying about the exact encoding used.
So n = b (which implies a unary encoding) is typically never used. Binary encoding is simplest, and doesn't provide a non-constant speed-up over any other encoding, so we usually just assume binary encoding.
That means we almost always assume that n = lg a + lg b as the input size, not n = a + b. A unary encoding is the only one that suggests linear growth, rather than exponential growth, as the values of a and b increase.
One area, though, where unary encodings are used is in distinguishing between strong NP-completeness and weak NP-completeness. Without getting into the theory, if a problem is NP-complete, we don't expect any algorithm to have a polynomial running time, that is, one bounded by O(n**k) for some constant k when using an efficient encoding.
But some algorithms do become polynomial if we allow a unary encoding. If a problem that is otherwise NP-complete becomes polynomial when using a unary encoding, we call that a weakly NP-complete problem. It's still slow, but it is in some sense "faster" than a problem where the sheer size of the numbers doesn't matter.

Reversing pow function - finding the power [duplicate]

Given positive integers b, c, m where (b < m) is True, the task is to find a positive integer e such that
(b**e % m == c) is True
where ** is exponentiation (e.g. in Ruby and Python, or ^ in some other languages) and % is the modulo operation. What is the most efficient algorithm (with the lowest big-O complexity) to solve it?
Example:
Given b=5; c=8; m=13 this algorithm must find e=7 because 5**7%13 = 8
From the % operator I'm assuming that you are working with integers.
You are trying to solve the Discrete Logarithm problem. A reasonable algorithm is Baby step, giant step, although there are many others, none of which are particularly fast.
The difficulty of finding a fast solution to the discrete logarithm problem is a fundamental part of some popular cryptographic algorithms, so if you find a better solution than any of those on Wikipedia please let me know!
This isn't a simple problem at all. It is called computing the discrete logarithm, and it is the inverse operation to modular exponentiation.
There is no efficient algorithm known. That is, if N denotes the number of bits in m, all known algorithms run in O(2^(N^C)) where C>0.
Python 3 Solution:
Thankfully, SymPy has implemented this for you!
SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.
This is the documentation on the discrete_log function. Use this to import it:
from sympy.ntheory import discrete_log
Their example computes log_7(15) (mod 41):
>>> discrete_log(41, 15, 7)
3
Because of the (state-of-the-art, mind you) algorithms it employs to solve it, you'll get O(sqrt(n)) on most inputs you try. It's considerably faster when your prime modulus has the property that p - 1 factors into a lot of small primes.
Consider a prime on the order of 100 bits (~2^100). With sqrt(n) complexity, that's still 2^50 iterations. That being said, don't reinvent the wheel. This does a pretty good job. I might also add that it was almost 4x more memory efficient than Mathematica's MultiplicativeOrder function when I ran it with large-ish inputs (44 MiB vs. 173 MiB).
Since a duplicate of this question was asked under the Python tag, here is a Python implementation of baby step, giant step which, as @MarkBeyers points out, is a reasonable approach (as long as the modulus isn't too large):
import math

def baby_steps_giant_steps(a, b, p, N=None):
    if not N:
        N = 1 + int(math.sqrt(p))
    # initialize the baby_steps table
    baby_steps = {}
    baby_step = 1
    for r in range(N + 1):
        baby_steps[baby_step] = r
        baby_step = baby_step * a % p
    # now take the giant steps
    giant_stride = pow(a, (p - 2) * N, p)
    giant_step = b
    for q in range(N + 1):
        if giant_step in baby_steps:
            return q * N + baby_steps[giant_step]
        else:
            giant_step = giant_step * giant_stride % p
    return "No Match"
In the above implementation, an explicit N can be passed to fish for a small exponent even if p is cryptographically large. It will find the exponent as long as the exponent is smaller than N**2. When N is omitted, the exponent will always be found, but not necessarily in your lifetime or with your machine's memory if p is too large.
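To illustrate that optional N (a made-up example; the numbers below are not from the original post):
p = 2**127 - 1                 # a large (Mersenne) prime modulus
a = 3
b = pow(a, 1234, p)            # the unknown exponent happens to be small
print(baby_steps_giant_steps(a, b, p, 10**4))   # -> 1234, since 1234 < (10**4)**2

Note that the table only holds 10**4 entries here, so memory stays small even though p is huge.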
For example, if
p = 70606432933607
a = 100001
b = 54696545758787
then 'pow(a,b,p)' evaluates to 67385023448517
and
>>> baby_steps_giant_steps(a,67385023448517,p)
54696545758787
This took about 5 seconds on my machine. For the exponent and the modulus of those sizes, I estimate (based on timing experiments) that brute force would have taken several months.
Discrete logarithm is a hard problem
Computing discrete logarithms is believed to be difficult. No
efficient general method for computing discrete logarithms on
conventional computers is known.
I will add here a simple brute-force algorithm which tries every possible value from 1 to m and outputs a solution if one is found. Note that there may be more than one solution to the problem, or none at all. This algorithm will return the smallest possible value, or -1 if no solution exists.
def bruteLog(b, c, m):
    s = 1
    for i in range(m):
        s = (s * b) % m
        if s == c:
            return i + 1
    return -1

print(bruteLog(5, 8, 13))
and here you can see that 3 is in fact the solution:
print(5**3 % 13)
There is a better algorithm, but because it is often set as an exercise in programming competitions, I will just give you a link to the explanation.
As said, the general problem is hard. However, a practical way to find e, if you know e is going to be small (like in your example), is simply to try each e starting from 1.
BTW, e == 3 is the first solution to your example, and you can obviously find that in 3 steps. Compare that to solving the non-discrete version and naively looking for integer solutions, i.e.
e = log(c + n*m)/log(b) where n is a non-negative integer
which finds e==3 in 9 steps
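For what it's worth, here is a rough sketch (mine) of that non-discrete search; the floating-point integer test is only trustworthy for small values like these:
import math

def naive_log_search(b, c, m, max_steps=10**6):
    # try c + n*m for n = 1, 2, 3, ... and test whether log_b of it is an integer
    for n in range(1, max_steps + 1):
        e = math.log(c + n * m) / math.log(b)
        if abs(e - round(e)) < 1e-9:
            return round(e), n
    return None

print(naive_log_search(5, 8, 13))   # -> (3, 9): e == 3, found after 9 steps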

Explain a code to check primality based on Fermat's little theorem

I found some Python code that claims to check primality based on Fermat's little theorem:
def CheckIfProbablyPrime(x):
    return (2 << x - 2) % x == 1
My questions:
How does it work?
What's its relation to Fermat's little theorem?
How accurate is this method?
If it's not accurate, what's the advantage of using it?
I found it here.
1. How does it work?
Fermat's little theorem says that if a number x is prime, then for any integer a:
a**x ≡ a (mod x)
If we divide both sides by a (which is allowed when a is not a multiple of x), or equivalently subtract 1 from the exponent on each side, we can re-write the relation as follows:
a**(x - 1) ≡ 1 (mod x)
I'm going to punt on proving how this works (your first question) because there are many good proofs (better than I can provide) on this wiki page and under some Google searches.
2. Relation between code and theorem
So, the function you posted checks whether (2 << x - 2) % x == 1.
First off, (2 << x - 2) is the same thing as writing 2**(x - 1), or in math form, 2 raised to the power x - 1.
That's because << is the logical left-shift operator, which is explained better here. The relation between bit-shifting and multiplying by powers of 2 is specific to the way that numbers are represented on computers (in binary), but it all boils down to:
2 << (x - 2) == 2 * 2**(x - 2) == 2**(x - 1)
Now, we know from above that for any number a:
a**(x - 1) ≡ 1 (mod x)
Let's say then that a = 2. That gives us:
2**(x - 1) ≡ 1 (mod x)
Well heck, that's the same as 2 << (x - 2)! So then we can write the final relation:
2 << (x - 2) ≡ 1 (mod x)
Now, the math version of mod looks kind of odd, but we can write the equivalent code as follows:
(2 << x - 2) % x == 1
And that's the relation.
3. Accuracy of method
So, I think "accuracy" is a bad term here, because Fermat's little theorem is definitely true for all prime numbers. However, the converse does not hold: the relation can also be true for some numbers that are not prime. Which is to say, if I have some number i and I'm not sure whether i is prime, this test can only tell me that i is definitely NOT prime (when the relation fails); when the relation holds, i might still be composite. Composite numbers that pass the test are called pseudoprime numbers, or more specifically in this case Fermat pseudoprimes (to base 2).
If this sort of thing sounds interesting, take a look at the Carmichael numbers AKA the Absolute Fermat Pseudoprimes, which pass the Fermat test in any base but are not prime. In our case we run into numbers which pass in base 2, but Fermat's little theorem might not hold for these numbers in other bases -- the Carmichael numbers pass the test for all bases coprime to x.
On the wiki page of the Carmichael numbers there is a discussion of their distribution over the natural numbers -- their count grows roughly like a power of the size of the range over which you're looking, though the exponent is less than 1 (about 1/3). So, if you're searching for primes over a big range, you're going to run into more and more Carmichael numbers, which are effectively false positives for this method CheckIfProbablyPrime. That might be okay, depending on your input and how much you care about running into false positives.
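A quick illustration in the interpreter (using Python's three-argument pow, which computes the modular power directly):
>>> import math
>>> pow(2, 340, 341) == 1        # 341 = 11 * 31 passes the base-2 test anyway
True
>>> pow(3, 340, 341) == 1        # ...but base 3 exposes it as composite
False
>>> all(pow(a, 560, 561) == 1 for a in range(2, 561) if math.gcd(a, 561) == 1)
True

That last line shows 561, the smallest Carmichael number, passing the test for every base coprime to it.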
4. Why is this useful?
In short, it's an optimization.
The main reason to use something like this is to speed up a search for prime numbers. That's because actually checking whether a number is prime is expensive -- doable, but well beyond O(1) running time. So, if we can cheaply rule out some numbers without doing the full check, we can devote more time to the remaining candidates. Since Fermat's little relation only says yes when a number is possibly prime (it never says no for a number that actually is prime), and it is cheap to evaluate, we can toss it into an is_prime loop to discard a fair number of candidates up front. So, we can speed things up.
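As a rough sketch of that idea (my own toy example, not from the linked page), the cheap Fermat check can act as a pre-filter in front of an exact but slower test:
def probably_prime(x):
    # Fermat test, base 2: never rejects an actual prime,
    # but lets the occasional pseudoprime through
    return x == 2 or (x > 2 and pow(2, x - 1, x) == 1)

def is_prime(x):
    # slow but exact trial division, only reached for survivors of the pre-filter
    if x < 2:
        return False
    return all(x % d for d in range(2, int(x**0.5) + 1))

primes = [n for n in range(2, 10000) if probably_prime(n) and is_prime(n)]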
There are many primality checks like this one, you can find some coded prime checkers here
Final Note
One of the confusing things about this optimization is that it uses the bit shift operator << instead of the exponentiation operator **. This is because bit shifting is one of the fastest operations your computer can do, while exponentiation is somewhat slower. It is often not the best optimization, because most modern languages know how to replace the things we write with more optimized operations. But that's my guess as to why the authors of this code used the bit shift instead of 2**(x-1).
Edit: As MarkDickinson notes, taking the exponent of a number and then modding it explicitly is not the best way to do it. This is a thing called modular exponentiation, and there exist algorithms which can do it faster than the way we've written it. Python's builtin pow actually implements one of these algorithms, and takes an optional third argument to mod by. So we can write a final version of this function:
def CheckIfProbablyPrime(x):
    return pow(2, x - 1, x) == 1
Which is not only more readable but also faster than the confusing bit-shift crap. You know what they say.
I believe the code in your example is incorrect, because the binary left shift operator is not equivalent to raising a number to a power, which is what Fermat's little theorem uses. With a base of two, a binary left shift by x would be equal to a power of x + 1, which is NOT what appears in this version of Fermat's little theorem.
Instead, use ** for integer powers in Python.
def CheckIfProbablyPrime(x):
    return (2 ** x - 2) % x == 0
" p − a is an integer multiple of p " therefore for primes, following theorem, result of 2 in power of x - 2 divided by x will leave a leftover of 0 (modulo '%' checks for number left over after division.
For the x - 1 version:
def CheckIfProbablyPrime(a, x):
    return (a ** (x - 1) - 1) % x == 0
Both variations should evaluate to True for prime numbers, because they represent Fermat's little theorem in Python.

python: computing extremely large number

I need to compute extremely large numbers using Python.
It could be as large as
factorial(n x 100,000,000) * factorial(n x 100,000,000)
so I think on a usual 32-bit computer it overflows...
Is there a workaround?
This is not an issue under Python.
>>> 2**100
1267650600228229401496703205376L
Python automatically converts plain integers into long integers and does not overflow.
http://docs.python.org/2/library/stdtypes.html#numeric-types-int-float-long-complex
You want to use bignums. Python has them natively.
Stirling's approximation to n! is (n/e)^n * sqrt(2 * pi * n). This has roughly n * log10(n/e) decimal digits.
You're computing (n * 1e8)!^2. This will have roughly 2 * n * 1e8 * log10(n * 1e8 / e) digits, which is 2e8 * n * (7.6 + log10(n)). Assuming n is relatively small and log10(n) is negligible, this is about 1.5e9 * n digits.
You can store around 2.4 digits in a byte, so your result is going to use roughly (0.6 * n) gigabytes of RAM, assuming Python can store the number perfectly efficiently.
A 32-bit machine can address at most 4GB of RAM, so n = 1 or 2 is theoretically possible to compute, but by n = 3 the result plus the two intermediate factorials will no longer fit. As you compute your result, Python is going to have to hold multiple bignums in RAM (for example, temporary variables), so the actual memory usage is going to be higher than the calculation above suggests.
In practice, I'd expect that even on a 64-bit machine with plenty of RAM you'd never get a result for this calculation in Python -- the machine will spend all its time garbage collecting.
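If you want to sanity-check those figures yourself, math.lgamma gives log(n!) without actually computing the factorial (a rough sketch):
import math

def digits_of_squared_factorial(m):
    # lgamma(m + 1) == ln(m!), so divide by ln(10) to get decimal digits
    return 2 * math.lgamma(m + 1) / math.log(10)

for n in (1, 2, 3):
    d = digits_of_squared_factorial(n * 10**8)
    print(f"n={n}: ~{d:.2e} digits, ~{d / 2.4 / 1e9:.2f} GB at 2.4 digits per byte")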
As far as I know, integers in Python (3) can be arbitrarily large.

How do I get the Math equation of Python Algorithm?

OK, so I am feeling a little stupid for not knowing this, but a coworker asked, so I am asking here: I have written a Python algorithm that solves his problem: given x > 0, add together all the numbers from 1 to x.
def intsum(x):
    if x > 0:
        return x + intsum(x - 1)
    else:
        return 0

>>> intsum(10)
55
First, what type of equation is this, and what is the correct way to get this answer, since it is clearly easier using some other method?
This is recursion, though for some reason you're labeling it like it's factorial.
In any case, the sum from 1 to n is also simply:
n * ( n + 1 ) / 2
(You can special case it for negative values if you like.)
Transforming recursively-defined sequences of integers into ones that can be expressed in a closed form is a fascinating part of discrete mathematics -- I heartily recommend Concrete Mathematics: A Foundation for Computer Science, by Ronald Graham, Donald Knuth, and Oren Patashnik (see, e.g., the Wikipedia entry about it).
However, the specific sequence you show, fac(x) = fac(x - 1) + x, according to a famous anecdote, was solved by Gauss when he was a child in first grade -- the teacher had given the pupils the task of summing the numbers from 1 to 100 to keep them quiet for a while, but two minutes later there was young Gauss with the answer, 5050, and the explanation: "I noticed that I can sum the first, 1, and the last, 100, that's 101; and the second, 2, and the next-to-last, 99, and that's again 101; and clearly that repeats 50 times, so, 50 times 101, 5050". Not rigorous as proofs go, but quite correct and appropriate for a 6-year-old ;-).
In the same way (plus really elementary algebra) you can see that the general case is, as many have already said, (N * (N+1)) / 2 (the product is always even, since one of the numbers must be odd and one even; so the division by two will always produce an integer, as desired, with no remainder).
Here is how to prove the closed form for an arithmetic progression
S = 1 + 2 + ... + (n-1) + n
S = n + (n-1) + ... + 2 + 1
2S = (n+1) + (n+1) + ... + (n+1) + (n+1)
^ you'll note that there are n terms there.
2S = n(n+1)
S = n(n+1)/2
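A quick brute-force check of that closed form in Python, for small n:
assert all(sum(range(1, n + 1)) == n * (n + 1) // 2 for n in range(1, 1000))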
I'm not allowed to comment yet, so I'll just add that you'll want to be careful when using range(), as it is 0-based. You'll need to use range(n+1) to get the desired effect.
Sorry for the duplication...
sum(range(10)) != 55
sum(range(11)) == 55
OP has asked, in a comment, for a link to the story about Gauss as a schoolchild.
He may want to check out this fascinating article by Brian Hayes. It not only rather convincingly suggests that the Gauss story may be a modern fabrication, but outlines how it would be rather difficult not to see the patterns involved in summing the numbers from 1 to 100. That in fact the only way to miss these patterns would be to solve the problem by writing a program.
The article also talks about different ways to sum arithmetic progressions, which is at the heart of OP's question. There is also an ad-free version here.
Larry is very correct with his formula, and it's the fastest way to calculate the sum of all integers up to n.
But for completeness, there are built-in Python functions that perform what you have done on lists with arbitrary elements, e.g.
sum()
>>> sum(range(11))
55
>>> sum([2,4,6])
12
or, more generally, reduce()
>>> import operator
>>> from functools import reduce   # needed on Python 3
>>> reduce(operator.add, range(11))
55
Consider that N+1, N-1+2, N-2+3, and so on all add up to the same number, and there are approximately N/2 instances like that (exactly N/2 if N is even).
What you have there is called an arithmetic sequence, and as suggested, you can compute it directly without the overhead that the recursion might introduce.
And I would say this is homework, despite what you say.
