I'm using the following code for finding primitive roots modulo n in Python:
Code:
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

def primRoots(modulo):
    roots = []
    required_set = set(num for num in range(1, modulo) if gcd(num, modulo) == 1)
    for g in range(1, modulo):
        actual_set = set(pow(g, powers) % modulo for powers in range(1, modulo))
        if required_set == actual_set:
            roots.append(g)
    return roots

if __name__ == "__main__":
    p = 17
    primitive_roots = primRoots(p)
    print(primitive_roots)
Output:
[3, 5, 6, 7, 10, 11, 12, 14]
Code fragment extracted from: Diffie-Hellman (Github)
Can the primRoots method be simplified or optimized in terms of memory usage and performance/efficiency?
One quick change you can make here (not the optimal approach yet) is to use set and list comprehensions:
def primRoots(modulo):
    coprime_set = {num for num in range(1, modulo) if gcd(num, modulo) == 1}
    return [g for g in range(1, modulo)
            if coprime_set == {pow(g, powers, modulo) for powers in range(1, modulo)}]
Now, one powerful and interesting algorithmic change you can make here is to optimize your gcd function using memoization. Or, even better, you can simply use the built-in gcd function from the math module in Python 3.5+ (or from the fractions module in earlier versions):
from functools import wraps

def cache_gcd(f):
    cache = {}
    @wraps(f)
    def wrapped(a, b):
        key = (a, b)
        try:
            result = cache[key]
        except KeyError:
            result = cache[key] = f(a, b)
        return result
    return wrapped

@cache_gcd
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a
# or just do the following (recommended)
# from math import gcd
Then:
def primRoots(modulo):
    coprime_set = {num for num in range(1, modulo) if gcd(num, modulo) == 1}
    return [g for g in range(1, modulo)
            if coprime_set == {pow(g, powers, modulo) for powers in range(1, modulo)}]
As mentioned in the comments, a more Pythonic and better-optimized approach is to use fractions.gcd (or, for Python 3.5+, math.gcd).
Based on Pete's comment and Kasramvd's answer, I can suggest this:
from math import gcd as bltin_gcd

def primRoots(modulo):
    required_set = {num for num in range(1, modulo) if bltin_gcd(num, modulo) == 1}
    return [g for g in range(1, modulo)
            if required_set == {pow(g, powers, modulo) for powers in range(1, modulo)}]
print(primRoots(17))
Output:
[3, 5, 6, 7, 10, 11, 12, 14]
Changes:
It now uses pow's third argument for the modulus.
Switched to the built-in gcd function defined in math (Python 3.5+) for a speed boost.
Additional info about built-in gcd is here: Co-primes checking
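As a quick illustration of the first change (the numbers below are arbitrary), three-argument pow reduces by the modulus at every step, so intermediate values stay small, while pow(g, k) % m builds a huge integer first and reduces it only at the end:

# Hypothetical micro-benchmark sketch; exact timings will vary by machine.
from timeit import timeit

g, k, m = 3, 10**5, 10**9 + 7
print(timeit(lambda: pow(g, k) % m, number=100))  # full-size integer, reduced once
print(timeit(lambda: pow(g, k, m), number=100))   # reduced at every step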
In the special case that p is prime, the following is a good bit faster:
import sys

# translated to Python from http://www.bluetulip.org/2014/programs/primitive.js
# (some rights may remain with the author of the above javascript code)

def isNotPrime(possible):
    # We only test this here to protect people who copy and paste
    # the code without reading the first sentence of the answer.
    # In an application where you know the numbers are prime you
    # will remove this function (and the call). If you need to
    # test for primality, look for a more efficient algorithm, see
    # for example Joseph F's answer on this page.
    i = 2
    while i*i <= possible:
        if (possible % i) == 0:
            return True
        i = i + 1
    return False

def primRoots(theNum):
    if isNotPrime(theNum):
        raise ValueError("Sorry, the number must be prime.")
    o = 1
    roots = []
    r = 2
    while r < theNum:
        k = pow(r, o, theNum)
        while k > 1:
            o = o + 1
            k = (k * r) % theNum
        if o == (theNum - 1):
            roots.append(r)
        o = 1
        r = r + 1
    return roots

print(primRoots(int(sys.argv[1])))
You can greatly improve your isNotPrime function by using a more efficient algorithm. You could double the speed by handling even numbers specially and then only testing odd divisors up to the square root, but this is still very inefficient compared to an algorithm such as the Miller-Rabin test. The version on the Rosetta Code site will always give the correct answer for any number with fewer than 25 digits or so. For large primes, it will run in a tiny fraction of the time it takes to use trial division.
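For reference, here is a minimal sketch of the even/odd trial-division improvement described above (still far slower than Miller-Rabin for large inputs; the function name is my own):

def is_not_prime(possible):
    # Handle small and even numbers separately, then test only odd divisors.
    if possible < 2:
        return True
    if possible % 2 == 0:
        return possible != 2
    i = 3
    while i * i <= possible:
        if possible % i == 0:
            return True
        i += 2
    return False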
Also, you should avoid computing integer square roots with the floating-point exponentiation operator ** (e.g. n ** 0.5) when you are dealing with integers as in this case (even though the Rosetta Code that I just linked to does the same thing!). Things might work fine in a particular case, but it can be a subtle source of error when Python has to convert from floating point back to integers, or when an integer is too large to be represented exactly in floating point. There are efficient integer square root algorithms that you can use instead. Here's a simple one, based on Newton's method:
def int_sqrt(n):
    if n == 0:
        return 0
    x = n
    y = (x + n//x)//2
    while y < x:
        x = y
        y = (x + n//x)//2
    return x
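A quick usage sketch (the test values are arbitrary); on Python 3.8+ the standard library's math.isqrt gives the same results and can serve as a cross-check:

import math

for n in (0, 1, 15, 16, 10**20 + 3):
    assert int_sqrt(n) == math.isqrt(n)  # Newton's method agrees with math.isqrt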
These solutions are all inefficient in several ways. First of all, you do not need to test every exponent: the order of a coprime remainder of n can only be a divisor of Euler's totient of n, so you only need to check exponents that divide the totient. When n is prime, the totient is n-1, so you only need to factorize n-1 and check those divisors, not all exponents. There is simple mathematics behind this.
Second, you need a better way to raise a number to a power when the exponent is large. Python has the built-in pow(g, powers, modulo), which reduces by the modulus at each step and keeps only the remainder ( _ % modulo ), so the intermediate values never grow large.
If you are going to implement the Diffie-Hellman algorithm, it is better to use safe primes: primes of the form n = 2p+1 where p is also prime (such an n is called a safe prime). If n = 2p+1, then the divisors of n-1 (n is prime, so Euler's totient of n is n-1) are just 1, 2, p and 2p. So you only need to check whether g^2 or g^p is congruent to 1 modulo n: if either is, g is not a primitive root, so you throw that g away and try the next one, g+1. If neither g^2 nor g^p equals 1 modulo n, then g is a primitive root; that check guarantees that all powers below 2p give values different from 1 modulo n.
The example code uses a Sophie Germain prime p and the corresponding safe prime 2p+1, and calculates the primitive roots of that safe prime.
You can easily rework the code for any prime, or any other number, by adding a function that computes Euler's totient and finds all divisors of that value (a sketch of such helpers follows the demo below). But this is only a demo, not complete code, and there might be better ways.
class SGPrime:
    '''
    This object expects a Sophie Germain prime p; it does not check that the input really is one.
    Euler's totient of any prime n is n-1, and the order (see method get_order) of any coprime
    remainder of n can only be a divisor of that totient value.
    '''
    def __init__(self, pSophieGermain):
        self.n = 2*pSophieGermain + 1
        #TODO! check if pSophieGermain is prime
        #TODO! check if n is also prime.
        #They both have to be primes, otherwise the code does not work!
        # Euler's totient of a prime n is n-1, #TODO for any n, calculate Euler's totient of n
        self.elrfunc = self.n - 1
        # All divisors of the totient value, #TODO for any n, find all divisors of the totient value.
        self.elrfunc_divisors = [1, 2, pSophieGermain, self.elrfunc]

    def get_order(self, r):
        '''
        Calculate the order of a number: the minimal power at which r is congruent to 1 modulo n.
        '''
        r = r % self.n
        for d in self.elrfunc_divisors:
            if pow(r, d, self.n) == 1:
                return d
        return 0  # no such order; not possible if n is prime -- see Fermat's little theorem

    def is_primitive_root(self, r):
        '''
        Check if r is a primitive root modulo n. Such a root always exists if n is prime.
        '''
        return self.get_order(r) == self.elrfunc

    def find_all_primitive_roots(self, max_num_of_roots=None):
        '''
        Find all primitive roots; only for demo purposes -- if n is large the list is large, and for DH
        or any other such algorithm it is better to stop at the first primitive root.
        '''
        primitive_roots = []
        for g in range(1, self.n):
            if self.is_primitive_root(g):
                primitive_roots.append(g)
            if (max_num_of_roots is not None) and (len(primitive_roots) >= max_num_of_roots):
                break
        return primitive_roots

# demo, Sophie Germain's prime
p = 20963
sggen = SGPrime(p)
print(f"Safe prime : {sggen.n}, and primitive roots of {sggen.n} are :")
print(sggen.find_all_primitive_roots())
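As a hedged sketch of the generalization mentioned above (the TODOs in __init__), the hard-coded divisor list could be replaced by helpers like these, which compute Euler's totient of an arbitrary n and all divisors of that value; the helper names are my own:

def euler_phi(n):
    # Euler's totient via trial-division factorization.
    result, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            k = 0
            while m % p == 0:
                m //= p
                k += 1
            result *= (p - 1) * p ** (k - 1)
        p += 1
    if m > 1:
        result *= m - 1
    return result

def divisors(n):
    # All divisors of n, found in pairs (d, n // d).
    divs, d = set(), 1
    while d * d <= n:
        if n % d == 0:
            divs.update((d, n // d))
        d += 1
    return sorted(divs)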
I need to find the smallest k such that n^k >= m. I can do it in O(k) time; can someone be so kind as to help me do better? I cannot use built-in functions.
def potnr(a, b):
    rez = 1
    while b > 0:
        if b % 2:
            rez = rez * a
        b = b // 2
        a = a * a
    return rez

def liczba(n, m):
    k = 1
    while potnr(n, k) < m:
        k += 1
    return k

print(liczba(2, 16))
"I can do it in only O(k) time can someone be that kind to help me"
n^k >= m if and only if k >= log m base n
Since log m base n = log m / log n, this is as simple as:
from math import log, ceil

def smallest_k(n, m):
    return ceil(log(m)/log(n))
This runs in O(1) time.
This one should work (I just fixed the value of k that is returned, since there was no guarantee the previous return gave the smallest such value):
import math

def min_power(n, m):
    b = 1
    while n**b < m:
        b *= 2
    a = b/2
    while b - a > 1:
        c = (a+b)/2
        if n**c < m:
            a = c
        else:
            b = c
    k = math.ceil(a)
    return k if (n**k >= m) else k+1

min_power(35, 10**250)
# Out[23]: 162
First determine any natural number k for which n ^ k >= m. Then refine your estimate to find the smallest such k.
It's easiest to find the initial estimate for k as a power of 2. Have a temporary value which holds n ^ k. Start from k = 1, repeatedly multiply k by 2, and square your temporary variable, until your k is sufficiently big.
Your real k will be greater than half the estimate you found. Numbers in that range have log2(k) bits. Check each bit, starting from the most significant one. For each such bit, calculate n ^ k for two values of k: with that bit equal to 0 and 1. Compare with m - this will tell you the value of that bit. Proceed to lower-significant bits, until you get to bit 0 (least significant bit).
I am not sure you are allowed to assume that calculating n ^ k has O(1) complexity. If not, you have to store intermediate results for all n ^ k calculations at first stage, or alternatively, use sqrt to calculate lesser powers of n.
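A hedged sketch of this idea in Python (assuming n >= 2; for simplicity the binary-search refinement below recomputes n ** mid instead of reusing stored intermediate powers, which is exactly the concern raised above):

def smallest_power(n, m):
    # Smallest k with n**k >= m, using only integer arithmetic.
    if m <= 1:
        return 0
    hi, p = 1, n
    while p < m:        # doubling phase: p is kept equal to n**hi by squaring
        hi *= 2
        p = p * p
    lo = hi // 2        # invariant from here on: n**lo < m <= n**hi
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if n ** mid < m:
            lo = mid
        else:
            hi = mid
    return hi

print(smallest_power(2, 16))        # 4
print(smallest_power(35, 10**250))  # 162, matching the answer above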
I'm trying to write a program to find sum of first N natural numbers i.e. 1 + 2 + 3 + .. + N modulo 1000000009
I know this can be done by using the formula N * (N+1) / 2 but I'm trying to find a sort of recursive function to calculate the sum.
I tried searching the web, but I didn't get any solution to this.
Actually, the problem here is that the number N can have up to 100000 digits.
So, here is what I've tried until now.
First I tried splitting the number into parts of length 9 each, then converting them into integers so that I can perform arithmetic operations on them with the integer operators.
For example, the number 52562372318723712 will be split into 52562372 & 318723712.
But I didn't find a way to manipulate these numbers.
Then again I tried to write a function as follows:
def find_sum(n):
    # n is a string
    if len(n) == 1:
        # use the formula if single digit
        return int(int(n[0]) * (int(n[0]) + 1) / 2)
    # I'm not sure what to return here
    # I'm expecting some manipulation with n[0]
    # and a recursive call to the function itself
    # I've also not used modulo here just for testing with smaller numbers
    # I'll add it once I find a solution to this
    return int(n[0]) * something + find_sum(n[1:])
I'm not able to find the something here.
Can this be solved like this?
or is there any other method to do so?
NOTE: I prefer a solution similar to the above function because I want to modify this function to meet my other requirements which I want to try myself before asking here. But if it is not possible, any other solution will also be helpful.
Please give me any hint to solve it.
Your best bet is to just use the N*(N+1)/2 formula -- but using it mod p. The only tricky part is to interpret division by 2 -- this has to be the inverse of 2 mod p. For p prime (or simply for p odd) this is very easy to compute: it is just (p+1)//2.
Thus:
def find_sum(n, p):
    two_inv = (p+1)//2  # inverse of 2, mod p
    return ((n%p) * ((n+1)%p) * two_inv) % p
For example:
>>> find_sum(10000000,1000000009)
4550000
>>> sum(range(1,10000001))%1000000009
4550000
Note that the above function will fail if you pass an even number for p.
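As a side note (an assumption about your Python version, not part of the original answer): on Python 3.8+ the built-in pow can compute the modular inverse directly with a negative exponent, which is another way to express the same idea; the function name below is my own:

def find_sum_pow(n, p):
    # pow(2, -1, p) raises ValueError when 2 has no inverse mod p (i.e. p even).
    two_inv = pow(2, -1, p)
    return ((n % p) * ((n + 1) % p) * two_inv) % p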
Edit: as @user11908059 observed, it is possible to dispense with multiplication by the modular inverse of 2. As an added benefit, this approach no longer depends on the modulus being odd:
def find_sum2(n, k):
    if n % 2 == 0:
        a, b = (n//2) % k, (n+1) % k
    else:
        a, b = n % k, ((n+1)//2) % k
    return (a*b) % k
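Since Python integers have arbitrary precision, the huge digit string from the question can simply be converted with int() before calling either function; a small usage sketch:

n_str = "52562372318723712"  # digit string, as in the question; could be 100000 digits long
print(find_sum2(int(n_str), 1000000009))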
How would I generate a non-prime random number in a range in Python?
I am confused as to how I can create an algorithm that would produce a non-prime number in a certain range. Do I define a function or create a conditional statement? I would like each non-prime in the range to have the same probability: for example, in 1 - 100, each non-prime would not have a 1% chance but instead a ~1.35% chance.
Now, you didn't say anything about efficiency, and this could surely be optimized, but it should solve the problem. The following uses a reasonably efficient algorithm for testing primality:
import math   # needed for math.sqrt in isPrime below
import random

def isPrime(n):
    if n % 2 == 0 and n > 2:
        return False
    return all(n % i for i in range(3, int(math.sqrt(n)) + 1, 2))

def randomNonPrime(rangeMin, rangeMax):
    nonPrimes = filter(lambda n: not isPrime(n), xrange(rangeMin, rangeMax+1))
    if not nonPrimes:
        return None
    return random.choice(nonPrimes)

minMax = (1000, 10000)
print randomNonPrime(*minMax)
After returning a list of all non-primes in range, a random value is selected from the list of non-primes, making the selection of any non-prime in range just as likely as any other non-prime in the range.
Edit
Although you didn't ask about efficiency, I was bored, so I figured out a method of doing this that makes a range of (1000, 10000000) take a little over 6 seconds on my machine instead of over a minute and a half:
import numpy
import sympy

def randomNonPrime(rangeMin, rangeMax):
    primesInRange = numpy.fromiter(
        sympy.sieve.primerange(rangeMin, rangeMax),
        dtype=numpy.uint32,
        count=-1
    )
    numbersInRange = numpy.arange(rangeMin, rangeMax+1, dtype=numpy.uint32)
    nonPrimes = numbersInRange[numpy.invert(numpy.in1d(numbersInRange, primesInRange))]
    if not nonPrimes.size:
        return None
    return numpy.random.choice(nonPrimes)

minMax = (1000, 10000000)
print randomNonPrime(*minMax)
This uses the SymPy symbolic mathematics library to generate the prime numbers in the range efficiently, and then uses NumPy to filter out the primes and select a random non-prime.
Which algorithm and approach to choose depends heavily on your exact use case, as mentioned by @smarx.
Assumptions:
Each non-prime within the range has the same probability of being chosen (uniformity)
It is sufficient that the sampled number is not a prime with a very high probability (algorithmic false positives are less likely than CPU-bugs & co.)
The sampling-range could be big (sieve-like approaches are slow)
High performance of a single sample is desired (no caching; no sampling without replacement)
Method:
Sample random-number in range
Check if this number is prime with a very fast probabilistic primality test
Stop when observing first non-prime number
If no number is found, stop algorithm after max_trials
max_trials-value is set by an approximation to the Coupon-Collectors-Problem (wiki): expected number of samples to observe each candidate once
Characteristics of method
Fast for single samples (10000 samples per second on single CPU; given range as in example)
Easy to prove uniformity
Good asymptotic behaviour regarding range-size and range-position (number sizes)
Code
import random
import math

""" Miller-Rabin primality test
    source: https://jeremykun.com/2013/06/16/miller-rabin-primality-test/
"""

def decompose(n):
    exponentOfTwo = 0
    while n % 2 == 0:
        n = n//2  # modified for python 3!
        exponentOfTwo += 1
    return exponentOfTwo, n

def isWitness(possibleWitness, p, exponent, remainder):
    possibleWitness = pow(possibleWitness, remainder, p)
    if possibleWitness == 1 or possibleWitness == p - 1:
        return False
    for _ in range(exponent):
        possibleWitness = pow(possibleWitness, 2, p)
        if possibleWitness == p - 1:
            return False
    return True

def probablyPrime(p, accuracy=100):
    if p == 2 or p == 3: return True
    if p < 2: return False
    exponent, remainder = decompose(p - 1)
    for _ in range(accuracy):
        possibleWitness = random.randint(2, p - 2)
        if isWitness(possibleWitness, p, exponent, remainder):
            return False
    return True

""" Coupon-Collector Problem (approximation)
    How many random samplings with replacement are expected to observe each element at least once
"""

def couponcollector(n):
    return int(n*math.log(n))

""" Non-prime random sampling
"""

def get_random_nonprime(min, max):
    max_trials = couponcollector(max-min)
    for i in range(max_trials):
        candidate = random.randint(min, max)
        if not probablyPrime(candidate):
            return candidate
    return -1

# TEST
print(get_random_nonprime(1000, 10000000))
I would like to decompose a number into a tuple of numbers as close to each other in size as possible, whose product is the initial number. The inputs are the number n we want to factor and the number m of factors desired.
For the two factor situation (m==2), it is enough to look for the largest factor less than a square root, so I can do something like this
def get_factors(n):
    i = int(n**0.5 + 0.5)
    while n % i != 0:
        i -= 1
    return i, n/i
So calling this with 120 will result in 10,12.
I realize there is some ambiguity as to what it means for the numbers to be "close to each other in size". I don't mind if this is interpreted as minimizing Σ|x_i - x_avg| or Σ(x_i - x_avg)^2 or something else generally along those lines.
For the m==3 case, I would expect 336 to produce 6,7,8 and 729 to produce 9,9,9.
Ideally, I would like a solution for general m, but if someone has an idea even for m==3 it would be much appreciated. I welcome general heuristics too.
EDIT: I would prefer to minimize the sum of the factors. Still interested in the above, but if someone has an idea for a way of also figuring out the optimal m value such that the sum of factors is minimal, it'd be great!
To answer your second question (which m minimizes the sum of factors): it is always optimal to split the number into its prime factors. Indeed, for any composite number except 4, the sum of its prime factors is less than the number itself, so any split that contains composite numbers can be improved by splitting those composites into their prime factors.
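A minimal sketch of that observation (the helper name is my own): splitting the number into its full prime factorization, here by trial division, yields the smallest possible sum of factors:

def prime_factors(n):
    # Prime factorization with multiplicity, by trial division.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(4104))  # [2, 2, 2, 3, 3, 3, 19], sum 34 (vs. 37 for [6, 6, 6, 19])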
To answer your first question, the greedy approaches suggested by others will not work; as I pointed out in the comments, 4104 breaks them: greedy immediately extracts 8 as the first factor and is then forced to split the remaining number into [3, 9, 19], failing to find the better solution [6, 6, 6, 19]. However, a simple DP can find the best solution. The state of the DP is the number we are trying to factor and how many factors we still want; the value of the DP is the best sum possible. Something along the lines of the code below. It can be optimized by doing the factorization more cleverly.
n = int(raw_input())
left = int(raw_input())
memo = {}

def dp(n, left):  # returns tuple (cost, [factors])
    if (n, left) in memo: return memo[(n, left)]
    if left == 1:
        return (n, [n])
    i = 2
    best = n
    bestTuple = [n]
    while i * i <= n:
        if n % i == 0:
            rem = dp(n / i, left - 1)
            if rem[0] + i < best:
                best = rem[0] + i
                bestTuple = [i] + rem[1]
        i += 1
    memo[(n, left)] = (best, bestTuple)
    return memo[(n, left)]

print dp(n, left)[1]
For example
[In] 4104
[In] 4
[Out] [6, 6, 6, 19]
You can start with the same principle: look for factors that are less than or equal to the m-th root. Then you can recurse to find the remaining factors.
def get_factors(n, m):
    factors = []
    factor = int(n**(1.0/m) + .1)  # fudged to deal with precision problem with float roots
    while n % factor != 0:
        factor = factor - 1
    factors.append(factor)
    if m > 1:
        factors = factors + get_factors(n / factor, m - 1)
    return factors

print get_factors(729, 3)
How about this, for m=3 and some n:
Get the largest factor of n smaller than the cube root of n, call it f1
Divide n by f1, call it g
Find the "roughly equal factors" of g as in the m=2 example.
For 336, the largest factor smaller than the cube root of 336 is 6 (I think). Dividing 336 by 6 gives 56 (another factor, go figure!) Performing the same math for 56 and looking for two factors, we get 7 and 8.
Note that this doesn't work for any number with fewer than 3 factors. This method can be expanded for m > 3, maybe.
If this is right, and I'm not too crazy, the solution would be a recursive function:
factors = []
n = 336
m = 3

def getFactors(howMany, value):
    if howMany < 2:
        return [value]
    root = getRoot(howMany, value)          # get the root of value, e.g. square root, cube root, etc.
    factor = getLargestFactor(value, root)  # get the largest factor of value smaller than root
    otherFactors = getFactors(howMany - 1, value / factor)
    otherFactors.insert(0, factor)
    return otherFactors

print getFactors(m, n)
I'm too lazy to code the rest, but that should do it.
For m=5 and n=4, m^(1/n) gives approximately 1.495, and 1.495 * 1.495 * 1.495 * 1.495 ≈ 5.
In C#:
double Result = Math.Pow(m, 1/(double)n);
I'm generating prime numbers from Fibonacci as follows (using Python, with mpmath and sympy for arbitrary precision):
from mpmath import *

def GCD(a, b):
    while a:
        a, b = fmod(b, a), a
    return b

def generate(x):
    mp.dps = round(x, int(log10(x))*-1)
    if x == GCD(x, fibonacci(x-1)):
        return True
    if x == GCD(x, fibonacci(x+1)):
        return True
    return False

for x in range(1000, 2000):
    if generate(x):
        print(x)
It's a rather small algorithm but it seemingly generates all primes (except for 5 somehow, but that's another question). I say seemingly because a very small percentage (0.5% under 1000 and 0.16% under 10K, getting smaller and smaller) isn't prime. For instance under 1000: 323, 377 and 442 are also generated, and these numbers are not prime.
Is there something off in my script? I try to account for precision by relating the .dps setting to the number being calculated. Can it really be that Fibonacci and prime numbers are seemingly so related, but when you get into the details they aren't? :)
For this type of problem, you may want to look at the gmpy2 library. gmpy2 provides access to the GMP multiple-precision library, which includes gcd() and fib() functions that calculate the greatest common divisor and the n-th Fibonacci number quickly, using only integer arithmetic.
Here is your program re-written to use gmpy2.
import gmpy2

def generate(x):
    if x == gmpy2.gcd(x, gmpy2.fib(x-1)):
        return True
    if x == gmpy2.gcd(x, gmpy2.fib(x+1)):
        return True
    return False

for x in range(7, 2000):
    if generate(x):
        print(x)
You shouldn't be using any floating-point operations. You can calculate the GCD just using the builtin % (modulo) operator.
Update
As others have commented, you are checking for Fibonacci pseudoprimes. The actual test is slightly different than your code. Let's call the number being tested n. If n is divisible by 5, then the test passes if n evenly divides fib(n). If n divided by 5 leaves a remainder of either 1 or 4, then the test passes if n evenly divides fib(n-1). If n divided by 5 leaves a remainder of either 2 or 3, then the test passes if n evenly divides fib(n+1). Your code doesn't properly distinguish between the three cases.
If n evenly divides another number, say x, it leaves a remainder of 0. This is equivalent to x % n being 0. Calculating all the digits of the n-th Fibonacci number is not required. The test just cares about the remainder. Instead of calculating the Fibonacci number to full precision, you can calculate the remainder at each step. The following code calculates just the remainder of the Fibonacci numbers. It is based on the code given by @pts in Python mpmath not arbitrary precision?
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def fib_mod(n, m):
    if n < 0:
        raise ValueError

    def fib_rec(n):
        if n == 0:
            return 0, 1
        else:
            a, b = fib_rec(n >> 1)
            c = a * ((b << 1) - a)
            d = b * b + a * a
            if n & 1:
                return d % m, (c + d) % m
            else:
                return c % m, d % m

    return fib_rec(n)[0]

def is_fib_prp(n):
    if n % 5 == 0:
        return not fib_mod(n, n)
    elif n % 5 == 1 or n % 5 == 4:
        return not fib_mod(n-1, n)
    else:
        return not fib_mod(n+1, n)
It's written in pure Python and is very quick.
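A quick usage sketch of is_fib_prp, reproducing the question's observation that some composites slip through (323 = 17 * 19 is the smallest Fibonacci pseudoprime):

print(is_fib_prp(17))   # True  (prime)
print(is_fib_prp(323))  # True  (composite Fibonacci pseudoprime)
print(is_fib_prp(100))  # False (composite, correctly rejected)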
The sequence of numbers commonly known as the Fibonacci numbers is just a special case of a general Lucas sequence L(n) = p*L(n-1) - q*L(n-2). The usual Fibonacci numbers are generated by (p,q) = (1,-1). gmpy2.is_fibonacci_prp() accepts arbitrary values for p,q. gmpy2.is_fibonacci_prp(1,-1,n) should match the results of the is_fib_prp(n) given above.
Disclaimer: I maintain gmpy2.
This isn't really a Python problem; it's a math/algorithm problem. You may want to ask it on the Math StackExchange instead.
Also, there is no need for any non-integer arithmetic whatsoever: you're computing floor(log10(x)), which can be done easily with purely integer math. Using arbitrary-precision floating-point math will greatly slow this algorithm down and may introduce some odd numerical errors too.
Here's a simple floor_log10(x) implementation:
from __future__ import division  # only needed if using Python 2.x

def floor_log10(x):
    res = 0
    if x < 1:
        raise ValueError
    while x >= 10:  # each integer division by 10 adds one to floor(log10(x))
        x //= 10
        res += 1
    return res
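A quick sanity check of the function above (the values are chosen arbitrarily):

print(floor_log10(1))     # 0
print(floor_log10(999))   # 2
print(floor_log10(1000))  # 3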