Lowest base in which a number is all 1s in its digits - python

I need to determine the lowest number base such that the input n (given in base 10), when expressed in that base, has all 1s in its digits.
Examples:
7 in base 2 is 111 - fits! answer is 2
21 in base 2 is 10101 - contains 0, does not fit
21 in base 3 is 210 - contains 0 and 2, does not fit
21 in base 4 is 111 - contains only 1s, it fits! answer is 4
n is always less than Number.MAX_SAFE_INTEGER or equivalent.
I have the following code, which works well with a certain range of numbers, but for huge numbers the algorithm is still time-consuming:
def check_digits(number, base):
    res = 1
    while res == 1 and number:
        res *= number % base
        number //= base
    return res
def get_min_base(number):
    for i in range(2, int(number ** 0.5) + 2):
        if check_digits(number, i) == 1:
            return i
    return number - 1
How can I optimize the current code to make it run faster?

The number represented by a string of x 1s in base b is b^(x-1) + b^(x-2) + ... + b^2 + b + 1.
Note that for x >= 3, this number is greater than b^(x-1) (trivially) and less than (b+1)^(x-1) (apply the binomial theorem). Thus, if a number n is represented by x 1s in base b, we have b^(x-1) < n < (b+1)^(x-1). Applying (x-1)'th roots, we have b < n^(1/(x-1)) < b+1. Thus, for such a b to exist, it must be floor(n^(1/(x-1))).
I've written things with ^ notation instead of Python-style ** syntax so far because those equations and inequalities only hold for exact real number arithmetic, not for floating point. If you try to compute b with floating point math, rounding error may throw off your calculations, especially for extremely large inputs where the ULP is greater than 1. (I think floating point is fine for the input range you're working with, but I'm not sure.)
Still, regardless of whether floating point is good enough or if you need something fancier, the idea of an algorithm is there: you can directly check if a value of x is viable by directly computing what the corresponding b would have to be, and checking if x 1s in base b really represent n.
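To make that concrete, here is a minimal sketch of the digit-count-first idea. The helper names (integer_root, repunit_value, get_min_base_fast) are mine, not from the original post, and the root is computed with pure integer arithmetic to sidestep the floating-point concerns above:

def integer_root(n, k):
    """Return floor(n ** (1/k)) using only integer arithmetic."""
    if k == 1:
        return n
    hi = 1
    while hi ** k <= n:
        hi *= 2
    lo = hi // 2
    while lo < hi:  # binary search for the largest value with mid**k <= n
        mid = (lo + hi + 1) // 2
        if mid ** k <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def repunit_value(b, x):
    """Value of x ones in base b: b**(x-1) + ... + b + 1."""
    total = 0
    for _ in range(x):
        total = total * b + 1
    return total

def get_min_base_fast(n):
    # More 1s means a smaller base, so try the largest digit count first.
    # Base 2 gives the longest possible repunit, so x <= n.bit_length().
    for x in range(n.bit_length(), 2, -1):
        b = integer_root(n, x - 1)
        if b >= 2 and repunit_value(b, x) == n:
            return b
    return n - 1  # two 1s: n is always written "11" in base n - 1

This checks O(log n) candidate digit counts instead of O(sqrt(n)) candidate bases.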

Just a small twist: slightly faster, but it doesn't improve the time complexity.
def check_digits2(number, base):
    while number % base == 1:
        if number == 1:
            return True
        number //= base
    return False
def get_min_base2(number):
    for i in range(2, int(number**0.5) + 2):
        if check_digits2(number, i):
            return i
    return number - 1
import time

def test():
    number = 100000010000001
    start = time.time()
    print(get_min_base(number))  # 10000000
    print(f"{time.time() - start:.3f}s\n")  # 3.292s
    start = time.time()
    print(get_min_base2(number))  # 10000000
    print(f"{time.time() - start:.3f}s\n")  # 1.731s
I also tried an approach with a math trick, but I actually made it worse lol
import math

def calculate_n(number, base):
    return math.log(number * (base - 1) + 1, base).is_integer()

def get_min_base3(number):
    for i in range(2, int(number**0.5) + 2):
        if calculate_n(number, i):
            return i
    return number - 1
def test():
    number = 100000010000001
    start = time.time()
    print(get_min_base3(number))  # 10000000
    print(f"{time.time() - start:.3f}s\n")  # 4.597s

Algorithm which finds the last digit of a power tower

I've been trying to implement an algorithm that raises each element of a list to the power of everything after it, right-associatively, and then finds the last digit of the resulting number. Here is the formula of this algorithm:
x0 ** (x1 ** (x2 ** (x3 ** (... ** xn))))
Then I find the last digit like this:
return find_last_digit % 10
If the list is empty, the program must return 1.
Here is my solution to this problem:
def last_digit(lst):
    if len(lst) > 0:
        temp = lst[-1]
        for i in range(len(lst) - 2, -1, -1):
            temp = lst[i] ** temp
        return temp % 10
    else:
        return 1
But as you can see, this code takes a lot of time to run if any value of the input list is large. Could you tell me how I can make this code more efficient? Thx a lot
Here are some observations that can make the calculations more efficient:
As we need only the last digit and we are essentially doing multiplications, we can use the rules of modular arithmetic: if a*b = c, then (a mod m)*(b mod m) ≡ c (mod m). So a first idea could be to take m as 10 and perform the multiplications. But we don't want to split up exponentiation into individual multiplications, so see the next point:
For all unsigned integers b it holds that b^2 ≡ b^6 (mod 20). You can verify this by checking all values of b in the range {0, ..., 19}. By consequence, b^n ≡ b^(n+4) (mod 20) for n > 1. We choose 20 as the modulus because it is both a multiple of 10 and of 4: a multiple of 10 because we need to maintain the last digit through the process, and of 4 because we reduce the exponent by a multiple of 4. Both conditions are necessary at the same time, so as not to lose the final digit. In the end we have (a mod 20) mod 10 = a mod 10.
With these simplification rules, you can keep the involved exponents limited to at most 5, the base to at most 21, and the resulting power to at most 21^5 = 4084101.
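As a quick sanity check of the b^2 ≡ b^6 (mod 20) claim (my own one-liner, not part of the original answer):

print(all(b**2 % 20 == b**6 % 20 for b in range(20)))  # True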
The code could become:
def last_digit(lst):
    power = 1
    for base in reversed(lst):
        power = (base if base < 2 else (base - 2) % 20 + 2) ** (
            power if power < 2 else (power - 2) % 4 + 2)
    return power % 10
In practice you can skip the reduction of base to (base - 2) % 20 + 2 if these input numbers are not very large.
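For completeness, a couple of hypothetical calls (the values are mine, not from the original post):

print(last_digit([]))         # 1 (empty-list case)
print(last_digit([2, 3, 4]))  # 2, since 2 ** (3 ** 4) = 2 ** 81 ends in 2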

Taylor series of cos x expansion in python

I want to calculate the summation of the cos x series (while keeping x in radians). This is the code I created:
import math
def cosine(x, n):
    sum = 0
    for i in range(0, n+1):
        sum += ((-1) ** i) * (x**(2*i)/math.factorial(2*i))
    return sum
and I checked it using math.cos().
It works just fine when I tried out small numbers:
print("Result: ", cosine(25, 1000))
print(math.cos(25))
the output:
Result:  0.991203540954667
0.9912028118634736
The numbers are still similar. But when I try a bigger number, e.g. 40, it returns a completely different value:
Result:  1.2101433786727471
-0.6669380616522619
Anyone got any idea why this happens?
The error term for a Taylor expansion increases the further you are from the point expanded about (in this case, x_0 = 0). To reduce the error, exploit the periodicity and symmetry by only evaluating within the interval [0, 2 * pi]:
import math
from math import pi

def cosine(x, n):
    x = x % (2 * pi)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x**(2*i) / math.factorial(2*i))
    return total
This can be further improved to [0, pi/2]:
def cosine(x, n):
    x = x % (2 * pi)
    if x > pi:
        x = abs(x - 2 * pi)
    if x > pi / 2:
        return -cosine(pi - x, n)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x**(2*i) / math.factorial(2*i))
    return total
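A quick sanity check of the range-reduced version (my own test, not from the answer):

import math

# With the argument reduced into [0, pi/2], n = 30 terms is already plenty.
print(cosine(40, 30))  # approximately -0.6669380616522619
print(math.cos(40))    # -0.6669380616522619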
Contrary to the answer you got, this Taylor series converges regardless of how large the argument is. The factorial in the terms' denominators eventually drives the terms to 0.
But before the factorial portion dominates, terms can get larger and larger in absolute value. Native floating point doesn't have enough bits of precision to keep enough information for the low-order bits to survive.
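To see the scale of the problem (my own check, not from the answer): for x = 40 the largest term is around 1.5e16 while the true result is below 1 in magnitude, so cancellation consumes essentially all of a double's ~16 significant decimal digits:

import math

x = 40
# Largest term x**(2*i) / (2*i)! over the first 60 terms: about 1.5e16.
print(max(x**(2*i) / math.factorial(2*i) for i in range(60)))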
Here's a way that doesn't lose any bits of precision. It's not practical because it's slow. Trust me when I tell you that it typically takes years of experience to learn how to write practical, fast, high-quality math libraries.
def mycos(x, nbits=100):
    from fractions import Fraction
    x2 = -Fraction(x) ** 2
    i = 0
    ntries = 0
    total = term = Fraction(1)
    while True:
        ntries += 1
        term = term * x2 / ((i+1) * (i+2))
        i += 2
        total += term
        if (total // term).bit_length() > nbits:
            break
    print("converged to >=", nbits, "bits in", ntries, "steps")
    return total
and then your examples:
>>> mycos(25)
converged to >= 100 bits in 60 steps
Fraction(177990265631575526901628601372315751766446600474817729598222950654891626294219622069090604398951917221057277891721367319419730580721270980180746700236766890453804854224688235663001, 179569976498504495450560473003158183053487302118823494306831203428122565348395374375382001784940465248260677204774780370309486592538808596156541689164857386103160689754560975077376)
>>> float(_)
0.9912028118634736
>>> mycos(40)
converged to >= 100 bits in 82 steps
Fraction(-41233919211296161511135381283308676089648279169136751860454289528820133116589076773613997242520904406094665861260732939711116309156993591792484104028113938044669594105655847220120785239949370429999292710446188633097549, 61825710035417531603549955214086485841025011572115538227516711699374454340823156388422475359453342009385198763106309156353690402915353642606997057282914587362557451641312842461463803518046090463931513882368034080863251)
>>> float(_)
-0.6669380616522619
Things to note:
The full-precision results require lots of bits.
Rounded back to float, they identically match what you got from math.cos().
It doesn't require anywhere near 1000 steps to converge.

Implement inclusion-exclusion efficiently

I am trying to implement inclusion-exclusion efficiently for the following values:
2/5, 2/5, 2/10, 2/10, 2/15, 2/15, ...
Of course, the program cannot run infinitely long, so I'd like to put a configurable limit on the number of terms.
For example, let's say I limit my calculation to:
2/5, 2/5, 2/10, 2/10
Then the calculation would be:
+ 2/5 + 2/5 + 2/10 + 2/10 # choose 1 out of 4 terms
- 4/25 - 4/50 - 4/50 - 4/50 - 4/50 - 4/100 # choose 2 out of 4 terms
+ 8/250 + 8/250 + 8/500 + 8/500 # choose 3 out of 4 terms
- 16/2500 # choose 4 out of 4 terms
I find doing "choose x out of y" in an iterative manner to be a bit complicated.
Here is my current implementation:
from decimal import Decimal
from decimal import getcontext

getcontext().prec = 100

def getBits(num):
    bit = 0
    bits = []
    while num > 0:
        if num & 1:
            bits.append(bit)
        bit += 1
        num >>= 1
    return bits

def prod(arr):
    res = Decimal(1)
    for val in arr:
        res *= val
    return res

SIZE = 20
sums = [Decimal(0) for k in range(SIZE)]
temp = [Decimal(2)/(5*k) for k in range(1, SIZE)]
probabilities = [p for pair in zip(temp, temp) for p in pair][:SIZE]
for n in range(1, 1 << SIZE):
    bits = getBits(n)
    sums[len(bits)-1] += prod([probabilities[bit] for bit in bits])
total = 0
for k in range(SIZE):
    total += sums[k]*(-1)**k
print(total)
As you can see, I do the "choose x out of y terms" by generating every possible bit-combination between 1 and 2 ** SIZE, and then choosing the terms in the probabilities array which are located in the positions which are marked 1 in the current bit-combination.
I am essentially looking for a more efficient way to do this, as the number of iterations grows exponentially by the value of SIZE.
I am aware that it might not be feasible, because the total number of terms in the inclusion-exclusion expression is technically exponential (the sum of a row of Pascal's triangle).
However, since the values in probabilities are all known and "nice" (i.e., a power of 2 divided by a product of multiples of 5), I figured that there is possibly a "shortcut" here which I have overlooked.
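One possible shortcut, offered as a sketch rather than a definitive answer (the identity is standard algebra, not something stated in the post): when every intersection term is just the product of the chosen probabilities, the whole alternating sum collapses to 1 - prod(1 - p_i), which takes one pass over the list instead of 2 ** SIZE iterations:

from decimal import Decimal, getcontext

getcontext().prec = 100

SIZE = 20
temp = [Decimal(2)/(5*k) for k in range(1, SIZE)]
probabilities = [p for pair in zip(temp, temp) for p in pair][:SIZE]

# Expanding prod(1 - p_i) produces exactly the signed subset products above,
# so 1 - prod(1 - p_i) equals the full inclusion-exclusion total.
complement = Decimal(1)
for p in probabilities:
    complement *= Decimal(1) - p
print(Decimal(1) - complement)  # should match the 2**SIZE enumeration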

Python Partial Harmonics

Could someone help check why the result is always one and let me know what I did wrong? Thanks
Correct result should be: 1/1 + 1/2 + 1/3 == 1.83333333333.
x = int(input("Enter n: "))
assert x > 0, "n must be greater than zero!"

def one_over_n(x):
    result = 0
    for n in range(x):
        n += 1
        result += 1 / n
    return result

r = one_over_n(x)
print("one_over_n( {0:d} ): {1:f}".format(x, r))
It will work correctly in Python 3, but not in Python 2:
>>> 1/2
0
That means you are just adding zeroes to one. You will need to change either the numerator or the denominator to a float, e.g. 1/2.0, so change your code to
result += 1.0 / n
See PEP 238 for why this was changed in Python 3.
btw floating point numbers can't represent all fractions, so if you are just adding fractions, you can use the Fraction class, e.g.
>>> from fractions import Fraction as F
>>> F(1,1) + F(1,2) + F(1,3)
Fraction(11, 6)
As an alternative, to force Python 2 to perform division as you expect (rather than integer division), add:
from __future__ import division
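A quick demonstration of the effect, assuming a Python 2 interpreter:

from __future__ import division

print(1 / 2)   # 0.5 -- true division, as in Python 3
print(1 // 2)  # 0   -- floor division is still available via //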

Prime number generation using Fibonacci possible?

I'm generating prime numbers from Fibonacci as follows (using Python, with mpmath and sympy for arbitrary precision):
from mpmath import *

def GCD(a, b):
    while a:
        a, b = fmod(b, a), a
    return b

def generate(x):
    mp.dps = round(x, int(log10(x))*-1)
    if x == GCD(x, fibonacci(x-1)):
        return True
    if x == GCD(x, fibonacci(x+1)):
        return True
    return False
for x in range(1000, 2000):
    if generate(x):
        print(x)
It's a rather small algorithm but it seemingly generates all primes (except for 5 somehow, but that's another question). I say seemingly because a very small percentage (0.5% under 1000 and 0.16% under 10K, and decreasing) isn't prime. For instance, under 1000 it also generates 323, 377 and 442, which are not prime.
Is there something off in my script? I try to account for precision by relating the .dps setting to the number being calculated. Can it really be that Fibonacci and prime numbers seem so related, but the relationship breaks down once you look at the details? :)
For this type of problem, you may want to look at the gmpy2 library. gmpy2 provides access to the GMP multiple-precision library, which includes gcd() and fib() functions that calculate the greatest common divisor and the n-th Fibonacci number quickly, using only integer arithmetic.
Here is your program re-written to use gmpy2.
import gmpy2

def generate(x):
    if x == gmpy2.gcd(x, gmpy2.fib(x-1)):
        return True
    if x == gmpy2.gcd(x, gmpy2.fib(x+1)):
        return True
    return False

for x in range(7, 2000):
    if generate(x):
        print(x)
You shouldn't be using any floating-point operations. You can calculate the GCD just using the builtin % (modulo) operator.
Update
As others have commented, you are checking for Fibonacci pseudoprimes. The actual test is slightly different than your code. Let's call the number being tested n. If n is divisible by 5, then the test passes if n evenly divides fib(n). If n divided by 5 leaves a remainder of either 1 or 4, then the test passes if n evenly divides fib(n-1). If n divided by 5 leaves a remainder of either 2 or 3, then the test passes if n evenly divides fib(n+1). Your code doesn't properly distinguish between the three cases.
If n evenly divides another number, say x, it leaves a remainder of 0. This is equivalent to x % n being 0. Calculating all the digits of the n-th Fibonacci number is not required. The test just cares about the remainder. Instead of calculating the Fibonacci number to full precision, you can calculate the remainder at each step. The following code calculates just the remainder of the Fibonacci numbers. It is based on the code given by #pts in Python mpmath not arbitrary precision?
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def fib_mod(n, m):
    if n < 0:
        raise ValueError

    def fib_rec(n):
        # Fast doubling: given (F(k), F(k+1)), compute (F(2k), F(2k+1)).
        if n == 0:
            return 0, 1
        else:
            a, b = fib_rec(n >> 1)
            c = a * ((b << 1) - a)
            d = b * b + a * a
            if n & 1:
                return d % m, (c + d) % m
            else:
                return c % m, d % m

    return fib_rec(n)[0]

def is_fib_prp(n):
    if n % 5 == 0:
        return not fib_mod(n, n)
    elif n % 5 == 1 or n % 5 == 4:
        return not fib_mod(n-1, n)
    else:
        return not fib_mod(n+1, n)
It's written in pure Python and is very quick.
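As a usage sketch (mine, not part of the answer), the composites that slip through are Fibonacci pseudoprimes such as the 323 and 377 the question stumbled on:

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# Composites below 1000 that pass the Fibonacci probable-prime test.
print([n for n in range(2, 1000) if is_fib_prp(n) and not is_prime(n)])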
The sequence of numbers commonly known as the Fibonacci numbers is just a special case of a general Lucas sequence L(n) = p*L(n-1) - q*L(n-2). The usual Fibonacci numbers are generated by (p,q) = (1,-1). gmpy2.is_fibonacci_prp() accepts arbitrary values for p and q. gmpy2.is_fibonacci_prp(n, 1, -1) should match the results of the is_fib_prp(n) given above.
Disclaimer: I maintain gmpy2.
This isn't really a Python problem; it's a math/algorithm problem. You may want to ask it on the Math StackExchange instead.
Also, there is no need for any non-integer arithmetic whatsoever: you're computing floor(log10(x)) which can be done easily with purely integer math. Using arbitrary-precision math will greatly slow this algorithm down and may introduce some odd numerical errors too.
Here's a simple floor_log10(x) implementation:
from __future__ import division  # if using Python 2.x

def floor_log10(x):
    if x < 1:
        raise ValueError
    res = 0
    while x >= 10:  # stop once x is a single digit, so floor_log10(5) == 0
        x //= 10
        res += 1
    return res
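A few quick checks (my own, not from the answer):

print(floor_log10(1))     # 0
print(floor_log10(9))     # 0
print(floor_log10(1000))  # 3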
