The pow() function in Python 3 raises a base to an exponent:
>>> pow(2, 3)
8
Python 3 also supports negative exponents, which can be written e.g. as pow(10, -1). But when I calculated pow(4, -1, 5), it gave the output 4.
>>> pow(4, -1, 5)
4
I couldn't understand how the value 4 was calculated, because I assumed that in the background it computes (4 ** -1) % 5, and that did not give 4 as the remainder when I calculated it manually.
When a negative exponent is passed with only two arguments, pow() responds with the result I expect from manual calculation.
>>> pow(4, -1)
0.25
What is the difference when calculating a negative exponent with a modulus?
From the documentation:
If mod is present and exp is negative, base must be relatively prime to mod. In that case, pow(inv_base, -exp, mod) is returned, where inv_base is an inverse to base modulo mod.
Starting in Python 3.8, the pow function allows you to calculate a modular inverse. As other answers have mentioned, this happens when you use integers, exp is negative, and base is relatively prime to mod (which is the case in your example).
What is a modular inverse?
Let's start with ordinary inverses. A number Y has an inverse X such that Y * X == 1. Modular inverses are very similar: for a number Y and a modulus mod, there exists an inverse X such that ((X * Y) % mod) == 1. In your example, (4 * 4) % 5 does in fact equal 1, making 4 a valid modular inverse for Y = 4 and mod = 5.
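You can check the identity directly in the interpreter (three-argument pow with a negative exponent needs Python 3.8+):
>>> inv = pow(4, -1, 5)
>>> inv
4
>>> (4 * inv) % 5
1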
What if you just want pow(4, -1, 5) to return 0.25?
Well, you could write it as separate steps, (4 ** -1) % 5, but as the documentation says:
if mod is present, return base to the power exp, modulo mod (computed more efficiently than pow(base, exp) % mod)
So you would sacrifice performance by using (4 ** -1) % 5. Unfortunately, it does not seem possible to get that behaviour out of pow itself.
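For reference, the two-step float version looks like this:
>>> (4 ** -1) % 5
0.25
Here 4 ** -1 is plain float exponentiation and % is plain float modulo, so no modular arithmetic is involved.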
Related
I've been writing some code to list the Gaussian integer divisors of rational integers in Python. (Relating to Project Euler problem 153)
I seem to have reached some trouble with certain numbers and I believe it's to do with Python approximating the division of complex numbers.
Here is my code for the function:
def IsGaussian(z):
    # returns True if the complex number is a Gaussian integer
    return complex(int(z.real), int(z.imag)) == z
def Divisors(n):
    divisors = []
    # Firstly, append the rational integer divisors
    for x in range(1, int(n / 2 + 1)):
        if n % x == 0:
            divisors.append(x)
    # Secondly, two for loops are used to append the complex Gaussian integer divisors
    for x in range(1, int(n / 2 + 1)):
        for y in range(1, int(n / 2 + 1)):
            if IsGaussian(n / complex(x, y)) == n:
                divisors.append(complex(x, y))
                divisors.append(complex(x, -y))
    divisors.append(n)
    return divisors
When I run Divisors(29) I get [1, 29], but this is missing four other divisors, one of which is (5 + 2j), which can clearly be seen to divide into 29.
On running 29 / complex(5, 2), Python gives (5 - 2.0000000000000004j)
This result is incorrect, as it should be (5 - 2j). Is there any way to somehow bypass Python's approximation? And why has this problem not arisen for many other rational integers under 100?
Thanks in advance for your help.
Internally, CPython uses a pair of double-precision floats for complex numbers. The behavior of numerical solutions in general is too complicated to summarize here, but some error is unavoidable in numerical calculations.
E.g.:
>>> print(.3 / 3)
0.09999999999999999
As such, it is often correct to use approximate equality rather than actual equality when testing solutions of this kind.
The isclose function in the cmath module is available for this exact reason.
>>> from cmath import isclose
>>> print(.3 / 3 == .1)
False
>>> print(isclose(.3 / 3, .1))
True
This kind of question is the domain of Numerical Analysis; this may be a useful tag for further questions on this subject.
Note that it is considered 'pythonic' for function identifiers to be in snake_case.
from cmath import isclose

def is_gaussian(z):
    # returns True if the complex number is a Gaussian integer
    rounded = complex(round(z.real), round(z.imag))
    return isclose(rounded, z)
You could define an epsilon by using round() to round to the desired number of decimal places/precision (e.g. 10):
def IsGaussian(z, prec=10):
    # returns True if the complex number is a Gaussian integer;
    # rounds the input to `prec` digits first
    z = complex(round(z.real, prec), round(z.imag, prec))
    return complex(int(z.real), int(z.imag)) == z
Your code has another issue though:
if IsGaussian(n / complex(x, y)) == n:
Since IsGaussian returns a bool, comparing it to n means this condition can only hold for n = 0 or n = 1. You probably want to remove the check for equality, as sketched below.
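A minimal correction of that condition, following the suggestion above (the rest of the loop is unchanged):
if IsGaussian(n / complex(x, y)):
    divisors.append(complex(x, y))
    divisors.append(complex(x, -y))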
A perfect power is a positive integer that can be expressed as an integer power of another positive integer.
The task is to check whether a given integer is a perfect power.
Here is my code:
def isPP2(x):
    c = []
    for z in range(2, int(x / 2) + 1):
        if (x ** (1. / float(z))) * 10 % 10 == 0:
            c.append(int(x ** (1. / float(z))))
            c.append(z)
    if len(c) >= 2:
        return c[0:2]
    else:
        return None
It works for most numbers, for example:
>>> isPP2(81)
[9, 2]
>>> isPP2(2187)
[3, 7]
But it doesn't work for 343 (7**3).
Because 343**(1.0/float(3)) is not 7.0, it's 6.99999999999999. You're trying to solve an integer problem with floating point math.
As explained in this link, floating point numbers are not stored perfectly in computers. You are most likely experiencing some error in calculation based off of this very small difference that persists in floating point calculations.
When I run your function, the equation ((x ** (1./float(z))) * 10 % 10) results in 9.99999999999999986, not 10 as is expected. This is due to the slight error involved in floating point arithmetic.
If you must calculate the value as a float (which may or may not be useful in your overall goal), you can define an accuracy range for your result. A simple check would look something like this:
precision = 1.e-6
check = (x ** (1. / float(z))) * 10 % 10
if check == 0:
    c.append(int(x ** (1. / float(z))))  # no changes to the previous code
    c.append(z)
elif 10 - check < precision:
    c.append(int(x ** (1. / float(z))) + 1)
    c.append(z)
precision is written in scientific notation, equal to 1 x 10^(-6) or 0.000001; its magnitude can be decreased if this range of precision introduces other errors, which is unlikely but entirely possible. I added 1 to the result since the truncated original value was less than the target.
As the other answers have already explained why your algorithm fails, I will concentrate on providing an alternative algorithm that avoids the issue.
import math

def isPP2(x):
    # exp2 = log_2(x), i.e. 2**exp2 == x, is a much better upper bound
    # for the exponents to test: as 2 is the smallest base, exp2 is the
    # biggest exponent we can expect.
    exp2 = math.log(x, 2)
    for exp in range(2, int(exp2) + 1):  # +1 so the bound itself gets tested (e.g. x = 4)
        # To avoid floating point issues we simply round the base we get
        # and then test it against x by calculating base**exp.
        # Side note: according to the docs, ** and the built-in pow()
        # work integer-based as long as all arguments are integers.
        base = round(x ** (1. / float(exp)))
        if base ** exp == x:
            return base, exp
    return None

print(isPP2(81))       # (9, 2)
print(isPP2(2187))     # (3, 7)
print(isPP2(343))      # (7, 3)
print(isPP2(232**34))  # (53824, 17)
As with your algorithm this only returns the first solution if there is more than one.
What does modulo in the following piece of code do?
from math import *
3.14 % 2 * pi
How do we calculate modulo on a floating point number?
When you have the expression:
a % b = c
it really means there exists an integer n, chosen to make c as small as possible but non-negative, such that:
a - n*b = c
By hand, you can just subtract 2 (or add 2 if your number is negative) over and over until the end result is the smallest positive number possible:
3.14 % 2
= 3.14 - 1 * 2
= 1.14
Also, 3.14 % 2 * pi is interpreted as (3.14 % 2) * pi. I'm not sure if you meant to write 3.14 % (2 * pi) (in either case, the algorithm is the same. Just subtract/add until the number is as small as possible).
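A quick sketch of both parses (the printed values are approximate floats):
from math import pi

print(3.14 % 2 * pi)    # parsed as (3.14 % 2) * pi, i.e. 1.14 * pi, about 3.5814
print(3.14 % (2 * pi))  # 3.14, since 3.14 is already smaller than 2*pi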
In addition to the other answers, the fmod documentation has some interesting things to say on the subject:
math.fmod(x, y)
Return fmod(x, y), as defined by the platform C library. Note that the Python expression x % y may not return the same result. The intent of the C standard is that fmod(x, y) be exactly (mathematically; to infinite precision) equal to x - n*y for some integer n such that the result has the same sign as x and magnitude less than abs(y). Python’s x % y returns a result with the sign of y instead, and may not be exactly computable for float arguments. For example, fmod(-1e-100, 1e100) is -1e-100, but the result of Python’s -1e-100 % 1e100 is 1e100-1e-100, which cannot be represented exactly as a float, and rounds to the surprising 1e100. For this reason, function fmod() is generally preferred when working with floats, while Python’s x % y is preferred when working with integers.
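A quick illustration of the two sign conventions (these particular values are exactly representable in binary, so the outputs are reliable):
>>> import math
>>> math.fmod(-7.5, 2)   # sign follows the first argument, as in C
-1.5
>>> -7.5 % 2             # sign follows the second argument
0.5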
It behaves the same as ordinary modulo: e.g. 7 % 4 == 3, and 7.3 % 4.0 == 3.3 (approximately). Beware of floating point accuracy issues.
It is the same as normal modulo: 3.14 % 6.28 == 3.14, just like 3.14 % 4 == 3.14 and 3.14 % 2 == 1.14 (the remainder).
You should use fmod(a, b) for floats.
While abs(x%y) < abs(y) is true mathematically, for floats it may not be true numerically due to roundoff.
For example, and assuming a platform on which a Python float is an IEEE 754 double-precision number, in order that -1e-100 % 1e100 have the same sign as 1e100, the computed result is -1e-100 + 1e100, which is numerically exactly equal to 1e100.
Function fmod() in the math module returns a result whose sign matches the sign of the first argument instead, and so returns -1e-100 in this case. Which approach is more appropriate depends on the application.
(Here x % y refers to Python's built-in modulo operator, the integer-style version described above.)
My task is to factor very large composite numbers using Fermat's factorization method. The numbers are 1024 bits large, which is around 309 decimal digits.
I have come up with the Python code below, which uses the gmpy2 module for accuracy. It is simply a Python implementation of the pseudo-code shown on the Wikipedia page. I read the "Sieve Improvement" section on that page, but wasn't sure how to implement it.
import gmpy2

def fermat_factor(n):
    assert n % 2 != 0  # odd integers only
    a = gmpy2.ceil(gmpy2.sqrt(n))
    b2 = gmpy2.square(a) - n
    while not is_square(b2):
        a += 1
        b2 = gmpy2.square(a) - n
    factor1 = a + gmpy2.sqrt(b2)
    factor2 = a - gmpy2.sqrt(b2)
    return int(factor1), int(factor2)

def is_square(n):
    root = gmpy2.sqrt(n)
    return root % 1 == 0  # '4.0' will pass, '4.1212' won't
This code runs fairly fast for small numbers, but takes much too long for numbers as large as those given in the problem. How can I improve the speed of this code? I'm not looking for people to write my code for me, but would appreciate some suggestions. Thank you for any responses.
You need to avoid doing so many square and sqrt operations, especially on large numbers.
The easy way to avoid them is to note that a^2 - N = b^2 must be true for all moduli to be a solution. For example,
a^2 mod 9 - N mod 9 = b^2 mod 9
Let's say your N is 55, so N mod 9 = 1.
Now consider the set of (a mod 9), and square it, modulo 9.
The resulting a^2 mod 9 is the set: {0, 1, 4, 7}. The same must be true for the b^2 mod 9.
If a^2 mod 9 = 0, then 0 - 1 = 8 (all mod 9) is not a solution, since 8 is not a square modulo 9. This eliminates (a mod 9) = {0, 3, 6}.
If a^2 mod 9 = 1, then 1 - 1 = 0 (all mod 9), so (a mod 9) = {1, 8} are possible solutions.
If a^2 mod 9 = 4, then 4 - 1 = 3 (all mod 9) is not a possible solution.
Ditto for a^2 mod 9 = 7.
So, that one modulus eliminated 7 out of 9 possible values of 'a mod 9'.
And you can have many moduli, each one eliminating at least half of the possibilities.
With a set of, say, 10 moduli, you only have to check about 1 in 1,000 a's for being perfect squares, or having integer square roots. (I use about 10,000 moduli for my work).
Note: Moduli which are powers of a prime are often more useful than a prime.
Also, a modulus of 16 is a useful special case, because 'a' must be odd when N mod 4 is 1,
and 'a' must be even when N mod 4 is 3. "Proof is left as an exercise for the student."
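As a rough sketch of how such a filter might look in Python (the moduli chosen here are illustrative, not the ~10,000 used above):

moduli = [9, 16, 25, 49, 121]  # prime powers, per the note above
# for each modulus, precompute the set of residues that are squares
square_residues = [{(i * i) % m for i in range(m)} for m in moduli]

def may_be_square(b2):
    # cheap filter: b2 can only be a perfect square if it is a
    # square residue modulo every modulus in the list
    return all(b2 % m in s for m, s in zip(moduli, square_residues))

In fermat_factor, you would call may_be_square(b2) first and only run the expensive is_square(b2) on the few candidates that survive.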
Consider rewriting this script to use only integers instead of arbitrary precision floats.
gmpy has support for integer square root (returns the floor of the square root, calculated efficiently). This can be used for the is_square() function by testing if the square of the square root equals the original.
I'm not sure about gmpy2, but gmpy.sqrt() requires an integer argument and returns an integer result. If you are using floats, then that is probably your problem (floats are very slow compared to integers, especially at extended precision). If you are in fact using integers, then is_square() must be doing a tedious conversion from integer to float every time it is called (and gmpy2.sqrt() != gmpy.sqrt()).
For those of you who keep saying that this is a difficult problem, keep in mind that using this method was a hint: Fermat's factorization algorithm exploits a weakness present when the composite number to be factored has two prime factors close to each other. If this was given as a hint, it is likely that the entity posing the problem knows this to be the case.
Edit: Apparently, gmpy2.isqrt() is the same as gmpy.sqrt() (the integer version of sqrt), and gmpy2.sqrt() is the floating-point version.
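A minimal integer-only is_square along these lines might look like this (a sketch, assuming gmpy2):

import gmpy2

def is_square(n):
    root = gmpy2.isqrt(n)   # floor of the integer square root
    return root * root == n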
In Python 3, I am checking whether a given value is triangular, that is, it can be represented as n * (n + 1) / 2 for some positive integer n.
Can I just write:
import math

def is_triangular1(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    return int(num) == num
Or do I need to check within a tolerance instead?
epsilon = 0.000000000001

def is_triangular2(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    return abs(int(num) - num) < epsilon
I checked that both functions return the same results for x up to 1,000,000. But I am not sure whether int(num) == num will always correctly determine if a number is an integer, because of cases where, for example, 5 is represented as 4.99999999999997.
As far as I know, the second way is the correct one if I do it in C, but I am not sure about Python 3.
There is an is_integer method on Python's float type:
>>> float(1.0).is_integer()
True
>>> float(1.001).is_integer()
False
Both your implementations have problems. It actually can happen that you end up with something like 4.999999999999997, so using int() is not an option.
I'd go for a completely different approach: First assume that your number is triangular, and compute what n would be in that case. In that first step, you can round generously, since it's only necessary to get the result right if the number actually is triangular. Next, compute n * (n + 1) / 2 for this n, and compare the result to x. Now, you are comparing two integers, so there are no inaccuracies left.
The computation of n can be simplified by expanding
(1/2) * (math.sqrt(8*x+1)-1) = math.sqrt(2 * x + 0.25) - 0.5
and utilizing that
round(y - 0.5) = int(y)
for positive y.
import math

def is_triangular(x):
    n = int(math.sqrt(2 * x))
    return x == n * (n + 1) // 2  # floor division keeps both sides integers
You'll want to do the latter. In Programming in Python 3 the following example is given as the most accurate way to compare floats:
def equal_float(a, b):
    # return abs(a - b) <= sys.float_info.epsilon
    return abs(a - b) <= chosen_value  # see edit below for more info
Also, since epsilon is the "smallest difference the machine can distinguish between two floating-point numbers", you'll want to use <= in your function.
Edit: After reading the comments below, I looked back at the book, and it specifically says "Here is a simple function for comparing floats for equality to the limit of the machine's accuracy". I believe this was just an example of comparing floats to extreme precision; given that error is introduced by many float calculations, it should rarely if ever be used. I characterized it as the "most accurate" way to compare in my answer, which is true in some sense, but rarely what is intended when comparing floats, or integers to floats. Choosing a value (e.g. 0.00000000001) based on the "problem domain" of the function, instead of using sys.float_info.epsilon, is the correct approach.
Thanks to S.Lott and Sven Marnach for their corrections, and I apologize if I led anyone down the wrong path.
Python does have a Decimal class (in the decimal module), which you could use to avoid the imprecision of floats.
Floats can exactly represent all integers up to 2**53 in magnitude (a quick demonstration closes this answer); floating-point equality is only tricky if you care about the bits after the point. So, as long as all of the calculations in your formula return whole numbers for the cases you're interested in, int(num) == num is perfectly safe.
So, we need to prove that for any triangular number, every piece of maths you do can be done with integer arithmetic (and anything coming out as a non-integer must imply that x is not triangular):
To start with, we can assume that x must be an integer - this is required in the definition of 'triangular number'.
This being the case, 8*x + 1 will also be an integer, since the integers are closed under + and * .
math.sqrt() returns float; but if x is triangular, then the square root will be a whole number - ie, again exactly represented.
So, for all x that should return True in your functions, int(num) == num will be true, and so your is_triangular1 will always work. The only sticking point, as mentioned in the comments to the question, is that Python 2 by default does integer division in the same way as C: int/int => int, truncating if the result can't be represented exactly as an int. So, 1/2 == 0. This is fixed in Python 3, or by having the line
from __future__ import division
near the top of your code.
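The 2**53 boundary is easy to demonstrate in the interpreter:
>>> float(2**53 - 1) == float(2**53)
False
>>> float(2**53) == float(2**53 + 1)
True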
I think the decimal module is what you need.
You can round your number to e.g. 14 decimal places or less:
>>> round(4.999999999999997, 14)
5.0
PS: double precision is good for about 15 significant decimal digits.
It is hard to argue with standards.
In C99 and POSIX, the standard for rounding a float to an int is defined by nearbyint(). The important concepts are the direction of rounding and the locale-specific rounding convention.
Assuming the convention is common rounding, this is the same as the C99 convention in Python:
#!/usr/bin/python
import math

infinity = math.ldexp(1.0, 1023) * 2

def nearbyint(x):
    """Returns the nearest int as the C99 standard would."""
    # handle NaN
    if x != x:
        return x
    if x >= infinity:
        return infinity
    if x <= -infinity:
        return -infinity
    if x == 0.0:
        return x
    return math.floor(x + 0.5)
If you want more control over rounding, consider using the Decimal module and choose the rounding convention you wish to employ. You may want to use Banker's Rounding for example.
Once you have decided on the convention, round to an int and compare to the other int.
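For example, a small sketch of Banker's Rounding with Decimal:
>>> from decimal import Decimal, ROUND_HALF_EVEN
>>> Decimal('2.5').quantize(Decimal('1'), rounding=ROUND_HALF_EVEN)
Decimal('2')
>>> Decimal('3.5').quantize(Decimal('1'), rounding=ROUND_HALF_EVEN)
Decimal('4')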
Consider using NumPy; it takes care of everything under the hood.

import numpy as np

result_bool = np.isclose(float1, float2)
Python has unlimited integer precision, but only 53 bits of float precision. When you square a number, you double the number of bits it requires. This means that the ULP of the original number is (approximately) twice the ULP of the square root.
You start running into issues with numbers around 50 bits or so, because the difference between the fractional representation of an irrational root and the nearest integer can be smaller than the ULP. Even in this case, checking if you are within tolerance will do more harm than good (by increasing the number of false positives).
For example:
>>> x = (1 << 26) - 1
>>> (math.sqrt(x**2)).is_integer()
True
>>> (math.sqrt(x**2 + 1)).is_integer()
False
>>> (math.sqrt(x**2 - 1)).is_integer()
False
>>> y = (1 << 27) - 1
>>> (math.sqrt(y**2)).is_integer()
True
>>> (math.sqrt(y**2 + 1)).is_integer()
True
>>> (math.sqrt(y**2 - 1)).is_integer()
True
>>> (math.sqrt(y**2 + 2)).is_integer()
False
>>> (math.sqrt(y**2 - 2)).is_integer()
True
>>> (math.sqrt(y**2 - 3)).is_integer()
False
You can therefore rework the formulation of your problem slightly. If an integer x is a triangular number, there exists an integer n such that x = n * (n + 1) // 2. The resulting quadratic is n**2 + n - 2 * x = 0. All you need to know is if the discriminant 1 + 8 * x is a perfect square. You can compute the integer square root of an integer using math.isqrt starting with python 3.8. Prior to that, you could use one of the algorithms from Wikipedia, implemented on SO here.
You can therefore stay entirely in python's infinite-precision integer domain with the following one-liner:
import math

def is_triangular(x):
    return math.isqrt(k := 8 * x + 1) ** 2 == k
Now you can do something like this:
>>> x = 58686775177009424410876674976531835606028390913650409380075
>>> math.isqrt(k := 8 * x + 1)**2 == k
True
>>> math.isqrt(k := 8 * (x + 1) + 1)**2 == k
False
>>> math.sqrt(k := 8 * x + 1)**2 == k
False
The first result is correct: x in this example is a triangular number computed with n = 342598234604352345342958762349.
Python still uses the same floating point representation and operations C does, so the second one is the correct way.
Under the hood, Python's float type is a C double.
The most robust way would be to get the nearest integer to num, then test whether that integer satisfies the property you're after:
import math

def is_triangular1(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    inum = int(round(num))
    return inum * (inum + 1) == 2 * x  # this line uses only integer arithmetic
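For example:
>>> is_triangular1(10)   # 10 == 4 * 5 / 2
True
>>> is_triangular1(11)
False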