Modulo with fractions.Fraction class - python

My aim is to compute np.mod(np.array[int], some_number) for a numpy array containing very large integers. Here some_number is rational, but in general not an exact decimal fraction. I want the modulos to be as accurate as possible, since I need to bin the results for a histogram in a later step, and any error due to floating-point precision could put values into the wrong bin.
I am aware that the modulo function with floats is limited by floating-point precision, so I am hesitant to use np.mod(array[int], float).
I then came across the fractions module in the Python standard library. Can someone advise whether the results obtained via np.mod(np.array[int], Fraction(int1, int2)) would be more accurate than using a float? If not, what is the best approach for such a problem?

So you have a fraction some_number=n/d
Computing the modulo is like performing this division:
a = q*(n/d) + (r/d)
the remainder is a fraction with numerator r.
Multiplying both sides by d, it can be written like this:
a*d = q * n + r
The problem you have is that a*d could overflow.
But the problem can be written like this:
a = q1 * n + r1
d = q2 * n + r2
a*d = (q1*q2*n+q1*r2+q2*r1) * n + (r1*r2)
Given that n/d is between 10 and 100 (so n > d, hence q2 = 0 and r2 = d), the algorithm is:
1. compute a modulo n => r1
2. compute (r1*d) modulo n => r
3. divide r by d => a modulo (n/d)
If it's for putting in bins, you don't need step 3.
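A minimal sketch of those three steps, assuming the array holds Python ints (dtype=object, so intermediate products never overflow) and the modulus is given as Fraction(n, d); the helper name mod_by_fraction and the example modulus 355/113 are purely illustrative:
import numpy as np
from fractions import Fraction

def mod_by_fraction(arr, frac):
    n, d = frac.numerator, frac.denominator
    r1 = arr % n                                 # step 1: a mod n, exact integer arithmetic
    r = (r1 * d) % n                             # step 2: (r1*d) mod n
    return [Fraction(int(ri), d) for ri in r]    # step 3: divide by d (skip this and bin on r directly)

big = np.array([10**40 + 123456789, 10**35 + 7], dtype=object)
print(mod_by_fraction(big, Fraction(355, 113)))  # modulus ~3.14, purely illustrative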


Floating point Division without using Division Operator

Given two positive floating point numbers x and y, how would you compute x/y to within a specified tolerance e if the division operator
cannot be used?
You cannot use any library functions, such as log and exp; addition
and multiplication are acceptable.
May I know how I can solve it? I know one approach to division is to use bitwise operators, but with that approach the loop stops as soon as x is less than y.
def divide(x, y):
    # break down x/y into (x - b*y)/y + b, where b is the integer answer
    # b can be accumulated using additions of powers of 2
    result = 0
    power = 32
    y_power = y << power
    while x >= y:
        while y_power > x:
            y_power = y_power >> 1
            power -= 1
        x = x - y_power
        result += 1 << power
    return result
An option is to use the Newton-Raphson iterations, known to converge quadratically (so that the number of exact bits will grow like 1, 2, 4, 8, 16, 32, 64).
First compute the inverse of y with the iterates
z(n+1) = z(n) * (2 - y * z(n)),
and after convergence form the product
x * z(N) ~ x/y
But the challenge is to find a good starting approximation z(0), which should be within a factor 2 of 1/y.
If the context allows it, you can play directly with the exponent of the floating-point representation and replace Y·2^e by 1·2^-e or √2·2^-e.
If this is forbidden, you can set up a table of all the possible powers of 2 in advance and perform a dichotomic search to locate y in the table. Then the inverse power is easily found in the table.
For double precision floats, there are 11 exponent bits, so the table of powers should hold 2047 values, which can be considered a lot. You can trade storage for computation by storing only the exponents that are powers of two (2^±1, 2^±2, 2^±4, 2^±8, ...). Then during the dichotomic search, you will recreate the intermediate exponents on demand by means of products (i.e. 2^5 = 2^4 · 2^1), and at the same time form the product of inverses. This can be done efficiently, using lg(p) multiplies only, where p = |lg(y)| is the desired power.
Example: lookup of the power for 1000; the exponents are denoted in binary.
1000 > 2^1b = 2
1000 > 2^10b = 4
1000 > 2^100b = 16
1000 > 2^1000b = 256
1000 < 2^10000b = 65536
Then
1000 < 2^1100b = 16 · 256 = 4096
1000 < 2^1010b = 4 · 256 = 1024
1000 > 2^1001b = 2 · 256 = 512
so that
2^9 < 1000 < 2^10.
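Here is a small Python sketch of that reduced-table lookup (illustrative only; it returns just the exponent, and the same loop could multiply the matching stored inverse powers to build 2^-p at the same time):
def exponent_of(y):
    # store the repeated squarings 2, 4, 16, 256, ... not exceeding y
    powers = []
    p2 = 2.0
    while p2 <= y:
        powers.append(p2)
        p2 = p2 * p2
    # recombine them, largest first, like the binary digits of the exponent p
    p, value = 0, 1.0
    for i in reversed(range(len(powers))):
        if value * powers[i] <= y:
            value *= powers[i]
            p += 1 << i
    return p

print(exponent_of(1000.0))    # 9, since 2**9 <= 1000 < 2**10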
Now the Newton-Raphson iterations yield
z0 = 0.001381067932
z1 = 0.001381067932 x (2 - 1000 x 0.001381067932) = 0.000854787231197
z2 = 0.000978913251777
z3 = 0.000999555349049
z4 = 0.000999999802286
z5 = 0.001
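For illustration, a short Python sketch of the same iteration, assuming the starting guess √2 · 2^-10 obtained from the exponent search above:
y = 1000.0
z = 2 ** 0.5 * 2 ** -10        # z0 ~= 0.001381067932, within a factor 2 of 1/y
for _ in range(5):
    z = z * (2 - y * z)        # each pass roughly doubles the number of exact bits
print(z)                       # ~0.001
print(355.0 * z)               # ~0.355, i.e. 355/1000 computed without '/'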
The most straightforward solution is probably to use Newton's method for division to compute the reciprocal, which may then be multiplied by the numerator to yield the final result.
This is an iterative process that gradually refines an initial guess, doubling the precision on every iteration, and it involves only multiplication and addition.
One complication is generating a suitable initial guess, since an improper selection may fail to converge or take a larger number of iterations to reach the desired precision. For floating-point numbers the easiest solution is to normalize for the power-of-two exponent and use 1 as the initial guess, then invert and reapply the exponent separately for the final result. This yields roughly 2^iteration bits of precision, and so 6 iterations should be sufficient for a typical IEEE-754 double with a 53-bit mantissa.
Computing the result to within an absolute error tolerance e is difficult however given the limited precision of the intermediate computations. If specified too tightly it may not be representable and, worse, a minimal half-ULP bound requires exact arithmetic. If so you will be forced to manually implement the equivalent of an exact IEEE-754 division function by hand while taking great care with rounding and special cases.
Below is one possible implementation in C:
double divide(double numer, double denom, unsigned int precision) {
    int exp;
    denom = frexp(denom, &exp);
    double guess = 1.4142135623731;
    if (denom < 0)
        guess = -guess;
    while (precision--)
        guess *= 2 - denom * guess;
    return ldexp(numer * guess, -exp);
}
Handling and analysis of special-cases such as zero, other denormals, infinity or NaNs is left as an exercise for the reader.
The frexp and ldexp library functions are easily substituted for manual bit-extraction of the exponent and mantissa. However this is messy and non-portable, and no specific floating-point representation was specified in the question.
First, you should separate the signs and exponents from both numbers. After that, divide the pure positive mantissas and adapt the result using the former exponents and signs.
As for dividing mantissas, it is simple if you remember that division is not only the inverse of multiplication, but also repeated subtraction: the number of subtractions performed is the result.
A : B -> C, with precision e
C = 0
allowance = e * B
multiplicator = 1
delta = B
while (delta > allowance && A > 0) {
    if (A < delta) {
        multiplicator *= 0.1   // 1/10
        delta *= 0.1           // 1/10
    } else {
        A -= delta
        C += multiplicator
    }
}
Really, we can use any number > 1 instead of 10. It would be interesting to see which choice is the most efficient. Of course, if we use 2, we can use a shift instead of a multiplication inside the loop.
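Below is a hedged Python transcription of that pseudocode; the function name is mine, and e is treated as a precision relative to B, as in the sketch above:
def divide_by_subtraction(a, b, e):
    c = 0.0
    allowance = e * b
    multiplicator = 1.0
    delta = b
    while delta > allowance and a > 0:
        if a < delta:
            multiplicator *= 0.1       # shrink the step by 1/10
            delta *= 0.1
        else:
            a -= delta                 # subtract as many deltas as fit
            c += multiplicator
    return c

print(divide_by_subtraction(355.0, 113.0, 1e-6))   # prints something very close to 355/113 ~ 3.14159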

How to multiply a super large number with a super small number in python?

I'm doing some probability calculation.
In one of my tasks, I need to multiply the number of combinations for choosing 8000 samples from 10000 items by 0.8**8000.
The combination number is a very long integer, and with the help of numpy I get the result of 0.8**8000 as 5.2468172239242176864e-776.
But when I try to multiply these two numbers, I got [9] 34845 segmentation fault ipython -i.
How can I do such multiplication then?
PS: This is a piece of my code
import numpy
d2 = numpy.float128(0.8) ** 8000
d1 = 165555575235503558460892983752748337696863078099010763950122624527927836980322780662408249953188062227721112100054260160204180655980717428736444016909193193353770953722788106404786520413339850951599929567643032803416164290936680088121145665954509987077953596641237451927908536624592636591471456488142060812180933761408708169972797751139799352908109763166895772281109195968567911923343187466596002627570139321755043803267091330804414889831229832744256038117150720178689066894068507531026417815624234453195871008113238128934831837842040515600131726096039123279876153916504647241693083829553081901075278042326502699324012014817969085443550523855284341221708045253558716789811929298590803855947461554713178815399150688529048306222786951038548880400191620565711291586700534540755526276938422405001345270278335726581375322976014611332999126216550500951669985289322635729053541565465940744524663726205818866513444952048185208697438054246674199211750006230637806394882672053335493831407089830994135058867370833787098758113596190447219426121568324685764151601296948654893782399960327514764114467176417125060133454019708700782282480571935020898204763471121684913190735908414301826140125010936910161942130277906874552721346626800201093026689035996876035329180150478191582393837824731994055511844267891121846403164857127885959745644323971338513739214928092232132691519007718752719466750891748327404893783451436251805894736392433617289459646429204124129760273396235033220480921175386059331059354409267348067375581516003852060360378571075522650956157791058846993826792047806030332676423336065499519953076910418838626376480202828151673161942289092221049283902410699951912366163469099917310239336454637062482599733606299329923589714875696509548029668358723465427602758225427644633549944802010973352599970041918971524450218727345622721744933664742499521140235707102217164259438766026322532351208348119475549696983427008567651685921355966036780080415723688044325099562693124488758728102729947753752228785786200998322978801432511608341549234067324280214361346940194251357867820535466891356019219904248859277399657389914429390105240751239760865282709465029549690591863591028864648910033430400L
print d1 * d2
When multiplying an extremely large number by an extremely small number, working with floats can introduce huge inaccuracies. In your case, the magnitude of the numbers is causing overflow errors, so you have bigger problems than just inaccuracies!
Whenever you find yourself in this situation, it can be useful to first check if it is possible to stay in the integer domain, and "massage" the numbers a little first. In your case, it is possible and I'll explain how below.
One operand of the multiplication, the extremely large number, is the number of ways to choose 8000 samples from 10000 items. Use the closed-form equation for the number of combinations, where your sample size n is 10000 and the subset size r is 8000. The exclamation mark (!) here is the factorial, which you can find as math.factorial in Python.
C(n,r) = n! / (r! * (n - r)!)
The other operand 0.8 ** 8000 is the extremely small number, which by index laws is equal to:
8**8000 / 10**8000
So when we multiply these two numbers together, the answer we want is:
10000! * 8**8000
--------------------------
8000! * 2000! * 10**8000
Let's call this number x and then take logarithms of both sides. Working in the log domain will transform multiplications into additions, and divisions into subtractions, making things more manageable.
from math import log, factorial
numerator = log(factorial(10000)) + 8000*log(8)
denominator = log(factorial(8000)) + log(factorial(2000)) + 8000*log(10)
log_x = numerator - denominator
Now these numbers are of a magnitude that is usable in python.
You will find that log_x is equal to approximately 3214. You now only need to observe that exp(log_x) == x to find your answer. It is a very large, but finite, number.
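To see how large x actually is without overflowing a float, one option (a small sketch reusing log_x and log from the code above) is to convert the natural log into a base-10 mantissa and exponent rather than calling exp(log_x) directly:
log10_x = log_x / log(10)              # switch from natural log to base 10
exponent = int(log10_x)                # the power of ten
mantissa = 10 ** (log10_x - exponent)  # a number in [1, 10)
print("x is roughly {:.3f}e+{}".format(mantissa, exponent))   # roughly 8.686e+1395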
Arbitrary-precision integers aren't really the way to go for this problem, since you're destroying any precision you had by calling log, so I'll just let scipy.special.gammaln speak for itself (but see my edit below):
from math import log, factorial
from scipy.special import gammaln
def comp_integral(n, r, p, q):
    numerator = log(factorial(n)) + r * log(p)       # log(n!) + r*log(p)
    denominator = log(factorial(r)) + log(factorial(n - r)) + r * log(q)
    return numerator - denominator

def comp_gamma(n, r, p, q):
    comb = gammaln(n + 1) - gammaln(n - r + 1) - gammaln(r + 1)   # log of C(n, r)
    expon = r * (log(p) - log(q))                                 # log of (p/q)**r
    return comb + expon
In [220]: comp_integral(10000, 8000, 8, 10)
Out[220]: 3214.267963130871
In [221]: comp_gamma(10000, 8000, 8, 10)
Out[221]: 3214.2679631308811
In [222]: %timeit comp_integral(10000, 8000, 8, 10)
10 loops, best of 3: 80.3 ms per loop
In [223]: %timeit comp_gamma(10000, 8000, 8, 10)
100000 loops, best of 3: 11.4 µs per loop
Note that the outputs agree to about 14 significant digits, but the gammaln version is roughly 7000 times faster. If you're going to do this a lot, this will count.
EDIT: What gammaln does is to compute the natural log of the gamma function. The gamma function can be thought of as a generalization of factorial, in that factorial(n) == gamma(n+1). So comb(n,r) == gamma(n+1)/(gamma(n-r+1)*gamma(r+1)). Then taking logs turns it into the form above.
Gamma also has values for fractional inputs and for negative numbers. That doesn't really matter here though.
I maintain the gmpy2 library and it can do this very easily.
>>> import gmpy2
>>> gmpy2.comb(10000,8000) * gmpy2.mpfr('0.8')**8000
mpfr('8.6863984366232171e+1395')
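If more significant digits are wanted, a hedged follow-up: gmpy2 lets you raise the working precision before computing (200 bits here is an arbitrary choice):
import gmpy2
gmpy2.get_context().precision = 200    # default is 53 bits; raise it before computing
print(gmpy2.comb(10000, 8000) * gmpy2.mpfr('0.8')**8000)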
Building off of wim's great answer, you can also store this number as a Fraction by building a list of prime factors, doing any cancellations and multiplying everything together.
I've included a rather naive implementation for this problem. It returns a Fraction in less than a minute as is, but if you implement a slightly smarter factorization you can surely make it even faster.
from collections import Counter
from fractions import Fraction
import gmpy2 as gmpy
def get_factors(n):
    factors = Counter()
    factor = 1
    while n != 1:
        factor = int(gmpy.next_prime(factor))
        while not n % factor:
            n //= factor
            factors[factor] += 1
    return factors

factors = Counter()

# multiply by 10000!
for i in range(10000):
    factors += get_factors(i + 1)

# multiply by 8^8000
factors[2] += 3 * 8000

# divide by 2000!
for i in range(2000):
    factors -= get_factors(i + 1)

# divide by 8000!
for i in range(8000):
    factors -= get_factors(i + 1)

# divide by 10^8000
factors[2] -= 8000
factors[5] -= 8000

# build Fraction
numer = 1
denom = 1
for f, c in factors.items():
    if c > 0:
        numer *= f**c
    elif c < 0:
        denom *= f**-c
frac = Fraction(numer, denom)
Looks like it's around 8.686*10^1395
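As a hedged follow-up: float(frac) would overflow, since the value is around 10**1395, but the decimal module copes with huge exponents if you want a readable figure:
from decimal import Decimal, getcontext

getcontext().prec = 20
print(Decimal(frac.numerator) / Decimal(frac.denominator))   # roughly 8.6864E+1395, matching the gmpy2 figure above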

How to generate scale-independent random floating point numbers?

I want to generate what I'm choosing to call "arbitrary" positive floating-point numbers; that is, random numbers which are independent of scale (in other words, numbers whose logarithms are uniformly distributed). I'm not much of a mathematician, so for all I know there may be another name for what I'm after.
Here's my initial, naïve solution:
import sys
import random
def arbitrary(min=sys.float_info.min_10_exp, max=sys.float_info.max_10_exp):
    return 10 ** random.uniform(min, max)
It strikes me that this is probably not ideal: for one thing, I imagine that there might be some interaction between the limited precision of random.uniform() and the floating point representation itself that would cause bunching and gaps in the expected output at higher orders of magnitude.
Is there a better approach? Would it make more sense to just produce a string of random bits and then turn that into the floating point number they represent?
EDIT: As pointed out by Oli Charlesworth in the comments, the "convert random bits to a float" idea doesn't do what I want (which is a uniform distribution of log(n)).
You are correct that your approach doesn't return some numbers. For example, there is no floating-point number between 1.0 and 1.0000000000000002, but 10**1.0000000000000002 is 10.000000000000005, and there are two numbers between 10.0 and 10.000000000000005: 10.000000000000002 and 10.000000000000004. Those two numbers will never be returned by your algorithm.
But you can cheat and use Decimal to exponentiate with greater precision:
>>> float(10 ** Decimal('1'))
10.0
>>> float(10 ** Decimal('1.0000000000000001'))
10.000000000000002
>>> float(10 ** Decimal('1.00000000000000015'))
10.000000000000004
>>> float(10 ** Decimal('1.0000000000000002'))
10.000000000000005
So, arbitrary needs to generate random Decimal exponents of sufficient precision and use them as exponents. Assuming 64 binary digits is enough precision for the exponent, the code would look like this:
import sys, random
from decimal import Decimal
def _random_decimal(minval, maxval, added_prec):
    # generate a Decimal in the range [minval, maxval) with the
    # precision of additional ADDED_PREC binary digits
    rangelen = maxval - minval
    denom = rangelen << added_prec
    return minval + Decimal(rangelen) * random.randrange(denom) / denom

def arbitrary():
    min_exp = sys.float_info.min_exp - sys.float_info.mant_dig
    max_exp = sys.float_info.max_exp
    return float(2 ** _random_decimal(min_exp, max_exp, 64))
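As a quick sanity check (a sketch, nothing rigorous), the base-2 logarithms of a few samples should be spread across the whole exponent range rather than piling up near the top:
import math

samples = [arbitrary() for _ in range(5)]
for s in samples:
    print("{:.3e}   log2 = {:9.2f}".format(s, math.log(s, 2)))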

Checking if float is equivalent to an integer value in python

In Python 3, I am checking whether a given value is triangular, that is, it can be represented as n * (n + 1) / 2 for some positive integer n.
Can I just write:
import math
def is_triangular1(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    return int(num) == num
Or do I need to check within a tolerance instead?
epsilon = 0.000000000001

def is_triangular2(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    return abs(int(num) - num) < epsilon
I checked that both functions return the same results for x up to 1,000,000. But I am not sure whether, generally speaking, int(x) == x will always correctly determine whether a number is an integer, because of cases where for example 5 is represented as 4.99999999999997, etc.
As far as I know, the second way is the correct one if I do it in C, but I am not sure about Python 3.
There is an is_integer method on Python's float type:
>>> float(1.0).is_integer()
True
>>> float(1.001).is_integer()
False
>>>
Both your implementations have problems. It actually can happen that you end up with something like 4.999999999999997, so using int() is not an option.
I'd go for a completely different approach: First assume that your number is triangular, and compute what n would be in that case. In that first step, you can round generously, since it's only necessary to get the result right if the number actually is triangular. Next, compute n * (n + 1) / 2 for this n, and compare the result to x. Now, you are comparing two integers, so there are no inaccuracies left.
The computation of n can be simplified by expanding
(1/2) * (math.sqrt(8*x+1)-1) = math.sqrt(2 * x + 0.25) - 0.5
and utilizing that
round(y - 0.5) = int(y)
for positive y.
def is_triangular(x):
    n = int(math.sqrt(2 * x))
    return x == n * (n + 1) // 2   # n*(n+1) is always even, so this stays in integer arithmetic
You'll want to do the latter. In Programming in Python 3 the following example is given as the most accurate way to compare
def equal_float(a, b):
    # return abs(a - b) <= sys.float_info.epsilon
    return abs(a - b) <= chosen_value  # see edit below for more info
Also, since epsilon is the "smallest difference the machine can distinguish between two floating-point numbers", you'll want to use <= in your function.
Edit: After reading the comments below I have looked back at the book, and it specifically says "Here is a simple function for comparing floats for equality to the limit of the machine's accuracy". I believe this was just an example of comparing floats to extreme precision, but given that error is introduced by many float calculations, this should rarely if ever be used. I characterized it as the "most accurate" way to compare in my answer, which in some sense is true, but it is rarely what is intended when comparing floats, or integers to floats. Choosing a value (e.g. 0.00000000001) based on the "problem domain" of the function, instead of using sys.float_info.epsilon, is the correct approach.
Thanks to S.Lott and Sven Marnach for their corrections, and I apologize if I led anyone down the wrong path.
Python does have a Decimal class (in the decimal module), which you could use to avoid the imprecision of floats.
floats can exactly represent all integers up to 2**53 in magnitude - floating-point equality is only tricky if you care about the bit after the point. So, as long as all of the calculations in your formula return whole numbers for the cases you're interested in, int(num) == num is perfectly safe.
So, we need to prove that for any triangular number, every piece of maths you do can be done with integer arithmetic (and anything coming out as a non-integer must imply that x is not triangular):
To start with, we can assume that x must be an integer - this is required in the definition of 'triangular number'.
This being the case, 8*x + 1 will also be an integer, since the integers are closed under + and * .
math.sqrt() returns float; but if x is triangular, then the square root will be a whole number - ie, again exactly represented.
So, for all x that should return true in your functions, int(num) == num will be true, and so your is_triangular1 will always work. The only sticking point, as mentioned in the comments to the question, is that Python 2 by default does integer division in the same way as C: int/int => int, truncating if the result can't be represented exactly as an int, so 1/2 == 0. This is fixed in Python 3, or by having the line
from __future__ import division
near the top of your code.
I think the module decimal is what you need
You can round your number to e.g. 14 decimal places or less:
>>> round(4.999999999999997, 14)
5.0
PS: double precision is about 15 decimal places
It is hard to argue with standards.
In C99 and POSIX, the standard for rounding a float to an int is defined by nearbyint() The important concept is the direction of rounding and the locale specific rounding convention.
Assuming the convention is common rounding, this is the same as the C99 convention in Python:
#!/usr/bin/python
import math

infinity = math.ldexp(1.0, 1023) * 2

def nearbyint(x):
    """returns the nearest int as the C99 standard would"""
    # handle NaN
    if x != x:
        return x
    if x >= infinity:
        return infinity
    if x <= -infinity:
        return -infinity
    if x == 0.0:
        return x
    return math.floor(x + 0.5)
If you want more control over rounding, consider using the Decimal module and choose the rounding convention you wish to employ. You may want to use Banker's Rounding for example.
Once you have decided on the convention, round to an int and compare to the other int.
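A small sketch of that Decimal route (hedged; the helper name nearest_int_bankers is mine), using ROUND_HALF_EVEN, i.e. banker's rounding, to pick the nearest integer before the comparison:
from decimal import Decimal, ROUND_HALF_EVEN

def nearest_int_bankers(x):
    return int(Decimal(x).quantize(Decimal('1'), rounding=ROUND_HALF_EVEN))

print(nearest_int_bankers(4.5), nearest_int_bankers(5.5))    # prints 4 6: ties go to the even neighbour
print(nearest_int_bankers(4.999999999999997) == 5)           # True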
Consider using NumPy; it takes care of everything under the hood.
import numpy as np
result_bool = np.isclose(float1, float2)
Python has unlimited integer precision, but only 53 bits of float precision. When you square a number, you double the number of bits it requires. This means that the ULP of the original number is (approximately) twice the ULP of the square root.
You start running into issues with numbers around 50 bits or so, because the difference between the fractional representation of an irrational root and the nearest integer can be smaller than the ULP. Even in this case, checking if you are within tolerance will do more harm than good (by increasing the number of false positives).
For example:
>>> x = (1 << 26) - 1
>>> (math.sqrt(x**2)).is_integer()
True
>>> (math.sqrt(x**2 + 1)).is_integer()
False
>>> (math.sqrt(x**2 - 1)).is_integer()
False
>>> y = (1 << 27) - 1
>>> (math.sqrt(y**2)).is_integer()
True
>>> (math.sqrt(y**2 + 1)).is_integer()
True
>>> (math.sqrt(y**2 - 1)).is_integer()
True
>>> (math.sqrt(y**2 + 2)).is_integer()
False
>>> (math.sqrt(y**2 - 2)).is_integer()
True
>>> (math.sqrt(y**2 - 3)).is_integer()
False
You can therefore rework the formulation of your problem slightly. If an integer x is a triangular number, there exists an integer n such that x = n * (n + 1) // 2. The resulting quadratic is n**2 + n - 2 * x = 0. All you need to know is if the discriminant 1 + 8 * x is a perfect square. You can compute the integer square root of an integer using math.isqrt starting with python 3.8. Prior to that, you could use one of the algorithms from Wikipedia, implemented on SO here.
You can therefore stay entirely in python's infinite-precision integer domain with the following one-liner:
def is_triangular(x):
    return math.isqrt(k := 8 * x + 1)**2 == k
Now you can do something like this:
>>> x = 58686775177009424410876674976531835606028390913650409380075
>>> math.isqrt(k := 8 * x + 1)**2 == k
True
>>> math.isqrt(k := 8 * (x + 1) + 1)**2 == k
False
>>> math.sqrt(k := 8 * x + 1)**2 == k
False
The first result is correct: x in this example is a triangular number computed with n = 342598234604352345342958762349.
Python still uses the same floating point representation and operations C does, so the second one is the correct way.
Under the hood, Python's float type is a C double.
The most robust way would be to get the nearest integer to num, then test whether that integer satisfies the property you're after:
import math
def is_triangular1(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    inum = int(round(num))
    return inum * (inum + 1) == 2 * x  # This line uses only integer arithmetic

Could random.randint(1,10) ever return 11?

When researching for this question and reading the source code in random.py, I started wondering whether randrange and randint really behave as "advertised". I am very much inclined to believe so, but the way I read it, randrange is essentially implemented as
start + int(random.random()*(stop-start))
(assuming integer values for start and stop), so randrange(1, 10) should return a random number between 1 and 9.
randint(start, stop) is calling randrange(start, stop+1), thereby returning a number between 1 and 10.
My question is now:
If random() were ever to return 1.0, then randint(1,10) would return 11, wouldn't it?
From random.py and the docs:
"""Get the next random number in the range [0.0, 1.0)."""
The ) indicates that the interval is exclusive of 1.0. That is, it will never return 1.0.
This is a general convention in mathematics: [ and ] are inclusive, while ( and ) are exclusive, and the two types of brackets can be mixed, as in (a, b] or [a, b). Have a look at Wikipedia: Interval (mathematics) for a formal explanation.
Other answers have pointed out that the result of random() is always strictly less than 1.0; however, that's only half the story.
If you're computing randrange(n) as int(random() * n), you also need to know that for any Python float x satisfying 0.0 <= x < 1.0, and any positive integer n, it's true that 0.0 <= x * n < n, so that int(x * n) is strictly less than n.
There are two things that could go wrong here: first, when we compute x * n, n is implicitly converted to a float. For large enough n, that conversion might alter the value. But if you look at the Python source, you'll see that it only uses the int(random() * n) method for n smaller than 2**53 (here and below I'm assuming that the platform uses IEEE 754 doubles), which is the range where the conversion of n to a float is guaranteed not to lose information (because n can be represented exactly as a float).
The second thing that could go wrong is that the result of the multiplication x * n (which is now being performed as a product of floats, remember) probably won't be exactly representable, so there will be some rounding involved. If x is close enough to 1.0, it's conceivable that the rounding will round the result up to n itself.
To see that this can't happen, we only need to consider the largest possible value for x, which is (on almost all machines that Python runs on) 1 - 2**-53. So we need to show that (1 - 2**-53) * n < n for our positive integer n, since it'll always be true that random() * n <= (1 - 2**-53) * n.
Proof (sketch): Let k be the unique integer such that 2**(k-1) < n <= 2**k. Then the next float down from n is n - 2**(k-53). We need to show that n*(1-2**-53) (i.e., the actual, unrounded, value of the product) is closer to n - 2**(k-53) than to n, so that it'll always be rounded down. A little arithmetic shows that the distance from n*(1-2**-53) to n is 2**-53 * n, while the distance from n*(1-2**-53) to n - 2**(k-53) is (2**k - n) * 2**-53. But 2**k - n < n (because we chose k so that 2**(k-1) < n), so the product is closer to n - 2**(k-53) and will get rounded down (assuming, that is, that the platform is doing some form of round-to-nearest).
So we're safe. Phew!
Addendum (2015-07-04): The above assumes IEEE 754 binary64 arithmetic, with round-ties-to-even rounding mode. On many machines, that assumption is fairly safe. However, on x86 machines that use the x87 FPU for floating-point (for example, various flavours of 32-bit Linux), there's a possibility of double rounding in the multiplication, and that makes it possible for random() * n to round up to n in the case where random() returns the largest possible value. The smallest such n for which this can happen is n = 2049. See the discussion at http://bugs.python.org/issue24546 for more.
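For what it's worth, a small empirical check of the argument (a sketch assuming IEEE 754 binary64 with round-to-nearest, i.e. not the x87 double-rounding situation described in the addendum):
largest_below_one = 1.0 - 2.0 ** -53          # the largest float strictly less than 1.0
for n in [3, 7, 10, 2 ** 40, 2 ** 52, 2 ** 53 - 1]:
    assert int(largest_below_one * n) < n, n  # the product never rounds up to n itself
print("int(x * n) stayed below n for every n tested")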
From Python documentation:
Almost all module functions depend on the basic function random(), which generates a random float uniformly in the semi-open range [0.0, 1.0).
Like almost every PRNG that produces floating-point numbers.
