Random rounding to integer in Python

I am looking for a way to round a floating-point number up or down to the next integer based on a probability derived from the digits after the decimal point. For example, the floating-point number 6.1 can be rounded to 6 or to 7. The probability of being rounded to 7 is 0.1 and the probability of being rounded to 6 is 1 - 0.1. So if I run this rounding experiment infinitely many times, the average of all integer results should be 6.1 again. I don't know whether there is a name for such a procedure, or whether there is already an implemented function in Python.
Of course it would be very nice if it were also possible to round to e.g. 2 decimal places the same way.
Does that make sense? Any ideas?

Here is a nice one-liner for this. By using the floor function, the value is only rounded up if the random number between 0 and 1 is large enough to bring it up to the next integer. This method works equally well with positive and negative numbers.
import math
import random
def probabilistic_round(x):
    return int(math.floor(x + random.random()))
Consider the case of a negative input x = -2.25. 75% of the time the random number will be greater than or equal to 0.25, in which case the floor function will result in -2 being the answer. The other 25% of the time the number will get rounded down to -3.
To round to different decimal places it can be modified as follows:
def probabilistic_round(x, decimal_places=0):
    factor = 10.0**decimal_places
    return int(math.floor(x*factor + random.random()))/factor
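A quick empirical check (my own sketch, not part of the original answer), reusing the probabilistic_round just defined, shows that the long-run average recovers the input:
samples = [probabilistic_round(6.1) for _ in range(100000)]
print(sum(samples) / len(samples))   # close to 6.1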

The probability you're looking for is x-int(x).
To sample with this probability, do random.random() < x-int(x)
import random
import math
import numpy as np

def prob_round(x):
    sign = np.sign(x)
    x = abs(x)
    is_up = random.random() < x - int(x)
    round_func = math.ceil if is_up else math.floor
    return sign * round_func(x)
x = 6.1
sum( prob_round(x) for i in range(100) ) / 100.
=> 6.12
EDIT: adding an optional prec argument:
def prob_round(x, prec = 0):
    fixup = np.sign(x) * 10**prec
    x *= fixup
    is_up = random.random() < x - int(x)
    round_func = math.ceil if is_up else math.floor
    return round_func(x) / fixup
x = 8.33333333
[ prob_round(x, prec = 2) for i in range(10) ]
=> [8.3399999999999999,
8.3300000000000001,
8.3399999999999999,
8.3300000000000001,
8.3300000000000001,
8.3300000000000001,
8.3300000000000001,
8.3300000000000001,
8.3399999999999999,
8.3399999999999999]

The most succinct way to do this for non-negative x is:
int(x + random.random())
If for example x == 6.1, then there's a 10% chance that random.random() will be large enough to make x + random.random() >= 7.
Note that if x == 6, then this expression is guaranteed to return 6, because random.random() is always in the range [0, 1).
Update: This method only works for non-negative inputs. For a solution that works for negative numbers, see Chris Locke's answer.

For rounding positive values to integers, you can do this very concisely:
x = int(x) + (random.random() < x - int(x))
This works because Python's bool type is a subclass of int. The value True is equal to 1 and False is equal to 0.

I also came up with a solution based on the binomial function of numpy.random and the code already provided by shx2:
import numpy as np

def prob_round(x, prec = 0):
    fixup = np.sign(x) * 10**prec
    x *= fixup
    rounded = int(x) + np.random.binomial(1, x - int(x))
    return rounded / fixup
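One possible advantage of the binomial route (my own sketch, not part of the answer above) is that numpy can stochastically round an entire array at once:
import numpy as np

def prob_round_array(a):
    # floor(a) plus a Bernoulli draw whose probability is the fractional part;
    # this also handles negative values, since a - floor(a) is always in [0, 1).
    floor = np.floor(a)
    return floor + np.random.binomial(1, a - floor)

print(prob_round_array(np.array([6.1, -2.25, 0.9])))     # e.g. [ 6. -2.  1.]
print(prob_round_array(np.full(100000, 6.1)).mean())     # close to 6.1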

Here's an easy way:
x = round(random.random()*100)
The *100 part scales the result to roughly 0 to 100.
With *200, it would be roughly 0 to 200.

Related

Python 3 - float(X) * i = int(Z)

I have a number with a very large number of digits both before and after the decimal point, but for this example I'll just use 4.58.
I want to know the number Y that, when multiplied by X, yields an integer and not any sort of float.
Here is my code:
from decimal import *
setcontext(ExtendedContext)
getcontext().prec = 300

x = Decimal('4.58')
i = 1
while True:
    a = Decimal(i * x)
    if float(a).is_integer():
        print(i * x)
        break
    else:
        i += 1
However, this method is incredibly slow and inefficient. I was wondering how I could implement continued fractions or some other method to predict the value of Y?
Edit
The decimal module stores numbers exactly in decimal (rather than as binary floats), so 0.5 won't become 0.499999999.
Edit 2
I've got X (4.58).
I want to know what number will multiply by X to make an integer; as efficiently as possible.
Edit 3
Okay, maybe not my best question yet.
Here's my dilemma.
I've got a number spat out from a trivial programme I made. That number is a decimal number, 1.5.
All I want to do is find what integer will multiply by my decimal to yield another integer.
For 1.5, the best answer will be 2. (1.5*2=3) (float*int=int)
My while-loop above will do that, eventually, but I just wanted to know whether there was a better way to do this, such as continued fractions; and if there was, how could I implement it.
Edit 4
Here's my code thanks to user6794072. It's lengthy but functional.
from gmpy2 import mpz, isqrt
from fractions import Fraction
import operator
import functools
from decimal import *
setcontext(ExtendedContext)
getcontext().prec = 300

def factors(n):
    n = mpz(n)
    result = set()
    result |= {mpz(1), n}

    def all_multiples(result, n, factor):
        z = n
        f = mpz(factor)
        while z % f == 0:
            result |= {f, z // f}
            f += factor
        return result

    result = all_multiples(result, n, 2)
    result = all_multiples(result, n, 3)
    for i in range(1, isqrt(n) + 1, 6):
        i1 = i + 1
        i2 = i + 5
        if not n % i1:
            result |= {mpz(i1), n // i1}
        if not n % i2:
            result |= {mpz(i2), n // i2}
    return result

j = Decimal('4.58')
a = Fraction(j).numerator
b = Fraction(j).denominator
y = factors(a)
x = factors(b)
q = [item for item in x if item not in y]
w = [item for item in y if item not in x]
q.extend(w)
p = functools.reduce(operator.mul, q, 1)
ans = p * j
print(ans)
If I understand your question correctly, you want to find the smallest integer (i) that can be multiplied by a non-integer number (n) so that:
i*n is an integer
I would do this by finding the factors of the numerator and denominator for n. In your example, if n = 4.58, then you can extract 458 for the numerator and 100 for the denominator.
The factors of 458 are 2 and 229.
The factors of 100 are 2, 2, 5, 5.
You can cross off one instance of 2 from the numerator and the denominator. Then your solution is just the product of the remaining factors of the denominator: in this case, 2*5*5 or 50.
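As a sketch of the same idea (my own illustration, not from the answer above), Python's fractions.Fraction reduces to lowest terms automatically, so the denominator of the reduced fraction is exactly the multiplier described here, provided the input is given as a string or Decimal so no binary-float noise sneaks in:
from decimal import Decimal
from fractions import Fraction

def smallest_multiplier(n):
    # After reduction to lowest terms, the denominator is the smallest
    # positive integer i such that i * n is an integer.
    return Fraction(n).denominator

print(smallest_multiplier(Decimal('4.58')))   # 50
print(smallest_multiplier(Fraction('1.5')))   # 2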
Think of it this way: if you wanted to reach z = 1, then for any float x != 0.0, y = 1/x will yield z = 1; using the fact that z == z * 1 to scale the answer, for an arbitrary integer z just use y = z/x.
I'm not a Python programmer, but what about the round function?

Python Partial Harmonics

Could someone help check why the result is always one and let me know what I did wrong? Thanks
The correct result should be: 1/1 + 1/2 + 1/3 == 1.83333333333.
x = int(input("Enter n: "))
assert x > 0, "n must be greater than zero!"

def one_over_n(x):
    result = 0
    for n in range(x):
        n += 1
        result += 1 / n
    return result

r = one_over_n(x)
print("one_over_n( {0:d} ): {1:f}".format(x, r))
It will work correctly on Python 3, but not on Python 2:
>>> 1/2
0
That means that after the first term you are just adding zeros to one. You will need to change either the numerator or the denominator to a float, e.g. 1/2.0, so change your code to
result += 1.0 / n
See PEP 238 to see why this was changed in Python 3.
By the way, floating-point numbers can't represent all fractions, so if you are just adding fractions, you can use the Fraction class, e.g.
>>> from fractions import Fraction as F
>>> F(1,1) + F(1,2) + F(1,3)
Fraction(11, 6)
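To get a decimal value back at the end (a small illustrative addition), the exact Fraction can be converted once:
>>> float(F(1,1) + F(1,2) + F(1,3))
1.8333333333333333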
As an alternative, to force Python 2 to perform division as you expect (rather than integer division), add:
from __future__ import division

Generating evenly distributed bits, using approximation

I'm trying to generate 0 or 1 with a 50/50 chance of each, using random.uniform instead of random.getrandbits.
Here's what I have
0 if random.uniform(0, 1e-323) == 0.0 else 1
But if I run this long enough, roughly 70% of the generated values are 1, as seen here:
sum(0 if random.uniform(0, 1e-323) == 0.0
    else 1
    for _ in xrange(1000)) / 1000.0  # --> 0.737
If I change it to 1e-324, it will always be 0. And if I change it to 1e-322, the average will be ~90%.
I made a dirty program that will try to find the sweet spot between 1e-322 and 1e-324, by dividing and multiplying it several times:
v = 1e-323
n_runs = 100000
target = n_runs / 2
result = 0
while True:
    result = sum(0 if random.uniform(0, v) == 0.0 else 1 for _ in xrange(n_runs))
    if result > target:
        v /= 1.5
    elif result < target:
        v *= 1.5 / 1.4
    else:
        break
print v
This ends up with 4.94065645841e-324.
But it will still be wrong if I run it enough times.
Is there a way to find this number without the dirty script I wrote? I know that Python has an internal minimum float value, shown in sys.float_info.min, which on my PC is 2.22507385851e-308. But I don't see how to use it to solve this problem.
Sorry if this feels more like a puzzle than a proper question, but I'm not able to answer it myself.
I know that Python has an internal minimum float value, shown in sys.float_info.min, which on my PC is 2.22507385851e-308. But I don't see how to use it to solve this problem.
2.22507385851e-308 is not the smallest positive float value, it is the smallest positive normalized float value. The smallest positive float value is 2**-52 times that, that is, near 5e-324.
2**-52 is called the "machine epsilon", and it is usual to call the "min" of a floating-point type a value that is neither the least of all comparable values (that would be -inf), nor the least of the finite values (that would be -max), nor the least of the positive values: it is the smallest positive normalized value.
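A quick check of these relationships with sys.float_info (my own illustration, assuming IEEE 754 doubles):
import sys

print(sys.float_info.min)              # 2.2250738585072014e-308, smallest normalized
print(sys.float_info.epsilon)          # 2.220446049250313e-16, i.e. 2**-52
print(sys.float_info.min * 2**-52)     # 5e-324, smallest positive subnormal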
Then, the next problem you face is that random.uniform is not uniform down to that level. It probably works fine when you pass it a normalized number, but if you pass it the smallest positive representable float, the computation it does internally may be very approximate and lead it to behave differently than the documentation says. Although it appears to work surprisingly well according to the results of your "dirty script".
Here's the random.uniform implementation, according to the source:
from os import urandom as _urandom

BPF = 53        # Number of bits in a float
RECIP_BPF = 2**-BPF

def uniform(self, a, b):
    "Get a random number in the range [a, b) or [a, b] depending on rounding."
    return a + (b-a) * self.random()

def random(self):
    """Get the next random number in the range [0.0, 1.0)."""
    return (int.from_bytes(_urandom(7), 'big') >> 3) * RECIP_BPF
So, your problem boils down to finding a number b that will give 0 when multiplied by a number less than 0.5 and another result when multiplied by a number larger than 0.5. I've found out that, on my machine, that number is 5e-324.
To test it, I've made the following script:
from random import uniform
def test():
runs = 1000000
results = [0, 0]
for i in range(runs):
if uniform(0, 5e-324) == 0:
results[0] += 1
else:
results[1] += 1
print(results)
Which returned results consistent with a 50% probability:
>>> test()
[499982, 500018]
>>> test()
[499528, 500472]
>>> test()
[500307, 499693]

Generate random number between 0.1 and 1.0. Python

I'm trying to generate a random number between 0.1 and 1.0.
We can't use rand.randint because it returns integers.
We have also tried random.uniform(0.1, 1.0), but it returns a value >= 0.1 and < 1.0; we can't use this because our search also includes 1.0.
Does somebody else have an idea for this problem?
How "accurate" do you want your random numbers? If you're happy with, say, 10 decimal digits, you can just round random.uniform(0.1, 1.0) to 10 digits. That way you will include both 0.1 and 1.0:
round(random.uniform(0.1, 1.0), 10)
To be precise, 0.1 and 1.0 will have only half the probability of any other number in between and, of course, you lose all random numbers that differ only after 10 digits.
You could do this:
>>> import numpy as np
>>> import random
>>> a = .1
>>> b = np.nextafter(1, 2)
>>> print(b)
1.0000000000000002
>>> [a+(b-a)*random.random() for i in range(10)]
or, use numpy's uniform:
np.random.uniform(low=0.1, high=np.nextafter(1,2), size=1)
nextafter will produce the platform-specific next representable floating-point number towards a direction. Using numpy's random.uniform is advantageous because it is unambiguous that it does not include the upper bound.
Edit
It does appear that Mark Dickinson's comment is correct: Numpy's documentation is incorrect regarding whether the upper bound of random.uniform is inclusive.
The Numpy documentation states "All values generated will be less than high."
This is easily disproved:
>>> low=1.0
>>> high=1.0+2**-49
>>> a=np.random.uniform(low=low, high=high, size=10000)
>>> len(np.where(a==high)[0])
640
Nor is the result uniform over this limited range:
>>> for e in sorted(set(a)):
... print('{:.16e}: {}'.format(e,len(np.where(a==e)[0])))
...
1.0000000000000000e+00: 652
1.0000000000000002e+00: 1215
1.0000000000000004e+00: 1249
1.0000000000000007e+00: 1288
1.0000000000000009e+00: 1245
1.0000000000000011e+00: 1241
1.0000000000000013e+00: 1228
1.0000000000000016e+00: 1242
1.0000000000000018e+00: 640
However, combining J.F. Sebastian and Mark Dickinson's comments, I think this works:
import numpy as np
import random

def rand_range(low=0, high=1, size=1):
    a = np.nextafter(low, float('-inf'))
    b = np.nextafter(high, float('inf'))
    def r():
        def rn():
            return a + (b-a) * random.random()
        _rtr = rn()
        while _rtr > high:
            _rtr = rn()
        if _rtr < low:
            _rtr = low
        return _rtr
    return [r() for i in range(size)]
If run with the minimal spread of values in Mark's comment such that there are very few discrete floating point values:
l, h = 1, 1 + 2**-48
s = 10000
rands = rand_range(l, h, s)
se = sorted(set(rands))
if len(se) < 25:
    for i, e in enumerate(se, 1):
        c = rands.count(e)
        note = ''
        if e == l: note = 'low value end point'
        if e == h: note = 'high value end point'
        print('{:>2} {:.16e} {:,}, {:.4%} {}'.format(i, e, c, c/s, note))
It produces the desired uniform distribution inclusive of end points:
1 1.0000000000000000e+00 589, 5.8900% low value end point
2 1.0000000000000002e+00 544, 5.4400%
3 1.0000000000000004e+00 612, 6.1200%
4 1.0000000000000007e+00 569, 5.6900%
5 1.0000000000000009e+00 593, 5.9300%
6 1.0000000000000011e+00 580, 5.8000%
7 1.0000000000000013e+00 565, 5.6500%
8 1.0000000000000016e+00 584, 5.8400%
9 1.0000000000000018e+00 603, 6.0300%
10 1.0000000000000020e+00 589, 5.8900%
11 1.0000000000000022e+00 597, 5.9700%
12 1.0000000000000024e+00 591, 5.9100%
13 1.0000000000000027e+00 572, 5.7200%
14 1.0000000000000029e+00 619, 6.1900%
15 1.0000000000000031e+00 593, 5.9300%
16 1.0000000000000033e+00 592, 5.9200%
17 1.0000000000000036e+00 608, 6.0800% high value end point
On the values requested by the OP, it also produces a uniform distribution:
import matplotlib.pyplot as plt
l,h=.1,1
s=10000
bin_count=20
rands=rand_range(l,h,s)
count, bins, ignored = plt.hist(np.array(rands),bin_count)
plt.plot(bins, np.ones_like(bins)*s/bin_count, linewidth=2, color='r')
plt.show()
[Output: a histogram of the 10,000 samples, approximately flat across the 20 bins.]
random.uniform() is just:
def uniform(self, a, b):
    "Get a random number in the range [a, b) or [a, b] depending on rounding."
    return a + (b-a) * self.random()
where self.random() returns a random number in the range [0.0, 1.0).
Python (as well as many other languages) uses floating point to represent real numbers. How 0.1 is represented is described in detail in the docs:
from __future__ import division
BPF = 53 # assume IEEE 754 double-precision binary floating-point format
N = BPF + 3
assert 0.1 == 7205759403792794 / 2 ** N
It allows finding a random number in [0.1, 1] (inclusive) using randint() without losing precision:
from random import randint

n, m = 7205759403792794, 2 ** N
f = randint(n, m) / m
randint(n, m) returns a random integer in [n, m] (inclusive); therefore the above method can potentially return all floating-point numbers in [0.1, 1].
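A quick sanity check of the endpoints (my own illustration, using the constants defined above):
N = 53 + 3
n, m = 7205759403792794, 2 ** N
assert n / m == 0.1   # the lower endpoint is reachable and exact
assert m / m == 1.0   # the upper endpoint is reachable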
An alternative is to find the smallest x such that x > 1 and use:
f = uniform(.1, x)
while f > 1:
    f = uniform(.1, x)
x should be the smallest value that avoids losing precision and reduces the number of calls to uniform(), e.g.:
import sys
# from itertools import count
# decimal.Decimal(1).next_plus() analog
# x = next(x for i in count(1) for x in [(2**BPF + i) / 2**BPF] if x > 1)
x = 1 + sys.float_info.epsilon
Both solutions preserve uniformness of the random distribution (no skew).
With the information you've given (including comments thus far), I still fail to see how the university is going to test your program such that it will make a difference if 1.0 appears or not. (I mean, if you're required to generate random floats, how can they require that any particular value appears?)
OK, so putting the craziness of your requirements aside:
The fact that the lower bound for your random floats is higher than 0 gives you a disturbingly elegant way to use random.random, which guarantees return values in the interval [0.0, 1.0): Simply keep calling random.random, throwing away any values less than 0.1, except 0.0. If you actually get 0.0, return 1.0 instead.
So something like
from random import random

def myRandom():
    while True:
        r = random()
        if r >= 0.1:
            return r
        if r == 0.0:
            return 1.0
You can use random.randint simply by doing this trick:
>>> float(random.randint(1000,10000)) / 10000
0.4362
If you want more decimal places, just change the interval (and the divisor to match):
(1000, 10000) for 4 digits
(10000, 100000) for 5 digits
etc.
In numpy, you can do the following:
import numpy
numpy.random.uniform(0.1, numpy.nextafter(1, 2))
Are you unable to use random.random()? This gives a number in the range [0.0, 1.0), though you could easily set up a way to get around this.
import random

def randomForMe():
    number = random.random()
    number = round(number, 1)
    if number == 0:
        number = 0.1
    return number
This code would give you a number that is between 0.1 and 1.0, inclusive (0.1 and 1.0 are both possible solutions). Hope this helps.
*I assumed you only wanted numbers to the tenths place. If you want it different, where I typed round(number, 1) change 1 to 2 for hundredths, 3 for thousandths, and so on.
The standard way would be random.random() * 0.9 + 0.1 (random.uniform() internally does just this). This will return numbers between 0.1 and 1.0 without the upper bound.
But wait! 0.1 (aka ¹/₁₀) has no exact binary representation (just as ⅓ has none in decimal)! So you won't get a true 0.1 anyway, simply because the computer cannot represent it internally. Sorry ;-)
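One way to see this (a small illustrative check, not part of the answer above) is to convert the float to Decimal, which exposes the exact value actually stored:
from decimal import Decimal
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625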
Try
random.randint(1, 10) / 10.0
According to the Python 3.0 documentation:
random.uniform(a, b) Return a random floating point number N such that a <= N <= b for a <= b and b <= N <= a for b < a.
Thus, random.uniform() does, in fact, include the upper limit, at least on Python 3.0.
EDIT: As pointed out by @Blender, the documentation for Python 3.0 seems to be inconsistent with the source code on this point.
EDIT 2: As pointed out by @MarkDickinson, I had unintentionally linked to the Python 3.0 documentation instead of the latest Python 3 documentation, which reads as follows:
random.uniform(a, b) Return a random floating point number N such that a <= N <= b for a <= b and b <= N <= a for b < a. The end-point value b may or may not be included in the range depending on floating-point rounding in the equation a + (b-a) * random().

Checking if float is equivalent to an integer value in python

In Python 3, I am checking whether a given value is triangular, that is, it can be represented as n * (n + 1) / 2 for some positive integer n.
Can I just write:
import math

def is_triangular1(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    return int(num) == num
Or do I need to do check within a tolerance instead?
epsilon = 0.000000000001

def is_triangular2(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    return abs(int(num) - num) < epsilon
I checked that both functions return the same results for x up to 1,000,000. But I am not sure whether, generally speaking, int(x) == x will always correctly determine whether a number is an integer, because of cases where for example 5 is represented as 4.99999999999997, etc.
As far as I know, the second way is the correct one if I do it in C, but I am not sure about Python 3.
There is an is_integer method on Python's float type:
>>> float(1.0).is_integer()
True
>>> float(1.001).is_integer()
False
>>>
Both your implementations have problems. It actually can happen that you end up with something like 4.999999999999997, so using int() is not an option.
I'd go for a completely different approach: First assume that your number is triangular, and compute what n would be in that case. In that first step, you can round generously, since it's only necessary to get the result right if the number actually is triangular. Next, compute n * (n + 1) / 2 for this n, and compare the result to x. Now, you are comparing two integers, so there are no inaccuracies left.
The computation of n can be simplified by expanding
(1/2) * (math.sqrt(8*x+1)-1) = math.sqrt(2 * x + 0.25) - 0.5
and utilizing that
round(y - 0.5) = int(y)
for positive y.
def is_triangular(x):
    n = int(math.sqrt(2 * x))
    return x == n * (n + 1) / 2
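A quick check of this function (my own illustration), reusing the is_triangular just defined:
print([x for x in range(1, 25) if is_triangular(x)])   # [1, 3, 6, 10, 15, 21]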
You'll want to do the latter. In Programming in Python 3 the following example is given as the most accurate way to compare
def equal_float(a, b):
    #return abs(a - b) <= sys.float_info.epsilon
    return abs(a - b) <= chosen_value  # see edit below for more info
Also, since epsilon is the "smallest difference the machine can distinguish between two floating-point numbers", you'll want to use <= in your function.
Edit: After reading the comments below I have looked back at the book, and it specifically says "Here is a simple function for comparing floats for equality to the limit of the machine's accuracy". I believe this was just an example for comparing floats to extreme precision, but given the error introduced by many float calculations, this should rarely if ever be used. I characterized it as the "most accurate" way to compare in my answer, which in some sense is true, but it is rarely what is intended when comparing floats or integers to floats. Choosing a value (e.g. 0.00000000001) based on the "problem domain" of the function instead of using sys.float_info.epsilon is the correct approach.
Thanks to S.Lott and Sven Marnach for their corrections, and I apologize if I led anyone down the wrong path.
Python does have a Decimal class (in the decimal module), which you could use to avoid the imprecision of floats.
floats can exactly represent all integers in their range - floating-point equality is only tricky if you care about the bit after the point. So, as long as all of the calculations in your formula return whole numbers for the cases you're interested in, int(num) == num is perfectly safe.
So, we need to prove that for any triangular number, every piece of maths you do can be done with integer arithmetic (and anything coming out as a non-integer must imply that x is not triangular):
To start with, we can assume that x must be an integer - this is required in the definition of 'triangular number'.
This being the case, 8*x + 1 will also be an integer, since the integers are closed under + and * .
math.sqrt() returns a float; but if x is triangular, then the square root will be a whole number, i.e., again exactly represented.
So, for all x that should return true in your functions, int(num) == num will be true, and so your is_triangular1 will always work. The only sticking point, as mentioned in the comments to the question, is that Python 2 by default does integer division in the same way as C - int/int => int, truncating if the result can't be represented exactly as an int. So, 1/2 == 0. This is fixed in Python 3, or by having the line
from __future__ import division
near the top of your code.
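A small worked check of this argument (my own illustration):
import math

x = 10                                    # triangular: 4 * 5 / 2
num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
print(num, int(num) == num)               # 4.0 True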
I think the decimal module is what you need.
You can round your number to e.g. 14 decimal places or less:
>>> round(4.999999999999997, 14)
5.0
PS: double precision is about 15 decimal places
It is hard to argue with standards.
In C99 and POSIX, the standard for rounding a float to an int is defined by nearbyint(). The important concepts are the direction of rounding and the locale-specific rounding convention.
Assuming the convention is common rounding, this is the same as the C99 convention in Python:
#!/usr/bin/python
import math

infinity = math.ldexp(1.0, 1023) * 2

def nearbyint(x):
    """returns the nearest int as the C99 standard would"""
    # handle NaN
    if x != x:
        return x
    if x >= infinity:
        return infinity
    if x <= -infinity:
        return -infinity
    if x == 0.0:
        return x
    return math.floor(x + 0.5)
If you want more control over rounding, consider using the Decimal module and choose the rounding convention you wish to employ. You may want to use Banker's Rounding for example.
Once you have decided on the convention, round to an int and compare to the other int.
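For example, a minimal sketch of Banker's Rounding with the decimal module (my own illustration, not part of the answer above):
from decimal import Decimal, ROUND_HALF_EVEN

# Ties round to the nearest even integer.
print(Decimal('2.5').quantize(Decimal('1'), rounding=ROUND_HALF_EVEN))   # 2
print(Decimal('3.5').quantize(Decimal('1'), rounding=ROUND_HALF_EVEN))   # 4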
Consider using NumPy; it takes care of everything under the hood.
import numpy as np
result_bool = np.isclose(float1, float2)
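For instance (my own illustration; np.isclose uses default tolerances rtol=1e-05 and atol=1e-08):
import numpy as np

num = 4.999999999999997
print(np.isclose(num, round(num)))   # True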
Python has unlimited integer precision, but only 53 bits of float precision. When you square a number, you double the number of bits it requires. This means that the ULP of the original number is (approximately) twice the ULP of the square root.
You start running into issues with numbers around 50 bits or so, because the difference between the fractional representation of an irrational root and the nearest integer can be smaller than the ULP. Even in this case, checking if you are within tolerance will do more harm than good (by increasing the number of false positives).
For example:
>>> x = (1 << 26) - 1
>>> (math.sqrt(x**2)).is_integer()
True
>>> (math.sqrt(x**2 + 1)).is_integer()
False
>>> (math.sqrt(x**2 - 1)).is_integer()
False
>>> y = (1 << 27) - 1
>>> (math.sqrt(y**2)).is_integer()
True
>>> (math.sqrt(y**2 + 1)).is_integer()
True
>>> (math.sqrt(y**2 - 1)).is_integer()
True
>>> (math.sqrt(y**2 + 2)).is_integer()
False
>>> (math.sqrt(y**2 - 2)).is_integer()
True
>>> (math.sqrt(y**2 - 3)).is_integer()
False
You can therefore rework the formulation of your problem slightly. If an integer x is a triangular number, there exists an integer n such that x = n * (n + 1) // 2. The resulting quadratic is n**2 + n - 2 * x = 0. All you need to know is if the discriminant 1 + 8 * x is a perfect square. You can compute the integer square root of an integer using math.isqrt starting with python 3.8. Prior to that, you could use one of the algorithms from Wikipedia, implemented on SO here.
You can therefore stay entirely in python's infinite-precision integer domain with the following one-liner:
import math

def is_triangular(x):
    return math.isqrt(k := 8 * x + 1)**2 == k
Now you can do something like this:
>>> x = 58686775177009424410876674976531835606028390913650409380075
>>> math.isqrt(k := 8 * x + 1)**2 == k
True
>>> math.isqrt(k := 8 * (x + 1) + 1)**2 == k
False
>>> math.sqrt(k := 8 * x + 1)**2 == k
False
The first result is correct: x in this example is a triangular number computed with n = 342598234604352345342958762349.
Python still uses the same floating point representation and operations C does, so the second one is the correct way.
Under the hood, Python's float type is a C double.
The most robust way would be to get the nearest integer to num, then test whether that integer satisfies the property you're after:
import math

def is_triangular1(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    inum = int(round(num))
    return inum * (inum + 1) == 2 * x  # This line uses only integer arithmetic
