NumPy does not seem to play well with complex infinities
While we can evaluate:
In[2]: import numpy as np
In[3]: np.mean([1, 2, np.inf])
Out[3]: inf
The following result is more cumbersome:
In[4]: np.mean([1 + 0j, 2 + 0j, np.inf + 0j])
Out[4]: (inf+nanj)
..._methods.py:80: RuntimeWarning: invalid value encountered in cdouble_scalars
ret = ret.dtype.type(ret / rcount)
I'm not sure the imaginary part makes sense to me, but please do comment if I'm wrong.
Any insight into interacting with complex infinities in numpy?
Solution
To compute the mean we divide the sum by a real number. This division causes problems because of type promotion (see below). To avoid type promotion we can perform this division manually, separately for the real and imaginary parts of the sum:
n = 3
s = np.sum([1 + 0j, 2 + 0j, np.inf + 0j])
mean = np.real(s) / n + 1j * np.imag(s) / n
print(mean) # (inf+0j)
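A variant of the same idea that avoids hard-coding the length (a small sketch of mine, not from the question; arr is an arbitrary complex array):
import numpy as np

arr = np.array([1 + 0j, 2 + 0j, np.inf + 0j])
# Average the real and imaginary parts separately, so no complex
# division (and hence no inf * 0 term) is ever performed.
mean = np.mean(arr.real) + 1j * np.mean(arr.imag)
print(mean)  # (inf+0j)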
Rationale
The issue is not related to numpy but to the way complex division is performed. Observe that ((1 + 0j) + (2 + 0j) + (np.inf + 0j)) / (3+0j) also results in (inf+nanj).
The result needs to be split into a real and imaginary part. For division, both operands are promoted to complex, even if you divide by a real number. So basically the division is:
a + bj
--------
c + dj
The division operation does not know that d=0. So to split the result into real and imaginary it has to get rid of the j in the denominator. This is done by multiplying numerator and denominator with the complex conjugate:
a + bj (a + bj) * (c - dj) ac + bd + bcj - adj
-------- = --------------------- = ---------------------
c + dj (c + dj) * (c - dj) c**2 + d**2
Now, if a=inf and d=0 the term a * d * j = inf * 0 * j = nan * j.
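A quick check of that last step in plain Python (a sketch; the variable names are mine):
a, b = float('inf'), 0.0   # numerator: inf + 0j
c, d = 3.0, 0.0            # denominator: 3 + 0j
real = (a*c + b*d) / (c**2 + d**2)   # inf
imag = (b*c - a*d) / (c**2 + d**2)   # contains inf * 0 -> nan
print(real, imag)  # inf nan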
When you run a function such as np.mean() or np.max() on an array containing np.inf, the result is simply the infinity object. But in this case, for calculating the mean(), you have complex numbers, and a complex infinity is defined as an infinite number in the complex plane whose complex argument is unknown or undefined, so you're getting nan*j as the imaginary part.
In order to get around this problem, you should ignore the infinite items in such mathematical operations. You can use the np.isfinite() function to detect them and apply the function only to the finite items:
In [16]: arr = np.array([1 + 0j, 2 + 0j, np.inf + 0j])
In [17]: arr[np.isfinite(arr)]
Out[17]: array([ 1.+0.j, 2.+0.j])
In [18]: np.mean(arr[np.isfinite(arr)])
Out[18]: (1.5+0j)
Because of type promotion.
When you do the division of a complex by a real, like (inf + 0j) / 2, the (real) divisor gets promoted to 2 + 0j.
And by complex division, the imaginary part is equal to (0 * 2 - inf * 0) / 4. Note the inf * 0 here which is an indeterminate form, and it evaluates to NaN. This makes the imaginary part NaN.
And back to the topic. When numpy calculates the mean of a complex array, it really doesn't try to do anything clever. First it reduces the array with the "addition" operation, obtaining the sum. After that, the sum is divided by the count. This sum contains an inf in the real part, which causes the trouble described above when the divisor (count) gets promoted from integral type to complex floating point.
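This is easy to reproduce directly (a small demonstration of mine, not from the original answer):
import numpy as np

s = np.complex128(np.inf)   # the sum: real part inf, imaginary part 0
print(s / 3)                # (inf+nanj): the real divisor 3 is promoted to 3+0j
print(np.real(s) / 3)       # inf: dividing the real part alone is fine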
Edit: a word about the solution
The IEEE floating point "infinity" is really a very primitive construct that represents indeterminate forms like 1 / 0. These forms are not constant numbers, but possible limits. The special inf or NaN "floating point numbers" are placeholders that notify you of the presence of indeterminate forms. They say nothing about the existence or type of the limit, which you must determine from the mathematical context.
Even for real numbers, the underlying limit can depend on how you approach the limit. A superficial 1 / 0 form can go to positive or negative infinity. On the complex plane, things are even more complex (well). For example, you may run into branch cuts and (different kinds of) singularities. There's no universal solution that fits all.
Tl;dr: Fix the underlying problem in the face of ambiguous/incomplete/corrupted data, or prove that the end computational result can withstand such corruption (which can happen).
Related
I am using Python 3.7.7 and numpy 1.19.1. This is the code:
import numpy as np
a = 55.74947517067784019673 + 0j
print(f'{-a == -1 * a}, {np.angle(-a)}, {np.angle(-1 * a)}')
and this is the output:
True, -3.141592653589793, 3.141592653589793
I have two questions:
Why does the angle function give different outputs for the same input?
According to the documentation, the angle output range is (-pi, pi], so why is one of the outputs -np.pi?
If you look at the source of np.angle, you will see that it uses the function np.arctan2. Now, according to the numpy docs, np.arctan2 uses the underlying C library, which has the following rule:
Note that +0 and -0 are distinct floating point numbers, as are +inf and -inf.
which results in different behavior when calculating using +/-0. So, in this case, the rule is:
y: +/- 0
x: <0
angle: +/- pi
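You can verify this rule with np.arctan2 directly (a quick check of mine, not from the original answer):
import numpy as np

print(np.arctan2(0.0, -1.0))    # 3.141592653589793  (+0 numerator, negative x)
print(np.arctan2(-0.0, -1.0))   # -3.141592653589793 (-0 numerator, negative x)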
Now, if you try:
a = 55.74947517067784019673
print(f'{-a == -1 * a}, {np.angle(-a)}, {np.angle(-1 * a)}')
#True, 3.141592653589793, 3.141592653589793
and if you try:
a = 55.74947517067784019673 + 0j
print(-a)
#(-55.74947517067784-0j)
print(-1*a)
#(-55.74947517067784+0j)
print(f'{-a == -1 * a}, {np.angle(-a)}, {np.angle(-1 * a)}')
#True, -3.141592653589793, 3.141592653589793
which is in line with the library convention.
As for your second question, I guess it is a typo/mistake since the np.arctan2 doc says:
Array of angles in radians, in the range [-pi, pi]. This is a scalar if both x1 and x2 are scalars.
Explanation of -a vs. -1*a:
To start with, 55.74947517067784019673 + 0j is NOT the construction of a complex number but merely the addition of a float to a complex number (to construct a complex number explicitly, use complex(55.74947517067784019673, 0.0), and beware that only floats have signed zeros; integers do not). -a simply flips the sign and is quite self-explanatory. Let's see what happens when we calculate -1*a:
For simplicity assume a = 55.5 + 0j
First, a = 55.5 + 0j is converted to complex(55.5, 0.0).
Second, -1 is promoted to complex(-1.0, 0.0).
Then complex(-1.0, 0.0) * complex(55.5, 0.0) equals complex((-1.0*55.5 - 0.0*0.0), (-1.0*0.0 + 0.0*55.5)), which equals complex((-55.5 - 0.0), (-0.0 + 0.0)), which in turn equals complex(-55.5, 0.0).
Note that -0.0 + 0.0 equals 0.0, and the sign rule only applies to multiplication and division, as mentioned in this link and quoted in the comments below. To better understand it, see this:
print(complex(-1.0, -0.0)*complex(55.5, 0.0))
#(-55.5-0j)
where the imaginary part is (-0.0*55.5 - 1.0*0.0) = (-0.0 - 0.0) = -0.0
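So the sign of the zero imaginary part is exactly what np.angle (via arctan2) ends up seeing; a quick check of the two degenerate inputs (a sketch of mine):
import numpy as np

print(np.angle(complex(-55.5, -0.0)))  # -3.141592653589793, like -a
print(np.angle(complex(-55.5, 0.0)))   # 3.141592653589793, like -1*a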
For 1) print -a and -1*a, you'll see they are different.
-a
Out[4]: (-55.74947517067784-0j)
-1*a
Out[5]: (-55.74947517067784+0j) # note +0j not -0j
Without knowing the details of the numpy implementation, the sign of the imaginary part is probably used to compute the angle... which could explain why this degenerate case gives different results.
For 2) this looks like a bug or a documentation mistake to me then...
On my computer, I can check that
(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
evaluates to False.
More generally, I can estimate that the formula (a + b) + c == a + (b + c) fails roughly 17% of the time when a,b,c are chosen uniformly and independently on [0,1], using the following simulation:
import numpy as np
import numexpr
np.random.seed(0)
formula = '(a + b) + c == a + (b + c)'
def failure_expectation(formula=formula, N=10**6):
    a, b, c = np.random.rand(3, N)
    return 1.0 - numexpr.evaluate(formula).mean()

print(failure_expectation())  # e.g. 0.171744
I wonder if it is possible to arrive at this probability by hand, e.g. using the definitions in the floating point standard and some assumption on the uniform distribution.
Given the answer below, I assume that the following part of the original question is out of reach, at least for now.
Is there a tool that computes the failure probability for a given
formula without running a simulation.
Formulas can be assumed to be simple, e.g. involving the use of
parentheses, addition, subtraction, and possibly multiplication and
division.
(What follows may be an artifact of numpy random number generation, but still seems fun to explore.)
Bonus question based on an observation by NPE. We can use the following code to generate failure probabilities for uniform distributions on a sequence of ranges [[-n,n] for n in range(100)]:
import pandas as pd
def failures_in_symmetric_interval(n):
    a, b, c = (np.random.rand(3, 10**4) - 0.5) * n
    return 1.0 - numexpr.evaluate(formula).mean()

s = pd.Series({
    n: failures_in_symmetric_interval(n)
    for n in range(100)
})
The plot looks something like this:
In particular, failure probability dips down to 0 when n is a power of 2 and seems to have a fractal pattern. It also looks like every "dip" has a failure probability equal to that of some previous "peak". Any elucidation of why this happens would be great!
It's definitely possible to evaluate these things by hand, but the only methods I know are tedious and involve a lot of case-by-case enumeration.
For example, for your specific example of determining the probability that (a + b) + c == a + (b + c), that probability is 53/64, to within a few multiples of the machine epsilon. So the probability of a mismatch is 11/64, or around 17.19%, which agrees with what you were observing from your simulation.
To start with, note that there's a major simplifying factor in this particular case, and that's that Python and NumPy's "uniform-on-[0, 1]" random numbers are always of the form n/2**53 for some integer n in range(2**53), and within the constraints of the underlying Mersenne Twister PRNG, each such number is equally likely to occur. Since there are around 2**62 IEEE 754 binary64 representable values in the range [0.0, 1.0], that means that the vast majority of those IEEE 754 values aren't generated by random.random() (or np.random.rand()). This fact greatly simplifies the analysis, but also means that it's a bit of a cheat.
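You can check that claim empirically (a small demonstration of mine; it relies on the generator producing 53-bit multiples of 2**-53):
import numpy as np

np.random.seed(0)
samples = np.random.rand(10**6)
# every generated value is n / 2**53 for some integer n,
# so scaling by 2**53 (exact) must yield whole numbers
print(np.all(samples * 2**53 == np.floor(samples * 2**53)))  # True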
Here's an incomplete sketch, just to give an idea of what's involved. To compute the value of 53/64, I had to divide into five separate cases:
The case where both a + b < 1 and b + c < 1. In this case, both a + b and b + c are computed without error, and (a + b) + c and a + (b + c) therefore both give the closest float to the exact result, rounding ties to even as usual. So in this case, the probability of agreement is 1.
The case where a + b < 1 and b + c >= 1. Here (a + b) + c will be the correctly rounded value of the true sum, but a + (b + c) may not be. We can divide further into subcases, depending on the parity of the least significant bits of a, b and c. Let's abuse terminology and call a "odd" if it's of the form n/2**53 with n odd, and "even" if it's of the form n/2**53 with n even, and similarly for b and c. If b and c have the same parity (which will happen half the time), then (b + c) is computed exactly and again a + (b + c) must match (a + b) + c. For the other cases, the probability of agreement is 1/2 in each case; the details are all very similar, but for example in the case where a is odd, b is odd and c is even, (a + b) + c is computed exactly, while in computing a + (b + c) we incur two rounding errors, each of magnitude exactly 2**-53. If those two errors are in opposite directions, they cancel and we get agreement. If not, we don't. Overall, there's a 3/4 probability of agreement in this case.
The case where a + b >= 1 and b + c < 1. This is identical to the previous case after swapping the roles of a and c; the probability of agreement is again 3/4.
a + b >= 1 and b + c >= 1, but a + b + c < 2. Again, one can split on the parities of a, b and c and look at each of the resulting 8 cases in turn. For the cases even-even-even and odd-odd-odd we always get agreement. For the case odd-even-odd, the probability of agreement turns out to be 3/4 (by yet further subanalysis). For all the other cases, it's 1/2. Putting those together gets an aggregate probability of 21/32 for this case.
Case a + b + c >= 2. In this case, since we're rounding the final result to a multiple of four times 2**-53, it's necessary to look not just at the parities of a, b, and c, but to look at the last two significant bits. I'll spare you the gory details, but the probability of agreement turns out to be 13/16.
Finally, we can put all these cases together. To do that, we also need to know the probability that our triple (a, b, c) lands in each case. The probability that a + b < 1 and b + c < 1 is the volume of the square-based pyramid described by 0 <= a, b, c <= 1, a + b < 1, b + c < 1, which is 1/3. The probabilities of the other four cases can be seen (either by a bit of solid geometry, or by setting up suitable integrals) to be 1/6 each.
So our grand total is 1/3 * 1 + 1/6 * 3/4 + 1/6 * 3/4 + 1/6 * 21/32 + 1/6 * 13/16, which comes out to be 53/64, as claimed.
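The final arithmetic is easy to double-check with exact rationals (a small verification of mine, not part of the original derivation):
from fractions import Fraction as F

case_probs = [F(1, 3), F(1, 6), F(1, 6), F(1, 6), F(1, 6)]      # how often each case occurs
agree_probs = [F(1), F(3, 4), F(3, 4), F(21, 32), F(13, 16)]    # agreement probability per case
total = sum(p * q for p, q in zip(case_probs, agree_probs))
print(total, 1 - total)   # 53/64 11/64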
A final note: 53/64 almost certainly isn't quite the right answer - to get a perfectly accurate answer we'd need to be careful about all the corner cases where a + b, b + c, or a + b + c hit a binade boundary (1.0 or 2.0). It would certainly be possible to refine the above approach to compute exactly how many of the 2**159 possible triples (a, b, c) satisfy (a + b) + c == a + (b + c), but not before it's time for me to go to bed. But the corner cases should constitute on the order of 1/2**53 of the total number of cases, so our estimate of 53/64 should be accurate to at least 15 decimal places.
Of course, there are lots of details missing above, but I hope it gives some idea of how it might be possible to do this.
How can I calculate, in Python and without numpy, the geometric mean of a list of numbers in a safe way, so that I avoid the RuntimeWarning which this code sometimes produces:
from functools import reduce
from operator import mul

data = [1, 2, 3, 4, 5]
result = reduce(mul, data) ** (1 / len(data))
I found out that I can use the log function below to get the same result, but I have an issue with the log function not accepting negative values.
result = (1 / len(data)) * sum(list(map(math.log10, data)))
Can I map the data with the abs function before mapping to log10?
Is there better way?
Generally, the n-th roots of negative numbers are complex numbers.
The code below works with cmath's base-e log and exponentiation:
from functools import reduce
import operator
from cmath import log, e
data = [1,2,3,4,5]
rmul = reduce(operator.mul, data) ** (1 / len(data))
rln = e**((1 / len(data)) * sum(list(map(log, data))))
rmul, rln
Out[95]: (2.605171084697352, (2.6051710846973517+0j))
data = [1,2,3,-4,5]
rmul = reduce(operator.mul, data) ** (1 / len(data))
rln = e**((1 / len(data)) * sum(list(map(log, data))))
rmul, rln
Out[96]:
((2.1076276807743737+1.531281143283889j),
(2.1076276807743732+1.5312811432838889j))
some checks:
abs(rln)
Out[97]: 2.6051710846973517
rln**5
Out[98]: (-120.00000000000003-1.4210854715202004e-14j)
For more fun and argument:
'the' square root of a positive value a isn't a single positive number; it is both of the signed values +/- sqrt(a),
and 'the' square root of a negative a is similarly both of the values +/- 1j * sqrt(abs(a)).
Geometric means with negative numbers are not well-defined. There are several workarounds available which depend on your application. Please see this and also this paper. The main points are:
When all the numbers are negative, you may be able to define a geometric mean by temporarily suspending the signs, taking the geometric mean of the absolute values, and then adding the sign back.
If you have a mix of positive and negative numbers, and an odd number of them are negative, then the geometric mean becomes undefined. In any case, because you're ignoring the signs, the result is not meaningful.
It may be possible to separately evaluate the positive and negative parts, calculate their means, and then combine them with some weights as the paper does, but the accuracy will depend on various factors (also described there).
In terms of the code, I do not get a RuntimeWarning (see below). If you can show an example of your code, I can try to reproduce it and update my answer. And yes, you cannot pass negative values to log, so you have to take the absolute values where appropriate (as described above). Note that with Python 2 you have to either use from __future__ import division or take the fractional power with a floating-point exponent, otherwise you'll get the wrong result.
>>> data = [1,2,3,4,5]
>>> import operator
>>> result = reduce(operator.mul, data) ** (1 / len(data))
>>> result
1
>>> result = reduce(operator.mul, data) ** (1.0 / len(data))
>>> result
2.605171084697352
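To illustrate the "suspend the signs" workaround described above, here is a minimal sketch (the helper name signed_geometric_mean is mine; it only treats the all-positive and all-negative cases as well-defined):
import math

def signed_geometric_mean(data):
    # geometric mean of the absolute values, with the common sign reattached
    if all(x > 0 for x in data):
        sign = 1.0
    elif all(x < 0 for x in data):
        sign = -1.0
    else:
        raise ValueError("mixed signs or zeros: geometric mean is not well-defined here")
    log_mean = sum(math.log(abs(x)) for x in data) / len(data)
    return sign * math.exp(log_mean)

print(signed_geometric_mean([1, 2, 3, 4, 5]))       # about 2.6052
print(signed_geometric_mean([-1, -2, -3, -4, -5]))  # about -2.6052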
I have a function that has a loop, inside of which I do both division and multiplication. The final answer is easily representable, and so should the running value be.
def tie(total):
    count = total / 2
    prob = 1.0
    for i in xrange(1, count + 1):
        i_f = float(i)
        prob *= (count + i_f) / i_f / 4
    return prob
tie(4962) == 0.01132634537589437
but
tie(4964) == inf
Is the compiler trying to do some optimization, performing the arithmetic operations in an order other than the one I specified, an order that is supposedly equivalent but causes the overflow?
You're running into issues because even though the final result of your tie function should mathematically be between 0 and 1, the intermediate values in your loop grow very large: for total = 4962, the value of prob halfway through the iteration is around 1.5e308, which is almost but not quite large enough to overflow a Python float. For total = 4964, the mid-way value really does overflow a float, and since inf times anything finite is still inf, the inf from the overflow propagates all the way down to the final value.
If you're prepared to accept a (fairly small) amount of floating-point error, there's no need to compute this quantity using a loop at all: you can use the lgamma function from the math module to compute the log of the relevant factorials. (You could also use the gamma function directly, but that would likely also lead to overflow issues.)
Here's a version of your function based on this.
from math import lgamma, log, exp
def tie(total):
    count = total / 2
    return exp(lgamma(2*count + 1) - 2*lgamma(count + 1) - count*log(4))
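A quick sanity check of this version (the first value follows from the closed form C(2n, n) / 4**n, the second is compared against the loop-based value quoted in the question):
print(tie(10))    # about 0.24609375, i.e. C(10, 5) / 4**5
print(tie(4962))  # about 0.0113263..., close to the loop-based value in the question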
Alternatively, you could compute the 2n-choose-n term using pure integer arithmetic (which won't cause overflow), and only produce a float at the last moment (when dividing by 4**count). This will be less efficient than the above, but will give you (in a sense) perfect accuracy, in that it'll give the closest representable float to the exact answer. Here's what that version looks like:
from __future__ import division
def tie(total):
    count = total // 2
    prod = 1
    for i in xrange(1, count+1):
        prod = prod * (count + i) // i
    return prod / 4**count
Note: the floor division in prod * (count + i) // i may look wrong, but it actually works: a little bit of elementary number theory shows that at this point in the calculation, prod * (count + i) must be divisible by i, so it's safe to do an integer division.
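A quick sanity check of that divisibility claim, using a small count (a sketch of mine in Python 3 syntax; math.comb needs Python 3.8+):
from math import comb

count = 5
prod = 1
for i in range(1, count + 1):
    assert prod * (count + i) % i == 0   # exact division at every step
    prod = prod * (count + i) // i
assert prod == comb(2 * count, count)    # prod ends up as C(2n, n) = 252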
Finally, just for fun, here's a third way to compute your probability that's similar in spirit to your original code, but avoids overflow: the value prob starts at 1.0 and steadily decreases to the final value.
def tie(total):
    count = total // 2
    prob = 1.0
    for i in xrange(1, count+1):
        prob *= (i-0.5) / i
    return prob
Besides being immune from overflow issues, this solution will be more efficient than the integer-based solution, and more accurate than the lgamma-based one.
prob grows to be quite large and eventually overflows. Given the name, did you intend prob to always be between 0 and 1?
What do you mean "controlled calculation"? What causes the overflow is prob getting bigger and bigger.
Your prob variable grows very large, and for total equal to 4964 it exceeds the maximum Python float value, sys.float_info.max:
>>> import sys
>>> print(sys.float_info.max)
1.7976931348623157e+308
In Python 3, I am checking whether a given value is triangular, that is, it can be represented as n * (n + 1) / 2 for some positive integer n.
Can I just write:
import math
def is_triangular1(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    return int(num) == num
Or do I need to do check within a tolerance instead?
epsilon = 0.000000000001
def is_triangular2(x):
    num = (1 / 2) * (math.sqrt(8 * x + 1) - 1)
    return abs(int(num) - num) < epsilon
I checked that both of the functions return the same results for x up to 1,000,000. But I am not sure if, generally speaking, int(x) == x will always correctly determine whether a number is an integer, because of cases when, for example, 5 is represented as 4.99999999999997 etc.
As far as I know, the second way is the correct one if I do it in C, but I am not sure about Python 3.
There is an is_integer method on Python's float type:
>>> float(1.0).is_integer()
True
>>> float(1.001).is_integer()
False
>>>
Both your implementations have problems. It actually can happen that you end up with something like 4.999999999999997, so using int() is not an option.
I'd go for a completely different approach: First assume that your number is triangular, and compute what n would be in that case. In that first step, you can round generously, since it's only necessary to get the result right if the number actually is triangular. Next, compute n * (n + 1) / 2 for this n, and compare the result to x. Now, you are comparing two integers, so there are no inaccuracies left.
The computation of n can be simplified by expanding
(1/2) * (math.sqrt(8*x+1)-1) = math.sqrt(2 * x + 0.25) - 0.5
and utilizing that
round(y - 0.5) = int(y)
for positive y.
def is_triangular(x):
    n = int(math.sqrt(2 * x))
    return x == n * (n + 1) / 2
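A quick check of this approach (10 = 4*5/2 is triangular, 11 is not):
print(is_triangular(10))  # True
print(is_triangular(11))  # False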
You'll want to do the latter. In Programming in Python 3 the following example is given as the most accurate way to compare
def equal_float(a, b):
    # return abs(a - b) <= sys.float_info.epsilon
    return abs(a - b) <= chosen_value  # see edit below for more info
Also, since epsilon is the "smallest difference the machine can distinguish between two floating-point numbers", you'll want to use <= in your function.
Edit: After reading the comments below I have looked back at the book, and it specifically says "Here is a simple function for comparing floats for equality to the limit of the machine's accuracy". I believe this was just an example for comparing floats to extreme precision; given the error that is introduced by many float calculations, it should rarely if ever be used. I characterized it as the "most accurate" way to compare in my answer, which in some sense is true, but it is rarely what is intended when comparing floats or integers to floats. Choosing a value (e.g. 0.00000000001) based on the "problem domain" of the function instead of using sys.float_info.epsilon is the correct approach.
Thanks to S.Lott and Sven Marnach for their corrections, and I apologize if I led anyone down the wrong path.
Python does have a Decimal class (in the decimal module), which you could use to avoid the imprecision of floats.
Floats can exactly represent all integers up to 2**53 - floating-point equality is only tricky if you care about the bits after the point. So, as long as all of the calculations in your formula return whole numbers for the cases you're interested in, int(num) == num is perfectly safe.
So, we need to prove that for any triangular number, every piece of maths you do can be done with integer arithmetic (and anything coming out as a non-integer must imply that x is not triangular):
To start with, we can assume that x must be an integer - this is required in the definition of 'triangular number'.
This being the case, 8*x + 1 will also be an integer, since the integers are closed under + and * .
math.sqrt() returns a float; but if x is triangular, then the square root will be a whole number - i.e., again exactly represented.
So, for all x that should return true in your functions, int(num) == num will be true, and so your is_triangular1 will always work. The only sticking point, as mentioned in the comments to the question, is that Python 2 by default does integer division in the same way as C - int/int => int, truncating if the result can't be represented exactly as an int. So, 1/2 == 0. This is fixed in Python 3, or by having the line
from __future__ import division
near the top of your code.
I think the module decimal is what you need
You can round your number to e.g. 14 decimal places or less:
>>> round(4.999999999999997, 14)
5.0
PS: double precision is about 15 decimal places
It is hard to argue with standards.
In C99 and POSIX, the standard for rounding a float to an int is defined by nearbyint() The important concept is the direction of rounding and the locale specific rounding convention.
Assuming the convention is common rounding, this is the same as the C99 convention in Python:
#!/usr/bin/python
import math
infinity = math.ldexp(1.0, 1023) * 2
def nearbyint(x):
    """returns the nearest int as the C99 standard would"""
    # handle NaN
    if x != x:
        return x
    if x >= infinity:
        return infinity
    if x <= -infinity:
        return -infinity
    if x == 0.0:
        return x
    return math.floor(x + 0.5)
If you want more control over rounding, consider using the Decimal module and choose the rounding convention you wish to employ. You may want to use Banker's Rounding for example.
Once you have decided on the convention, round to an int and compare to the other int.
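Applied to the triangular-number question, that could look like this (a sketch of mine using the nearbyint() helper above, not code from the original answer):
def is_triangular(x):
    num = (math.sqrt(8 * x + 1) - 1) / 2
    n = int(nearbyint(num))
    return n * (n + 1) // 2 == x   # compare two exact integers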
Consider using NumPy, they take care of everything under the hood.
import numpy as np
result_bool = np.isclose(float1, float2)
Python has unlimited integer precision, but only 53 bits of float precision. When you square a number, you double the number of bits it requires. This means that the ULP of the original number is (approximately) twice the ULP of the square root.
You start running into issues with numbers around 50 bits or so, because the difference between the fractional representation of an irrational root and the nearest integer can be smaller than the ULP. Even in this case, checking if you are within tolerance will do more harm than good (by increasing the number of false positives).
For example:
>>> x = (1 << 26) - 1
>>> (math.sqrt(x**2)).is_integer()
True
>>> (math.sqrt(x**2 + 1)).is_integer()
False
>>> (math.sqrt(x**2 - 1)).is_integer()
False
>>> y = (1 << 27) - 1
>>> (math.sqrt(y**2)).is_integer()
True
>>> (math.sqrt(y**2 + 1)).is_integer()
True
>>> (math.sqrt(y**2 - 1)).is_integer()
True
>>> (math.sqrt(y**2 + 2)).is_integer()
False
>>> (math.sqrt(y**2 - 2)).is_integer()
True
>>> (math.sqrt(y**2 - 3)).is_integer()
False
You can therefore rework the formulation of your problem slightly. If an integer x is a triangular number, there exists an integer n such that x = n * (n + 1) // 2. The resulting quadratic is n**2 + n - 2 * x = 0. All you need to know is if the discriminant 1 + 8 * x is a perfect square. You can compute the integer square root of an integer using math.isqrt starting with python 3.8. Prior to that, you could use one of the algorithms from Wikipedia, implemented on SO here.
You can therefore stay entirely in python's infinite-precision integer domain with the following one-liner:
def is_triangular(x):
    return math.isqrt(k := 8 * x + 1)**2 == k
Now you can do something like this:
>>> x = 58686775177009424410876674976531835606028390913650409380075
>>> math.isqrt(k := 8 * x + 1)**2 == k
True
>>> math.isqrt(k := 8 * (x + 1) + 1)**2 == k
False
>>> math.sqrt(k := 8 * x + 1)**2 == k
False
The first result is correct: x in this example is a triangular number computed with n = 342598234604352345342958762349.
Python still uses the same floating point representation and operations C does, so the second one is the correct way.
Under the hood, Python's float type is a C double.
The most robust way would be to get the nearest integer to num, then test if that integers satisfies the property you're after:
import math
def is_triangular1(x):
    num = (1/2) * (math.sqrt(8*x+1) - 1)
    inum = int(round(num))
    return inum*(inum+1) == 2*x  # This line uses only integer arithmetic