Implicit conversion while comparing numeric values in Python

While working on an issue, I stumbled upon something I cannot really understand on my own.
I have a variable:
a = pow(2, 1024)
Its type is long int. If I try casting it explicitly to float, like float(a), I receive an OverflowError. The number is too big to fit in a 64-bit float, so that is understandable.
Then I try an implicit cast, by multiplying it by a float:
b = a * 11.0
Once again the OverflowError occurs, which is fine, because according to the Python docs an implicit conversion from long int to float happens, so the result is the same as before.
Finally, I try comparison:
a > 11.0
returns True. The OverflowError doesn't occur, and that confuses me a lot. How does the Python comparison mechanism work if it doesn't require the numbers to be in the same format? According to this,
Python fully supports mixed arithmetic: when a binary arithmetic operator has operands of different numeric types, the operand with the “narrower” type is widened to that of the other, where plain integer is narrower than long integer is narrower than floating point is narrower than complex. Comparisons between numbers of mixed type use the same rule. The constructors int(), long(), float(), and complex() can be used to produce numbers of a specific type.
My question is: why is a not being cast to float in the aforementioned comparison?
The version of Python I'm using is 2.7.15. Thanks in advance.
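For reference, the three cases can be reproduced in a few lines (a minimal sketch; the exact OverflowError message differs between Python 2 and 3):
a = pow(2, 1024)              # too large for a 64-bit double

try:
    float(a)                  # explicit conversion
except OverflowError as exc:
    print(repr(exc))

try:
    a * 11.0                  # arithmetic converts the int to a float first
except OverflowError as exc:
    print(repr(exc))

print(a > 11.0)               # True: the comparison never builds a float from a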

From the source:
/* Comparison is pretty much a nightmare. When comparing float to float,
* we do it as straightforwardly (and long-windedly) as conceivable, so
* that, e.g., Python x == y delivers the same result as the platform
* C x == y when x and/or y is a NaN.
* When mixing float with an integer type, there's no good *uniform* approach.
* Converting the double to an integer obviously doesn't work, since we
* may lose info from fractional bits. Converting the integer to a double
* also has two failure modes: (1) a long int may trigger overflow (too
* large to fit in the dynamic range of a C double); (2) even a C long may have
* more bits than fit in a C double (e.g., on a 64-bit box long may have
* 63 bits of precision, but a C double probably has only 53), and then
* we can falsely claim equality when low-order integer bits are lost by
* coercion to double. So this part is painful too.
*/
As such, the potential pitfalls of conversion are taken into account.

The exact error is OverflowError: int too large to convert to float
Which also means that any int that raises that error is, by definition, larger in magnitude than any possible finite float. So for a positive int, just checking whether it's bigger should return True.
I'm not entirely sure, but I wouldn't be surprised if the implementation simply catches this error in the background (when trying to cast to float) and returns True in that case.
That is always true, with the exception of float('inf'), which is a special case that should return False (and does).
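A short sketch of that special case (behaviour checked against CPython; no OverflowError is raised for comparisons):
a = 2 ** 1024                 # cannot be represented as a finite float

print(a > 11.0)               # True
print(a > float('inf'))       # False: infinity is greater than any int
print(a < float('inf'))       # True
print(a == float('inf'))      # False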

Related

What is the time complexity of type casting functions in Python?

For example,
int(x)
float(x)
str(x)
What is their time complexity?
There is no definite answer to this because it depends not just what type you're converting to, but also what type you're converting from.
Let's consider just numbers and strings. To avoid writing "log" everywhere, we'll measure the size of an int by saying n is how many bits or digits it takes to represent it. (Asymptotically it doesn't matter if you count bits or digits.) For strings, obviously we should let n be the length of the string. There is no meaningful way to measure the "input size" of a float object, since floating-point numbers all take the same amount of space.
Converting an int, float or str to its own type ought to take Θ(1) time because they are immutable objects, so it's not even necessary to make a copy.
Converting an int to a float ought to take Θ(1) time because you only need to read at most a fixed constant number of bits from the int object to find the mantissa, and the bit length to find the exponent.
Converting an int to a str ought to take Θ(n²) time, because you have to do Θ(n) division and remainder operations to find n digits, and each arithmetic operation takes Θ(n) time because of the size of the integers involved.
Converting a str to an int ought to take Θ(n²) time because you need to do Θ(n) multiplications and additions on integers of size Θ(n).
Converting a str to a float ought to take Θ(n) time. The algorithm only needs to read a fixed number of characters from the string to do the conversion, and floating-point arithmetic operations (or operations on bounded int values to avoid intermediate rounding errors) for each character take Θ(1) time; but the algorithm needs to look at the rest of the characters anyway in order to raise a ValueError if the format is wrong.
Converting a float to any type takes Θ(1) time because there are only finitely many distinct float values.
I've said "ought to" because I haven't checked the actual source code; this is based on what the conversion algorithms need to do, and the assumption that the algorithms actually used aren't asymptotically worse than they need to be.
There could be special cases to optimise the str-to-int conversion when the base is a power of 2, like int('11001010', 2) or int('AC5F', 16), since this can be done without arithmetic. If those cases are optimised then they should take Θ(n) time instead of Θ(n²). Likewise, converting an int to a str in a base which is a power of 2 (e.g. using the bin or hex functions) should take Θ(n) time.
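A rough timing sketch of the str conversions above (assuming CPython; the absolute numbers depend on the machine, and the sizes are kept modest because recent CPython versions limit decimal int-to-str conversion by default):
import timeit

for bits in (2000, 4000, 8000):
    n = 1 << bits                                      # an int with `bits` binary digits
    t_dec = timeit.timeit(lambda: str(n), number=200)  # base 10: roughly quadratic
    t_hex = timeit.timeit(lambda: hex(n), number=200)  # base 16: roughly linear
    print(bits, round(t_dec, 5), round(t_hex, 5))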
float(x) is the most involved of these, as floats cover a very wide range; at the same time, the cost depends on how much of the value you actually use.

Python multiplication equivalent to integer division

In Python, using // for division forces the result to be an integer. Is there an equivalent for multiplication?
For example, assume I have an integer W which I scale by a float f. It could be nice to have an operator such as .*, so that:
int(W*f) == W.*f
would be True.
// does not "force the result to be an integer", this may be coincidentally true, but describing the operator in this presumptuous way is (I believe) resulting in you thinking that there should be other analogous features, which there really aren't. // is "floor division", which any type can overload to have any desired behaviour. There is no "floor multiplication" operator. If you want the result of multiplication to be forced to an integer, you've already shown a perfectly acceptable and straightforward way to do this:
int(W*f)
No, and there is unlikely to be one added, for two reasons.
The current options are short and built-in
There is no ambiguity to be resolved
When you take 12/5 you can reasonably want an integer, a rational quotient, or a real number, all of which are well defined, take different values in Python (absent infinite floating-point precision), and behave differently.
When you multiply 12*5, you could also want those three, but the value will be identical.
For something like pi * 100000, you would need to know the type you want to end up with, as well as the resolution technique for e.g. float to integer (floor, ceiling, round to nearest, round .5 up, round .5 down, banker's rounding). Without strong types this becomes a mess to hand down from above, and it is easier to delegate it to the user and their own needs or preferences.
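A small sketch of those resolution techniques, which is exactly the choice the language leaves to you:
import math

x = math.pi * 100000          # 314159.2653589793

print(int(x))                 # 314159 (truncates toward zero)
print(math.floor(x))          # 314159
print(math.ceil(x))           # 314160
print(round(x))               # 314159 (ties round to the nearest even integer)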

Is there a difference between an int of 5 and a float of 5.0?

I am confused about whether there is any difference between an int of 5 and a float of 5.0, besides the float having a decimal point.
What are some of the things I can do with an int that I can't with a float? What is the point of having two separate types, instead of just letting everything be a float in the first place?
They are different data types:
type(5) # int
type(5.0) # float
And therefore they are not, strictly speaking, the same.
However, they are equal:
5 == 5.0 # true
They are different types.
>>> type(5)
<type 'int'>
>>> type(5.0)
<type 'float'>
Internally, they are stored differently.
5 and 5.0 are different objects in Python, so 5 is 5.0 is False.
But in most cases they behave the same; for example, 5 == 5.0 is True.
Since your question focuses on the difference between, and the need for, two separate data types, I will try to focus on that.
Need for different data types (why not make everything a float?)
Different data types have different memory usage. In C-like languages, for example, an int may take 2 bytes while a float takes 4 (the exact sizes are platform dependent). Using the correct data type in the correct place saves memory.
What are some of the things I can do with an int that I can't with a float?
One of the most important things to know when using these two data types is that "integer division truncates": any fractional part is discarded. To get the desired result you should use the correct type.
A nice example is given in the book "The C Programming Language" by Brian Kernighan and Dennis Ritchie, and it applies regardless of the language used.
This statement converts a temperature from Fahrenheit to Celsius:
float celsius = (5/9) * (Fahrenheit - 32);
This code will always give you 0. That is because 5/9 is 0.5556, which due to truncation is taken as 0.
Now look at this code:
float celsius = (5.0/9.0) * (Fahrenheit - 32);
This code will give you the correct answer, since 5.0/9.0 gives us the value 0.5556. Because we used float values here, the compiler does not truncate the value: the float value prevents truncation of the fractional part and gives us the desired answer.
I think this shows how important the difference between 5 and 5.0 is.
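The same pitfall expressed in Python (a sketch; in Python 2 a plain / between two ints already truncates, while in Python 3 you only see it with //):
fahrenheit = 100

celsius_truncated = (5 // 9) * (fahrenheit - 32)   # 0: the fractional part is discarded
celsius_correct = (5 / 9) * (fahrenheit - 32)      # 37.77... in Python 3

print(celsius_truncated, celsius_correct)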
This question is already answered: they have different types.
But what does that mean?
One must think in terms of objects: they are objects of different classes, and the class dictates the object's behavior.
Thus they will behave differently.
It's easier to grasp such things when you are in a pure object-oriented language like Smalltalk, because you can simply browse the Float and Integer classes and learn how they differ through their implementations. In Python it's more complex, because the computation model is multi-layered, with notions of types, operators and functions, and this complexity somewhat obscures the basic object-oriented principles. But from a behavioural point of view, it ends up being the same: Python : terminology 'class' VS 'type'
So what are these differences of Behavior?
They are thin because we make our best effort to have uniform and unsurprising arithmetic (including mixed arithmetic) behavior matching the laws of mathematics, whatever the programming language.
Floating-point numbers behave differently because they keep a limited number of significand bits. It's a necessary trade-off for keeping computations simple and fast. Small integers require few significand bits, so they will behave mostly the same as floating point. But as they grow larger, they won't. Here is an arithmetic example:
print(5.0**3 == 5**3)
print(5.0**23 == 5**23)
The former expression will print True, the latter False, because 5^23 requires 54 bits to be represented, and the Python VM will in most cases rely on IEEE 754 double-precision floating point, which provides only a 53-bit significand.
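You can check the bit counts directly:
import sys

print((5 ** 23).bit_length())      # 54: needs more bits than a double can hold
print(sys.float_info.mant_dig)     # 53: significand bits of an IEEE 754 double
print(float(5 ** 23) == 5 ** 23)   # False: the conversion had to round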

When does Python perform type conversion when comparing int and float?

Why does Python return True when I compare int and float objects which have the same value?
For example:
>>> 5*2 == 5.0*2.0
True
It's not as simple as a type conversion.
10 == 10.0 delegates to the arguments' __eq__ methods, trying (10).__eq__(10.0) first, and then (10.0).__eq__(10) if the first call returns NotImplemented. It makes no attempt to convert types. (Technically, the method lookup uses a special routine that bypasses instance __dict__ entries and __getattribute__/__getattr__ overrides, so it's not quite equivalent to calling the methods yourself.)
int.__eq__ has no idea how to handle a float:
>>> (10).__eq__(10.0)
NotImplemented
but float.__eq__ knows how to handle ints:
>>> (10.0).__eq__(10)
True
float.__eq__ isn't just performing a cast internally, either. It has over 100 lines of code to handle float/int comparison without the rounding error an unchecked cast could introduce. (Some of that could be simplified if the C-level comparison routine didn't also have to handle >, >=, <, and <=.)
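Here is a case where an unchecked cast would give the wrong answer, which is exactly what that code guards against:
n = 2 ** 53 + 1                      # the first int a double cannot represent exactly

print(float(n) == float(2 ** 53))    # True: converting n to float loses the +1
print(n == float(2 ** 53))           # False: float.__eq__ compares exactly
print(n == float(n))                 # False, for the same reason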
Objects of different types, except different numeric types, never compare equal.
And:
Python fully supports mixed arithmetic: when a binary arithmetic operator has operands of different numeric types, the operand with the “narrower” type is widened to that of the other, where integer is narrower than floating point, which is narrower than complex. Comparisons between numbers of mixed type use the same rule.
https://docs.python.org/3/library/stdtypes.html#numeric-types-int-float-complex
The comparison logic is implemented by each type's __eq__ method, and the standard numeric types are implemented in a way that supports comparisons (and arithmetic operations) among each other. Python as a language never does implicit type conversion (unlike JavaScript's == operator, which does implicit type juggling).
The simple answer is that the language is designed this way. Here is an excerpt from the documentation supporting this:
6.10.1 Value Comparisons
Numbers of built-in numeric types (Numeric Types — int, float, complex) and of the standard library types fractions.Fraction and decimal.Decimal can be compared within and across their types, with the restriction that complex numbers do not support order comparison.
In other words, we want different numeric types with the same value to be equal.
PEP 20
Special cases aren't special enough to break the rules.
Although practicality beats purity.
What benefit is there to making numeric types not comparable, besides making life difficult in most common cases?
You can have a look at the source code for the CPython implementation.
The function is preceded by this comment explaining how the conversion is attempted:
/* Comparison is pretty much a nightmare. When comparing float to float,
* we do it as straightforwardly (and long-windedly) as conceivable, so
* that, e.g., Python x == y delivers the same result as the platform
* C x == y when x and/or y is a NaN.
* When mixing float with an integer type, there's no good *uniform* approach.
* Converting the double to an integer obviously doesn't work, since we
* may lose info from fractional bits. Converting the integer to a double
* also has two failure modes: (1) an int may trigger overflow (too
* large to fit in the dynamic range of a C double); (2) even a C long may have
* more bits than fit in a C double (e.g., on a 64-bit box long may have
* 63 bits of precision, but a C double probably has only 53), and then
* we can falsely claim equality when low-order integer bits are lost by
* coercion to double. So this part is painful too.
*/
Other implementations are not guaranteed to follow the same logic.
From the documentation:
Python fully supports mixed arithmetic: when a binary arithmetic
operator has operands of different numeric types, the operand with the
“narrower” type is widened to that of the other, where plain integer
is narrower than long integer is narrower than floating point is
narrower than complex. Comparisons between numbers of mixed type use
the same rule.
According to this, 5*2 is widened to 10.0, which is equal to 10.0.
If you compare mixed data types, the comparison is carried out in the data type with the wider range; in your case, a float has a wider range than a fixed-width int.
The largest finite float is 1.7976931348623157e+308.
The largest 64-bit signed integer is 9223372036854775807 (Python ints themselves are arbitrary precision).
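Both limits are available from sys (a sketch; note that Python 3 ints are arbitrary precision, so sys.maxsize is not a hard ceiling on int values):
import sys

print(sys.float_info.max)   # 1.7976931348623157e+308, the largest finite float
print(sys.maxsize)          # 9223372036854775807 on a 64-bit build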
Thanks
The == operator compares only the values, not the types. The is keyword checks object identity, which gives a stricter check than == (somewhat like === in other languages). For instance
5 is 5.0
returns
False
== is a comparison operator.
You are actually asking the interpreter whether both sides of your expression are equal or not.
In other words, you are asking it to return a Boolean value, not to convert data types. If you want to convert the data types, you will have to do so explicitly in your code.
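For example, the comparison returns a bool, and any conversion has to be written out explicitly (a small sketch):
a, b = 5, 5.0

print(a == b)                 # True: equal values, different types
print(type(a) is type(b))     # False: == did not convert anything
print(float(a) == b)          # True: explicit conversion, written in your code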

What is the result of arithmetic with mpz variables?

I am trying to do some arithmetic with gmpy2 in Python. Unfortunately, I don't know what the types of the values returned by this arithmetic are. For example:
x=float(gmpy2.comb(100,50))/gmpy2.comb(200,100)
print x
print isinstance(x, (int, long, float, complex))
gives me:
1.114224180581451e-30
False
I couldn't find any helpful information when I Googled a bit.
Is there a way that I can get an object type in python in general?
Otherwise, what is the exact type of this value? Is it an mpz?
And the last question: when I do arithmetic with mpz values and, for example, a float, will it always cast the result to mpz?
P.S. I don't know if mpz is the correct term to use here! I would also be happy if somebody with high reputation added a gmpy tag on Stack Overflow, to make these questions more accessible to people who know gmpy.
I don't know about gmpy2 but you can find the type of an object in Python using x.__class__.
With new-style classes you can also use type(x).
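For example (a generic sketch, not specific to gmpy2):
x = 1.5

print(type(x))                # <class 'float'> (Python 2 prints <type 'float'>)
print(x.__class__)            # the same object that type(x) returns
print(isinstance(x, float))   # True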
gmpy2 introduces several new data types: mpz, mpq, mpfr, and mpc. They are analogous to Python's int/long, fractions.Fraction, float and complex types. So division involving an mpz will normally result in an mpfr.
In your example, you create an mpz, convert it to a float, and then divide it by an mpz. When performing the division, gmpy2 converts the float to an mpfr and then performs the division. If you want a float result, you should apply float() to the entire result and not just to gmpy2.comb(100,50). Note the difference in the parentheses.
>>> float(gmpy2.comb(100,50))/gmpy2.comb(200,100)
mpfr('1.114224180581451e-30')
>>> float(gmpy2.comb(100,50)/gmpy2.comb(200,100))
1.114224180581451e-30
Why the conversion from float to mpfr? The mpfr data type can support much higher precision and has a significantly wider range than a float. As long as the precision of mpfr is greater than or equal to the precision of a float (i.e. 53 bits), the conversion is lossless.
Disclaimer: I maintain gmpy and gmpy2.
