Python extending decimals - python

Why is it that python sometimes extends numbers and is there a way to stop it? For example 1.7 may turn into 1.70000005.
Specifically I'm encountering this while taking in a list of floats and trying to populate a new list.
newList = []
for value in myList:
    print value
    newList.append(value)
return newList
The console will print out numbers containing no more than 2 decimal places while the newList being returned will have 17 places and oftentimes include a non-zero in the last digit. It does this even if I attempt to round(value,2) inside the loop.

That's just the representation! The contents of the list are actually the same.
To show the stored value in full, you can format it in a string:
print("{:.17f}".format(my_float_value))
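For example, printing with 17 digits makes the hidden digits visible; 0.1 here is just an illustrative value, and the output shown is what CPython produces for it:
my_float_value = 0.1
print("{:.17f}".format(my_float_value))   # 0.10000000000000001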
Alternatively, you can use the decimal module:
>>> import decimal
>>> my_float = decimal.Decimal("0.2342134235")
>>> my_float
Decimal('0.2342134235')
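If the goal is simply to display two decimal places, as in the original loop, string formatting does that at print time without changing the stored values; a minimal sketch with made-up example values:
myList = [1.7, 2.25, 150.0]        # example values, assumed for illustration
for value in myList:
    print("{:.2f}".format(value))  # displays 1.70, 2.25, 150.00 regardless of repr()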
Hope this helps!

CPython uses C doubles, and C doubles are typically implemented in hardware for speed.
Hardware floating point is precise to only a limited number of digits. It is also stored in base 2, while we mostly think in base 10; some numbers that have a finite expansion in base 10 don't have one in base 2, and vice versa.
So you should:
Never compare a floating point value for equality with another. Instead, subtract them, take the absolute value, and compare that result to a small positive number like 1e-8 (see the sketch below).
Round your floating point values to a palatable number of places after the decimal point, using string formatting or the built-in round() function (http://docs.python.org/3/library/functions.html#round).
You can use the decimal module to get caller-specified precision. Or if you have rational values, you can use the fractions module.
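A small sketch of the comparison and rounding advice above; the 1e-8 tolerance is just the example figure mentioned, not a universal constant:
a = 0.1 + 0.2
b = 0.3

print(a == b)                       # False: the two binary floats differ slightly
print(abs(a - b) < 1e-8)            # True: compare the absolute difference to a tolerance
print(round(a, 2) == round(b, 2))   # True: or round both to a palatable number of places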

Related

Python f-string surprising results on floats

I am trying to format float numbers in a fixed point notation: x.xxx, three digits following the decimal point regardless of the value of the number. I am getting surprising results. The first in particular would suggest that it is giving me three significant places rather than three digits after the decimal point. How do I tell it what I really want?
>>> print(f"{.0987:5.03}")
0.0987
*expected: 0.099*
>>> print(f"{0.0:05.03}")
000.0
*expected: 0.000*
>>> print(f"{0.0:5.3}")
0.0
# adding "f" selects fixed-point notation, so the precision counts digits after the decimal point
print(f"{.0987:5.3f}")    # 0.099
print(f"{0.9687:05.3f}")  # 0.969
print(f"{0.0:5.3f}")      # 0.000
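For contrast, a minimal sketch of why the original attempts looked wrong: without an explicit presentation type, the precision counts significant digits, not digits after the decimal point.
print(f"{.0987:5.03}")   # 0.0987 -> no type given, so 3 means 3 significant digits
print(f"{.0987:5.3f}")   # 0.099  -> with "f", 3 means 3 digits after the decimal point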

What is the difference between {:.4e} and {:2.4} in Python scientific notation

I can't quite understand what the difference is between the two print statements below for the number I am trying to express in scientific notation. I thought the bottom one was supposed to allow 2 spaces for the printed result and move the decimal place 4 times, but the result I get does not corroborate that understanding. As for the first one, what does 4e mean?
>>> print('{:.4e}'.format(3454356.7))
3.4544e+06
>>> print('{:2.4}'.format(3454356.7))
3.454e+06
All help greatly appreciated.
In the first example, .4e means 4 decimal places in scientific notation. You can see that by trying:
>>> print('{:.4e}'.format(3454356.7))
3.4544e+06
>>> print('{:.5e}'.format(3454356.7))
3.45436e+06
>>> print('{:.6e}'.format(3454356.7))
3.454357e+06
In the second example, .4 means 4 significant figures, and 2 is the minimum field width, i.e. the minimum number of characters the whole result should occupy:
>>> print('{:2.4}'.format(3454356.7))
3.454e+06
>>> print('{:2.5}'.format(3454356.7))
3.4544e+06
>>> print('{:2.6}'.format(3454356.7))
3.45436e+06
Testing with a larger width in place of the 2:
>>> print('-{:20.6}'.format(3454356.7))
-         3.45436e+06
You can learn more from the python documentation on format
If you want to produce a float, you will have to specify the float type:
>>> '{:2.4f}'.format(3454356.7)
'3454356.7000'
Otherwise, if you don't specify a type, Python chooses g, for which the precision means the number of significant figures, counting digits before and after the decimal point together. Since you gave a precision of 4, it displays only 4 significant digits, falling back to scientific notation so it doesn't add false precision.
The precision is a decimal number indicating how many digits should be displayed after the decimal point for a floating point value formatted with 'f' and 'F', or before and after the decimal point for a floating point value formatted with 'g' or 'G'. For non-number types the field indicates the maximum field size - in other words, how many characters will be used from the field content. The precision is not allowed for integer values.
(source, emphasis mine)
Finally, note that the width (the 2 in the above format string) is the full width, including digits before the decimal point, digits after it, the decimal point itself, and the components of the scientific notation. The result above is 12 characters wide, so in this case the width in the format string is simply ignored; it is only a minimum.
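A short sketch of that width behaviour (the widths here are chosen arbitrarily for illustration): the width only pads short results and is ignored for longer ones.
print('{:15.4e}'.format(3454356.7))   # '     3.4544e+06' -> padded out to 15 characters
print('{:2.4e}'.format(3454356.7))    # '3.4544e+06'      -> width 2 is too small, so it is ignored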

How to check if a float value is within a certain range and has a given number of decimal digits?

How to check if a float value is within a range (0.50,150.00) and has 2 decimal digits?
For example, 15.22366 should be false (too many decimal digits). But 15.22 should be true.
I tried something like:
data = input()
if data in range(0.50, 150.00):
    return True
Is this what you are looking for?
def check(value):
    if 0.50 <= value <= 150 and round(value, 2) == value:
        return True
    return False
Given your comment:
i input 15.22366 it is going to return true; that is why i specified the range; it should accept 15.22
Simply said, floating point values are imprecise. Many values don't have a precise representation. Take for example 1.40. It might be displayed "as is":
>>> f = 1.40
>>> print f
1.4
But this is an illusion. Python has rounded that value in order to nicely display it. The real value as referenced by the variable f is quite different:
>>> from decimal import Decimal
>>> Decimal(f)
Decimal('1.399999999999999911182158029987476766109466552734375')
According to your rule of having only 2 decimals, should f reference a valid value or not?
The easiest way to fix that issue is probably to use round(..., 2) as I suggested in the code above. But this is only a heuristic -- it can only reject "largely wrong" values. See my point here:
>>> for v in [ 1.40,
... 1.405,
... 1.399999999999999911182158029987476766109466552734375,
... 1.39999999999999991118,
... 1.3999999999999991118]:
... print check(v), v
...
True 1.4
False 1.405
True 1.4
True 1.4
False 1.4
Notice how the last few results might seem surprising at first. I hope the explanations above shed some light on this.
As a final piece of advice, for your needs as I guess them from your question, you should definitely consider using "decimal arithmetic". Python provides the decimal module for that purpose.
float is the wrong data type to use for your case; use Decimal instead.
Check the Python docs on floating point issues and limitations. To quote from there (I've generalised the text in italics):
Floating-point numbers are represented in computer hardware as base 2 (binary) fractions.
no matter how many base 2 digits you’re willing to use, some decimal value (like 0.1) cannot be represented exactly as a base 2 fraction.
Stop at any finite number of bits, and you get an approximation
On a typical machine running Python, there are 53 bits of precision available for a Python float, so the value stored internally when you enter a decimal number is the binary fraction which is close to, but not exactly equal to it.
The documentation for the built-in round() function says that it rounds to the nearest value, rounding ties away from zero.
And finally, it recommends
If you’re in a situation where you care which way your decimal halfway-cases are rounded, you should consider using the decimal module.
And this holds for your case as well, as you are looking for a precision of 2 digits after the decimal point, which float just can't guarantee.
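A minimal sketch of what a Decimal-based check might look like, assuming the value arrives as a string (so no binary-float rounding has happened yet); the function name is hypothetical:
from decimal import Decimal

def is_valid(text):
    # Accept values in [0.50, 150.00] with at most 2 decimal digits.
    value = Decimal(text)                                   # exact decimal, no binary float involved
    in_range = Decimal("0.50") <= value <= Decimal("150.00")
    two_places = value == value.quantize(Decimal("0.01"))   # unchanged by rounding to 2 places
    return in_range and two_places

print(is_valid("15.22"))      # True
print(is_valid("15.22366"))   # False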
EDIT Note: the answer below corresponds to the original question, which was about random float generation.
Seeing that you need 2 digits of guaranteed precision, I would suggest generating random integers in the range [50, 15000] and dividing them by 100 to convert them to floats yourself.
import random
random.randint(50, 15000)/100.0
Why don't you just use round?
round(random.uniform(0.5, 150.0), 2)
Probably what you want to do is not to change the value itself. As Cyber said in the comments, even if you round a floating point number, it will still be stored with the same precision. If you only need to change the way it is printed:
n = random.uniform(0.5, 150)
print '%.2f' % n # 58.03
The easiest way is to convert the number to a string, split it on '.', and check the length of the fractional part. If it is greater than 2, skip the value; otherwise check whether it is in the given range.
a = 15.22366
if len(str(a).split('.')[1]) <= 2:
    if 0.50 <= a <= 150:
        pass  # do your stuff

How to return a decimal after using int() function on a string?

Very basic question. If I set products as 3 and parcels as 2, I get 1. How do I have the last line print 1.5, a decimal, instead of simply 1?
products = raw_input('products shipped? ')
parcels = raw_input('parcels shipped? ')
print "Average Number of products per parcel"
print int(products) / int(parcels)
print float(products) / float(parcels)
If you want real numbers, use float, which represents real numbers. Don't use integers.
In Python 3 you'll get this automatically.
In Python 2 you can do from __future__ import division, then dividing two integers will result in a floating point number.
In either case you can use // instead of / if you decide you really needed an integer result instead. That works in Python 2 even if you don't do the import.
You can also convert either or both of the numbers to float to force a floating point result.
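A quick sketch of the options above, using Python 2 syntax to match the question (the literal values are the ones from the question):
from __future__ import division   # makes / perform true division in Python 2

products = '3'
parcels = '2'

print int(products) / int(parcels)       # 1.5 after the __future__ import
print int(products) // int(parcels)      # 1, floor division if an integer result is wanted
print float(products) / float(parcels)   # 1.5, explicit conversion works without the import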
If you want the full decimal value, use the following:
from decimal import Decimal
print Decimal(products) / Decimal(parcels)

Is there a more readable or Pythonic way to format a Decimal to 2 places?

What the heck is going on with the syntax to fix a Decimal to two places?
>>> from decimal import Decimal
>>> num = Decimal('1.0')
>>> num.quantize(Decimal(10) ** -2) # seriously?!
Decimal('1.00')
Is there a better way that doesn't look so esoteric at a glance? 'Quantizing a decimal' sounds like technobabble from an episode of Star Trek!
Use string formatting:
>>> from decimal import Decimal
>>> num = Decimal('1.0')
>>> format(num, '.2f')
'1.00'
The format() function applies string formatting to values. Decimal() objects can be formatted like floating point values.
You can also use this to interpolate the formatted decimal value into a larger string:
>>> 'Value of num: {:.2f}'.format(num)
'Value of num: 1.00'
See the format string syntax documentation.
Unless you know exactly what you are doing, expanding the number of significant digits through quantisation is not the way to go; quantisation is the purview of accountancy packages and normally aims to round results to fewer significant digits instead.
Quantize is used to set the number of places that are actually held internally within the value, before it is converted to a string. As Martijn points out this is usually done to reduce the number of digits via rounding, but it works just as well going the other way. By specifying the target as a decimal number rather than a number of places, you can make two values match without knowing specifically how many places are in them.
It looks a little less esoteric if you use a decimal value directly instead of trying to calculate it:
num.quantize(Decimal('0.01'))
You can set up some constants to hide the complexity:
places = [Decimal('0.1') ** n for n in range(16)]
num.quantize(places[2])
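To put the two approaches side by side (a small sketch using the value from the question): string formatting only changes how the value is displayed, while quantize() changes the exponent stored in the Decimal itself.
from decimal import Decimal

num = Decimal('1.0')

print(format(num, '.2f'))              # 1.00  (num itself is still Decimal('1.0'))
print(num.quantize(Decimal('0.01')))   # 1.00  (a new Decimal('1.00') with two places)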
