This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 7 years ago.
With Python 3.4.3
>>> int(1 / 1e-3)
1000
>>> int(1 / 1e-4)
10000
>>> int(1 / 1e-5)
99999
>>> int(1 / 1e-6)
1000000
>>> int(1 / 1e-7)
10000000
Bug or Feature? Any particular reason?
Floating-point numbers can't represent most decimal fractions exactly; only values that are sums of powers of two are stored exactly.
>>> '%.25f' % 1e-5
'0.0000100000000000000008180'
>>> '%.25f' % (1/1e-5)
'99999.9999999999854480847716331'
So 1/1e-5 is slightly less than 100000, and int truncates the fractional part.
When converting to int, rounding first is the answer:
>>> int(round(1/1e-5))
100000
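To see what a float literal actually stores, `Decimal(float)` converts the binary value exactly. A small illustrative sketch (not from the original answer):

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value a float actually stores.
print(Decimal(1e-5))          # 0.00001000000000000000020816... (not exactly 1e-5)
print(1 / 1e-5)               # just under 100000
print(int(1 / 1e-5))          # int() truncates toward zero -> 99999
print(int(round(1 / 1e-5)))   # round first -> 100000
```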
Related
This question already has answers here:
How to suppress scientific notation when printing float values?
(16 answers)
Closed 10 months ago.
This is my code:
m = int(input(""))
p= 3.00*10*10*10*10*10*10*10*10
c = p*p
E = m * c
print(E)
The answer is 19e+16.
But I don't want the scientific notation: I want the number.
Actually it is not VSCode's doing; Python prints floats that way for readability. You can print the long form with string formatting: specifying a precision expands the printed value:
E = 3.00 * 100 * 100 * 100 * 100 * 100 * 100 * 100 * 100
print(E)
print(f"{E:.1f}")
print(f"{E:.10f}")
output:
3e+16
30000000000000000.0
30000000000000000.0000000000
Possible duplicate of:
How to suppress scientific notation when printing float values?
TL;DR
print('{:f}'.format(E))
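Another way to sidestep the notation entirely is to keep the computation in integers, which Python never prints in scientific form. A sketch (the mass value `m = 2` is a made-up example input):

```python
# Integer arithmetic is exact, and ints never print in scientific notation.
m = 2            # hypothetical example mass
c = 3 * 10**8    # speed of light as an int, instead of 3.00 * 10 * 10 * ...
E = m * c**2
print(E)         # plain digits, no exponent

# If E must stay a float, format it instead of printing it directly:
print(f"{float(E):f}")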
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 1 year ago.
a = 1000000000
b = 1000000
max_ = int((a - b + 1) * (a - b) / 2)
I have this line in my code and when "a" equals a billion (1000000000) and "b" equals a million (1000000), the result came up with the answer "499000500499500032". The correct result of this arithmetical calculation is "499000500499500000".
I searched for why it does so but couldn't find anything. I am using Python 3.9.5, if that matters in this case.
Python uses the CPU's native double-precision float, which is a binary approximation of the true real number. It's not a problem with Python per se; it's the inherent imprecision of fixed-length binary floats. Simply writing your wanted value as a float demonstrates the problem:
>>> f"{499000500499500000.:f}"
'499000500499500032.000000'
If you need more precision than float offers, the decimal module may work for you.
>>> from decimal import Decimal
>>> a = Decimal(1000000000)
>>> b = Decimal(1000000)
>>> max_d = (a - b + 1) * (a - b) / 2
>>> max_d
Decimal('499000500499500000')
>>> max_ = int(max_d)
>>> max_
499000500499500000
float exists, even though it is only an approximation of a true real number, because this lack of precision can usually be tolerated by the algorithm. When the error is too large, or when you are doing something like accounting where it is significant, decimal is the alternative.
See Floating Point Arithmetic: Issues and Limitations
Another option is to use floor division which doesn't go through float.
>>> a = 1000000000
>>> b = 1000000
>>> (a - b + 1) * (a - b) // 2
499000500499500000
That looks better! But note that // floors the result, so it is only exact when the division has no remainder. (Here it always is: the product of two consecutive integers is even, so dividing by 2 leaves no remainder.)
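The difference between the two divisions can be shown side by side (a small sketch based on the numbers above):

```python
a = 10**9
b = 10**6
n = (a - b + 1) * (a - b)   # exact integer product

# True division forces a round-trip through float and loses the low digits:
print(int(n / 2))    # 499000500499500032 -- wrong
# Floor division stays in integer arithmetic:
print(n // 2)        # 499000500499500000 -- exact

# The caveat: // discards any remainder when the division is not exact.
print(7 // 2)        # 3, not 3.5
```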
This question already has answers here:
How to round a number to significant figures in Python
(26 answers)
Closed 2 years ago.
I am wondering how I can floor 1,999,999 to 1,000,000, or 2,000,108 to 2,000,000, in Python?
I used math.floor(), but it just removes the decimal part.
Just do it like this:
math.floor(num / 1000000) * 1000000
e.g.:
>>> import math
>>> num = 1999999
>>> math.floor(num / 1000000) * 1000000
1000000
>>> num = 2000108
>>> math.floor(num / 1000000) * 1000000
2000000
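Since floor division on ints never leaves integer arithmetic, the same idea can be written without math.floor or any float round-trip. A sketch (the helper name `floor_to_million` is made up for illustration):

```python
# Hypothetical helper: round a positive integer down to the nearest million.
# num // 1_000_000 floors in pure integer arithmetic, so the result is an
# int and there is no float precision loss for very large numbers.
def floor_to_million(num):
    return num // 1_000_000 * 1_000_000

print(floor_to_million(1_999_999))  # 1000000
print(floor_to_million(2_000_108))  # 2000000
```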
To round a positive integer down to its first digit:
from math import floor
def r(n):
    gr = 10 ** (len(str(n)) - 1)
    return floor(n / gr) * gr

for i in [199_999, 200_100, 2_100, 315]:
    print(r(i))
Output
100000
200000
2000
300
def floor_integer(num):
    l = str(num)
    return int(l[0]) * pow(10, len(l) - 1)
I think that will fit your needs.
print(floor_integer(5))
# 5
print(floor_integer(133))
# 100
print(floor_integer(1543))
# 1000
print(floor_integer(488765))
# 400000
print(floor_integer(1999999))
# 1000000
print(floor_integer(2000108))
# 2000000
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 3 years ago.
I wrote a small piece of code that rounds a number to the closest multiple of 0.005
but it produces a weird output that I didn't expect.
I am using Python version 3.7.3
Here is the code:
number = 1.639
print(5 * round(number / 5, 3))
Output
1.6400000000000001
Expected output
1.64
Can anyone tell me why this happens?
print(round(5 * round(1.639 / 5, 3), 3))
Round it again: you are multiplying the rounded number by 5 and then expecting the result to still be rounded, so you should round the output of the first rounding step as well.
If you want to do the multiplication on the rounded number, here is how you can do it:
number = 1.639
temp = round(number / 5, 3)
final = round(5 * temp, 3)
print(final) # 1.64
If you just want to round the final result, here is how you can do it:
number = 1.639
final = round(5 * (number / 5), 3)
print(final) # 1.639
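If the goal really is "snap to the nearest multiple of 0.005", the decimal module represents 0.005 exactly, so the stray 0.0000000000000001 never appears. A sketch (the helper name `round_to_step` is made up for illustration):

```python
from decimal import Decimal

# Hypothetical helper: snap x to the nearest multiple of step, exactly.
def round_to_step(x, step=Decimal("0.005")):
    # str(x) avoids inheriting the float's binary error; quantize rounds
    # the quotient to a whole number of steps (banker's rounding by default).
    return (Decimal(str(x)) / step).quantize(Decimal("1")) * step

print(round_to_step(1.639))   # 1.640
print(round_to_step(1.637))   # 1.635
```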
This question already has an answer here:
Is Python incorrectly handling this "arbitrary precision integer"?
(1 answer)
Closed 4 years ago.
This is my code in Python, but the answer it gives is not correct according to projecteuler.net.
a = 2**1000
total = 0
while a >= 1:
    temp = a % 10
    total = total + temp
    a = int(a/10)
print(total)
It gives an output 1189. Am I making some mistake?
Your logic is fine. The problem is that 2 ** 1000 is too big for all the digits to fit into a float, so the number gets rounded when you do a = int(a/10). A Python float only has 53 bits of precision; you can read about it in the official tutorial article Floating Point Arithmetic: Issues and Limitations, and on Wikipedia: Double-precision floating-point format. Also see Is floating point math broken?.
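The 53-bit limit is easy to see directly (a quick sketch):

```python
# Above 2**53 a float can no longer distinguish consecutive integers,
# so adding 1 before converting is simply lost in the rounding.
print(float(2**53) == float(2**53 + 1))   # True -- the +1 vanished
print(int(float(2**53 + 1)) == 2**53)     # True for the same reason
```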
This is 2 ** 1000
10715086071862673209484250490600018105614048117055336074437503883703510511249361224931983788156958581275946729175531468251871452856923140435984577574698574803934567774824230985421074605062371141877954182153046474983581941267398767559165543946077062914571196477686542167660429831652624386837205668069376
But print(format(2**1000 / 10, 'f')) gives us this:
1071508607186267380429101388171324322483904737701556012694158454746129413355810495130824665231870799934327252763807170417136096893411236061781867579266085792026680021578208129860941078404632071895251811587214122307926025420797364998626502669722909817741077261714977537247847201331018951634334519394304.000000
You can see that the digits start going wrong after 10715086071862673.
So you need to use integer arithmetic, which in Python has arbitrary precision (only limited by how much memory Python can access). To do that, use the // floor division operator.
a = 2**1000
total = 0
while a >= 1:
    temp = a % 10
    total = total + temp
    a = a // 10
print(total)
output
1366
We can condense that code a little by using augmented assignment operators.
a = 2**1000
total = 0
while a:
    total += a % 10
    a //= 10
print(total)
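As a further condensation (not from the original answer), divmod() returns the quotient and remainder in one call, so the loop body collapses to a single unpacking assignment:

```python
# divmod(a, 10) yields (a // 10, a % 10) in one step.
a = 2**1000
total = 0
while a:
    a, digit = divmod(a, 10)
    total += digit
print(total)   # 1366
```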
Here's a faster way. Convert a to a string, then convert each digit back to int and sum them. I use bit shifting to compute a because it's faster than exponentiation.
print(sum(int(u) for u in str(1 << 1000)))