Is there a way to print only the decimal part? [duplicate] - python

This question already has answers here:
How to get numbers after decimal point?
(37 answers)
Closed 1 year ago.
I'm having a hard time doing this. I have tried experimenting with {0:.2f}.
I want the program to only print the decimal part.
Example inputs and outputs
Inputs
100.56
455.345
89.5
Outputs
.56
.345
.5
Is there a way to do this?

You may try the modulo operator, as below:
number = 100.56
print(number % 1)  # prints the fractional part (with some float representation noise)
You could also refer to the answers at How to get numbers after decimal point?
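Note that the result of % 1 is still a float, so it carries binary representation noise. A small sketch (using the sample inputs from the question, with a hard-coded 3-digit format as an illustrative choice) that trims that noise away:
for value in [100.56, 455.345, 89.5]:
    fractional = value % 1                              # roughly .56, .345, .5, plus float noise
    print(f"{fractional:.3f}".rstrip('0').lstrip('0'))  # .56  .345  .5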

If you want to keep only the original decimal part, you can do:
inputFloat = 25189456.1584566
decimalPart = '.' + str(inputFloat).split('.')[1]
print(decimalPart)
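Applied to the sample inputs from the question, a quick check of this approach might look like the following (it relies on str(float) producing the same digits the user typed, which holds for these values):
for value in [100.56, 455.345, 89.5]:
    print('.' + str(value).split('.')[1])  # .56  .345  .5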

Something like this should work:
def f(x, decimals=2):
    r = str(round(x, decimals))  # round and convert to string
    r = r.split('.')[-1]         # split at the dot and keep the decimals
    r = '.' + r                  # add the dot
    return r

f(100.56789)  # '.57'
[f(x) for x in [100.56, 455.345, 89.5]]  # ['.56', '.35', '.5']
The one-liner would be '.' + str(round(x, 2)).split('.')[-1] or f".{str(round(x, 2)).split('.')[-1]}" where x is the float you are interested in.
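As a quick check, the f-string one-liner can be wrapped in a helper the same way (the name decimal_part is just an illustrative choice):
def decimal_part(x, decimals=2):
    # same idea as f() above, written with an f-string
    return f".{str(round(x, decimals)).split('.')[-1]}"

print(decimal_part(100.56789))  # .57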

Related

How can I consistently get correct float division in python3? [duplicate]

This question already has answers here:
Is floating point arbitrary precision available?
(5 answers)
Is floating point math broken?
(31 answers)
Closed 1 year ago.
I am trying to divide floats by each other but am having a hard time getting accurate results. I understand that computers store floats in a way where the stored value is not exactly the given number. I am simply looking for a way to get exact results when working with floats.
Input:
x = 2.4
y = 0.2
print(x/y)
Output:
11.999999999999998
I highly recommend using the decimal module.
Example
from decimal import Decimal
x = Decimal("2.4")
y = Decimal("0.2")
print(x / y) # 12
Notice we pass the numbers as strings; constructing Decimals from float literals would reintroduce the same problem you pointed out.
But be careful with comparisons: with the plain floats from the question, 12 == 2.4 / 0.2 evaluates to False, because the float quotient is not exactly 12.
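A minimal sketch contrasting the two comparisons (illustrative only):
from decimal import Decimal

print(Decimal("2.4") / Decimal("0.2") == Decimal("12"))  # True: the Decimal quotient is exactly 12
print(2.4 / 0.2 == 12)                                   # False: the float quotient is 11.999999999999998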

how to calculate float number using precision in python [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 2 years ago.
from decimal import Decimal
precision = 2
number = 31684.28
result = Decimal(number) - Decimal(10 ** -precision)
print(result)
Desired output:
31684.27
Actual output:
31684.26999999999883584657356
What I am trying to do is subtract 0.01 from number.
You have to apply Decimal(...) to the values, not to the output. So try this:
from decimal import Decimal
precision = 2
number = 31684.28
result = number - float(10 ** Decimal(-precision))
print(result)
Output:
31684.27
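An alternative sketch that keeps the whole computation in Decimal by constructing the number from a string (illustrative, not the answer above):
from decimal import Decimal

precision = 2
number = Decimal("31684.28")              # built from a string, so it is exact
result = number - Decimal(10) ** -precision
print(result)                              # 31684.27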
You should use formatting like the following:
print("{:.2f}".format(result))
Or use round, like:
print(round(result, 2))
Comment: The question wasn't clear from the start. The correct answer, in my opinion, is the one by #U11-Forward.
You can use the function round
The syntax is: round(number, digits)
So, result = round(number, 2)
You can read more about it here: https://www.w3schools.com/python/ref_func_round.asp
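For example, applied directly to the Decimal result from the question, both round and string formatting print the desired value (a quick sketch):
from decimal import Decimal
result = Decimal(31684.28) - Decimal(10 ** -2)
print(round(result, 2))          # 31684.27
print("{:.2f}".format(result))   # 31684.27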
You can use the quantize method from the decimal module:
from decimal import Decimal
result = Decimal('31684.26999999999883584657356').quantize(Decimal('0.01'))
# result = Decimal('31684.27')
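quantize also accepts an explicit rounding mode if the default (ROUND_HALF_EVEN) is not what you want; a small sketch:
from decimal import Decimal, ROUND_HALF_UP

value = Decimal('31684.26999999999883584657356')
print(value.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))  # 31684.27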

How to get 0.000001 instead of 1e-06? [duplicate]

This question already has answers here:
Print a float number in normal form, not exponential form / scientific notation [duplicate]
(2 answers)
How to suppress scientific notation when printing float values?
(16 answers)
Closed 2 years ago.
I want to print the number written out in full instead of 1e-06.
number = 1
result = number/1000000
print(result)
Please help, what's the best way to do it?
Try out the following by using format:
number = 1
result = number/1000000
print('{0:.6f}'.format(result))
Output:
0.000001
output = f"{num:.9f}"
You can replace 9 with the number of digits you want after the decimal point. Also note that the value (num here) needs to be a float for this format to work.
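Putting both suggestions together with the numbers from the question (a minimal sketch):
number = 1
result = number / 1000000
print('{0:.6f}'.format(result))  # 0.000001
print(f"{result:.9f}")           # 0.000001000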

Strange results when computing large numbers in python2 then in python3 [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 4 years ago.
I am trying to compute x = (2**83 + 1)/3, which is mathematically an integer, but I get different results depending on the Python version.
In Python 2, I get:
x = 2**83 + 1
x = 9671406556917033397649409L
then
y = x/3 = 3223802185639011132549803L
In Python 3, I get:
x = 2**83 + 1
x = 9671406556917033397649409 --> OK
then
y = x/3 = 3.223802185639011e+24
To compare the two results, I use a format string in Python 3:
z = '%25d' % y, which gives z = '3223802185639010953592832',
whereas in Python 2 I get z = '3223802185639011132549803'.
(%i gives the same results, as expected.)
The strange thing is that when I compute 3*y, I get the correct result in Python 2 and a wrong one in Python 3.
I can't see where the problem is with my test (computing, formatting, ...).
I'd like to use Python 3 and display x = (2**83 + 1)/3 with all of its digits, not as 'e+24'.
Does anybody have an idea?
I should add that the problem remains the same when using / or // in Python 2; we get the same result since the quotient is mathematically an integer. So the problem is really with Python 3. How can I get the correct result (the full display of (2**83 + 1)/3) in Python 3?
You seem to be looking for integer division as opposed to the floating point one.
The / operator returns a floating point number in Python 3. To perform integer division in Python 3, use //.
So, I guess all you need is
(2**83 + 1)//3
which gives
3223802185639011132549803
instead of the (2**83 + 1)/3.
In Python 2.7, / and // behave the same for integer operands unless you do something like from __future__ import division.
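A quick Python 3 check of the two operators (a sketch):
x = 2**83 + 1
print(x / 3)              # 3.223802185639011e+24  (float division, precision lost)
print(x // 3)             # 3223802185639011132549803  (exact integer division)
print(3 * (x // 3) == x)  # True, since 2**83 + 1 is divisible by 3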

Multiplying a number with pi value in python [duplicate]

This question already has answers here:
How can I read inputs as numbers?
(10 answers)
Closed 7 years ago.
I want to accept a float from the user and return that number multiplied by the value of pi (3.14).
Here's my code:
print "Enter a no."
n = raw_input('enter here: ')
result = 1
def gen_pi(x):
result = x*3.14
return float(result)
gen_pi(n)
It's giving me an error about not being able to multiply a non-int sequence by a float. What does it mean? Is anything wrong with the code?
The result of raw_input is a str, which means that n is a str; then, in your function, x is a str, which can't be multiplied by a float.
You'll want to convert x into a float before multiplying by pi.
There are lots of places to do this, but I'd recommend doing it before passing to gen_pi:
gen_pi(float(n))
This way, gen_pi can always assume it's working with a float and that code can be simplified:
def gen_pi(x):
    return x * 3.14
Also note that if you want more precision (and possibly a bit more code clarity), you can use math.pi:
import math
def gen_pi(x):
    return x * math.pi
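Putting the pieces together as a complete Python 3 sketch (so input replaces raw_input; the prompt text is kept from the question):
import math

def gen_pi(x):
    return x * math.pi

n = float(input('enter here: '))  # convert the typed text to a float
print(gen_pi(n))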
