Turning a float into an integer without rounding the decimal points - python

I have a function that makes pseudorandom floats, and I want to turn those into integers, but I don't mean to round them.
For example, if the input is:
1.5323665
then I want the output to be:
15323665
and not 2 or 1, which is what you get with round() and int().

You can first convert the float to a string and then remove the decimal point and convert it back to an int:
x = 1.5323665
n = int(str(x).replace(".", ""))
However, this will not work for very large or very small numbers, where the string representation falls back to scientific notation. In such cases, you can use string formatting:
n = int(f"{x:f}".replace(".", ""))
This will only work up to 6 decimal places (the default precision for the f format); for more decimal places you have to choose the precision yourself using the {number:.pf} syntax, where p is the precision:
n = int(f"{1.234567891:.10f}".replace(".", ""))

Rather than creating your own pseudorandom engine, which almost certainly won't have a good density distribution, especially if you coerce floats to ints in this way, strongly consider using a built-in library for the range you're after!
More specifically, if you don't have a good distribution, you'll likely have extreme or unexplained skew in your data (especially values tending towards some common value).
You'll probably be able to observe this if you graph your data, which can be a great way to understand it!
Take a look at the built-in random library, which offers an integer range function for your convenience:
https://docs.python.org/3/library/random.html#random.randint
import random
result = random.randint(lowest_int, highest_int)
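For example, assuming you want eight-digit integers (the bounds below are just an illustration; randint includes both endpoints):
import random
result = random.randint(10_000_000, 99_999_999)   # e.g. 15323665
print(result)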

Convert it to a string and remove the dot:
int(str(x).replace('.', ''))

x = 1.5323665
y = int(x)              # integer part
z = str(x - y)[2:]      # digits after the decimal point, as a string
o = len(z)              # number of decimal digits
print(int(x * 10**o))
This will print 15323665.

Related

How to dynamically format string representation of float number in python?

Hi, I would like to dynamically adjust the displayed decimal places of a string representation of a floating point number, but I couldn't find any information on how to do it.
E.g.:
precision = 8
n = 7.12345678911
str_n = '{0:.{precision}}'.format(n)
print(str_n) should display -> 7.12345678
But instead I'm getting a "KeyError". What am I missing?
You need to specify where precision in your format string comes from:
precision = 8
n = 7.12345678911
print('{0:.{precision}}'.format(n, precision=precision))
The first time, you specified which argument you'd like to format using an index ({0}), so the formatting function knows where to get the argument from; but when you refer to a placeholder by a key, you have to explicitly supply that key.
It's a little unusual to mix these two systems; I'd recommend sticking with one:
print('{number:.{precision}}'.format(number=n, precision=precision)) # most readable
print('{0:.{1}}'.format(n, precision))
print('{:.{}}'.format(n, precision)) # automatic indexing, least obvious
Note that these precision values count all significant digits, not just those after the decimal point, so
>>> f"{123.45:.3}"
'1.23e+02'
drops the decimals and only gives the first three significant digits of the number.
Instead, f can be supplied as the presentation type of the format (see the documentation) to get fixed-point formatting with precision digits after the decimal point.
print('{number:.{precision}f}'.format(number=n, precision=precision)) # most readable
print('{0:.{1}f}'.format(n, precision))
print('{:.{}f}'.format(n, precision)) # automatic indexing, least obvious
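A small sketch contrasting the two behaviours (using the question's n and precision):
n = 7.12345678911
precision = 8

# Without a presentation type, the precision counts significant digits:
print('{0:.{1}}'.format(n, precision))    # 7.1234568
print(f"{123.45:.3}")                     # 1.23e+02

# With the 'f' type, the precision counts digits after the decimal point:
print('{0:.{1}f}'.format(n, precision))   # 7.12345679
print(f"{123.45:.3f}")                    # 123.450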
In addition to @Talon's answer, for those interested in f-strings, this also works:
precision = 8
n = 7.12345678911
print(f'{n:.{precision}f}')

How can I check the length of a long float? Python is truncating the length [duplicate]

I have a number like 0.0000002345E-60. I want to print the floating point value as it is.
What is the way to do it?
Printing with %f truncates it to 6 digits, and %n.nf gives a fixed number of digits. What is the way to print it without truncation?
Like this?
>>> print('{:.100f}'.format(0.0000002345E-60))
0.0000000000000000000000000000000000000000000000000000000000000000002344999999999999860343602938602754
As you might notice from the output, it's not really clear what you want here: due to the binary float representation you lose precision and can't represent the number exactly, so it's not obvious where the displayed number should stop.
Also note that the exponential representation is often used to more explicitly show the number of significant digits the number has.
You could also use decimal to not lose the precision due to binary float truncation:
>>> from decimal import Decimal
>>> d = Decimal('0.0000002345E-60')
>>> p = abs(d.as_tuple().exponent)
>>> print(('{:.%df}' % p).format(d))
0.0000000000000000000000000000000000000000000000000000000000000000002345
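If you prefer f-strings, the same idea can be written a little more directly (same d and p as above):
>>> print(f"{d:.{p}f}")
0.0000000000000000000000000000000000000000000000000000000000000000002345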
You can use decimal.Decimal:
>>> from decimal import Decimal
>>> str(Decimal(0.0000002345e-60))
'2.344999999999999860343602938602754401109865640550232148836753621775217856801120686600683401464097113374472942165409862789978024748827516129306833728589548440037314681709534891496105046826414763927459716796875E-67'
This is the actual value of the float created by the literal 0.0000002345e-60: the Python float closest to the true value of 0.0000002345 * 10**-60.
float should generally be used for approximate calculations. If you want exact results you should use something else, like the Decimal type mentioned above.
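Note the contrast with constructing the Decimal from a string, which preserves the intended decimal value exactly (as in the previous answer):
>>> str(Decimal('0.0000002345e-60'))
'2.345E-67'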
If I understand, you want to print a float?
The problem is, you cannot print a float.
You can only print a string representation of a float. So, in short, you cannot print a float, that is your answer.
If you accept that you need to print a string representation of a float, and your question is how to specify your preferred format for that string representation, then, judging by the comments, you have been very unclear in your question.
If you would like to print the string representations of your floats in exponent notation, the format specification language allows this:
use {:g} or {:G} (depending on whether you want the E in the output to be capitalized). This gets around the default precision for the e and E types, which leads to unwanted trailing 0s in the part before the exponent symbol.
Assuming your value is my_float, "{:G}".format(my_float) would print the output the way that the Python interpreter prints it. You could probably just print the number without any formatting and get the same exact result.
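For instance, a rough sketch (my_float here is just the example value from the question):
>>> my_float = 0.0000002345e-60
>>> "{:G}".format(my_float)
'2.345E-67'
>>> print(my_float)
2.345e-67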
If your goal is to print the string representation of the float with its current precision, in non-exponentiated form, User poke describes a good way to do this by casting the float to a Decimal object.
If, for some reason, you do not want to do this, you can do something like what is mentioned in this answer. However, you should set max_digits to sys.float_info.max_10_exp instead of the 14 used in that answer. This requires you to import sys somewhere earlier in the code.
A full example of this would be:
import math
import sys

def precision_and_scale(x):
    max_digits = sys.float_info.max_10_exp
    int_part = int(abs(x))
    magnitude = 1 if int_part == 0 else int(math.log10(int_part)) + 1
    if magnitude >= max_digits:
        return (magnitude, 0)
    frac_part = abs(x) - int_part
    multiplier = 10 ** (max_digits - magnitude)
    frac_digits = multiplier + int(multiplier * frac_part + 0.5)
    while frac_digits % 10 == 0:
        frac_digits //= 10  # integer division keeps frac_digits an int
    scale = int(math.log10(frac_digits))
    return (magnitude + scale, scale)

f = 0.0000002345E-60
p, s = precision_and_scale(f)
print("{:.{p}f}".format(f, p=p))
But I think the method involving casting to Decimal is probably better, overall.

Difference between round() and float() in Python

Could someone please explain to me what the difference is between round() and float() in Python?
For example
x = 9.09128239
x = float("{0:.2f}".format(x))
y = 9.09128239
y = round(y, 2)
As far as I can see, both snippets above do the same job. However, round() seems more compact and appealing to me.
I'd like to know if there is something else behind these functions and if I should consider something in particular when choosing which one to use.
Thank you for your help in advance!
It is not the float function that is doing the rounding here.
In general, float and round do very different things. float() takes a valid input and attempts to convert it to a floating point value. round() rounds a number to n digits after the decimal point.
float(3) #works on numbers
float("5.2") #and strings too!
x = 9.09128239
# x = float("{0:.2f}".format(x))  -- there are two steps here:
result = "{0:.2f}".format(x)  # result is the string "9.09"; the rounding happened
                              # because of the precision given in the format spec
x = float(result)             # just takes the string and converts it to a float
y = 9.09128239
y = round(y, 2)               # works directly on the float and rounds it
TL;DR: Just use round.
This formats and parses a string, which is a lot of unnecessary work:
x = float("{0:.2f}".format(x))
This simply rounds the float, and will be much faster:
y = round(y, 2)
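If you want to check the speed difference yourself, a rough sketch with the standard timeit module (absolute numbers will vary by machine):
import timeit

setup = "x = 9.09128239"
print(timeit.timeit('float("{0:.2f}".format(x))', setup=setup))  # format then parse
print(timeit.timeit("round(x, 2)", setup=setup))                 # round only, noticeably faster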
One of the main differences is that float is a class and round is a function. Using float does not round a number:
float('0.12345') #0.12345
but round does:
round(0.12345, 2) #0.12
Use float to convert something to a float, and use round to round off a float.
float() is used to convert data to the float type, where possible.
On the other hand, round() is used to round a given value to a specified number of decimal places.
Just as a quick note: what you are doing in your float() example is taking a number, rounding it to the specified number of digits (in your example, two) while converting it to a string, and then casting that string back to the float type.
For more information on float(), you may visit this page:
[Built in Functions](https://docs.python.org/3/library/functions.html#float)

representing large number in python

I am getting a large value as a string as follows
s='1234567'
d='12345678912'
I want to do arithmetic as (100/d)*s
To do this, I need to convert the strings to appropriately large numeric values. What is the way to represent them as numbers?
Just convert them using float; Python takes care of creating an appropriately large representation. You can read more about numerals here.
s='1234567'
d='12345678912'
(100/float(d))*float(s)
You could convert them using int, but as @GamesBrainiac pointed out, that will only work in Python 3; in Python 2, where / performs integer division on ints, it will most of the time give you 0 as the result.
(100/int(d))*int(s)
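A quick sketch of the difference (run under Python 3; the Python 2 behaviour is shown in the comments):
s = int('1234567')
d = int('12345678912')

print((100 / d) * s)     # Python 3: true division, prints roughly 0.0099999928
                         # Python 2: 100 / d is integer division and gives 0
print((100.0 / d) * s)   # forcing float division works in both versions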
If s and d are large, e.g., thousands of digits, then you could use the fractions module to get an exact fraction:
from fractions import Fraction
s = int('1234567')
d = int('12345678912')
result = Fraction(100, d) * s
print(result)
# -> 30864175/3086419728
float has finite precision; it won't work for very large or very small numbers.
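A small sketch of that limitation, using a toy 400-digit integer (arbitrary example):
from fractions import Fraction

big = int('9' * 400)                    # 400-digit integer
print(Fraction(100, big) * big == 100)  # True -- Fraction arithmetic stays exact
try:
    float(big)                          # too large for a float
except OverflowError as exc:
    print(exc)                          # int too large to convert to float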

Is there a more readable or Pythonic way to format a Decimal to 2 places?

What the heck is going on with the syntax to fix a Decimal to two places?
>>> from decimal import Decimal
>>> num = Decimal('1.0')
>>> num.quantize(Decimal(10) ** -2) # seriously?!
Decimal('1.00')
Is there a better way that doesn't look so esoteric at a glance? 'Quantizing a decimal' sounds like technobabble from an episode of Star Trek!
Use string formatting:
>>> from decimal import Decimal
>>> num = Decimal('1.0')
>>> format(num, '.2f')
'1.00'
The format() function applies string formatting to values. Decimal() objects can be formatted like floating point values.
You can also use this to interpolate the formatted decimal value into a larger string:
>>> 'Value of num: {:.2f}'.format(num)
'Value of num: 1.00'
See the format string syntax documentation.
Unless you know exactly what you are doing, expanding the number of significant digits through quantisation is not the way to go; quantisation is the province of accountancy packages, and it normally has the aim of rounding results to fewer significant digits instead.
Quantize is used to set the number of places that are actually held internally within the value, before it is converted to a string. As Martijn points out this is usually done to reduce the number of digits via rounding, but it works just as well going the other way. By specifying the target as a decimal number rather than a number of places, you can make two values match without knowing specifically how many places are in them.
It looks a little less esoteric if you use a decimal value directly instead of trying to calculate it:
num.quantize(Decimal('0.01'))
You can set up some constants to hide the complexity:
places = [Decimal('0.1') ** n for n in range(16)]
num.quantize(places[2])
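One difference worth keeping in mind (a small sketch, same num as above): format() gives you a string for display, while quantize() gives you another Decimal you can keep computing with.
from decimal import Decimal

num = Decimal('1.0')
TWO_PLACES = Decimal('0.01')

as_text = format(num, '.2f')            # '1.00' (a str, for display)
as_decimal = num.quantize(TWO_PLACES)   # Decimal('1.00') (still a Decimal)
print(as_text, repr(as_decimal))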
