How can I create an array.array of Decimal? I want to do accurate float calculations, and I am storing numbers inside an array.array, so I want to store Decimal values in it. I don't want to use NumPy for this.
I have tried setting the type code to f, but it just converts the Decimal into a float, and I don't want it to be converted, because it will lose precision.
import array
from decimal import Decimal
new = array.array("f", [Decimal(1.1)])
# These two show different results!
print(Decimal(1.1)) # 1.100000000000000088817841970012523233890533447265625
print(new[0]) # 1.100000023841858
Can you try
new = array.array("d", [Decimal(1.1)])
A double has more precision than a single-precision float, so it should keep more decimal digits.
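For illustration, a minimal sketch (assuming CPython's standard array module) comparing the two typecodes. Note that with either typecode the Decimal is converted to a C float/double on storage, so you still don't get true Decimal storage:

```python
import array
from decimal import Decimal

single = array.array("f", [Decimal(1.1)])  # stored as 32-bit float
double = array.array("d", [Decimal(1.1)])  # stored as 64-bit double

print(single[0])        # 1.100000023841858 (single precision)
print(double[0])        # 1.1 (double precision keeps more digits)
print(type(double[0]))  # <class 'float'> -- the Decimal is still converted
```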
Related
I have a function that makes pseudorandom floats, and I want to turn those into integers, but I don't mean to round them.
For Example, If the input is:
1.5323665
Then I want the output to be:
15323665
and not 2 or 1, which is what you get with round() and int().
You can first convert the float to a string, then remove the decimal point and convert it back to an int:
x = 1.5323665
n = int(str(x).replace(".", ""))
However, this will not work for very large or very small numbers, where the string representation defaults to scientific notation. In such cases, you can use string formatting:
n = int(f"{x:f}".replace(".", ""))
This will only work up to 6 decimal places; for more decimal places you have to decide the precision yourself using the {number:.pf} syntax, where p is the precision:
n = int(f"{1.234567891:.10f}".replace(".", ""))
Rather than creating your own pseudorandom engine, which almost certainly won't have a good density distribution (especially if you coerce floats to ints in this way), strongly consider using a built-in library for the range you're after!
More specifically, if you don't have a good distribution, you'll likely see extreme or unexplained skew in your data, especially values tending towards some common value.
You'll probably be able to observe this if you graph your data, which can be a great way to understand it!
Take a look at the built-in random library, which offers an integer range function for your convenience:
https://docs.python.org/3/library/random.html#random.randint
import random
result = random.randint(lowest_int, highest_int)
Convert it to a string and remove the dot:
int(str(x).replace('.', ''))
x = 1.5323665
y = int(x)          # integer part
z = str(x - y)[2:]  # digits after the decimal point
o = len(z)          # how many decimal digits there are
print(int(x * 10**o))
It will return 15323665.
I want to convert string numbers in a list to float numbers, and I can't do it, so I need some help.
num = '0.00003533'
print('{:f}'.format(float(num)))
Formatting it without specifying decimals only returns a float of 0.000035; I need the entire string in a float.
print('{:8f}'.format(float(num)))
Adding the exact number of decimals works, but the numbers in the list vary greatly in how many decimals they have, so I can't manually add it every time. How could I automatically put the correct number of decimals inside the format?
Something like '{:exactdecimalf}', exactdecimal being a variable.
I'm using a module that requires float, which is why I can't print it directly from the string format.
Use this
from decimal import Decimal
num = '0.00003533'
print(Decimal(num)) #0.00003533
If you want to print it as a string:
print ('{:f}'.format(Decimal(num)))
Maybe double precision will suit you.
You can split the string and take the length of the last part with
len(num.split(".")[1])
Then use that as the number of decimals.
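Putting that together, a small sketch using a nested format spec (assuming the input string always contains a dot):

```python
num = '0.00003533'
decimals = len(num.split(".")[1])              # 8 digits after the decimal point
print('{:.{}f}'.format(float(num), decimals))  # 0.00003533
```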
I have a variable containing a large floating point number, say a = 999999999999999.99
When I type int(a) in the interpreter, it returns 1000000000000000.
How do I get the output as 999999999999999 for long numbers like these?
999999999999999.99 is a number that can't be precisely represented in the floating-point format, so Python compromises and picks the closest value that can be represented. In this case, that happens to be 1000000000000000. That's why converting that to an integer gives you 1000000000000000.
If you need more precision than floats can provide, consider using decimal.Decimal.
>>> import decimal
>>> a = decimal.Decimal("999999999999999.99")
>>> a
Decimal('999999999999999.99')
>>> int(a)
999999999999999
The problem is not int, the problem is the floating point value itself. Your value would need 17 digits of precision to be represented correctly, while double-precision floating point values have between 15 and 16 digits of precision. So, when you input it, it is rounded to the nearest representable float value, which is 1000000000000000.0. When int is called it cannot do a thing; the precision is already lost.
If you need to represent this kind of value exactly, you can use the decimal data type, keeping in mind that performance does suffer compared to regular floats.
I am getting a large value as a string as follows
s='1234567'
d='12345678912'
I want to do arithmetic as (100/d)*s
To do this, I need to convert the strings to appropriate large values. What would be the way to represent them as a number?
Just convert them using float; Python takes care of creating an appropriately large representation. You can read more about numeric literals in the Python documentation.
s='1234567'
d='12345678912'
(100/float(d))*float(s)
You could convert them using int, but as @GamesBrainiac pointed out, that will work only in Python 3; in Python 2 it will most of the time give you 0 as the result.
(100/int(d))*int(s)
If s and d are large e.g., thousands of digits then you could use fractions module to find the fraction:
from fractions import Fraction
s = int('1234567')
d = int('12345678912')
result = Fraction(100, d) * s
print(result)
# -> 30864175/3086419728
float has finite precision; it won't work for very large/small numbers.
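A small sketch of the difference (the numbers here are illustrative ones of my choosing, not from the question):

```python
from fractions import Fraction

big = int("9" * 30)            # a 30-digit integer
print(float(big))              # 1e+30 -- float rounds it away
print(Fraction(1, big) * big)  # 1 -- Fraction arithmetic stays exact
```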
Very basic question. If I set products as 3 and parcels as 2, I get 1. How do I have the last line print 1.5, a decimal, instead of simply 1?
products = raw_input('products shipped? ')
parcels = raw_input('parcels shipped? ')
print "Average Number of products per parcel"
print int(products) / int(parcels)
print float(products) / float(parcels)
If you want real numbers, use float, which represents real numbers. Don't use integers.
In Python 3 you'll get this automatically.
In Python 2 you can do from __future__ import division, then dividing two integers will result in a floating point number.
In either case you can use // instead of / if you decide you really needed an integer result instead. That works in Python 2 even if you don't do the import.
You can also convert either or both of the numbers to float to force a floating point result.
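A minimal Python 3 sketch of the behaviors described above (the thread's own snippets use Python 2 syntax):

```python
products, parcels = 3, 2

print(products / parcels)         # true division in Python 3: 1.5
print(products // parcels)        # floor division: 1
print(float(products) / parcels)  # forcing a float also gives 1.5
```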
If you want the full decimal value, use the below:
from decimal import Decimal
print Decimal(products) / Decimal(parcels)