I am getting large values as strings, as follows:
s='1234567'
d='12345678912'
I want to do arithmetic as (100/d)*s
To do this, I need to convert the strings to appropriately large values. What would be the right way to represent them as numbers?
Just convert them using float. A Python float can represent numbers of this size without trouble. You can read more about Numerals here.
s='1234567'
d='12345678912'
(100/float(d))*float(s)
You could convert them using int, but as @GamesBrainiac pointed out, that will only work as expected in Python 3; in Python 2, / on integers is floor division, so it will most of the time give you 0 as the result.
(100/int(d))*int(s)
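If you're stuck on Python 2, you can get Python 3's division behavior with a __future__ import (a minimal sketch):
from __future__ import division  # '/' becomes true division in Python 2

s = '1234567'
d = '12345678912'
print((100 / int(d)) * int(s))   # ≈ 0.00999999278 instead of 0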
If s and d are large, e.g., thousands of digits, then you could use the fractions module to find the exact fraction:
from fractions import Fraction
s = int('1234567')
d = int('12345678912')
result = Fraction(100, d) * s
print(result)
# -> 30864175/3086419728
float has finite precision; it won't work for very large/small numbers.
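If you then want a decimal approximation of the exact fraction, you can divide its numerator by its denominator with decimal at whatever precision you like (a sketch; the 50 digits are arbitrary):
from decimal import Decimal, getcontext
getcontext().prec = 50  # arbitrary precision for the approximation
print(Decimal(result.numerator) / Decimal(result.denominator))  # ≈ 0.00999999278...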
I have a function that makes pseudorandom floats, and I want to turn those into integers, but I don't mean to round them.
For example, if the input is:
1.5323665
Then I want the output to be:
15323665
and not 2 or 1, which is what you get with round() and int().
You can convert the float to a string, remove the decimal point, and convert it back to an int:
x = 1.5323665
n = int(str(x).replace(".", ""))
However, this will not work for very large (or very small) numbers, where the string representation switches to scientific notation. In such cases, you can use string formatting instead:
n = int(f"{x:f}".replace(".", ""))
This will only work up to 6 decimal places; for more decimal places you have to choose the precision yourself using the {number:.pf} syntax, where p is the precision:
n = int(f"{1.234567891:.10f}".replace(".", ""))
Rather than creating your own pseudorandom engine, which almost certainly won't have a good density distribution (especially if you coerce floats to ints in this way), strongly consider using a builtin library for the range you're after!
More specifically, if you don't have a good distribution, you'll likely have extreme or unexplained skew in your data (especially values tending towards some common value).
You'll probably be able to observe this if you graph your data, which can be a great way to understand it!
Take a look at the builtin random library, which offers an integer range function for your convenience:
https://docs.python.org/3/library/random.html#random.randint
import random
result = random.randint(lowest_int, highest_int)
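For example, to draw an 8-digit integer directly (the bounds here are purely illustrative):
import random
result = random.randint(10_000_000, 99_999_999)  # uniform over all 8-digit integers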
Convert it to a string and remove the dot:
int(str(x).replace('.', ''))
x = 1.5323665
y = int(x)                     # integer part: 1
z = str(x - y)[2:]             # fractional digits: '5323665'
o = len(z)                     # number of fractional digits
print(int(round(x * 10**o)))   # 15323665
This prints 15323665. Note that x - y and x * 10**o are float operations, so for some inputs rounding noise can creep into the digits; the round() guards against off-by-one truncation.
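A variant using decimal sidesteps the float subtraction noise entirely (assuming str(x) doesn't fall back to scientific notation):
from decimal import Decimal

x = 1.5323665
d = Decimal(str(x))                           # parses '1.5323665' exactly
print(int(d.scaleb(-d.as_tuple().exponent)))  # shift the point 7 places -> 15323665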
I am performing a calculation in Python that results in very large numbers. The smallest of them is 2**10**6. This number is extremely long, so I attempted to use format() to convert it to scientific notation, but I keep getting an error message stating that the number is too large to convert to a float.
This is the error I keep getting:
print(format(2**10**6, "E"))
OverflowError: int too large to convert to float
I would like to print the result of 2**10**6 in a way that is concise and readable.
You calculated 2 raised to the power 10**6, since ** is right-associative: 2**10**6 means 2**(10**6), i.e. 2 to the one-millionth power. If your aim is "2 times 10 to the sixth", then 2*10**6 is what you want. In Python that can also be expressed as 2e6, where e means "times ten to the power of"; this is easy to confuse with Euler's number e from natural logarithms.
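To see the right-associativity:
>>> 2**10**6 == 2**(10**6)
True
>>> (2**10)**6
1152921504606846976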
You can also use the decimal.Decimal module if you want to sidestep decimal-to-binary float problems: in Python, floats written in decimal are rounded to the nearest binary float. And if you really did want the huge number, Decimal can handle it:
>>> from decimal import Decimal
>>> Decimal("2E6")
Decimal('2E+6')
>>> Decimal("2")*10**6
Decimal('2000000')
>>> Decimal("2")**10**6
Decimal('9.900656229295898250697923616E+301029')
For printing, use the "g" format:
>>> d = Decimal('2')**10**6
>>> format(d,'g')
'9.900656229295898250697923616e+301029'
>>> format(d,'.6g')
'9.90066e+301029'
>>> "{:g}".format(d)
'9.900656229295898250697923616e+301029'
>>> "{:.6g}".format(d)
'9.90066e+301029'
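As a sanity check, you can recover the exponent without ever materializing the million-bit integer, using logarithms (values shown are approximate):
import math
print(10**6 * math.log10(2))  # ≈ 301029.9957, so 2**10**6 ≈ 9.9e+301029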
So I have this long number (i.e. 1081546747036327937), and when I cleaned up my data in a pandas dataframe, I didn't realize Python converted it to a complex number (i.e. 1.081546747036328e+18).
I saved this one as a csv. The problem is, I accidentally deleted the original file and haven't been able to recover it so far, so...
is there a way to convert this complex number back to the original number?
I tried to convert it to str using str(data), but it stays the same (i.e. 1.081546747036328e+18).
As was said in a comment, this is not a complex number but a floating-point number. You can certainly convert it to a (long) integer, but you cannot be sure of getting back the initial number.
In your example:
i = 1081546747036327937
f = float(i)
j = int(f)
print(i, f, j, j-i)
will display:
1081546747036327937 1.081546747036328e+18 1081546747036327936 -1
This is because floating-point numbers only have limited accuracy, and rounding errors are to be expected with large integers whose binary representation requires more than 53 bits.
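For example, 2**53 + 1 is the first integer a double cannot represent exactly:
i = 2**53 + 1
print(i, int(float(i)))  # 9007199254740993 9007199254740992 -- the +1 is lost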
As can be read here, complex numbers are the sum of a real part and an imaginary part.
3+1j is a complex number with real value 3 and imaginary value 1.
What you have is scientific notation (the type is float), which is just an ordinary float written as a mantissa multiplied by the specified power of 10.
1e10 equals 1 times ten to the power of ten.
To convert this to int, you can just convert with int(number). For more information about Python data types, you can take a look here.
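Applied to the number from the question (the last digits differ from the original value, as the other answer explains):
print(int(1.081546747036328e+18))  # 1081546747036327936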
How can I create an array.array of Decimal? I want to do accurate float calculations, and I am storing the numbers inside an array.array, so I want to store Decimal inside array.array. I don't want to use NumPy for this.
I have tried setting the type code to "f", but it just converts the Decimal into a float, and I don't want it to be converted! (It will lose precision.)
import array
from decimal import Decimal
new = array.array("f", [Decimal(1.1)])
# These two show different results!
print(Decimal(1.1)) # 1.100000000000000088817841970012523233890533447265625
print(new[0]) # 1.100000023841858
Can you try:
new = array.array("d", [Decimal(1.1)])
The "d" type code is a double-precision float, which keeps more significant digits than the single-precision "f" (though the Decimal is still converted to a binary float).
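That said, array.array can only hold the machine types in its type-code table, so any Decimal you store is converted. If you need exact Decimal values without NumPy, a plain list keeps the objects themselves:
from decimal import Decimal

values = [Decimal("1.1"), Decimal("2.2")]  # built from strings, so exact
print(sum(values))  # 3.3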
I have a binary float like '10.1' and want to convert it to a decimal float. Then I want to do some operations on it and convert it back to a binary float. Are there builtins in Python 2 to do this? If not, what is the shortest (in bytes) way to do this?
There's no built-in way to do this for binary (though there is for hexadecimal, via float.fromhex). The easiest way would be to strip out the ., parse the digits as a binary integer, and scale appropriately:
import math
x = "10.1"
p = len(x) - x.index(".") - 1   # digits after the binary point: 1
i = int(x.replace(".", ""), 2)  # "101" parsed in base 2 -> 5
print(math.ldexp(i, -p))        # i * 2**-p -> 2.5
This assumes your string consists of nothing but 0s, 1s and a single . (i.e. no whitespace or exponent).
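For the reverse direction (a decimal float back to a binary string) there's no builtin either; here's a minimal sketch (a hypothetical helper that truncates after a fixed number of fractional bits):
def to_binary(f, places=10):
    i = int(f)            # integer part, e.g. 2 -> '10'
    frac, bits = f - i, ""
    for _ in range(places):
        frac *= 2         # peel off one fractional bit at a time
        bits += str(int(frac))
        frac -= int(frac)
    return "{0:b}.{1}".format(i, bits)

print(to_binary(2.5))  # 10.1000000000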