So I was writing a simple script to demonstrate geometric series convergence.
from decimal import *
import math
initial = int(input("a1? "))
r = Decimal(input("r? "))
runtime = int(input("iterations? "))
sum_value=0
for i in range(runtime):
    sum_value+=Decimal(initial * math.pow(r,i))
print(sum_value)
When I use values such as:
a1 = 1
r = .2
iterations = 100000
I get the convergence to be 1.250000000000000021179302083
When I replace the line:
sum_value+=Decimal(initial * math.pow(r,i))
With:
sum_value+=Decimal(initial * r ** i)
I get a more precise value, 1.250000000000000000000000002
What exactly is the difference here? From my understanding it has to do with math.pow being a floating point operation, but I would have thought that ** is just syntactic sugar for the math power function. If they are indeed different, then why, with a precision of 200, do I get the following when entering this into IDLE:
>>> Decimal(.8**500)
Decimal('3.50746621104350087215129555150772856244326043764431058846880005304485310211166734705824986213804838358790165633656170035364028902957755917668691836297512054443359375E-49')
>>> Decimal(math.pow(.8,500))
Decimal('3.50746621104350087215129555150772856244326043764431058846880005304485310211166734705824986213804838358790165633656170035364028902957755917668691836297512054443359375E-49')
They seem to be exactly the same. What is happening here?
The difference is, as you imply, that math.pow() converts the inputs to floats as stated in the documentation: "Unlike the built-in ** operator, math.pow() converts both its arguments to type float."
Therefore math.pow() also delivers a float as its answer, regardless of whether the input is a Decimal, an int, or anything else. When using numbers that are not exactly representable as a float (but are as a Decimal), you are likely to get a more precise answer with the ** operator.
This explains why your loop gives a more exact result when using **: you are then raising a Decimal to an integer power, so the whole calculation stays in the Decimal domain. With math.pow() you are inadvertently doing the entire calculation in floats and only converting the result to Decimal after the operation has already been executed. If you instead work with explicit Decimal values you will see the difference:
>>> Decimal('.8')**500
Decimal('3.507466211043403874762758796E-49')
>>> Decimal(math.pow(Decimal('.8'), 500))
Decimal('3.50746621104350087215129555150772856244326043764431058846880005304485310211166734705824986213804838358790165633656170035364028902957755917668691836297512054443359375E-49')
Thus, in the second case, the Decimal value is automatically cast to a float and the result is the same as for your example above. In the first case, however, the calculation is executed in the Decimal domain and yields a slightly different result.
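For completeness, here is a sketch of how the original loop can be kept entirely in the Decimal domain (using the same inputs a1 = 1, r = 0.2 and 100000 iterations; the variable names are illustrative):
from decimal import Decimal, getcontext

getcontext().prec = 28              # the default precision, made explicit here

initial = Decimal(1)                # a1
r = Decimal('0.2')                  # built from a string, never from the float 0.2
iterations = 100000

total = Decimal(0)
for i in range(iterations):
    total += initial * r**i         # ** keeps the computation in the Decimal domain
print(total)                        # 1.250000000000000000000000002 at 28-digit precision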
I actually have two questions (which I hope are related). How can I turn the rational numbers in a SymPy expression into fractions? For example, I would like "0.25*x+0.5*y" to become "1/4*x+1/2*y". The second question: if I want to replace a variable symbol by a fraction, the fraction gets automatically converted to a decimal number. For example, for eq = parse_expr("p1*cos(5*x)"), doing eq = eq.subs("p1", 1/5) gives me 0.2*cos(5*x) instead of 1/5*cos(5*x). Of course they are both the same mathematically, but I would like to have them in a nicer, fractional form. How can I do that? Thank you!
Because you are working in Python, every expression you write passes through Python semantics. So 1/5 becomes 0.2 before SymPy can do anything with it. This is covered in the 'gotchas.rst' file in the documentation.
The only time you will encounter this is when you have two leading numbers in a product dividing each other, as in 1/5*x but not x*1/5. In such cases you can prevent them from becoming a Float by wrapping one of them in S() to make it a SymPy number rather than a Python number, e.g. S(1)/5*x. This proactive step is needed in all contexts where you use a numeric fraction alone or as the first factor in a product:
>>> x.subs(x, 1/5)
0.2
>>> x.subs(x, S(1)/5)
1/5
If you happen to have an expression in which you want to convert the Floats back to Rationals you can use nsimplify(..., rational=True):
>>> nsimplify(0.25*x + 0.5*y, rational=True)
x/4 + y/2
You can use Rational. In the first case, write
Rational(0.25)*x + Rational(0.5)*y
or
Rational(1, 4)*x + Rational(1, 2)*y
which gives you x/4 + y/2. In the second case,
eq.subs("p1", Rational(1, 5)) # cos(5*x)/5
I have some number, 0.0000002345E-60. I want to print the floating point value as it is.
What is the way to do it?
print with %f truncates it to 6 digits. Also %n.nf gives a fixed number of digits. What is the way to print it without truncation?
Like this?
>>> print('{:.100f}'.format(0.0000002345E-60))
0.0000000000000000000000000000000000000000000000000000000000000000002344999999999999860343602938602754
As you might notice from the output, it’s not entirely clear how you want it done. Due to the float representation you lose precision and can’t represent the number exactly, so it’s not obvious where you want the displayed number to stop.
Also note that the exponential representation is often used to more explicitly show the number of significant digits the number has.
You could also use decimal to not lose the precision due to binary float truncation:
>>> from decimal import Decimal
>>> d = Decimal('0.0000002345E-60')
>>> p = abs(d.as_tuple().exponent)
>>> print(('{:.%df}' % p).format(d))
0.0000000000000000000000000000000000000000000000000000000000000000002345
You can use decimal.Decimal:
>>> from decimal import Decimal
>>> str(Decimal(0.0000002345e-60))
'2.344999999999999860343602938602754401109865640550232148836753621775217856801120686600683401464097113374472942165409862789978024748827516129306833728589548440037314681709534891496105046826414763927459716796875E-67'
This is the actual value of the float created by the literal 0.0000002345e-60. Its value is the number representable as a Python float which is closest to the actual 0.0000002345 * 10**-60.
float should generally be used for approximate calculations. If you want accurate results you should use something else, like the already mentioned Decimal.
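For example (a small illustration of that point, not part of the original answer), constructing the Decimal from the text of the number keeps the value as written, while constructing it from the float keeps the float's approximation:
from decimal import Decimal

print(Decimal('0.0000002345E-60'))   # 2.345E-67, the value as written
print(Decimal(0.0000002345E-60))     # 2.344999999999999860...E-67, the nearest float (as shown above)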
If I understand, you want to print a float?
The problem is, you cannot print a float.
You can only print a string representation of a float. So, in short, you cannot print a float, that is your answer.
If you accept that you need to print a string representation of a float, and your question is how to specify your preferred format for the string representations of your floats, then judging by the comments you have been very unclear in your question.
If you would like to print the string representations of your floats in exponent notation, then the format specification language allows this:
{:g} or {:G}, depending on whether or not you want the E in the output to be capitalized. This gets around the default precision for the e and E types, which leads to unwanted trailing 0s in the part before the exponent symbol.
Assuming your value is my_float, "{:G}".format(my_float) would print the output the way that the Python interpreter prints it. You could probably just print the number without any formatting and get the same exact result.
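For instance, with the number from the question (a quick check of the format spec; output produced with the default precision of 6 significant digits):
>>> "{:G}".format(0.0000002345E-60)
'2.345E-67'
>>> "{:g}".format(0.0000002345E-60)
'2.345e-67'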
If your goal is to print the string representation of the float with its current precision, in non-exponentiated form, User poke describes a good way to do this by casting the float to a Decimal object.
If, for some reason, you do not want to do this, you can do something like what is mentioned in this answer. However, you should set max_digits to sys.float_info.max_10_exp instead of the 14 used in that answer. This requires you to import sys at some point earlier in the code.
A full example of this would be:
import math
import sys

def precision_and_scale(x):
    # Use the largest decimal exponent a float can have instead of a
    # hard-coded 14, so very small numbers are not cut off.
    max_digits = sys.float_info.max_10_exp
    int_part = int(abs(x))
    magnitude = 1 if int_part == 0 else int(math.log10(int_part)) + 1
    if magnitude >= max_digits:
        return (magnitude, 0)
    frac_part = abs(x) - int_part
    multiplier = 10 ** (max_digits - magnitude)
    frac_digits = multiplier + int(multiplier * frac_part + 0.5)
    while frac_digits % 10 == 0:
        frac_digits //= 10          # integer division keeps this an int
    scale = int(math.log10(frac_digits))
    return (magnitude + scale, scale)

f = 0.0000002345E-60
p, s = precision_and_scale(f)
print("{:.{p}f}".format(f, p=p))
But I think the method involving casting to Decimal is probably better, overall.
I know that questions about rounding in Python have been asked multiple times already, but the answers did not help me. I'm looking for a method that rounds a float half up and returns a float. The method should also accept a parameter that defines the decimal place to round to. I wrote a method that implements this kind of rounding, but I think it does not look elegant at all.
import decimal

def round_half_up(number, dec_places):
    s = str(number)
    d = decimal.Decimal(s).quantize(
        decimal.Decimal(10) ** -dec_places,
        rounding=decimal.ROUND_HALF_UP)
    return float(d)
I don't like it, that I have to convert float to a string (to avoid floating point inaccuracy) and then work with the decimal module.
Do you have any better solutions?
Edit: As pointed out in the answers below, the solution to my problem is not as obvious as I thought, since correct rounding requires a correct representation of the numbers in the first place, and that is not the case with float. So I would expect the following code
def round_half_up(number, dec_places):
    d = decimal.Decimal(number).quantize(
        decimal.Decimal(10) ** -dec_places,
        rounding=decimal.ROUND_HALF_UP)
    return float(d)
(which differs from the code above only in that the float is converted directly into a Decimal rather than to a string first) to return 2.18 when used like this: round_half_up(2.175, 2). But it doesn't, because Decimal(2.175) returns Decimal('2.17499999999999982236431605997495353221893310546875'), which is the way the float number is represented by the computer.
Surprisingly, the first code returns 2.18 because the float number is converted to a string first. It seems that the str() function performs an implicit rounding to the number that was originally meant to be rounded, so there are two roundings taking place. Even though this is the result that I would expect, it is technically wrong.
Rounding is surprisingly hard to do right, because you have to handle floating-point calculations very carefully. If you are looking for an elegant solution (short, easy to understand), what you have looks like a good starting point. To be correct, you should replace decimal.Decimal(str(number)) with creating the Decimal from the number itself, which will give you a Decimal version of its exact representation:
d = Decimal(number).quantize(...)...
Decimal(str(number)) effectively rounds twice, as formatting the float into the string representation performs its own rounding. This is because str(float value) won't try to print the full decimal representation of the float, it will only print enough digits to ensure that you get the same float back if you pass those exact digits to the float constructor.
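The double rounding is easy to see with the 2.175 from the question (a quick illustration):
>>> import decimal
>>> decimal.Decimal(str(2.175))    # str() has already rounded to the shortest repr
Decimal('2.175')
>>> decimal.Decimal(2.175)         # the exact value of the float
Decimal('2.17499999999999982236431605997495353221893310546875')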
If you want to retain correct rounding, but avoid depending on the big and complex decimal module, you can certainly do it, but you'll still need some way to implement the exact arithmetics needed for correct rounding. For example, you can use fractions:
import fractions, math

def round_half_up(number, dec_places=0):
    sign = math.copysign(1, number)
    number_exact = abs(fractions.Fraction(number))
    shifted = number_exact * 10**dec_places
    shifted_trunc = int(shifted)
    if shifted - shifted_trunc >= fractions.Fraction(1, 2):
        result = (shifted_trunc + 1) / 10**dec_places
    else:
        result = shifted_trunc / 10**dec_places
    return sign * float(result)

assert round_half_up(1.49) == 1
assert round_half_up(1.5) == 2
assert round_half_up(1.51) == 2
assert round_half_up(2.49) == 2
assert round_half_up(2.5) == 3
assert round_half_up(2.51) == 3
Note that the only tricky part in the above code is the precise conversion of a floating-point to a fraction, and that can be off-loaded to the as_integer_ratio() float method, which is what both decimals and fractions do internally. So if you really want to remove the dependency on fractions, you can reduce the fractional arithmetic to pure integer arithmetic; you stay within the same line count at the expense of some legibility:
def round_half_up(number, dec_places=0):
    sign = math.copysign(1, number)
    exact = abs(number).as_integer_ratio()
    shifted = (exact[0] * 10**dec_places), exact[1]
    shifted_trunc = shifted[0] // shifted[1]
    difference = (shifted[0] - shifted_trunc * shifted[1]), shifted[1]
    if difference[0] * 2 >= difference[1]:  # difference >= 1/2
        shifted_trunc += 1
    return sign * (shifted_trunc / 10**dec_places)
Note that testing these functions brings to spotlight the approximations performed when creating floating-point numbers. For example, print(round_half_up(2.175, 2)) prints 2.17 because the decimal number 2.175 cannot be represented exactly in binary, so it is replaced by an approximation that happens to be slightly smaller than the 2.175 decimal. The function receives that value, finds it smaller than the actual fraction corresponding to the 2.175 decimal, and decides to round it down. This is not a quirk of the implementation; the behavior derives from properties of floating-point numbers and is also present in the round built-in of Python 3 and 2.
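You can check that the float really is below the decimal value 2.175 with nothing but the fractions module (a quick verification, consistent with the Decimal value shown in the question):
>>> from fractions import Fraction
>>> Fraction(2.175) < Fraction('2.175')
True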
I don't like it, that I have to convert float to a string (to avoid floating point inaccuracy) and then work with the decimal module. Do you have any better solutions?
Yes; use Decimal to represent your numbers throughout your whole program, if you need to represent numbers such as 2.675 exactly and have them round to 2.68 instead of 2.67.
There is no other way. The floating point number which is shown on your screen as 2.675 is not the real number 2.675; in fact, it is very slightly less than 2.675, which is why it gets rounded down to 2.67:
>>> 2.675 - 2
0.6749999999999998
It only shows in string form as '2.675' because that happens to be the shortest string such that float(s) == 2.6749999999999998. Note that this longer representation (with lots of 9s) isn't exact either.
However you write your rounding function, it is not possible for my_round(2.675, 2) to round up to 2.68 and also for my_round(2 + 0.6749999999999998, 2) to round down to 2.67; because the inputs are actually the same floating point number.
So if your number 2.675 ever gets converted to a float and back again, you have already lost the information about whether it should round up or down. The solution is not to make it float in the first place.
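A minimal sketch of what that looks like with the decimal module, keeping 2.675 as text so it never passes through a float:
>>> from decimal import Decimal, ROUND_HALF_UP
>>> Decimal('2.675').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
Decimal('2.68')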
After trying for a very long time to produce an elegant one-line function, I ended up getting something that is comparable to a dictionary in size.
I would say the simplest way to do this is just to
def round_half_up(inp, dec_places):
    return round(inp + 0.0000001, dec_places)
I would acknowledge that this is not accurate in every case, but it should work if you just want a simple, quick workaround.
I will explain my problem by example:
>>> # In this case, I get an unwanted result
>>> k = 20685671025767659927959422028 / 2580360422
>>> k
8.016582043889239e+18
>>> math.floor(k)
8016582043889239040
>>> # I don't want this to happen ^^, let it remain 8.016582043889239e+18
>>> #The following case though, is fine
>>> k2 = 5/6
>>> k2
0.8333333333333334
>>> math.floor(k2)
0
How do I make math.floor not do this to numbers shown in scientific notation? Is there a rule for which numbers are represented in scientific notation (I guess there would be a certain boundary)?
EDIT:
I first thought that the math.floor function was causing the accuracy loss, but it turns out that the first calculation itself lost accuracy, which had me really confused. It can easily be seen here:
>>> 20685671025767659927959422028 / 2580360422
8016582043889239040
>>> 8016582043889239040 * 2580360422
20685671025767659370513274880
>>> 20685671025767659927959422028 - 20685671025767659370513274880
557446147148
>>> 557446147148 / 2580360422
216.0342184739958
>>> ##this is >1, meaning I lost quite a bit of information, and it was not due to the flooring
So now my problem is how to get the actual result of the division. I looked at the following thread:
How to print all digits of a large number in python?
But for some reason I didn't get the same result.
EDIT:
I found a simple solution for the division accuracy problem in here:
How to manage division of huge numbers in Python?
Apparently the // operator returns an int rather than a float, and an int has no size limit apart from the machine's memory.
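To illustrate that (a small sketch with the numbers from the question), // and divmod stay in exact integer arithmetic, so nothing is lost:
>>> n = 20685671025767659927959422028
>>> d = 2580360422
>>> q, r = divmod(n, d)   # exact floor quotient and remainder
>>> q * d + r == n        # the division lost no information
True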
In Python 3, math.floor returns an integer. Integers are not displayed using scientific notation. Some floats are represented using scientific notation. If you want scientific notation, try converting back to float.
>>> float(math.floor(20685671025767659927959422028 / 2580360422))
8.016582043889239e+18
As Tadhg McDonald-Jensen indicates, you can also use str.format to get a string representation of your integer in scientific notation:
>>> k = 20685671025767659927959422028 / 2580360422
>>> "{:e}".format(k)
'8.016582e+18'
This may, in fact, be more practical than converting to float. As a general rule of thumb, you should choose a numeric data type based on the precision and range you require, without worrying about what it looks like when printed.
When I divide 2/3 I get 0.66666666, when I do 2//3 I get 0.
Is there any way to compute integer division while still keeping the decimal points?
Edit: looks like I may have confused a lot of you, my bad. What my professor told me is that since standard division (2/3) will only return 0.666666666666 up to 203 digits, it is not useful when I want to do computations that require more than 203 digits after the decimal point. I am wondering if there is a way to do 2//3 (which will return 0) but somehow still get the .6666 in the end.
For certain limited decimals, you can use Python's float .as_integer_ratio() method:
>>> 0.5.as_integer_ratio()
(1, 2)
For 2/3, which is not exactly representable in binary floating point, this starts to give less desirable results:
>>> (2/3).as_integer_ratio()
(6004799503160661, 9007199254740992) # approximation of 2/3
For arbitrary precision of rational numbers, use fractions in the Python library:
>>> import fractions
>>> fractions.Fraction('2/3')
Fraction(2, 3)
>>> Frac=fractions.Fraction
>>> Frac('2/3') + Frac('1/3') + Frac('1/10')
Fraction(11, 10)
>>> Frac('2/3') + Frac('1/6') + Frac('1/10')
Fraction(14, 15)
Then if you want a more accurate representation of that in decimal, use the decimal module to convert the integer numerator and denominator to arbitrary-precision Decimals and divide (the quotient is computed to the current context precision, 28 significant digits by default):
>>> f=Frac('2/3') + Frac('1/6') + Frac('1/10')
>>> f
Fraction(14, 15)
>>> f.numerator
14
>>> f.denominator
15
>>> import decimal
>>> decimal.Decimal(f.numerator) / decimal.Decimal(f.denominator)
Decimal('0.9333333333333333333333333333')
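If you need more digits than the default 28 (the question mentions around 203), the context precision of decimal can be raised before dividing; a short sketch:
import decimal

decimal.getcontext().prec = 250          # work with 250 significant digits
print(decimal.Decimal(2) / decimal.Decimal(3))
# prints 0.666...6667 with 250 significant digits (the last digit is rounded up)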
You can also cast one integer to a float before division.
In [1]: float(2)/3
Out[1]: 0.6666666666666666
This will prevent integer truncation and give you a result as a float.
Perhaps take a look at decimal.Decimal():
>>> import decimal
>>> x = decimal.Decimal(2/3)
>>> x
Decimal('0.66666666666666662965923251249478198587894439697265625')
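Note, though, that Decimal(2/3) first evaluates 2/3 as a float, so the Decimal above simply captures that float's error. Dividing two Decimals avoids the float step entirely (a small sketch, at the default 28-digit precision):
>>> decimal.Decimal(2) / decimal.Decimal(3)
Decimal('0.6666666666666666666666666667')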
// is floor division; it will give you the integer floor of the result. It does not matter whether you use 2//3 or float(2)//3: you cannot keep the precision when using //.
In my environment (Python 2.7.6), 2//3 returns 0 and float(2)//3 returns 0.0; neither keeps the precision.
A similar question may be helpful for you.
This is not a direct answer to your question, but it will help you to understand.
I am posting two links which explain the implementation in much detail:
From the Python history blog by Guido:
From PEP 0238:
Here is something we need to be aware of:
>>> 2/3
0
>>> 2/3.0
0.6666666666666666
>>> 2//3
0
>>> -2//3
-1
>>>
From PEP 0238:
The current division (/) operator has an ambiguous meaning for
numerical arguments: it returns the floor of the mathematical
result of division if the arguments are ints or longs, but it
returns a reasonable approximation of the division result if the
arguments are floats or complex. This makes expressions expecting
float or complex results error-prone when integers are not
expected but possible as inputs.
We propose to fix this by introducing different operators for
different operations: x/y to return a reasonable approximation of
the mathematical result of the division ("true division"), x//y to
return the floor ("floor division"). We call the current, mixed
meaning of x/y "classic division".
- Classic division will remain the default in the Python 2.x
series; true division will be standard in Python 3.0.
- The // operator will be available to request floor division
unambiguously.
- The future division statement, spelled "from __future__ import
division", will change the / operator to mean true division
throughout the module.
- A command line option will enable run-time warnings for classic
division applied to int or long arguments; another command line
option will make true division the default.
- The standard library will use the future division statement and
the // operator when appropriate, so as to completely avoid
classic division.
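As a quick illustration of the future division statement in a Python 2 session (in Python 3, true division is already the default):
>>> from __future__ import division
>>> 2/3
0.6666666666666666
>>> 2//3
0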