Python cdecimal InvalidOperation

I am trying to read financial data and store it. The place I get the financial data from stores it with incredible precision, but I am only interested in 5 figures after the decimal point. Therefore, I have decided to use t = cdecimal.Decimal(s).quantize(cdecimal.Decimal('.00001'), rounding=cdecimal.ROUND_UP), but I keep getting an InvalidOperation exception. Why is this?
>>> import cdecimal
>>> c = cdecimal.getcontext()
>>> c.prec = 5
>>> s = '45.2091000080109'
>>> # s = '0.257585003972054' works!
>>> t = cdecimal.Decimal(s).quantize(cdecimal.Decimal('.00001'), rounding=cdecimal.ROUND_UP)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
cdecimal.InvalidOperation: [<class 'cdecimal.InvalidOperation'>]
Why is there an invalid operation here? If I change the precision to 7 (or greater), it works. If I set s to be '0.257585003972054' instead of the original value, that also works! What is going on?
Thanks!

The pure-Python decimal module gives a better description of the error:
Python 2.7.2+ (default, Feb 16 2012, 18:47:58)
>>> import decimal
>>> s = '45.2091000080109'
>>> decimal.getcontext().prec = 5
>>> decimal.Decimal(s).quantize(decimal.Decimal('.00001'), rounding=decimal.ROUND_UP)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/decimal.py", line 2464, in quantize
'quantize result has too many digits for current context')
File "/usr/lib/python2.7/decimal.py", line 3866, in _raise_error
raise error(explanation)
decimal.InvalidOperation: quantize result has too many digits for current context
>>>
Docs:
Unlike other operations, if the length of the coefficient after the
quantize operation would be greater than precision, then an
InvalidOperation is signaled. This guarantees that, unless there is an
error condition, the quantized exponent is always equal to that of the
right-hand operand.
But I must confess I don't know what this means.
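What the docs mean here: the coefficient of the quantized result must still fit within the context precision. Rounding 45.2091000080109 up to five decimal places gives 45.20911, whose coefficient 4520911 has 7 digits, more than prec = 5, so InvalidOperation is signalled; 0.257585003972054 rounds to 0.25759, a 5-digit coefficient, which fits. A minimal sketch of the same call succeeding once the precision covers the whole result (cdecimal behaves the same way):
>>> import decimal
>>> decimal.getcontext().prec = 7
>>> decimal.Decimal('45.2091000080109').quantize(decimal.Decimal('.00001'), rounding=decimal.ROUND_UP)
Decimal('45.20911')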

Related

'decimal.DivisionUndefined' error thrown when creating model before children models [duplicate]

In the following code, both coeff1 and coeff2 are Decimal objects. When I check their type using type(coeff1), I get <class 'decimal.Decimal'>, but when I wrote a test script and checked Decimal objects there, I got decimal.Decimal, without the word class.
coeff1 = system[i].normal_vector.coordinates[i]
coeff2 = system[m].normal_vector.coordinates[i]
x = coeff2/coeff1
print(type(x))
system.xrow_add_to_row(x,i,m)
Another issue is that when I change the first input to the function xrow_add_to_row to negative x:
system.xrow_add_to_row(-x,i,m)
I get an InvalidOperation error at a line above the changed code:
<ipython-input-11-ce84b250bafa> in compute_triangular_form(self)
93 coeff1 = system[i].normal_vector.coordinates[i]
94 coeff2 = system[m].normal_vector.coordinates[i]
---> 95 x = coeff2/coeff1
96 print(type(coeff1))
97 system.xrow_add_to_row(-x,i,m)
InvalidOperation: [<class 'decimal.DivisionUndefined'>]
But then again, in a test script I use negative numbers with Decimal objects and it works fine. Any idea what the problem might be? Thanks.
decimal.DivisionUndefined is raised when you attempt to divide zero by zero. It's a bit confusing, as you get a different exception when only the divisor is zero (decimal.DivisionByZero).
>>> from decimal import Decimal as D
>>> D(0) / D(0)
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
D(0) / D(0)
decimal.InvalidOperation: [<class 'decimal.DivisionUndefined'>]
>>> D(1) / D(0)
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
D(1) / D(0)
decimal.DivisionByZero: [<class 'decimal.DivisionByZero'>]
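In the code above, that means both coeff1 and coeff2 are presumably zero for the row being processed, so the division is 0/0. A minimal sketch of guarding against that case before dividing (the zero values below are hypothetical stand-ins for the question's coefficients):
from decimal import Decimal

coeff1 = Decimal('0')  # hypothetical pivot coefficient that happens to be zero
coeff2 = Decimal('0')

if coeff1 == 0:
    # Skip this row (or swap in a row with a nonzero pivot) instead of dividing.
    print('pivot coefficient is zero, skipping')
else:
    x = coeff2 / coeff1
    print(type(x), x)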

Django scientific notation input validation

I have the following fields on my model:
class Range(models.Model):
    optimal_low = models.DecimalField(max_digits=30, decimal_places=8)
    optimal_high = models.DecimalField(max_digits=30, decimal_places=8)
And here's how I bring them into the form (because the form's primary object is not this model, I just need the fields, and I don't want to duplicate the max_digits and decimal_places):
class ReadingMappingForm(forms.ModelForm):
    optimal_low = Range._meta.get_field('optimal_low').formfield()
    optimal_high = Range._meta.get_field('optimal_high').formfield()
It seems Django allows entering decimals in scientific notation out of the box, but there's a glitch above a certain threshold.
In the form, if I input 1.5E9 it works fine, and the value gets saved as 1500000000.00000000.
However if I input 1.5E10 it says:
Ensure that there are no more than 8 decimal places.
Which is wrong, because I'm not adding any decimal places. In fact, if I enter the same value in plain notation, even with the 8 decimal places written out, i.e. 15000000000.00000000, it works fine.
So I think something is not working correctly under the hood...
EDIT
I tested the field in the console, and it errors there:
>>> from django.forms import DecimalField
>>> f = DecimalField(max_digits=30, decimal_places=8)
>>> f.clean('1.5E9')
Decimal('1.5E+9')
>>> f.clean('1.5E10')
Traceback (most recent call last):
File "myproject/env/lib/python3.5/site-packages/django/core/management/commands/shell.py", line 69, in handle
self.run_shell(shell=options['interface'])
File "myproject/env/lib/python3.5/site-packages/django/core/management/commands/shell.py", line 61, in run_shell
raise ImportError
ImportError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "myproject/env/lib/python3.5/site-packages/django/forms/fields.py", line 168, in clean
self.run_validators(value)
File "myproject/env/lib/python3.5/site-packages/django/forms/fields.py", line 157, in run_validators
raise ValidationError(errors)
django.core.exceptions.ValidationError: ['Ensure that there are no more than 8 decimal places.']
It does seem to be a bug in DecimalValidator. Here is the explanation:
The number you entered, 1.5E10, is parsed as a Decimal object whose _int is '15' and whose _exp is 9.
In DecimalValidator's __call__ method, the number of decimals is calculated as
decimals = abs(exponent)
which then triggers the following:
if self.decimal_places is not None and decimals > self.decimal_places:
    raise ValidationError(
        self.messages['max_decimal_places'],
        code='max_decimal_places',
        params={'max': self.decimal_places},
    )
It seems a fix for this would be something like:
if exponent < 0:
    decimals = abs(exponent)
else:
    decimals = 0
The fix in Django 2.0 looks like the following:
# If the absolute value of the negative exponent is larger than the
# number of digits, then it's the same as the number of digits,
# because it'll consume all of the digits in digit_tuple and then
# add abs(exponent) - len(digit_tuple) leading zeros after the
# decimal point.
if abs(exponent) > len(digit_tuple):
    digits = decimals = abs(exponent)
else:
    digits = len(digit_tuple)
    decimals = abs(exponent)
This is a known bug which has been fixed, but only in Django 2.0 as far as I can tell.
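To see the miscount outside Django, here is a sketch of how digits and decimals can be derived from Decimal.as_tuple(); the exponent >= 0 branch is my reading of what the 2.0 fix adds for inputs like 1.5E10, and the remaining branches mirror the snippet quoted above:
from decimal import Decimal

def count_digits_and_decimals(value):
    sign, digit_tuple, exponent = value.as_tuple()
    if exponent >= 0:
        # 1.5E10 -> digit_tuple (1, 5), exponent 9: the implied trailing zeros
        # add to the total digit count, but there are no decimal places at all.
        digits = len(digit_tuple) + exponent
        decimals = 0
    elif abs(exponent) > len(digit_tuple):
        digits = decimals = abs(exponent)
    else:
        digits = len(digit_tuple)
        decimals = abs(exponent)
    return digits, decimals

print(count_digits_and_decimals(Decimal('1.5E+10')))     # (11, 0), not 9 decimal places
print(count_digits_and_decimals(Decimal('0.00000015')))  # (8, 8)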

Python Compiler - Using Decimal for Division of Zero?

I am currently creating a compiler and would also like to implement division by zero. I noticed the decimal module in Python and thought it could be useful. The code below shows what I am trying to get at. Is there any way to split up the expression and check for division by 0 for both negative and positive numbers? Thanks in advance.
if input negative int/0 = -infin
if pos int/0 = infin
if 0/0 = null
etc.
The Python documentation says the default behaviour is to raise an exception.
>>> import decimal
>>> D = decimal.Decimal
>>> a = D("12")
>>> b = D("0")
>>> a/b
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/decimal.py", line 1350, in __truediv__
return context._raise_error(DivisionByZero, 'x / 0', sign)
File "/usr/lib/python3.4/decimal.py", line 4050, in _raise_error
raise error(explanation)
decimal.DivisionByZero: x / 0
>>>
But also, instead of raising an exception:
If this signal is not trapped, returns Infinity or -Infinity with the
sign determined by the inputs to the calculation.
As for zero divided by zero, it is not mathematically defined, so I don't know what you may want to do with it. Python raises a different exception:
>>> b/b
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/decimal.py", line 1349, in __truediv__
return context._raise_error(DivisionUndefined, '0 / 0')
File "/usr/lib/python3.4/decimal.py", line 4050, in _raise_error
raise error(explanation)
decimal.InvalidOperation: 0 / 0
>>>
and if the signal is not trapped, it returns NaN (which might not help you tell whether the operation was a zero divided by zero or not).
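To get the behaviour sketched in the question (-Infinity, Infinity, NaN) instead of exceptions, you can untrap the two signals; a minimal sketch using a local context, assuming that is acceptable for your compiler:
from decimal import Decimal, localcontext, DivisionByZero, InvalidOperation

with localcontext() as ctx:
    ctx.traps[DivisionByZero] = False    # pos/0 and neg/0 now return signed Infinity
    ctx.traps[InvalidOperation] = False  # 0/0 now returns NaN instead of raising
    print(Decimal(12) / Decimal(0))      # Infinity
    print(Decimal(-12) / Decimal(0))     # -Infinity
    print(Decimal(0) / Decimal(0))       # NaN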

Python: limit on the accuracy of float

The code gives an error because the value of "var" is very close to zero, less than 1e-80. I tried to fix this error using "from decimal import *", but it didn't really work. Is there a way to tell Python to round a number to zero when a float is very close to zero, i.e. < 1e-50? Or any other way to fix this issue?
Thank you
CODE:
import math
H=6.6260755e-27
K=1.3807e-16
C=2.9979E+10
T=100.0
x=3.07175e-05
cst=2.0*H*H*(C**3.0)/(K*T*T*(x**6.0))
a=H*C/(K*T*x)
var=cst*math.exp(a)/((math.exp(a)-1.0)**2.0)
print var
OUTPUT:
Traceback (most recent call last):
File "test.py", line 11, in <module>
var=cst*math.exp(a)/((math.exp(a)-1.0)**2.0)
OverflowError: (34, 'Numerical result out of range')
To Kevin:
The code was edited with the following lines:
from decimal import *
getcontext().prec = 7
cst=Decimal(2.0*H*H*(C**3.0)/(K*T*T*(x**6.0)))
a=Decimal(H*C/(K*T*x))
The problem is that (math.exp(a)-1.0)**2.0 is too large to hold as an intermediate result.
>>> (math.exp(a) - 1.0)**2.0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: (34, 'Result too large')
However, for the value of a you are using,
>>> math.exp(a)/(math.exp(a)-1.0) == 1.0
True
so you can essentially cancel that part of the fraction, leaving
var = cst/(math.exp(a)-1.0)
which evaluates nicely to
>>> cst/(math.exp(a)-1.0)
7.932672271698049e-186
If you aren't comfortable rewriting the formula to that extent, use the associativity of the operations to avoid the large intermediate value. The resulting product is the same.
>>> cst/(math.exp(a)-1.0)*math.exp(a)/(math.exp(a)-1.0)
7.932672271698049e-186
I solved this issue, but the solution will only work for this particular problem, not in general. The main issue is the nature of this function:
math.exp(a)/(math.exp(a)-1.0)**2.0
which decays very rapidly.
The problem can easily be solved by restricting the value of "a" (which won't make any significant change in the calculation), i.e.
if a>200:
    var=0.0
else:
    var=cst*math.exp(a)/((math.exp(a)-1.0)**2.0)
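Another option, since the question already mentions the decimal module: Decimal has its own exp() method and a far larger exponent range than a float, so the original formula can be evaluated directly without overflowing. A sketch, keeping every value as a Decimal (the precision of 30 is an arbitrary choice):
from decimal import Decimal, getcontext

getcontext().prec = 30
H = Decimal('6.6260755e-27')
K = Decimal('1.3807e-16')
C = Decimal('2.9979E+10')
T = Decimal('100.0')
x = Decimal('3.07175e-05')

cst = 2 * H * H * C**3 / (K * T * T * x**6)
a = H * C / (K * T * x)
var = cst * a.exp() / (a.exp() - 1)**2
print(var)  # roughly 7.93E-186, agreeing with the rearranged float result above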

How do I get the value of the decimal.Inexact exception?

In the decimal module documentation I read:
class decimal.Inexact
Indicates that rounding occurred and the result is not exact. [...] The rounded
result is returned. [...]
How do I get the rounded result? Here's an example:
>>> from decimal import Decimal, Context, Inexact
>>> (Decimal("1.23")/2).quantize(Decimal("0.1"), context=Context(traps=[Inexact]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/decimal.py", line 2590, in quantize
context._raise_error(Inexact)
File "/usr/lib/python3.4/decimal.py", line 4043, in _raise_error
raise error(explanation)
decimal.Inexact: None
You are misinterpreting the documentation; the operation returns the rounded result only when you don't trap the signal, in which case the Inexact flag is set on the context. When you do trap the signal, the exception is raised and no rounded result is returned.
From the tutorial portion of the documentation:
Contexts also have signal flags for monitoring exceptional conditions encountered during computations. The flags remain set until explicitly cleared, so it is best to clear the flags before each set of monitored computations by using the clear_flags() method.
>>> from decimal import Decimal, localcontext
>>> with localcontext() as ctx:
... (Decimal("1.23")/2).quantize(Decimal("0.1"))
... print(ctx.flags)
...
Decimal('0.6')
{<class 'decimal.Subnormal'>: 0, <class 'decimal.Underflow'>: 0, <class 'decimal.DivisionByZero'>: 0, <class 'decimal.Inexact'>: 1, <class 'decimal.Rounded'>: 1, <class 'decimal.InvalidOperation'>: 0, <class 'decimal.Overflow'>: 0, <class 'decimal.Clamped'>: 0}
Here the decimal.Inexact and decimal.Rounded flags are set, telling you that the Decimal('0.6') return value is inexact.
Use trapping only when the specific signal should be an error; e.g. when rounding would be a problem for your application.
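If you want both the rounded result and a record of whether rounding occurred, one approach is to leave Inexact untrapped, clear the flags, and inspect them after the operation; a small sketch (quantize_checked is just an illustrative helper name):
from decimal import Decimal, Inexact, localcontext

def quantize_checked(value, exp):
    # Return (rounded result, True if the rounding lost information).
    with localcontext() as ctx:
        ctx.clear_flags()
        result = value.quantize(exp)
        return result, bool(ctx.flags[Inexact])

print(quantize_checked(Decimal("1.23") / 2, Decimal("0.1")))  # (Decimal('0.6'), True)
print(quantize_checked(Decimal("1.2") / 2, Decimal("0.1")))   # (Decimal('0.6'), False)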
