Python modulus result is incorrect

I am totally stumped. I was computing the cipher of the number 54 in RSA with the following values:
p=5; q=29; n=145
d=9; e=137
So the number 54 encrypted would be:
54^137 mod 145
or, equivalently, in Python:
import math
math.pow(54,137)%145
My calculator gives me 24; my Python statement gives me 54.0. Python is clearly wrong, but I have no idea why or how. Try it on your installation of Python. My version is 2.5.1, but I also tried 2.6.5 with the same incorrect result.

>>> pow(54,137,145)
24
math.pow works in floating point. You don't want that. Floating-point values have fewer than 17 digits of useful precision, while 54**137 has 238 digits.

That's because the math module is basically a Python wrapper around the C math library, which doesn't have arbitrary-precision numbers. That means math.pow(54,137) computes 54^137 as a 64-bit floating-point number, which is not precise enough to hold all the digits of such a large value. Use Python's built-in arbitrary-precision integers instead:
>>> (54 ** 137) % 145
24L
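The three-argument form of the built-in pow shown above is also far more efficient than computing the full power first, because it reduces modulo 145 after every multiplication. A minimal sketch comparing the approaches (Python 3 syntax):

```python
import math

# Floating point cannot hold all 238 digits of 54**137, so the
# low-order digits are lost and the final modulo is meaningless:
print(math.pow(54, 137) % 145)   # some inexact float, not 24

# Exact integer arithmetic gives the right answer:
print((54 ** 137) % 145)         # 24

# Three-argument pow does modular exponentiation directly, keeping
# every intermediate value small:
print(pow(54, 137, 145))         # 24
```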

Related

Weird float to integer conversion issue in python

For a calculation in a program that I wrote involving finite algebraic fields, I needed to check whether (2**58-1)/61 is an integer. However, Python seems to indicate that it is, while it is not.
For example -
>>> (2**58-1)/61
4725088133634619.0
Even when using NumPy functions, the issue appears:
>>> np.divide(np.float64(2**58)-1,np.float64(61))
4725088133634619.0
This happens although Python does calculate 2**58 itself correctly (I assume the issue is general, but I encountered it using these numbers).
If you use the normal / division, your result is a float, with the associated limited precision. The result gets rounded, in your case to 4725088133634619.0, but that doesn't prove that it is an integer.
If you want to check if the result of the division by 61 is an integer, test if the remainder of the division by 61 is 0, using the modulo operator:
>>> (2**58-1) % 61
45
As you can see, it isn't.
As for the limited float precision mentioned by @Thierry Lathuille: Python's float is a 64-bit double-precision number, which provides 53 bits for the mantissa (the same is true for np.float64). That means not all integers above 2**53 are representable as floats; there is a loss of precision. For example, float(2**53) == float(2**53 + 1) evaluates to True, even though the integers differ. More details here:
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
Is floating point math broken?
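A quick sketch of that precision cliff, comparing exact integers on one side and 64-bit floats on the other:

```python
# Integers compare exactly, so these differ:
print(2**53 == 2**53 + 1)                # False

# Converted to 64-bit floats, 2**53 + 1 cannot be represented
# and rounds back down to 2**53:
print(float(2**53) == float(2**53 + 1))  # True

# The same rounding is why (2**58 - 1) / 61 looks like an integer:
print((2**58 - 1) / 61)                  # 4725088133634619.0 (rounded)
print((2**58 - 1) % 61)                  # 45, so it is not divisible
```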
Correct answers have already been given. I am just adding another approach (which is not much different from what has already been said).
What you may want to do, given the inherent limitations of float representation, is use divmod() in plain Python and numpy.divmod() in NumPy. That way you can check the quotient and the remainder together.
print(divmod((2**58-1),61))
This gives the quotient and remainder as
(4725088133634618, 45)
In NumPy you can use the analogous divmod function, but the operands should be integer types such as np.int64, not float types (due to the representation errors mentioned above).
np.divmod(np.int64(2**58)-1,np.int8(61))
The above gives the quotient and remainder as:
(4725088133634618, 45)

Replicating C Fixed-Point Math in Python

I am attempting to replicate a DSP algorithm in Python that was originally written in C. The trick is that I also need to retain the behavior of the 32-bit fixed-point variables from the C version, including any numerical errors that the limited precision would introduce.
The options I think are currently available:
I know the Python Decimal type can be used for fixed-point arithmetic; however, from what I can tell there is no way to adjust the size of a Decimal variable. To my knowledge, NumPy does not support fixed-point operations.
I did a quick experiment to see how fiddling with the Decimal precision affected things:
>>> a = dc.Decimal(1.1)
>>> a
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> sys.getsizeof(a)
104
>>> dc.getcontext().prec = 16
>>> a = dc.Decimal(1.1)
>>> a
Decimal('1.1999999999999999555910790149937383830547332763671875')
>>> sys.getsizeof(a)
104
The output changes before/after the precision change, but the variable is still the same size and still has a large number of decimal places.
How can I best achieve the original objective? I know that Python's ctypes exposes the C float type, but I don't know whether that is useful here, or whether there is any way to accurately mimic C-style fixed-point math in Python.
Thanks!
I recommend the fxpmath module for fixed-point operations in Python. With it you can emulate fixed-point arithmetic, defining the precision in bits (the fractional part). It supports arrays and the common arithmetic operations.
Repo at: https://github.com/francof2a/fxpmath
Here is an example:
from fxpmath import Fxp
x = Fxp(1.1, True, 32, 16) # (val, signed, n_word, n_frac)
print(x)
print(x.precision)
results in:
1.0999908447265625
1.52587890625e-05
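If you would rather avoid a dependency, 32-bit fixed-point behavior can also be emulated with plain Python integers by shifting and masking. This is only a sketch of one common convention (signed Q16.16 with truncating multiplication); the C code being replicated may round or saturate differently:

```python
FRAC_BITS = 16          # Q16.16: 16 integer bits, 16 fractional bits
MASK32 = 0xFFFFFFFF     # wrap results to 32 bits, like C unsigned math

def to_fixed(x):
    """Convert a Python float to a raw 32-bit Q16.16 word."""
    return int(round(x * (1 << FRAC_BITS))) & MASK32

def to_signed(v):
    """Reinterpret a 32-bit word as a signed two's-complement value."""
    return v - (1 << 32) if v & 0x80000000 else v

def from_fixed(v):
    """Convert a raw Q16.16 word back to a float."""
    return to_signed(v) / (1 << FRAC_BITS)

def fx_mul(a, b):
    """Q16.16 multiply: widen, shift down, wrap to 32 bits."""
    return ((to_signed(a) * to_signed(b)) >> FRAC_BITS) & MASK32

a = to_fixed(1.1)
b = to_fixed(2.5)
print(from_fixed(fx_mul(a, b)))   # close to 2.75, plus fixed-point error
```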

Remainders with fractional divisions not working in Python

For example,
>>> 59.28%3.12
3.119999999999999
>>> 59.28/3.12
19.0
Is there any way to get 0.0 as the output of 59.28 % 3.12?
I don't know the details of the modulo implementation for floats, but this works fine:
from decimal import Decimal
Decimal("59.28") % Decimal("3.12")
EDIT: Note that you have to pass strings (with quotes ") to the constructors. Otherwise both numbers are first interpreted as floats, which is the source of the problem (an incorrect approximation).
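As a sketch of an alternative, the standard-library fractions module also parses decimal strings exactly, so the remainder comes out as exactly zero:

```python
from fractions import Fraction

# Built from strings, both values are exact rationals (1482/25 and 78/25),
# and 59.28 is an exact integer multiple of 3.12:
print(Fraction("59.28") % Fraction("3.12"))   # 0
print(Fraction("59.28") / Fraction("3.12"))   # 19
```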

Limitations of division in Python

I wrote a simple Python program that repeatedly divides 1 by 2. My aim was to see the limits of division. The code runs fine until a little over a thousand iterations, then it starts to produce 0.0 instead of any representation of a number. Why is that happening?
I am just learning.
I paste here the last few result lines:
6.3e-322
3.16e-322
1.6e-322
8e-323
4e-323
2e-323
1e-323
5e-324
0.0
0.0
Press any key to continue . .
Python floats are IEEE double-precision floating point numbers (that is, whatever your platform's C compiler maps to the "double" type -- on most current OSes, that means 64-bit).
You can learn about them here: http://en.wikipedia.org/wiki/IEEE_floating_point_number
If you need arbitrary-precision maths, you can use the decimal module. It will, of course, be slower, but you'll be able to keep dividing by 2 until you run out of memory.
Try this:
import sys
print sys.float_info
It will give you an idea of the limits of float numbers on your system. I expect they are about the same as the numbers you got.
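A sketch of where those last output lines come from: below sys.float_info.min the format enters gradual underflow (subnormal numbers), and below about 5e-324 nothing is left but zero:

```python
import sys

print(sys.float_info.min)   # smallest *normal* double, about 2.2e-308
x = 5e-324                  # smallest subnormal double, one step above zero
print(x)
print(x / 2)                # halfway to zero rounds down: underflow to 0.0
```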

How can I make numbers more precise in Python? [duplicate]

This question already has answers here:
Is floating point arbitrary precision available?
(5 answers)
Closed 3 years ago.
I'm just learning the basics of Python at the moment, and I thought that, as a learning exercise, I'd try writing something that would approximate the number e. Anyway, it always gives the answer to 11 decimal places, and I want it to give something more like 1000 decimal places. How do I do this?
Are you sure you need to make them "more precise"? Or do you just need to see more digits than Python shows by default?
>>> import math
>>> math.pi
3.141592653589793
>>>
>>> '{0:0.2f}'.format(math.pi)
'3.14'
>>>
>>> '{0:0.30f}'.format(math.pi)
'3.141592653589793115997963468544'
>>>
>>> '{0:0.60f}'.format(math.pi)
'3.141592653589793115997963468544185161590576171875000000000000'
However, note that
Floating point numbers are usually implemented using double in C; information about the precision and internal representation of floating point numbers for the machine on which your program is running is available in sys.float_info
I assure you that pi doesn't go to zero after 48 digits :-)
Almost all machines today use IEEE-754 floating-point arithmetic, and almost all platforms map Python floats to IEEE-754 "double precision".
An IEEE-754 double has 64 bits (8 bytes), of which 52 bits store the fraction of the significand (53 bits of effective precision), so the total precision is roughly 15 to 17 significant decimal digits.
To represent a number with higher precision than that, you should use Decimal:
import decimal
decimal.getcontext().prec = 100
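With the context precision raised, the question's original exercise (approximating e) can be sketched by summing the Taylor series 1/0! + 1/1! + 1/2! + ... until adding the next term no longer changes the sum at the chosen precision:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50       # work with 50 significant digits

e = Decimal(0)
term = Decimal(1)            # current term, 1/n!
n = 0
while e + term != e:         # stop once the term is below the precision
    e += term
    n += 1
    term /= n
print(e)                     # 2.71828182845904523536028747135266...
```

The same loop with prec = 1000 gives roughly a thousand correct digits (the last few are polluted by accumulated rounding, so it is common to compute with a few guard digits and trim).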
If you want it to be a plain float with a precision of a thousand digits, the short answer is: you can't.
A workaround is, you can use the decimal module. Here is an example:
import decimal
a = decimal.Decimal('2387324895172987120570935712093570921579217509185712093')
In this case, however, a is not a plain number anymore; it's an instance of the decimal.Decimal class. You can still do math operations with it, though.
