I've been experimenting with the standard Python math module and have come across some subtle difficulties. For example, I'm noticing the following behavior concerning indeterminate forms:
>>> 0**0
1
>>> def inf():
...     return 1e900  # will return inf
...
>>> inf()**inf()
inf
And other anomalies of the sort. I'm writing a calculator, and I'd like to have it be mathematically accurate. Is there something I can do about this? Or, is there some way to circumvent this? Thanks in advance.
There's nothing wrong with your first example. 0**0 is often defined to be 1.
The second example comes down to the limited precision and range of doubles. 1E900 exceeds the maximum positive value of a (most likely 64-bit) double. If you want values outside of that range, you'll have to look into arbitrary-precision libraries. Fortunately Python has one built-in: the decimal module.
For example:
>>> from decimal import Decimal
>>> d = Decimal('1E900')
>>> f = d + d
>>> print(f)
2E+900
According to Wolfram (quoting Knuth), while 0**0 is indeterminate, it's often given as 1. This is because holding the statement 'x**0 = 1' to be true for all x is useful in many contexts. Even more interestingly, Python will consider NaN**0 to be 1 as well.
http://mathworld.wolfram.com/Power.html
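Both claims are easy to confirm at the interactive prompt:

>>> 0 ** 0
1
>>> float('nan') ** 0
1.0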
In the case of infinity**infinity, you're not really dealing with the mathematical concept of infinity (for which the expression would be undefined), but rather with a number that's too large and has overflowed. As such, all that statement is saying is that a huge number raised to the power of another huge number is still a huge number.
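You can see the overflow directly: any float literal beyond the largest double (about 1.8e308) evaluates to inf:

>>> 1e900
inf
>>> import math
>>> math.isinf(1e900)
True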
Edit: I do not think it is possible to monkey-patch a built-in type such as float in Python, so you cannot override float.__pow__(x, y) directly. What you could do instead is define your own float subclass:
class myfloat(float):
    def __pow__(self, other):
        if self == other == 0:
            return float('nan')  # report 0**0 as indeterminate
        return float.__pow__(self, other)

m = myfloat(0)
m**0  # evaluates to nan
Not sure if that's exactly what you're looking for though.
Well, returning NaN for 0**0 is almost always useless, and lots of algorithms avoid special cases if we assume 0**0 == 1. So while it may not be mathematically perfect, we're talking about IEEE-754 here; mathematical exactness is really the least of our problems. [1]
But if you want to change it, that's rather simple. The following works as expected in Python 3.2:
def my_pow(x, y):
    if x == y == 0:
        return float('nan')  # treat 0**0 as indeterminate
    return float.__pow__(float(x), y)

pow = my_pow  # shadows the built-in pow() in this module
[1] On x86 CPUs, the following code can theoretically take the if branch (well, at least in C and co), because intermediate results may be kept in 80-bit extended precision:
float x = sqrt(y);
if (x != sqrt(y)) printf("Surprise, surprise!\n");
To keep things simple, consider the Python function:
def to_string(value):
    return str(value)
Since we cannot declare the data types of parameters in Python (as far as I know), when I pass 1/2 to the above function, it automatically converts 1/2 to 0.5 and then returns the string '0.5'. How do I make it return '1/2'? How do I force Python to treat arguments as being of a certain data type, no matter how they "appear"?
Here (in Python 3) 1/2 is evaluated to 0.5 before it is even passed into the function. For this specific example the information is lost, due to possible float accuracy errors, before the function is even called; in theory you can get back to 1/2 from 0.5, but you should not rely on that float manipulation. To avoid losing accuracy, treat a fraction as the two pieces of integer information it really is, instead of as one float:
from math import gcd  # fractions.gcd was deprecated in 3.5 and removed in 3.9

def to_string(n, d):
    g = gcd(n, d)
    return str(n // g) + "/" + str(d // g)
If what you are asking about is specifically fractions, then a class built around this idea is probably your best bet; the standard library already provides one in fractions.Fraction (see below). If your example was not just illustrative: Python (famously) does not have type enforcement, but you can read about the modern take on this idea, type hints, at https://docs.python.org/3/library/typing.html.
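For instance, the standard fractions.Fraction class keeps the numerator and denominator exact, so the original to_string works unchanged if you pass it a Fraction instead of letting 1/2 collapse to a float first:

>>> from fractions import Fraction
>>> def to_string(value):
...     return str(value)
...
>>> to_string(Fraction(1, 2))
'1/2'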
I've tried to solve the problem myself but I can't. It's a function to solve 2nd grade (quadratic) equations when y=0, like 'ax^2 + bx + c = 0'. When I execute it, it says there is a math domain error. If you can help me it will be nice, thanks.
a = raw_input('put a number for variable a:')
b = raw_input('put a number for variable b:')
c = raw_input('put a number for variable c:')
a = float(a)
b = float(b)
c = float(c)

import math

x = (-b + math.sqrt((b**2) - 4*a*c)) / 2*a
print x
x = (-b - math.sqrt((b**2) - 4*a*c)) / 2*a
print x
PS: I'm starting with Python so I'm quite a disaster, sorry.
The issue here is that the standard math library in Python cannot handle complex numbers: when the discriminant b**2 - 4*a*c is negative, math.sqrt raises the math domain error you're seeing.

If you want to handle a function that could produce complex values (such as the one above), I would suggest using the cmath library, which has a replacement cmath.sqrt function.
You could change your above code to the following:
from cmath import sqrt

a = raw_input('put a number for variable a:')
b = raw_input('put a number for variable b:')
c = raw_input('put a number for variable c:')
a = float(a)
b = float(b)
c = float(c)
# Note the parentheses around (2 * a): dividing by 2 and then
# multiplying by a, as in the original, is not the quadratic formula.
x = (-b + sqrt((b**2) - 4 * a * c)) / (2 * a)
print x
x = (-b - sqrt((b**2) - 4 * a * c)) / (2 * a)
print x
and it should fix your problem. (I also made some edits to make the code look a little more pythonic (read: PEP 8 compliant) and put parentheses around 2 * a, since / 2 * a divides by 2 and then multiplies by a.)
First, it's worth noting that in "2nd grade math", that equation doesn't have a solution with the values you (presumably) entered.* When you get to high school math and learn about imaginary numbers, you learn that all quadratic equations actually do have solutions, it's just that sometimes the solutions are complex numbers. And then, when you get to university, you learn that whether or not the equations have solutions depends on the domain; the function to real numbers and the function to complex numbers are different functions. So, from either a 2nd-grade perspective or a university perspective, Python is doing the right thing by raising a "math domain error".
* Actually, do you even learn about quadratic equations before middle school? That seems a bit early…
The math docs explain:
These functions cannot be used with complex numbers; use the functions of the same name from the cmath module if you require support for complex numbers. The distinction between functions which support complex numbers and those which don’t is made since most users do not want to learn quite as much mathematics as required to understand complex numbers. Receiving an exception instead of a complex result allows earlier detection of the unexpected complex number used as a parameter, so that the programmer can determine how and why it was generated in the first place.
But there's another reason for this: math was specifically designed to be thin wrappers around the standard C library math functions. It's part of the intended goal that you can take code written for another language that uses C's <math.h>, C++'s <cmath>, or similar functions in Perl, PHP, etc. and have it work the same way with the math module in Python.
So, if you want the complex roots, all you have to do is import cmath and use cmath.sqrt instead of math.sqrt.
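For example:

>>> import cmath
>>> cmath.sqrt(-4)
2j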
As a side note: In general, the operators and other builtins are more "friendly" than the functions from these modules. However, until 3.0, the ** operator breaks this rule, so something like (-1) ** .5 will just raise ValueError: negative number cannot be raised to a fractional power. If you upgrade to 3.x, it will work as desired. (This change is exactly like the one with integer division giving a floating-point result, but there's no __future__ statement to enable it in 2.6-2.7 because it was deemed a less visible and important change.)
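For instance, in Python 3 the ** operator returns a complex result for a negative base with a fractional exponent (the tiny real part is just floating-point rounding):

>>> (-4) ** 0.5
(1.2246467991473532e-16+2j)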
I am new to SAGE and am having a problem with something very simple. I have the following code:
delay = float(3.5)
D = delay % 1.0
D
But this returns the value -0.5 instead of the expected 0.5. What am I doing wrong?
If I change delay to be delay = float(2.5), I get the right answer, so I don't know why it isn't consistent (I am sure I am using the modulo wrong somehow).
I think that this question will answer things very well indeed for you.
However, I don't know why you are using float in Sage at all; then you could just use Python straight up. Anyway, the % operator is tricky to use outside of integers. For example, here is the docstring for its use on Sage rational numbers:
Return the remainder of division of self by other, where other is
coerced to an integer.

INPUT:

* ``other`` - object that coerces to an integer.

OUTPUT: integer

EXAMPLES:

sage: (-4/17).__mod__(3/1)
1
I assume this is considered to be a feature, not a bug.
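For comparison, plain Python's % gives the expected result here, and % and math.fmod differ only in which operand's sign they follow (% takes the sign of the divisor, fmod the sign of the dividend); whatever Sage is doing with its own numeric types (note the integer coercion in the docstring above) is a different convention again:

>>> 3.5 % 1.0
0.5
>>> -3.5 % 1.0
0.5
>>> import math
>>> math.fmod(-3.5, 1.0)
-0.5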
I am working on a Python program to calculate numbers in the Fibonacci sequence. Here is my code:
import math

def F(n):
    return ((1 + math.sqrt(5))**n - (1 - math.sqrt(5))**n) / (2**n * math.sqrt(5))

def fib(n):
    for i in range(n):
        print F(i)
My code uses the closed-form (Binet) formula for the nth Fibonacci number:

F(n) = ((1 + sqrt(5))^n - (1 - sqrt(5))^n) / (2^n * sqrt(5))

This can calculate many of the numbers in the Fibonacci sequence, but I do get overflow errors.
How can I improve this code and prevent overflow errors?
Note: I am using Python 2.7.
Python's integers are arbitrary precision, so if you calculate the Fibonacci sequence using an iterative algorithm, you can compute exact results.
>>> def fib(n):
...     a = 0
...     b = 1
...     while n > 0:
...         a, b = b, a + b
...         n = n - 1
...     return a
...
>>> fib(100)
354224848179261915075L
There are several multiple precision floating-point libraries available for Python. The decimal module is included with Python and was originally intended for financial calculations. It does support sqrt() so you can do the following:
>>> import decimal
>>> decimal.setcontext(decimal.Context(prec=40))
>>> a=decimal.Decimal(5).sqrt()
>>> a
Decimal('2.236067977499789696409173668731276235441')
>>> ((1+a)**100 - (1-a)**100)/(a*(2**100))
Decimal('354224848179261915075.0000000000000000041')
Other libraries are mpmath and gmpy2.
>>> import gmpy2
>>> gmpy2.set_context(gmpy2.context(precision=150))
>>> a=gmpy2.sqrt(5)
>>> a
mpfr('2.2360679774997896964091736687312762354406183598',150)
>>> ((1+a)**100 - (1-a)**100)/(a*(2**100))
mpfr('354224848179261915075.00000000000000000000000248',150)
>>> gmpy2.fib(100)
mpz(354224848179261915075L)
gmpy2 can also compute Fibonacci numbers directly (as shown above).
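For completeness, here is a minimal mpmath sketch of the same computation (the precision value is an arbitrary choice of mine):

from mpmath import mp, sqrt

mp.prec = 150  # working precision in bits
a = sqrt(5)
print(((1 + a)**100 - (1 - a)**100) / (a * 2**100))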
Disclaimer: I maintain gmpy2.
I don't know if this is acceptable to you, but you could just use integer arithmetic and calculate the Fibonacci number using the recurrence relation (e.g., F3 = F2 + F1).
Since around Python 2.5, you can do arbitrary-precision integer arithmetic, which pretty much eliminates overflow problems. If you try to calculate F(10000), though, it will doubtless get very slow.
Also, check out the decimal module. IIRC, you can use it with Python 2.7 and specify the precision of the decimal arithmetic. This would allow you to keep using the same algorithm, just with the Decimal type.
ADDED
It is easy to overlook that the Decimal class includes a square root method. You would need to use this method instead of math.sqrt(), since you need to retain the full precision of the Decimal class.
Also, sqrt(5) is a relatively expensive operation, so only calculate it once.
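A minimal sketch of both suggestions combined (the precision and the helper name binet are my own choices):

from decimal import Decimal, getcontext

getcontext().prec = 50      # digits of working precision
SQRT5 = Decimal(5).sqrt()   # computed once, then reused

def binet(n):
    # Closed-form Fibonacci via Decimal, rounded back to an integer.
    phi_n = (1 + SQRT5) ** n
    psi_n = (1 - SQRT5) ** n
    return int(((phi_n - psi_n) / (SQRT5 * 2**n)).to_integral_value())

print(binet(100))  # 354224848179261915075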
Your statement "How can I improve this code..." is kind of vague, so I will take it to mean shortening your code:
import math

def fib(j):
    for i in [int(((1 + math.sqrt(5))**n - (1 - math.sqrt(5))**n) / (2**n * math.sqrt(5))) for n in range(j)]: print i
You can combine both of your functions to make just one function, and using list comprehension, you can make that function run in one line.
You cannot prevent overflow errors if you are working with very large numbers; instead, try catching them:
import math

def fib(j):
    try:
        for i in [int(((1 + math.sqrt(5))**n - (1 - math.sqrt(5))**n) / (2**n * math.sqrt(5))) for n in range(j)]: print i
    except OverflowError:
        print 'There was an error, your number was too large!'
The second version builds the whole list first, so if any value overflows, the exception is raised before anything is printed; only if there is no error does it go on to print the values.
C has a set of functions, ffs(), ffsl(), and ffsll(), that return the position of the least significant bit that is set in a given binary integer.
I'm wondering if there is an equivalent function already available in Python. I don't see one described for bitarray, but perhaps there's another. I am hoping to avoid calculating the answer by looping through all possible bit masks, though of course that's an option of last resort; ffs() simply returns a single integer and I'd like to know of something comparable in Python.
Only in Python 2.7 and 3.1 and up (where int.bit_length() is available):
def ffs(x):
    """Return the index, counting from 0, of the
    least significant set bit in `x`.
    """
    return (x & -x).bit_length() - 1
Example:
>>> ffs(136)
3
It is available in the gmpy wrapper for the GNU Multi-Precision library. On my system, it is about 4x faster than the ctypes solution.
>>> import gmpy
>>> gmpy.scan1(136)
3
>>> bin(136)
'0b10001000'
It is possible to load functions from shared libraries (DLLs for Windows users) using the ctypes module. I was able to load the ffs() function from the C standard library, contained in libc.so.6 on Ubuntu 10.10:
>>> import ctypes
>>> libc = ctypes.cdll.LoadLibrary('libc.so.6')
>>> libc.ffs(136)
4
(Note that this uses 1-based indexing). Obviously, this is not cross-platform compatible as-is; you'll need to change the filename of the library to load based on which system you're operating under (detected from sys.platform or similar). I wouldn't even be 100% sure it'd be the same on different Linux distributions.
It'd also be worth doing some proper benchmarking to see if it's really worth it. If it's called frequently it could be, but if it's only used occasionally, the performance benefit over a Python implementation would probably be negligible compared to the maintenance needed to ensure it keeps working on different platforms.
An alternative would be to write your own implementation of the function in C and come up with a Python wrapper. You'd then have to compile it for each platform you want, but you lose the hassle of finding the correct library name while retaining the speed benefits.
Really all these answers with external modules, defining functions, etc. for a... bitwise operation???
(1 + (x ^ (x-1))) >> 1
will return the least significant power of 2 in x.
For instance, with x=136, answer will be 2^3 = 8.
The trick is remembering that x-1 has the same bits as x, except that the least significant 1 and all the zeros below it are flipped; performing a bitwise XOR between x and x-1 therefore extracts exactly those digits.
Then, you can extract the index with the bit_length() method.
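Putting the two steps together:

>>> x = 136
>>> lsb = (1 + (x ^ (x - 1))) >> 1  # least significant power of 2
>>> lsb
8
>>> lsb.bit_length() - 1            # its index, counting from 0
3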
To flesh out S.Lott's comment, if the LSB is set, the value will be odd and if it is clear, the value will be even. Hence we can just keep shifting the value right until it becomes odd, keeping track of how many shifts it takes for this to occur. Just remember to check that there is a bit set first, otherwise the loop will become endless when it is given the value 0...
>>> def ffs(num):
...     # Check there is at least one bit set.
...     if num == 0:
...         return None
...     # Right-shift until we have the first set bit in the LSB position.
...     i = 0
...     while (num % 2) == 0:
...         i += 1
...         num = num >> 1
...     return i
...
>>> num = 136
>>> bin(num)
'0b10001000'
>>> ffs(num)
3
>>> ffs(0) is None
True
Note that this treats the LSB as bit 0; simply initialise i to 1 if you'd rather have 1-based indexing.
You can implement any of the algorithms identified here:
http://graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightLinear
I'm not aware of any native method to do so. (You could also write an extension to export the C function to Python, but that would probably not be worth the trouble :-)
It's a little silly to try to aggressively optimize Python code, so a simple for loop with a counter and right-shift should be fine. If you wanted to go faster (which would make more sense in C, Java, or assembly), you could binary-search for the right-most 1-bit and even use lookup tables to help you.
Suppose x is 64 bits and you want the LSB. Mask off the lower 32 bits. Assume x is nonzero:

if x & 0xffffffff == 0:
    if x & 0xffff00000000 == 0:
        # the LSB is in the highest two bytes
    else:
        # the LSB is in the 5th or 6th byte
else:
    if x & 0xffff == 0:
        # the LSB is in the 3rd or 4th byte
    else:
        # the LSB is in the 1st or 2nd byte
How you handle the commented section above depends on how aggressive you want to be: you could do further binary searching analogous to what we have, or you could use a lookup table. As it stands, we have 16-bits of uncertainty, so our table would be 65,536 entries. I have actually made tables like this for extremely performance-sensitive code, but that was a C program that played Chess (the 64-bit string there was a binary representation of the board).
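To sketch the lookup-table idea in Python (the table size and names here are my own choices; an 8-bit table keeps the example small):

# Precompute, for every byte value 1..255, the index of its lowest set bit.
LOW_BIT = [0] * 256
for i in range(1, 256):
    LOW_BIT[i] = (i & -i).bit_length() - 1

def ffs_table(x):
    # Index (from 0) of the least significant set bit of a nonzero int.
    shift = 0
    while x & 0xff == 0:  # skip over all-zero low bytes
        x >>= 8
        shift += 8
    return shift + LOW_BIT[x & 0xff]

print(ffs_table(136))  # 3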
I know there's a selected answer already, but I had a crack at the problem myself, because I could. The solution is based on the idea that if the value is a power of two, you can take the log base two to find its position. The rest of the implementation revolves around transforming the value so that we can simply take the log.
I haven't benchmarked it, but the transformation is O(1) in the number of bits (a view that somewhat unfairly ignores the complexity introduced by log(), which is maybe around O(log n)). The implementation is loosely based on the 'decrement and compare' power-of-two bit trick:
import math

def ffs(value):
    """Find the first set bit in an integer, returning its index (from zero)."""
    if 0 > value:
        raise ValueError("No negative values!")
    if 0 == value:
        # No bits are set in value
        return None
    if (value & (value - 1)) != 0:
        # Multiple bits are set in value. Transform value such that only the
        # lowest bit remains set.
        value &= ~(value - 1)
    # Only one bit is set in value, find its index from zero
    return int(math.log(value, 2))
from math import log2

def bit_pos(n):
    """Return the index, counting from 0, of the least
    significant set bit in `n`, or None if no bit is set.
    """
    return int(log2(n ^ (n & (n - 1)))) if n else None

bit_pos(32)
# returns 5