Taylor series of cos x expansion in Python

I want to calculate the summation of the cos x series (while keeping x in radians). This is the code I created:

import math

def cosine(x, n):
    sum = 0
    for i in range(0, n + 1):
        sum += ((-1) ** i) * (x ** (2 * i) / math.factorial(2 * i))
    return sum
and I checked it against math.cos(). It works just fine when I try small numbers:

print("Result: ", cosine(25, 1000))
print(math.cos(25))

The output:

Result:  0.991203540954667
0.9912028118634736

The numbers are still similar. But when I try a bigger number, e.g. 40, it returns a completely different value:

Result:  1.2101433786727471
-0.6669380616522619
Anyone got any idea why this happens?

The error term for a Taylor expansion increases the further you are from the point expanded about (in this case, x_0 = 0). To reduce the error, exploit the periodicity and symmetry by evaluating only within the interval [0, 2*pi]:

import math
from math import pi

def cosine(x, n):
    x = x % (2 * pi)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x ** (2 * i) / math.factorial(2 * i))
    return total
This can be further improved to [0, pi/2]:
def cosine(x, n):
    x = x % (2 * pi)
    if x > pi:
        x = abs(x - 2 * pi)
    if x > pi / 2:
        return -cosine(pi - x, n)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x ** (2 * i) / math.factorial(2 * i))
    return total
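
With the range reduction in place, the large-argument example from the question agrees with math.cos() (a quick check of my own, not part of the original answer):

print(cosine(40, 1000))   # approximately -0.66693806165226
print(math.cos(40))       # -0.6669380616522619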

Contrary to the answer you got, this Taylor series converges regardless of how large the argument is. The factorial in the terms' denominators eventually drives the terms to 0.
But before the factorial portion dominates, terms can get larger and larger in absolute value. Native floating point doesn't have enough bits of precision to keep enough information for the low-order bits to survive.
Here's a way that doesn't lose any bits of precision. It's not practical because it's slow. Trust me when I tell you that it typically takes years of experience to learn how to write practical, fast, high-quality math libraries.
from fractions import Fraction

def mycos(x, nbits=100):
    x2 = -Fraction(x) ** 2
    i = 0
    ntries = 0
    total = term = Fraction(1)
    while True:
        ntries += 1
        term = term * x2 / ((i + 1) * (i + 2))
        i += 2
        total += term
        if (total // term).bit_length() > nbits:
            break
    print("converged to >=", nbits, "bits in", ntries, "steps")
    return total
and then your examples:
>>> mycos(25)
converged to >= 100 bits in 60 steps
Fraction(177990265631575526901628601372315751766446600474817729598222950654891626294219622069090604398951917221057277891721367319419730580721270980180746700236766890453804854224688235663001, 179569976498504495450560473003158183053487302118823494306831203428122565348395374375382001784940465248260677204774780370309486592538808596156541689164857386103160689754560975077376)
>>> float(_)
0.9912028118634736
>>> mycos(40)
converged to >= 100 bits in 82 steps
Fraction(-41233919211296161511135381283308676089648279169136751860454289528820133116589076773613997242520904406094665861260732939711116309156993591792484104028113938044669594105655847220120785239949370429999292710446188633097549, 61825710035417531603549955214086485841025011572115538227516711699374454340823156388422475359453342009385198763106309156353690402915353642606997057282914587362557451641312842461463803518046090463931513882368034080863251)
>>> float(_)
-0.6669380616522619
Things to note:
The full-precision results require lots of bits.
Rounded back to float, they identically match what you got from math.cos().
It doesn't require anywhere near 1000 steps to converge.

Related

Lowest base system that has all 1s in its digits

I need to determine the lowest number base in which the input n (given in base 10) is written using only 1s in its digits.
Examples:
7 in base 2 is 111 - fits! answer is 2
21 in base 2 is 10101 - contains 0, does not fit
21 in base 3 is 210 - contains 0 and 2, does not fit
21 in base 4 is 111 - contains only 1s, fits! answer is 4
n is always less than Number.MAX_SAFE_INTEGER or equivalent.
I have the following code, which works well with a certain range of numbers, but for huge numbers the algorithm is still time consuming:
def check_digits(number, base):
    res = 1
    while res == 1 and number:
        res *= number % base
        number //= base
    return res

def get_min_base(number):
    for i in range(2, int(number ** 0.5) + 2):
        if check_digits(number, i) == 1:
            return i
    return number - 1
How can I optimize the current code to make it run faster?
The number represented by a string of x 1s in base b is b^(x-1) + b^(x-2) + ... + b^2 + b + 1.
Note that for x >= 3, this number is greater than b^(x-1) (trivially) and less than (b+1)^(x-1) (apply the binomial theorem). Thus, if a number n is represented by x 1s in base b, we have b^(x-1) < n < (b+1)^(x-1). Taking (x-1)th roots, we have b < n^(1/(x-1)) < b+1. Thus, for such a b to exist, it must be floor(n^(1/(x-1))).
I've written things with ^ notation instead of Python-style ** syntax so far because those equations and inequalities only hold for exact real number arithmetic, not for floating point. If you try to compute b with floating point math, rounding error may throw off your calculations, especially for extremely large inputs where the ULP is greater than 1. (I think floating point is fine for the input range you're working with, but I'm not sure.)
Still, regardless of whether floating point is good enough or if you need something fancier, the idea of an algorithm is there: you can directly check if a value of x is viable by directly computing what the corresponding b would have to be, and checking if x 1s in base b really represent n.
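
To make that concrete, here is a sketch of the idea (my own illustration, not code from the answer; is_repunit and get_min_base_fast are hypothetical names). It tries digit counts x from largest to smallest, so the first base that verifies is the minimum, and it checks the neighbours of the floating point root to guard against rounding error:

def is_repunit(n, b):
    # True if n is written using only the digit 1 in base b
    while n:
        if n % b != 1:
            return False
        n //= b
    return True

def get_min_base_fast(n):
    # Base 2 gives the longest representation, so x <= n.bit_length();
    # more 1s means a smaller base, so try large x first.
    for x in range(n.bit_length(), 2, -1):
        b = round(n ** (1.0 / (x - 1)))      # the only possible base for x ones
        for cand in (b - 1, b, b + 1):       # guard against float rounding
            if cand >= 2 and is_repunit(n, cand):
                return cand
    return n - 1                             # "11" in base n - 1 always works

For inputs near Number.MAX_SAFE_INTEGER the float root should be accurate enough; for much larger integers you would swap in an exact integer root, as the answer cautions.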
Just a small twist: slightly faster, but it doesn't improve the time complexity.
def check_digits2(number, base):
    while number % base == 1:
        if number == 1:
            return True
        number //= base
    return False

def get_min_base2(number):
    for i in range(2, int(number ** 0.5) + 2):
        if check_digits2(number, i):
            return i
    return number - 1
import time

def test():
    number = 100000010000001
    start = time.time()
    print(get_min_base(number))              # 10000000
    print(f"{time.time() - start:.3f}s\n")   # 3.292s
    start = time.time()
    print(get_min_base2(number))             # 10000000
    print(f"{time.time() - start:.3f}s\n")   # 1.731s
I also tried an approach with a math trick, but I actually made it worse lol:

import math

def calculate_n(number, base):
    return math.log(number * (base - 1) + 1, base).is_integer()

def get_min_base3(number):
    for i in range(2, int(number ** 0.5) + 2):
        if calculate_n(number, i):
            return i
    return number - 1

def test():
    number = 100000010000001
    start = time.time()
    print(get_min_base3(number))             # 10000000
    print(f"{time.time() - start:.3f}s\n")   # 4.597s

Infinite sum with given precision

I've been trying to solve this infinite sum with a given precision problem.
You can see the description in the picture below (the image is not reproduced here).
Here's what I tried so far:
import math
def infin_sum(x, eps):
    sum = float(0)
    prev = ((-1) * (x ** 2)) / 2
    i = 2
    while True:
        current = prev + ((-1) ** i) * (x ** (2 * i)) / math.factorial(2 * i)
        if abs(current - prev) <= eps:
            print(current)
            return current
        prev = current
        i += 1
For the given sample input (x = 0.2 and precision 0.00001), my sum is 6.65777777777778e-05, and according to their tests it doesn't come close enough to the correct answer.
You should use math.isclose() instead of abs() to check your convergence (given that that's how the result will be checked). Since each iteration adds or subtracts a single term, the delta between the previous and next partial sums (S_{i-1} vs S_i) equals the last term added, so you don't need to track a previous value.

That infinite series is almost the one for cosine (it would be if i started at zero), so you can test your result against math.cos(x) - 1. Also, I find it strange that the expected result is checked with a fixed precision of 0.0001 while the sample input specifies a precision of 0.00001. (I guess a more precise result will also be within 0.0001, but then the validation isn't really checking that the output matches the requested precision.)
from math import isclose

def cosMinus1(x, precision=0.00001):
    result = 0
    numerator = 1
    denominator = 1
    even = 0
    while not isclose(numerator/denominator, 0, abs_tol=precision):  # until we reach the precision
        numerator *= -x*x                 # alternating sign, even powers of x
        even += 2
        denominator *= even * (even - 1)  # factorial of even numbers
        result += numerator / denominator # sum of terms
    return result

print(cosMinus1(0.2))
# -0.019933422222222226

import math
expected = math.cos(0.2) - 1
print(expected, math.isclose(expected, cosMinus1(0.2), abs_tol=0.0001))
# -0.019933422158758374 True
Since it's not a good idea to shadow the builtin sum, you had the right idea in calling it current, but you didn't initialise current to float(0), and you forgot to accumulate into it. This is your code with those problems fixed:
def infin_sum(x, eps):
    current = float(0)
    prev = ((-1) * (x ** 2)) / 2
    i = 2
    while True:
        current = current + (((-1) ** i) * (x ** (2 * i))) / math.factorial(2 * i)
        if abs(current - prev) <= eps:
            print(current)
            return current
        prev = current
        i += 1
As a more general comment, printing inside a function like this is probably not the best idea - it makes the reusability of the function limited - you'd want to capture the return value and print it outside the function, ideally.
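
For instance, a trivial usage sketch (assuming the print inside infin_sum has been removed):

result = infin_sum(0.2, 0.00001)   # capture the value...
print(result)                      # ...and print it at the call site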
First, you are not summing the elements. Recomputing everything from scratch is very wasteful, and the precision of floats is limited. You can use Horner's method to build each term incrementally from the previous one:
def infin_sum(x, eps):
    total = 0.0
    e = 1
    i = 1
    while abs(e) >= eps:
        diff = (-1) * (x ** 2) / (2 * i) / (2 * i - 1)
        e *= diff
        total += e
        i += 1
    return total

if __name__ == "__main__":
    x = infin_sum(0.2, 0.00001)
    print(x)
Don't forget to add to the result rather than replace it:

prev += current

instead of

prev = current

Python calculate factorial, OverflowError: int too large to convert to float

I want to write a function that calculates (1 / n!) * (1! + 2! + 3! + ... + n!), with n as the parameter of the function; the result should be truncated to 6 decimals (not rounded).
Below is my code:
def going(n):
    n1 = n
    n2 = n
    factorial = 1
    back = 1
    for i in range(2, n1 + 1):
        factorial *= i
    while n2 > 1:
        this = 1
        for i in range(2, n2 + 1):
            this *= i
        back += this
        n2 = n2 - 1
        this = 1
    result = int((1 / factorial) * back * 1000000) / 1000000
    return result
When I passed the argument 171 into the function, I got the following traceback:
Traceback (most recent call last):
  File "/Users/Desktop/going.py", line 18, in <module>
    print(going(171))
  File "/Users/Desktop/going.py", line 15, in going
    result = int((1/factorial)*back*1000000)/1000000
OverflowError: int too large to convert to float
How can I fix this problem? Thanks a lot for help!
--update--
Sorry that I didn't clarify: I'm doing this problem in Codewars and I don't think I can import any libraries to use. So, I need a solution that can avoid using any libraries.
Original problem from Codewars:
Consider the following numbers (where n! is factorial(n)):
u1 = (1 / 1!) * (1!)
u2 = (1 / 2!) * (1! + 2!)
u3 = (1 / 3!) * (1! + 2! + 3!)
un = (1 / n!) * (1! + 2! + 3! + ... + n!)
Which will win: 1 / n! or (1! + 2! + 3! + ... + n!)?
Are these numbers going to 0 because of 1/n! or to infinity due to the sum of factorials?
Task
Calculate (1 / n!) * (1! + 2! + 3! + ... + n!) for a given n, where n is an integer greater or equal to 1.
To avoid discussions about rounding, return the result truncated to 6 decimal places, for example:
1.0000989217538616 will be truncated to 1.000098
1.2125000000000001 will be truncated to 1.2125
Remark
Keep in mind that factorials grow rather rapidly, and you need to handle large inputs.
And going(170) works as intended, right?
What you are seeing is a fundamental limitation of how your computer represents floating point numbers, and not a problem with Python per se. In general, most modern computers use IEEE 754 to represent and perform math with non-integer numbers. Specifically, numbers using IEEE 754's "binary64" (double-precision) floating point representation has a maximum value of 2^1023 × (1 + (1 − 2^−52)), or approximately 1.7976931348623157 × 10^308. It turns out that 170! ≈ 7.2 × 10^306, which is just under the maximum value. However, 171! ≈ 1.2 × 10^309, so you are out of luck.
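
You can see that boundary directly (a quick illustration of my own):

import math

print(float(math.factorial(170)))    # about 7.257e+306, just under the float maximum
try:
    float(math.factorial(171))
except OverflowError as err:
    print("171! overflows:", err)    # int too large to convert to float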
The best chance you have of actually performing calculations with numbers that large without running into these overflow errors or losing precision is to use a large number library like gmpy2 (see this previous answer). A possible solution would be:
from gmpy2 import mpz, add, div, fac

def going(n):
    factorial = fac(n)
    back = mpz(1)
    for i in range(2, n + 1):
        back = add(back, fac(i))
    result = div(back, factorial)
    return result
@PaSTE's suggestion to use gmpy2 is great, and should work fine.
The library mpmath is built on top of gmpy2 and provides the function ff (falling factorial) that makes the implementation a little more concise:
import mpmath

def going_mp(n):
    return sum([1/mpmath.ff(n, k) for k in range(n)])
For example,
In [54]: import mpmath
In [55]: mpmath.mp.dps = 30
In [56]: going_mp(170)
Out[56]: mpf('1.00591736819491744725806951204519')
In [57]: going_mp(171)
Out[57]: mpf('1.00588255770874220729390683925161')
(I left out the truncating of the digits. That's something that you can add as you see fit.)
Another standard technique for handling very large numbers is to work with the logarithms of the numbers instead of the numbers themselves. In this case, you can use math.lgamma to compute k!/n! as exp(lgamma(k+1) - lgamma(n+1)). This will allow you to compute the value using just the standard math library.
import math

def going_l(n):
    lognfac = math.lgamma(n + 1)
    return sum([math.exp(math.lgamma(k + 1) - lognfac) for k in range(1, n + 1)])
For example,
In [69]: going_l(170)
Out[69]: 1.0059173681949172
In [70]: going_l(171)
Out[70]: 1.0058825577087422
Finally, if you don't want to use even the standard library, you could avoid the large numbers another way. Rewrite the expression as
1 + 1/n + 1/(n*(n-1)) + 1/(n*(n-1)*(n-2)) + ... + 1/n!
That leads to this implementation:
def going_nolibs(n):
total = 0.0
term = 1.0
for k in range(n, 0, -1):
total += term
term /= k
return total
For example,
In [112]: going_nolibs(170)
Out[112]: 1.0059173681949174
In [113]: going_nolibs(171)
Out[113]: 1.0058825577087422

Arithmetic precision problems with large numbers

I am writing a program that handles numbers as large as 10 ** 100. Everything looks good when dealing with smaller numbers, but when values get big I get this kind of problem:
>>> N = 615839386751705599129552248800733595745824820450766179696019084949158872160074326065170966642688
>>> ((N + 63453534345) / sqrt(2)) == (N / sqrt(2))
True

Clearly the above comparison should be False. Why is this happening?
Program code:
from math import *

def rec(n):
    r = sqrt(2)
    s = r + 2
    m = int(floor(n * r))
    j = int(floor(m / s))
    if j <= 1:
        return sum([floor(r * i) for i in range(1, n + 1)])
    assert m >= s * j and j > 1, "Error: something went wrong"
    return m * (m + 1) / 2 - j * (j + 1) - rec(j)

print rec(1e100)
Edit:
I don't think my question is a duplicate of the linked question above because the decimal points in n, m and j are not important to me and I am looking for a solution to avoid this precision issue.
You can’t retain the precision you want while dividing by standard floating point numbers, so you should instead divide by a Fraction. The Fraction class in the fractions module lets you do exact rational arithmetic.
Of course, the square root of 2 is not rational. But if the error is less than one part in 10**100, you’ll get the right result.
So, how to compute an approximation to sqrt(2) as a Fraction? There are several ways to do it, but one simple way is to compute the integer square root of 2 * 10**200, which will be close to sqrt(2) * 10**100, then just make that the numerator and make 10**100 the denominator.
Here’s a little routine in Python 3 for integer square root.
def isqrt(n):
    lg = -1
    g = (1 << n.bit_length() // 2) + 1
    while abs(lg - g) > 1:
        lg = g
        g = (g + n // g) // 2
    while g * g > n:
        g -= 1
    return g
You should be able to take it from there.
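
A minimal sketch of that last step (my own illustration; sqrt2_fraction is a hypothetical name), building the Fraction from isqrt and retrying the comparison from the question:

from fractions import Fraction

def sqrt2_fraction(digits=100):
    scale = 10 ** digits
    # isqrt(2 * scale**2) is within 1 of sqrt(2) * scale
    return Fraction(isqrt(2 * scale * scale), scale)

r = sqrt2_fraction()
N = 615839386751705599129552248800733595745824820450766179696019084949158872160074326065170966642688
print((N + 63453534345) / r == N / r)   # False, as it should be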

Recursion Formula for Integer Partitions

I have written the following code for evaluating integer partitions using the recurrence formula involving pentagonal numbers:
def part(n):
    p = 0
    if n == 0:
        p += 1
    else:
        k = 1
        while (n >= (k * (3 * k - 1) / 2)) or (n >= (k * (3 * k + 1) / 2)):
            i = k * (3 * k - 1) / 2
            j = k * (3 * k + 1) / 2
            if (n - i) >= 0:
                p -= ((-1) ** k) * part(n - i)
            if (n - j) >= 0:
                p -= ((-1) ** k) * part(n - j)
            k += 1
    return p

n = int(raw_input("Enter a number: "))
m = part(n)
print m
The code works fine up until n=29. It gets a bit slow around n=24, but I still get an output within a decent runtime. I know the algorithm is correct because the numbers yielded are in accordance with known values.
For numbers above 35, I don't get an output even after waiting a long time (about 30 minutes). I was under the impression that Python can handle numbers much larger than the ones used here. Can someone help me improve the runtime and get better results? Also, if there is something wrong with the code, please let me know.
You can use memoization:

def memo(f):
    mem = {}
    def wrap(x):
        if x not in mem:
            mem[x] = f(x)
        return mem[x]
    return wrap

@memo
def part(n):
    p = 0
    if n == 0:
        p += 1
    else:
        k = 1
        while (n >= (k * (3 * k - 1) // 2)) or (n >= (k * (3 * k + 1) // 2)):
            i = k * (3 * k - 1) // 2
            j = k * (3 * k + 1) // 2
            if (n - i) >= 0:
                p -= ((-1) ** k) * part(n - i)
            if (n - j) >= 0:
                p -= ((-1) ** k) * part(n - j)
            k += 1
    return p
Demo:
In [9]: part(10)
Out[9]: 42
In [10]: part(20)
Out[10]: 627
In [11]: part(29)
Out[11]: 4565
In [12]: part(100)
Out[12]: 190569292
With memoization we remember previous calculations, so repeated calculations become a simple lookup in the dict.
Well, there are a number of things you can do:

1. Remove duplicate calculations. You are computing 3*k-1 and 3*k+1 several times for every execution of your while loop. Calculate each once, assign it to a variable, and reuse the variable.
2. Replace the (-1)**k with a much faster operation, such as -2*(k%2)+1, so the calculation is constant-time rather than growing with k.
3. Cache the results of expensive deterministic calculations. part is a deterministic function that gets called many times with the same arguments, so you can build a hashmap from inputs to results.
4. Consider refactoring it to use a loop rather than recursion. Python does not support tail-call optimization, from what I understand, so it is burdened with maintaining very large stacks when you use deep recursion.

If you cache the calculations, I can guarantee it will operate many times faster; a sketch combining these ideas is below.
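
Here is a sketch of what those suggestions can look like combined (my own illustration, not code from the answers; part_table is a hypothetical name). It builds the partition table bottom-up with the pentagonal number recurrence, so there is no recursion and no repeated work:

def part_table(n):
    # p[m] = number of partitions of m, built bottom-up via the pentagonal
    # number recurrence:
    #   p(m) = sum over k >= 1 of (-1)**(k+1) * (p(m - k*(3k-1)/2) + p(m - k*(3k+1)/2))
    p = [1] + [0] * n
    for m in range(1, n + 1):
        total = 0
        k = 1
        while True:
            i = k * (3 * k - 1) // 2          # generalized pentagonal number
            if i > m:
                break
            j = k * (3 * k + 1) // 2
            sign = 1 if k % 2 == 1 else -1    # constant-time (-1)**(k+1)
            total += sign * p[m - i]
            if j <= m:
                total += sign * p[m - j]
            k += 1
        p[m] = total
    return p[n]

print(part_table(100))   # 190569292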
