I wrote this code, but I'm not sure if it is right. Simpson's rule requires an even number of intervals, and I don't know how to build that condition into my code.
def simpson(data):
    data = np.array(data)
    a = min(range(len(data)))
    b = max(range(len(data)))
    n = len(data)
    h = (b-a)/n
    for i in range(1, n, 2):
        result += 4*data[i]*h
    for i in range(2, n-1, 2):
        result += 2*data[i]*h
    return result * h / 3
Interestingly enough, you can find it in the Wikipedia entry:
from __future__ import division  # Python 2 compatibility

def simpson(f, a, b, n):
    """Approximates the definite integral of f from a to b by the
    composite Simpson's rule, using n subintervals (with n even)"""
    if n % 2:
        raise ValueError("n must be even (received n=%d)" % n)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n, 2):
        s += 4 * f(a + i * h)
    for i in range(2, n-1, 2):
        s += 2 * f(a + i * h)
    return s * h / 3
where you use it as:
simpson(lambda x:x**4, 0.0, 10.0, 100000)
Note how it bypasses your parity problem by requiring a function and n.
In case you need it for a list of values, though, then after adapting the code (which should be easy), I suggest you also raise a ValueError when the number of intervals, len(data) - 1, is not even.
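For example, a minimal sketch of such an adaptation (the function name, the spacing parameter h and the parity check are my own; equally spaced samples are assumed):

def simpson_from_samples(data, h=1.0):
    """Composite Simpson's rule for equally spaced samples with spacing h."""
    n = len(data) - 1  # number of intervals
    if n % 2:
        raise ValueError("need an even number of intervals (got %d)" % n)
    s = data[0] + data[-1]
    for i in range(1, n, 2):      # odd-indexed interior samples
        s += 4 * data[i]
    for i in range(2, n - 1, 2):  # even-indexed interior samples
        s += 2 * data[i]
    return s * h / 3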
Since you already seem to be using numpy you may also consider using scipy which conveniently provides a Simpson's rule integration routine.
from scipy.integrate import simps
result=simps(data)
See http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.simps.html for the full documentation (where they discuss the handling of even/odd intervals)
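As a rough usage sketch (the sample values here are only an illustration; simps also accepts the sample spacing via dx or explicit x coordinates):

import numpy as np
from scipy.integrate import simps

x = np.linspace(0.0, 10.0, 101)   # 101 samples, i.e. 100 (even) intervals
result = simps(x**4, x)           # close to the exact value 10**5 / 5 = 20000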
def f(n):
    Total_Triangles = 0
    for i in range(1, n+1):
        term = 3**(i-1)
        Total_Triangles += term
    return Total_Triangles

Q = int(input())
for i in range(Q):
    n = int(input())
    Ans = f(n)*4 + 1
    print(Ans % 1000000007)
How can I deal with the time limit error in this code?
Karan has a good answer. It will speed up your original approach, but you still end up calculating huge numbers. Fortunately, Python's Long type can do that, but I expect that it isn't as efficient as the native 32-bit or 64-bit integer types.
You are told to give the answer modulo a huge number M, 1,000,000,007. You can improve the algorithm by using modular arithmetic throughout, so that your numbers never get very big. In modular arithmetic, this is true:
(a + b) % M == (a % M + b % M) % M
(a * b) % M == (a % M * b % M) % M
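A quick sanity check of those identities with arbitrary large values:

M = 1000000007
a, b = 123456789123456789, 987654321987654321
assert (a + b) % M == (a % M + b % M) % M
assert (a * b) % M == ((a % M) * (b % M)) % M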
One approach could be to calculate all possible Q values up front using modular arithmetic:
M = 1000000007
def makef(m):
    """Generator to create all sum(3**i) mod M"""
    n = 1
    s = 0
    for i in range(m):
        yield s
        s = (s + n) % M
        n = ((n + n) % M + n) % M
f = list(makef(100000))
Q = int(input())
for i in range(Q):
    n = int(input())
    print((f[n] * 4 + 1) % M)
This will do the calculations in a big loop, but only once and should be fast enough for your requirements.
Python offers you a second way: The expression a ** b is mapped to the in-built function pow(a, b). This function can take a third parameter: a base for modular arithmetic, so that pow(a, b, M) will calculate (a ** b) % M without generating huge intermediate results.
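A small illustration of the difference (the exponent here is arbitrary):

M = 1000000007
n = 100000
# Same result, but the three-argument pow never builds the ~47,700-digit intermediate value.
assert pow(3, n, M) == (3 ** n) % M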
Now you can use Karan's neat formula. But wait, there's a pitfall: you have to divide the result of the power by two, and the modular relationships above do not hold for division. For example, with a modulus of 10, (12 // 2) % 10 is 6, but if you applied the modulo operator first, as the pow function does, you'd get ((12 % 10) // 2) % 10, which is 1 and not what you want. A solution is to calculate the power modulo 2 * M and then divide by 2:
def f(n):
    return pow(3, n, 2 * 1000000007) // 2

Q = int(input())
for i in range(Q):
    n = int(input())
    print((f(n) * 4 + 1) % M)
(Note that all powers of 3 are odd, so I have removed the - 1 and let the integer division do the work.)
Side note: The value of M is chosen so that the addition of two numbers that are smaller than M fits in a signed 32-bit integer. That means that users of C, C++ or Java don't have to use bignum libraries. But note that 3 * n can still overflow a signed int, so that you have to take care when multiplying by three: Use ((n + n) % M + n) % M instead.
You want to find 3 ** 0 + 3 ** 1 + ... + 3 ** (n - 1). This is just a geometric series with first term a = 1, common ratio r = 3 and n terms, so using the formula for the sum of a geometric series we can compute f(n) much faster when it is defined as:
def f(n):
    return (3 ** n - 1) // 2
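A quick check of the closed form against the original loop-based definition for small n:

assert all((3 ** n - 1) // 2 == sum(3 ** i for i in range(n)) for n in range(20))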
I am writing a program that handles numbers as large as 10 ** 100. Everything looks good when dealing with smaller numbers, but when the values get big I get this kind of problem:
>>> N = 615839386751705599129552248800733595745824820450766179696019084949158872160074326065170966642688
>>> ((N + 63453534345) / sqrt(2)) == (N / sqrt(2))
True
Clearly the above comparison should be False, so why is this happening?
Program code:
from math import *

def rec(n):
    r = sqrt(2)
    s = r + 2
    m = int(floor(n * r))
    j = int(floor(m / s))
    if j <= 1:
        return sum([floor(r * i) for i in range(1, n + 1)])
    assert m >= s * j and j > 1, "Error: something went wrong"
    return m * (m + 1) / 2 - j * (j + 1) - rec(j)

print rec(1e100)
Edit:
I don't think my question is a duplicate of the linked question above because the decimal points in n, m and j are not important to me and I am looking for a solution to avoid this precision issue.
You can’t retain the precision you want while dividing by standard floating point numbers, so you should instead divide by a Fraction. The Fraction class in the fractions module lets you do exact rational arithmetic.
Of course, the square root of 2 is not rational. But if the error is less than one part in 10**100, you’ll get the right result.
So, how to compute an approximation to sqrt(2) as a Fraction? There are several ways to do it, but one simple way is to compute the integer square root of 2 * 10**200, which will be close to sqrt(2) * 10**100, then just make that the numerator and make 10**100 the denominator.
Here’s a little routine in Python 3 for integer square root.
def isqrt(n):
    # Newton's method for the integer square root, starting from a
    # power-of-two guess based on the bit length of n.
    lg = -1
    g = (1 << n.bit_length() // 2) + 1
    while abs(lg - g) > 1:
        lg = g
        g = (g + n // g) // 2
    while g * g > n:
        g -= 1
    return g
You should be able to take it from there.
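For instance, a minimal sketch of that idea (the variable names are mine; it uses the isqrt routine above):

from fractions import Fraction

r2 = Fraction(isqrt(2 * 10**200), 10**100)   # approximates sqrt(2) to ~100 digits

N = 615839386751705599129552248800733595745824820450766179696019084949158872160074326065170966642688
print((N + 63453534345) / r2 == N / r2)      # now False, as it should be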
I am trying to write a recursive multinacci (basically fibonacci numbers except the rabbits produce k pairs instead of 1 pair with each breeding cycle) function and I want it to work with all n. Here is my code so far:
from functools import lru_cache
from sys import getrecursionlimit, setrecursionlimit

def fibnum(n, k=1):
    """Returns the nth fibonacci number of order k"""
    # check if recursionlimit needs increasing
    return _fibnum(n, k)

@lru_cache(maxsize=None)
def _fibnum(n, k):
    if n <= 0:
        return 0
    if n == 1:
        return 1
    return _fibnum(n-1, k) + k * _fibnum(n-2, k)
A few notes about the code: the first function is a wrapper around the second so that the docstring reads correctly. The second function is memoized, which improves performance drastically.
I've noticed that when I try to find increasing values of fibnum in order (100, 400, 1000 etc.) I can get around the recursion limit since the memoization shortcuts the recursion. I want to be able to run my function for any number right off the bat. I've tried testing the bounds of the recursion limit for n and then setting the recursion limit to that, but the only one that seemed to work was n2, but that seems too high of a limit.
Any suggestions?
Note: at a later point, I would like to add a lifespan to the formula (which basically means subtracting out fibnum(n - life_span, k)). How would this affect the recursion depth needed?
One way of sidestepping the stack limitations is to set up the Fibonacci recurrence in matrix form and use the matrix version of multiplication by successive halving and squaring. With this approach the stack growth is O(log n), so you can go to gigantic values of fib(n) with no worries. Here's an implementation:
def __matrix_fib__(n):
    if n == 1:
        return [0, 1]
    else:
        f = __matrix_fib__(n // 2)
        c = f[0] * f[0] + f[1] * f[1]
        d = f[1] * (f[1] + 2 * f[0])
        if n % 2 == 0:
            return [c, d]
        else:
            return [d, c + d]

def fib(n):
    assert n >= 0
    if n == 0:
        return n
    else:
        return __matrix_fib__(n)[1]
ADDENDUM
This version adds the k parameter as requested...
def __matrix_fib__(n, k):
    if n == 1:
        return [0, 1]
    else:
        f = __matrix_fib__(n // 2, k)
        c = k * f[0] * f[0] + f[1] * f[1]
        d = f[1] * (f[1] + 2 * k * f[0])
        if n % 2 == 0:
            return [c, d]
        else:
            return [d, k * c + d]

def fib(n, k=1):
    assert n >= 0
    if n == 0:
        return n
    else:
        return __matrix_fib__(n, k)[1]
I won't swear this is correct because I dashed it off between classes, but my tests produced the same answers as your version when given the same inputs.
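As a quick spot-check (my own test, comparing against the naive recurrence for small inputs):

def naive(n, k):
    if n <= 0:
        return 0
    if n == 1:
        return 1
    return naive(n - 1, k) + k * naive(n - 2, k)

assert all(fib(n, k) == naive(n, k) for n in range(1, 15) for k in (1, 2, 3))
print(fib(10000, 2))   # thousands of digits, but only ~14 levels of recursion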
Alternatively, you could use a class as a namespace to store the cache, then calculate results iteratively:
class Fib(object):
    cache = [1, 1]

    @classmethod
    def get(cls, n):
        if n < 1:
            return 0
        for i in range(len(cls.cache), n):
            cls.cache.append(cls.cache[-1] + cls.cache[-2])
        return cls.cache[n - 1]
Usage:
a = Fib()
print(a.get(1000))
If you change fibnum to limit the call-stack depth to 100 by computing the first 100 fibnums, then the next 100, then the next 100, and so on, you can avoid hitting the recursion limit.
This produces very little wasted work, since you need to compute the earlier fibnums to compute the later ones anyway.
The number 100 is arbitrary, but it should be less than sys.getrecursionlimit().
def fibnum(n, k=1):
    """Returns the nth fibonacci number of order k"""
    # build up the cache of fib values early in the sequence
    for intermediate_n in range(100, n, 100):
        _fibnum(intermediate_n, k)
    return _fibnum(n, k)
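Usage stays the same as before; for example (assuming the memoized _fibnum from the question is in scope):

print(fibnum(10000, 2))   # the cache is filled in steps of 100, so no RecursionError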
I want to play around with procedural content generation algorithms, and decided to start with noise (Perlin, value, etc.).
For that, I want to have a generic n-dimensional noise function, so I wrote a function that returns a noise-generation function of the given dimension:
import random

small_primes = [1, 83, 97, 233, 61, 127]

def get_noise_function(dimension, random_seed=None):
    primes_list = list(small_primes)
    if dimension > len(primes_list):
        primes_list = primes_list * (dimension / len(primes_list))
    rand = random.Random()
    if random_seed:
        rand.seed(random_seed)
    # random.shuffle(primes_list)
    rand.shuffle(primes_list)

    def noise_func(*args):
        if len(args) < dimension:
            # throw something
            return None
        n = [a*b for a, b in zip(args, primes_list)]
        n = sum(n)
        #n = (n << 13) ** n
        n = (n << 13) ^ n
        nn = (n * (n * n * 60493 + 19990303) + 1376312589) & 0x7fffffff
        return 1.0 - (nn / 1073741824.0)

    return noise_func
The problem, I believe, is with the calculations. I based my code on these two articles:
Hugo Elias' value noise implementation (end of the page)
libnoise documentation
Example of one of my tests:
f1 = get_noise_function(1, 10)
print f1(1)
print f1(2)
print f1(3)
print f1(1)
It always returns -0.281790983863, even on higher dimensions and different seeds.
The problem, I believe, is that in C/C++ some of these calculations overflow, and everything works because of that. In Python, it just calculates a gigantic number.
How can I correct this? Or, if possible, how can I generate a pseudo-random function that, after being seeded, always returns the same value for a given input?
[EDIT] Fixed the code. Now it works.
Where the referenced code from Hugo Elias has:
x = (x<<13) ^ x
you have:
n = (n << 13) ** n
I believe Elias is doing bitwise xor, while you're effectively raising 8192*n to the power of n. That gives you a huge value. Then
nn = (n * (n * n * 60493 + 19990303) + 1376312589) & 0x7fffffff
takes that gigantic n and makes it even bigger, until you finally throw away everything but the last 31 bits. It doesn't make much sense ;-)
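A tiny illustration of the difference, using an arbitrary small value:

n = 7
print(n << 13)         # 57344
print((n << 13) ^ n)   # 57351 -- a modest integer, as in Elias' hash
print((n << 13) ** n)  # 57344**7, an enormous number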
Try changing your code to:
n = (n << 13) ^ n
and see whether that helps.
How can this Mathematica code be ported to Python? I do not know the Mathematica syntax and am having a hard time understanding how this is described in a more traditional language.
Source (pg 5): http://subjoin.net/misc/m496pres1.nb.pdf
This cannot be ported to Python directly, as the definition of a[j] uses Mathematica's symbolic arithmetic.
a[j] is basically the coefficient of x^j in the series expansion of that rational function inside Apart.
Assume you have a[j], then f[n] is easy. A Block in Mathematica basically introduces a scope for variables. The first list initializes the variable, and the rest is the execution of the code. So
from __future__ import division

def f(n):
    v = n // 5
    q = v // 20
    r = v % 20
    return sum(binomial(q+5-j, 5) * a[r+20*j] for j in range(5))
(binomial is the Binomial coefficient.)
Using the proposed solutions from the previous answers, I found that sympy sadly doesn't compute the apart() of the rational function immediately; it somehow gets confused. Moreover, the Python list of coefficients returned by Poly.all_coeffs() has different semantics than a Mathematica list, hence the try/except clause in the definition of a().
The following code does work and the output, for some tested values, concurs with the answers given by the Mathematica formula in Mathematica 7:
from __future__ import division
from sympy import expand, Poly, binomial, apart
from sympy.abc import x

A = Poly(apart(expand(((1-x**20)**5)) / expand((((1-x)**2)*(1-x**2)*(1-x**5)*(1-x**10))))).all_coeffs()

def a(n):
    try:
        return A[n]
    except IndexError:
        return 0

def f(n):
    v = n // 5
    q = v // 20
    r = v % 20
    return sum(a(r+20*j) * binomial(q+5-j, 5) for j in range(5))

print map(f, [100, 50, 1000, 150])
The symbolics can be done with sympy. Combined with KennyTM's answer, something like this might be what you want:
from __future__ import division
from sympy import Symbol, apart, binomial

x = Symbol('x')

poly = (1-x**20)**5 / ((1-x)**2 * (1-x**2) * (1-x**5) * (1-x**10))
poly2 = apart(poly, x)

def a(j):
    return poly2.coeff(x**j)

def f(n):
    v = n // 5
    q = v // 20
    r = v % 20
    return sum(binomial(q+5-j, 5)*a(r+20*j) for j in range(5))
Although I have to admit that f(n) does not work (I'm not very good at Python).