Implementing a noise function in Python from C code

I want to play around with procedural content generation algorithms, and decided to start with noise (Perlin, value, etc.).
For that I want a generic n-dimensional noise function, so I wrote a function that returns a noise generation function of the given dimension:
import random

small_primes = [1, 83, 97, 233, 61, 127]

def get_noise_function(dimension, random_seed=None):
    primes_list = list(small_primes)
    if dimension > len(primes_list):
        primes_list = primes_list * (dimension / len(primes_list))

    rand = random.Random()
    if random_seed:
        rand.seed(random_seed)
    # random.shuffle(primes_list)
    rand.shuffle(primes_list)

    def noise_func(*args):
        if len(args) < dimension:
            # throw something
            return None
        n = [a*b for a, b in zip(args, primes_list)]
        n = sum(n)
        #n = (n << 13) ** n
        n = (n << 13) ^ n
        nn = (n * (n * n * 60493 + 19990303) + 1376312589) & 0x7fffffff
        return 1.0 - (nn / 1073741824.0)
    return noise_func
The problem, I believe, is with the calculations. I based my code on these two articles:
Hugo Elias' value noise implementation (end of the page)
libnoise documentation
Example of one of my tests:
f1 = get_noise_function(1, 10)
print f1(1)
print f1(2)
print f1(3)
print f1(1)
It always returns -0.281790983863, even on higher dimensions and different seeds.
The problem, I believe, is that in C/C++ some of the calculations overflow and everything still works, while in Python they simply produce a gigantic number.
How can I correct this, or, if that is not possible, how can I generate a pseudo-random function that, once seeded, always returns the same value for a given input?
[EDIT] Fixed the code. Now it works.

Where the referenced code from Hugo Elias has:
x = (x<<13) ^ x
you have:
n = (n << 13) ** n
I believe Elias is doing bitwise xor, while you're effectively raising 8192*n to the power of n. That gives you a huge value. Then
nn = (n * (n * n * 60493 + 19990303) + 1376312589) & 0x7fffffff
takes that gigantic n and makes it even bigger, until you finally throw away everything but the last 31 bits. It doesn't make much sense ;-)
Try changing your code to:
n = (n << 13) ^ n
and see whether that helps.
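For what it's worth, here is a minimal one-dimensional sketch of that change (using the same constants as in the question; the & 0x7fffffff mask stands in for the 31-bit overflow the C code relies on), just to show that different inputs now give different values while a repeated input stays deterministic:
def noise_1d(x):
    # hash an integer input into the range (-1.0, 1.0]
    n = (x << 13) ^ x
    nn = (n * (n * n * 60493 + 19990303) + 1376312589) & 0x7fffffff
    return 1.0 - (nn / 1073741824.0)

print(noise_1d(1))   # some value in (-1, 1]
print(noise_1d(2))   # a different value
print(noise_1d(3))   # different again
print(noise_1d(1))   # identical to the first call: same input, same output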

Related

Taylor series of cos x expansion in python

I want to calculate the summation of the cos x series (while keeping x in radians). This is the code I created:
import math

def cosine(x, n):
    sum = 0
    for i in range(0, n+1):
        sum += ((-1) ** i) * (x**(2*i) / math.factorial(2*i))
    return sum
and I checked it using math.cos() .
It works just fine when I tried out small numbers:
print("Result: ", cosine(25, 1000))
print(math.cos(25))
the output:
Result:  0.991203540954667
0.9912028118634736
The numbers are still similar. But when I try a bigger number, e.g. 40, it returns a completely different value:
Result:  1.2101433786727471
-0.6669380616522619
Anyone got any idea why this happens?
The error term for a Taylor expansion increases the further you are from the point expanded about (in this case, x_0 = 0). To reduce the error, exploit the periodicity and symmetry by only evaluating within the interval [0, 2 * pi]:
import math
from math import pi

def cosine(x, n):
    x = x % (2 * pi)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x**(2*i) / math.factorial(2*i))
    return total
This can be further improved to [0, pi/2]:
def cosine(x, n):
    x = x % (2 * pi)
    if x > pi:
        x = abs(x - 2 * pi)
    if x > pi / 2:
        return -cosine(pi - x, n)
    total = 0
    for i in range(0, n + 1):
        total += ((-1) ** i) * (x**(2*i) / math.factorial(2*i))
    return total
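A quick check of the range-reduced version against math.cos (reusing the cosine defined above; the trailing digits may differ slightly, but the values should now agree):
import math

for x in (25, 40, 1000):
    print(cosine(x, 30), math.cos(x))   # each pair should match closely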
Contrary to the answer you got, this Taylor series converges regardless of how large the argument is. The factorial in the terms' denominators eventually drives the terms to 0.
But before the factorial portion dominates, terms can get larger and larger in absolute value. Native floating point doesn't have enough bits of precision to keep enough information for the low-order bits to survive.
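To see how bad it gets, here is a quick illustration of the largest term magnitude for x = 40 (only an illustration; the precise value printed is not important):
import math

x = 40
print(max(x**(2*i) / math.factorial(2*i) for i in range(60)))   # about 1.5e16, while cos(40) is only about -0.667
With individual terms around 10**16 and a final result below 1, a 53-bit double has essentially no correct bits left after the cancellation.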
Here's a way that doesn't lose any bits of precision. It's not practical because it's slow. Trust me when I tell you that it typically takes years of experience to learn how to write practical, fast, high-quality math libraries.
def mycos(x, nbits=100):
    from fractions import Fraction
    x2 = - Fraction(x) ** 2
    i = 0
    ntries = 0
    total = term = Fraction(1)
    while True:
        ntries += 1
        term = term * x2 / ((i+1) * (i+2))
        i += 2
        total += term
        if (total // term).bit_length() > nbits:
            break
    print("converged to >=", nbits, "bits in", ntries, "steps")
    return total
and then your examples:
>>> mycos(25)
converged to >= 100 bits in 60 steps
Fraction(177990265631575526901628601372315751766446600474817729598222950654891626294219622069090604398951917221057277891721367319419730580721270980180746700236766890453804854224688235663001, 179569976498504495450560473003158183053487302118823494306831203428122565348395374375382001784940465248260677204774780370309486592538808596156541689164857386103160689754560975077376)
>>> float(_)
0.9912028118634736
>>> mycos(40)
converged to >= 100 bits in 82 steps
Fraction(-41233919211296161511135381283308676089648279169136751860454289528820133116589076773613997242520904406094665861260732939711116309156993591792484104028113938044669594105655847220120785239949370429999292710446188633097549, 61825710035417531603549955214086485841025011572115538227516711699374454340823156388422475359453342009385198763106309156353690402915353642606997057282914587362557451641312842461463803518046090463931513882368034080863251)
>>> float(_)
-0.6669380616522619
Things to note:
The full-precision results require lots of bits.
Rounded back to float, they identically match what you got from math.cos().
It doesn't require anywhere near 1000 steps to converge.

efficiency graph size calculation with power function

I am looking to improve the following simple function (written in Python), which calculates the maximum size of a specific graph:
def max_size_BDD(n):
    i = 1
    size = 2
    while i <= n:
        size += min(pow(2, i-1), pow(2, pow(2, n-i+1)) - pow(2, pow(2, n-i)))
        i += 1
        print(str(i) + " // " + str(size))
    return size
If I give it n = 45 as input, the process gets killed (probably because it takes too long; I don't think it is a memory issue, right?). How can I redesign this algorithm so that it can handle larger inputs?
My proposal: while the original function starts to run into trouble at around n = 10, I hit practically no limit (even for n = 100000000 I stay below 1 s).
def exp_base_2(n):
    return 1 << n

def max_size_bdd(n):
    # find i at which the min branch switches
    start_i = n
    while exp_base_2(n - start_i + 1) < start_i:
        start_i -= 1
    # evaluate all to that point
    size = 1 + 2 ** start_i
    # evaluate remaining terms (in an uncritical range of n - i)
    for i in range(start_i + 1, n + 1):
        val = exp_base_2(exp_base_2(n - i))
        size += val * (val - 1)
        print(f"{i} // {size}")
    return size
Remarks:
Core idea: avoid computing the large powers of 2; whenever 2**(i-1) is the smaller branch of the min, the huge terms never contribute and there is no need to evaluate them at all.
I did all this in a rush; I may add more explanation later if anyone is interested. A quick sanity check of the new implementation against the original is sketched below.
The effect of exp_base_2 should be negligible compared to the main optimization of the original calculation; I added it before going into the analysis.
Maybe a complete closed-form solution is possible; I did not invest the time to investigate further.
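As a quick sanity check of that remark, both implementations can be compared for small n, where the original is still fast (this assumes both functions from above are defined in the same session; their prints just add noise):
for n in range(1, 12):
    assert max_size_BDD(n) == max_size_bdd(n), n
print("results match for small n")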

How should I optimize this code?

def f(n):
    Total_Triangles = 0
    for i in range(1, n+1):
        term = 3**(i-1)
        Total_Triangles += term
    return Total_Triangles

Q = int(input())
for i in range(Q):
    n = int(input())
    Ans = f(n)*4 + 1
    print(Ans % 1000000007)
How can I deal with the time limit error in this code?
Karan has a good answer. It will speed up your original approach, but you still end up calculating huge numbers. Fortunately, Python's Long type can do that, but I expect that it isn't as efficient as the native 32-bit or 64-bit integer types.
You are told to give the answer modulo a huge number M, 1,000,000,007. You can improve the algorithm by using modular arithmetic throughout, so that your numbers never get very big. In modular arithmetic, this is true:
(a + b) % M == (a % M + b % M) % M
(a * b) % M == (a % M * b % M) % M
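A quick illustration with arbitrary numbers (the values are chosen only for the example):
M = 1000000007
a, b = 123456789123, 987654321987
print((a + b) % M == (a % M + b % M) % M)   # True
print((a * b) % M == (a % M * b % M) % M)   # True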
One approach could be to calculate all possible Q values up front using modular arithmetic:
M = 1000000007

def makef(m):
    """Generator to create all sum(3**i) mod M"""
    n = 1
    s = 0
    for i in range(m):
        yield s
        s = (s + n) % M
        n = ((n + n) % M + n) % M

f = list(makef(100000))

Q = int(input())
for i in range(Q):
    n = int(input())
    print((f[n] * 4 + 1) % M)
This will do the calculations in a big loop, but only once and should be fast enough for your requirements.
Python offers you a second way: The expression a ** b is mapped to the in-built function pow(a, b). This function can take a third parameter: a base for modular arithmetic, so that pow(a, b, M) will calculate (a ** b) % M without generating huge intermediate results.
Now you can use Karan's neat formula. But wait, there's a pitfall: you have to divide the result of the power by two, and the modular relationships above do not hold for division. For example, with a small modulus like 7, (12 // 2) % 7 is 6, but reducing first gives (12 % 7) // 2, which is 2 and not what you want. A solution is to calculate the power modulo 2 * M and then divide by 2:
M = 1000000007

def f(n):
    return pow(3, n, 2 * M) // 2

Q = int(input())
for i in range(Q):
    n = int(input())
    print((f(n) * 4 + 1) % M)
(Note that all powers of 3 are odd, so I have removed the - 1 and let the integer division do the work.)
Side note: The value of M is chosen so that the addition of two numbers that are smaller than M fits in a signed 32-bit integer. That means that users of C, C++ or Java don't have to use bignum libraries. But note that 3 * n can still overflow a signed int, so that you have to take care when multiplying by three: Use ((n + n) % M + n) % M instead.
You want to find 3 ** 0 + 3 ** 1 + ... + 3 ** (n - 1). This is just a geometric series with first term a = 1, common ratio r = 3 and n terms, so using the formula for the sum of a geometric series we can compute f(n) much faster:
def f(n):
    return (3 ** n - 1) // 2
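A quick check against the original loop, for small n only (for very large n this closed form still builds a huge integer, which is why the pow(3, n, 2 * M) // 2 variant above is attractive):
def f_loop(n):
    # the original O(n) summation, kept only for comparison
    return sum(3**(i-1) for i in range(1, n+1))

def f_closed(n):
    return (3 ** n - 1) // 2

print(all(f_loop(n) == f_closed(n) for n in range(1, 20)))   # True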

Simpson rule integration, Python

I wrote this code, but I am not sure if it is right. Simpson's rule requires an even number of intervals, and I don't know how to build that condition into my code.
import numpy as np

def simpson(data):
    data = np.array(data)
    a = min(range(len(data)))
    b = max(range(len(data)))
    n = len(data)
    h = (b-a)/n
    result = 0
    for i in range(1, n, 2):
        result += 4*data[i]*h
    for i in range(2, n-1, 2):
        result += 2*data[i]*h
    return result * h / 3
Interestingly enough, you can find it in the Wikipedia entry:
from __future__ import division  # Python 2 compatibility

def simpson(f, a, b, n):
    """Approximates the definite integral of f from a to b by the
    composite Simpson's rule, using n subintervals (with n even)"""
    if n % 2:
        raise ValueError("n must be even (received n=%d)" % n)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n, 2):
        s += 4 * f(a + i * h)
    for i in range(2, n-1, 2):
        s += 2 * f(a + i * h)
    return s * h / 3
where you use it as:
simpson(lambda x:x**4, 0.0, 10.0, 100000)
Note how it bypasses your parity problem by requiring a function and n.
In case you need it for a list of sampled values, though, then after adapting the code (which should be easy), I suggest you also raise a ValueError when the number of subintervals (len(data) - 1) is not even.
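Such an adaptation for equally spaced samples might look like this (only a sketch; the names simpson_samples, data and dx are assumptions, and the spacing dx is assumed constant):
def simpson_samples(data, dx=1.0):
    """Composite Simpson's rule for equally spaced samples."""
    n = len(data) - 1          # number of subintervals
    if n % 2:
        raise ValueError("need an even number of subintervals (an odd number of samples)")
    s = data[0] + data[-1]
    for i in range(1, n, 2):
        s += 4 * data[i]
    for i in range(2, n, 2):
        s += 2 * data[i]
    return s * dx / 3
Since Simpson's rule is exact for polynomials up to degree three, feeding this 11 equally spaced samples of x**2 on [0, 1] with dx = 0.1 should return 1/3 up to rounding.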
Since you already seem to be using numpy you may also consider using scipy which conveniently provides a Simpson's rule integration routine.
from scipy.integrate import simps
result = simps(data)
See http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.integrate.simps.html for the full documentation (where they discuss the handling of even/odd intervals)
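For example, if the samples are not spaced one unit apart you can also pass the sample positions (data and x here are hypothetical):
import numpy as np
from scipy.integrate import simps

x = np.linspace(0.0, 10.0, 101)   # hypothetical sample positions
data = x**4                       # hypothetical sample values
print(simps(data, x))             # close to the exact integral, 10**5 / 5 = 20000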

Recursion Formula for Integer Partitions

I have written the following code for evaluating integer partitions using the recurrence formula involving pentagonal numbers:
def part(n):
    p = 0
    if n == 0:
        p += 1
    else:
        k = 1
        while (n >= (k*(3*k-1)/2)) or (n >= (k*(3*k+1)/2)):
            i = (k * (3*k-1)/2)
            j = (k * (3*k+1)/2)
            if (n-i) >= 0:
                p -= ((-1)**k) * part(n-i)
            if (n-j) >= 0:
                p -= ((-1)**k) * part(n-j)
            k += 1
    return p
n = int(raw_input("Enter a number: "))
m = part(n)
print m
The code works fine up until n=29. It gets a bit slow around n=24, but I still get an output within a decent runtime. I know the algorithm is correct because the numbers yielded are in accordance with known values.
For numbers above 35, I don't get an output even after waiting for a long time (about 30 minutes). I was under the impression that python can handle numbers much larger than the numbers used here. Can someone help me improve my runtime and get better results? Also, if there is something wrong with the code, please let me know.
You can use Memoization:
def memo(f):
    mem = {}
    def wrap(x):
        if x not in mem:
            mem[x] = f(x)
        return mem[x]
    return wrap

@memo
def part(n):
    p = 0
    if n == 0:
        p += 1
    else:
        k = 1
        while (n >= (k * (3 * k - 1) // 2)) or (n >= (k * (3 * k + 1) // 2)):
            i = (k * (3 * k - 1) // 2)
            j = (k * (3 * k + 1) // 2)
            if (n - i) >= 0:
                p -= ((-1) ** k) * part(n - i)
            if (n - j) >= 0:
                p -= ((-1) ** k) * part(n - j)
            k += 1
    return p
Demo:
In [9]: part(10)
Out[9]: 42
In [10]: part(20)
Out[10]: 627
In [11]: part(29)
Out[11]: 4565
In [12]: part(100)
Out[12]: 190569292
With memoization we remember previous calculations, so repeated calls with the same argument become a simple dictionary lookup.
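As an aside, the standard library's functools.lru_cache can play the same role as the hand-written memo decorator; a minimal Python 3 sketch of the same recurrence:
from functools import lru_cache

@lru_cache(maxsize=None)   # standard-library equivalent of the memo decorator above
def part(n):
    if n == 0:
        return 1
    p = 0
    k = 1
    while n >= k * (3 * k - 1) // 2:
        for pent in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
            if n - pent >= 0:
                p -= ((-1) ** k) * part(n - pent)
        k += 1
    return p

print(part(100))   # 190569292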
Well there are a number of things you can do.
Remove duplicate calculations. You are computing expressions like 3*k+1 (and the pentagonal numbers built from them) several times in every iteration of the while loop; compute each once, assign it to a variable, and reuse the variable.
Replace (-1)**k with a much cheaper expression, for example 1 - 2*(k % 2), so the cost of that step no longer grows with k.
Cache the results of expensive deterministic calculations. part is a deterministic function that gets called many times with the same arguments; you can build a hash map from inputs to results.
Consider refactoring it to use a loop rather than recursion. As far as I understand, Python does not do tail-call optimization, so deep recursion has to maintain a very large call stack. (A combined sketch of these ideas follows below.)
If you cache the calculations I can guarantee it will operate many times faster.
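Pulling those suggestions together, a bottom-up loop (no recursion, pentagonal numbers computed once per k, all results cached in a list) might look like the sketch below; it only illustrates the ideas above and is not the original code:
def partitions_upto(limit):
    """Bottom-up p(0) .. p(limit) via the pentagonal-number recurrence."""
    p = [0] * (limit + 1)
    p[0] = 1
    for n in range(1, limit + 1):
        total = 0
        k = 1
        sign = 1                        # plays the role of -(-1)**k
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers, computed once per k
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            sign = -sign
            k += 1
        p[n] = total
    return p

table = partitions_upto(100)
print(table[29], table[100])   # 4565 190569292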
