So after watching this video on the fast Fourier transform (https://www.youtube.com/watch?v=h7apO7q16V0),
I analysed the pseudocode and implemented it in Python, only to find that it produces a different output from that of many FFT calculator sites. All my values seem to be there; it's just that the order is out of place. Does anyone know why? Is it a different kind of algorithm implementation or something?
import cmath

def FFT(P):
    # Recursive radix-2 FFT; assumes len(P) is a power of two.
    n = len(P)
    if n == 1:
        return P
    omega = cmath.exp((2 * cmath.pi * 1j) / n)  # n-th root of unity
    p_even = P[::2]   # even-index coefficients
    p_odd = P[1::2]   # odd-index coefficients
    y_even = FFT(p_even)
    y_odd = FFT(p_odd)
    y = [0] * n
    for i in range(n // 2):  # butterfly combination step
        y[i] = y_even[i] + omega**i * y_odd[i]
        y[i + n // 2] = y_even[i] - omega**i * y_odd[i]
    return y

poly = [0, 1, 2, 3]
print(FFT(poly))
The site I tested it against was https://tonysader.github.io/FFT_Calculator/?
and I input 0,1,2,3 into this site and obtained: 6, -2+2J, -2, -2-2J,
whilst my Python program output: 6, -2-2J, -2, -2+2J.
The pseudocode I followed: (image not reproduced here)
I think the program you're running is computing the inverse FFT. Try
omega = cmath.exp((-2 * cmath.pi * 1j)/n). Note the minus sign: the forward DFT convention uses e^(-2*pi*i/n), and with the plus sign you get the (unnormalized) inverse transform, which for real input is the complex conjugate of the forward one. That is exactly the sign flip you are seeing.
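A minimal sketch of the corrected function (only the sign in omega changes; everything else is your code):

import cmath

def FFT(P):
    # Recursive radix-2 FFT; len(P) must be a power of two.
    n = len(P)
    if n == 1:
        return P
    omega = cmath.exp((-2 * cmath.pi * 1j) / n)  # minus sign: forward-DFT convention
    y_even = FFT(P[::2])
    y_odd = FFT(P[1::2])
    y = [0] * n
    for i in range(n // 2):
        y[i] = y_even[i] + omega**i * y_odd[i]
        y[i + n // 2] = y_even[i] - omega**i * y_odd[i]
    return y

print(FFT([0, 1, 2, 3]))  # approximately [6, -2+2j, -2, -2-2j], matching the calculator site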
I am studying computational physics with a lecturer who always asks us to write Python and Matlab code without using ready-made code (a library that gives the final answer without showing the mathematical expression). So I tried to write the Bessel function of the first kind using its power series, because I thought it would be easier than the other methods (I am not sure). I don't know why the result is still very different, far from the answer that sympy provides.
Here is my code for x = 5 and n = 3:
import math

def bessel_function(n, x, num_terms):
    # Initialize the power series expansion with the first term
    series_sum = (x / 2) ** n
    # Calculate the remaining terms of the power series expansion
    for k in range(0, num_terms):
        term = ((-1) ** k) * ((x / 2) ** (2 * k)) / (math.factorial(k)**2)*(2**2*k)
        series_sum = series_sum + term
    return series_sum

# Test the function with n = 3, x = 5, and num_terms = 30 and 15
print(bessel_function(3, 5, 30))
print(bessel_function(3, 5, 15))
And here is the code using the mpmath and sympy libraries:

from mpmath import mp, besselj

mp.dps = 15; mp.pretty = True
print(besselj(3, 5))

import sympy

def bessel_function(n, x):
    # Use the besselj function from sympy to calculate the Bessel function
    return sympy.besselj(n, x)

# Calculate the numerical value of the Bessel function using evalf
numerical_value = bessel_function(3, 5).evalf()
print(numerical_value)
It is a big waste to compute the terms the way you do, each from scratch with a power and a factorial. It is much more efficient to compute each term from the previous one.

For J_N(x) = Σ_{k≥0} (-1)^k (x/2)^(2k+N) / (k! (k+N)!), successive terms satisfy

T_k / T_{k-1} = -(x/2)² / (k(k+N)),   with   T_0 = (x/2)^N / N!
import math

N = 3
X = 5

# First term
X *= 0.5
Term = pow(X, N) / math.factorial(N)
Sum = Term
print(Sum)

# Next terms: multiply the previous term by -(x/2)^2 / (k*(k+N))
X *= -X
for k in range(1, 21):
    Term *= X / (k * (k + N))
    Sum += Term
    print(Sum)
The successive sums are
2.6041666666666665
-1.4648437499999996
1.0782877604166665
0.19525598596643523
0.39236129276336185
0.3615635885763421
0.365128137672062
0.3648098743599441
0.3648324782883616
0.36483117019065225
0.3648312330799652
0.36483123052763916
0.3648312306162616
0.3648312306135987
0.3648312306136686
0.364831230613667
0.36483123061366707
0.36483123061366707
0.36483123061366707
0.36483123061366707
0.36483123061366707
I am developing a code that uses a method called Platen to solve stochastic differential equations. I must then solve that stochastic differential equation many times (on the order of 10,000) and average all the results. My code is:
import numpy as np
import random
import numba

@numba.jit(nopython=True)
def integrador2(y, t, h):  # this is the integrator of the function that solves the SDE
    m = 6.6551079E-26  # parameters
    gamma = 0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b = np.sqrt(2*m*gamma*T*k_b)
    c = np.sqrt(h)
    for i in range(len(t)):
        dW = c*random.gauss(0, 1)  # Wiener increment for this step
        A = np.array([y[i,-1]/m, -gamma*y[i,-1]])  # this is the Platen method that is applied at
        B_dW = np.array([0, b*dW])                 # each time step
        z = y[i] + A*h + B_dW
        Az = np.array([z[-1]/m, -gamma*z[-1]])
        y[i+1] = y[i] + 1/2*(Az + A)*h + B_dW
    return y
def media(args):  # args is a tuple with the parameters
    y = args[0]
    t = args[1]
    k = args[2]
    x = 0
    p = 0
    for n in range(k):  # k = number of trajectories
        y = integrador2(y, t, h)
        x = (1./(n+1))*(n*x + y[:,0])  # I do the average like this so as not to have to save all the
        p = (1./(n+1))*(n*p + y[:,1])  # solutions in memory
    return x, p
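For what it's worth, the running average in media is the standard incremental-mean identity: with zero-based n, new_mean = (n*old_mean + sample)/(n+1). A tiny self-contained check of that update:

vals = [2.0, 4.0, 6.0]
x = 0.0
for n, v in enumerate(vals):
    x = (1./(n+1))*(n*x + v)  # same update rule as in media
print(x)  # 4.0, identical to sum(vals)/len(vals)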
The variables y, t and h are:

y0 = np.array([initial position, initial momentum])  # initial conditions
t = np.linspace(initial time, final time, number of time intervals)  # time array
y = np.zeros((len(t)+1, len(y0)))  # array of positions and momenta
y[0,:] = np.array(y0)  # I store the initial condition
h = (final time - initial time)/(number of time intervals)  # time increment
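For concreteness, here is a minimal sketch of how those pieces fit together; the numerical values below are placeholders I made up for illustration, not the ones from the real problem:

import numpy as np

initial_time, final_time, n_intervals = 0.0, 1.0, 10**4  # hypothetical values
y0 = np.array([0.0, 1e-24])                              # hypothetical position and momentum
t = np.linspace(initial_time, final_time, n_intervals)
y = np.zeros((len(t) + 1, len(y0)))
y[0, :] = y0
h = (final_time - initial_time) / n_intervals
x, p = media((y, t, 10))  # average over k = 10 trajectories, using media defined above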
I need to be able to run the program for 10**7 time intervals and solve it 10**4 times (k = 10**4).
I feel that I have already reached a dead end: I already accelerate the function that computes the result with Numba, and (although I do not show it here) I parallelize the media function to use the four cores my computer has. Even after all this, my program takes an hour and a half to run for 10**6 time intervals and k = 10**4. I have not had the courage to run it for 10**7 time intervals because my intuition tells me it would take more than 10 hours.
I would really appreciate it if someone could advise me on how to make some parts of the code faster.
Finally, I apologize if I have not expressed myself entirely correctly anywhere in the question; I am a physicist, not a computer scientist, and my English is far from perfect.
I can save about 75% of compute time by simplifying the math in the loop:
def integrador2(y, t, h):  # this is the integrator of the function that solves the SDE
    m = 6.6551079E-26  # parameters
    gamma = 0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b = np.sqrt(2*m*gamma*T*k_b)
    c = np.sqrt(h)
    h = h * 1.  # make sure h is a float
    # Precompute the constant coefficients of the update, pulled out of the loop
    coeff0 = h/m - gamma*h**2/(2.*m)
    coeff1 = 1. - gamma*h + gamma**2*h**2/2.
    coeffd = c*b*(1. - gamma*h/2.)
    for i in range(len(t)):
        dW = np.random.normal()
        # Method 2: the two-stage Platen step collapses to one linear update
        y[i+1] = np.array([y[i][0] + y[i][1]*coeff0, y[i][1]*coeff1 + dW*coeffd])
    return y
Here's a method using filters with scipy, which I don't think is compatible with Numba, but is slightly faster than the solution above:
from scipy import signal

def integrador2(y, t, h):  # this is the integrator of the function that solves the SDE
    m = 6.6551079E-26  # parameters
    gamma = 0.05
    T = 5E-3
    k_b = 1.3806488E-23
    b = np.sqrt(2*m*gamma*T*k_b)
    c = np.sqrt(h)
    h = h * 1.
    coeff0a = 1.
    coeff0b = h/m - gamma*h**2/(2.*m)
    coeff1 = 1. - gamma*h + gamma**2*h**2/2.
    coeffd = c*b*(1. - gamma*h/2.)
    # Draw all the noise up front, scaled by coeffd; seed with the initial momentum
    noise = np.zeros(y.shape[0])
    noise[1:] = np.random.normal(0., coeffd*1., y.shape[0]-1)
    noise[0] = y[0,1]
    # Momentum: first-order recursive filter driven by the noise
    a = [1, -coeff1]
    b = [1]
    y[1:,1] = signal.lfilter(b, a, noise)[1:]
    # Position: accumulated momentum, scaled by coeff0b
    a = [1, -coeff0a]
    b = [coeff0b]
    y[1:,0] = signal.lfilter(b, a, y[:,1])[1:]
    return y
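For intuition: with a = [1, -coeff1] and b = [1], signal.lfilter implements the first-order recurrence out[i] = noise[i] + coeff1*out[i-1], which is exactly the momentum update from the loop version, computed in one C-level pass. A plain-Python sketch of what that filter call does:

def first_order_filter(noise, coeff1):
    # Equivalent of signal.lfilter([1], [1, -coeff1], noise):
    # each output sample feeds back into the next one.
    out = [noise[0]]
    for u in noise[1:]:
        out.append(coeff1 * out[-1] + u)
    return out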
I'm trying to wrap my head around linear prediction and figured I'd code up a basic example in Python to test my understanding. The idea behind linear predictive coding is to estimate future samples of a signal based on linear combinations of past samples.
I'm using the lpc module in scikits.talkbox so I don't have to write any of the algorithm myself. Here's my code:
import math
import numpy as np
from scikits.talkbox.linpred.levinson_lpc import levinson, acorr_lpc, lpc

x = np.linspace(0, 11, 12)
order = 5

"""
a = solution of the inversion
e = prediction error
k = reflection coefficients
"""
(a, e, k) = lpc(x, order, axis=-1)

recon = []
for i in range(order, len(x)):
    pred = 0
    for j in range(order):
        pred += -k[j]*x[i-j-1]  # predict from the previous `order` samples
    pred += math.sqrt(e)
    recon.append(pred)

print(recon)
print(x[order:len(x)])
which gives an output of
[5.618790615323507, 6.316875690307965, 7.0149607652924235,
7.713045840276882, 8.411130915261339, 9.109215990245799, 9.807301065230257,
10.505386140214716]
[ 4. 5. 6. 7. 8. 9. 10. 11.]
My concern is that I'm implementing this incorrectly somehow, because I figured that if my input array is a linear signal, it should have no issue predicting future values based on past values. However, it seems to have a particularly high error, especially for the first few values. Would anyone be able to tell me whether I'm implementing this correctly, or point me to a few examples where this is done in Python? Any help is greatly appreciated, thanks!
The linear prediction algorithm extends the original sequence with an infinite number of zeros in both directions. So, unless your input signal is identically zero, the extended sequence is not linear, and you should expect a nonzero error.
Here is my Python implementation:

import numpy as np

def lpc(y, m):
    "Return m linear predictive coefficients for sequence y using the Levinson-Durbin prediction algorithm"
    # step 1: compute the autocorrelation coefficients R_0, ..., R_m
    R = [y.dot(y)]
    if R[0] == 0:
        return [1] + [0] * (m-2) + [-1]
    else:
        for i in range(1, m + 1):
            r = y[i:].dot(y[:-i])
            R.append(r)
        R = np.array(R)
        # step 2: Levinson-Durbin recursion
        A = np.array([1, -R[1] / R[0]])
        E = R[0] + R[1] * A[1]
        for k in range(1, m):
            if E == 0:
                E = 10e-17  # guard against division by zero
            alpha = -A[:k+1].dot(R[k+1:0:-1]) / E
            A = np.hstack([A, 0])
            A = A + alpha * A[::-1]
            E *= (1 - alpha**2)
        return A
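A quick usage sketch on the ramp signal from the question (assuming NumPy and the lpc function above); with A = [1, a_1, ..., a_m], the one-step prediction is xhat[i] = -(a_1*x[i-1] + ... + a_m*x[i-m]):

import numpy as np

x = np.linspace(0, 11, 12)
m = 5
A = lpc(x, m)  # A = [1, a_1, ..., a_m]
# dot the LPC coefficients with the m most recent samples, newest first
preds = [-A[1:].dot(x[i-1::-1][:m]) for i in range(m, len(x))]
print(preds)   # one-step predictions for x[5], ..., x[11]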
I have 10k data points like this:
0.010222
0.010345
0.010465
0.010611
0.010768
0.010890
0.011049
0.011206
0.011329
0.011465
0.011613
0.011763
0.011888
0.012015
0.012154
0.012282
0.012408
0.012524
....
I want to calculate the Lyapunov exponent for that. This is what I've done so far:

import math
import numpy as np

N = len(data)  # data holds the points listed above
lyapunovs = []
eps = 0.0001
for i in range(N):
    for j in range(i + 1, N):
        if np.abs(data[i] - data[j]) < eps:
            for k in range(1, min(N - i, N - j)):
                d0 = np.abs(data[i] - data[j])
                dn = np.abs(data[i + k] - data[j + k])
                lyapunovs.append(math.log(dn) - math.log(d0))  # problem
My problem is that I don't know whether the first Lyapunov exponent is the average of all the lyapunovs when k = 1, or the average of all the lyapunovs for the first time that data[i] - data[j] < eps.
Is this the right implementation for the Lyapunov exponent?
And this is the numerical calculation of the Lyapunov exponent I am referring to: (formula image not reproduced here)
I would calculate the Lyapunov exponent in this way, and then output the results as tuples to a file (see this blog post:
https://blog.abhranil.net/2014/07/22/calculating-the-lyapunov-exponent-of-a-time-series-with-python-code/):
from math import log
import numpy as np

with open('data.txt', 'r') as f:
    data = [float(i) for i in f.read().split()]

N = len(data)
eps = 0.001
lyapunovs = [[] for i in range(N)]

for i in range(N):
    for j in range(i + 1, N):
        if np.abs(data[i] - data[j]) < eps:
            for k in range(min(N - i, N - j)):
                lyapunovs[k].append(log(np.abs(data[i+k] - data[j+k])))

with open('lyapunov.txt', 'w') as f:
    for i in range(len(lyapunovs)):
        if len(lyapunovs[i]):
            string = str((i, sum(lyapunovs[i]) / len(lyapunovs[i])))
            f.write(string + '\n')
I see from the loop structure chosen in the question that a triangle of the Cartesian product of the points is being used. This might improve the estimate of the derivatives, which are susceptible to noise, but it is not explicitly part of the Lyapunov exponent. See this example of the calculation on a known function in the absence of measurement error. Feel free to look into that aspect more, but below I will assume the comparison of signal points adjacent in time.
Your original question uses NumPy, so I will also make use of it. One of the rules of thumb for using NumPy well is to avoid loops, although it is possible to vectorize functions that contain loops. With no explicit time measurements and no repeated values, you could simply do:
import numpy as np
x = np.random.normal(0,1,size=10**4) # Mock signal data
np.mean(np.log(np.abs(np.diff(x))))
Or if the signal is paired with an array of timepoints, then the numerical derivative can involve time:
import numpy as np
x = np.random.normal(0,1,size=10**4) # Mock signal data
t = np.arange(10**4) # Mock time data
np.mean(np.log(np.abs(np.diff(x) / np.diff(t))))
However, in some datasets it is possible for adjacent values to repeat! This can occur when you've measured the signal to only a few decimal places, and it is a problem because it leads to np.log(0) (= -np.inf), which will blow up your calculation. A simple solution is to remove duplicated values, but this is only suitable if duplicates are relatively rare and you have a large sample size. It is possible to estimate an upper bound on the L-exponent by considering the precision of your measurements, but that upper bound is not the estimate of the L-exponent itself.
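A minimal sketch of that clean-up step, assuming the signal is in a NumPy array: keep a sample only when it differs from its immediate predecessor, then take the logs as before:

import numpy as np

x = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 4.0, 5.0])  # toy signal with repeated adjacent values
keep = np.insert(np.diff(x) != 0, 0, True)          # the first sample is always kept
x_dedup = x[keep]
print(np.mean(np.log(np.abs(np.diff(x_dedup)))))    # no -inf from log(0) any more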
I just want to mention that knowing the literal expression is best.
I will take the logistic map equation as an example:

import numpy as np
import matplotlib.pyplot as plt

def logisticmap(x_init, r, length):
    x = [x_init]
    for t in range(length):
        x.append(r*x[-1]*(1-x[-1]))  # x_{t+1} = r * x_t * (1 - x_t)
    return np.array(x)

Now let's generate the data:

r = 3.92
x = logisticmap(0.2, r, 1000)
plt.plot(x)
plt.show()
(plot of the generated logistic map series)
Here is the solution proposed above by Galan,

np.mean(np.log(abs(np.diff(x))))

which gives: -1.0379.

When you derive the Lyapunov exponent from the logistic map equation itself, as the mean of log|f'(x)| = log|r*(1 - 2x)| along the orbit:

np.mean(np.log(abs(r*(1-2*x))))

it gives: 0.538296, which is the actual true value of the Lyapunov exponent. Since the system is in its chaotic regime, the exponent must be positive, so I guess the evaluation from data points is not working in this example; you can try with more data points, but it will still give you a negative LE.

Unfortunately I don't know enough to guide you towards a better estimation of the Lyapunov exponent when you can't derive a mathematical expression, but I would be interested to know!
I tried to reduce the computational complexity with NumPy vectorization.

import numpy as np

def lyapunov_exponent(series: np.ndarray, threshold: float) -> np.ndarray:
    N = len(series)
    eps = threshold
    # log-distances between every pair of points, arranged by lag i
    L = [np.array([0]*N)]
    for i in range(1, N):
        diff = np.abs(series[i:] - series[:-i])
        dist = np.log(diff)
        L.append(np.concatenate([[0]*i, dist]))
    L = np.array(L)
    tf_L = np.where(L < eps, 1, 0)  # mark the pairs considered close
    count_L = np.zeros_like(tf_L)
    for i in range(N):
        indices = (np.array(range(0, N-i)), np.array(range(i, N)))
        count_L[indices] = np.cumsum(tf_L[indices])
    avg = np.sum(count_L * L, axis=0) / np.sum(count_L, axis=0)
    return avg
If there is room for improvement, or if you get a different result from the ones already posted, please reply.
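A small usage sketch with mock data (assuming NumPy; the series and the closeness threshold are the only inputs):

import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(0, 1, size=300)     # mock signal data
avg = lyapunov_exponent(series, 0.001)  # per-index averages, as defined above
print(avg[:5])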
First time posting, so please excuse me if I'm not as concise as I need to be. I'm trying to implement the algorithm for Monte Carlo integration and I've got the code below. However, when I run it with the parameters specified in main, I get 0 as output. I'm pretty sure it has something to do with the way Python handles floats, but I don't know where else to look. Any help is always appreciated; I know I will kick myself when I see my mistake.
One more thing: I'm using Python 2.7.
montecarloint.py
from random import uniform
from math import exp

def estimate_area(f, a, b, m, n=1000):
    # Hit-or-miss Monte Carlo: sample points in the box [a,b] x [0,m]
    # and count how many land under the curve.
    hits = 0
    total = m * (b - a)
    for i in range(n):
        x = uniform(a, b)
        y = uniform(0, m)
        if y <= f(x):
            hits += 1
    frac = hits / n
    return frac * total

def f(x):
    return exp(-x ** 2)

def main():
    print(estimate_area(f, 0, 2, 1))

main()
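In case it helps to pin down the suspicion about floats: in Python 2, / between two ints is floor division, so hits / n evaluates to 0 whenever hits < n. A minimal demonstration of that behaviour, with made-up numbers:

# Python 2.7 behaviour that can produce the zero
hits, n = 742, 1000
print(hits / n)          # 0    (integer floor division)
print(float(hits) / n)   # 0.742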