Example python nfft fourier transform - Issues with signal reconstruction normalization

I wrote a full working example for both nfft, and scipy.fft.
In both cases I start with a simple 1D sinusoidal signal with a little noise, take the fourier transform, and then go backwards and reconstruct the original signal.
Here is my code as clean and readable as I could manage:
import numpy
import nfft
import scipy
import scipy.fft
import matplotlib.pyplot as plt

if True:  # <--- Ensure non-global namespace
    # Define signal:
    x = -0.5 + numpy.random.rand(1000)
    #x = numpy.linspace(-.5, .5, 1000)  # --> in case we want to run uniform time domain
    f = numpy.sin(10 * 2 * numpy.pi * x) + .1 * numpy.random.randn(1000)  # Add some 'y' randomness to the sample

    # Prepare wavenumbers for transform:
    N = len(x)
    k = -N // 2 + numpy.arange(N)
    #print('k', k)  # ---> Uniform steps [-500, -499, ..., 0, ..., 499]

    f_k = nfft.nfft_adjoint(x, f, len(k), truncated=False)

    # Plot transform
    plt.figure()
    plt.plot(k, f_k.real, label='real')
    plt.plot(k, f_k.imag, label='imag')
    plt.legend()

    # Reconstruct the original signal with nfft
    f_recon = nfft.nfft(x, f_k) / 2000

    # Plot original vs reconstructed
    plt.figure()
    plt.title('nfft')
    plt.scatter(x, f, label='f(x)')
    plt.scatter(x, f_recon, label='f_recon(x)', marker='+')
    plt.legend()
if True:  # <--- Ensure non-global namespace
    # Define signal:
    x = numpy.linspace(-.5, .5, 1000)
    f = numpy.sin(10 * 2 * numpy.pi * x) + .1 * numpy.random.randn(1000)  # Add some 'y' randomness to the sample

    # Prepare wavenumbers for transform:
    N = len(x)
    TimeSpacing = x[1] - x[0]
    k = scipy.fft.fftfreq(N, TimeSpacing)
    #print('k', k)  # ---> Confusing steps: [0, 1, ..., 499, -500, -499, ..., -1]

    f_k = scipy.fft.fft(f)

    # Plot transform
    plt.figure()
    plt.plot(k, f_k.real, label='real')
    plt.plot(k, f_k.imag, label='imag')
    plt.legend()

    # Reconstruct the original signal with scipy.fft
    f_recon = scipy.fft.ifft(f_k)

    # Plot original vs reconstructed
    plt.figure()
    plt.title('scipy.fft')
    plt.scatter(x, f, label='f(x)')
    plt.scatter(x, f_recon, label='f_recon(x)', marker='+')
    plt.legend()

plt.show()
Here are the relevant generated plots:
The nfft reconstruction seems to fail at normalizing.
I arbitrarily divided the magnitudes by 2000 just to get them to plot well.
What is the correct normalization constant?
The nfft reconstruction also does not seem to reproduce the original points.
Even if I got the normalization constant correct, there is no way I would get the original points back here.
What did I do wrong, and how do I fix it?

The above-mentioned package does not implement an inverse NFFT.
The ndft is f_hat @ np.exp(-2j * np.pi * x * k[:, None])
The ndft_adjoint is f @ np.exp(2j * np.pi * k * x[:, None])
Let k = -N//2 + np.arange(N). On the uniform grid x = k / N the transform matrix becomes A = np.exp(-2j * np.pi * k * k[:, None] / N), and A @ np.conj(A) = N * np.eye(N) (checked numerically).
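A minimal sketch to check that identity numerically:

import numpy as np

# A @ conj(A) = N * I for the uniform-grid (plain DFT) matrix
N = 64
k = -N // 2 + np.arange(N)
A = np.exp(-2j * np.pi * k * k[:, None] / N)
print(np.allclose(A @ np.conj(A), N * np.eye(N)))  # True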
Thus, on the uniform grid the adjoint transform, divided by N, equals the inverse transform; for random x it does not. The given reference paper provides a few options; I implemented Algorithm 1, CGNE, from page 9:
import numpy as np  # I have the habit to use np

def nfft_inverse(x, y, N, w=1, L=100):
    f = np.zeros(N, dtype=np.complex128)
    r = y - nfft.nfft(x, f)
    p = nfft.nfft_adjoint(x, r, N)
    r_norm = np.sum(abs(r)**2 * w)
    for l in range(L):
        p_norm = np.sum(abs(p)**2 * w)
        alpha = r_norm / p_norm
        f += alpha * w * p
        r = y - nfft.nfft(x, f)
        r_norm_2 = np.sum(abs(r)**2 * w)
        beta = r_norm_2 / r_norm
        p = beta * p + nfft.nfft_adjoint(x, w * r, N)
        r_norm = r_norm_2
        #print(l, r_norm)
    return f
The algorithm converges slowly and poorly
plt.figure(figsize=(14, 7))
plt.title('inverse nfft error histogram')
#plt.scatter(x, f_hat, label='f(x)')
h_hat = nfft_inverse(x, f, N, L = 1)
plt.hist(f_hat - numpy.real(h_hat), bins=30, label='1 iteration')
h_hat = nfft_inverse(x, f, N, L = 10)
plt.hist(f_hat - numpy.real(h_hat), bins=30, label='10 iterations')
h_hat = nfft_inverse(x, f, N, L = 1000)
plt.hist(f_hat - numpy.real(h_hat), bins=30, label='1000 iterations')
plt.xlabel('error')
plt.ylabel('occurrences')
plt.legend()
I also tried scipy minimization, minimizing the residual ||nfft(x, f) - y||**2 explicitly:
import numpy as np  # the habit
import scipy.optimize

def nfft_gradient_descent(x, y, N, L=10, tol=1e-8, method='CG'):
    '''
    Compute min ||A @ f - y||**2 via gradient descent.
    The gradient is
        A^H @ (A @ f - y)
    Multiplication by A is done with nfft.nfft, and by A^H with nfft.nfft_adjoint.
    '''
    def cost(fpack):
        f = fpack[0::2] + 1j * fpack[1::2]
        u = np.sum(np.abs(nfft.nfft(x, f) - y)**2)
        return u
    def grad(fpack):
        f = fpack[0::2] + 1j * fpack[1::2]
        r = nfft.nfft(x, f) - y
        u = nfft.nfft_adjoint(x, r, N)
        return np.stack([np.real(u), np.imag(u)], axis=1).reshape(-1)
    x0 = np.zeros([N, 2])
    result = scipy.optimize.minimize(cost, x0=x0, jac=grad, tol=tol, method=method,
                                     options={'maxiter': L, 'disp': True})
    return result.x[0::2] + 1j * result.x[1::2]
The results look similar; you can try different methods or parameters yourself if you want. But I believe the transformation is ill-conditioned: the residual in the transformed domain is considerably reduced, yet the residual on the reconstructed values remains large.
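One way to see the ill-conditioning directly is to build the dense NDFT matrix for a small N (a sketch; only feasible for small sizes) and look at its condition number:

import numpy as np

N = 256
x = -0.5 + np.random.rand(N)
k = -N // 2 + np.arange(N)
A = np.exp(-2j * np.pi * x[:, None] * k)  # dense NDFT matrix at the random nodes
print(np.linalg.cond(A))  # typically very large for random nodes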
Edit 1
Is it basically true that you found that there is not a true inverse to the algorithm, then? I cannot obtain my original points? x != nfft(nfft_adjoint(x))
Please check section 2.3 of the reference paper.
Numerical comparison
Cris Luengo's answer mentioned another possibility: instead of reconstructing f at the points x, you may reconstruct a resampled version at equidistant points using the ifft. So you already have three options, and I will do a quick comparison. Bear in mind that the plot shown there is based on an NFFT computed on 16k samples, while here I am using 1k samples.
Since the FFT method uses different points, we cannot compare to the original signal; what I will do instead is compare with the harmonic function without the noise. The variance of the noise is 0.01, so an exact reconstruction would lead to this mean squared error.
N = 1024
x = -0.5 + numpy.random.rand(N)
f_hat = numpy.sin(10 * 2 * numpy.pi * x) + .1 * numpy.random.randn(N)  # Add some 'y' randomness to the sample
k = -N // 2 + numpy.arange(N)
f = nfft.nfft(x, f_hat)

print('nfft_inverse')
h_hat = nfft_inverse(x, f, len(x), L=10)
print('10 iterations: ', np.mean((numpy.sin(10 * 2 * numpy.pi * x) - numpy.real(h_hat))**2))
h_hat = nfft_inverse(x, f, len(x), L=100)
print('100 iterations: ', np.mean((numpy.sin(10 * 2 * numpy.pi * x) - numpy.real(h_hat))**2))
h_hat = nfft_inverse(x, f, len(x), L=1000)
print('1000 iterations: ', np.mean((numpy.sin(10 * 2 * numpy.pi * x) - numpy.real(h_hat))**2))

print('nfft_gradient_descent')
h_hat = nfft_gradient_descent(x, f, len(x), L=10)
print('10 iterations: ', np.mean((numpy.sin(10 * 2 * numpy.pi * x) - numpy.real(h_hat))**2))
h_hat = nfft_gradient_descent(x, f, len(x), L=100)
print('100 iterations: ', np.mean((numpy.sin(10 * 2 * numpy.pi * x) - numpy.real(h_hat))**2))
h_hat = nfft_gradient_descent(x, f, len(x), L=1000)
print('1000 iterations: ', np.mean((numpy.sin(10 * 2 * numpy.pi * x) - numpy.real(h_hat))**2))

# Reconstruct the original at an equally spaced grid based on the nfft result using ifft
# (f_k and N2 as in the IFFT-based reconstruction from Cris Luengo's answer below)
f_recon = - numpy.fft.fftshift(numpy.fft.ifft(numpy.fft.ifftshift(f_k))) / (N / N2)
x_recon = k / N
print('using IFFT: ', np.mean((numpy.sin(10 * 2 * numpy.pi * x_recon) - numpy.real(f_recon))**2))
Results:
nfft_inverse
10 iterations: 0.0798988590351581
100 iterations: 0.05136853850272318
1000 iterations: 0.037316315280700896
nfft_gradient_descent
10 iterations: 0.08832834348902704
100 iterations: 0.05901599049633016
1000 iterations: 0.043921864589484
using IFFT: 0.49044932854606377
Another way to view it is this plot:
plt.plot(numpy.sin(10 * 2 * numpy.pi * x_recon), numpy.real(f_recon), '.', label='ifft')
plt.plot(numpy.sin(10 * 2 * numpy.pi * x), numpy.real(nfft_gradient_descent(x, f, len(x), L = 5)), '.', label='gradient descent L=5')
plt.plot(numpy.sin(10 * 2 * numpy.pi * x), numpy.real(nfft_inverse(x, f, len(x), L = 5)), '.', label='nfft_inverse L=5')
plt.plot(numpy.sin(10 * 2 * numpy.pi * x), np.real(f_hat), '.', label='original')
plt.legend()
Even though the IFFT matrix is better conditioned, it results in a worse reconstruction of the signal. Also, from the last plot it becomes more noticeable that there is a slight attenuation, probably due to a systematic energy leak to the imaginary part (an error in my code??). Just as a quick test, multiplying it by 1.3 gives better results.

Bob has already posted an excellent answer; this is just to complement it with some details I hope are instructive.
First, compare the two plots for the computed frequency components. Note how the one for the NFFT is much noisier than the one for the regular FFT. You are estimating the frequency components of a sampled signal from noisy samples; in one case the samples are regularly spaced, in the other they are randomly spaced. It is a known result that regular sampling is more efficient than random sampling (efficient meaning that you need fewer samples to get the same amount of information). Thus, it is expected that the random sampling yields the noisier result.
We can compute the "normal" inverse FFT from the frequency components estimated by the NFFT:
f_recon = numpy.fft.fftshift(numpy.fft.ifft(numpy.fft.ifftshift(f_k)))
x_recon = numpy.linspace(-.5, .5, N)
I used ifftshift because NFFT defines the k to go from -N/2 to N/2-1, whereas the FFT defines it to go from 0 to N-1. ifftshift swaps the two halves of the signal to turn the first into the second (k from N/2 to N-1 is equal to -N/2 to -1). I also used fftshift on the result of the IFFT, because the same thing applies to the time axis, it shifts the origin from the first sample to the middle of the sequence.
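A one-liner makes the reordering concrete:

import numpy as np

k = np.arange(-4, 4)        # NFFT ordering: -N/2 ... N/2-1
print(np.fft.ifftshift(k))  # [ 0  1  2  3 -4 -3 -2 -1], the FFT ordering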
Note how noisy f_recon is. This is explained by the poor estimates of f_k we could make from the non-uniformly sampled signal. There is also a sign error; we could already observe it when we compared the two estimates of f_k. It comes from the adjoint NFFT having the same sign in the exponent as the inverse DFT, which actually means that f_recon is flipped w.r.t. x.
If we increase the number of random samples we can obtain a better estimate:
import numpy
import nfft
import matplotlib.pyplot as plt
#Define signal:
N = 1024 * 16 # power of two for speed
x = -0.5 + numpy.random.rand(N)
f = numpy.sin(10 * 2 * numpy.pi * x) + .1 * numpy.random.randn(N) # Add some 'y' randomness to the sample
#prepare wavenumbers for transform:
k = - N // 2 + numpy.arange(N)
N2 = 1024
f_k = nfft.nfft_adjoint(x, f, N2, truncated=False)
#Reconstruct the original signal with nfft
# (note the minus sign to flip the signal, in reality we should flip x)
f_recon = - numpy.fft.fftshift(numpy.fft.ifft(numpy.fft.ifftshift(f_k))) / (N / N2)
x_recon = numpy.linspace(-.5, .5, N2, endpoint=False)
#Plot original vs reconstructed
plt.figure()
plt.title('nfft')
plt.scatter(x[:N2], f[:N2], label='f(x)') # don't plot all samples, there's too many
plt.scatter(x_recon, f_recon, label='f_recon(x)')
plt.legend()
plt.show()

Related

Avoid that the inverse fourier Transform output (irfft) in Python is mirrored

I have a given power spectral density function of a wind excitation. In order to find my covariance function, I am using the irfft function in python. However, the covariance function is weirdly mirrored at about x = 5.
Here is my code.
import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import irfft, rfftfreq

def to_time_domain(x, y):
    N = 2 * len(x) - 1          # number of samples in the time domain
    T = (len(x) - 1) / (x[-1])  # observation length in time
    sample_rate = N / T
    ys = irfft(y * sample_rate / 2, N - 1)
    return ys

# Power spectral density of wind
def Gust_induced_vibration(vm, vm_10, z, f, k, alpha, roh=1.25):
    I_z_quad = 6 * k * (z / 10)**(-2 * alpha)
    v_z = vm_10 * (z / 10)**alpha
    zeta = 1200 * f / vm_10
    S = (roh * 2 * vm)**2 * I_z_quad * v_z**2 * 2/3 * zeta**2 / ((1 + zeta**2)**(4/3))
    return S

sample_rate = 250     # data points per second --> max. freq: sample_rate/2
T = 10                # observation length
N = sample_rate * T   # number of samples
dt = 1 / sample_rate  # time step
f = rfftfreq(N, 1 / sample_rate)

# Environmental conditions that influence the excitation
k = 0.02  # city
z = 10    # m
v = 22    # m/s
alpha = 1 / np.log(10 / 0.5)  # depends on surface roughness

power_spec = Gust_induced_vibration(v, v, z, f, k, alpha)

# Covariance
cov = to_time_domain(f, power_spec)

t = np.linspace(0, T - dt, sample_rate * T)
plt.plot(t, cov)
plt.show()
The plot of the covariance function looks as follows: [plot of covariance omitted]
I am grateful for every tip!

How to use Gradient Descent to solve this multiple terms trigonometry function?

Question is like this:
f(x) = A sin(2π * L * x) + B cos(2π * M * x) + C sin(2π * N * x)
where L, M, N are integer constants with 0 <= L, M, N <= 100, and A, B, C can be any integers.
Here is the given data:
x = [0,0.01,0.02,0.03,0.04,0.05,0.06,0.07,0.08,0.09,0.1,0.11,0.12,0.13,0.14,0.15,0.16,0.17,0.18,0.19,0.2,0.21,0.22,0.23,0.24,0.25,0.26,0.27,0.28,0.29,0.3,0.31,0.32,0.33,0.34,0.35,0.36,0.37,0.38,0.39,0.4,0.41,0.42,0.43,0.44,0.45,0.46,0.47,0.48,0.49,0.5,0.51,0.52,0.53,0.54,0.55,0.56,0.57,0.58,0.59,0.6,0.61,0.62,0.63,0.64,0.65,0.66,0.67,0.68,0.69,0.7,0.71,0.72,0.73,0.74,0.75,0.76,0.77,0.78,0.79,0.8,0.81,0.82,0.83,0.84,0.85,0.86,0.87,0.88,0.89,0.9,0.91,0.92,0.93,0.94,0.95,0.96,0.97,0.98,0.99]
y = [4,1.240062433,-0.7829654986,-1.332487982,-0.3337640721,1.618033989,3.512512389,4.341307895,3.515268061,1.118929599,-2.097886967,-4.990538967,-6.450324073,-5.831575611,-3.211486891,0.6180339887,4.425660706,6.980842552,7.493970785,5.891593744,2.824429495,-0.5926374511,-3.207870455,-4.263694544,-3.667432785,-2,-0.2617162175,0.5445886005,-0.169441247,-2.323237059,-5.175570505,-7.59471091,-8.488730333,-7.23200463,-3.924327772,0.6180339887,5.138501587,8.38127157,9.532377045,8.495765687,5.902113033,2.849529206,0.4768388529,-0.46697525,0.106795821,1.618033989,3.071952496,3.475795162,2.255463709,-0.4905371745,-4,-7.117914956,-8.727599664,-8.178077181,-5.544088451,-1.618033989,2.365340134,5.169257268,5.995297102,4.758922924,2.097886967,-0.8873135564,-3.06024109,-3.678989552,-2.666365632,-0.6180339887,1.452191817,2.529722611,2.016594378,-0.01374122059,-2.824429495,-5.285215072,-6.302694708,-5.246870619,-2.210419738,2,6.13956874,8.965976562,9.68000641,8.201089581,5.175570505,1.716858387,-1.02183483,-2.278560533,-1.953524751,-0.6180339887,0.7393509358,1.129293593,-0.02181188158,-2.617913164,-5.902113033,-8.727381729,-9.987404016,-9.043589913,-5.984648344,-1.618033989,2.805900027,6.034770001,7.255101454,6.368389697]
How to use Gradient Descent to solve this multiple terms trigonometry function?
Gradient descent is not well suited for optimisation over integers. You can try a naive relaxation where you solve in floats and hope the rounded solution is still ok.
from autograd import grad, numpy as jnp
import numpy as np

def cast(params):
    [A, B, C, L, M, N] = params
    L = jnp.minimum(jnp.abs(L), 100)
    M = jnp.minimum(jnp.abs(M), 100)
    N = jnp.minimum(jnp.abs(N), 100)
    return A, B, C, L, M, N

def pred(params, x):
    [A, B, C, L, M, N] = cast(params)
    return A * jnp.sin(2 * jnp.pi * L * x) + B * jnp.cos(2 * jnp.pi * M * x) + C * jnp.sin(2 * jnp.pi * N * x)
# x and y are the same data lists given in the question above.
def loss(params):
    p = pred(params, np.array(x))
    return jnp.mean((np.array(y) - p)**2)

params = np.array([np.random.random() * 100 for _ in range(6)])
g = grad(loss)
for _ in range(10000):
    params = params - 0.001 * g(params)

print("Relaxed solution", cast(params), "loss=", loss(params))
constrained_params = np.round(cast(params))
print("Integer solution", constrained_params, "loss=", loss(constrained_params))
print()
Since the problem will have a lot of local minima, you might need to run it multiple times.
It's quite hard to use gradient descent to find a solution to this problem, because it tends to get stuck when changing the L, M, or N parameters. The gradients for those can push it away from the right solution, unless it is very close to an optimal solution already.
There are ways to get around this, such as basinhopping or random search, but because of the function you're trying to learn, you have a better alternative.
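For reference, a basinhopping attempt would look something like this sketch (untuned, reusing the loss and params defined in the relaxation answer above):

import scipy.optimize

# Wraps a local minimizer in randomized restarts to escape local minima
result = scipy.optimize.basinhopping(loss, params, niter=50,
                                     minimizer_kwargs={'method': 'L-BFGS-B'})
print(result.x, result.fun)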
Since you're trying to learn a sinusoid function, you can use an FFT to find the frequencies of the sine waves. Once you have those frequencies, you can find the amplitudes and phases used to generate the same sine wave.
Pardon the messiness of this code; this is my first time using an FFT.
import scipy.fft
import numpy as np
import math
import matplotlib.pyplot as plt

def get_top_frequencies(x, y, num_freqs):
    x = np.array(x)
    y = np.array(y)

    # Find timestep (assume constant timestep)
    dt = abs(x[0] - x[-1]) / (len(x) - 1)

    # Take discrete FFT of y
    spectral = scipy.fft.fft(y)
    freq = scipy.fft.fftfreq(y.shape[0], d=dt)

    # Cut off top half of frequencies. Assumes input signal is real, and not complex.
    spectral = spectral[:int(spectral.shape[0] / 2)]
    # Double amplitudes to correct for cutting off top half.
    spectral *= 2
    # Adjust amplitude by sampling timestep
    spectral *= dt

    # Get amplitudes for all frequencies. This is taking the magnitude of the complex number
    spectral_amplitude = np.abs(spectral)

    # Pick frequencies with highest amplitudes
    highest_idx = np.argsort(spectral_amplitude)[::-1][:num_freqs]

    # Find amplitude, frequency, and phase components of each term
    highest_amplitude = spectral_amplitude[highest_idx]
    highest_freq = freq[highest_idx]
    highest_phase = np.angle(spectral[highest_idx]) / math.pi

    # Convert it into a Python function
    function = ["def func(x):", "return ("]
    for i, components in enumerate(zip(highest_amplitude, highest_freq, highest_phase)):
        amplitude, freq, phase = components
        plus_sign = " +" if i != (num_freqs - 1) else ""
        term = f"{amplitude:.2f} * math.cos(2 * math.pi * {freq:.2f} * x + math.pi * {phase:.2f}){plus_sign}"
        function.append("    " + term)
    function.append(")")
    return "\n    ".join(function)
# x and y are the same data lists given in the question above.
print(get_top_frequencies(x, y, 3))
That produces this function:
def func(x):
    return (
        5.00 * math.cos(2 * math.pi * 10.00 * x + math.pi * 0.50) +
        4.00 * math.cos(2 * math.pi * 5.00 * x + math.pi * -0.00) +
        2.00 * math.cos(2 * math.pi * 3.00 * x + math.pi * -0.50)
    )
Which is not quite the format you specified: you asked for two sines and one cosine, and for no phase parameter. However, using the trigonometric identity cos(x) = sin(pi/2 - x), you can convert this into an equivalent expression that matches what you want:
def func(x):
    return (
        5.00 * math.sin(2 * math.pi * -10.00 * x) +
        4.00 * math.cos(2 * math.pi * 5.00 * x) +
        2.00 * math.sin(2 * math.pi * 3.00 * x)
    )
And there's the original function!
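If you want to convince yourself the two forms really agree, a quick numerical spot-check (a sketch):

import math

# Evaluate both forms at a few points; they should match to rounding error
for x in (0.0, 0.13, 0.37):
    with_phase = (5 * math.cos(2 * math.pi * 10 * x + math.pi * 0.5)
                  + 4 * math.cos(2 * math.pi * 5 * x)
                  + 2 * math.cos(2 * math.pi * 3 * x - math.pi * 0.5))
    no_phase = (5 * math.sin(2 * math.pi * -10 * x)
                + 4 * math.cos(2 * math.pi * 5 * x)
                + 2 * math.sin(2 * math.pi * 3 * x))
    print(math.isclose(with_phase, no_phase, abs_tol=1e-12))  # True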

correct ordering of array for numpy fast Fourier transform

I am trying to implement a simple version of the immersed boundary method for fluid-structure interactions in a periodic domain using the fast Fourier transform. This is done by first setting a force on the fluid, taking its FFT, solving a series of matrix equations, and taking the inverse FFT to recover the solution to the matrix system in the real domain. I am currently getting very strange results, and I think it might be because the solutions in Fourier space (u_hat, v_hat, p_hat) are not ordered correctly for the numpy ifft2 function, but I don't know how to fix that. I'd appreciate any help!
Edit: the idea is that given f, g, we want to find u, v, p. The algorithm is to take the fft in 2D of f and g, use that to compute the fft of u, v, and p, and then take the inverse fft to recover the real u, v, p. I've added the specific function relating f_hat, g_hat and u_hat, v_hat, p_hat.
Edit2: what I'd really like to know is how to order an arbitrary array so that it can be passed to the numpy ifft2 function, which requires the following: input should have the term for zero frequency in the low-order corner of the two axes, the positive frequency terms in the first half of these axes, the term for the Nyquist frequency in the middle of the axes and the negative frequency terms in the second half of both axes, in order of decreasingly negative frequency.
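As a quick illustration of that layout, numpy's own fftfreq helper generates frequencies in exactly the order ifft2 expects along each axis (a sketch):

import numpy as np

# Zero, then positive terms, then negative terms (Nyquist at the boundary for even N)
print(np.fft.fftfreq(6, d=1.0 / 6))  # [ 0.  1.  2. -3. -2. -1.]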
N = 20     # Number of cartesian grid points
l = 1.0    # Length of domain
dx = l / N  # Grid step size
mu = 8.9 * 10**(-4)

f_hat = np.fft.fft2(f)  # Fourier transform of x component of the force
g_hat = np.fft.fft2(g)  # Fourier transform of y component of the force

u_hat = np.zeros((N, N), dtype=complex)
v_hat = np.zeros((N, N), dtype=complex)
p_hat = np.zeros((N, N), dtype=complex)

for k in range(0, N):
    for m in range(0, N):
        if k == 0 and m == 0:
            u_hat[k, m] = 0.0
            v_hat[k, m] = 0.0
            p_hat[k, m] = 0.0
        else:
            u_hat[k, m], v_hat[k, m], p_hat[k, m] = fluid_system(k, m, N, dx, mu, f_hat[k, m], g_hat[k, m])

# Take inverse Fourier transform to get u and p
u = np.real((np.fft.ifft2(u_hat))).reshape((len(x), 1))
v = np.real((np.fft.ifft2(v_hat))).reshape((len(x), 1))
p = np.real(np.fft.ifft2(p_hat))

def fluid_system(l, m, N, h, mu, fft_f, fft_g):
    L = - 4.0 / (h**2) * ((np.sin(np.pi * l / N))**2 + (np.sin(np.pi * m / N))**2)
    D_l = - 1j / h * np.sin(2.0 * np.pi * l / N)
    D_m = - 1j / h * np.sin(2.0 * np.pi * m / N)
    A = np.zeros((3, 3), dtype=complex)
    A[0, 0] = mu * L
    A[0, 2] = - D_l
    A[1, 1] = mu * L
    A[1, 2] = - D_m
    A[2, 0] = D_l
    A[2, 1] = D_m
    b = np.zeros((3, 1), dtype=complex)
    b[0] = fft_f
    b[1] = fft_g
    res = (np.linalg.solve(A, b)).ravel()
    return res

Correct implementation of SI, SIS, SIR models (python)

I have created some very basic implementations of the mentioned models. However, although the graphs seem to look right, the numbers don't add up to a constant. The sum of susceptible/infected/recovered people across the compartments should add up to N (the total number of people), but it doesn't; for some reason it adds up to some bizarre decimal numbers, and I really don't know how to fix it after looking at it for 3 days now.
The SI Model:
import matplotlib.pyplot as plt

N = 1000000
S = N - 1
I = 1
beta = 0.6

sus = []   # infected compartment
inf = []   # susceptible compartment
prob = []  # probability of infection at time t

def infection(S, I, N):
    t = 0
    while (t < 100):
        S = S - beta * ((S * I / N))
        I = I + beta * ((S * I) / N)
        p = beta * (I / N)
        sus.append(S)
        inf.append(I)
        prob.append(p)
        t = t + 1

infection(S, I, N)

figure = plt.figure()
figure.canvas.set_window_title('SI model')

figure.add_subplot(211)
inf_line, = plt.plot(inf, label='I(t)')
sus_line, = plt.plot(sus, label='S(t)')
plt.legend(handles=[inf_line, sus_line])
plt.ticklabel_format(style='sci', axis='y', scilimits=(0, 0))  # use scientific notation

ax = figure.add_subplot(212)
prob_line = plt.plot(prob, label='p(t)')
plt.legend(handles=prob_line)

type(ax)  # matplotlib.axes._subplots.AxesSubplot
# manipulate
vals = ax.get_yticks()
ax.set_yticklabels(['{:3.2f}%'.format(x * 100) for x in vals])

plt.xlabel('T')
plt.ylabel('p')
plt.show()
SIS Model:
import matplotlib.pylab as plt

N = 1000000
S = N - 1
I = 1
beta = 0.3
gamma = 0.1

sus = []
inf = []

def infection(S, I, N):
    for t in range(0, 1000):
        S = S - (beta * S * I / N) + gamma * I
        I = I + (beta * S * I / N) - gamma * I
        sus.append(S)
        inf.append(I)

infection(S, I, N)

figure = plt.figure()
figure.canvas.set_window_title('SIS model')

inf_line, = plt.plot(inf, label='I(t)')
sus_line, = plt.plot(sus, label='S(t)')
plt.legend(handles=[inf_line, sus_line])
plt.ticklabel_format(style='sci', axis='y', scilimits=(0, 0))
plt.xlabel('T')
plt.ylabel('N')
plt.show()
SIR Model:
import matplotlib.pylab as plt

N = 1000000
S = N - 1
I = 1
R = 0
beta = 0.5
mu = 0.1

sus = []
inf = []
rec = []

def infection(S, I, R, N):
    for t in range(1, 100):
        S = S - (beta * S * I) / N
        I = I + ((beta * S * I) / N) - R
        R = mu * I
        sus.append(S)
        inf.append(I)
        rec.append(R)

infection(S, I, R, N)

figure = plt.figure()
figure.canvas.set_window_title('SIR model')

inf_line, = plt.plot(inf, label='I(t)')
sus_line, = plt.plot(sus, label='S(t)')
rec_line, = plt.plot(rec, label='R(t)')
plt.legend(handles=[inf_line, sus_line, rec_line])
plt.ticklabel_format(style='sci', axis='y', scilimits=(0, 0))
plt.xlabel('T')
plt.ylabel('N')
plt.show()
I'll look only at the SI model.
Your two key variables are S and I. (You may have reversed the meanings of these two variables, though that does not affect what I write here.) You initialize them so their sum is N, which is the constant 1000000.
You update your two key variables in the lines
S = S - beta * ((S * I / N))
I = I + beta * ((S * I) / N)
You apparently intend to add to I and subtract from S the same value, so that the sum of S and I is unchanged. However, you actually first change S and then use that new value to change I, so the values added and subtracted are not actually the same, and the sum of the variables does not remain constant.
You can fix this by using Python's ability to update multiple variables in one line. Replace those two lines with
S, I = S - beta * ((S * I / N)), I + beta * ((S * I) / N)
This calculates both of the new values before updating the variables, so the same value is actually added to one variable and subtracted from the other. (There are other ways to get the same effect, such as temporary variables for the updated values, or one temporary variable to store the amount to add and subtract, but since you use Python you may as well use its capabilities.)
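A minimal illustration of the difference, outside the epidemic context:

a, b = 10, 5

# Simultaneous assignment evaluates both right-hand sides first:
a, b = a - b, b + a   # the second expression still sees the old a = 10
print(a, b)           # 5 15

# Sequential updates do not:
a, b = 10, 5
a = a - b             # a is now 5
b = b + a             # uses the NEW a, giving 10 instead of 15
print(a, b)           # 5 10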
When I now run the program, I get these graphs:
which I think is what you want.
The solution above worked for the SIS model as well.
As for the SIR model, I had to solve the differential equations using odeint; here is a simple solution to the SIR model:
import matplotlib.pylab as plt
from scipy.integrate import odeint
import numpy as np

N = 1000
S = N - 1
I = 1
R = 0
beta = 0.6   # infection rate
gamma = 0.2  # recovery rate

# differential equations
def diff(sir, t):
    # sir[0] - S, sir[1] - I, sir[2] - R
    dsdt = - (beta * sir[0] * sir[1]) / N
    didt = (beta * sir[0] * sir[1]) / N - gamma * sir[1]
    drdt = gamma * sir[1]
    print(dsdt + didt + drdt)  # sanity check: should print ~0, so S+I+R stays constant
    dsirdt = [dsdt, didt, drdt]
    return dsirdt

# initial conditions
sir0 = (S, I, R)

# time points
t = np.linspace(0, 100)

# solve ODE
# the parameters are: the equations, initial conditions,
# and time steps (between 0 and 100)
sir = odeint(diff, sir0, t)

plt.plot(t, sir[:, 0], label='S(t)')
plt.plot(t, sir[:, 1], label='I(t)')
plt.plot(t, sir[:, 2], label='R(t)')
plt.legend()
plt.xlabel('T')
plt.ylabel('N')
# use scientific notation
plt.ticklabel_format(style='sci', axis='y', scilimits=(0, 0))
plt.show()
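A quick way to confirm the compartments now add up to a constant (a sketch, assuming the sir array from the script above):

import numpy as np

# S + I + R should stay equal to N at every time point
print(np.allclose(sir.sum(axis=1), N))  # True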

Calculate the Fourier series with the trigonometry approach

I try to implement the Fourier series function according to the following formulas:

Sf(x) = a_0 / 2 + sum_{n=1}^{inf} ( a_n cos(n pi x / L) + b_n sin(n pi x / L) )

where

a_n = (1/L) * integral from -L to L of f(x) cos(n pi x / L) dx

and

b_n = (1/L) * integral from -L to L of f(x) sin(n pi x / L) dx
Here is my approach to the problem:
import numpy as np
import pylab as py

# Define "x" range.
x = np.linspace(0, 10, 1000)

# Define "T", i.e functions' period.
T = 2
L = T / 2

# "f(x)" function definition.
def f(x):
    return np.sin(np.pi * 1000 * x)

# "a" coefficient calculation.
def a(n, L, accuracy = 1000):
    a, b = -L, L
    dx = (b - a) / accuracy
    integration = 0
    for i in np.linspace(a, b, accuracy):
        x = a + i * dx
        integration += f(x) * np.cos((n * np.pi * x) / L)
        integration *= dx
    return (1 / L) * integration

# "b" coefficient calculation.
def b(n, L, accuracy = 1000):
    a, b = -L, L
    dx = (b - a) / accuracy
    integration = 0
    for i in np.linspace(a, b, accuracy):
        x = a + i * dx
        integration += f(x) * np.sin((n * np.pi * x) / L)
        integration *= dx
    return (1 / L) * integration

# Fourier series.
def Sf(x, L, n = 10):
    a0 = a(0, L)
    sum = 0
    for i in np.arange(1, n + 1):
        sum += ((a(i, L) * np.cos(n * np.pi * x)) + (b(i, L) * np.sin(n * np.pi * x)))
    return (a0 / 2) + sum
# x axis.
py.plot(x, np.zeros(np.size(x)), color = 'black')
# y axis.
py.plot(np.zeros(np.size(x)), x, color = 'black')
# Original signal.
py.plot(x, f(x), linewidth = 1.5, label = 'Signal')
# Approximation signal (Fourier series coefficients).
py.plot(x, Sf(x, L), color = 'red', linewidth = 1.5, label = 'Fourier series')
# Specify x and y axes limits.
py.xlim([0, 10])
py.ylim([-2, 2])
py.legend(loc = 'upper right', fontsize = '10')
py.show()
...and here is what I get after plotting the result:
I've read How to calculate a Fourier series in Numpy? and I've implemented that approach already. It works great, but it uses the exponential method, whereas I want to focus on trigonometric functions and the rectangular method for calculating the integrations for the a_n and b_n coefficients.
Thank you in advance.
UPDATE (SOLVED)
Finally, here is a working example of the code. However, I'll spend more time on it, so if there is anything that can be improved, it will be done.
from __future__ import division
import numpy as np
import pylab as py

# Define "x" range.
x = np.linspace(0, 10, 1000)

# Define "T", i.e functions' period.
T = 2
L = T / 2

# "f(x)" function definition.
def f(x):
    return np.sin((np.pi) * x) + np.sin((2 * np.pi) * x) + np.sin((5 * np.pi) * x)

# "a" coefficient calculation.
def a(n, L, accuracy = 1000):
    a, b = -L, L
    dx = (b - a) / accuracy
    integration = 0
    for x in np.linspace(a, b, accuracy):
        integration += f(x) * np.cos((n * np.pi * x) / L)
    integration *= dx
    return (1 / L) * integration

# "b" coefficient calculation.
def b(n, L, accuracy = 1000):
    a, b = -L, L
    dx = (b - a) / accuracy
    integration = 0
    for x in np.linspace(a, b, accuracy):
        integration += f(x) * np.sin((n * np.pi * x) / L)
    integration *= dx
    return (1 / L) * integration

# Fourier series.
def Sf(x, L, n = 10):
    a0 = a(0, L)
    sum = np.zeros(np.size(x))
    for i in np.arange(1, n + 1):
        sum += ((a(i, L) * np.cos((i * np.pi * x) / L)) + (b(i, L) * np.sin((i * np.pi * x) / L)))
    return (a0 / 2) + sum

# x axis.
py.plot(x, np.zeros(np.size(x)), color = 'black')
# y axis.
py.plot(np.zeros(np.size(x)), x, color = 'black')

# Original signal.
py.plot(x, f(x), linewidth = 1.5, label = 'Signal')
# Approximation signal (Fourier series coefficients).
py.plot(x, Sf(x, L), '.', color = 'red', linewidth = 1.5, label = 'Fourier series')

# Specify x and y axes limits.
py.xlim([0, 5])
py.ylim([-2.2, 2.2])
py.legend(loc = 'upper right', fontsize = '10')
py.show()
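As a quick sanity check on the fixed code (a sketch, assuming a, b and L from the script above): for f(x) = sin(pi x) + sin(2 pi x) + sin(5 pi x) with L = 1, the only nonzero coefficients should be b_1, b_2 and b_5, each close to 1:

for n in range(1, 7):
    print(n, round(a(n, L), 3), round(b(n, L), 3))
# expect b close to 1.0 for n = 1, 2, 5 and everything else close to 0.0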
Consider developing your code in a different way, block by block. You should be surprised if code like this worked on the first try. Debugging is one option, as @tom10 said. The other option is rapid prototyping the code step by step in the interpreter, even better with ipython.
Above, you are expecting that b_1000 is non-zero, since the input f(x) is a sinusoid with a 1000 in it. You're also expecting that all other coefficients are zero, right?
Then you should focus on the function b(n, L, accuracy = 1000) only. Looking at it, 3 things are going wrong. Here are some hints.
The multiplication by dx is inside the loop. Are you sure about that?
In the loop, i is supposed to be an integer, right? Is it really an integer? By prototyping or debugging you would discover this.
Be careful whenever you write (1/L) or a similar expression. If you're using python2.7, it is likely doing the wrong thing. If not, at least put a from __future__ import division at the top of your source. Read PEP 238 if you don't know what I am talking about (see the sketch below).
If you address these 3 points, b() will work. Then think of a() in a similar fashion.
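A minimal sketch of the third pitfall:

# Python 2 pitfall: integer / integer truncates, so with a, b = -L, L and
# accuracy = 1000, dx = (b - a) / accuracy evaluates to 2 / 1000 == 0,
# and the whole integral collapses to zero.
from __future__ import division  # makes / true division on Python 2
print(2 / 1000)  # 0.002 with true division; 0 under Python 2 integer division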
