How to calculate 95% confidence level of Fourier transform in Python? - python

After calculating the Fast Fourier Transform (FFT) of a time series in Python/Scipy, I am trying to plot the 95% confidence level at which the power spectrum differs from red or white noise, but haven't found a straightforward way to do so. I tried following this thread: Power spectrum in python - significance levels
and wrote the following code to test for a sine function with random noise:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import chi2
from scipy.fft import rfft, rfftfreq
x=np.linspace(0,10,500)
data = np.sin(20*np.pi*x)+np.random.rand(500) - 0.5
yf = rfft(data)
xf = rfftfreq(len(data), 1)
n=len(data)
var=np.var(data)
### degrees of freedom
M=n/2
phi=(2*(n-1)-M/2.)/M
###values of chi-squared
chi_val_99 = chi2.isf(q=0.01/2, df=phi) #/2 for two-sided test
chi_val_95 = chi2.isf(q=0.05/2, df=phi)
### normalization of power spectrum with 1/n
plt.figure(figsize=(5,5))
plt.plot(xf,np.abs(yf)/n, color='k')
plt.axhline(y=(var/n)*(chi_val_95/phi),color='r',linestyle='--')
But the resulting line lies below all of the power spectrum, as in Fig. 1. What am I doing wrong? Is there another way to get the significance of the FFT power spectrum?

Background considerations
I did not read all of the references included in the answer you linked to (in particular Pankofsky et al.), and couldn't find an explicit derivation of the formula or of exactly which conditions the results apply under. On the other hand, I've found a few other references where a derivation could more readily be confirmed.
Based on the answer to this question on dsp.stackexchange.com, if you only had white Gaussian noise with unit variance, the squared amplitude of each Fourier coefficient would have a chi-squared distribution with asymptotically 2 degrees of freedom (the sum of 2 Gaussians, one for each of the real and imaginary parts of the complex Fourier coefficient, when n >> 1). When the noise does not have unit variance, it follows a more general Gamma distribution (although in this case you can simply think of it as scaling the survival function). For noise with a uniform distribution in the [-0.5,0.5] range and a sufficiently large number of samples, the distribution can also be approximated by a Gamma distribution thanks to the Central Limit Theorem.
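As a quick numerical sanity check of that claim (a sketch, not taken from the linked references): for unit-variance white Gaussian noise the real and imaginary parts of each non-DC, non-Nyquist FFT coefficient each have variance n/2, so 2*np.abs(X_k)**2/n should be approximately chi-squared with 2 degrees of freedom:
import numpy as np
from scipy.stats import chi2, kstest
from scipy.fft import rfft
rng = np.random.default_rng(0)
n = 4096
noise = rng.standard_normal(n)      # unit-variance white Gaussian noise
X = rfft(noise)[1:-1]               # drop the DC and Nyquist bins
stat = 2 * np.abs(X)**2 / n         # should follow chi2(df=2)
print(kstest(stat, chi2(2).cdf))    # a large p-value is consistent with chi2(2)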
To illustrate and better understand these distributions, we can go through gradually more complex cases.
Frequency domain distribution of random noise
For the sake of comparison with the later case of uniformly distributed data, we will use Gaussian noise with a matching variance. Since the variance of data uniformly distributed in the range [-0.5,0.5] is 1/12, this gives us the following data:
data = np.sqrt(1.0/12)*np.random.randn(500)
Now let us check the statistics of the power spectrum. As indicated earlier, the squared magnitude of each frequency coefficient is a random variable with an approximately Gamma distribution. The shape parameter is half the degrees of freedom of the chi-squared distribution that would apply in the unit-variance case (so 1 here), and the scale parameter corresponds to the square of the time-domain scaling (by linearity the variate yf scales like data, so np.abs(yf)**2 scales like the square of data).
We can validate this by plotting the histogram of the data against the probability density function:
from scipy.stats import gamma
yf = rfft(data)
spectrum = np.abs(yf)**2/len(data)
plt.figure(figsize=(5,5))
plt.hist(spectrum, bins=100, density=True, label='data')
z = np.linspace(0, np.max(spectrum), 100)
plt.plot(z, gamma.pdf(z, 1, scale=1.0/12), 'k', label=r'$\Gamma(1,{:.3f})$'.format(1.0/12))
plt.legend()
As you can see the values are in pretty good agreement:
Going back to the spectrum plot:
# recompute for the noise-only data
n = len(data)
var = np.var(data)
### degrees of freedom
phi = 2
### value of chi-squared
chi_val_95 = chi2.isf(q=0.05/2, df=phi)  # /2 for two-sided test
### normalization of power spectrum with 1/n
plt.figure(figsize=(5,5))
plt.plot(xf, np.abs(yf)**2/n, color='k')
# the following two lines should overlap
plt.axhline(y=var*(chi_val_95/phi), color='r', linestyle='--')
plt.axhline(y=gamma.isf(q=0.05/2, a=1, scale=var), color='b')
Just changing the data to use a uniform distribution in the [-0.5,0.5] range (with data = np.random.rand(500) - 0.5) gives an almost identical plot, with the confidence level remaining unchanged.
Frequency domain distribution of signal with noise
To get a single threshold value corresponding to a 95% confidence interval within which the noise part would fall if you could separate it from the data containing a sinusoidal component and noise (or, stated otherwise, the 95% confidence interval of the null hypothesis that the data is white noise), you need the variance of the noise. While trying to estimate this variance you may quickly realize that the sinusoid contributes a non-negligible portion of the overall data's variance. To remove this contribution we can take advantage of the fact that sinusoidal signals are more readily separated in the frequency domain.
So we could simply discard the x% largest values of the spectrum, under the assumption that those are mostly contributed by the spikes of the sinusoidal component in the frequency domain. Note that the 95th-percentile choice below for the outliers is somewhat arbitrary:
# remove outliers
threshold = np.percentile(np.abs(yf)**2, 95)
filtered = [x for x in np.abs(yf)**2 if x <= threshold]
Then we can get the time-domain variance using Parseval's theorem:
# estimate variance
# In the time domain: variance ~ np.sum(data**2)/len(data)
# In the frequency domain, Parseval's theorem gives np.sum(data**2)/len(data) ~ np.mean(np.abs(yf)**2)/len(data)
var = np.mean(filtered)/len(data)
Note that due to the dynamic range of values across the spectrum, you may prefer to visualize the results on a logarithmic scale:
plt.figure(figsize=(5,5))
plt.plot(xf,10*np.log10(np.abs(yf)**2/n), color='k')
plt.axhline(y=10*np.log10(gamma.isf(q=0.05/2, a=1, scale=var)),color='r',linestyle='--')
If, on the other hand, you are trying to obtain a frequency-dependent 95% confidence interval, then you need to consider the contribution of the sinusoidal component at each frequency. For the sake of simplicity we will assume here that the amplitude of the sinusoidal component and the variance of the noise are known (otherwise we'd first need to estimate them). In this case the distribution gets shifted by the sinusoidal component's contribution:
signal = np.sin(20*np.pi*x)
data = signal + np.random.rand(500) - 0.5
Sf = rfft(signal) # Assuming perfect knowledge of the sinusoidal component
yf = rfft(data)
noiseVar = 1.0/12 # Assuming perfect knowledge of the noise variance
threshold95 = np.abs(Sf)**2/n + gamma.isf(q=0.05/2, a=1, scale=noiseVar)
plt.figure(figsize=(5,5))
plt.plot(xf, 10*np.log10(np.abs(yf)**2/n), color='k')
plt.plot(xf, 10*np.log10(threshold95), color='r',linestyle='--')
Finally, while I kept the final plots in squared-amplitude units, nothing prevents you from taking the square root and viewing the corresponding thresholds in amplitude units.
Edit: I've used a gamma(1,s) distribution, which is asymptotically a good fit for data with a sufficient number of samples n. For really small data sizes the distribution more closely matches a gamma(0.5*(n/(n//2+1)),s) (due to the DC and Nyquist coefficients being purely real, thus having 1 degree of freedom, unlike all the other coefficients).
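A quick way to see why those two bins are special (a sketch): for real input the DC and Nyquist coefficients returned by rfft have zero imaginary part, so each carries only one Gaussian degree of freedom:
import numpy as np
x = np.random.randn(16)          # even length, so a Nyquist bin exists
X = np.fft.rfft(x)
print(X[0].imag, X[-1].imag)     # both are (numerically) zero for real input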

Related

Area under the peak of a FFT in Python

I'm trying to do some tests before I proceed analyzing some real dataset via FFT, and I've found the following problem.
First, I create a signal as the sum of two cosines and then use rfft to do the transformation (since the signal has only real values):
import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import rfft, rfftfreq
# Number of sample points
N = 800
# Sample spacing
T = 1.0 / 800.0
x = np.linspace(0.0, N*T, N)
y = 0.5*np.cos(10*2*np.pi*x) + 0.5*np.cos(200*2*np.pi*x)
# FFT
yf = rfft(y)
xf = rfftfreq(N, T)
fig, ax = plt.subplots(1,2,figsize=(15,5))
ax[0].plot(x,y)
ax[1].plot(xf, 2.0/N*np.abs(yf))
As can be seen from the definition of the signal, I have two oscillations with amplitude 0.5 and frequencies 10 and 200. Now, I would expect the FFT spectrum to be something like two deltas at those frequencies, but apparently increasing the frequency broadens the peaks:
From the first peak it can be inferred that the amplitude is 0.5, but not from the second. I've tried to obtain the area under the peak using np.trapz and use that as an estimate of the amplitude, but as it is close to a Dirac delta it's very sensitive to the interval I choose. My problem is that I need the amplitude to be as exact as possible for my data analysis.
EDIT: As it seems to be related to the number of points, I decided to increase (now that I can) the sample frequency. This seems to solve the problem, as can be seen in the figure:
However, it still seems strange that for a certain number of points and sample frequency, the high-frequency peaks broaden...
It is not strange; you have leakage across the frequency bins. When you discretize the signal (the sampling needed for the Fourier transform), frequency bins are created: frequency intervals over which the amplitude is calculated. Each bin has a width given by sample_rate / num_points, so the fewer the bins, the harder it is to assign precise amplitudes to each frequency. Other considerations also constrain the choice of sampling rate, such as the Shannon-Nyquist theorem to prevent aliasing: https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem. Depending on the problem, custom sampling rates are sometimes used, e.g. when dealing with audio a sampling rate of 44,100 Hz is widely used, because it is based on the limits of human hearing. So it also depends on the nature of the data you want to analyze, as you wrote. Anyway, since this question also has theoretical value, you can check https://dsp.stackexchange.com for some useful info.
I would comment on George's answer, but I cannot yet.
Maybe a starting point for your research could be the properties of the Discrete Fourier Transform.
The signal in the time domain is actually the cosines multiplied by a box window, which transforms into the frequency domain as the convolution of the deltas with a sinc function. The sinc functions will smear the spectrum.
However, I am not sure we are observing spectral leakage here, since the window fits exactly to the full period of cosines. The discretization of the bins might still play a role here.
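One quick numerical check (a sketch, not part of the original answers): the question's time axis is built with np.linspace including the endpoint, so the sample spacing is N*T/(N-1) rather than T and the frame does not span an exact integer number of periods; comparing it with an np.arange grid shows how much of the broadening this accounts for:
import numpy as np
from scipy.fft import rfft
N, T = 800, 1.0 / 800.0
x_lin = np.linspace(0.0, N*T, N)   # the question's grid: spacing N*T/(N-1)
x_arange = np.arange(N) * T        # spacing exactly T, frame spans exactly N*T seconds
for x in (x_lin, x_arange):
    y = 0.5*np.cos(10*2*np.pi*x) + 0.5*np.cos(200*2*np.pi*x)
    amp = 2.0/N*np.abs(rfft(y))
    print(np.sort(amp)[-2:])       # both peak amplitudes approach 0.5 only on the second grid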

FFT for 3d sensor signal

I have a 3D array of accelerometer signal data sampled at 50 Hz, meaning that the time step is 1/50 = 0.02 s. My goal is to compute the main frequency of this sensor using NumPy or SciPy. My question is: should I compute the frequency of each column separately, use a multidimensional FFT, or collapse the data into a single vector and then compute the FFT?
I used the following function to compute the main frequency.
from scipy import fftpack
import numpy as np

def fourier(signal, timestep):
    data = signal - np.mean(signal)
    N = len(data) // 2  # we only need half of the spectrum
    freq = fftpack.fftfreq(len(data), d=timestep)[:N]
    fft = fftpack.fft(data)[:N]
    amp = np.abs(fft) / N
    order = np.argsort(amp)[::-1]  # sort by decreasing amplitude
    return freq[order][0]
A 3D array of accelerometer sensors produces an array of 5 dimensions: the space coordinates, time and the components of the acceleration.
Taking the DFT over the time dimension corresponds to analysing sensors one at a time: each sensor would produce a main frequency, likely slightly different from one sensor to another, as if the sensors were uncoupled.
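As a concrete illustration of this per-sensor approach (a sketch, not part of the original answer, applying the question's fourier function to fabricated data):
import numpy as np
rng = np.random.default_rng(0)
timestep = 1.0/50.0
t = np.arange(1024) * timestep
# three hypothetical sensor channels, each a slightly noisy sinusoid
sensors = np.stack([np.sin(2*np.pi*f*t) + 0.1*rng.standard_normal(t.size)
                    for f in (9.5, 13.7, 22.3)])
print([fourier(s, timestep) for s in sensors])   # one main frequency per channel, close to 9.5, 13.7 and 22.3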
As an alternative, let's think about taking the DFT over both the spatial coordinates and time. It corresponds to writing the compound signal as a sum of sinusoidal plane waves:
where N is a scaling factor obtained by multiplying the number of spatial points by the number of time samples. In what follows, I'll drop this global scaling, which is independent of x, y, z, t, k_x, k_y, k_z and w.
At this point, modeling the physics generating this acceleration would be a significant asset. Indeed, using this DFT makes little sense if the phenomenon is dispersive. Nevertheless, diffusion, elasticity or acoustics in a uniform material are non-dispersive: each frequency lives independently of the others. Furthermore, knowing the physics is useful, as an energy can be defined. For instance, the kinetic energy associated with the wave k_x, k_y, k_z, w writes:
Therefore, the kinetic energy associated with a given frequency w writes:
As a consequence, this reasoning provides a physically based way to merge the pointwise DFTs over time. Indeed, according to Parseval's identity:
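The original formula images did not survive here; based on the surrounding text and the sample code further below, they presumably read along the following lines (a reconstruction, not the author's exact notation):
a(x,y,z,t) = \frac{1}{\tilde{N}} \sum_{k_x,k_y,k_z,\omega} \tilde{a}(k_x,k_y,k_z,\omega)\, e^{i(k_x x + k_y y + k_z z - \omega t)}
E_k(k_x,k_y,k_z,\omega) \propto \frac{|\tilde{a}(k_x,k_y,k_z,\omega)|^2}{\omega^2}, \qquad W(\omega) \propto \sum_{k_x,k_y,k_z} \frac{|\tilde{a}(k_x,k_y,k_z,\omega)|^2}{\omega^2} = \sum_{x,y,z} \frac{|\hat{a}(x,y,z,\omega)|^2}{\omega^2}
where \hat{a} denotes the DFT over time only; the last equality (Parseval over the spatial dimensions, up to the dropped scaling) is what justifies merging the pointwise time-DFTs, exactly as done in the code below.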
Regarding practical considerations, subtracting the average as you did is indeed a good start. If the velocity is to be derived from the acceleration (hence the multiplication by 1/w^2 in the energy), the zero frequency (i.e. the average) must be zeroed out to avoid the occurrence of infinities or NaN.
Moreover, applying a window prior to computing the time DFT could help limit problems related to spectral leakage. The DFT is designed for periodic signals with a period consistent with that of the frame. More specifically, it computes the Fourier transform of a signal built by repeating your frame again and again. As a consequence, artificial discontinuities may appear at the edges, inducing misleading, non-existent frequencies. A window drops to near zero close to the edges of the frame, thus reducing the discontinuities and their effect. Consequently, it could be suggested to apply a window to the space dimensions as well, to keep consistency with the physical plane-wave decomposition. This would result in giving more weight to the accelerometers at the center of the 3D array.
The plane-wave decomposition also requires that the spatial spacing of the sensors be less than about half the expected wavelength; otherwise another phenomenon, called aliasing, occurs. Nevertheless, the power spectrum W(w) might be less sensitive to this issue than the plane-wave decomposition. On the contrary, if the elastic strain energy is computed starting from the acceleration, aliasing could become a real problem, because computing the strain requires derivatives with respect to the space coordinates, i.e. multiplication by k_x, k_y or k_z, and spatial aliasing corresponds to using the wrong k_x.
Once W(w) is computed, the frequency corresponding to each peak can be estimated by computing the mean frequency over the peak, weighted by the power density, as in Why are frequency values rounded in signal using FFT?.
Here is a sample code generating some plane waves with frequencies that are not consistent with the size of the frame (in both time and space). Hanning windows are applied, the kinetic energy is computed, and the frequencies corresponding to each peak are retrieved.
import matplotlib.pyplot as plt
import numpy as np
from scipy import signal

spacingx = 1.
spacingy = 1.
spacingz = 1.
spacingt = 1./50.
Nx = 5
Ny = 5
Nz = 5
Nt = 512
frequency1 = 9.5
frequency2 = 13.7
frequency3 = 22.3
# building a signal
acc = np.zeros((Nx, Ny, Nz, Nt, 3))
for i in range(Nx):
    for j in range(Ny):
        for k in range(Nz):
            for l in range(Nt):
                acc[i,j,k,l,0] = np.sin(i*spacingx + j*spacingy - 2*np.pi*frequency1*l*spacingt)
                acc[i,j,k,l,1] = np.sin(i*spacingx + 1.5*k*spacingz - 2*np.pi*frequency2*l*spacingt)
                acc[i,j,k,l,2] = np.sin(1.5*i*spacingx + k*spacingz - 2*np.pi*frequency3*l*spacingt)
# applying a window both in time and space
hanningx = np.hanning(Nx)
hanningy = np.hanning(Ny)
hanningz = np.hanning(Nz)
hanningt = np.hanning(Nt)
for i in range(Nx):
    hx = hanningx[i]
    for j in range(Ny):
        hy = hanningy[j]
        for k in range(Nz):
            hz = hanningz[k]
            for l in range(Nt):
                ht = hanningt[l]
                acc[i,j,k,l,0] *= hx*hy*hz*ht
                acc[i,j,k,l,1] *= hx*hy*hz*ht
                acc[i,j,k,l,2] *= hx*hy*hz*ht
# computing the DFT over time
acctilde = np.fft.fft(acc, axis=3)
# kinetic energy
print(acctilde.shape[3])
kineticW = np.zeros(acctilde.shape[3])
frequencies = np.fft.fftfreq(Nt, spacingt)
for l in range(Nt):
    oneonomegasquared = 0.
    if l > 0:
        oneonomegasquared = 1.0/(frequencies[l]*frequencies[l])
    for i in range(Nx):
        for j in range(Ny):
            for k in range(Nz):
                kineticW[l] += oneonomegasquared*np.real(np.vdot(acctilde[i,j,k,l,:], acctilde[i,j,k,l,:]))
plt.plot(frequencies[0:acctilde.shape[3]], kineticW, 'k-', label=r'$W(f)$')
plt.legend()
plt.show()
# see https://stackoverflow.com/questions/54714169/why-are-frequency-values-rounded-in-signal-using-fft/54775867#54775867
peaks, _ = signal.find_peaks(kineticW, height=np.max(kineticW)*0.1)
print("potential frequencies index", peaks)
# compute the mean frequency of each peak, weighted by power density
powerpeak = np.zeros(len(peaks))
powerpeaktimefrequency = np.zeros(len(peaks))
for i in range(len(kineticW)):
    dist = 1000
    jnear = 0
    for j in range(len(peaks)):
        if dist > np.abs(i - peaks[j]):
            dist = np.abs(i - peaks[j])
            jnear = j
    powerpeak[jnear] += kineticW[i]
    powerpeaktimefrequency[jnear] += kineticW[i]*frequencies[i]
powerpeaktimefrequency = np.divide(powerpeaktimefrequency, powerpeak)
print('corrected frequencies', powerpeaktimefrequency)

How do I get the frequencies from a signal?

I am looking for a way to obtain the frequency from a signal. Here's an example:
signal = [numpy.sin(numpy.pi * x / 2) for x in range(1000)]
This array represents the samples of a recorded sound (x = milliseconds).
sin(pi*x/2) => 250 Hz
How can we go from the signal (a list of points) to obtaining the frequencies from this array?
Note:
I have read many Stack Overflow threads and watched many YouTube videos. I have yet to find an answer. Please use simple words.
(I am thankful for every answer.)
What you're looking for is known as the Fourier Transform
A bit of background
Let's start with the formal definition:
The Fourier transform (FT) decomposes a function (often a function of time, or a signal) into its constituent frequencies
This is in essence a mathematical operation that, when applied to a signal, gives you an idea of how present each frequency is in the time series. In order to get some intuition behind this, it might be helpful to look at the mathematical definition of the DFT:
X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2 \pi k n / N}, \qquad k = 0, \dots, N-1
where k is swept all the way up to N-1 to calculate all the DFT coefficients.
The first thing to notice is that this definition somewhat resembles the correlation of two functions, in this case x(n) and the complex exponential. While this may seem a little abstract, by using Euler's formula and playing around with the definition, the DFT can be expressed as the correlation with both a sine wave and a cosine wave, which account for the imaginary and real parts of the DFT.
So, keeping in mind that this is in essence computing a correlation, whenever a sine or cosine from the decomposition of the complex exponential matches a component of x(n), there will be a peak in X(k), meaning that such a frequency is present in the signal.
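To make the correlation interpretation concrete, here is a small numerical sketch (not part of the original answer) checking that a single DFT coefficient equals the correlation of x(n) with a cosine minus j times the correlation with a sine:
import numpy as np
N = 64
n = np.arange(N)
x = np.sin(2*np.pi*5*n/N)       # a test signal containing the 5th DFT frequency
k = 5
Xk = np.sum(x*np.cos(2*np.pi*k*n/N)) - 1j*np.sum(x*np.sin(2*np.pi*k*n/N))
print(np.allclose(Xk, np.fft.fft(x)[k]))   # True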
How can we do the same with numpy?
Having given a very brief theoretical background, let's consider an example to see how this can be implemented in Python. Consider the following signal:
import numpy as np
import matplotlib.pyplot as plt
Fs = 150.0  # sampling rate
Ts = 1.0/Fs  # sampling interval
t = np.arange(0,1,Ts)  # time vector
ff = 50  # frequency of the signal
y = np.sin(2*np.pi*ff*t)
plt.plot(t, y)
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.show()
Now, the DFT can be computed by using np.fft.fft, which, as mentioned, tells you the contribution of each frequency to the signal, now in the transformed domain:
n = len(y) # length of the signal
k = np.arange(n)
T = n/Fs
frq = k/T # two sides frequency range
frq = frq[:len(frq)//2] # one side frequency range
Y = np.fft.fft(y)/n # dft and normalization
Y = Y[:n//2]
Now, if we plot the actual spectrum, you will see that we get a peak at the frequency of 50 Hz, which in mathematical terms is a delta function centred at the fundamental frequency of 50 Hz. This can be checked in a table of Fourier transform pairs.
So for the above signal, we would get:
plt.plot(frq,abs(Y)) # plotting the spectrum
plt.xlabel('Freq (Hz)')
plt.ylabel('|Y(freq)|')
plt.show()
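If you only need the dominant frequency rather than the whole plot, a short continuation of the snippet above (an addition, not in the original answer) is to take the bin with the largest magnitude:
peak_freq = frq[np.argmax(np.abs(Y))]
print(peak_freq)   # ~50 Hz for the signal above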

Mean and standard deviation of lognormal distribution do not match analytic values

As part of my research, I measure the mean and standard deviation of draws from a lognormal distribution. Given the parameters of the underlying normal distribution, it should be possible to predict these quantities analytically (as given at https://en.wikipedia.org/wiki/Log-normal_distribution).
However, as can be seen in the plots below, this does not seem to be the case. The first plot shows the mean of the lognormal data against the sigma of the gaussian, while the second plot shows the sigma of the lognormal data against that of the gaussian. Clearly, the "calculated" lines deviate from the "analytic" ones very significantly.
I take the mean of the gaussian distribution to be related to sigma by mu = -0.5*sigma**2, as this ensures that the lognormal field should have a mean of 1. Note that this is motivated by the area of physics I work in: the deviation from the analytic values still occurs if you set mu=0.0, for example.
By copying and pasting the code at the bottom of the question, it should be possible to reproduce the plots below. Any advice as to what might be causing this would be much appreciated!
Mean of lognormal vs sigma of gaussian:
Sigma of lognormal vs sigma of gaussian:
Note, to produce the plots above, I used N=10000, but have put N=1000 in the code below for speed.
import numpy as np
import matplotlib.pyplot as plt

mean_calc = []
sigma_calc = []
mean_analytic = []
sigma_analytic = []
ss = np.linspace(1.0,10.0,46)
N = 1000
for s in ss:
    mu = -0.5*s*s
    ln = np.random.lognormal(mean=mu, sigma=s, size=(N,N))
    mean_calc += [np.average(ln)]
    sigma_calc += [np.std(ln)]
    mean_analytic += [np.exp(mu+0.5*s*s)]
    sigma_analytic += [np.sqrt((np.exp(s**2)-1)*(np.exp(2*mu + s*s)))]

plt.loglog(ss,mean_calc,label='calculated')
plt.loglog(ss,mean_analytic,label='analytic')
plt.legend(); plt.grid()
plt.xlabel(r'$\sigma_G$')
plt.ylabel(r'$\mu_{LN}$')
plt.show()

plt.loglog(ss,sigma_calc,label='calculated')
plt.loglog(ss,sigma_analytic,label='analytic')
plt.legend(); plt.grid()
plt.xlabel(r'$\sigma_G$')
plt.ylabel(r'$\sigma_{LN}$')
plt.show()
TL;DR
The lognormal is a positively skewed, heavy-tailed distribution. When performing float arithmetic operations (such as sum, mean or std) on a sample drawn from a highly skewed distribution, the sampled vector contains values whose magnitudes differ by several orders of magnitude (many decades). This makes the computation inaccurate.
The problem comes from those two lines:
mean_calc += [np.average(ln)]
sigma_calc += [np.std(ln)]
Because ln contains both very small and very large values, whose spread in orders of magnitude exceeds the float precision.
The problem can easily be detected, to warn the user that the computation is inaccurate, using the following predicate:
(max(ln) + min(ln)) <= max(ln)
which is obviously false for strictly positive reals, but can hold under finite-precision arithmetic.
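As a quick demonstration of that predicate (a sketch with an illustrative sigma, not taken from the original answer):
import numpy as np
rng = np.random.default_rng(0)
ln = rng.lognormal(mean=-0.5*8.0**2, sigma=8.0, size=10**6)
print((max(ln) + min(ln)) <= max(ln))   # True: the smallest value is absorbed by the largest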
Modifying your MCVE
If we slightly modify your MCVE to:
from scipy import stats

for s in ss:
    mu = -0.5*s*s
    ln = stats.lognorm(s, scale=np.exp(mu)).rvs(N*N)
    f = stats.lognorm.fit(ln, floc=0)
    mean_calc += [f[2]*np.exp(0.5*s*s)]
    sigma_calc += [np.sqrt((np.exp(f[0]**2)-1)*(np.exp(2*mu + s*s)))]
    mean_analytic += [np.exp(mu+0.5*s*s)]
    sigma_analytic += [np.sqrt((np.exp(s**2)-1)*(np.exp(2*mu + s*s)))]
It gives reasonably correct mean and standard deviation estimates, even for high values of sigma.
The key is that fit uses an MLE algorithm to estimate the parameters. This totally differs from your original approach, which directly computes the mean of the sample.
The fit method returns a tuple (sigma, loc=0, scale=exp(mu)), which are the parameters of the scipy.stats.lognorm object as specified in the documentation.
I think you should investigate how you are estimating mean and standard deviation. The divergence probably comes from this part of your algorithm.
There might be several reasons why it diverges, at least consider:
Biased estimator: are your estimators correct and unbiased? The mean is an unbiased estimator (see next section) but maybe not an efficient one;
Sampled outliers from the pseudo-random generator may not be as extreme as they should be compared to the theoretical distribution: maybe MLE is less sensitive than your estimator. The new MCVE below does not support this hypothesis, but float arithmetic error can explain why your estimators are underestimated;
Float arithmetic error: the new MCVE below highlights that this is part of your problem.
A scientific quote
It seems the naive mean estimator (simply taking the mean), even if unbiased, is inefficient at properly estimating the mean for large sigma (see Qi Tang, Comparison of Different Methods for Estimating Log-normal Means, p. 11):
The naive estimator is easy to calculate and it is unbiased. However,
this estimator can be inefficient when variance is large and sample
size is small.
The thesis reviews several methods for estimating the mean of a lognormal distribution and uses MLE as the reference for comparison. This explains why your method drifts as sigma increases while MLE sticks closer, although it is not time-efficient for large N. A very interesting paper.
Statistical considerations
Recall that:
The lognormal is a heavy-, long-tailed, positively skewed distribution. One consequence: as the shape parameter sigma grows, the asymmetry and skewness grow, and so does the strength of the outliers.
Effect of sample size: as the number of samples drawn from a distribution grows, the expectation of having an outlier increases (and so does its extent).
Building a new MCVE
Let's build a new MCVE to make this clearer. The code below draws samples of different sizes (N ranging between 100 and 10000) from a lognormal distribution whose shape parameter varies (sigma ranging between 0.1 and 10) and whose scale parameter is set to unity.
import warnings
import numpy as np
from scipy import stats

# Make computation reproducible among batches:
np.random.seed(123456789)

# Parameters ranges:
sigmas = np.arange(0.1, 10.1, 0.1)
sizes = np.logspace(2, 5, 21, base=10).astype(int)

# Placeholders:
rv = np.empty((sigmas.size,), dtype=object)
xmean = np.full((3, sigmas.size, sizes.size), np.nan)
xstd = np.full((3, sigmas.size, sizes.size), np.nan)
xextent = np.full((2, sigmas.size, sizes.size), np.nan)
eps = np.finfo(np.float64).eps

# Iterate Shape Parameter:
for (i, s) in enumerate(sigmas):
    # Create Random Variable:
    rv[i] = stats.lognorm(s, loc=0, scale=1)
    # Iterate Sample Size:
    for (j, N) in enumerate(sizes):
        # Draw Samples:
        xs = rv[i].rvs(N)
        # Sample Extent:
        xextent[:,i,j] = [np.min(xs), np.max(xs)]
        # Check (max(x) + min(x)) <= max(x)
        if (xextent[0,i,j] + xextent[1,i,j]) - xextent[1,i,j] < eps:
            warnings.warn("Potential Float Arithmetic Errors: logN(mu=%.2f, sigma=%2f).sample(%d)" % (0, s, N))
        # Generate different Estimators:
        # Fit Parameters using MLE:
        fit = stats.lognorm.fit(xs, floc=0)
        xmean[0,i,j] = fit[2]
        xstd[0,i,j] = fit[0]
        # Naive (Bad Estimators because of Float Arithmetic Error):
        xmean[1,i,j] = np.mean(xs)*np.exp(-0.5*s**2)
        xstd[1,i,j] = np.sqrt(np.log(np.std(xs)**2*np.exp(-s**2)+1))
        # Log-transform:
        xmean[2,i,j] = np.exp(np.mean(np.log(xs)))
        xstd[2,i,j] = np.std(np.log(xs))
Observation: The new MCVE starts to raise warnings when sigma > 4.
MLE as Reference
Estimating shape and scale parameters using MLE performs well:
The two figures above show that:
The error on the estimation grows alongside the shape parameter;
The error on the estimation reduces as the sample size increases (CLT);
Note that MLE also fits the shape parameter well:
Float Arithmetic
It is worth plotting the extent of the drawn samples versus the shape parameter and the sample size:
Or the number of decades between the smallest and largest numbers from the sample:
On my setup:
np.finfo(np.float64).precision # 15
np.finfo(np.float64).eps # 2.220446049250313e-16
This means we have at most 15 significant figures to work with; if the difference in magnitude between two numbers exceeds this, the larger number absorbs the smaller ones.
A basic example: What is the result of 1 + 1e6 if we can only keep four significant figures?
The exact result is 1,000,001.0, but it must be rounded off to 1.000e6. This implies that the result of the operation equals the larger number because of the rounding precision. This is inherent to finite-precision arithmetic.
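The same effect can be shown directly with float64 (a short sketch): with roughly 16 significant digits, adding 1.0 to 1e16 leaves the value unchanged:
import numpy as np
x = np.float64(1e16)
print(x + 1.0 == x)   # True: 1.0 is absorbed by 1e16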
The two previous figures, in conjunction with the statistical considerations above, support your observation that increasing N does not improve the estimation for large values of sigma in your MCVE.
The figures above and below show that when sigma > 3 we no longer have enough significant figures (fewer than 5) to perform valid computations.
Furthermore, we can say that the estimator will be an underestimate, because the largest numbers absorb the smallest and the underestimated sum is then divided by N, biasing the estimator low by default.
When the shape parameter becomes sufficiently large, the computations are strongly biased because of float arithmetic errors.
It means that estimates computed from quantities such as:
np.mean(xs)
np.std(xs)
will suffer a huge float arithmetic error because of the large discrepancy among the values stored in xs. The figures below reproduce your issue:
As stated, the estimates fall short (they do not overshoot), because the high values (a few outliers) in the sampled vector absorb the small values (most of the sampled values).
Logarithmic Transformation
If we apply a logarithmic transformation, we can drastically reduce this phenomenon:
xmean[2,i,j] = np.exp(np.mean(np.log(xs)))
xstd[2,i,j] = np.std(np.log(xs))
Then the naive estimation of the mean is correct and far less affected by float arithmetic error, because all sample values lie within a few decades of one another instead of spanning more orders of magnitude than the float precision allows.
Actually, taking the log-transform returns the same results for the mean and std estimation as MLE, for each N and sigma:
np.allclose(xmean[0,:,:], xmean[2,:,:]) # True
np.allclose(xstd[0,:,:], xstd[2,:,:]) # True
Reference
If you are looking for complete and detailed explanations of these kinds of issues in scientific computing, I recommend the excellent book: N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, Second Edition, 2002.
Bonus
Here is an example of the figure-generation code:
import matplotlib.pyplot as plt
fig, axe = plt.subplots()
idx = slice(None, None, 5)
axe.loglog(sigmas, xmean[0,:,idx])
axe.axhline(1, linestyle=':', color='k')
axe.set_title(r"MLE: $x \sim \log\mathcal{N}(\mu=0,\sigma)$")
axe.set_xlabel(r"Standard Deviation, $\sigma$")
axe.set_ylabel(r"Mean Estimation, $\hat{\mu}$")
axe.set_ylim([0.1,10])
lgd = axe.legend([r"$N = %d$" % s for s in sizes[idx]] + ['Exact'], bbox_to_anchor=(1,1), loc='upper left')
axe.grid(which='both')
fig.savefig('Lognorm_MLE_Emean_Sigma.png', dpi=120, bbox_extra_artists=(lgd,), bbox_inches='tight')

How can I maximize the Poissonian likelihood of a histogram given a fit curve with scipy/numpy?

I have data in a python/numpy/scipy environment that needs to be fit to a probability density function. A way to do this is to create a histogram of the data and then fit a curve to this histogram. The method scipy.optimize.leastsq does this by minimizing the sum of (y - f(x))**2, where (x,y) would in this case be the histogram's bin centers and bin contents.
In statistical terms, this least-squares approach maximizes the likelihood of obtaining that histogram by sampling each bin count from a Gaussian centered around the fit function at that bin's position. You can easily see this: each term (y-f(x))**2 is, up to constants, proportional to -log(gauss(y|mean=f(x))), and the sum is the negative logarithm of the product of the Gaussian likelihoods over all the bins.
That's however not always accurate: for the type of statistical data I'm looking at, each bin count would be the result of a Poissonian process, so I want to maximize (the logarithm of the product over all the bins (x,y) of) poisson(y|mean=f(x)). The Poissonian comes very close to the Gaussian distribution for large values of f(x), but if my histogram doesn't have such good statistics, the difference is relevant and influences the fit.
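Since the question specifically asks about maximizing the Poisson likelihood, here is a minimal sketch (not part of the answer below) that minimizes the Poisson negative log-likelihood of the bin counts with scipy.optimize.minimize; the Gaussian-shaped model function is only a placeholder for whatever curve you actually want to fit:
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=300)
counts, edges = np.histogram(data, bins=30)
centers = 0.5*(edges[:-1] + edges[1:])

def model(x, a, mu, s):
    # placeholder fit curve; replace with your own f(x)
    return a*np.exp(-0.5*((x - mu)/s)**2)

def neg_log_likelihood(params):
    lam = np.clip(model(centers, *params), 1e-12, None)   # Poisson means must stay positive
    # -log L = sum(lam - y*log(lam) + log(y!))
    return np.sum(lam - counts*np.log(lam) + gammaln(counts + 1))

res = minimize(neg_log_likelihood, x0=[counts.max(), np.mean(data), np.std(data)],
               method='Nelder-Mead')
print(res.x)   # fitted amplitude, centre and width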
If I understood correctly, you have data and want to see whether or not some probability distribution fits your data.
Well, if that's the case, you need a QQ-plot. Take a look at this Stack Overflow question and answer. However, that one is about the normal distribution, and you need code for the Poisson distribution. All you need to do is create some random data according to a Poisson random function and test your samples against it. Here you can find an example of a QQ-plot for the Poisson distribution. Here's the code from that website:
#! /usr/bin/env python
from pylab import *
p = poisson(lam=10, size=4000)
m = mean(p)
s = std(p)
n = normal(loc=m, scale=s, size=p.shape)
a = m-4*s
b = m+4*s
figure()
plot(sort(n), sort(p), 'o', color='0.85')
plot([a,b], [a,b], 'k-')
xlim(a,b)
ylim(a,b)
xlabel('Normal Distribution')
ylabel('Poisson Distribution with $\lambda=10$')
grid(True)
savefig('qq.pdf')
show()
