I am interested in computing the power spectrum of a system of particles (~100,000) in 3D space with Python. What I have found so far is a group of functions in Numpy (fft,fftn,..) which compute the discrete Fourier transform, of which the square of the absolute value is the power spectrum. My question is a matter of how my data are being represented - and truthfully may be fairly simple to answer.
The data structure I have is an array with shape (n, 3), n being the number of particles, and each column representing the x, y, or z coordinate of the n particles. The function I believe I should be using is fftn(), which takes the discrete Fourier transform of an n-dimensional array, but the documentation says nothing about the format. How should the data be represented as a data structure to be fed into fftn?
Here is what I've tried so far to test the function:
import numpy as np
import random
import matplotlib.pyplot as plt
DATA = np.zeros((100, 3))
for i in range(len(DATA)):
    DATA[i, 0] = random.uniform(-1, 1)
    DATA[i, 1] = random.uniform(-1, 1)
    DATA[i, 2] = random.uniform(-1, 1)
FFT = np.fft.fftn(DATA)
PS = abs(FFT)**2
plt.plot(PS)
plt.show()
The array entitled DATA is a mock array; ultimately the real thing will be 100,000 by 3 in shape. Judging from the plot this code produces, I think it is giving me three 1D power spectra (one for each column of my data), but really I'd like a power spectrum as a function of radius.
Does anybody have any advice or know of alternative methods/packages to compute the power spectrum? (I'd even settle for the two-point autocorrelation function.)
It doesn't quite work the way you are setting it out...
You need a function, let's call it f(x, y, z), that describes the density of mass in space. In your case, you can consider the galaxies as point masses, so you will have a delta function centered at the location of each galaxy. It is for this function that you can calculate the three-dimensional autocorrelation, from which you can calculate the power spectrum.
If you want to use numpy to do that for you, you are first going to have to discretize your function. A possible mock example would be:
import numpy as np
import matplotlib.pyplot as plt
space = np.zeros((100, 100, 100), dtype=np.uint8)
x, y, z = np.random.randint(100, size=(3, 1000))
np.add.at(space, (x, y, z), 1)  # unlike space[x, y, z] += 1, this also counts repeated coordinates
space_ps = np.abs(np.fft.fftn(space))
space_ps *= space_ps
space_ac = np.fft.ifftn(space_ps).real.round()
space_ac /= space_ac[0, 0, 0]
And now space_ac holds the three-dimensional autocorrelation function of the data set. This is not quite what you are after; to get the one-dimensional correlation function, you have to average the values over spherical shells around the origin:
dist = np.minimum(np.arange(100), np.arange(100, 0, -1))
dist *= dist
dist_3d = np.sqrt(dist[:, None, None] + dist[:, None] + dist)
distances, inv = np.unique(dist_3d, return_inverse=True)
values = np.bincount(inv.ravel(), weights=space_ac.ravel()) / np.bincount(inv.ravel())
plt.plot(distances[1:], values[1:])
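As an aside (my addition, not part of the original answer): the same shell-averaging trick can be applied to space_ps directly, which gives the power spectrum the question asked for as a function of the radial wavenumber (here in grid-index units, since dist_3d measures the wrapped distance between FFT bins):
k_mag, k_inv = np.unique(dist_3d, return_inverse=True)
pk = np.bincount(k_inv.ravel(), weights=space_ps.ravel()) / np.bincount(k_inv.ravel())
plt.plot(k_mag[1:], pk[1:])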
There is another issue with doing things yourself this way: when you compute the power spectrum as above, mathematically it is as if your three-dimensional array wrapped around the borders, i.e. point [99, y, z] is a neighbour of [0, y, z]. So your autocorrelation could show two very distant galaxies as close neighbours. The simplest way to deal with this is to make your array twice as large along every dimension, pad with extra zeros, and then discard the extra data.
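A minimal sketch of that padding approach, reusing space from above (the padded box is simply twice as large along each dimension):
padded = np.zeros((200, 200, 200))
padded[:100, :100, :100] = space           # original density in one corner, zeros elsewhere
padded_ps = np.abs(np.fft.fftn(padded)) ** 2
padded_ac = np.fft.ifftn(padded_ps).real
padded_ac = padded_ac[:100, :100, :100]    # keep only lags up to the original box size
padded_ac /= padded_ac[0, 0, 0]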
Alternatively, you could use scipy.ndimage.correlate with mode='constant' to do all the dirty work for you.
Related
The task is to find a point with coordinates (x, 0) such that the Euclidean distance from it to the most distant point of the original set is minimal.
My idea is to minimize a function that computes that Euclidean distance, like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist
from scipy.optimize import minimize

def function_3(points_x, points_y):
    dots = np.array([points_x, points_y])
    ans = minimize(cdist(dots, points1), x0=0)
    return ans
But it seems like I'm doing something wrong... Can somebody give me some advice?
Solution
Here's a complete working example for fitting points of the form (x, 0):
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import minimize

# set up a test set of 100 points to fit against
n = 100
xyTestset = np.random.rand(n, 2) * 10

def fun(x, xycomp):
    # x is a vector, assumed to be of size 1
    # cdist expects a 2D array, so we reshape (x, 0) into a 1x2 array
    xy = np.array((x[0], 0)).reshape(1, -1)
    return cdist(xy, xycomp).max()

fit = minimize(fun, x0=0, args=xyTestset)
print(fit.x)
which outputs:
[5.06807808]
This means, roughly speaking, that the minimization is finding a point near the middle of the cloud of random test points, as expected. If you wanted to do a 2D fit to points of the form (x, y) instead, you can do:
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import minimize

# set up a test set of 100 points to fit against
n = 100
xyTestset = np.random.rand(n, 2) * 10

def fun(x, xycomp):
    # x is a vector, assumed to be of size 2
    return cdist(x.reshape(1, -1), xycomp).max()

fit = minimize(fun, x0=(0, 0), args=xyTestset)
print(fit.x)
which outputs:
[5.21292828 5.01491085]
which, again, is roughly the middle of the 100 random points in xyTestset, as you'd expect.
Complete explanation
The problem that you're running into is that scipy.optimize.minimize has very specific expectations about the form of its first argument fun. fun is supposed to be a function that takes x as its first argument, where x is a 1D vector of the values to be minimized over. fun can also take additional arguments. These have to be passed into minimize via the args parameter, and their values are constant (i.e., they won't change over the course of the minimization).
Also, be aware that your case of fitting (x, 0) is really a one-dimensional optimization, since only x varies. You still need the full 2D distances inside the objective, though, because the y-offsets of the data points affect which point is farthest, so they can't simply be ignored.
Additionally, be careful with the idea that no minimization is needed here. Minimizing the distance to the farthest point is not the same as minimizing the distance to all points: the former gives the center of the smallest enclosing circle of the data, while the per-coordinate mean gives the centroid. For a roughly uniform cloud of points the two happen to be close, which is why the fits above land near the middle of the test set. If the centroid is all you need, and your points are stored in an Nx2 array xydata, you can compute it directly with:
xydata.mean(axis=0)
I have data as a 2D array and I used gaussian_kde to estimate the data's distribution. Now I want to get the first derivative of the resulting density estimate in order to find its zero crossings. Is it possible to get this from the estimated density? If so, is there any built-in function in Python that can help?
Following the example in the documentation of gaussian_kde, once you have Z (or, more generally, an estimate of your density on an x-axis grid), you can calculate its derivatives using standard numpy functions:
diff = np.gradient(Z)
Note that np.gradient computes central differences. If you would like forward differences you could do something like:
diff = np.r_[Z[1:] - Z[:-1], 0]
To find the zero-crossings you can do:
sdiff = np.sign(diff)
zc = np.where(sdiff[:-1] != sdiff[1:])
You can extend the above to 2D with dy, dx = np.gradient(Z), where Z is a 2D array, and then operate along both the Y and X directions.
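For completeness, a hedged end-to-end sketch in 1D (the data and variable names here are made up for illustration, not taken from the question):
import numpy as np
from scipy.stats import gaussian_kde

# illustrative bimodal sample
np.random.seed(0)
data = np.concatenate([np.random.normal(-2, 0.5, 500),
                       np.random.normal(2, 1.0, 500)])

kde = gaussian_kde(data)
xs = np.linspace(data.min(), data.max(), 1000)
Z = kde(xs)                        # estimated density on a grid

diff = np.gradient(Z, xs)          # first derivative of the estimate
sdiff = np.sign(diff)
zc = np.where(sdiff[:-1] != sdiff[1:])[0]
print(xs[zc])                      # approximate locations of the extrema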
I have a set of points (x, y) as two vectors x and y, for example:
from pylab import *
x = sorted(random(30))
y = random(30)
plot(x,y, 'o-')
Now I would like to smooth this data with a Gaussian and evaluate it only at certain (regularly spaced) points on the x-axis, let's say at:
x_eval = linspace(0,1,11)
I got the tip that this method is called a "Gaussian sum filter", but so far I have not found any implementation in numpy/scipy for that, although it seems like a standard problem at first glance.
As the x values are not equally spaced, I can't use scipy.ndimage.gaussian_filter1d.
Usually this kind of smoothing is done by going to Fourier space and multiplying with the kernel, but I don't really know whether that is possible with irregularly spaced data.
Thanks for any ideas.
This will blow up for very large data sets, but the calculation you are asking for can be done as follows:
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0) # for repeatability
x = np.random.rand(30)
x.sort()
y = np.random.rand(30)
x_eval = np.linspace(0, 1, 11)
sigma = 0.1
delta_x = x_eval[:, None] - x
weights = np.exp(-delta_x*delta_x / (2*sigma*sigma)) / (np.sqrt(2*np.pi) * sigma)
weights /= np.sum(weights, axis=1, keepdims=True)
y_eval = np.dot(weights, y)
plt.plot(x, y, 'bo-')
plt.plot(x_eval, y_eval, 'ro-')
plt.show()
I'll preface this answer by saying that this is more of a DSP question than a programming question...
...that being said, there is a simple two-step solution to your problem.
Step 1: Resample the data
So to illustrate this we can create a random data set with unequal sampling:
import numpy as np
x = np.cumsum(np.random.randint(0,100,100))
y = np.random.normal(0,1,size=100)
This gives an irregularly sampled, noisy data set.
We can resample this data using simple linear interpolation:
nx = np.arange(x.max()) # choose new x axis sampling
ny = np.interp(nx,x,y) # generate y values for each x
This converts our data to a regularly sampled signal.
Step 2: Apply filter
At this stage you can use some of the tools available through scipy to apply a Gaussian filter to the data with a given sigma value:
from scipy.ndimage import gaussian_filter1d
fx = gaussian_filter1d(ny, sigma=100)
Plotting the result against the original data shows the smoothed curve.
The choice of the sigma value determines the width of the filter.
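One caveat (my addition): gaussian_filter1d takes sigma in units of samples of the resampled grid. Since nx was built with np.arange (spacing 1), a smoothing width expressed in the original x-units converts as in this sketch, where sigma_x is just an assumed name:
from scipy.ndimage import gaussian_filter1d

dx = 1.0               # spacing of the resampled grid nx = np.arange(x.max())
sigma_x = 100.0        # desired smoothing width in x-units (assumed value)
fx = gaussian_filter1d(ny, sigma=sigma_x / dx)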
Based on @Jaime's answer I wrote a function that implements this, with some additional documentation and the ability to discard estimates far from the data points.
I think confidence intervals could be obtained on this estimate by bootstrapping, but I haven't done this yet.
import numpy as np

def gaussian_sum_smooth(xdata, ydata, xeval, sigma, null_thresh=0.6):
    """Apply a Gaussian sum filter to data.

    xdata, ydata : array
        Arrays of x- and y-coordinates of data.
        Must be 1d and have the same length.
    xeval : array
        Array of x-coordinates at which to evaluate the smoothed result.
    sigma : float
        Standard deviation of the Gaussian to apply to each data point.
        Larger values yield a smoother curve.
    null_thresh : float
        For evaluation points far from data points, the estimate will be
        based on very little data. If the total weight is below this threshold,
        return np.nan at this location. Zero means always return an estimate.
        The default of 0.6 corresponds to approximately one sigma away
        from the nearest data point.
    """
    # Distance between every combination of xdata and xeval
    # each row corresponds to a value in xeval
    # each col corresponds to a value in xdata
    delta_x = xeval[:, None] - xdata

    # Calculate weight of every value in delta_x using Gaussian
    # Maximum weight is 1.0 where delta_x is 0
    weights = np.exp(-0.5 * ((delta_x / sigma) ** 2))

    # Multiply each weight by every data point, and sum over data points
    smoothed = np.dot(weights, ydata)

    # Nullify the result when the total weight is below threshold
    # This happens at evaluation points far from any data
    # 1-sigma away from a data point has a weight of ~0.6
    nan_mask = weights.sum(1) < null_thresh
    smoothed[nan_mask] = np.nan

    # Normalize by dividing by the total weight at each evaluation point
    # Nullification above avoids a divide-by-zero warning here
    smoothed = smoothed / weights.sum(1)

    return smoothed
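A short usage sketch (the data are made up, mirroring the earlier mock example; sigma=0.1 is an arbitrary choice):
import numpy as np

np.random.seed(0)
xdata = np.sort(np.random.rand(30))
ydata = np.random.rand(30)
# evaluation grid deliberately extends beyond the range of the data
xeval = np.linspace(-0.5, 1.5, 41)

yeval = gaussian_sum_smooth(xdata, ydata, xeval, sigma=0.1)
# yeval is np.nan wherever the total Gaussian weight falls below null_thresh,
# i.e. at evaluation points far from any data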
I want to numerically compute the FFT of a numpy array Y. For testing, I'm using the Gaussian function Y = exp(-x^2). The (symbolic) Fourier transform is Y' = constant * exp(-k^2/4).
import numpy
X = numpy.arange(-100,100)
Y = numpy.exp(-(X/5.0)**2)
The naive approach fails:
from numpy.fft import *
from matplotlib import pyplot

def plotReIm(x, y):
    f = pyplot.figure()
    ax = f.add_subplot(111)
    ax.plot(x, numpy.real(y), 'b', label='R()')
    ax.plot(x, numpy.imag(y), 'r:', label='I()')
    ax.plot(x, numpy.abs(y), 'k--', label='abs()')
    ax.legend()

Y_k = fftshift(fft(Y))
k = fftshift(fftfreq(len(Y)))
plotReIm(k, Y_k)
real(Y_k) jumps between positive and negative values, which corresponds to a jumping phase that is not present in the symbolic result. This is certainly not desirable. (The result is technically correct in the sense that abs(Y_k) gives the expected amplitudes and ifft(Y_k) recovers Y.)
Here, the function fftshift() renders the array k monotonically increasing and changes Y_k accordingly. The pairs zip(k, Y_k) are not changed by applying this operation to both vectors.
This change appears to fix the issue:
Y_k = fftshift(fft(ifftshift(Y)))
k = fftshift(fftfreq(len(Y)))
plotReIm(k,Y_k)
Is this the correct way to employ the fft() function if monotonic Y and Y_k are required?
The reverse operation of the above is:
Yx = fftshift(ifft(ifftshift(Y_k)))
x = fftshift(fftfreq(len(Y_k), k[1] - k[0]))
plotReIm(x,Yx)
For this case, the documentation clearly states that Y_k must be sorted compatibly with the output of fft() and fftfreq(), which we can achieve by applying ifftshift().
Those questions have been bothering me for a long time: Are the output and input arrays of both fft() and ifft() always such that a[0] should contain the zero frequency term, a[1:n/2+1] should contain the positive-frequency terms, and a[n/2+1:] should contain the negative-frequency terms, in order of decreasingly negative frequency [numpy reference], where 'frequency' is the independent variable?
The answer to "Fourier Transform of a Gaussian is not a Gaussian" does not answer my question.
The FFT can be thought of as producing a set of vectors, each with an amplitude and phase. The fft_shift operation changes the reference point for a phase angle of zero from the edge of the FFT aperture to the center of the original input data vector.
The phase (and thus the real component of the complex vector) of the result is sometimes less "jumpy" when this is done, especially if some input function is windowed such that it is discontinuous around the edges of the FFT aperture. Or if the input is symmetric around the center of the FFT aperture, the phase of the FFT result will always be zero after an fft_shift.
An fft_shift can be done by a vector rotate of N/2, or by simply flipping alternating sign bits in the FFT result, which may be more CPU dcache friendly.
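That sign-flip equivalence is easy to check numerically; a small sketch (my own, assuming even N):
import numpy as np

N = 16
a = np.random.rand(N)
k = np.arange(N)
lhs = np.fft.fft(np.fft.ifftshift(a))   # shift the input, then transform
rhs = np.fft.fft(a) * (-1.0) ** k       # flip alternating signs of the plain FFT
print(np.allclose(lhs, rhs))            # True for even N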
The definition for the output of fft (and ifft) is here: http://docs.scipy.org/doc/numpy/reference/routines.fft.html#background-information
This is what the routines compute, no more and no less. Observe that the discrete Fourier transform is rather different from the continuous Fourier transform. For a densely sampled function there is a relation between the two, but the relation also involves phase factors and scaling in addition to fftshift. This is the cause of the oscillations you see in your plot. The necessary phase factor you can work out yourself from the above mathematical formula for the DFT.
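To make the relation concrete, here is a sketch (my own, not from the original answer) of how the DFT can be scaled and reordered to approximate the continuous transform of exp(-x^2), assuming a sampling grid symmetric about x = 0 and the convention F(k) = integral of f(x) exp(-i k x) dx; with that convention the analytic result is sqrt(pi) * exp(-k^2/4):
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft, fftfreq, fftshift, ifftshift

N = 1024
dx = 0.05
x = (np.arange(N) - N // 2) * dx            # grid (approximately) symmetric about x = 0
f = np.exp(-x**2)

# ifftshift puts the sample at x = 0 first (the ordering fft expects);
# the factor dx approximates the integral; fftshift reorders the output
# so that k increases monotonically.
F = fftshift(fft(ifftshift(f))) * dx
k = 2 * np.pi * fftshift(fftfreq(N, d=dx))  # angular frequency axis

plt.plot(k, F.real, label='DFT approximation (real part)')
plt.plot(k, np.sqrt(np.pi) * np.exp(-k**2 / 4), '--', label='analytic sqrt(pi)*exp(-k^2/4)')
plt.xlim(-10, 10)
plt.legend()
plt.show()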
I am trying to use Numpy's fft function; however, when I give the function a simple Gaussian function, the FFT of that Gaussian function is not a Gaussian. It is close, but it is split in half, with each half at either end of the x-axis.
The Gaussian function I'm calculating is
y = exp(-x^2)
Here is my code:
from cmath import *
from numpy import multiply
from numpy.fft import fft
from pylab import plot, show

def frange(min_value, max_value, step):
    """Basically the standard range() function but with float support."""
    value = float(min_value)
    array = []
    while value < float(max_value):
        array.append(value)
        value += float(step)
    return array

N = 256.0  # number of steps
y = []
x = frange(-5, 5, 10/N)

# fill array y with values of the Gaussian function
cache = -multiply(x, x)
for i in cache:
    y.append(exp(i))

Y = fft(y)

# plot the fft of the Gaussian function
plot(x, abs(Y))
show()
The result is not quite right, because the FFT of a Gaussian function should be a Gaussian function itself...
np.fft.fft returns a result in so-called "standard order": (from the docs)
If A = fft(a, n), then A[0] contains the zero-frequency term (the mean of the signal), which is always purely real for real inputs. Then A[1:n/2] contains the positive-frequency terms, and A[n/2+1:] contains the negative-frequency terms, in order of decreasingly negative frequency.
The function np.fft.fftshift rearranges the result into the order most humans expect (and which is good for plotting):
The routine np.fft.fftshift(A) shifts transforms and their frequencies to put the zero-frequency components in the middle...
So using np.fft.fftshift:
import matplotlib.pyplot as plt
import numpy as np
N = 128
x = np.arange(-5, 5, 10./(2 * N))
y = np.exp(-x * x)
y_fft = np.fft.fftshift(np.abs(np.fft.fft(y))) / np.sqrt(len(y))
plt.plot(x,y)
plt.plot(x,y_fft)
plt.show()
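As an aside (not part of the original answer): the second curve above is plotted against x, which is really the spatial axis. If you want a proper frequency axis instead, it can be built with fftfreq using the same sampling step:
freqs = np.fft.fftshift(np.fft.fftfreq(len(y), d=10. / (2 * N)))
plt.plot(freqs, np.fft.fftshift(np.abs(np.fft.fft(y))) / np.sqrt(len(y)))
plt.show()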
Your result is not even close to a Gaussian, not even one split into two halves.
To get the result you expect, you will have to position your own Gaussian with the center at index 0, and the result will also be positioned that way. Try the following code:
from pylab import *
N = 128
x = r_[arange(0, 5, 5./N), arange(-5, 0, 5./N)]
y = exp(-x*x)
y_fft = fft(y) / sqrt(2 * N)
plot(r_[y[N:], y[:N]])
plot(r_[y_fft[N:], y_fft[:N]])
show()
The plot commands split the arrays in two halves and swap them to get a nicer picture.
It is being displayed with the center (i.e. mean) at coefficient index zero. That is why it appears that the right half is on the left, and vice versa.
EDIT: Explore the following code:
import numpy as np
import scipy.signal as sig
import pylab

x = sig.windows.gaussian(2048, 10)
X = np.abs(np.fft.fft(x))
pylab.plot(x)
pylab.plot(X)
pylab.plot(np.r_[X[1024:], X[:1024]])
The last line will plot X starting from the center of the vector, then wrap around to the beginning.
A Fourier transform implicitly repeats indefinitely, as it is a transform of a signal that implicitly repeats indefinitely. Note that when you pass y to be transformed, the x values are not supplied, so in fact the Gaussian that is transformed is one centred on the middle index of the array, i.e. 128 out of 256.
Remember also that a translation of f(x) corresponds to a phase change of its transform F(k).
Following on from Sven Marnach's answer, a simpler version would be this:
from pylab import *
N = 128
x = ifftshift(arange(-5,5,5./N))
y = exp(-x*x)
y_fft = fft(y) / sqrt(2 * N)
plot(fftshift(y))
plot(fftshift(y_fft))
show()
This yields a plot identical to the above one.
The key (and this seems strange to me) is that NumPy's assumed data ordering --- in both frequency and time domains --- is to have the "zero" value first. This is not what I'd expect from other implementations of FFT, such as the FFTW3 libraries in C.
This was slightly fudged in the answers from unutbu and Steve Tjoa above, because they're taking the absolute value of the FFT before plotting it, thus wiping away the phase issues resulting from not using the "standard order" in time.
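To see the "zero value first" ordering concretely, here is a small illustration (my own addition) using fftfreq, which lists the frequency associated with each output index of fft:
import numpy as np

# frequency (in cycles per sample) of each bin of an 8-point FFT,
# in NumPy's native ordering: zero first, then positive, then negative
print(np.fft.fftfreq(8))
# [ 0.     0.125  0.25   0.375 -0.5   -0.375 -0.25  -0.125]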