sampling from a clipped normal distribution - python

How do I sample from a normal distribution that is clipped?
I want to sample from N(0, 1), but I want the values to lie in [-1, +1]. I cannot simply apply np.clip, as that would inflate the probability of -1 and +1. I can do stochastic clipping (redrawing any out-of-range values), but then there is no guarantee that the redrawn values will fall inside the range.
import numpy as np

# standard
s = np.random.normal(0, 1, [10, 10])
s = np.clip(s, -1, 1)

# stochastic
for j in range(10):
    edge1 = np.where(s[j] >= 1.)[0]
    edge2 = np.where(s[j] <= -1)[0]
    if edge1.shape[0] > 0:
        rand_el1 = np.random.normal(0, 1, size=(1, edge1.shape[0]))
        s[j, edge1] = rand_el1
    if edge2.shape[0] > 0:
        rand_el2 = np.random.normal(0, 1, size=(1, edge2.shape[0]))
        s[j, edge2] = rand_el2

The scipy library implements the truncated normal distribution as scipy.stats.truncnorm. In your case, you can use sample = truncnorm.rvs(-1, 1, size=sample_size).
For example,
In [55]: import matplotlib.pyplot as plt
In [56]: from scipy.stats import truncnorm, norm
Sample 100000 points from the normal distribution truncated to [-1, 1]:
In [57]: sample = truncnorm.rvs(-1, 1, size=100000)
Make a histogram, and plot the theoretical PDF curve. The PDF can be computed with truncnorm.pdf, or with a scaled version of norm.pdf.
In [58]: _ = plt.hist(sample, bins=51, density=True, facecolor='g', edgecolor='k', alpha=0.4)
In [59]: x = np.linspace(-1, 1, 101)
In [60]: plt.plot(x, truncnorm.pdf(x, -1, 1), 'k', alpha=0.4, linewidth=5)
Out[60]: [<matplotlib.lines.Line2D at 0x11f78c160>]
In [61]: plt.plot(x, norm.pdf(x)/(norm.cdf(1) - norm.cdf(-1)), 'k--', linewidth=1)
Out[61]: [<matplotlib.lines.Line2D at 0x11f779f60>]
Here's the plot:
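One thing to keep in mind: the a and b arguments of truncnorm are expressed in standard deviations relative to loc and scale, so for a non-standard normal the bounds have to be converted first. A small sketch for a hypothetical N(mu, sigma) truncated to [lo, hi] (the values are just for illustration):

import numpy as np
from scipy.stats import truncnorm

mu, sigma = 0.5, 2.0          # hypothetical mean and standard deviation
lo, hi = -1.0, 1.0
a, b = (lo - mu) / sigma, (hi - mu) / sigma
sample = truncnorm.rvs(a, b, loc=mu, scale=sigma, size=100000)
print(sample.min(), sample.max())   # all values lie within [lo, hi]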

I believe that the simplest (though perhaps not the most efficient) way to do this is basic rejection sampling. It simply consists of drawing values from N(0, 1), rejecting those that fall outside the wanted bounds, and keeping the others until they add up to the wanted number of samples.
kept = []
while len(kept) < 1000:
    s = np.random.normal()
    if -1 <= s <= 1:
        kept.append(s)
Here I accumulate the values in a basic list; feel free to use an np.array instead and replace the length condition with one based on the array's dimensions.
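If you'd rather stay vectorized, a minimal sketch of the same idea with NumPy arrays (drawing in batches and keeping only the in-range values; the function name is just for illustration) could look like this:

import numpy as np

def rejection_sample(n, lo=-1.0, hi=1.0):
    kept = np.empty(0)
    while kept.size < n:
        draws = np.random.normal(size=n)
        # keep only the draws that fall inside [lo, hi]
        kept = np.concatenate((kept, draws[(draws >= lo) & (draws <= hi)]))
    return kept[:n]

sample = rejection_sample(1000)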

Do the stochastic clipping iteratively until you don't need it any more. This basically means turning your ifs into a while. You can also take this opportunity to simplify the out-of-bounds condition into a single check rather than a separate check on each side:
s = np.random.normal(0, 1, (10, 10))
while True:
    out_of_bounds = np.abs(s) > 1
    count = np.count_nonzero(out_of_bounds)
    if count:
        s[out_of_bounds] = np.random.normal(0, 1, count)
    else:
        break

Related

row-wise matrix multiplication using numpy

I want to implement a "row wise" matrix multiplication.
More specifically, I want to plot a set of arrows whose directions range over (-pi, pi). The following code is how I implemented it.
import numpy as np
import matplotlib.pyplot as plt

scan_phi = np.linspace(-np.pi*0.5, np.pi*0.5, 450)
points = np.ones((450, 2), dtype=float)
points[..., 0] = 0.0
n_pts = len(points)
sin = np.sin(scan_phi)
cos = np.cos(scan_phi)
rot = np.append(np.expand_dims(np.vstack([cos, -sin]).T, axis=1),
                np.expand_dims(np.vstack([sin, cos]).T, axis=1),
                axis=1)
points_rot = []
for idx, p in enumerate(points):
    points_rot.append(np.matmul(rot[idx], p.T))
points_rot = np.array(points_rot)
sample = points_rot[::10]
ax = plt.axes()
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
for idx, p in enumerate(sample):
    if idx == 0:
        ax.arrow(0, 0, p[0], p[1], head_width=0.05, head_length=0.1, color='red')
    else:
        ax.arrow(0, 0, p[0], p[1], head_width=0.05, head_length=0.1, fc='k', ec='k')
plt.show()
In my code, "rot" ends up being an array of (450, 2, 2) meaning for each arrow, I have created a corresponding rotation matrix to rotate it. I have 450 points stored in "points" (450, 2) that I want to draw arrows with. (Here the arrows are all initialized with [0, 1]. However, it can be initialized with different values which is why I want to have 450 individual points instead of just rotating a single point by 450 different angles)
The way I did it is with a for-loop, i.e. I transform each arrow individually.
points_rot = []
for idx, p in enumerate(points):
    points_rot.append(np.matmul(rot[idx], p.T))
points_rot = np.array(points_rot)
However, I wonder if there's a nicer and easier way to do this entirely in numpy, such as some operation that performs matrix multiplication row-wise. Any ideas would be appreciated, thanks in advance!
This is a nice use-case for np.einsum:
aa = np.random.normal(size=(450, 2, 2))
bb = np.random.normal(size=(450, 2))
cc = np.einsum('ijk,ik->ij', aa, bb)
So that each row of cc is the product of corresponding rows of aa and bb:
np.allclose(aa[3].dot(bb[3]), cc[3]) # returns True
Explanation: the Einstein notation ijk,ik->ij is saying:
cc[i,j] = sum(aa[i,j,k] * bb[i,k] for k in range(2))
I.e., all indices that do not appear on the right-hand side are summed away.
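An equivalent alternative (assuming the same shapes as above) is batched matrix multiplication: add a trailing axis to bb so that each row becomes a column vector, multiply, then drop that axis again:

import numpy as np

aa = np.random.normal(size=(450, 2, 2))
bb = np.random.normal(size=(450, 2))
cc = np.matmul(aa, bb[:, :, None])[:, :, 0]   # same result as the einsum above
# applied to the question's arrays: points_rot = np.matmul(rot, points[:, :, None])[:, :, 0]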

Fitting a gaussian for data which is zero everywhere except for a sharp peak in the centre point

A test code for this type of data:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
x = np.linspace(0,1,20)
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0])
n = np.size(x)
mean = sum(x*y)/n
sigma = np.sqrt(sum(y*(x-mean)**2)/n)
def gaus(x,a,x0,sigma):
    return a*np.exp(-(x-x0)**2/(2*sigma**2))
popt,pcov = curve_fit(gaus,x,y,p0=[max(y),mean,sigma])
plt.plot(x,y,'b+:',label='data')
plt.plot(x,gaus(x,*popt),'ro:',label='fit')
plt.legend()
I need to fit lots of data which is just like the y array given above to a Gaussian distribution.
Using a standard Gaussian fitting routine with scipy.optimize gives this kind of fit:
I have tried many different initial values, but cannot get any kind of fit.
Does anyone have any ideas how I could get this data fitted to a Gaussian?
Thanks
The problem
Your fundamental problem is that you have a severely underdetermined fitting problem. Think about it like this: you have three unknowns but only one datapoint. This is akin to solving for x, y, z when you only have one equation. Because the height of your Gaussian can vary independently of its width, there are infinitely many distributions, all with different widths, that will satisfy the constraints of your fit.
More directly, your a and sigma parameters can both change the maximum height of the distribution, which is pretty much the only thing that matters in terms of achieving a good fit (at least once the distribution is centered and fairly narrow). Thus, the fitting routines in SciPy can't figure out which one to change at any given step.
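To see the degeneracy concretely, here is a small check (a sketch using the same x and y as in the question) showing that two very different widths reproduce the sampled data almost equally well, so the fit has no basis for preferring one over the other:

import numpy as np

x = np.linspace(0, 1, 20)
y = np.zeros(20)
y[10] = 10

def gaus(x, a, x0, sigma):
    return a * np.exp(-(x - x0)**2 / (2 * sigma**2))

for sigma in (0.005, 0.015):
    residual = ((gaus(x, 10, x[10], sigma) - y)**2).sum()
    print(sigma, residual)   # both residuals are essentially zero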
The fix
The simplest way to solve the problem is to lock down one of your parameters. You don't need to change your equation, but you do need to make at least one of a, x0, or sigma a constant. The best choice of parameter to fix is probably x0, since it's trivial to determine the mean/median/mode of your data by just getting the x coordinate of the one datapoint that is non-zero in y. You'll also need to get a little more clever about how you set your initial guesses. Here's what that looks like:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
x = np.linspace(0,1,20)
xdiff = x[1] - x[0]
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0])
# the mean/median/mode all occur at the x coordinate of the one datapoint that is non-zero in y
mean = x[np.argmax(y)]
# sigma should be tiny, since we want a narrow distribution
sigma = xdiff
# the scaling factor should be roughly equal to the "height" of the one datapoint
a = y.max()
def gaus(x,a,sigma):
    return a*np.exp(-(x-mean)**2/(2*sigma**2))
bounds = ((1, .015), (20, 1))
popt,pcov = curve_fit(gaus, x, y, p0=[a, sigma], maxfev=20000, bounds=bounds)
residual = ((gaus(x,*popt) - y)**2).sum()
plt.figure(figsize=(8,6))
plt.plot(x,y,'b+:',label='data')
xdist = np.linspace(x.min(), x.max(), 1000)
plt.plot(xdist,gaus(xdist,*popt),'C0', label='fit distribution')
plt.plot(x,gaus(x,*popt),'ro:',label='fit')
plt.text(.1,6,"residual: %.6e" % residual)
plt.legend()
plt.show()
Output:
The better fix
You don't need a fit to get the kind of Gaussians you want. You can instead use a simple closed form expression to calculate the parameters that you need, as in the fitonegauss function in the code below:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
def gauss(x, a, mean, sigma):
return a*np.exp(-(x - mean)**2/(2*sigma**2))
def fitonegauss(x, y, fwhm=None):
    if fwhm is None:
        # determine full width at half maximum from the spacing between the x points
        fwhm = (x[1] - x[0])

    # the mean/median/mode all occur at the x coordinate of the one datapoint that is non-zero in y
    mean = x[np.argmax(y)]

    # solve for sigma in terms of the desired full width at half maximum
    sigma = fwhm/(2*np.sqrt(2*np.log(2)))

    # max(pdf) == 1/(np.sqrt(2*np.pi)*sigma). Use that to determine a
    a = y.max() #(np.sqrt(2*np.pi)*sigma)
    return a, mean, sigma
N = 20
x = np.linspace(0,1,N)
y = np.zeros(N)
y[N//2] = 10
popt = fitonegauss(x, y)
plt.figure(figsize=(8,6))
plt.plot(x,y,'b+:',label='data')
xdist = np.linspace(x.min(), x.max(), 1000)
plt.plot(xdist,gauss(xdist,*popt),'C0', label='fit distribution')
residual = ((gauss(x,*popt) - y)**2).sum()
plt.plot(x, gauss(x,*popt),'ro:',label='fit')
plt.text(.1,6,"residual: %.6e" % residual)
plt.legend()
plt.show()
Output:
The advantages of this approach are many. It's far more computationally efficient than any fit could be, it will (for the most part) never fail, and it gives you far more control over the actual width of the distribution that you end up with.
The fitonegauss function is set up so that you can directly set the full width at half maximum of the fitted distribution. If you leave it unset, the code will automatically guess it from the spacing of the x data. This seems to produce reasonable results for your application.
Don't use a general "a" parameter, use the proper normal distribution equation instead:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
x = np.linspace(0,1,20)
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0])
n = np.size(x)
mean = sum(x*y)/n
sigma = np.sqrt(sum(y*(x-mean)**2)/n)
def gaus(x, x0, sigma):
    return 1/np.sqrt(2 * np.pi * sigma**2)*np.exp(-(x-x0)**2/(2*sigma**2))
popt,pcov = curve_fit(gaus,x,y,p0=[mean,sigma])
plt.plot(x,y,'b+:',label='data')
plt.plot(x,gaus(x,*popt),'ro:',label='fit')
plt.legend()
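For context on why the normalized form helps: the peak of the normalized Gaussian is 1/(sqrt(2*pi)*sigma), so the height is no longer a free parameter, and a target peak height roughly pins down sigma. A quick sketch of that relation, using the peak value 10 from the example data (only approximate, since neighbouring samples also contribute to the fit):

import numpy as np

peak_height = 10.0                                   # the single non-zero value in y
sigma_implied = 1.0 / (np.sqrt(2 * np.pi) * peak_height)
print(sigma_implied)                                 # roughly 0.04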

I want to calculate slope and intercept of a linear fit using pykalman module

Consider the linear regression of Y on X, where (xi, yi) = (2, 7), (0, 2), (5, 14) for i = 1, 2, 3. The solution is (a, b) = (2.395, 2.079), obtained using the regression function on a hand-held calculator.
I want to calculate the slope and the intercept of a linear fit using
the pykalman module. I'm getting
ValueError: The shape of all parameters is not consistent. Please re-check their values.
I'd really appreciate if someone would help me.
Here is my code :
from pykalman import KalmanFilter
import numpy as np
measurements = np.asarray([[7], [2], [14]])
initial_state_matrix = [[1], [1]]
transition_matrix = [[1, 0], [0, 1]]
observation_covariance_matrix = [[1, 0],[0, 1]]
observation_matrix = [[2, 1], [0, 1], [5, 1]]
kf1 = KalmanFilter(n_dim_state=2, n_dim_obs=6,
                   transition_matrices=transition_matrix,
                   observation_matrices=observation_matrix,
                   initial_state_mean=initial_state_matrix,
                   observation_covariance=observation_covariance_matrix)
kf1 = kf1.em(measurements, n_iter=0)
(smoothed_state_means, smoothed_state_covariances) = kf1.smooth(measurements)
print(smoothed_state_means)
Here's the code snippet:
from pykalman import KalmanFilter
import numpy as np

kf = KalmanFilter()
(filtered_state_means, filtered_state_covariances) = kf.filter_update(filtered_state_mean=[[0], [0]], filtered_state_covariance=[[90000, 0], [0, 90000]], observation=np.asarray([[7], [2], [14]]), transition_matrix=np.asarray([[1, 0], [0, 1]]), observation_matrix=np.asarray([[2, 1], [0, 1], [5, 1]]), observation_covariance=np.asarray([[.1622, 0, 0], [0, .1622, 0], [0, 0, .1622]]))
print(filtered_state_means)
print(filtered_state_covariances)
for x in range(0, 1000):
    (filtered_state_means, filtered_state_covariances) = kf.filter_update(filtered_state_mean=filtered_state_means, filtered_state_covariance=filtered_state_covariances, observation=np.asarray([[7], [2], [14]]), transition_matrix=np.asarray([[1, 0], [0, 1]]), observation_matrix=np.asarray([[2, 1], [0, 1], [5, 1]]), observation_covariance=np.asarray([[.1622, 0, 0], [0, .1622, 0], [0, 0, .1622]]))
print(filtered_state_means)
print(filtered_state_covariances)
filtered_state_covariance was chosen large because we have no idea where our filtered_state_mean lies initially, and the observations are just [[y1],[y2],[y3]]. The observation_matrix is [[x1,1],[x2,1],[x3,1]], so the second element of the state gives our intercept. Imagine it like this: y1 = m*x1 + c, where m and c are the slope and intercept respectively. In our case filtered_state_mean = [[m],[c]]. Notice that the new filtered_state_means is passed as filtered_state_mean to the next kf.filter_update() (in the loop), because we now have an estimate of where the mean lies, together with filtered_state_covariance = filtered_state_covariances. Iterating 1000 times converges the mean to the true value. If you want to know more about the function/method used, see: https://pykalman.github.io/
If the system state does not change between measurements (also called vacuous movement step), then transition_matrix φ = I.
I'm not sure if what I'm going to say now is true or not. So please correct me if I am wrong
The observation_covariance matrix must be of size m x m, where m is the number of observations (in our case 3). The diagonal elements are just the variances, I believe: variance_y1, variance_y2 and variance_y3, and the off-diagonal elements are covariances. For example, element (1,2) of the matrix is the covariance between y1 and y2 (the comma denotes a pair, not a product) and is equal to element (2,1). Similarly for the other elements. Can someone help me include uncertainty in x1, x2 and x3? I mean, how do you implement uncertainties in x in the above code?
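As a sanity check on the converged values, the same slope and intercept can be obtained directly with an ordinary least-squares fit (a quick cross-check, not part of the Kalman approach):

import numpy as np

x = np.array([2, 0, 5])
y = np.array([7, 2, 14])
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)   # approximately 2.395 and 2.079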

Extract histogram modes by detecting the local maxima of a vector with NumPy/SciPy

Is there a way with NumPy/SciPy to keep only the histogram modes when extracting the local maxima (shown as blue dots in the image below)?
These maxima were extracted using scipy.signal.argrelmax, but I only need the values of the two modes and want to ignore the rest of the detected maxima:
# calculate dB positive image
img_db = 10 * np.log10(img)
img_db_pos = img_db + abs(np.min(img_db))
data = img_db_pos.flatten() + 1
# data histogram
n, bins = np.histogram(data, 100, density=True)
# trim data
x = np.linspace(np.min(data), np.max(data), num=100)
# find the indices of the local maxima
ind_max = argrelmax(n)
x_max = x[ind_max]
y_max = n[ind_max]
# plot
plt.hist(data, bins=100, density=True, color='y')
plt.scatter(x_max, y_max, color='b')
plt.show()
Note:
I've managed to use this Smoothing filter to get a curve that matches the histogram (but I don't have the equation of the curve).
This could be achieved by appropriately adjusting the order parameter of argrelmax, i.e. by adjusting how many points on each side are considered when detecting local maxima.
I've used this code to create mock data (you can play around with the values of the different variables to figure out their effect on the generated histogram):
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import argrelextrema, argrelmax
m = 50
s = 10
samples = 50000
peak_start = 30
peak_width = 10
peak_gain = 4
np.random.seed(3)
data = np.random.normal(loc=m, scale=s, size=samples)
bell, edges = np.histogram(data, bins=np.arange(2*(m + 1) - .5), density=True)
x = np.int_(.5*(edges[:-1] + edges[1:]))
bell[peak_start + np.arange(peak_width)] *= np.linspace(1, peak_gain, peak_width)
plt.bar(x, bell)
plt.show()
As shown below, it is important to carefully select the value of order. Indeed, if order is too small you are likely to detect noisy local maxima, whereas if order is too large you might fail to detect some of the modes.
In [185]: argrelmax(bell, order=1)
Out[185]: (array([ 3, 5, 7, 12, 14, 39, 47, 51, 86, 90], dtype=int64),)
In [186]: argrelmax(bell, order=2)
Out[186]: (array([14, 39, 47, 51, 90], dtype=int64),)
In [187]: argrelmax(bell, order=3)
Out[187]: (array([39, 47, 51], dtype=int64),)
In [188]: argrelmax(bell, order=4)
Out[188]: (array([39, 51], dtype=int64),)
In [189]: argrelmax(bell, order=5)
Out[189]: (array([39, 51], dtype=int64),)
In [190]: argrelmax(bell, order=11)
Out[190]: (array([39, 51], dtype=int64),)
In [191]: argrelmax(bell, order=12)
Out[191]: (array([39], dtype=int64),)
These results are strongly dependent on the shape of the histogram (if you change just one of the parameters used to generate data, the range of valid values for order may vary). To make mode detection more robust, I would recommend passing a smoothed histogram to argrelmax rather than the original histogram.
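For example, one way to do that smoothing (a sketch; the sigma value here is just an illustrative choice) is scipy.ndimage.gaussian_filter1d, applied to the binned counts from the mock-data example above before calling argrelmax:

from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmax

# assumes `bell` is the histogram computed in the mock-data example above
smoothed = gaussian_filter1d(bell, sigma=2)
print(argrelmax(smoothed, order=3))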
I guess you want to find the second largest value in y_max. I hope this example helps:
np.random.seed(4) # for reproducibility
data = np.zeros(0)
for i in range(10):
    data = np.hstack((data, np.random.normal(i, 0.25, 100*i)))
# data histogram
n, bins = np.histogram(data, 100, density=True)
# trim data
x = np.linspace(np.min(data), np.max(data), num=100)
# find the indices of the local maxima
ind_max = argrelmax(n)
x_max = x[ind_max]
y_max = n[ind_max]
# find first and second max values in y_max
index_first_max = np.argmax(y_max)
maximum_y = y_max[index_first_max]
second_max_y = max(n for n in y_max if n!=maximum_y)
index_second_max = np.where(y_max == second_max_y)
# plot
plt.hist(data, bins=100, density=True, color='y')
plt.scatter(x_max, y_max, color='b')
plt.scatter(x_max[index_first_max], y_max[index_first_max], color='r')
plt.scatter(x_max[index_second_max], y_max[index_second_max], color='g')
plt.show()
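An alternative to picking out the first and second maxima by hand is to sort the detected maxima by height and keep the top two, for instance like this (a sketch that reuses n, x and ind_max from the code above):

import numpy as np

# indices (into the histogram) of the two tallest local maxima
top_two = ind_max[0][np.argsort(n[ind_max[0]])[-2:]]
print(x[top_two], n[top_two])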

Partial convolution / correlation with numpy [duplicate]

I am learning numpy/scipy, coming from a MATLAB background. The xcorr function in Matlab has an optional argument "maxlag" that limits the lag range from –maxlag to maxlag. This is very useful if you are looking at the cross-correlation between two very long time series but are only interested in the correlation within a certain time range. The performance increases are enormous considering that cross-correlation is incredibly expensive to compute.
In numpy/scipy it seems there are several options for computing cross-correlation. numpy.correlate, numpy.convolve, scipy.signal.fftconvolve. If someone wishes to explain the difference between these, I'd be happy to hear, but mainly what is troubling me is that none of them have a maxlag feature. This means that even if I only want to see correlations between two time series with lags between -100 and +100 ms, for example, it will still calculate the correlation for every lag between -20000 and +20000 ms (which is the length of the time series). This gives a 200x performance hit! Do I have to recode the cross-correlation function by hand to include this feature?
Here are a couple functions to compute auto- and cross-correlation with limited lags. The order of multiplication (and conjugation, in the complex case) was chosen to match the corresponding behavior of numpy.correlate.
import numpy as np
from numpy.lib.stride_tricks import as_strided
def _check_arg(x, xname):
    x = np.asarray(x)
    if x.ndim != 1:
        raise ValueError('%s must be one-dimensional.' % xname)
    return x

def autocorrelation(x, maxlag):
    """
    Autocorrelation with a maximum number of lags.

    `x` must be a one-dimensional numpy array.

    This computes the same result as
        numpy.correlate(x, x, mode='full')[len(x)-1:len(x)+maxlag]

    The return value has length maxlag + 1.
    """
    x = _check_arg(x, 'x')
    p = np.pad(x.conj(), maxlag, mode='constant')
    T = as_strided(p[maxlag:], shape=(maxlag+1, len(x) + maxlag),
                   strides=(-p.strides[0], p.strides[0]))
    return T.dot(p[maxlag:].conj())

def crosscorrelation(x, y, maxlag):
    """
    Cross correlation with a maximum number of lags.

    `x` and `y` must be one-dimensional numpy arrays with the same length.

    This computes the same result as
        numpy.correlate(x, y, mode='full')[len(x)-maxlag-1:len(x)+maxlag]

    The return value has length 2*maxlag + 1.
    """
    x = _check_arg(x, 'x')
    y = _check_arg(y, 'y')
    py = np.pad(y.conj(), 2*maxlag, mode='constant')
    T = as_strided(py[2*maxlag:], shape=(2*maxlag+1, len(y) + 2*maxlag),
                   strides=(-py.strides[0], py.strides[0]))
    px = np.pad(x, maxlag, mode='constant')
    return T.dot(px)
For example,
In [367]: x = np.array([2, 1.5, 0, 0, -1, 3, 2, -0.5])
In [368]: autocorrelation(x, 3)
Out[368]: array([ 20.5, 5. , -3.5, -1. ])
In [369]: np.correlate(x, x, mode='full')[7:11]
Out[369]: array([ 20.5, 5. , -3.5, -1. ])
In [370]: y = np.arange(8)
In [371]: crosscorrelation(x, y, 3)
Out[371]: array([ 5. , 23.5, 32. , 21. , 16. , 12.5, 9. ])
In [372]: np.correlate(x, y, mode='full')[4:11]
Out[372]: array([ 5. , 23.5, 32. , 21. , 16. , 12.5, 9. ])
(It will be nice to have such a feature in numpy itself.)
Until numpy implements the maxlag argument, you can use the function ucorrelate from the pycorrelate package. ucorrelate operates on numpy arrays and has a maxlag keyword. It implements the correlation using a for-loop and optimizes the execution speed with numba.
Example - autocorrelation with 3 time lags:
import numpy as np
import pycorrelate as pyc
x = np.array([2, 1.5, 0, 0, -1, 3, 2, -0.5])
c = pyc.ucorrelate(x, x, maxlag=3)
c
Result:
Out[1]: array([20, 5, -3])
The pycorrelate documentation contains a notebook showing a perfect match between pycorrelate.ucorrelate and numpy.correlate:
matplotlib.pyplot provides a MATLAB-like syntax for computing and plotting cross-correlation, auto-correlation, etc.
You can use xcorr, which allows you to define the maxlags parameter.
import matplotlib.pyplot as plt
import numpy as np
data = np.arange(0,2*np.pi,0.01)
y1 = np.sin(data)
y2 = np.cos(data)
coeff = plt.xcorr(y1,y2,maxlags=10)
print(*coeff)
[-10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0   1   2   3   4   5   6   7   8   9  10]
[-9.81991753e-02 -8.85505028e-02 -7.88613080e-02 -6.91325329e-02
 -5.93651264e-02 -4.95600447e-02 -3.97182508e-02 -2.98407146e-02
 -1.99284126e-02 -9.98232812e-03 -3.45104289e-06  9.98555430e-03
  1.99417667e-02  2.98641953e-02  3.97518558e-02  4.96037706e-02
  5.94189688e-02  6.91964864e-02  7.89353663e-02  8.86346584e-02
  9.82934198e-02]
<matplotlib.collections.LineCollection object at 0x00000000074A9E80>
Line2D(_line0)
@Warren Weckesser's answer is the best, as it leverages numpy to get performance savings (rather than calling corr for each lag). Nonetheless, it returns the cross-product (i.e. the dot product between the inputs at various lags). To get the actual cross-correlation I modified his answer with an optional mode argument, which if set to 'corr' returns the cross-correlation as such:
import numpy as np
from numpy.lib.stride_tricks import as_strided

def crosscorrelation(x, y, maxlag, mode='corr'):
    """
    Cross correlation with a maximum number of lags.

    `x` and `y` must be one-dimensional numpy arrays with the same length.

    This computes the same result as
        numpy.correlate(x, y, mode='full')[len(x)-maxlag-1:len(x)+maxlag]

    The return value has length 2*maxlag + 1.
    """
    py = np.pad(y.conj(), 2*maxlag, mode='constant')
    T = as_strided(py[2*maxlag:], shape=(2*maxlag+1, len(y) + 2*maxlag),
                   strides=(-py.strides[0], py.strides[0]))
    px = np.pad(x, maxlag, mode='constant')
    if mode == 'dot':       # get lagged dot product
        return T.dot(px)
    elif mode == 'corr':    # get lagged Pearson correlation
        return (T.dot(px)/px.size - (T.mean(axis=1)*px.mean())) / \
               (np.std(T, axis=1) * np.std(px))
I encountered the same problem some time ago and paid more attention to the efficiency of the calculation. Referring to the source code of MATLAB's xcorr.m, I made a simple version of my own.
import math
import time

import numpy as np
from scipy import fftpack

def nextpow2(x):
    if x == 0:
        y = 0
    else:
        y = math.ceil(math.log2(x))
    return y

def xcorr(x, y, maxlag):
    m = max(len(x), len(y))
    mx1 = min(maxlag, m - 1)
    ceilLog2 = nextpow2(2 * m - 1)
    m2 = 2 ** ceilLog2
    X = fftpack.fft(x, m2)
    Y = fftpack.fft(y, m2)
    c1 = np.real(fftpack.ifft(X * np.conj(Y)))
    index1 = np.arange(1, mx1 + 1, 1) + (m2 - mx1 - 1)
    index2 = np.arange(1, mx1 + 2, 1) - 1
    c = np.hstack((c1[index1], c1[index2]))
    return c

if __name__ == "__main__":
    s = time.perf_counter()
    a = [1, 2, 3, 4, 5]
    b = [6, 7, 8, 9, 10]
    c = xcorr(a, b, 3)
    e = time.perf_counter()
    print(c)
    print(e - s)
Take the results of a certain run as an example:
[ 29. 56. 90. 130. 110. 86. 59.]
0.0001745000000001884
comparing with MATLAB code:
clear;close all;clc
tic
a = [1, 2, 3, 4, 5];
b = [6, 7, 8, 9, 10];
c = xcorr(a, b, 3)
toc
29.0000 56.0000 90.0000 130.0000 110.0000 86.0000 59.0000
Elapsed time is 0.000279 seconds.
If anyone can give a strict mathematical derivation of this, that would be very helpful.
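As a cross-check (not a derivation), the same numbers fall out of numpy.correlate by slicing the full correlation down to the desired lags, which at least confirms what the FFT version computes:

import numpy as np

a = [1, 2, 3, 4, 5]
b = [6, 7, 8, 9, 10]
maxlag = 3
full = np.correlate(a, b, mode='full')
print(full[len(a) - 1 - maxlag:len(a) + maxlag])
# matches the xcorr output above: 29, 56, 90, 130, 110, 86, 59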
I think I have found a solution, as I was facing the same problem:
If you have two vectors x and y of any length N, and want a cross-correlation with a window of fixed length m, you can do:
x = <some_data>
y = <some_data>
# Trim your variables
x_short = x[window:]
y_short = y[window:]
# do two cross-correlations, lagging x and y respectively
left_xcorr = np.correlate(x, y_short)   # defaults to 'valid'
right_xcorr = np.correlate(x_short, y)  # defaults to 'valid'
# combine the cross-correlations
# note the first value of right_xcorr is the same as the last of left_xcorr
xcorr = np.concatenate((left_xcorr, right_xcorr[1:]))
Remember you might need to normalise the variables if you want a bounded correlation
Here is another answer, sourced from here, that seems marginally faster than np.correlate and has the benefit of returning a normalised correlation:
import numpy as np

def rolling_window(a, window):
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

def xcorr(x, y):
    N = len(x)
    M = len(y)
    meany = np.mean(y)
    stdy = np.std(np.asarray(y))
    tmp = rolling_window(np.asarray(x), M)
    c = np.sum((y - meany) * (tmp - np.reshape(np.mean(tmp, -1), (N - M + 1, 1))), -1) / (M * np.std(tmp, -1) * stdy)
    return c
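A quick usage sketch for the function above (purely illustrative data): correlate a long signal against a shorter template taken from it. The returned vector has length len(x) - len(y) + 1 and should peak at the offset where the template matches:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x[100:200]                       # template taken from offset 100
c = xcorr(x, y)
print(len(c), int(np.argmax(c)))     # 901 and, in all likelihood, 100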
As I answered here, https://stackoverflow.com/a/47897581/5122657
matplotlib.pyplot.xcorr has the maxlags param. It is actually a wrapper of numpy.correlate, so there is no performance saving. Nevertheless it gives exactly the same result as MATLAB's cross-correlation function. Below I edited the code from matplotlib so that it returns only the correlation. The reason is that if we use plt.xcorr as it is, it returns the plot as well; the problem is that if we pass complex data into it, we get a "casting complex to real datatype" warning when matplotlib tries to draw the plot.
import numpy as np
import matplotlib.pyplot as plt
def xcorr(x, y, maxlags=10):
    Nx = len(x)
    if Nx != len(y):
        raise ValueError('x and y must be equal length')
    c = np.correlate(x, y, mode=2)
    if maxlags is None:
        maxlags = Nx - 1
    if maxlags >= Nx or maxlags < 1:
        raise ValueError('maxlags must be None or strictly positive < %d' % Nx)
    c = c[Nx - 1 - maxlags:Nx + maxlags]
    return c
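A short usage sketch with the sine/cosine example from earlier in this thread; the returned vector has length 2*maxlags + 1, and since nothing is drawn, complex input no longer triggers the casting warning:

data = np.arange(0, 2 * np.pi, 0.01)
y1 = np.sin(data)
y2 = np.cos(data)
c = xcorr(y1, y2, maxlags=10)   # uses the xcorr function defined above
print(len(c))                   # 21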
