Compute stats function on non-overlapping day-wide time window with Pandas - python

Preamble
How can I apply a function to a list with a non-overlapping sliding window? E.g. data = {x_1, x_2, ..., x_n} and we apply f with window size 2 to get {f(x_1, x_2), f(x_3, x_4), ..., f(x_{n-1}, x_n)}.
I understand that I can partition and use map on the partitioned list. But are there more efficient ways to handle this operation, especially for ndarray and dataframe? Something that would be analogous to Mathematica's BlockMap.
Question
The ultimate goal of this is: suppose the dataframe is a time series with values for each hour of the day. How can I apply a function (e.g. mean, variance) for each day, i.e. block-map the function with a non-overlapping window of 24-hour size?
EDIT 1:
Here is some code that builds a pandas dataframe:
import pandas as pd
import numpy as np
dat = np.random.uniform(0,10,40)
xpd = pd.DataFrame(dat)
xpd.rename(columns = {0:'new_name'}, inplace = True)
date_rng = pd.date_range(start='1/1/2018 03:00:00', periods=40, freq='H')
xpd.set_index(date_rng, inplace=True)
How can I calculate the variance for each day from the hourly data and return it as a dataframe?
I tried the below line but it didn't work:
xpd.groupby(by=lambda x: pd.Series.dt.floor(x, freq='d'))
EDIT 2
This worked, problem seems to be solved:
xpd.groupby(by=lambda x: x.floor('d')).var()
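An equivalent approach (a sketch, assuming the xpd dataframe built above) is to resample by calendar day, which avoids the lambda:
xpd.resample('D').var()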

(EDIT: Answered when the question had no edits and was titled: map a function with non-overlapping window on a dataframe or ndarray.)
One way, assuming that n is always even, is:
def pairwise_map(func, items):
    iterators = [iter(items)] * 2
    return map(func, zip(*iterators))
list(pairwise_map(sum, range(10)))
# [1, 5, 9, 13, 17]
This consists of two steps: the separation into groups and the mapping.
A more general version of the group separation can be found in flyingcircus.base.group_by().
(Disclaimer: I am the main author of the package).
While the above works for the general case, if you have a NumPy array arr and the function func() is vectorized, one can simply use:
import numpy as np
arr = np.arange(10)
def func(x, y):
    return x + y
func(arr[::2], arr[1::2])
# array([ 1, 5, 9, 13, 17])
EDIT
This can be generalized to any size, e.g.:
def pairwise_map(func, items, window=2):
    iterators = [iter(items)] * window
    return map(func, zip(*iterators))
list(pairwise_map(sum, range(10), 3))
# [3, 12, 21]
This relies on func() being able to consume the whole window, which map() passes as a single tuple (sum() works here because it accepts any iterable).
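For a function that expects separate positional arguments, a small wrapper can unpack the tuple (a minimal sketch using pairwise_map() from above):
def add(x, y):
    return x + y

list(pairwise_map(lambda pair: add(*pair), range(10)))
# [1, 5, 9, 13, 17]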
Similarly, for NumPy arrays and NumPy-aware functions:
import numpy as np
arr = np.arange(9)
def func(*args):
    return sum(args)
window = 3
func(*(arr[i::window] for i in range(window)))
# array([ 3, 12, 21])
Note that this requires len(arr) % window == 0.
For NumPy functions that support the axis keyword (e.g. np.mean(), np.std(), etc.), one can simply use the following reshaping trick:
import numpy as np
arr = np.arange(56)
window = 8
np.mean(arr.reshape(-1, window), axis=1)
# array([ 3.5, 11.5, 19.5, 27.5, 35.5, 43.5, 51.5])
Note that this also strictly requires len(arr) % window == 0, which can be enforced with e.g. np.concatenate() to pad zeros at the end of the input:
import numpy as np
arr = np.arange(53)
window = 8
remainder = len(arr) % window
padder = np.zeros(window - remainder if remainder else 0, dtype=arr.dtype)
np.mean(np.concatenate((arr, padder)).reshape(-1, window), axis=1)
# array([ 3.5 , 11.5 , 19.5 , 27.5 , 35.5 , 43.5 , 31.25])
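Zero-padding biases the mean of the last block (hence the 31.25 above). A possible workaround (a sketch, not from the original answer) is to pad with NaN and use np.nanmean() so the padded entries are ignored:
import numpy as np
arr = np.arange(53)
window = 8
remainder = len(arr) % window
# Pad with NaN so the padding does not contribute to the block means.
padder = np.full(window - remainder if remainder else 0, np.nan)
np.nanmean(np.concatenate((arr, padder)).reshape(-1, window), axis=1)
# array([ 3.5, 11.5, 19.5, 27.5, 35.5, 43.5, 50. ])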

Related

sum through specific values in an array

I have an array of data-points, for example:
[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
and I need to perform the following sum on the values: y = sum of x_j**alpha * ln(x_j), with alpha = 1/ln(2).
However, the problem is that I need to perform this sum for each starting value i, i.e. summing only from x_i to the end of the array. For example, using the last 3 values in the set the sum would be 3**alpha*ln(3) + 2**alpha*ln(2) + 1**alpha*ln(1), and so on up to all 10 values.
If i run something like:
import numpy as np
x = np.array([10, 9, 8, 7, 6, 5, 4, 3, 2, 1])
alpha = 1/np.log(2)
for i in x:
    y = sum(x**(alpha)*np.log(x))
print(y)
It returns a single value of y = 247.7827060452275, whereas I need an array of values. I think I need to reverse the order of the data to achieve what I want but I'm having trouble visualising the problem (hope I explained it properly) as a whole so any suggestions would be much appreciated.
The following computes all the partial sums of the grand sum in your formula
import numpy as np
# Generate numpy array [1, 10]
x = np.arange(1, 11)
alpha = 1 / np.log(2)
# Compute parts of the sum
parts = x ** alpha * np.log(x)
# Compute all partial sums
part_sums = np.cumsum(parts)
print(part_sums)
You really do not need any explicit loop, or a non-numpy operation (like sum()), here; numpy takes care of all your needs.
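If, as the "last 3 values" example suggests, each sum should instead run from a given element to the end of the array, the same parts array can be cumulated in reverse (a sketch, not part of the original answer):
# Suffix sums: the sum from each element to the end of the array.
suffix_sums = np.cumsum(parts[::-1])[::-1]
print(suffix_sums)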

How to iteratively nest a nested function

I have an array arr_multi_dim which is multi-dimensional. Every time I increase a parameter n, more entries are created in the array results and the array gets larger.
With each increase in n, I need to apply np.concatenate() to the array arr_multi_dim in such a way that more np.concatenate() calls are nested every time n increases.
For eg.,
when n=2:
arr_multi_dim = np.concatenate(np.concatenate(arr_multi_dim, axis=1), axis=1)
when n=3:
arr_multi_dim = np.concatenate(np.concatenate(
np.concatenate(np.concatenate(arr_multi_dim, axis=1), axis=1), axis=1), axis=1)
when n=4:
arr_multi_dim = np.concatenate(np.concatenate(
np.concatenate(np.concatenate(
np.concatenate(np.concatenate(arr_multi_dim, axis=1), axis=1), axis=1), axis=1), axis=1), axis=1)
etc.
where at each increment of n, a pair of np.concatenate() calls (i.e. two) gets added to the expression.
How do I write a function or loop (or something similar) so that, when I specify any value of n, the appropriate np.concatenate() calls are applied?
Many thanks in advance.
Edit:
This is the full code that I have written which uses the above np.concatenate() function.
from itertools import product
from joblib import Parallel, delayed
from functools import reduce
from operator import mul
import numpy as np
lst = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
arr = np.array(lst)
n = 2
def test1(arr, n):
    flat = np.ravel(arr).tolist()
    gen = (list(a) for a in product(flat, repeat=n))
    results = Parallel(n_jobs=-1)(delayed(reduce)(mul, x) for x in gen)
    nrows = arr.shape[0]
    ncols = arr.shape[1]
    arr_multi_dim = np.array(results).reshape((nrows, ncols)*n)
    arr_final = np.concatenate(np.concatenate(arr_multi_dim, axis=1), axis=1)  # need to generalise this
    return arr_final
The above code only works for n=2. I am trying to generalize the np.concatenate part of the code so that it would work for any n as mentioned above.
If I understood you correctly, it's pretty simple:
arr_multi_dim = results
for i in range(n):
    if i < 2:
        arr_multi_dim = np.concatenate(arr_multi_dim, axis=1)
    else:
        arr_multi_dim = np.concatenate(np.concatenate(arr_multi_dim, axis=1), axis=1)
because the first two iterations only add a single concatenation each, while the rest add two.
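Wrapped into a small helper (a sketch of the same loop, usable in place of the hard-coded line in test1()):
def flatten_blocks(arr_multi_dim, n):
    # The first two steps add one np.concatenate() each; every further step adds two.
    for i in range(n):
        if i < 2:
            arr_multi_dim = np.concatenate(arr_multi_dim, axis=1)
        else:
            arr_multi_dim = np.concatenate(np.concatenate(arr_multi_dim, axis=1), axis=1)
    return arr_multi_dim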

Mean value of each element in multiple lists - Python

If I have two lists
a = [2,5,1,9]
b = [4,9,5,10]
How can I find the mean value of each element, so that the resultant list would be:
[3,7,3,9.5]
>>> a = [2,5,1,9]
>>> b = [4,9,5,10]
>>> [(g + h) / 2 for g, h in zip(a, b)]
[3.0, 7.0, 3.0, 9.5]
Referring to the title of your question, you can achieve this simply with:
import numpy as np
multiple_lists = [[2,5,1,9], [4,9,5,10]]
arrays = [np.array(x) for x in multiple_lists]
[np.mean(k) for k in zip(*arrays)]
The above script will handle multiple lists, not just two. If you want to compare the performance of the two approaches, try:
%%time
import random
import statistics
random.seed(33)
multiple_list = []
for seed in random.sample(range(100), 100):
    random.seed(seed)
    multiple_list.append(random.sample(range(100), 100))
result = [statistics.mean(k) for k in zip(*multiple_list)]
or alternatively:
%%time
import random
import numpy as np
random.seed(33)
multiple_list = []
for seed in random.sample(range(100), 100):
    random.seed(seed)
    multiple_list.append(np.array(random.sample(range(100), 100)))
result = [np.mean(k) for k in zip(*multiple_list)]
In my experience, the numpy approach is much faster.
What you want is the mean of two arrays (or vectors in math).
Since Python 3.4, there is a statistics module which provides a mean() function:
statistics.mean(data)
Return the sample arithmetic mean of data, a sequence or iterator of real-valued numbers.
You can use it like this:
import statistics
a = [2, 5, 1, 9]
b = [4, 9, 5, 10]
result = [statistics.mean(k) for k in zip(a, b)]
# -> [3.0, 7.0, 3.0, 9.5]
Notice: this solution can be used for more than two arrays, because zip() can take multiple arguments.
An alternative to using a list and a for loop would be to use a numpy array.
import numpy as np
# an array can perform element wise calculations unlike lists.
a, b = np.array([2,5,1,9]), np.array([4,9,5,10])
mean = (a + b)/2; print(mean)
>>>[ 3. 7. 3. 9.5]
Put the two lists into a numpy array using vstack and then take the mean (using 'tolist' to get back from the numpy array):
import numpy as np
a = [2,5,1,9]
b = [4,9,5,10]
np.mean(np.vstack([a,b]), axis=0).tolist()
[3.0, 7.0, 3.0, 9.5]
Seems you are looking for an element-wise mean value. Setting axis=0 in np.mean is what you need.
>>> import numpy as np
>>> a = [2,5,1,9]
>>> b = [4,9,5,10]
Create a list containing all your lists
>>> a_b = [a,b]
>>> a_b
[[2, 5, 1, 9], [4, 9, 5, 10]]
Use np.mean and set the axis to 0
>>> np.mean(a_b, axis=0)
array([3. , 7. , 3. , 9.5])

Partial convolution / correlation with numpy [duplicate]

I am learning numpy/scipy, coming from a MATLAB background. The xcorr function in Matlab has an optional argument "maxlag" that limits the lag range from –maxlag to maxlag. This is very useful if you are looking at the cross-correlation between two very long time series but are only interested in the correlation within a certain time range. The performance increases are enormous considering that cross-correlation is incredibly expensive to compute.
In numpy/scipy it seems there are several options for computing cross-correlation. numpy.correlate, numpy.convolve, scipy.signal.fftconvolve. If someone wishes to explain the difference between these, I'd be happy to hear, but mainly what is troubling me is that none of them have a maxlag feature. This means that even if I only want to see correlations between two time series with lags between -100 and +100 ms, for example, it will still calculate the correlation for every lag between -20000 and +20000 ms (which is the length of the time series). This gives a 200x performance hit! Do I have to recode the cross-correlation function by hand to include this feature?
Here are a couple functions to compute auto- and cross-correlation with limited lags. The order of multiplication (and conjugation, in the complex case) was chosen to match the corresponding behavior of numpy.correlate.
import numpy as np
from numpy.lib.stride_tricks import as_strided
def _check_arg(x, xname):
    x = np.asarray(x)
    if x.ndim != 1:
        raise ValueError('%s must be one-dimensional.' % xname)
    return x
def autocorrelation(x, maxlag):
    """
    Autocorrelation with a maximum number of lags.
    `x` must be a one-dimensional numpy array.
    This computes the same result as
        numpy.correlate(x, x, mode='full')[len(x)-1:len(x)+maxlag]
    The return value has length maxlag + 1.
    """
    x = _check_arg(x, 'x')
    p = np.pad(x.conj(), maxlag, mode='constant')
    T = as_strided(p[maxlag:], shape=(maxlag+1, len(x) + maxlag),
                   strides=(-p.strides[0], p.strides[0]))
    return T.dot(p[maxlag:].conj())
def crosscorrelation(x, y, maxlag):
    """
    Cross correlation with a maximum number of lags.
    `x` and `y` must be one-dimensional numpy arrays with the same length.
    This computes the same result as
        numpy.correlate(x, y, mode='full')[len(x)-maxlag-1:len(x)+maxlag]
    The return value has length 2*maxlag + 1.
    """
    x = _check_arg(x, 'x')
    y = _check_arg(y, 'y')
    py = np.pad(y.conj(), 2*maxlag, mode='constant')
    T = as_strided(py[2*maxlag:], shape=(2*maxlag+1, len(y) + 2*maxlag),
                   strides=(-py.strides[0], py.strides[0]))
    px = np.pad(x, maxlag, mode='constant')
    return T.dot(px)
For example,
In [367]: x = np.array([2, 1.5, 0, 0, -1, 3, 2, -0.5])
In [368]: autocorrelation(x, 3)
Out[368]: array([ 20.5, 5. , -3.5, -1. ])
In [369]: np.correlate(x, x, mode='full')[7:11]
Out[369]: array([ 20.5, 5. , -3.5, -1. ])
In [370]: y = np.arange(8)
In [371]: crosscorrelation(x, y, 3)
Out[371]: array([ 5. , 23.5, 32. , 21. , 16. , 12.5, 9. ])
In [372]: np.correlate(x, y, mode='full')[4:11]
Out[372]: array([ 5. , 23.5, 32. , 21. , 16. , 12.5, 9. ])
(It would be nice to have such a feature in numpy itself.)
Until numpy implements the maxlag argument, you can use the function ucorrelate from the pycorrelate package. ucorrelate operates on numpy arrays and has a maxlag keyword. It implements the correlation using a for-loop and optimizes the execution speed with numba.
Example - autocorrelation with 3 time lags:
import numpy as np
import pycorrelate as pyc
x = np.array([2, 1.5, 0, 0, -1, 3, 2, -0.5])
c = pyc.ucorrelate(x, x, maxlag=3)
c
Result:
Out[1]: array([20, 5, -3])
The pycorrelate documentation contains a notebook showing a perfect match between pycorrelate.ucorrelate and numpy.correlate.
matplotlib.pyplot provides MATLAB-like syntax for computing and plotting cross-correlation, auto-correlation, etc.
You can use xcorr, which allows you to define the maxlags parameter.
import matplotlib.pyplot as plt
import numpy as np
data = np.arange(0,2*np.pi,0.01)
y1 = np.sin(data)
y2 = np.cos(data)
coeff = plt.xcorr(y1,y2,maxlags=10)
print(*coeff)
[-10  -9  -8  -7  -6  -5  -4  -3  -2  -1   0   1   2   3   4   5   6   7   8   9  10]
[ -9.81991753e-02  -8.85505028e-02  -7.88613080e-02  -6.91325329e-02
  -5.93651264e-02  -4.95600447e-02  -3.97182508e-02  -2.98407146e-02
  -1.99284126e-02  -9.98232812e-03  -3.45104289e-06   9.98555430e-03
   1.99417667e-02   2.98641953e-02   3.97518558e-02   4.96037706e-02
   5.94189688e-02   6.91964864e-02   7.89353663e-02   8.86346584e-02
   9.82934198e-02]
<matplotlib.collections.LineCollection object at 0x00000000074A9E80> Line2D(_line0)
@Warren Weckesser's answer is the best, as it leverages numpy to get performance savings (and does not just call corr for each lag). Nonetheless, it returns the cross-product (i.e. the dot product between the inputs at various lags). To get the actual cross-correlation I modified his answer with an optional mode argument, which, if set to 'corr', returns the Pearson cross-correlation:
def crosscorrelation(x, y, maxlag, mode='corr'):
    """
    Cross correlation with a maximum number of lags.
    `x` and `y` must be one-dimensional numpy arrays with the same length.
    This computes the same result as
        numpy.correlate(x, y, mode='full')[len(x)-maxlag-1:len(x)+maxlag]
    The return value has length 2*maxlag + 1.
    """
    py = np.pad(y.conj(), 2*maxlag, mode='constant')
    T = as_strided(py[2*maxlag:], shape=(2*maxlag+1, len(y) + 2*maxlag),
                   strides=(-py.strides[0], py.strides[0]))
    px = np.pad(x, maxlag, mode='constant')
    if mode == 'dot':       # get lagged dot product
        return T.dot(px)
    elif mode == 'corr':    # get Pearson correlation
        return (T.dot(px)/px.size - (T.mean(axis=1)*px.mean())) / \
               (np.std(T, axis=1) * np.std(px))
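A minimal usage sketch (assuming numpy and as_strided are imported as in the earlier answer, and reusing the same test arrays):
x = np.array([2, 1.5, 0, 0, -1, 3, 2, -0.5])
y = np.arange(8.0)
crosscorrelation(x, y, 3, mode='dot')   # lagged dot products, as in the original answer
crosscorrelation(x, y, 3, mode='corr')  # normalised, Pearson-style values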
I encountered the same problem some time ago and paid more attention to the efficiency of the calculation. Referring to the source code of MATLAB's xcorr.m, I made a simple version of my own.
import numpy as np
from scipy import signal, fftpack
import math
import time

def nextpow2(x):
    if x == 0:
        y = 0
    else:
        y = math.ceil(math.log2(x))
    return y

def xcorr(x, y, maxlag):
    m = max(len(x), len(y))
    mx1 = min(maxlag, m - 1)
    ceilLog2 = nextpow2(2 * m - 1)
    m2 = 2 ** ceilLog2
    X = fftpack.fft(x, m2)
    Y = fftpack.fft(y, m2)
    c1 = np.real(fftpack.ifft(X * np.conj(Y)))
    index1 = np.arange(1, mx1 + 1, 1) + (m2 - mx1 - 1)
    index2 = np.arange(1, mx1 + 2, 1) - 1
    c = np.hstack((c1[index1], c1[index2]))
    return c

if __name__ == "__main__":
    s = time.perf_counter()  # timing; time.clock() is not available in Python 3.8+
    a = [1, 2, 3, 4, 5]
    b = [6, 7, 8, 9, 10]
    c = xcorr(a, b, 3)
    e = time.perf_counter()
    print(c)
    print(e - s)  # elapsed time
Take the results of a certain run as an example:
[ 29. 56. 90. 130. 110. 86. 59.]
0.0001745000000001884
comparing with MATLAB code:
clear;close all;clc
tic
a = [1, 2, 3, 4, 5];
b = [6, 7, 8, 9, 10];
c = xcorr(a, b, 3)
toc
29.0000 56.0000 90.0000 130.0000 110.0000 86.0000 59.0000
Elapsed time is 0.000279 seconds.
If anyone can give a strict mathematical derivation of this, that would be very helpful.
I think I have found a solution, as I was facing the same problem:
If you have two vectors x and y of any length N, and want a cross-correlation with a window of fixed length window, you can do:
x = <some_data>
y = <some_data>
# Trim your variables
x_short = x[window:]
y_short = y[window:]
# do two xcorrelations, lagging x and y respectively
left_xcorr = np.correlate(x, y_short) #defaults to 'valid'
right_xcorr = np.correlate(x_short, y) #defaults to 'valid'
# combine the xcorrelations
# note the first value of right_xcorr is the same as the last of left_xcorr
xcorr = np.concatenate((left_xcorr, right_xcorr[1:]))
Remember you might need to normalise the variables if you want a bounded correlation; one way is sketched below.
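One common normalisation (a sketch, not part of the answer above) is to z-score both series before calling np.correlate(); dividing by the series length then keeps the full-overlap values roughly within [-1, 1]:
x_n = (x - np.mean(x)) / (np.std(x) * len(x))
y_n = (y - np.mean(y)) / np.std(y)
norm_xcorr = np.correlate(x_n, y_n, mode='full')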
Here is another approach, sourced from here, which seems marginally faster than np.correlate and has the benefit of returning a normalised correlation:
def rolling_window(a, window):
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

def xcorr(x, y):
    N = len(x)
    M = len(y)
    meany = np.mean(y)
    stdy = np.std(np.asarray(y))
    tmp = rolling_window(np.asarray(x), M)
    c = np.sum((y - meany) * (tmp - np.reshape(np.mean(tmp, -1), (N - M + 1, 1))), -1) / (M * np.std(tmp, -1) * stdy)
    return c
as I answered here, https://stackoverflow.com/a/47897581/5122657
matplotlib.pyplot.xcorr has the maxlags parameter. It is actually a wrapper around numpy.correlate, so there is no performance saving. Nevertheless, it gives exactly the same result as Matlab's cross-correlation function. Below I edited the code from matplotlib so that it returns only the correlation. The reason is that if we use matplotlib.pyplot.xcorr as it is, it returns the plot as well; the problem is that if we pass complex data as its arguments, we get a "casting complex to real datatype" warning when matplotlib tries to draw the plot.
import numpy as np
import matplotlib.pyplot as plt
def xcorr(x, y, maxlags=10):
    Nx = len(x)
    if Nx != len(y):
        raise ValueError('x and y must be equal length')
    c = np.correlate(x, y, mode=2)
    if maxlags is None:
        maxlags = Nx - 1
    if maxlags >= Nx or maxlags < 1:
        raise ValueError('maxlags must be None or strictly positive < %d' % Nx)
    c = c[Nx - 1 - maxlags:Nx + maxlags]
    return c
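A quick usage sketch (assuming test data similar to the earlier matplotlib answer):
data = np.arange(0, 2 * np.pi, 0.01)
c = xcorr(np.sin(data), np.cos(data), maxlags=10)
print(len(c))  # 21 values, for lags -10 .. 10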

Decrease array size by averaging adjacent values with numpy

I have a large array of thousands of values in numpy. I want to decrease its size by averaging adjacent values.
For example:
a = [2,3,4,8,9,10]
#average down to 2 values here
a = [3,9]
#it averaged 2,3,4 and 8,9,10 together
So, basically, I have n elements in the array, and I want to tell it to average down to X values, and it averages as above.
Is there some way to do that with numpy (already using it for other things, so I'd like to stick with it).
Using reshape and mean, you can average every m adjacent values of a 1D array of size N*m, with N being any positive integer. For example:
import numpy as np
m = 3
a = np.array([2, 3, 4, 8, 9, 10])
b = a.reshape(-1, m).mean(axis=1)
#array([3., 9.])
1) a.reshape(-1, m) will create a 2D view of the array without copying data:
array([[ 2, 3, 4],
[ 8, 9, 10]])
2) taking the mean along the second axis (axis=1) will then calculate the mean value of each row, resulting in:
array([3., 9.])
Try this:
import numpy as np

n_averaged_elements = 3
averaged_array = []
a = np.array([2, 3, 4, 8, 9, 10])
for i in range(0, len(a), n_averaged_elements):
    slice_from_index = i
    slice_to_index = slice_from_index + n_averaged_elements
    averaged_array.append(np.mean(a[slice_from_index:slice_to_index]))
>>>> averaged_array
>>>> [3.0, 9.0]
Looks like a simple non-overlapping moving window average to me, how about:
In [3]:
import numpy as np
a = np.array([2,3,4,8,9,10])
window_sz = 3
a[:len(a)//window_sz*window_sz].reshape(-1,window_sz).mean(1)
# you want to be sure your array can be reshaped properly, hence the [:len(a)//window_sz*window_sz] part
Out[3]:
array([ 3., 9.])
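If you would rather average a partial trailing window than drop it (a sketch, not part of the answer above), np.add.reduceat can handle uneven blocks:
import numpy as np
a = np.array([2, 3, 4, 8, 9, 10, 5])         # length not a multiple of the window
window_sz = 3
starts = np.arange(0, len(a), window_sz)
sums = np.add.reduceat(a, starts)            # block sums: [9, 27, 5]
counts = np.diff(np.append(starts, len(a)))  # block sizes: [3, 3, 1]
means = sums / counts
# array([3., 9., 5.])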
In this example, I presume that a is the 1D numpy array that needs to be averaged. In the method that I give below, we first find the factors of the length of this array a, and then choose an appropriate factor as the step size to average the array with.
Here is the code.
import numpy as np
from functools import reduce
''' Function to find factors of a given number 'n' '''
def factors(n):
    return list(set(reduce(list.__add__,
        ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0))))
a = [2,3,4,8,9,10] #Given array.
'''fac: list of factors of length of a.
In this example, len(a) = 6. So, fac = [1, 2, 3, 6] '''
fac = factors(len(a))
'''step: choose an appropriate step size from the list 'fac'.
In this example, we choose one of the middle numbers in fac
(3). '''
step = fac[int( len(fac)/3 )+1]
'''avg: initialize an empty array. '''
avg = np.array([])
for i in range(0, len(a), step):
    avg = np.append(avg, np.mean(a[i:i+step]))  # append averaged values to `avg`
print(avg)  # Prints the final result
[3. 9.]
