pandas - cumulative median - python

I was wondering if there is any pandas equivalent to cumsum() or cummax() etc. for median: e.g. cummedian().
So that if I have, for example, this dataframe:
   a
1  5
2  7
3  6
4  4
what I want is something like:
df['a'].cummedian()
which should output:
5
6
6
5.5

You can use expanding.median -
df.a.expanding().median()
1    5.0
2    6.0
3    6.0
4    5.5
Name: a, dtype: float64
Timings
df = pd.DataFrame({'a' : np.arange(1000000)})
%timeit df['a'].apply(cummedian())   # cummedian is the bisect-based closure from the answer further below
1 loop, best of 3: 1.69 s per loop
%timeit df.a.expanding().median()
1 loop, best of 3: 838 ms per loop
The winner is expanding.median by a huge margin. Divakar's method is memory intensive and suffers memory blowout at this size of input, since np.nanmedian over the strided (n, n) windowed view effectively has to process an n-by-n float array.

We could create nan filled subarrays as rows with a strides based function, like so -
def nan_concat_sliding_windows(x):
    n = len(x)
    add_arr = np.full(n-1, np.nan)
    x_ext = np.concatenate((add_arr, x))
    strided = np.lib.stride_tricks.as_strided
    nrows = len(x_ext)-n+1
    s = x_ext.strides[0]
    return strided(x_ext, shape=(nrows,n), strides=(s,s))
Sample run -
In [56]: x
Out[56]: array([5, 6, 7, 4])
In [57]: nan_concat_sliding_windows(x)
Out[57]:
array([[ nan,  nan,  nan,   5.],
       [ nan,  nan,   5.,   6.],
       [ nan,   5.,   6.,   7.],
       [  5.,   6.,   7.,   4.]])
Thus, to get sliding median values for an array x, we would have a vectorized solution, like so-
np.nanmedian(nan_concat_sliding_windows(x), axis=1)
Hence, the final solution would be -
In [54]: df
Out[54]:
   a
1  5
2  7
3  6
4  4
In [55]: pd.Series(np.nanmedian(nan_concat_sliding_windows(df.a.values), axis=1))
Out[55]:
0    5.0
1    6.0
2    6.0
3    5.5
dtype: float64

A faster solution for the specific cumulative median (benchmarked here on a small, 20-row frame)
In [1]: import timeit
In [2]: setup = """import bisect
   ...: import pandas as pd
   ...: def cummedian():
   ...:     l = []
   ...:     info = [0, True]
   ...:     def inner(n):
   ...:         bisect.insort(l, n)
   ...:         info[0] += 1
   ...:         info[1] = not info[1]
   ...:         median = info[0] // 2
   ...:         if info[1]:
   ...:             return (l[median] + l[median - 1]) / 2
   ...:         else:
   ...:             return l[median]
   ...:     return inner
   ...: df = pd.DataFrame({'a': range(20)})"""
In [3]: timeit.timeit("df['cummedian'] = df['a'].apply(cummedian())",setup=setup,number=100000)
Out[3]: 27.11604686321956
In [4]: timeit.timeit("df['expanding'] = df['a'].expanding().median()",setup=setup,number=100000)
Out[4]: 48.457676260100335
In [5]: 48.4576/27.116
Out[5]: 1.7870482372031273
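For reference, applying the same closure to the question's data (a small check of my own; the ints get upcast to float by pandas, hence 5.0 rather than 5):

import bisect
import pandas as pd

def cummedian():
    l = []
    info = [0, True]
    def inner(n):
        bisect.insort(l, n)
        info[0] += 1
        info[1] = not info[1]
        median = info[0] // 2
        if info[1]:
            return (l[median] + l[median - 1]) / 2
        return l[median]
    return inner

df = pd.DataFrame({'a': [5, 7, 6, 4]})
print(df['a'].apply(cummedian()).tolist())  # expected: [5.0, 6.0, 6.0, 5.5]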

Related

Efficient NumPy way of rolling nanstd

I have a 1-D NumPy array where I create a rolling window and then compute the np.nanstd:
import numpy as np
def rolling_window(a, window):
a = np.asarray(a)
shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
strides = a.strides + (a.strides[-1],)
return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
if __name__ == "__main__":
n = 100_000_000
nan_indices = np.random.choice(np.arange(n), size=1000, replace=False)
T = np.random.rand(n)
T[nan_indices] = np.nan
m = 50
np.nanstd(rolling_window(T, m), axis=T.ndim)
However, I noticed that not only is this extremely time consuming, it also uses a lot of memory. Is there a way to improve both the memory and speed performance (Numba is an option)?
NumPy vectorized
After grinding through the math, here's what I ended up with - a few np.convolve calls and some masking to get a vectorized NumPy solution -
def nanstd(a, W):
    k = np.ones(W, dtype=int)
    m = ~np.isnan(a)
    a0 = np.where(m, a, 0)

    n = np.convolve(m, k, 'valid')
    c1 = np.convolve(a0, k, 'valid')
    f2 = c1**2
    p2 = f2/n**2
    f1 = np.convolve((a0**2)*m, k, 'valid') + n*p2
    out = np.sqrt((f1 - (2/n)*f2)/n)
    return out
A complete explanation is at the end of this post.
Pandas equivalent
Here's the equivalent pandas version, which isn't too bad on performance -
import pandas as pd
def pdroll(T, m):
    return pd.Series(T).rolling(m).std(ddof=0).values[m-1:]
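Since the question mentions Numba as an option, here is also a minimal Numba sketch (my addition, not part of the benchmarks below; it assumes numba is installed and the function name is just illustrative). It loops over each window explicitly, so it never materializes the large (n, m) windowed view:

import numpy as np
from numba import njit

@njit
def rolling_nanstd_numba(T, m):
    # one output value per full window, like np.nanstd(rolling_window(T, m), axis=1)
    out = np.empty(len(T) - m + 1)
    for i in range(len(out)):
        s = 0.0
        sq = 0.0
        cnt = 0
        for j in range(i, i + m):
            v = T[j]
            if not np.isnan(v):
                s += v
                sq += v * v
                cnt += 1
        if cnt:
            mean = s / cnt
            var = sq / cnt - mean * mean
            out[i] = np.sqrt(var if var > 0.0 else 0.0)  # guard tiny negative round-off
        else:
            out[i] = np.nan
    return out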
Benchmarking
Using the benchit package (a few benchmarking tools packaged together; disclaimer: I am its author) to benchmark the proposed solutions.
def setup(n):
    nan_indices = np.random.choice(np.arange(n), size=10, replace=False)
    T = np.random.rand(n)
    T[nan_indices] = np.nan
    return T

import benchit
f = {'rolling': lambda T,m: np.nanstd(rolling_window(T, m), axis=T.ndim),
     'pdroll': pdroll, 'conv': nanstd}
in_ = {(n,w):(setup(n),w) for n in 10**np.arange(2,6) for w in [5,10,20,50,80,100]}
t = benchit.timings(f, in_, multivar=True)
t.plot(logx=True, sp_ncols=2, save='timings.png', dpi=200)
The resulting plot (saved as timings.png above) shows that the NumPy one is good on smaller window sizes, while pandas is better on larger ones.
NumPy vectorized : Explanation of the NumPy nanstd version
Basically, np.nanstd is computing std ignoring NaNs. Now, std can be computed based on mean.
Thus, for an array a with no NaNs, it would be :
np.sqrt(np.mean((a-np.mean(a))**2)) # (1)
Let's prove it :
In [43]: a = np.arange(1,6).astype(float)
In [44]: np.nanstd(a)
Out[44]: 1.4142135623730951
In [45]: np.sqrt(np.mean((a-np.mean(a))**2))
Out[45]: 1.4142135623730951
Now, let's say, we have a NaN in it :
In [46]: a[2] = np.nan
In [47]: a
Out[47]: array([ 1., 2., nan, 4., 5.])
The std with nanstd would be :
In [48]: np.nanstd(a)
Out[48]: 1.5811388300841898
Let's figure out the equivalent one based on (1).
Let's start with (a-np.mean(a))**2.
This one : ?
In [72]: (a-np.mean(a))**2
Out[72]: array([nan, nan, nan, nan, nan])
No!
This one, using m = ~np.isnan(a), a0 = np.where(m, a, 0) and n = the count of valid values (all defined in the summary further below)?
In [73]: (a0 - np.sum(a0)/n)**2
Out[73]: array([4., 1., 9., 1., 4.])
No! Because a is :
In [76]: a
Out[76]: array([ 1., 2., nan, 4., 5.])
We need to zero out the term at the NaN position.
This one : ?
In [75]: m*((a0 - np.sum(a0)/n)**2)
Out[75]: array([4., 1., 0., 1., 4.])
Yes!
Then, what's np.mean((a-np.mean(a))**2)? It would be the sum of the values in [75] divided by n :
In [77]: np.sum(m*((a0-np.sum(a0)/n)**2))/n
Out[77]: 2.5
Hence, the final std value :
In [78]: np.sqrt(np.sum(m*((a0-np.sum(a0)/n)**2))/n)
Out[78]: 1.5811388300841898
Summarizing :
In [55]: m = ~np.isnan(a) # (2)
...: a0 = np.where(m, a,0)
...: n = m.sum()
...: out0 = np.sqrt(np.sum(m*((a0-np.sum(a0)/n)**2))/n)
In [56]: out0
Out[56]: 1.5811388300841898
The next part is incorporating the sliding nature. So, we need to do (2) in a sliding fashion. The first two steps remain the same.
Hence, it starts off with :
m = ~np.isnan(a)
a0 = np.where(m, a,0)
But the last two would change, let's see how.
Let's focus on the final step to compute out0. We have :
m*((a0-np.sum(a0)/n)**2)
Then, we compute the summation :
np.sum(m*((a0-np.sum(a0)/n)**2))
We have : (a-b)**2 = a**2 + b**2 - 2*a*b. So, earlier step becomes
np.sum(m*(a0**2 + (np.sum(a0)/n)**2 - 2*a0*np.sum(a0)/n))
Further re-arranging leads to :
np.sum(m*(a0**2 + (np.sum(a0)/n)**2) - np.sum(2*a0*np.sum(a0)/n))
np.sum(m*(a0**2 + (np.sum(a0)/n)**2)) - np.sum(2*a0*np.sum(a0)/n)
np.sum(m*(a0**2 + (np.sum(a0)/n)**2)) - 2*np.sum(a0*np.sum(a0))/n
np.sum(m*(a0**2 + (np.sum(a0)/n)**2)) - (2/n)*np.sum(a0*np.sum(a0)) # (3)
Let's focus on the first two parts for the summation.
Also, let's take a sample case to make things concrete. We will set up two datasets - one for the complete array and another for a windowed version of it.
Setup :
#=========================== 1. Complete setup
a = np.arange(1,10).astype(float)
a[[2,5]] = np.nan
W = 5
k = np.ones(W, dtype=int)
m_comp = ~np.isnan(a)
a0_comp = np.where(m_comp, a,0)
n_comp = np.convolve(m_comp,k,'valid')
c1 = np.convolve(a0_comp, k,'valid')
c2 = np.convolve((a0_comp**2)*m_comp,k,'valid')
#=========================== 2. Windowed setup
a1 = np.arange(1,6).astype(float)
a1[2] = np.nan
m = ~np.isnan(a1)
a0 = np.where(m, a1,0)
n = m.sum()
out0 = np.sqrt(np.sum(m*((a0-np.sum(a0)/n)**2))/n)
From windowed setup, we have :
In [51]: np.sum(m*(a0**2 + (np.sum(a0)/n)**2))
Out[51]: 82.0
In [52]: np.sum(m*(a0**2) + m*((np.sum(a0)/n)**2))
Out[52]: 82.0
In [53]: np.sum(m*(a0**2)) + np.sum(m*((np.sum(a0)/n)**2))
Out[53]: 82.0
First summation part :
In [86]: np.sum(m*(a0**2))
Out[86]: 46.0
# complete setup version :
In [87]: c2
Out[87]: array([ 46., 45., 90., 154., 219.])
Second summation part :
In [54]: np.sum(m*((np.sum(a0)/n)**2))
Out[54]: 36.0
# complete setup version :
In [55]: n_comp*(c1/n_comp)**2
Out[55]: array([ 36.        ,  40.33333333,  85.33333333, 144.        , 210.25      ])
The remaining piece of the puzzle from (3) is :
In [79]: (2/n)*np.sum(a0*np.sum(a0))
Out[79]: 72.0
Let's focus on the meat of it :
In [80]: np.sum(a0*np.sum(a0))
Out[80]: 144.0
On the complete setup, it would correspond to :
In [81]: c1**2
Out[81]: array([144., 121., 256., 576., 841.])
Thus, for the entire remaining piece :
In [82]: (2/n)*np.sum(a0*np.sum(a0))
Out[82]: 72.0
# complete setup version :
In [83]: (2/n_comp)*c1**2
Out[83]: array([ 72.        ,  80.66666667, 170.66666667, 288.        , 420.5       ])
Hence, (3) and its complete version counterpart would be :
In [89]: np.sum(m*(a0**2 + (np.sum(a0)/n)**2)) - (2/n)*np.sum(a0*np.sum(a0))
Out[89]: 10.0
In [90]: c2 + n_comp*(c1/n_comp)**2 - (2/n_comp)*c1**2
Out[90]: array([10. , 4.66666667, 4.66666667, 10. , 8.75 ])
To get the final std values, we need to divide by the count of valid ones per window and then apply sqrt :
In [99]: np.sqrt((c2 + n_comp*(c1/n_comp)**2 - (2/n_comp)*c1**2)/n_comp)
Out[99]: array([1.58113883, 1.24721913, 1.24721913, 1.58113883, 1.47901995])
Hence, with some cleanup, we end up with the final nanstd version.
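As a quick sanity check (my addition, reusing the nanstd and rolling_window functions defined above), the convolution-based version matches the direct rolling computation on the small setup array:

a = np.arange(1, 10).astype(float)
a[[2, 5]] = np.nan
W = 5
print(nanstd(a, W))
print(np.nanstd(rolling_window(a, W), axis=-1))
# both should print approximately [1.58113883 1.24721913 1.24721913 1.58113883 1.47901995]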

Fastest way to calculate difference in all columns

I have a dataframe of all float columns. For example:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.arange(12.0).reshape(3,4), columns=list('ABCD'))
#      A    B     C     D
# 0  0.0  1.0   2.0   3.0
# 1  4.0  5.0   6.0   7.0
# 2  8.0  9.0  10.0  11.0
I would like to calculate column-wise differences for all combinations of columns (e.g., A-B, A-C, B-C, etc.).
E.g., the desired output would be something like:
A_B A_C A_D B_C B_D C_D
-1.0 -2.0 -3.0 -1.0 -2.0 -1.0
-1.0 -2.0 -3.0 -1.0 -2.0 -1.0
-1.0 -2.0 -3.0 -1.0 -2.0 -1.0
Since the number of columns may be large, I'd like to do the calculations as efficiently/quickly as possible. I assume I'll get a big speed bump by converting the dataframe to a numpy array first so I'll do that, but I'm wondering if there are any other strategies that might result in large performance gains. Maybe some matrix algebra or multidimensional data format trick that results in not having to loop through all unique combinations. Any suggestions are welcome. This project is in Python 3.
Listed in this post are two NumPy approaches for performance - one fully vectorized approach and another with one loop.
Approach #1
def numpy_triu1(df):
    a = df.values
    r, c = np.triu_indices(a.shape[1], 1)
    cols = df.columns
    nm = [cols[i]+"_"+cols[j] for i, j in zip(r, c)]
    return pd.DataFrame(a[:,r] - a[:,c], columns=nm)
Sample run -
In [72]: df
Out[72]:
A B C D
0 0.0 1.0 2.0 3.0
1 4.0 5.0 6.0 7.0
2 8.0 9.0 10.0 11.0
In [78]: numpy_triu1(df)
Out[78]:
A_B A_C A_D B_C B_D C_D
0 -1.0 -2.0 -3.0 -1.0 -2.0 -1.0
1 -1.0 -2.0 -3.0 -1.0 -2.0 -1.0
2 -1.0 -2.0 -3.0 -1.0 -2.0 -1.0
Approach #2
If we are okay with an array as output, or with a dataframe without specialized column names, here's another -
def pairwise_col_diffs(a):  # a would be df.values
    n = a.shape[1]
    N = n*(n-1)//2
    idx = np.concatenate(( [0], np.arange(n-1,0,-1).cumsum() ))
    start, stop = idx[:-1], idx[1:]
    out = np.empty((a.shape[0], N), dtype=a.dtype)
    for j,i in enumerate(range(n-1)):
        out[:, start[j]:stop[j]] = a[:,i,None] - a[:,i+1:]
    return out
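If column labels are wanted for this output too, one option (my addition, reusing the naming scheme from Approach #1; pairwise_col_diffs emits the pairs in the same upper-triangular order) is:

a = df.values
r, c = np.triu_indices(a.shape[1], 1)
names = [df.columns[i] + "_" + df.columns[j] for i, j in zip(r, c)]
out_df = pd.DataFrame(pairwise_col_diffs(a), columns=names)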
Runtime test
Since the OP has mentioned that a multi-dim array output would work for them as well, here are the array-based approaches from the other author(s) -
import itertools

# @Allen's soln
def Allen(arr):
    n = arr.shape[1]
    idx = np.asarray(list(itertools.combinations(range(n), 2))).T
    return arr[:,idx[0]] - arr[:,idx[1]]

# @DYZ's soln
def DYZ(arr):
    result = np.concatenate([(arr.T - arr.T[x])[x+1:]
                             for x in range(arr.shape[1])]).T
    return result
The pandas-based solution from @Gerges Dib's post wasn't included, as it came out very slow compared to the others.
Timings -
We will use three dataset sizes (number of columns) - 100, 500 and 1000 :
In [118]: df = pd.DataFrame(np.random.randint(0,9,(3,100)))
...: a = df.values
...:
In [119]: %timeit DYZ(a)
...: %timeit Allen(a)
...: %timeit pairwise_col_diffs(a)
...:
1000 loops, best of 3: 258 µs per loop
1000 loops, best of 3: 1.48 ms per loop
1000 loops, best of 3: 284 µs per loop
In [121]: df = pd.DataFrame(np.random.randint(0,9,(3,500)))
...: a = df.values
...:
In [122]: %timeit DYZ(a)
...: %timeit Allen(a)
...: %timeit pairwise_col_diffs(a)
...:
100 loops, best of 3: 2.56 ms per loop
10 loops, best of 3: 39.9 ms per loop
1000 loops, best of 3: 1.82 ms per loop
In [123]: df = pd.DataFrame(np.random.randint(0,9,(3,1000)))
...: a = df.values
...:
In [124]: %timeit DYZ(a)
...: %timeit Allen(a)
...: %timeit pairwise_col_diffs(a)
...:
100 loops, best of 3: 8.61 ms per loop
10 loops, best of 3: 167 ms per loop
100 loops, best of 3: 5.09 ms per loop
I think you can do it with NumPy. Let arr=df.values. First, let's find all two-column combinations:
from itertools import combinations
column_combos = combinations(range(arr.shape[1]), 2)
Now, subtract columns pairwise and convert a list of arrays back to a 2D array:
result = np.array([(arr[:,x[1]] - arr[:,x[0]]) for x in column_combos]).T
#array([[1., 2., 3., 1., 2., 1.],
# [1., 2., 3., 1., 2., 1.],
# [1., 2., 3., 1., 2., 1.]])
Another solution is somewhat (~15%) faster because it subtracts whole 2D arrays rather than columns, and has fewer Python-side iterations:
result = np.concatenate([(arr.T - arr.T[x])[x+1:] for x in range(arr.shape[1])]).T
#array([[ 1., 2., 3., 1., 2., 1.],
# [ 1., 2., 3., 1., 2., 1.],
# [ 1., 2., 3., 1., 2., 1.]])
You can convert the result back to a DataFrame if you want:
columns = list(map(lambda x: x[1]+x[0], combinations(df.columns, 2)))
#['BA', 'CA', 'DA', 'CB', 'DB', 'DC']
pd.DataFrame(result, columns=columns)
# BA CA DA CB DB DC
#0 1.0 2.0 3.0 1.0 2.0 1.0
#1 1.0 2.0 3.0 1.0 2.0 1.0
#2 1.0 2.0 3.0 1.0 2.0 1.0
import itertools
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12.0).reshape(3,4), columns=list('ABCD'))
df_cols = df.columns.tolist()
# build an index array of all the column pairs needed for the subtraction
idx = np.asarray(list(itertools.combinations(range(len(df_cols)), 2))).T
# build a new DF from the pairwise differences, with joined column names
df_new = pd.DataFrame(data=df.values[:,idx[0]] - df.values[:,idx[1]],
                      columns=[''.join(e) for e in itertools.combinations(df_cols, 2)])
df_new
Out[43]:
AB AC AD BC BD CD
0 -1.0 -2.0 -3.0 -1.0 -2.0 -1.0
1 -1.0 -2.0 -3.0 -1.0 -2.0 -1.0
2 -1.0 -2.0 -3.0 -1.0 -2.0 -1.0
I am not sure how fast this is compared to the other possible methods, but here it is:
df = pd.DataFrame(np.arange(12.0).reshape(3,4), columns=list('ABCD'))
# get the columns as a list
cols = list(df.columns)
# define the output dataframe
out = pd.DataFrame()
# loop over the possible shift periods
for period in range(1, df.shape[1]):
    names = [l1 + l2 for l1, l2 in zip(cols, cols[period:])]
    out[names] = df.diff(periods=period, axis=1).dropna(axis=1, how='all')
print(out)
# the column name shows which two columns were subtracted (e.g. AB holds B - A)
AB BC CD AC BD AD
0 1.0 1.0 1.0 2.0 2.0 3.0
1 1.0 1.0 1.0 2.0 2.0 3.0
2 1.0 1.0 1.0 2.0 2.0 3.0
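If the columns should appear in the same combination order as the other answers (AB, AC, AD, ...), the output from the snippet above can simply be reindexed - a small addition of mine, continuing from the out and cols defined above:

import itertools
out = out[[''.join(pair) for pair in itertools.combinations(cols, 2)]]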

How to divide two columns of type `pandas.core.series.Series`?

I need to divide two Series element-wise.
The elements are of type float.
A = [10,20,30]
B = [2,5,5]
result = A/B
I expect
result = [5,4,6]
but get
result = [NaN, NaN, NaN]
This just works with pandas Series as expected:
In [3]: import pandas as pd
In [4]: A = pd.Series([10,20,30])
In [5]: B = pd.Series([2,5,5])
In [6]: A/B
Out[6]:
0    5.0
1    4.0
2    6.0
dtype: float64
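If A and B really are Series and the division still returns all NaN, the usual culprit (a guess, since the question doesn't show how the Series were built) is that their indexes don't align: pandas aligns on index labels before dividing. A minimal illustration:

import pandas as pd

A = pd.Series([10., 20., 30.], index=[0, 1, 2])
B = pd.Series([2., 5., 5.], index=[3, 4, 5])

print(A / B)                            # all NaN: no index labels in common
print(A / B.values)                     # 5.0, 4.0, 6.0: positional, index ignored
print(A.div(B.reset_index(drop=True)))  # 5.0, 4.0, 6.0: realign B's index first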

Number of non-missing values in array? Len(x) excluding missing values?

Is there a function in Python that allows me to count the number of non-missing values in an array?
My data:
df.wealth1[df.wealth < 25000] = df.wealth
df.wealth2[(df.wealth < 50000) & (df.wealth > 25000)] = df.wealth
df.wealth3[(df.wealth < 75000) & (df.wealth > 50000)] = df.wealth
...
id, income, wealth, wealth1, wealth2, ..., wealth9
1, 100000, 20000, 20000, , ...,
2, 60000, 40000, , 40000, ...,
3, 70000, 23000, 23000, , ...,
4, 80000, 75000, , , ..., 75000
...
My current situation:
income_brackets = [(0, 25000), (25000, 50000), (50000, 100000)]
source = {'wealth1': [], 'wealth2': [], ..., 'wealth9': []}
for lower, upper in income_brackets:
    for key in source:
        source[key].append(len(df.query('income > {} and income < {}'.format(lower, upper))[np.logical_not(np.isnan([key]))]))
But this does not work, because np.isnan('wealth1') is invalid. It only works with np.isnan(df.wealth1), but I cannot incorporate that into my for loop. I am pretty new to Python, so perhaps (hopefully) I am missing something obvious.
Any suggestions or questions would be great. Thanks! Cheers
The best way to do this is with the count method of DataFrame objects (in the session below, randn, rand and nan come from numpy/numpy.random, and DataFrame from pandas):
In [18]: data = randn(1000, 3)
In [19]: data
Out[19]:
array([[ 0.1035, 0.9239, 0.3902],
[ 0.2022, -0.1755, -0.4633],
[ 0.0595, -1.3779, -1.1187],
...,
[ 1.3931, 0.4087, 2.348 ],
[ 1.2746, -0.6431, 0.0707],
[-1.1062, 1.3949, 0.3065]])
In [20]: data[rand(len(data)) > 0.5] = nan
In [21]: data
Out[21]:
array([[ 0.1035, 0.9239, 0.3902],
[ 0.2022, -0.1755, -0.4633],
[ nan, nan, nan],
...,
[ 1.3931, 0.4087, 2.348 ],
[ 1.2746, -0.6431, 0.0707],
[-1.1062, 1.3949, 0.3065]])
In [22]: df = DataFrame(data, columns=list('abc'))
In [23]: df.head()
Out[23]:
a b c
0 0.1035 0.9239 0.3902
1 0.2022 -0.1755 -0.4633
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
[5 rows x 3 columns]
In [24]: df.count()
Out[24]:
a 498
b 498
c 498
dtype: int64
Equivalently, you can sum the notnull mask:
In [26]: df.notnull().sum()
Out[26]:
a 498
b 498
c 498
dtype: int64
Like many pandas methods, this also works on Series objects:
In [27]: df.a.count()
Out[27]: 498
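Tying this back to the question's loop, a rough sketch (the income brackets and wealth1..wealth9 column names are taken from the question; the boolean filtering instead of df.query is my choice) could be:

income_brackets = [(0, 25000), (25000, 50000), (50000, 100000)]
source = {'wealth{}'.format(i): [] for i in range(1, 10)}

for lower, upper in income_brackets:
    sub = df[(df.income > lower) & (df.income < upper)]
    for key in source:
        # Series.count() ignores NaN, so this is the number of non-missing values
        source[key].append(sub[key].count())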
Pandas allows you to access columns in the following way too:
np.isnan(df['wealth1'])
By the way, even if this was not the case, you could still do
np.isnan(getattr(df, 'wealth1'))

Numpy cumsum considering NaNs

I am looking for a succinct way to go from:
a = numpy.array([1,4,1,numpy.nan,2,numpy.nan])
to:
b = numpy.array([1,5,6,numpy.nan,8,numpy.nan])
The best I can do currently is:
b = numpy.insert(numpy.cumsum(a[numpy.isfinite(a)]), (numpy.argwhere(numpy.isnan(a)) - numpy.arange(len(numpy.argwhere(numpy.isnan(a))))), numpy.nan)
Is there a shorter way to accomplish the same? What about doing a cumsum along an axis of a 2D array?
Pandas is a library built on top of NumPy. Its Series class has a cumsum method, which preserves the NaNs and is considerably faster than the solution proposed by DSM:
In [15]: a = np.arange(10000.0)
In [16]: a[1] = np.nan
In [17]: %timeit a*0 + np.nan_to_num(a).cumsum()
1000 loops, best of 3: 465 µs per loop
In [18]: s = pd.Series(a)
In [19]: s.cumsum()
Out[19]:
0       0
1     NaN
2       2
3       5
...
9996    49965005
9997    49975002
9998    49985000
9999    49994999
Length: 10000
In [20]: %timeit s.cumsum()
10000 loops, best of 3: 175 µs per loop
How about (for not-too-big arrays):
In [34]: import numpy as np
In [35]: a = np.array([1,4,1,np.nan,2,np.nan])
In [36]: a*0 + np.nan_to_num(a).cumsum()
Out[36]: array([ 1.,  5.,  6., nan,  8., nan])
(The a*0 term is 0 where a is finite and NaN where it isn't, so adding it back re-inserts the NaNs at the right positions.)
Masked arrays are for just this type of situation.
>>> import numpy as np
>>> from numpy import ma
>>> a = np.array([1,4,1,np.nan,2,np.nan])
>>> b = ma.masked_array(a,mask = (np.isnan(a) | np.isinf(a)))
>>> b
masked_array(data = [1.0 4.0 1.0 -- 2.0 --],
             mask = [False False False  True False  True],
             fill_value = 1e+20)
>>> c = b.cumsum()
>>> c
masked_array(data = [1.0 5.0 6.0 -- 8.0 --],
             mask = [False False False  True False  True],
             fill_value = 1e+20)
>>> c.filled(np.nan)
array([ 1., 5., 6., nan, 8., nan])
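For the 2D part of the question, the masked-array route extends directly; a minimal sketch (my addition) of a row-wise cumsum that keeps the NaNs:

import numpy as np
from numpy import ma

a2 = np.array([[1., np.nan, 2.],
               [3., 4., np.nan]])
b2 = ma.masked_invalid(a2).cumsum(axis=1).filled(np.nan)
# array([[ 1., nan,  3.],
#        [ 3.,  7., nan]])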
