Calculate "energy" of columns with pandas - python

I am trying to calculate the signal energy of my pandas.DataFrame, following the formula for a discrete-time signal (the sum of the squared sample values per column). I tried apply and applymap, and also reduce, as suggested here: How do I columnwise reduce a pandas dataframe? . But everything I tried ended up applying the operation to each element, not to the whole column.
This is not a signal-processing-specific question; it's just an example of how to apply a "summarizing" (I don't know the right term for this) function to columns.
My workaround was to get the raw numpy.array data and do my calculations there. But I am pretty sure there is a more pandas-idiomatic way to do this (and surely a more numpythonic way).
import pandas as pd
import numpy as np

d = np.array([[2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
              [0, -1, 2, -3, 4, -5, 6, -7, 8, -9],
              [0, 1, -2, 3, -4, 5, -6, 7, -8, 9]]).transpose()
df = pd.DataFrame(d)

energies = []
# a is the same as d
a = df.as_matrix()  # note: deprecated in newer pandas; df.values / df.to_numpy() is equivalent
assert np.array_equal(a, d)
for column in range(a.shape[1]):
    energies.append(sum(a[:, column] ** 2))
print(energies)  # [40, 285, 285]
Thanks in advance!

You could do the following to stay within pandas (the result is a Series indexed by column) -
(df**2).sum(axis=0) # Or (df**2).sum(0)
For performance, we could work with the array extracted from the dataframe -
(df.values**2).sum(axis=0) # Or (df.values**2).sum(0)
For a further performance boost, there's np.einsum -
a = df.values
out = np.einsum('ij,ij->j',a,a)
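For reference, the subscripts 'ij,ij->j' tell einsum to multiply the two operands element-wise and then sum over the row axis i, leaving one value per column j, which is exactly the column-wise sum of squares. A minimal check of that equivalence (my illustration, not part of the original answer):
import numpy as np

a = np.array([[2, 0, 0],
              [2, -1, 1],
              [2, 2, -2]])
out_einsum = np.einsum('ij,ij->j', a, a)  # multiply a with itself element-wise, sum over rows
out_naive = (a ** 2).sum(axis=0)
assert np.array_equal(out_einsum, out_naive)
print(out_einsum)  # [12  5  5]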
Runtime test -
In [31]: df = pd.DataFrame(np.random.randint(0,9,(1000,30)))
In [32]: %timeit (df**2).sum(0)
1000 loops, best of 3: 518 µs per loop
In [33]: %timeit (df.values**2).sum(0)
10000 loops, best of 3: 40.2 µs per loop
In [34]: def einsum_based(df):
    ...:     a = df.values
    ...:     return np.einsum('ij,ij->j',a,a)
    ...:
In [35]: %timeit einsum_based(df)
10000 loops, best of 3: 32.2 µs per loop

You can use DataFrame.pow with DataFrame.sum:
print (df.pow(2).sum())
0 40
1 285
2 285
dtype: int64
print (df.pow(2).sum().values.tolist())
[40, 285, 285]

There is also df.var(), which returns the variance of each column. Depending on your definition, that can serve as the energy, but note that it is computed about the column mean (and with ddof=1 by default), so you may need to rescale it, e.g. df.var(ddof=0) * df.shape[0], and it equals the sum of squares only for zero-mean columns.
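To illustrate that caveat with the frame from the question (a small sketch of my own, not part of the original answer):
import numpy as np
import pandas as pd

d = np.array([[2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
              [0, -1, 2, -3, 4, -5, 6, -7, 8, -9],
              [0, 1, -2, 3, -4, 5, -6, 7, -8, 9]]).transpose()
df = pd.DataFrame(d)

print((df**2).sum().tolist())                   # [40, 285, 285] -- the energy
print((df.var(ddof=0) * df.shape[0]).tolist())  # [0.0, 282.5, 282.5] -- differs, since no column has exactly zero mean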

Related

Matrix element wise multiplication with shifted columns

Say I have two arrays, A and B.
I want to do an element-wise multiplication in a convolution-like manner: cyclically shift the columns of A one step at a time (so, for example, after one shift column 1 becomes column 2 and column 3 wraps around to become column 1) and multiply element-wise with B at each shift.
This should yield an array holding a 2x3 matrix for each of the 3 shift possibilities.
We can concatenate A with a slice of itself and then extract sliding windows from the result. To get those windows, we can leverage scikit-image's view_as_windows, which is based on np.lib.stride_tricks.as_strided. Then, multiply those windows with B for the final output. More info on the use of as_strided based view_as_windows.
Hence, we will have one vectorized solution like so -
In [70]: from skimage.util.shape import view_as_windows
In [71]: A1 = np.concatenate((A,A[:,:-1]),axis=1)
In [74]: view_as_windows(A1,A.shape)[0]*B
Out[74]:
array([[[1, 0, 3],
        [0, 0, 6]],

       [[2, 0, 1],
        [0, 0, 4]],

       [[3, 0, 2],
        [0, 0, 5]]])
We can also leverage multiple cores with the numexpr module for the final step of broadcasted multiplication, which should be better on larger arrays. Hence, for the sample case, it would be -
In [53]: import numexpr as ne
In [54]: w = view_as_windows(A1,A.shape)[0]
In [55]: ne.evaluate('w*B')
Out[55]:
array([[[1, 0, 3],
        [0, 0, 6]],

       [[2, 0, 1],
        [0, 0, 4]],

       [[3, 0, 2],
        [0, 0, 5]]])
Timings on large arrays comparing the two proposed methods -
In [56]: A = np.random.rand(500,500)
...: B = np.random.rand(500,500)
In [57]: A1 = np.concatenate((A,A[:,:-1]),axis=1)
...: w = view_as_windows(A1,A.shape)[0]
In [58]: %timeit w*B
...: %timeit ne.evaluate('w*B')
1 loop, best of 3: 422 ms per loop
1 loop, best of 3: 228 ms per loop
Squeezing the best out of the strided-based method
If you really want to squeeze the best out of the strided-view-based approach, go with the original np.lib.stride_tricks.as_strided based one to avoid the function-call overhead of view_as_windows -
def vaw_with_as_strided(A,B):
    A1 = np.concatenate((A,A[:,:-1]),axis=1)
    s0,s1 = A1.strides
    S = (A.shape[1],)+A.shape
    w = np.lib.stride_tricks.as_strided(A1,shape=S,strides=(s1,s0,s1))
    return w*B
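A quick sanity check (my addition) that this as_strided version produces the same output as the view_as_windows route from earlier in the answer:
import numpy as np
from skimage.util.shape import view_as_windows

A = np.random.rand(6, 6)
B = np.random.rand(6, 6)

A1 = np.concatenate((A, A[:, :-1]), axis=1)
ref = view_as_windows(A1, A.shape)[0] * B  # view_as_windows based result
out = vaw_with_as_strided(A, B)            # as_strided based result
assert np.allclose(ref, out)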
Comparing against @Paul Panzer's array-assignment based one (the pp function shown in the next answer), the crossover seems to be at 19x19 shaped arrays -
In [33]: n = 18
...: A = np.random.rand(n,n)
...: B = np.random.rand(n,n)
In [34]: %timeit vaw_with_as_strided(A,B)
...: %timeit pp(A,B)
10000 loops, best of 3: 22.4 µs per loop
10000 loops, best of 3: 21.4 µs per loop
In [35]: n = 19
...: A = np.random.rand(n,n)
...: B = np.random.rand(n,n)
In [36]: %timeit vaw_with_as_strided(A,B)
...: %timeit pp(A,B)
10000 loops, best of 3: 24.5 µs per loop
10000 loops, best of 3: 24.5 µs per loop
So, for anything smaller than 19x19, array-assignment seems to be better, and for anything larger, the strided-based one should be the way to go.
Just a note on view_as_windows/as_strided. Neat as these functions are, it is useful to know that they have a rather pronounced constant overhead. Here is a comparison between @Divakar's view_as_windows based solution (vaw) and a copy-reshape based approach by me.
As you can see, vaw is not very fast on small to medium sized operands and only begins to shine above array size 30x30.
Code:
from simple_benchmark import BenchmarkBuilder, MultiArgument
import numpy as np
from skimage.util.shape import view_as_windows

B = BenchmarkBuilder()

@B.add_function()
def vaw(A,B):
    A1 = np.concatenate((A,A[:,:-1]),axis=1)
    w = view_as_windows(A1,A.shape)[0]
    return w*B

@B.add_function()
def pp(A,B):
    m,n = A.shape
    aux = np.empty((n,m,2*n),A.dtype)
    AA = np.concatenate([A,A],1)
    aux.reshape(-1)[:-n].reshape(n,-1)[...] = AA.reshape(-1)[:-1]
    return aux[...,:n]*B

@B.add_arguments('array size')
def argument_provider():
    for exp in range(4, 16):
        dim_size = int(1.4**exp)
        a = np.random.rand(dim_size,dim_size)
        b = np.random.rand(dim_size,dim_size)
        yield dim_size, MultiArgument([a,b])

r = B.run()
r.plot()

import pylab
pylab.savefig('vaw.png')
Run a for loop over the number of columns and use np.roll() along axis=1 to shift the columns, then do the element-wise multiplication (a sketch follows below).
Refer to the accepted answer in this reference.
Hope this helps.
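A minimal sketch of that loop-based idea (my own illustration, using the A and B implied by the sample output above):
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 0, 1],
              [0, 0, 1]])

# one element-wise product of B with each cyclic column shift of A
out = np.stack([np.roll(A, -shift, axis=1) * B for shift in range(A.shape[1])])
print(out)  # the same three 2x3 matrices as in Out[74] above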
I can actually pad the array on its two sides with 2 columns (to get a 2x5 array)
and run a conv2 with 'b' as a kernel; I think that's more efficient.

numpy.argmin for elements greater than a threshold

I'm interested in getting the location of the minimum value in a 1-D NumPy array that meets a certain condition (in my case, a medium threshold). For example:
import numpy as np
limit = 3
a = np.array([1, 2, 4, 5, 2, 5, 3, 6, 7, 9, 10])
I'd like to effectively mask all numbers in a that are under the limit, such that the result of np.argmin would be 6. Is there a computationally cheap way to mask values that don't meet a condition and then apply np.argmin?
You could store the valid indices and use those both for selecting the valid elements from a and for indexing with the argmin() of the selected elements to get the final index output. Thus, the implementation would look something like this -
valid_idx = np.where(a >= limit)[0]
out = valid_idx[a[valid_idx].argmin()]
Sample run -
In [32]: limit = 3
...: a = np.array([1, 2, 4, 5, 2, 5, 3, 6, 7, 9, 10])
...:
In [33]: valid_idx = np.where(a >= limit)[0]
In [34]: valid_idx[a[valid_idx].argmin()]
Out[34]: 6
Runtime test -
For performance benchmarking, this section compares the masked-array based solution from the other answer against the regular-array based solution proposed earlier in this post, for various data sizes.
def masked_argmin(a,limit): # Defining func for regular array based soln
    valid_idx = np.where(a >= limit)[0]
    return valid_idx[a[valid_idx].argmin()]
In [52]: # Inputs
...: a = np.random.randint(0,1000,(10000))
...: limit = 500
...:
In [53]: %timeit np.argmin(np.ma.MaskedArray(a, a<limit))
1000 loops, best of 3: 233 µs per loop
In [54]: %timeit masked_argmin(a,limit)
10000 loops, best of 3: 101 µs per loop
In [55]: # Inputs
...: a = np.random.randint(0,1000,(100000))
...: limit = 500
...:
In [56]: %timeit np.argmin(np.ma.MaskedArray(a, a<limit))
1000 loops, best of 3: 1.73 ms per loop
In [57]: %timeit masked_argmin(a,limit)
1000 loops, best of 3: 1.03 ms per loop
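One caveat to add (my note, not from the original answer): if no element satisfies the condition, valid_idx is empty and argmin() raises a ValueError, so a guard may be useful:
def masked_argmin_safe(a, limit):
    valid_idx = np.where(a >= limit)[0]
    if valid_idx.size == 0:
        return None  # or raise, depending on what the caller expects
    return valid_idx[a[valid_idx].argmin()]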
This can simply be accomplished using numpy's MaskedArray
import numpy as np
limit = 3
a = np.array([1, 2, 4, 5, 2, 5, 3, 6, 7, 9, 10])
b = np.ma.MaskedArray(a, a<limit)
np.ma.argmin(b) # == 6

Efficient fenceposting of 1D ndarray

What is the most efficient way to reshape data to fencepost with numpy?
data = np.array([1, 2, 3, 4, 5])
fencepost = np.array([[1, 2], [2, 3], [3, 4], [4, 5]])
You can achieve the same result simply by looking at the same data differently:
>>> from numpy.lib.stride_tricks import as_strided
>>> fencepost = as_strided(data, shape=(data.shape[0]-1, 2),
...                        strides=(data.strides[0],)*2)
>>> fencepost
array([[1, 2],
       [2, 3],
       [3, 4],
       [4, 5]])
No data is being copied, so especially for very large arrays, this is going to be about as quick as it gets. And if you do need a separate copy, you can simply do fencepost = fencepost.copy() and let numpy handle everything internally for you:
In [11]: data = np.arange(10000000)
In [12]: %timeit as_strided(data, shape=(data.shape[0]-1, 2),
... strides=(data.strides[0],)*2)
100000 loops, best of 3: 12.2 us per loop
In [13]: %timeit as_strided(data, shape=(data.shape[0]-1, 2),
... strides=(data.strides[0],)*2).copy()
10 loops, best of 3: 183 ms per loop
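As a side note (my addition, not part of the original answer): NumPy 1.20+ ships np.lib.stride_tricks.sliding_window_view, which produces the same zero-copy view with a safer interface than raw as_strided:
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

data = np.array([1, 2, 3, 4, 5])
fencepost = sliding_window_view(data, 2)  # read-only view, shape (4, 2)
print(fencepost)
# [[1 2]
#  [2 3]
#  [3 4]
#  [4 5]]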
This isn't really reshaping, because the second array has a different number of elements. If the first array has N elements (in this case N=5) the second has 2N-2 (in this case 8).
So you will have to make a new array and populate it accordingly. There are two approaches to this: you can populate column by column, or row by row. Which is more efficient will depend on ... well, let's find out!
Here I use %timeit from IPython with three different array sizes:
import numpy as np
from numba import jit  # imported in the original snippet but not actually used here
data = np.array([1, 2, 3, 4, 5])
# fencepost = np.array([[1, 2], [2, 3], [3, 4], [4, 5]])

def fp1(data):
    f = np.zeros((data.shape[0]-1, 2))
    for i in range(data.shape[0]-1):
        f[i] = data[i:i+2]
    return f

def fp2(data):
    f = np.zeros((data.shape[0]-1, 2))
    f[:, 0] = data[:-1]
    f[:, 1] = data[1:]
    return f
%timeit fp1(data)
%timeit fp2(data)
data2 = np.array(range(100000))
%timeit fp1(data2)
%timeit fp2(data2)
data3 = np.array(range(10000000))
%timeit fp1(data3)
%timeit fp2(data3)
On my computer, row by row is slightly more efficient for small arrays, but column by column quickly becomes much, much better (hence fp2 is the efficient answer):
100000 loops, best of 3: 13 µs per loop
100000 loops, best of 3: 14.4 µs per loop
1 loops, best of 3: 203 ms per loop
1000 loops, best of 3: 1.09 ms per loop
1 loops, best of 3: 20.7 s per loop
1 loops, best of 3: 253 ms per loop
Essentially, fp2 is faster because it uses only 2 numpy operations, whereas fp1 is a loop requiring many calls into numpy. For small arrays, the overhead of a few numpy calls is negligible.
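For what it's worth (my addition, not from the original answer), fp2's two column assignments can also be written as a single call that should behave essentially the same:
import numpy as np

data = np.array([1, 2, 3, 4, 5])
fencepost = np.column_stack((data[:-1], data[1:]))
print(fencepost)
# [[1 2]
#  [2 3]
#  [3 4]
#  [4 5]]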

Efficient way to create an array that is a sequence of variable length ranges in numpy

Suppose I have an array
import numpy as np
x=np.array([5,7,2])
I want to create an array that contains a sequence of ranges stacked together with the
length of each range given by x:
y=np.hstack([np.arange(1,n+1) for n in x])
Is there some way to do this without the speed penalty of a list comprehension or looping? (x could be a very large array.)
The result should be
y == np.array([1,2,3,4,5,1,2,3,4,5,6,7,1,2])
You could use accumulation:
def my_sequences(x):
    x = x[x != 0]  # you can skip this if you do not have 0s in x.
    # Cumulative end positions of the ranges, and a result array filled with ones:
    y = np.cumsum(x, dtype=np.intp)
    a = np.ones(y[-1], dtype=np.intp)
    # At each range boundary, subtract the previous range length so the running sum resets to 1:
    a[y[:-1]] -= x[:-1]
    # and just add it all up (btw. np.add.accumulate is equivalent):
    return np.cumsum(a, out=a)  # here, in-place should be safe.
(One word of caution: if your result array would be larger than np.iinfo(np.intp).max, this might, with some bad luck, return wrong results instead of erroring out cleanly...)
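To make the trick easier to follow, here is what the intermediate arrays look like for the x from the question (my own walk-through of the code above):
import numpy as np

x = np.array([5, 7, 2])
y = np.cumsum(x)             # [ 5 12 14]  -- end position of each range
a = np.ones(14, dtype=int)
a[y[:-1]] -= x[:-1]          # [1 1 1 1 1 -4 1 1 1 1 1 1 -6 1]
print(np.cumsum(a))          # [1 2 3 4 5 1 2 3 4 5 6 7 1 2]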
And because everyone always wants timings (compared to Ophion's method):
In [11]: x = np.random.randint(0, 20, 1000000)
In [12]: %timeit ua,uind=np.unique(x,return_inverse=True);a=[np.arange(1,k+1) for k in ua];np.concatenate(np.take(a,uind))
1 loops, best of 3: 753 ms per loop
In [13]: %timeit my_sequences(x)
1 loops, best of 3: 191 ms per loop
Of course, the my_sequences function will not perform badly when the values in x get large.
First idea: prevent multiple calls to np.arange, and concatenate should be much faster than hstack:
>>> import numpy as np
>>> x = np.array([5,7,2])
>>> a = np.arange(1,x.max()+1)
>>> np.hstack([a[:k] for k in x])
array([1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 7, 1, 2])
>>> np.concatenate([a[:k] for k in x])
array([1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 7, 1, 2])
If there are many non-unique values, this seems more efficient:
>>> ua, uind = np.unique(x, return_inverse=True)
>>> a = [np.arange(1,k+1) for k in ua]
>>> np.concatenate(np.take(a, uind))
array([1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 7, 1, 2])
Some timings for your case:
x=np.random.randint(0,20,1000000)
Original code
#Using hstack
%timeit np.hstack([np.arange(1,n+1) for n in x])
1 loops, best of 3: 7.46 s per loop
#Using concatenate
%timeit np.concatenate([np.arange(1,n+1) for n in x])
1 loops, best of 3: 5.27 s per loop
First code:
#Using hstack
%timeit a=np.arange(1,x.max()+1);np.hstack([a[:k] for k in x])
1 loops, best of 3: 3.03 s per loop
#Using concatenate
%timeit a=np.arange(1,x.max()+1);np.concatenate([a[:k] for k in x])
10 loops, best of 3: 998 ms per loop
Second code:
%timeit ua,uind=np.unique(x,return_inverse=True);a=[np.arange(1,k+1) for k in ua];np.concatenate(np.take(a,uind))
10 loops, best of 3: 522 ms per loop
Looks like we gain a 14x speedup with the final code.
Small sanity check:
ua,uind=np.unique(x,return_inverse=True)
a=[np.arange(1,k+1) for k in ua]
out=np.concatenate(np.take(a,uind))
>>> out.shape
(9498409,)
>>> np.sum(x)
9498409

Is there a better way to broadcast arrays?

I want to broadcast an array b to the shape it would take if it were in an arithmetic operation with another array a.
For example, if a.shape = (3,3) and b was a scalar, I want to get an array whose shape is (3,3) and is filled with the scalar.
One way to do this is like this:
>>> import numpy as np
>>> a = np.arange(9).reshape((3,3))
>>> b = 1 + a*0
>>> b
array([[1, 1, 1],
[1, 1, 1],
[1, 1, 1]])
Although this works, I can't help but feel it looks a bit weird, and it wouldn't be obvious to someone else looking at the code what I was trying to do.
Is there any more elegant way to do this? I've looked at the documentation for np.broadcast, but it's orders of magnitude slower.
In [1]: a = np.arange(10000).reshape((100,100))
In [2]: %timeit 1 + a*0
10000 loops, best of 3: 31.9 us per loop
In [3]: %timeit bc = np.broadcast(a,1);np.fromiter((v for u, v in bc),float).reshape(bc.shape)
100 loops, best of 3: 5.2 ms per loop
In [4]: 5.2e-3/32e-6
Out[4]: 162.5
If you just want to fill an array with a scalar, fill is probably the best choice. But it sounds like you want something more generalized. Rather than using broadcast you can use broadcast_arrays to get the result that (I think) you want.
>>> a = numpy.arange(9).reshape(3, 3)
>>> numpy.broadcast_arrays(a, 1)[1]
array([[1, 1, 1],
       [1, 1, 1],
       [1, 1, 1]])
This generalizes to any two broadcastable shapes:
>>> numpy.broadcast_arrays(a, [1, 2, 3])[1]
array([[1, 2, 3],
       [1, 2, 3],
       [1, 2, 3]])
It's not quite as fast as your ufunc-based method, but it's still on the same order of magnitude:
>>> %timeit 1 + a * 0
10000 loops, best of 3: 23.2 us per loop
>>> %timeit numpy.broadcast_arrays(a, 1)[1]
10000 loops, best of 3: 52.3 us per loop
But for scalars, fill is still the clear front-runner:
>>> %timeit b = numpy.empty_like(a, dtype='i8'); b.fill(1)
100000 loops, best of 3: 6.59 us per loop
Finally, further testing shows that the fastest approach -- in at least some cases -- is to multiply by ones:
>>> %timeit numpy.broadcast_arrays(a, numpy.arange(100))[1]
10000 loops, best of 3: 53.4 us per loop
>>> %timeit (1 + a * 0) * numpy.arange(100)
10000 loops, best of 3: 45.9 us per loop
>>> %timeit b = numpy.ones_like(a, dtype='i8'); b * numpy.arange(100)
10000 loops, best of 3: 28.9 us per loop
The fastest and cleanest solution I know is:
b_arr = numpy.empty(a.shape) # Empty array
b_arr.fill(b) # Filling with one value
fill sounds like the simplest way:
>>> a = np.arange(9).reshape((3,3))
>>> a
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
>>> a.fill(10)
>>> a
array([[10, 10, 10],
       [10, 10, 10],
       [10, 10, 10]])
EDIT: As @EOL points out, you don't need arange if you want to create a new array; np.empty((100,100)) (or whatever shape) is better for this.
Timings:
In [3]: a = np.arange(10000).reshape((100,100))
In [4]: %timeit 1 + a*0
100000 loops, best of 3: 19.9 us per loop
In [5]: a = np.arange(10000).reshape((100,100))
In [6]: %timeit a.fill(1)
100000 loops, best of 3: 3.73 us per loop
If you just need to broadcast a scalar to some arbitrary shape, you can do something like this:
a = b*np.ones(shape=(3,3))
Edit: np.tile is more general. You can use it to duplicate any scalar/vector in any number of dimensions:
b = 1
N = 100
a = np.tile(b, reps=(N, N))
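As a closing side note (my addition, not from any of the answers above): NumPy 1.10+ also offers np.full, which allocates and fills in one step, and np.broadcast_to, which returns a read-only broadcast view without copying any data:
import numpy as np

a = np.arange(9).reshape(3, 3)
filled = np.full(a.shape, 1)                # new (3, 3) array of ones
view = np.broadcast_to([1, 2, 3], a.shape)  # read-only view, nothing copied
print(filled)
print(view)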
