Creating a new list using a "template list" - python

Suppose I have:
x1 = [1, 3, 2, 4]
and:
x2 = [0, 1, 1, 0]
with the same shape.
Now I want to "put x2 on top of x1" and sum up all the numbers of x1 corresponding to the numbers of x2,
so the end result is:
end = [1+4, 3+2]  # end[0] is the sum of all numbers of x1 where a 0 was in x2
This is a naive implementation using lists to further clarify the question:
store_0 = 0
store_1 = 0
x1 = [1, 3, 2, 4]
x2 = [0, 1, 1, 0]
for value_x1, value_x2 in zip(x1, x2):
    if value_x2 == 0:
        store_0 += value_x1
    elif value_x2 == 1:
        store_1 += value_x1
So my question:
Is there a way to implement this in numpy without using loops, or in general just faster?

In this particular example (and, in general, for unique, duplicated, and groupby kinds of operations), pandas is faster than a pure numpy solution:
A pandas way, using Series (credit: very similar to @mcsoini's answer):
def pd_group_sum(x1, x2):
    return pd.Series(x1, index=x2).groupby(x2).sum()
A pure numpy way, using np.unique and some fancy indexing:
def np_group_sum(a, groups):
    _, ix, rix = np.unique(groups, return_index=True, return_inverse=True)
    return np.where(np.arange(len(ix))[:, None] == rix, a, 0).sum(axis=1)
Note: a better pure numpy way is inspired by @Woodford's answer:
def selsum(a, g, e):
    return a[g==e].sum()
vselsum = np.vectorize(selsum, signature='(n),(n),()->()')
def np_group_sum2(a, groups):
    return vselsum(a, groups, np.unique(groups))
Yet another pure numpy way is inspired by a comment from @mapf about using argsort(). That in itself already takes 45ms, but we may try something based on np.argpartition(x2, len(x2)-1) instead, since that takes only 7.5ms by itself on the benchmark below:
def np_group_sum3(a, groups):
    ix = np.argpartition(groups, len(groups)-1)
    ends = np.nonzero(np.diff(np.r_[groups[ix], groups.max() + 1]))[0]
    return np.diff(np.r_[0, a[ix].cumsum()[ends]])
(Slightly modified) example:
x1 = np.array([1, 3, 2, 4, 8])  # I added a group for the sake of generality
x2 = np.array([0, 1, 1, 0, 7])
>>> pd_group_sum(x1, x2)
0    5
1    5
7    8
>>> np_group_sum(x1, x2)  # and all the np_group_sum() variants
array([5, 5, 8])
Speed
n = 1_000_000
x1 = np.random.randint(0, 20, n)
x2 = np.random.randint(0, 20, n)
%timeit pd_group_sum(x1, x2)
# 13.9 ms ± 65.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit np_group_sum(x1, x2)
# 171 ms ± 129 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit np_group_sum2(x1, x2)
# 66.7 ms ± 19.4 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit np_group_sum3(x1, x2)
# 25.6 ms ± 41.3 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Going via pandas is faster, in part because of numpy issue 11136.
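(Side note: when the group labels are small non-negative integers, as in the benchmark above, np.bincount is another compact pure-numpy option for group sums. A minimal sketch, not benchmarked here:)
def np_group_sum_bincount(a, groups):
    # per-label sums for integer labels 0..groups.max(); labels that never occur
    # get a 0 entry, and the result is float because of the weights argument
    return np.bincount(groups, weights=a)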

>>> x1 = np.array([1, 3, 2, 7])
>>> x2 = np.array([0, 1, 1, 0])
>>> for index in np.unique(x2):
...     print(f'{index}: {x1[x2==index].sum()}')
0: 8
1: 5
>>> # or in one line
>>> [(index, x1[x2==index].sum()) for index in np.unique(x2)]
[(0, 8), (1, 5)]

Would a pandas one-liner be ok?
store_0, store_1 = pd.DataFrame({"x1": x1, "x2": x2}).groupby("x2").x1.sum()
Or as a dictionary, for arbitrarily many values in x2:
pd.DataFrame({"x1": x1, "x2": x2}).groupby("x2").x1.sum().to_dict()
Output:
{0: 5, 1: 5}

Using compress:
from itertools import compress
# end[0]: sum where x2 is 0 (falsy); end[1]: sum where x2 is 1 (truthy)
result = [sum(compress(x1, map(lambda x: not x, x2))), sum(compress(x1, x2))]

This extends your loop to a larger number of values. I can't think of a numpy one-liner to do this.
sums = [0] * 10000
for vx1, vx2 in zip(x1, x2):
    sums[vx2] += vx1
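(For what it's worth, np.add.at gives the same scatter-add without an explicit Python loop; a sketch, assuming x1 and x2 are sequences of ints:)
sums = np.zeros(10000, dtype=int)
np.add.at(sums, x2, x1)  # accumulates each x1 value into the slot indexed by the matching x2 value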

By casting the second list as a Boolean array, you can use it to index the first one:
import numpy as np
x1 = np.array([1, 3, 2, 4])
x2 = np.array([0, 1, 1, 0], dtype=bool)
end = [np.sum(x1[~x2]), np.sum(x1[x2])]
end
[5, 5]
Edit:
If x2 can have values larger than 1, you could use a list comprehension:
x1 = np.array([1, 3, 2, 4])
x2 = np.array([0, 1, 1, 0])
end = [np.sum(x1[x2 == i]) for i in range(max(x2) + 1)]

This extends the solution Tim Roberts suggested at the beginning, but accounts for x2 having multiple values, i.e. non-binary. Here those values are strictly adjacent because the for loop uses the range of rng, but it could be extended so that x2 has values that are not adjacent, e.g. [0 2 2 2 1 4] <- no 3's, whereas randint used for this example will return a vector something like [0 1 1 3 4 2] (a sketch of that extension follows the code below).
import numpy as np
rng = 5  # range of values for x2, i.e. [0 1 2 3 4]
x1 = np.random.randint(20, size=10000)  # random vector of size 10k
x2 = np.random.randint(5, size=10000)   # indexing vector of size 10k with range (0-4)
store = []
for i in range(rng):  # loop and append to list
    store.append(x1[x2==i].sum())
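As noted above, one way to extend this to non-adjacent labels is to loop over the labels that actually occur instead of a fixed range (a sketch):
store = [x1[x2 == g].sum() for g in np.unique(x2)]  # works for any label set, e.g. [0 2 2 2 1 4]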

Related

Max value per diagonal in 2d array

I have an array and need the max of the rolling difference with a dynamic window.
a = np.array([8, 18, 5,15,12])
print (a)
[ 8 18 5 15 12]
So first I create the difference of the array with itself:
b = a - a[:, None]
print (b)
[[ 0 10 -3 7 4]
[-10 0 -13 -3 -6]
[ 3 13 0 10 7]
[ -7 3 -10 0 -3]
[ -4 6 -7 3 0]]
Then replace the upper triangle with 0:
c = np.tril(b)
print (c)
[[ 0 0 0 0 0]
[-10 0 0 0 0]
[ 3 13 0 0 0]
[ -7 3 -10 0 0]
[ -4 6 -7 3 0]]
Last, I need the max values per diagonal, which means:
max([0,0,0,0,0]) = 0
max([-10,13,-10,3]) = 13
max([3,3,-7]) = 3
max([-7,6]) = 6
max([-4]) = -4
So expected output is:
[0, 13, 3, 6, -4]
What is a nice vectorized solution? Or is there another way to get the expected output?
Use ndarray.diagonal
v = [max(c.diagonal(-i)) for i in range(b.shape[0])]
print(v) # [0, 13, 3, 6, -4]
Not sure exactly how efficient this is considering the advanced indexing involved, but this is one way to do that:
import numpy as np
a = np.array([8, 18, 5, 15, 12])
b = a[:, None] - a
# Fill lower triangle with largest negative
b[np.tril_indices(len(a))] = np.iinfo(b.dtype).min # np.finfo for float
# Put diagonals as rows
s = b.strides[1]
diags = np.ndarray((len(a) - 1, len(a) - 1), b.dtype, b, offset=s, strides=(s, (len(a) + 1) * s))
# Get maximum from each row and add initial zero
c = np.r_[0, diags.max(1)]
print(c)
# [ 0 13 3 6 -4]
EDIT:
Another alternative, which may not be what you were looking for though, is just using Numba, for example like this:
import numpy as np
import numba as nb
def max_window_diffs_jdehesa(a):
    a = np.asarray(a)
    dtinf = np.iinfo(a.dtype) if np.issubdtype(a.dtype, np.integer) else np.finfo(a.dtype)
    out = np.full_like(a, dtinf.min)
    _pwise_diffs(a, out)
    return out
@nb.njit(parallel=True)
def _pwise_diffs(a, out):
    out[0] = 0
    for w in nb.prange(1, len(a)):
        for i in range(len(a) - w):
            out[w] = max(a[i] - a[i + w], out[w])
a = np.array([8, 18, 5, 15, 12])
print(max_window_diffs_jdehesa(a))
# [ 0 13 3 6 -4]
Comparing these methods to the original:
import numpy as np
import numba as nb
def max_window_diffs_orig(a):
    a = np.asarray(a)
    b = a - a[:, None]
    out = np.zeros(len(a), b.dtype)
    out[-1] = b[-1, 0]
    for i in range(1, len(a) - 1):
        out[i] = np.diag(b, -i).max()
    return out
def max_window_diffs_jdehesa_np(a):
    a = np.asarray(a)
    b = a[:, None] - a
    dtinf = np.iinfo(b.dtype) if np.issubdtype(b.dtype, np.integer) else np.finfo(b.dtype)
    b[np.tril_indices(len(a))] = dtinf.min
    s = b.strides[1]
    diags = np.ndarray((len(a) - 1, len(a) - 1), b.dtype, b, offset=s, strides=(s, (len(a) + 1) * s))
    return np.concatenate([[0], diags.max(1)])
def max_window_diffs_jdehesa_nb(a):
    a = np.asarray(a)
    dtinf = np.iinfo(a.dtype) if np.issubdtype(a.dtype, np.integer) else np.finfo(a.dtype)
    out = np.full_like(a, dtinf.min)
    _pwise_diffs(a, out)
    return out
@nb.njit(parallel=True)
def _pwise_diffs(a, out):
    out[0] = 0
    for w in nb.prange(1, len(a)):
        for i in range(len(a) - w):
            out[w] = max(a[i] - a[i + w], out[w])
np.random.seed(0)
a = np.random.randint(0, 100, size=100)
r = max_window_diffs_orig(a)
print((max_window_diffs_jdehesa_np(a) == r).all())
# True
print((max_window_diffs_jdehesa_nb(a) == r).all())
# True
%timeit max_window_diffs_orig(a)
# 348 µs ± 986 ns per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit max_window_diffs_jdehesa_np(a)
# 91.7 µs ± 1.3 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit max_window_diffs_jdehesa_nb(a)
# 19.7 µs ± 88.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
np.random.seed(0)
a = np.random.randint(0, 100, size=10000)
%timeit max_window_diffs_orig(a)
# 651 ms ± 26 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit max_window_diffs_jdehesa_np(a)
# 1.61 s ± 6.19 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit max_window_diffs_jdehesa_nb(a)
# 22 ms ± 967 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
The first one may be a bit better for smaller arrays, but doesn't work well for bigger ones. Numba on the other hand is pretty good in all cases.
You can use numpy.diagonal:
a = np.array([8, 18, 5, 15, 12])
b = a - a[:, None]
c = np.tril(b)
for i in range(b.shape[0]):
    print(max(c.diagonal(-i)))
Output:
0
13
3
6
-4
Here's a vectorized solution with strides -
from skimage.util import view_as_windows
n = len(a)
z = np.zeros(n-1,dtype=a.dtype)
p = np.concatenate((a,z))
s = view_as_windows(p,n)
mask = np.tri(n,k=-1,dtype=bool)[:,::-1]
v = s[0]-s
out = np.where(mask,v.min()-1,v).max(1)
With one-loop for memory-efficiency -
n = len(a)
out = [max(a[:-i+n]-a[i:]) for i in range(n)]
Use np.max in place of max for better use of array-memory.
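That is, the same loop with np.max (a sketch):
out = [np.max(a[:n-i] - a[i:]) for i in range(n)]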
You can abuse the fact that reshaping non-square arrays of shape (N+1, N) to (N, N+1) will make diagonals appear as columns
from scipy.linalg import toeplitz
a = toeplitz([1,2,3,4], [1,4,3])
# array([[1, 4, 3],
# [2, 1, 4],
# [3, 2, 1],
# [4, 3, 2]])
a.reshape(3, 4)
# array([[1, 4, 3, 2],
# [1, 4, 3, 2],
# [1, 4, 3, 2]])
Which you can then use like this (note that I've swapped the sign and filled the lower triangle below the diagonal with a very small value):
smallv = -10000 # replace this with np.nan if you have floats
a = np.array([8, 18, 5,15,12])
b = a[:, None] - a
b[np.tril_indices(len(b), -1)] = smallv
d = np.vstack((b, np.full(len(b), smallv)))
d.reshape(len(d) - 1, -1).max(0)[:-1]
# array([ 0, 13, 3, 6, -4])

Numpy group by multiple vectors, get group indices

I have several numpy arrays; I want to build a groupby method that would have group ids for these arrays. It will then allow me to index these arrays on the group id to perform operations on the groups.
For an example:
import numpy as np
import pandas as pd
a = np.array([1,1,1,2,2,3])
b = np.array([1,2,2,2,3,3])
def group_np(groupcols):
    groupby = np.array([''.join([str(b) for b in bs]) for bs in zip(*[c for c in groupcols])])
    _, groupby = np.unique(groupby, return_inverse=True)
    return groupby
def group_pd(groupcols):
    df = pd.DataFrame(groupcols[0])
    for i in range(1, len(groupcols)):
        df[i] = groupcols[i]
    for i in range(len(groupcols)):
        df[i] = df[i].fillna(-1)
    return df.groupby(list(range(len(groupcols)))).grouper.group_info[0]
Outputs:
group_np([a,b]) -> [0, 1, 1, 2, 3, 4]
group_pd([a,b]) -> [0, 1, 1, 2, 3, 4]
Is there a more efficient way of implementing it, ideally in pure numpy? The bottleneck currently seems to be building a vector that would have unique values for each group - at the moment I am doing that by concatenating the values for each vector as strings.
I want this to work for any number of input vectors, which can have millions of elements.
Edit: here is another testcase:
a = np.array([1,2,1,1,1,2,3,1])
b = np.array([1,2,2,2,2,3,3,2])
Here, group elements 2,3,4,7 should all be the same.
Edit2: adding some benchmarks.
a = np.random.randint(1, 1000, 30000000)
b = np.random.randint(1, 1000, 30000000)
c = np.random.randint(1, 1000, 30000000)
def group_np2(groupcols):
    _, groupby = np.unique(np.stack(groupcols), return_inverse=True, axis=1)
    return groupby
%timeit group_np2([a,b,c])
# 25.1 s +/- 1.06 s per loop (mean +/- std. dev. of 7 runs, 1 loop each)
%timeit group_pd([a,b,c])
# 21.7 s +/- 646 ms per loop (mean +/- std. dev. of 7 runs, 1 loop each)
After using np.stack on the arrays a and b, if you set the parameter return_inverse to True in np.unique then it is the output you are looking for:
a = np.array([1,2,1,1,1,2,3,1])
b = np.array([1,2,2,2,2,3,3,2])
_, inv = np.unique(np.stack([a,b]), axis=1, return_inverse=True)
print (inv)
array([0, 2, 1, 1, 1, 3, 4, 1], dtype=int64)
and you can replace [a,b] in np.stack by a list of all the vectors.
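For instance, with a third key vector c (a sketch):
_, inv = np.unique(np.stack([a, b, c]), axis=1, return_inverse=True)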
Edit: a faster solution is to use np.unique on the sum of the arrays multiplied by the cumulative product (np.cumprod) of the max plus 1 of all previous arrays in groupcols, such as:
def group_np_sum(groupcols):
    groupcols_max = np.cumprod([ar.max()+1 for ar in groupcols[:-1]])
    return np.unique(sum([groupcols[0]] +
                         [ar*m for ar, m in zip(groupcols[1:], groupcols_max)]),
                     return_inverse=True)[1]
To check:
a = np.array([1,2,1,1,1,2,3,1])
b = np.array([1,2,2,2,2,3,3,2])
print (group_np_sum([a,b]))
array([0, 2, 1, 1, 1, 3, 4, 1], dtype=int64)
Note: the number associated to each group may not be the same (here I changed the first element of a to 3),
a = np.array([3,2,1,1,1,2,3,1])
b = np.array([1,2,2,2,2,3,3,2])
print(group_np2([a,b]))
print (group_np_sum([a,b]))
array([3, 1, 0, 0, 0, 2, 4, 0], dtype=int64)
array([0, 2, 1, 1, 1, 3, 4, 1], dtype=int64)
but groups themselves are the same.
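(If you want to verify programmatically that two labelings describe the same grouping, here is a sketch: the partitions match exactly when the pairing of labels is one-to-one.)
def same_grouping(g1, g2):
    # same partition iff the distinct (g1, g2) label pairs are in one-to-one
    # correspondence with the distinct labels on each side
    pairs = set(zip(g1, g2))
    return len(pairs) == len(set(g1)) == len(set(g2))
# same_grouping(group_np2([a, b]), group_np_sum([a, b]))  ->  True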
Now to check for timing:
a = np.random.randint(1, 100, 30000)
b = np.random.randint(1, 100, 30000)
c = np.random.randint(1, 100, 30000)
groupcols = [a,b,c]
%timeit group_pd(groupcols)
#13.7 ms ± 1.22 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit group_np2(groupcols)
#34.2 ms ± 6.88 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit group_np_sum(groupcols)
#3.63 ms ± 562 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
The numpy_indexed package (disclaimer: I am its author) covers this type of use case:
import numpy_indexed as npi
npi.group_by((a, b))
Passing a tuple of index-arrays like this avoids creating a copy; but if you don't mind making the copy you can use stacking as well:
npi.group_by(np.stack((a, b), axis=-1))

How do I quickly find the last number of elements matching some criterion in numpy?

Suppose I have some array:
x = numpy.array([1, 2, 3, 4, 5, 1, 1, 1])
And I want to find the number of consecutive 1s at the end of the array. One way to do this is with a loop:
i = len(x) - 1
while x[i] == 1:
    i = i - 1
Now I can look at i and work out the number of 1s in the back of x. However, in my real-world example x can be very large, as can the number of 1s, so I want a solution that:
Doesn't use loops, and
Doesn't traverse the whole array
I agree about using Cython or numba to speed up a loop and traverse just the array tail but if you want to give it a try with pure numpy I'd say something like the following could do:
np.argwhere(x[::-1] != 1).ravel()[0]
Reverse the array and take the first non-1 occurrence. It's traversing the whole array though... so it probably doesn't meet your needs.
EDIT: Here's a numba approach for completeness
from numba import jit
@jit
def count_trailing_ones(array):
    count = 0
    a = array[::-1]
    for i in range(array.shape[0]):
        if a[i] == 1:
            count += 1
        else:
            return count
    return count  # reached only if every element is 1
Here's a benchmark including also @J...S's and @Kasramvd's solutions, for an 800MB array with a couple million trailing ones. numba obviously wins, but if you're going for numpy I'd say @J...S's argmax is the best.
In [102]: %timeit np.argwhere(x[::-1] != 1).ravel()[0]
631 ms ± 1.83 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [103]: %timeit np.argmax(x[::-1] != 1)
117 ms ± 417 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [104]: %timeit kas(x)
915 ms ± 3.41 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [105]: %timeit count_trailing_ones(x)
4.62 ms ± 16.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
If the last element is always a 1, you could first reverse the array and then use argmax as in
np.argmax(x[::-1]!=1)
Which for the given array would give
3
You could first check whether the last element is a 1, like:
if x[-1] == 1:
    print(np.argmax(x[::-1]!=1))
else:
    print(0)
Here is one vectorized approach:
In [50]: mask = x == 1
In [51]: T_inds = np.where(mask)[0]
In [52]: F_inds = np.where(~mask)[0]
In [53]: last_f_ind = np.where(T_inds[-1] > F_inds)[0][-1]
# x = np.array([1, 2, 3, 4, 5, 1, 1, 1, 4, 5])
In [54]: T_inds[-1] - F_inds[last_f_ind]
Out[54]: 3
The trick is to find the index of the latest non-one item that is lower than the index of the latest one.
Also note that this approach works even when the ones are not at the tail of your array (i.e. when other numbers come after the last run of 1s). But if you only need the particular case where the 1s are at the end of your array, here is a more concise approach:
x.size - np.where(x != 1)[0][-1] - 1
Out[27]: 3
# x != 1 will give you a mask of the indices where the value is not
# equal to one. Then you can use np.where() to find the index of the last
# occurrence of a not-one item. By subtracting it from the size of the array
# you get the number of consecutive ones.
Have a look at searchsorted: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.searchsorted.html#numpy.searchsorted
Use it in combination with np.where on the last element to return 0 if != 1.
Note: this won't work if your array contains 0's, as it will then try to insert the value at that point.
import numpy as np
x = np.array([1, 2, 3, 4, 5, 1, 1, 1])
y = np.array([1, 2, 3, 4, 5, 1, 1, 1, 4, 5])
# Create a lambda function that accepts x(array)
f = lambda x: np.where(x[-1] != 1, 0, np.searchsorted(x[::-1], 1, side='right'))
print(f(x)) # 3
print(f(y)) # 0

Can motelling be vectorized in pandas?

"Motelling" is a way to smooth response to a signal.
For example: Given a time-varying signal St that takes integer values 1-5, and a response function Ft({S0...t}) that assigns [-1, 0, +1] to each signal, a standard motelling response function would return:
-1 if St = 1, or if (St = 2) & (Ft-1 = -1)
+1 if St = 5, or if (St = 4) & (Ft-1 = +1)
0 otherwise
If I have a DataFrame by time of the signal {S}, is there a vectorized way to apply this motelling function?
E.g., if DataFrame df['S'].values = [1, 2, 2, 2, 3, 5, 3, 4, 1]
then is there a vectorized approach that would produce:
df['F'].values = [-1, -1, -1, -1, 0, 1, 0, 0, -1]
Or, absent a vectorized solution, is there something obviously faster than the following DataFrame.itertuples() approach I am using now?
df = pd.DataFrame(np.random.random_integers(1, 5, 100000), columns=['S'])
# First set response for time t
df['F'] = np.where(df['S'] == 5, 1, np.where(df['S'] == 1, -1, 0))
# Now loop to apply motelling
previousF = 0
for row in df.itertuples():
    df.at[row.Index, 'F'] = np.where((row.S >= 4) & (previousF == 1), 1,
                                     np.where((row.S <= 2) & (previousF == -1), -1, row.F))
    previousF = row.F
With a complex DataFrame the loop portion takes O(minute per million rows)!
You can try regex.
The patterns we are looking for are:
(1) a 1 followed by 1s or 2s. (We select this rule because any 2 that comes after a 1 can be considered as a 1 and keeps influencing the next row's result.)
(2) a 5 followed by 4s or 5s. (Similarly, any 4 that comes after a 5 can be considered as a 5.)
(1) will result in consecutive -1s and (2) will result in consecutive 1s. Anything that does not match will be 0.
Using these rules, the rest of the work is to do the replacement. In particular, we use lambda m: "x"*len(m.group(0)) to turn each match into a run of the same length (see reference below).
import re
s = [1, 2, 2, 2, 3, 5, 3, 4, 1]
str_s = "".join(str(i) for i in s)
s1 = re.sub("5[45]*", lambda m: "x"*len(m.group(0)),str_s)
s2 = re.sub("1[12]*", lambda m: "y"*len(m.group(0)),s1)
l = list(s2)
l2 = [v if v in ["x", "y"] else 0 for v in l]
l3 = [1 if v == 'x' else v for v in l2]
l4 = [-1 if v == 'y' else v for v in l3]
[-1, -1, -1, -1, 0, 1, 0, 0, -1]
Bigger dataset
def tai(s):
    str_s = "".join(str(i) for i in s)
    s1 = re.sub("5[45]*", lambda m: "x"*len(m.group(0)), str_s)
    s2 = re.sub("1[12]*", lambda m: "y"*len(m.group(0)), s1)
    l = list(s2)
    l2 = [v if v in ["x", "y"] else 0 for v in l]
    l3 = [1 if v == 'x' else v for v in l2]
    l4 = [-1 if v == 'y' else v for v in l3]
    return l4
s = np.random.randint(1,6,100000)
%timeit tai(s)
104 ms ± 6.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
df = pd.DataFrame(np.random.randint(1,6,100000), columns=['S'])
# First set response for time t
df['F'] = np.where(df['S'] == 5, 1, np.where(df['S'] == 1, -1, 0))
# Now loop to apply motelling
%%timeit # (OP's answer)
previousF = 0
for row in df.itertuples():
    df.at[row.Index, 'F'] = np.where((row.S >= 4) & (previousF == 1), 1,
                                     np.where((row.S <= 2) & (previousF == -1), -1, row.F))
    previousF = row.F
1.11 s ± 27.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Reference
Replace substrings in python with the length of each substring
You may notice that since the consecutive elements of F[t] depend on one another, this doesn't vectorize well. I'm partial to using numba in these cases. Your function is simple, it works on a numpy array (a Series is just an array under the hood), and it's not easy to vectorize -> numba is ideal for this.
Imports and function:
import numpy as np
import pandas as pd
def motel(S):
    F = np.zeros_like(S)
    for t in range(S.shape[0]):
        if (S[t] == 1) or (S[t] == 2 and F[t-1] == -1):
            F[t] = -1
        elif (S[t] == 5) or (S[t] == 4 and F[t-1] == 1):
            F[t] = 1
        # no else required since it's already set to zero
    return F
Here we can just jit-compile the function
import numba
jit_motel = numba.jit(nopython=True)(motel)
And ensure that the normal and jit versions return expected values
S = pd.Series([1, 2, 2, 2, 3, 5, 3, 4, 1])
print("motel(S) = ", motel(S))
print("jit_motel(S)", jit_motel(S.values))
result:
motel(S) = [-1 -1 -1 -1 0 1 0 0 -1]
jit_motel(S) [-1 -1 -1 -1 0 1 0 0 -1]
For timing, let's scale:
N = 10**4
S = pd.Series( np.random.randint(1, 5, N) )
%timeit jit_motel(S.values)
%timeit motel(S.values)
result:
82.7 µs ± 1.03 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
7.75 ms ± 77.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For your million data points (didn't time normal function because I didn't wanna wait =) )
N = 10**6
S = pd.Series( np.random.randint(1, 5, N) )
%timeit motel(S.values)
result:
768 ms ± 7.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Boom! Less than a second for a million entries. This approach is simple, readable, and fast. Only downside is the Numba dependency, but it's included in anaconda and available in conda easily (maybe pip I'm not sure).
To aggregate the other answers, first I should note that apparently DataFrame.itertuples() does not iterate deterministically, or as expected, so the sample in the OP doesn't always produce the correct result on large samples.
Thanks to the other answers, I realized that a mechanical application of the motelling logic not only produces correct results, but does so surprisingly quickly when we use DataFrame.fill functions:
def dfmotel(df):
    # We'll copy results into column F as we build them
    df['F'] = np.nan
    # This algo is destructive, so we operate on a copy of the signal
    df['temp'] = df['S']
    # Fill forward the negative signal
    df.loc[df['temp'] == 2, 'temp'] = np.nan
    df['temp'].ffill(inplace=True)
    df.loc[df['temp'] == 1, 'F'] = -1
    # Fill forward the positive signal
    df.loc[df['temp'] == 4, 'temp'] = np.nan
    df['temp'].ffill(inplace=True)
    df.loc[df['temp'] == 5, 'F'] = 1
    # All other signals are zero
    df['F'].fillna(0, inplace=True)
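A quick sanity check against the example from the question (a sketch; on newer pandas the in-place fills above may need to be written as assignments, e.g. df['temp'] = df['temp'].ffill()):
df = pd.DataFrame({'S': [1, 2, 2, 2, 3, 5, 3, 4, 1]})
dfmotel(df)
print(df['F'].astype(int).tolist())  # expected: [-1, -1, -1, -1, 0, 1, 0, 0, -1]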
For all timing tests we will operate on the same input:
df = pd.DataFrame(np.random.randint(1,5,1000000), columns=['S'])
For the DataFrame-based function above we get:
%timeit dfmotel(df.copy())
123 ms ± 2.07 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
This is quite acceptable performance.
tai was first to present this very clever solution using RegEx (which is what inspired my function above), but it can't match the speed of staying in number space:
import re
def tai(s):
    str_s = "".join(str(i) for i in s)
    s1 = re.sub("5[45]*", lambda m: "x"*len(m.group(0)), str_s)
    s2 = re.sub("1[12]*", lambda m: "y"*len(m.group(0)), s1)
    l = list(s2)
    l2 = [v if v in ["x", "y"] else 0 for v in l]
    l3 = [1 if v == 'x' else v for v in l2]
    l4 = [-1 if v == 'y' else v for v in l3]
    return l4
%timeit tai(df['S'].values)
899 ms ± 9.69 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
But nothing beats compiled code. Thanks to evamicur for this solution using the convenient numba in-line compiler:
import numba
def motel(S):
    F = np.zeros_like(S)
    for t in range(S.shape[0]):
        if (S[t] == 1) or (S[t] == 2 and F[t-1] == -1):
            F[t] = -1
        elif (S[t] == 5) or (S[t] == 4 and F[t-1] == 1):
            F[t] = 1
    return F
jit_motel = numba.jit(nopython=True)(motel)
%timeit jit_motel(df['S'].values)
9.06 ms ± 502 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

What is the best way to create a block matrix from a row vector?

I have the following numpy row matrix.
X = np.array([1,2,3])
I want to create a block matrix as follows:
1 0 0
2 1 0
3 2 1
0 3 2
0 0 3
How can I do this using numpy?
If you read the desired output matrix top-down, then left-right, you see the pattern 1,2,3, 0,0,0, 1,2,3, 0,0,0, 1,2,3. You can use that pattern to easily create a linear array, and then reshape it into the two-dimensional form:
import numpy as np
X = np.array([1,2,3])
N = len(X)
zeros = np.zeros_like(X)
m = np.hstack((np.tile(np.hstack((X,zeros)),N-1),X)).reshape(N,-1).T
print(m)
gives
[[1 0 0]
[2 1 0]
[3 2 1]
[0 3 2]
[0 0 3]]
Approach #1 : Using np.lib.stride_tricks.as_strided -
from numpy.lib.stride_tricks import as_strided as strided
def zeropad_arr_v1(X):
    n = len(X)
    z = np.zeros(len(X)-1, dtype=X.dtype)
    X_ext = np.concatenate((z, X, z))
    s = X_ext.strides[0]
    return strided(X_ext[n-1:], (2*n-1, n), (s, -s), writeable=False)
Note that this would create a read-only output. If you need to write to it later on, simply make a copy by appending .copy() at the end.
Approach #2 : Using concatenation with zeros and then clipping/slicing -
def zeropad_arr_v2(X):
    n = len(X)
    X_ext = np.concatenate((X, np.zeros(n, dtype=X.dtype)))
    return np.tile(X_ext, n)[:-n].reshape(-1, n, order='F')
Approach #1 being a strides-based method should be very efficient on performance.
Sample runs -
In [559]: X = np.array([1,2,3])
In [560]: zeropad_arr_v1(X)
Out[560]:
array([[1, 0, 0],
[2, 1, 0],
[3, 2, 1],
[0, 3, 2],
[0, 0, 3]])
In [561]: zeropad_arr_v2(X)
Out[561]:
array([[1, 0, 0],
[2, 1, 0],
[3, 2, 1],
[0, 3, 2],
[0, 0, 3]])
Runtime test
In [611]: X = np.random.randint(0,9,(1000))
# Approach #1 (read-only)
In [612]: %timeit zeropad_arr_v1(X)
100000 loops, best of 3: 8.74 µs per loop
# Approach #1 (writable)
In [613]: %timeit zeropad_arr_v1(X).copy()
1000 loops, best of 3: 1.05 ms per loop
# Approach #2
In [614]: %timeit zeropad_arr_v2(X)
1000 loops, best of 3: 705 µs per loop
# #user8153's solution
In [615]: %timeit hstack_app(X)
100 loops, best of 3: 2.26 ms per loop
Another writable solution:
def block(X):
    n = X.size
    zeros = np.zeros((2*n-1, n), X.dtype)
    zeros[::2] = X
    return zeros.reshape(n, -1).T
Try:
In [2]: %timeit block(X)
600 µs ± 33 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
