I just want to improve the speed of splitting a list. I have a way to split it, but the speed is not as fast as I expected.
def split_list(lines):
    return [x for xs in lines for x in xs.split('-')]

import time

lst = []
for i in range(1000000):
    lst.append('320000-320000')

start = time.perf_counter()
lst_new = split_list(lst)
end = time.perf_counter()
print('time\n', str(end - start))
For example, Input:
lst
['320000-320000', '320000-320000']
Output:
lst_new
['320000', '320000', '320000', '320000']
I'm not satisfied with the speed of splitting, as my data contains many lists.
But I don't know whether there's a more effective way to do it.
As advised, I'll try to describe my whole question more specifically.
import pandas as pd
df = pd.DataFrame({ 'line':["320000-320000, 340000-320000, 320000-340000",
"380000-320000",
"380000-320000,380000-310000",
"370000-320000,370000-320000,320000-320000",
"320000-320000, 340000-320000, 320000-340000",
"380000-320000",
"380000-320000,380000-310000",
"370000-320000,370000-320000,320000-320000",
"320000-320000, 340000-320000, 320000-340000",
"380000-320000",
"380000-320000,380000-310000",
"370000-320000,370000-320000,320000-320000"], 'id':[1,2,3,4,5,6,7,8,9,10,11,12],})
def most_common(lst):
    return max(set(lst), key=lst.count)

def split_list(lines):
    return [x for xs in lines for x in xs.split('-')]
df['line']=df['line'].str.split(',')
col_ix=df['line'].index.values
df['line_start'] = pd.Series(0, index=df.index)
df['line_destination'] = pd.Series(0, index=df.index)
import time

start = time.perf_counter()
for ix in col_ix:
    col = df['line'][ix]
    col_split = split_list(col)
    even_col_split = col_split[0:][::2]
    even_col_split_most = most_common(even_col_split)
    df['line_start'][ix] = even_col_split_most
    odd_col_split = col_split[1:][::2]
    odd_col_split_most = most_common(odd_col_split)
    df['line_destination'][ix] = odd_col_split_most
end = time.perf_counter()
print('time\n', str(end - start))
del df['line']
print('df\n',df)
Input:
df
id line
0 1 320000-320000, 340000-320000, 320000-340000
1 2 380000-320000
2 3 380000-320000,380000-310000
3 4 370000-320000,370000-320000,320000-320000
4 5 320000-320000, 340000-320000, 320000-340000
5 6 380000-320000
6 7 380000-320000,380000-310000
7 8 370000-320000,370000-320000,320000-320000
8 9 320000-320000, 340000-320000, 320000-340000
9 10 380000-320000
10 11 380000-320000,380000-310000
11 12 370000-320000,370000-320000,320000-320000
Output:
df
id line_start line_destination
0 1 320000 320000
1 2 380000 320000
2 3 380000 320000
3 4 370000 320000
4 5 320000 320000
5 6 380000 320000
6 7 380000 320000
7 8 370000 320000
8 9 320000 320000
9 10 380000 320000
10 11 380000 320000
11 12 370000 320000
You can read each number pair as a route (e.g. 320000-320000 represents the starting point and the destination of the route).
Expected:
Make the code run faster. (The current speed is unbearable for the amount of data I have.)
'-'.join(lst).split('-')
seems quite a bit faster:
>>> timeit("'-'.join(lst).split('-')", globals=globals(), number=10)
1.0838123590219766
>>> timeit("[x for xs in lst for x in xs.split('-')]", globals=globals(), number=10)
3.1370303670410067
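For reference, a self-contained version of this comparison might look like the sketch below (the setup is an assumption, reusing the one-million-element lst from the question):

from timeit import timeit

lst = ['320000-320000'] * 1000000  # same shape of data as in the question

def split_list(lines):
    return [x for xs in lines for x in xs.split('-')]

# join everything into one big string, then split it once
print(timeit("'-'.join(lst).split('-')", globals=globals(), number=10))
# split each element separately and flatten the results
print(timeit("split_list(lst)", globals=globals(), number=10))

One small caveat: for an empty input list the two are not identical, because ''.split('-') returns [''] while the comprehension returns [].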
Depending on what you want to do with your list, using a generator can be slightly faster.
If you need to keep the output stored, then the list solution is faster.
If all you need to is to iterate over the words once, you can get rid of some overhead by using a generator.
def split_list_gen(lines):
    for line in lines:
        yield from line.split('-')
Benchmark
import time

lst = ['32000-32000'] * 10000000

start = time.perf_counter()
for x in split_list(lst):
    pass
end = time.perf_counter()
print('list time:', str(end - start))

start = time.perf_counter()
for y in split_list_gen(lst):
    pass
end = time.perf_counter()
print('generator time:', str(end - start))
Output
The generator solution is consistently about 10% faster.
list time: 0.4568295369982612
generator time: 0.4020671741918084
Pushing more of the work below the Python level seems to provide a small speedup:
In [7]: %timeit x = split_list(lst)
407 ms ± 876 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [8]: %timeit x = list(chain.from_iterable(map(methodcaller("split", "-"), lst)))
374 ms ± 2.67 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
methodcaller creates a function that calls the named method for you:
methodcaller("split", "-")(x) == x.split("-")
chain.from_iterable creates a single iterator consisting of the elements from a group of iterables:
list(chain.from_iterable([[1,2], [3,4]])) == [1,2,3,4]
mapping the function returned by methodcaller on to your list of strings produces an iterable of lists suitable for flattening by from_iterable. The benefit of this more functional approach is that the functions involved are all implemented in C and can work with the data in the Python objects, rather than Python byte code that works on the Python objects.
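Put together, a self-contained sketch of that functional pipeline (only the imports and the sample list are added here) could look like this:

from itertools import chain
from operator import methodcaller

lst = ['320000-320000'] * 1000000

# a callable equivalent to lambda s: s.split('-'), implemented in C
split_dash = methodcaller("split", "-")

# map yields one small list per input string; chain.from_iterable flattens them lazily
lst_new = list(chain.from_iterable(map(split_dash, lst)))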
Related
I've read Stack Overflow posts on how to create a field that holds the number of duplicates of each row in a pandas DataFrame. Without using any other libraries, I tried writing a function that does this, and it works on small DataFrame objects; however, it takes way too long on larger ones and consumes too much memory.
This is the function:
def count_duplicates(dataframe):
    function = lambda x: dataframe.to_numpy().tolist().count(x.to_list()) - 1
    return dataframe.apply(function, axis=1)
I ran dir() on the numpy array returned by DataFrame.to_numpy(), and I didn't see anything quite like list.count. The reason this takes so long is that, for each row, it needs to compare the row with all of the rows in the numpy array. I'd like a much more efficient way to do this, even if it doesn't use a pandas DataFrame. I feel like there should be a simple way to do this with numpy, but I'm just not familiar enough with it. I've been testing different approaches for a while and they result in a lot of errors. I'm going to keep testing, but I felt the community might provide a better way.
Thank you for your help.
Here is an example DataFrame:
one two
0 1 1
1 2 2
2 3 3
3 1 1
I'd use it like this:
d['duplicates'] = count_duplicates(d)
The resulting DataFrame is:
one two duplicates
0 1 1 1
1 2 2 0
2 3 3 0
3 1 1 1
The problem is the actual DataFrame will have 1.4 million rows, and each lambda takes an average of 0.148558 seconds, which if multiplied by 1.4 million rows is about 207981.459 seconds or 57.772 hours. I need a much faster way to accomplish this.
Thank you again.
I updated the function which is speeding things up:
def _counter(series_to_count, list_of_lists):
    return list_of_lists.count(series_to_count.to_list()) - 1

def count_duplicates(dataframe):
    df_list = dataframe.to_numpy().tolist()
    return dataframe.apply(_counter, args=(df_list,), axis=1)
This takes only 29.487 seconds. The bottleneck was converting the dataframe on each function call.
I'm still interested in optimizing this. I'd like to get this down to 2-3 seconds if at all possible. It may not be, but I'd like to make sure it is as fast as possible.
Thank you again.
Here is a vectorized way to do this. For 1.4 million rows with an average of 140 duplicates per row, it takes under 0.05 seconds. When there are no duplicates at all, it takes about 0.4 seconds.
d['duplicates'] = d.groupby(['one', 'two'], sort=False)['one'].transform('size') - 1
On your example:
>>> d
one two duplicates
0 1 1 1
1 2 2 0
2 3 3 0
3 1 1 1
Speed
Relatively high rate of duplicates:
n = 1_400_000
d = pd.DataFrame(np.random.randint(0, 100, size=(n, 2)), columns='one two'.split())
%timeit d.groupby(['one', 'two'], sort=False)['one'].transform('size') - 1
# 48.3 ms ± 110 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# how many duplicates on average?
>>> (d.groupby(['one', 'two'], sort=False)['one'].transform('size') - 1).mean()
139.995841
# (as expected: n / 100**2)
No duplicates
n = 1_400_000
d = pd.DataFrame(np.arange(2 * n).reshape(-1, 2), columns='one two'.split())
%timeit d.groupby(['one', 'two'], sort=False)['one'].transform('size') - 1
# 389 ms ± 1.55 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
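For completeness, a minimal self-contained reproduction of the example above (assuming nothing beyond a standard pandas import):

import pandas as pd

d = pd.DataFrame({'one': [1, 2, 3, 1], 'two': [1, 2, 3, 1]})

# size of each (one, two) group, broadcast back to its rows, minus the row itself
d['duplicates'] = d.groupby(['one', 'two'], sort=False)['one'].transform('size') - 1
print(d)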
I'm starting to believe that pandas DataFrames are much less intuitive to handle than Excel, but I'm not giving up yet!
So, I'm JUST trying to check data in the same column but in (various) previous rows using the .shift() method. I'm using the following DataFrame as an example, since the original is too complicated to copy in here, but the principle is the same.
counter = list(range(20))
df1 = pd.DataFrame(counter, columns=["Counter"])
df1["Event"] = [True, False, False, False, False, False, True, False,False,False,False,False,False,False,False,False,False,False,False,True]
I'm trying to create sums of the column counter, but only under the following conditions:
If the "Event" = True I want to sum the "Counter" values for the last 10 previous rows before the event happened.
EXCEPT if there is another Event within those 10 previous rows. In this case I only want to sum up the counter values between those two events (without exceeding 10 rows).
To clarify my goal, this is the result I had in mind:
My attempt so far looks like this:
for index, row in df1.iterrows():
    if row["Event"] == True:
        counter = 1
        summ = 0
        while counter < 10 and row["Event"].shift(counter) == False:
            summ += row["Counter"].shift(counter)
            counter += 1
        else:
            df1.at[index, "Sum"] = summ
I'm trying to first find Event == True and from there start iterating backwards with a counter and summing up the counters as I go. However it seems to have a problem with shift:
AttributeError: 'bool' object has no attribute 'shift'
Please shatter my beliefs and show me that Excel isn't actually superior.
We need to create a subgroup key with cumsum, then do a rolling sum:
n = 10
s = (df1.Counter.groupby(df1.Event.iloc[::-1].cumsum())
        .rolling(n + 1, min_periods=1).sum()
        .reset_index(level=0, drop=True)
        .where(df1.Event))
df1['sum'] = (s - df1.Counter).fillna(0)
df1
Counter Event sum
0 0 True 0.0
1 1 False 0.0
2 2 False 0.0
3 3 False 0.0
4 4 False 0.0
5 5 False 0.0
6 6 True 15.0
7 7 False 0.0
8 8 False 0.0
9 9 False 0.0
10 10 False 0.0
11 11 False 0.0
12 12 False 0.0
13 13 False 0.0
14 14 False 0.0
15 15 False 0.0
16 16 False 0.0
17 17 False 0.0
18 18 False 0.0
19 19 True 135.0
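To see why the reversed cumsum forms the right groups, it can help to print the intermediate key on its own (this snippet just pulls the grouping step out of the answer above for inspection):

# counting events from the bottom up: each event row and the rows above it,
# back to (but not including) the previous event, share the same key
key = df1.Event.iloc[::-1].cumsum()
print(pd.concat([df1, key.rename('group_key')], axis=1))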
Element-wise approach
You definitely can approach a task in pandas the way you would in Excel. Your approach needs to be tweaked a bit, though, because pandas.Series.shift operates on whole arrays or Series, not on a single value; you can't use it just to move back up the dataframe relative to a value.
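As a quick aside (an added illustration, not part of the original answer): shift moves a whole column at once, so it has to be called on the Series itself, never on a single row's value:

# shifts the entire Event column down by one row; by contrast,
# row["Event"] inside iterrows() is a plain bool and has no .shift method
print(df1["Event"].shift(1).head())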
The following loops through the indices of your dataframe, walking back up (up to) 10 spots for each Event:
def create_sum_column_loop(df):
    '''
    Adds a Sum column with the rolling sum of 10 Counters prior to an Event
    '''
    df["Sum"] = 0
    for index in range(df.shape[0]):
        counter = 1
        summ = 0
        if df.loc[index, "Event"]:  # == True is implied
            for backup in range(1, 11):
                # handle case where index - backup is before
                # the start of the dataframe
                if index - backup < 0:
                    break
                # stop counting when we hit another event
                if df.loc[index - backup, "Event"]:
                    break
                # increment by the counter
                summ += df.loc[index - backup, "Counter"]
            df.loc[index, "Sum"] = summ
    return df
This does the job:
In [15]: df1_sum1 = create_sum_column_loop(df1.copy()) # copy to preserve original
In [16]: df1_sum1
Counter Event Sum
0 0 True 0
1 1 False 0
2 2 False 0
3 3 False 0
4 4 False 0
5 5 False 0
6 6 True 15
7 7 False 0
8 8 False 0
9 9 False 0
10 10 False 0
11 11 False 0
12 12 False 0
13 13 False 0
14 14 False 0
15 15 False 0
16 16 False 0
17 17 False 0
18 18 False 0
19 19 True 135
Better: vectorized operations
However, the power of pandas comes in its vectorized operations. Python is an interpreted, dynamically-typed language, meaning it's flexible, user friendly (easy to read/write/learn), and slow. To combat this, many commonly-used workflows, including many pandas.Series operations, are written in optimized, compiled code from other languages like C, C++, and Fortran. Under the hood, they're doing the same thing... df1.Counter.cumsum() does loop through the elements and create a running total, but it does it in C, making it lightning fast.
This is what makes learning a framework like pandas difficult - you need to relearn how to do math using that framework. For pandas, the entire game is learning how to use pandas and numpy built-in operators to do your work.
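As a tiny illustration of that point (an added example, not from the original answer), the same running total can be written both ways:

# pure-Python running total: the loop runs in the interpreter, element by element
total, running = 0, []
for value in df1["Counter"]:
    total += value
    running.append(total)

# vectorized running total: the same loop happens in compiled code
running_vec = df1["Counter"].cumsum()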
Borrowing the clever solution from @YOBEN_S:
def create_sum_column_vectorized(df):
    n = 10
    s = (
        df.Counter
        # group by a unique identifier for each event. This is a
        # particularly clever bit, where @YOBEN_S reverses
        # the order of df.Event, then computes a running total
        .groupby(df.Event.iloc[::-1].cumsum())
        # compute the rolling sum within each group
        .rolling(n + 1, min_periods=1).sum()
        # drop the group index so we can align with the original DataFrame
        .reset_index(level=0, drop=True)
        # drop all non-event observations
        .where(df.Event)
    )
    # remove the counter value for the actual event
    # rows, then fill the remaining rows with 0s
    df['sum'] = (s - df.Counter).fillna(0)
    return df
We can see that the result is the same as the one above (though the values are suddenly floats):
In [23]: df1_sum2 = create_sum_column_vectorized(df1.copy()) # copy to preserve original
In [24]: df1_sum2
The difference comes in the performance. In ipython or jupyter we can use the %timeit command to see how long a statement takes to run:
In [25]: %timeit create_sum_column_loop(df1.copy())
3.21 ms ± 54.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [26]: %timeit create_sum_column_vectorized(df1.copy())
7.76 ms ± 255 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For small datasets, like the one in your example, the difference will be negligible or will even slightly favor the pure python loop.
For much larger datasets, the difference becomes apparent. Let's create a dataset similar to your example, but with 100,000 rows:
In [27]: df_big = pd.DataFrame({
...: 'Counter': np.arange(100000),
...: 'Event': np.random.random(size=100000) > 0.9,
...: })
...:
Now, you can really see the performance benefit of the vectorized approach:
In [28]: %timeit create_sum_column_loop(df_big.copy())
13 s ± 101 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [29]: %timeit create_sum_column_vectorized(df_big.copy())
5.81 s ± 28 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
The vectorized version takes less than half the time. This difference will continue to widen as the amount of data increases.
Compiling your own workflows with numba
Note that for specific operations, it is possible to speed up operations further by pre-compiling the code yourself. In this case, the looped version can be compiled with numba:
import numba

@numba.jit(nopython=True)
def _inner_vectorized_loop(counter, event, sum_col):
    for index in range(len(counter)):
        summ = 0
        if event[index]:
            for backup in range(1, 11):
                # handle case where index - backup is before
                # the start of the dataframe
                if index - backup < 0:
                    break
                # stop counting when we hit another event
                if event[index - backup]:
                    break
                # increment by the counter
                summ = summ + counter[index - backup]
            sum_col[index] = summ
    return sum_col

def create_sum_column_loop_jit(df):
    '''
    Adds a Sum column with the rolling sum of 10 Counters prior to an Event
    '''
    df["Sum"] = 0
    df["Sum"] = _inner_vectorized_loop(
        df.Counter.values, df.Event.values, df.Sum.values)
    return df
This beats both pandas and the for loop by a factor of more than 1000!
In [90]: %timeit create_sum_column_loop_jit(df_big.copy())
1.62 ms ± 53.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Balancing readability, efficiency, and flexibility is the constant challenge. Best of luck as you dive in!
from collections import defaultdict

dct = defaultdict(list)
for n in range(len(res)):
    for i in indices_ordered:
        dct[i].append(res[n][i])
Note that res is a list of pandas Series of length 5000, and indices_ordered is a list of strings of length 20000. It takes 23 minutes on my Mac (2.3 GHz Intel Core i5 and 16 GB 2133 MHz LPDDR3) to run this code. I am pretty new to Python, but I feel that more clever code (maybe with less looping) would help a lot.
Edit:
Here is an example of how to create the data (res and indices_ordered) needed to run the snippet above (which is slightly changed to access the only field positionally rather than by field name, since I could not find how to construct a Series with a field name inline).
import random, string, pandas
index_sz = 20000
res_sz = 5000
indices_ordered = [''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10)) for i in range(index_sz)]
res = [pandas.Series([random.randint(0,10) for i in range(index_sz)], index = random.sample(indices_ordered, index_sz)) for i in range(res_sz)]
The issue here is that you iterate over indices_ordered for every single value. Just drop indices_ordered. Stripping it way back in orders of magnitude to test the timings:
import random
import string
import numpy as np
import pandas as pd
from collections import defaultdict
index_sz = 200
res_sz = 50
indices_ordered = [''.join(random.choice(string.ascii_uppercase + string.digits)
for _ in range(10)) for i in range(index_sz)]
res = [pd.Series([random.randint(0,10) for i in range(index_sz)],
index = random.sample(indices_ordered, index_sz))
for i in range(res_sz)]
def your_way(res, indices_ordered):
    dct = defaultdict(list)
    for n in range(len(res)):
        for i in indices_ordered:
            dct[i].append(res[n][i])

def my_way(res):
    dct = defaultdict(list)
    for item in res:
        for string_item, value in item.iteritems():
            dct[string_item].append(value)
Gives:
%timeit your_way(res, indices_ordered)
160 ms ± 5.45 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit my_way(res)
6.79 ms ± 47.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
This reduces the time complexity of the whole approach because you don't keep going through indices_ordered each time and assigning values, so the difference will become much more stark as the size of the data grows.
Just increasing one order of magnitude:
index_sz = 2000
res_sz = 500
Gives:
%timeit your_way(res, indices_ordered)
17.8 s ± 999 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit my_way(res)
543 ms ± 9.07 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
EDIT: Now that testing data is available, it is clear that the changes below have no effect on run-time. The described techniques are only effective when the inner loop is very efficient (on the order of 5-10 dict lookups), making it more efficient still by removing some of the said lookups. Here the r[i] item lookup dwarfs anything else by orders of magnitude, so the optimizations are simply irrelevant.
Your outer loop takes 5000 iterations, and your inner loop 20000 iterations. This means that you are executing 100 million iterations in 23 minutes, i.e. that each iteration takes 13.8 μs. That is not fast, even in Python.
I would try to cut down the run-time by stripping any unnecessary work from the inner loop. Specifically:
convert for n in range(len(res)) followed by res[n] to for r in res. I don't know how efficient item lookup is in pandas, but it's better to do it in the outer than in the inner loop.
move the score attribute lookup to the outer loop.
get rid of defaultdict and pre-create the lists and use an ordinary dict.
avoid dict stores at all and work on the lists directly, pre-creating them in a sequence. Only create a dictionary at the end.
cache the lookup of the append list method, and prepare in advance the (append, i) pairs that the inner loop needs.
Here is code that implements the above suggestions:
# pre-create the lists
lsts = [[] for _ in range(len(indices_ordered))]

# prepare the pairs (appendfn, i)
fast_append = [(l.append, i)
               for (l, i) in zip(lsts, indices_ordered)]

for r in res:
    # pre-fetch res[n].score
    r_score = r.score
    for append, i in fast_append:
        append(r_score[i])

# finally, create the dict out of the lists
dct = {i: lst for (i, lst) in zip(indices_ordered, lsts)}
You really should use a DataFrame.
Here's a way to create the data directly:
import pandas as pd
import numpy as np
import random
import string
index_sz = 3
res_sz = 10
indices_ordered = [''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(3)) for i in range(index_sz)]
df = pd.DataFrame(np.random.randint(10, size=(res_sz, index_sz)), columns=indices_ordered)
There's no need to sort or index anything. A DataFrame can basically be accessed as an array or as a dict.
It should be much faster than juggling with defaultdicts, lists and Series.
df now looks like:
>>> df
7XQ VTV 38Y
0 6 9 5
1 5 5 4
2 6 0 7
3 0 0 8
4 7 8 9
5 8 6 4
6 2 4 9
7 3 2 2
8 7 6 0
9 8 0 1
>>> df['7XQ']
0 6
1 5
2 6
3 0
4 7
5 8
6 2
7 3
8 7
9 8
Name: 7XQ, dtype: int64
>>> df['7XQ'][:5]
0 6
1 5
2 6
3 0
4 7
Name: 7XQ, dtype: int64
With the original size, this script outputs a 5000 rows × 20000 columns DataFrame
in less than 3 seconds on my laptop.
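If the original dict-of-lists output is still needed at the end (an assumption about the desired result), the DataFrame converts back in a single call:

# one list of values per column, keyed by the column label
dct = df.to_dict(orient='list')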
Use pandas magic (with 2 lines of code) on your input list of pd.Series objects:
all_data = pd.concat([*res])
d = all_data.groupby(all_data.index).apply(list).to_dict()
Implied actions:
pd.concat([*res]) - concatenates all the series into a single one, preserving the indices of each series object (pandas.concat)
all_data.groupby(all_data.index).apply(list).to_dict() - groups the concatenated values by index label via all_data.index, puts each group's values into a list with .apply(list), and finally converts the grouped result into a dictionary with .to_dict() (pandas.Series.groupby)
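A tiny hand-made demonstration of those two lines (illustrative data only):

import pandas as pd

res = [pd.Series([1, 2], index=['a', 'b']),
       pd.Series([3, 4], index=['a', 'b'])]

all_data = pd.concat([*res])
d = all_data.groupby(all_data.index).apply(list).to_dict()
print(d)  # {'a': [1, 3], 'b': [2, 4]}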
I have a dataframe of the form:
df = pd.DataFrame({'Date': pd.to_datetime(['2017-01-01', '2017-02-13', '2018-03-01', '2018-04-01']), 'Value': [1, 2, 3, 4]})
And for each year I have a distinct date range (eg for 2017 from 2017-02-02 to 2017-02-15 and for 2018 from 2018-03-03 to 2018-04-04) stored as a dictionary.
dates_dict = {2017: ('2017-02-02', '2017-02-15'), 2018: ('2018-03-03', '2018-04-04')}
What I want to create is a new column in my dataframe which is True if the Date is within that years date range and False otherwise. For the given example the output would be:
df = Date Value in_range
0 2017-01-01 1 False
1 2017-02-13 2 True
2 2018-03-01 3 False
3 2018-04-01 4 True
My current solution is:
temp = []
for name, group in df.groupby(df['Date'].dt.year):
    temp.append((group['Date'] >= dates_dict[name][0]) &
                (group['Date'] <= dates_dict[name][1]))
in_range = pd.concat(temp)
in_range = in_range.rename('in_range')
df = df.merge(in_range.to_frame(), left_index=True, right_index=True)
This works but I'm sure there's a more concise way to achieve this. More generally is there a better way of checking whether a date is within a large list of date ranges?
Setup
You can make your solution more efficient by converting your dictionary to actually contain a pd.date_range. Both of these solutions assume you make this transformation:
dates_dict = {k: pd.date_range(s, e) for k, (s, e) in dates_dict.items()}
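After that conversion each value is a DatetimeIndex, so a membership test against a Timestamp works directly, for example:

# True: 2017-02-13 falls inside the 2017 range defined above
pd.Timestamp('2017-02-13') in dates_dict[2017]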
Option 1
Using apply with a dictionary lookup:
df.Date.apply(lambda x: x in dates_dict[x.year], 1)
0 False
1 True
2 False
3 True
Name: Date, dtype: bool
Option 2
A slightly more performant option using a list comprehension:
df['in_range'] = [i in dates_dict[i.year] for i in df.Date]
Date Value in_range
0 2017-01-01 1 False
1 2017-02-13 2 True
2 2018-03-01 3 False
3 2018-04-01 4 True
Timings
In [208]: %timeit df.Date.apply(lambda x: x in dates_dict[x.year], 1)
289 ms ± 5.77 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [209]: %timeit [i in dates_dict[i.year] for i in df.Date]
284 ms ± 6.26 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
You can use map to create a Series ser holding the value from your dictionary for each Date's year, and then use between, such as:
ser = df.Date.dt.year.map(dates_dict)
df['in_range'] = df.Date.between(pd.to_datetime(ser.str[0]), pd.to_datetime(ser.str[1]))
and you get:
Date Value in_range
0 2017-01-01 1 False
1 2017-02-13 2 True
2 2018-03-01 3 False
3 2018-04-01 4 True
I'm looking for a way to do something like the various rolling_* functions of pandas, but I want the window of the rolling computation to be defined by a range of values (say, a range of values of a column of the DataFrame), not by the number of rows in the window.
As an example, suppose I have this data:
>>> print d
RollBasis ToRoll
0 1 1
1 1 4
2 1 -5
3 2 2
4 3 -4
5 5 -2
6 8 0
7 10 -13
8 12 -2
9 13 -5
If I do something like rolling_sum(d, 5), I get a rolling sum in which each window contains 5 rows. But what I want is a rolling sum in which each window contains a certain range of values of RollBasis. That is, I'd like to be able to do something like d.roll_by(sum, 'RollBasis', 5), and get a result where the first window contains all rows whose RollBasis is between 1 and 5, then the second window contains all rows whose RollBasis is between 2 and 6, then the third window contains all rows whose RollBasis is between 3 and 7, etc. The windows will not have equal numbers of rows, but the range of RollBasis values selected in each window will be the same. So the output should be like:
>>> d.roll_by(sum, 'RollBasis', 5)
1 -4 # sum of elements with 1 <= Rollbasis <= 5
2 -4 # sum of elements with 2 <= Rollbasis <= 6
3 -6 # sum of elements with 3 <= Rollbasis <= 7
4 -2 # sum of elements with 4 <= Rollbasis <= 8
# etc.
I can't do this with groupby, because groupby always produces disjoint groups. I can't do it with the rolling functions, because their windows always roll by number of rows, not by values. So how can I do it?
I think this does what you want:
In [1]: df
Out[1]:
RollBasis ToRoll
0 1 1
1 1 4
2 1 -5
3 2 2
4 3 -4
5 5 -2
6 8 0
7 10 -13
8 12 -2
9 13 -5
In [2]: def f(x):
   ...:     ser = df.ToRoll[(df.RollBasis >= x) & (df.RollBasis < x+5)]
   ...:     return ser.sum()
The above function takes a value, in this case RollBasis, and then indexes the DataFrame column ToRoll based on that value. The returned series consists of the ToRoll values whose RollBasis falls within the [value, value + 5) window. Finally, that series is summed and returned.
In [3]: df['Rolled'] = df.RollBasis.apply(f)
In [4]: df
Out[4]:
RollBasis ToRoll Rolled
0 1 1 -4
1 1 4 -4
2 1 -5 -4
3 2 2 -4
4 3 -4 -6
5 5 -2 -2
6 8 0 -15
7 10 -13 -20
8 12 -2 -7
9 13 -5 -5
Code for the toy example DataFrame in case someone else wants to try:
In [1]: from pandas import *
In [2]: import io
In [3]: text = """\
...: RollBasis ToRoll
...: 0 1 1
...: 1 1 4
...: 2 1 -5
...: 3 2 2
...: 4 3 -4
...: 5 5 -2
...: 6 8 0
...: 7 10 -13
...: 8 12 -2
...: 9 13 -5
...: """
In [4]: df = read_csv(io.BytesIO(text), header=0, index_col=0, sep='\s+')
Based on Zelazny7's answer, I created this more general solution:
def rollBy(what, basis, window, func):
    def applyToWindow(val):
        chunk = what[(val <= basis) & (basis < val + window)]
        return func(chunk)
    return basis.apply(applyToWindow)
>>> rollBy(d.ToRoll, d.RollBasis, 5, sum)
0 -4
1 -4
2 -4
3 -4
4 -6
5 -2
6 -15
7 -20
8 -7
9 -5
Name: RollBasis
It's still not ideal as it is very slow compared to rolling_apply, but perhaps this is inevitable.
Based on BrenBarn's answer, but sped up by using label-based indexing rather than boolean indexing:
def rollBy(what, basis, window, func, *args, **kwargs):
    # note that basis must be sorted in order for this to work properly
    indexed_what = pd.Series(what.values, index=basis.values)
    def applyToWindow(val):
        # using slice_indexer rather than what.loc[val:val+window] allows
        # window limits that are not specifically in the index
        indexer = indexed_what.index.slice_indexer(val, val + window, 1)
        chunk = indexed_what[indexer]
        return func(chunk, *args, **kwargs)
    rolled = basis.apply(applyToWindow)
    return rolled
This is much faster than not using an indexed column:
In [46]: df = pd.DataFrame({"RollBasis":np.random.uniform(0,1000000,100000), "ToRoll": np.random.uniform(0,10,100000)})
In [47]: df = df.sort("RollBasis")
In [48]: timeit("rollBy_Ian(df.ToRoll,df.RollBasis,10,sum)",setup="from __main__ import rollBy_Ian,df", number =3)
Out[48]: 67.6615059375763
In [49]: timeit("rollBy_Bren(df.ToRoll,df.RollBasis,10,sum)",setup="from __main__ import rollBy_Bren,df", number =3)
Out[49]: 515.0221037864685
It's worth noting that the index-based solution is O(n), while the logical slicing version is O(n^2) in the average case (I think).
I find it more useful to do this over evenly spaced windows from the min value of Basis to the max value of Basis, rather than at every value of basis. This means altering the function thus:
def rollBy(what, basis, window, func, *args, **kwargs):
    # note that basis must be sorted in order for this to work properly
    windows_min = basis.min()
    windows_max = basis.max()
    window_starts = np.arange(windows_min, windows_max, window)
    window_starts = pd.Series(window_starts, index=window_starts)
    indexed_what = pd.Series(what.values, index=basis.values)
    def applyToWindow(val):
        # using slice_indexer rather than what.loc[val:val+window] allows
        # window limits that are not specifically in the index
        indexer = indexed_what.index.slice_indexer(val, val + window, 1)
        chunk = indexed_what[indexer]
        return func(chunk, *args, **kwargs)
    rolled = window_starts.apply(applyToWindow)
    return rolled
To extend the answer of @Ian Sudbury: I've extended it in such a way that one can use it directly on a dataframe, by binding the method to the DataFrame class (I expect there might be some speed improvements possible in my code, because I do not know how to access all the internals of the class).
I've also added functionality for backward facing windows and centered windows. They only function perfectly when you're away from the edges.
import pandas as pd
import numpy as np

def roll_by(self, basis, window, func, forward=True, *args, **kwargs):
    the_indexed = pd.Index(self[basis])
    def apply_to_window(val):
        if forward == True:
            indexer = the_indexed.slice_indexer(val, val + window)
        elif forward == False:
            indexer = the_indexed.slice_indexer(val - window, val)
        elif forward == 'both':
            indexer = the_indexed.slice_indexer(val - window/2, val + window/2)
        else:
            raise RuntimeError('Invalid option for "forward". Can only be True, False, or "both".')
        chunck = self.iloc[indexer]
        return func(chunck, *args, **kwargs)
    rolled = self[basis].apply(apply_to_window)
    return rolled

pd.DataFrame.roll_by = roll_by
For the other tests, I've used the following definitions:
def rollBy_Ian_iloc(what, basis, window, func, *args, **kwargs):
    # note that basis must be sorted in order for this to work properly
    indexed_what = pd.Series(what.values, index=basis.values)
    def applyToWindow(val):
        # using slice_indexer rather than what.loc[val:val+window] allows
        # window limits that are not specifically in the index
        indexer = indexed_what.index.slice_indexer(val, val + window, 1)
        chunk = indexed_what.iloc[indexer]
        return func(chunk, *args, **kwargs)
    rolled = basis.apply(applyToWindow)
    return rolled

def rollBy_Ian_index(what, basis, window, func, *args, **kwargs):
    # note that basis must be sorted in order for this to work properly
    indexed_what = pd.Series(what.values, index=basis.values)
    def applyToWindow(val):
        # using slice_indexer rather than what.loc[val:val+window] allows
        # window limits that are not specifically in the index
        indexer = indexed_what.index.slice_indexer(val, val + window, 1)
        chunk = indexed_what[indexed_what.index[indexer]]
        return func(chunk, *args, **kwargs)
    rolled = basis.apply(applyToWindow)
    return rolled

def rollBy_Bren(what, basis, window, func):
    def applyToWindow(val):
        chunk = what[(val <= basis) & (basis < val + window)]
        return func(chunk)
    return basis.apply(applyToWindow)
Timings and tests:
df = pd.DataFrame({"RollBasis":np.random.uniform(0,100000,10000), "ToRoll": np.random.uniform(0,10,10000)}).sort_values("RollBasis")
In [14]: %timeit rollBy_Ian_iloc(df.ToRoll,df.RollBasis,10,sum)
...: %timeit rollBy_Ian_index(df.ToRoll,df.RollBasis,10,sum)
...: %timeit rollBy_Bren(df.ToRoll,df.RollBasis,10,sum)
...: %timeit df.roll_by('RollBasis', 10, lambda x: x['ToRoll'].sum())
...:
484 ms ± 28.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.58 s ± 10.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3.12 s ± 22.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.48 s ± 45.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Conclusion: the bound method is not as fast as the method by @Ian Sudbury, but not as slow as that of @BrenBarn, and it allows for more flexibility regarding the functions one can call on them.