Filter pandas index by function - python

I want to filter a pandas dataframe by a function along the index. I can't seem to find a built-in way of performing this action.
So essentially, I have a function that through some arbitrarily complicated means determines whether a particular index should be included, I'll call it filter_func for this example. I wish to apply exactly what the below code does, but to the index:
new_index = filter(filter_func, df.index)
And only include the values that the filter_func allows. The index could also be any type.
This is a pretty common operation in data manipulation, so I imagine there's a built-in way of doing it.
ETA:
I found that indexing the dataframe by a list of booleans will do what I want, but it still requires double the space of the index in order to apply the filter. So my question still stands: is there a built-in way of doing this that does not require twice the space?
Here's an example:
import pandas as pd
df = pd.DataFrame({"value":[12,34,2,23,6,23,7,2,35,657,1,324]})
def filter_func(ind, n=0):
    if n > 200: return False
    if ind % 79 == 0: return True
    return filter_func(ind + ind - 1, n + 1)

new_index = filter(filter_func, df.index)
And I want to do this:
mask = []
for i in df.index:
    mask.append(filter_func(i))
df = df[mask]
But in a way that doesn't take twice the space of the index to do so

You can use map instead of filter and then use boolean indexing:
df.loc[map(filter_func, df.index)]
value
0 12
4 6
7 2
8 35
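If the memory used by the mask itself is a concern (the point raised in the edit above), one option, not from this answer, is to materialize the mask as a compact NumPy boolean array rather than a Python list; a minimal sketch:
import numpy as np

# Build the mask from a generator without an intermediate list; a bool array
# stores one byte per row, so it is considerably smaller than a list of bools.
mask = np.fromiter((filter_func(i) for i in df.index), dtype=bool, count=len(df))
df_filtered = df[mask]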

Have you tried using df.apply?
>>> df = pd.DataFrame(np.arange(9).reshape(3, 3), columns=['a', 'b', 'c'])
a b c
0 0 1 2
1 3 4 5
2 6 7 8
df[df.apply(lambda x: x['c']%2 == 0, axis = 1)]
a b c
0 0 1 2
2 6 7 8
You can customize the lambda function in any way you want, let me know if this isn't what you're looking for.
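For the index-based filtering in the original question, the same pattern can use the row's name attribute (its index label); a small sketch, reusing the question's filter_func:
# Each row is passed to the lambda as a Series whose .name is its index label,
# so filter_func can be evaluated against the index via apply(axis=1).
df[df.apply(lambda row: filter_func(row.name), axis=1)]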

If you want to avoid referencing df explicitly inside the filtering condition, you can use the following:
import pandas as pd
df = pd.DataFrame({"value":[12,34,2,23,6,23,7,2,35,657,1,324]}, dtype=object)
df.apply(lambda x: x if filter_func(x.name) else None, axis=1, result_type='broadcast').dropna()

Related

How can I vectorize the apply + filter operation on pandas.DataFrame?

Imagine that I have a Dataframe and the columns are [A,B,C]. There are some different values for each of these columns. And I want to produce one more column D which can be received with the following function:
def produce_column(i):
    # Extract current row by index
    raw = df.loc[i]
    # Extract previous 3 values for the same sub-df which are before i
    df_same = df[
        (df['A'] == raw.A)
        & (df['B'] == raw.B)
    ].loc[:i].tail(3)
    # Check that we have enough values
    if df_same.shape[0] != 3:
        return False
    # Doesn't matter which function is in use, I just need to apply it on the column / columns
    diffs = df_same['C'].map(lambda x: x <= 10 and x > 0)
    return all(diffs)

df['D'] = df.index.map(lambda x: produce_column(x))
So on each step I need to get the sub-DataFrame that has the same set of properties as the current row and perform some operations on its columns. I have a few hundred thousand rows, so this code takes a long time to execute. I think a good idea would be to vectorize the operation, but I don't know how to do that. Maybe there's another way to perform this?
Thanks in advance!
UPD Here's an example
df = pd.DataFrame([(1,2,3), (4,5,6), (7,8,9)], columns=['A','B','C'])
A B C
0 1 2 3
1 4 5 6
2 7 8 9
df['D'] = df.index.map(lambda x: produce_column(x))
A B C D
0 1 2 3 True
1 4 5 6 True
2 7 8 9 False
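No vectorized answer is shown in this excerpt, but one possible sketch, assuming the (0, 10] check from produce_column and a rolling window of 3 within each (A, B) group, could look like this:
import pandas as pd

df = pd.DataFrame([(1, 2, 3), (4, 5, 6), (7, 8, 9)], columns=['A', 'B', 'C'])

# True where C lies in (0, 10], the same check produce_column applies.
ok = df['C'].gt(0) & df['C'].le(10)

# Within each (A, B) group, the rolling minimum over the last 3 checks is 1
# only when all 3 are True; groups with fewer than 3 rows yield NaN, which
# compares as False, mirroring the shape check in produce_column.
df['D'] = (
    ok.astype(int)
      .groupby([df['A'], df['B']])
      .transform(lambda s: s.rolling(3).min())
      .eq(1)
)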

Apply function using multiple Pandas columns? [duplicate]

This question already has answers here:
How to apply a function to two columns of Pandas dataframe
(15 answers)
Closed 4 years ago.
I need to make a column in my pandas dataframe that relies on other items in that same row. For example, here's my dataframe.
df = pd.DataFrame(
[['a',],['a',1],['a',1],['a',2],['b',2],['b',2],['c',3]],
columns=['letter','number']
)
letters numbers
0 a 1
1 a 1
2 a 1
3 a 2
4 b 2
5 b 2
6 c 3
I need a third column, that is 1 if 'a' and 2 are present in the row, and 0 otherwise. So it would be `[0, 0, 0, 1, 0, 0, 0]`.
How can I use Pandas `apply` or `map` to do this? Iterating over the rows is my first thought, but this seems like a clumsy way of doing it.
You can use apply with axis=1. Suppose you wanted to call your new column c:
df['c'] = df.apply(
    lambda row: (row['letter'] == 'a') and (row['number'] == 2),
    axis=1
).astype(int)
print(df)
# letter number c
#0 a NaN 0
#1 a 1.0 0
#2 a 1.0 0
#3 a 2.0 1
#4 b 2.0 0
#5 b 2.0 0
#6 c 3.0 0
But apply is slow and should be avoided if possible. In this case, it would be much better to use boolean logic operations, which are vectorized.
df['c'] = ((df['letter'] == "a") & (df['number'] == 2)).astype(int)
This has the same result as using apply above.
You can try using pd.Series.where() / np.where(). If you are only interested in the int representation of the boolean values, you can pick the other solution. If you want more freedom over the if/else values, you can use np.where():
import pandas as pd
import numpy as np
# create example
values = ['a', 'b', 'c']
df = pd.DataFrame()
df['letter'] = np.random.choice(values, size=10)
df['number'] = np.random.randint(1,3, size=10)
# condition
df['result'] = np.where((df['letter'] == 'a') & (df['number'] == 2), 1, 0)
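The pd.Series.where() variant mentioned above is not shown in the answer; a minimal sketch (the result column name is just illustrative), keeping 1 where the condition holds and 0 otherwise:
# Keep 1 where the condition is True, otherwise fall back to 0.
cond = (df['letter'] == 'a') & (df['number'] == 2)
df['result_where'] = pd.Series(1, index=df.index).where(cond, 0)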

Reference values in the previous row with map or apply

Given a dataframe df, I would like to generate a new variable/column for each row based on the values in the previous row. df is sorted so that the order of the rows is meaningful.
Normally, we can use either map or apply, but it seems that neither of them allows access to values in the previous row.
For example, given existing columns a b c, I want to generate a new column d, which is based on some calculation using the value of c in the previous row.
How should I do it in pandas?
If you just want to do a calculation based on the previous row, you can calculate and then shift:
In [2]: df = pd.DataFrame({'a':[0,1,2], 'b':[0,10,20]})
In [3]: df
Out[3]:
a b
0 0 0
1 1 10
2 2 20
# a calculation based on other column
In [4]: df['c'] = df['b'] + 1
# shift the column
In [5]: df['c'] = df['c'].shift()
In [6]: df
Out[6]:
a b c
0 0 0 NaN
1 1 10 1
2 2 20 11
If you want to do a calculation based on multiple rows, you could look at the rolling_apply function (http://pandas.pydata.org/pandas-docs/stable/computation.html#moving-rolling-statistics-moments and http://pandas.pydata.org/pandas-docs/stable/generated/pandas.rolling_apply.html#pandas.rolling_apply)
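As a small illustration (the column name is just for the example; in current pandas the old rolling_apply function has been folded into the .rolling() interface), summing the two previous values of b:
# rolling(2).sum() covers the current and previous row; shifting by one makes
# each row see only the values strictly before it.
df['prev_two_b'] = df['b'].rolling(2).sum().shift()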
You can use the dataframe 'apply' function and leverage the unused 'kwargs' parameter to store the previous row.
import pandas as pd

df = pd.DataFrame({'a': [0, 1, 2], 'b': [0, 10, 20]})
new_col = 'c'

def apply_func_decorator(func):
    prev_row = {}
    def wrapper(curr_row, **kwargs):
        val = func(curr_row, prev_row)
        prev_row.update(curr_row)
        prev_row[new_col] = val
        return val
    return wrapper

@apply_func_decorator
def running_total(curr_row, prev_row):
    return curr_row['a'] + curr_row['b'] + prev_row.get('c', 0)

df[new_col] = df.apply(running_total, axis=1)
print(df)
# Output will be:
# a b c
# 0 0 0 0
# 1 1 10 11
# 2 2 20 33
This example uses a decorator to store the previous row in a dictionary and then pass it to the function when Pandas calls it on the next row.
Disclaimer 1: The 'prev_row' variable starts off empty for the first row so when using it in the apply function I had to supply a default value to avoid a 'KeyError'.
Disclaimer 2: I am fairly certain this will be slower than a plain apply operation, but I did not do any tests to figure out how much.
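For this particular running total, a vectorized equivalent (a sketch, not part of the original answer) avoids apply entirely:
# c[i] is the cumulative sum of (a + b) up to and including row i,
# which is exactly what the decorated running_total computes.
df['c'] = (df['a'] + df['b']).cumsum()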

Efficient way to apply multiple filters to pandas DataFrame or Series

I have a scenario where a user wants to apply several filters to a Pandas DataFrame or Series object. Essentially, I want to efficiently chain a bunch of filtering (comparison operations) together that are specified at run-time by the user.
The filters should be additive (i.e., each one applied should narrow the results).
I'm currently using reindex() (as below) but this creates a new object each time and copies the underlying data (if I understand the documentation correctly). I want to avoid this unnecessary copying as it will be really inefficient when filtering a big Series or DataFrame.
I'm thinking that using apply(), map(), or something similar might be better. I'm pretty new to Pandas though so still trying to wrap my head around everything.
Also, I would like to expand this so that the dictionary passed in can include the columns to operate on and filter an entire DataFrame based on the input dictionary. However, I'm assuming whatever works for a Series can be easily expanded to a DataFrame.
TL;DR
I want to take a dictionary of the following form and apply each operation to a given Series object and return a 'filtered' Series object.
relops = {'>=': [1], '<=': [1]}
Long Example
I'll start with an example of what I have currently and just filtering a single Series object. Below is the function I'm currently using:
def apply_relops(series, relops):
    """
    Pass dictionary of relational operators to perform on given series object
    """
    for op, vals in relops.iteritems():
        op_func = ops[op]
        for val in vals:
            filtered = op_func(series, val)
            series = series.reindex(series[filtered])
    return series
The user provides a dictionary with the operations they want to perform:
>>> df = pandas.DataFrame({'col1': [0, 1, 2], 'col2': [10, 11, 12]})
>>> print df
col1 col2
0 0 10
1 1 11
2 2 12
>>> from operator import le, ge
>>> ops ={'>=': ge, '<=': le}
>>> apply_relops(df['col1'], {'>=': [1]})
col1
1 1
2 2
Name: col1
>>> apply_relops(df['col1'], relops = {'>=': [1], '<=': [1]})
col1
1 1
Name: col1
Again, the 'problem' with my above approach is that I think there is a lot of possibly unnecessary copying of the data for the in-between steps.
Pandas (and numpy) allow for boolean indexing, which will be much more efficient:
In [11]: df.loc[df['col1'] >= 1, 'col1']
Out[11]:
1 1
2 2
Name: col1
In [12]: df[df['col1'] >= 1]
Out[12]:
col1 col2
1 1 11
2 2 12
In [13]: df[(df['col1'] >= 1) & (df['col1'] <=1 )]
Out[13]:
col1 col2
1 1 11
If you want to write helper functions for this, consider something along these lines:
In [14]: def b(x, col, op, n):
    ...:     return op(x[col], n)
In [15]: def f(x, *b):
    ...:     return x[(np.logical_and(*b))]
In [16]: b1 = b(df, 'col1', ge, 1)
In [17]: b2 = b(df, 'col1', le, 1)
In [18]: f(df, b1, b2)
Out[18]:
col1 col2
1 1 11
Update: pandas 0.13 has a query method for this kind of use case; assuming column names are valid identifiers, the following works (and can be more efficient for large frames, as it uses numexpr behind the scenes):
In [21]: df.query('col1 <= 1 & 1 <= col1')
Out[21]:
col1 col2
1 1 11
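A note not in the original answer: query can also reference local Python variables with the @ prefix, which helps when the bounds are chosen at run-time; for example:
lo, hi = 1, 1
df.query('@lo <= col1 <= @hi')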
Chaining conditions creates long lines, which are discouraged by PEP 8.
Using the .query method forces you to use strings, which is powerful but unpythonic and not very dynamic.
Once each of the filters is in place, one approach could be:
import numpy as np
import functools
def conjunction(*conditions):
    return functools.reduce(np.logical_and, conditions)
c_1 = data.col1 == True
c_2 = data.col2 < 64
c_3 = data.col3 != 4
data_filtered = data[conjunction(c_1,c_2,c_3)]
np.logical_and is vectorized and fast, but it does not accept more than two arguments, which is why functools.reduce is used.
Note that this still has some redundancies:
Short-circuiting does not happen on a global level
Each of the individual conditions runs on the whole initial data
Still, I expect this to be efficient enough for many applications and it is very readable. You can also make a disjunction (wherein only one of the conditions needs to be true) by using np.logical_or instead:
import numpy as np
import functools
def disjunction(*conditions):
    return functools.reduce(np.logical_or, conditions)
c_1 = data.col1 == True
c_2 = data.col2 < 64
c_3 = data.col3 != 4
data_filtered = data[disjunction(c_1,c_2,c_3)]
Simplest of All Solutions:
Use:
filtered_df = df[(df['col1'] >= 1) & (df['col1'] <= 5)]
Another example: to filter the dataframe for values belonging to Feb-2018, use the code below:
filtered_df = df[(df['year'] == 2018) & (df['month'] == 2)]
Since the pandas 0.22 update, comparison methods are available, such as:
gt (greater than)
lt (less than)
eq (equals to)
ne (not equals to)
ge (greater than or equals to)
and many more. These methods return a boolean Series. Let's see how we can use them:
# sample data
df = pd.DataFrame({'col1': [0, 1, 2,3,4,5], 'col2': [10, 11, 12,13,14,15]})
# get values from col1 greater than or equals to 1
df.loc[df['col1'].ge(1),'col1']
1 1
2 2
3 3
4 4
5 5
# where col1 value is between 0 and 2
df.loc[df['col1'].between(0,2)]
col1 col2
0 0 10
1 1 11
2 2 12
# where col1 > 1
df.loc[df['col1'].gt(1)]
col1 col2
2 2 12
3 3 13
4 4 14
5 5 15
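These comparison methods can also be combined with & and |, just like the operator form; a brief illustrative example on the same sample data:
# where col1 is between 1 and 3 inclusive
df.loc[df['col1'].ge(1) & df['col1'].le(3)]
col1 col2
1 1 11
2 2 12
3 3 13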
Why not do this?
def filt_spec(df, col, val, op):
    import operator
    ops = {'eq': operator.eq, 'neq': operator.ne, 'gt': operator.gt,
           'ge': operator.ge, 'lt': operator.lt, 'le': operator.le}
    return df[ops[op](df[col], val)]

pd.DataFrame.filt_spec = filt_spec
Demo:
df = pd.DataFrame({'a': [1,2,3,4,5], 'b':[5,4,3,2,1]})
df.filt_spec('a', 2, 'ge')
Result:
a b
1 2 4
2 3 3
3 4 2
4 5 1
You can see that column 'a' has been filtered where a >=2.
This is slightly faster (typing time, not performance) than operator chaining. You could of course put the import at the top of the file.
We can also select rows based on the values of a column that are not in a list or other iterable. We create a boolean mask just like before, but now we negate it by placing ~ in front.
For example:
values = [1, 0]
df[~df.col1.isin(values)]
If you want to check any/all of multiple columns for a value, you can do:
df[(df[['HomeTeam', 'AwayTeam']] == 'Fulham').any(axis=1)]
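And correspondingly, to require that all of the listed columns match the value (a small addition for completeness):
df[(df[['HomeTeam', 'AwayTeam']] == 'Fulham').all(axis=1)]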
