Use a function instead of string in Pandas Groupby Agg - python

When aggregating data in Pandas I am able to pass strings like "count", "sum", "mean", etc. to aggregate data. Are there functions I can use instead of these strings that would provide equivalent behavior? For example, if I try to use pd.Series.count instead of "count", the runtime takes a sizable hit.
import pandas as pd
import numpy as np
n = 10000000
df_nan = pd.DataFrame({"a": np.random.randint(0, 100, n*2),
                       "b": np.linspace(0, 100, n).tolist() + [None]*n})
%timeit df_nan.groupby("a").agg({"b": pd.Series.count})
1.63 s ± 28 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit df_nan.groupby("a").agg({"b": "count"})
479 ms ± 18.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Any idea what function I could pass instead?
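For what it's worth, a quick check of my own (a sketch, not from the question): as far as I know, agg maps NumPy reductions such as np.sum or np.mean to their string equivalents, but there is no such mapping for count, so the closest "function-like" fast path is calling the GroupBy method directly, which gives the same result as the string:
fast_str = df_nan.groupby("a").agg({"b": "count"})
fast_method = df_nan.groupby("a")["b"].count()  # the GroupBy.count method, same fast path as the string
assert fast_str["b"].equals(fast_method)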

Related

Efficient way to calculate difference from pandas datetime columns based on days

I have a dataframe with a few million rows where I want to calculate the difference in days between two columns which are in datetime format.
There are Stack Overflow questions which answer this on a timestamp basis (see here).
Doing it on a timestamp basis felt quite fast:
df["Difference"] = (df["end_date"] - df["start_date"]).dt.days
But doing it on a daily basis felt quite slow:
df["Difference"] = (df["end_date"].dt.date - df["start_date"].dt.date).dt.days
I was wondering if there is an easy but better/faster way to achieve the same result?
Example Code:
import pandas as pd
import numpy as np
data = {'Condition': ["a", "a", "b"],
        'start_date': [pd.Timestamp('2022-01-01 23:00:00.000000'), pd.Timestamp('2022-01-01 23:00:00.000000'), pd.Timestamp('2022-01-01 23:00:00.000000')],
        'end_date': [pd.Timestamp('2022-01-02 01:00:00.000000'), pd.Timestamp('2022-02-01 23:00:00.000000'), pd.Timestamp('2022-01-02 01:00:00.000000')]}
df = pd.DataFrame(data)
df["Right_Difference"] = np.where((df["Condition"] == "a"), ((df["end_date"].dt.date - df["start_date"].dt.date).dt.days), np.nan)
df["Wrong_Difference"] = np.where((df["Condition"] == "a"), ((df["end_date"] - df["start_date"]).dt.days), np.nan)
Use Series.dt.to_period; faster still are Series.dt.normalize or Series.dt.floor:
#300k rows
df = pd.concat([df] * 100000, ignore_index=True)
In [286]: %timeit (df["end_date"].dt.date - df["start_date"].dt.date).dt.days
1.14 s ± 135 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [287]: %timeit df["end_date"].dt.to_period('d').astype('int') - df["start_date"].dt.to_period('d').astype('int')
64.1 ms ± 3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [288]: %timeit (df["end_date"].dt.normalize() - df["start_date"].dt.normalize()).dt.days
27.7 ms ± 316 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [289]: %timeit (df["end_date"].dt.floor('d') - df["start_date"].dt.floor('d')).dt.days
27.7 ms ± 937 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
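Putting the faster variant back into the conditional example from the question (a sketch reusing the question's column names):
# same logic as "Right_Difference" above, but with dt.normalize() instead of the slow dt.date round trip
day_diff = (df["end_date"].dt.normalize() - df["start_date"].dt.normalize()).dt.days
df["Right_Difference"] = np.where(df["Condition"] == "a", day_diff, np.nan)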

How does DataFrameGroupBy.apply handle large dataframes with a duplicate index in pandas?

Suppose that we have a large dataframe with duplicate index,
# IPython
In [1]: import pandas as pd
In [2]: from numpy.random import randint
In [3]: df = pd.DataFrame({'a': randint(1, 10, 10000)}, index=randint(1, 10, 10000))
and we want to group by column 'a' and do some operations using apply, such as (just do nothing in apply)
In [4]: df.groupby('a').apply(lambda x: x)
which will take a long time:
In [5]: %timeit df.groupby('a').apply(lambda x: x)
19.9 s ± 322 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
If the size gets bigger, it becomes unbearable. However, if we reset_index first, it is fast.
In [6]: %timeit df.reset_index(drop=True).groupby('a').apply(lambda x: x)
2.24 ms ± 60.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
So my question is: how does apply handle a duplicate index, and why is it so slow?
Thanks for any help.
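No answer is quoted here, but following the question's own observation, a practical workaround is to group on a frame with a clean RangeIndex and restore the duplicated index afterwards; a sketch (the lambda is only a placeholder for real per-group work):
res = (df.reset_index()                    # the duplicated index becomes column 'index'
         .groupby('a', group_keys=False)
         .apply(lambda x: x)               # placeholder for the real per-group work
         .set_index('index')
         .rename_axis(None))               # drop the temporary index name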

Is .isin() faster than .query()?

Question:
Hi,
When searching for methods to make a selection of a dataframe (being relatively inexperienced with Pandas), I had the following question:
What is faster for large datasets - .isin() or .query()?
query is somewhat more intuitive to read, so it is my preferred approach given my line of work. However, testing it on a very small example dataset, query seems to be much slower.
Is there anyone who has tested this properly before? If so, what were the outcomes? I searched the web, but could not find another post on this.
See the sample code below, which works for Python 3.8.5.
Thanks a lot in advance for your help!
Code:
# Packages
import pandas as pd
import timeit
import numpy as np
# Create dataframe
df = pd.DataFrame({'name': ['Foo', 'Bar', 'Faz'],
                   'owner': ['Canyon', 'Endurace', 'Bike']},
                  index=['Frame', 'Type', 'Kind'])
# Show dataframe
df
# Create filter
selection = ['Canyon']
# Filter dataframe using 'isin' (type 1)
df_filtered = df[df['owner'].isin(selection)]
%timeit df_filtered = df[df['owner'].isin(selection)]
213 µs ± 14 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# Filter dataframe using 'isin' (type 2)
df[np.isin(df['owner'].values, selection)]
%timeit df_filtered = df[np.isin(df['owner'].values, selection)]
128 µs ± 3.11 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
# Filter dataframe using 'query'
df_filtered = df.query("owner in @selection")
%timeit df_filtered = df.query("owner in @selection")
1.15 ms ± 9.35 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The best test is on real data; here is a quick comparison for 3k, 300k and 3M rows with this sample data:
selection = ['Hedge']
df = pd.concat([df] * 1000, ignore_index=True)
In [139]: %timeit df[df['owner'].isin(selection)]
449 µs ± 58 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [140]: %timeit df.query("owner in @selection")
1.57 ms ± 33.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
df = pd.concat([df] * 100000, ignore_index=True)  # 300k rows, starting again from the original 3-row df
In [142]: %timeit df[df['owner'].isin(selection)]
8.25 ms ± 66.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [143]: %timeit df.query("owner in @selection")
13 ms ± 1.05 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
df = pd.concat([df] * 1000000, ignore_index=True)  # 3M rows, starting again from the original 3-row df
In [145]: %timeit df[df['owner'].isin(selection)]
94.5 ms ± 9.28 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [146]: %timeit df.query("owner in @selection")
112 ms ± 499 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
If we check the docs:
DataFrame.query() using numexpr is slightly faster than Python for large frames
Conclusion: the best test is on real data, because performance depends on the number of rows, the number of matched values, and the length of the selection list.
A perfplot over some generated data:
Assuming some hypothetical data, as well as a proportionally increasing selection size (10% of frame size).
Sample data for n=10:
df:
name owner
0 Constant JoVMq
1 Constant jiKNB
2 Constant WEqhm
3 Constant pXNqB
4 Constant SnlbV
5 Constant Euwsj
6 Constant QPPbs
7 Constant Nqofa
8 Constant qeUKP
9 Constant ZBFce
Selection:
['ZBFce']
Performance reflects the docs. At smaller frames the overhead of query is significant compared to isin. However, at frames of around 200k rows the performance is comparable to isin, and at frames of around 10m rows query starts to become more performant.
I agree with @jezrael that this is, as with most pandas runtime problems, very data dependent, and the best test would be to run it on real datasets for a given use case and make a decision based on that.
Edit: Included @AlexanderVolkovsky's suggestion to convert selection to a set and use apply + in:
Perfplot Code:
import string
import numpy as np
import pandas as pd
import perfplot
charset = list(string.ascii_letters)
np.random.seed(5)
def gen_data(n):
    df = pd.DataFrame({'name': 'Constant',
                       'owner': [''.join(np.random.choice(charset, 5))
                                 for _ in range(n)]})
    selection = df['owner'].sample(frac=.1).tolist()
    return df, selection, set(selection)
def test_isin(params):
    df, selection, _ = params
    return df[df['owner'].isin(selection)]
def test_query(params):
    df, selection, _ = params
    return df.query("owner in @selection")
def test_apply_over_set(params):
    df, _, set_selection = params
    return df[df['owner'].apply(lambda x: x in set_selection)]
if __name__ == '__main__':
    out = perfplot.bench(
        setup=gen_data,
        kernels=[
            test_isin,
            test_query,
            test_apply_over_set
        ],
        labels=[
            'test_isin',
            'test_query',
            'test_apply_over_set'
        ],
        n_range=[2 ** k for k in range(25)],
        equality_check=None
    )
    out.save('perfplot_results.png', transparent=False)
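If one also wanted to benchmark the question's np.isin variant, an extra kernel could be added to the script above (a sketch; the function name is my own), and it would also need to be listed under kernels and labels:
def test_np_isin(params):
    # the question's "type 2" filter: np.isin over the underlying values
    df, selection, _ = params
    return df[np.isin(df['owner'].values, selection)]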

How to round calculations with pandas

I know how to simply round a column in pandas (link); however, my problem is how to round and do a calculation at the same time in pandas.
df['age_new'] = df['age'].apply(lambda x: round(x['age'] * 0.024319744084, 0.000000000001))
TypeError: 'float' object is not subscriptable
Is there any way to do this?
.apply is not vectorized.
When using .apply on a pandas.Series, like 'age', the lambda variable x is each scalar value of the column, so the correct syntax is round(x * 0.0243, 4).
The ndigits parameter of round requires an int, not a float.
It is faster to use vectorized methods, like .mul, and then .round.
In this case, with 1000 rows, the vectorized method is about 4 times faster than using .apply.
import pandas as pd
import numpy as np
# test data
np.random.seed(365)
df = pd.DataFrame({'age': np.random.randint(110, size=(1000))})
%%timeit
df.age.mul(0.024319744084).round(5)
[out]:
212 µs ± 3.86 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
(df['age'] * 0.024319744084).round(5)
[out]:
211 µs ± 9.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
df.age.apply(lambda x: round(x * 0.024319744084, 5))
[out]:
845 µs ± 20.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
There are two problems:
x is already a value from the age column, so x['age'] is not needed inside the lambda (that's why you get the error);
round takes an int as its second argument.
Try
df['age_new'] = df['age'].apply(lambda x: round(x * 0.024319744084, 5))
(5 is just an example.)
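Putting the vectorized form back into the question's assignment (5 digits is just an example, as noted above):
# vectorized multiply-then-round, no .apply needed
df['age_new'] = df['age'].mul(0.024319744084).round(5)
# equivalently: df['age_new'] = (df['age'] * 0.024319744084).round(5)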

Pandas - Add values from series to dataframe column based on index of series matching some value in dataframe

Data
pb = {"mark_up_id":{"0":"123","1":"456","2":"789","3":"111","4":"222"},"mark_up":{"0":1.2987,"1":1.5625,"2":1.3698,"3":1.3333,"4":1.4589}}
data = {"id":{"0":"K69","1":"K70","2":"K71","3":"K72","4":"K73","5":"K74","6":"K75","7":"K79","8":"K86","9":"K100"},"cost":{"0":29.74,"1":9.42,"2":9.42,"3":9.42,"4":9.48,"5":9.48,"6":24.36,"7":5.16,"8":9.8,"9":3.28},"mark_up_id":{"0":"123","1":"456","2":"789","3":"111","4":"222","5":"333","6":"444","7":"555","8":"666","9":"777"}}
pb = pd.DataFrame(data=pb).set_index('mark_up_id')
df = pd.DataFrame(data=data)
Expected Output
test = df.join(pb, on='mark_up_id', how='left')
test['cost'].update(test['cost'] + test['mark_up'])
test.drop('mark_up',axis=1,inplace=True)
Or..
df['cost'].update(df['mark_up_id'].map(pb['mark_up']) + df['cost'])
Question
Is there a function that does the above, or is this the best way to go about this type of operation?
I would use the second solution you propose, or better, this:
df['cost']=(df['mark_up_id'].map(pb['mark_up']) + df['cost']).fillna(df['cost'])
I think using update can be inconvenient because it doesn't return anything.
Let's say Series.fillna is more flexible.
We can also use DataFrame.assign
in order to continue working on the DataFrame that the assignment returns.
df.assign( Cost=(df['mark_up_id'].map(pb['mark_up']) + df['cost']).fillna(df['cost']) )
Time comparison with the join method:
%%timeit
df['cost']=(df['mark_up_id'].map(pb['mark_up']) + df['cost']).fillna(df['cost'])
#945 µs ± 46 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
test = df.join(pb, on='mark_up_id', how='left')
test['cost'].update(test['cost'] + test['mark_up'])
test.drop('mark_up',axis=1,inplace=True)
#3.59 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
slow..
%%timeit
df['cost'].update(df['mark_up_id'].map(pb['mark_up']) + df['cost'])
#985 µs ± 32.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Finally, I recommend you see: Understanding inplace and When should I use apply.
