Faster Way to GroupBy Apply in Python Pandas?

How can I make the Groupby Apply run faster, or how can I write it a different way?
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
                   'value': [1, 2, np.nan, 3, np.nan, 1, 2, np.nan, 4, np.nan]})

result = df.groupby("ID").apply(
    lambda x: len(x[x['value'].notnull()].index)
    if (len(x[x['value'] == 1].index) >= 1)
    & (len(x[x['value'] == 4].index) == 0)
    else 0
)
output:
ID
1    3
2    0
dtype: int64
My program runs very slowly right now. Can I make it faster? In the past I have filtered before calling groupby(), but I don't see an easy way to do that in this situation.

Not sure if this is what you need. I have decomposed it a bit, but you can easily method-chain it to make the code more compact:
df = pd.DataFrame(
    {
        "ID": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
        "value": [1, 2, np.nan, 3, np.nan, 1, 2, np.nan, 4, np.nan],
    }
)
df["x1"] = df["value"] == 1
df["x2"] = df["value"] == 4
df2 = df.groupby("ID").agg(
    y1=pd.NamedAgg(column="x1", aggfunc="max"),
    y2=pd.NamedAgg(column="x2", aggfunc="max"),
    cnt=pd.NamedAgg(column="value", aggfunc="count"),
)
df3 = df2.assign(z=lambda x: (x["y1"] & ~x["y2"]) * x["cnt"])
result = df3.drop(columns=["y1", "y2", "cnt"])
print(result)
which will yield
   z
ID
1  3
2  0
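Since the answer mentions method-chaining, here is one possible chained form of the same steps (a sketch, not benchmarked against the original):

result = (
    df.assign(x1=df["value"].eq(1), x2=df["value"].eq(4))
      .groupby("ID")
      .agg(y1=("x1", "max"), y2=("x2", "max"), cnt=("value", "count"))
      .assign(z=lambda t: (t["y1"] & ~t["y2"]) * t["cnt"])[["z"]]
)
print(result)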

Related

Finding length of adjacent datagaps in Pandas

Is there a way to find the maximum length of contiguous periods without data for each column? df.isna().sum() gives me the total number of NaNs, but in the example below I am looking for a way to get A=3 and B=2:
import pandas as pd
import numpy as np
i = pd.date_range('2018-04-09', periods=8, freq='1D')
df = pd.DataFrame({'A': [1, 5, np.nan, np.nan, np.nan, 2, 5, np.nan],
                   'B': [np.nan, 2, 3, np.nan, np.nan, 6, np.nan, 8]}, index=i)
df
For one Series you can make groups of consecutive NaNs using the non-NaNs as starting points. Then count them and get the max:
s = df['A'].isna()
s.groupby((~s).cumsum()).sum().max()
Output: 3
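To see why this works, here are the intermediate values for column A (a quick illustration):

groups = (~s).cumsum()   # increments at each non-NaN, so each NaN run shares one group label
print(pd.DataFrame({'isna': s, 'group': groups}))
# isna:  F F T T T F F T
# group: 1 2 2 2 2 3 4 4
# summing the booleans per group gives run lengths 0, 3, 0, 1, so the max is 3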
Now do this for all columns:
def max_na_stretch(s):
    s = s.isna()
    return s.groupby((~s).cumsum()).sum().max()
df.apply(max_na_stretch)
Output:
A 3
B 2
dtype: int64

Struggling to understand groupby pandas

I'm struggling to understand how the parameters for df.groupby work. I have the following code:
df = pd.read_sql(query_cnxn)
codegroup = df.groupby(['CODE'])
I then attempt a for loop as follows:
for code in codegroup:
    dfsize = codegroup.size()
    dfmax = codegroup['ID'].max()
    dfmin = codegroup['ID'].min()
    result = (dfmax - dfmin) - dfsize
    if result == 1:
        df2 = df2.append(itn)
    else:
        df3 = df3.append(itn)
I'm trying to iterate over each unique code. Does the for loop understand that I'm trying to loop through each code based on the above? Thank you in advance.
Pandas groupby returns an iterator that yields (group key, group DataFrame) tuples. You can perform your max and min operations on each group as:
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'a': [0, 0, 0, 1, 1, 1], 'b': [3, 4, 5, 6, 7, 8]})
In [3]: for k, g in df.groupby('a'):
   ...:     print(g['b'].max())
   ...:
5
8
You can also get the min and max directly as a DataFrame using agg:
In [4]: df.groupby('a')['b'].agg(['max', 'min'])
Out[4]:
max min
a
0 5 3
1 8 6
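To get the asker's per-CODE check without an explicit loop, something like the following should work (a sketch with hypothetical stand-in data, since the original frame comes from SQL):

import pandas as pd

# hypothetical stand-in for the asker's SQL result
df = pd.DataFrame({'CODE': ['a', 'a', 'b', 'b'], 'ID': [1, 3, 10, 12]})

g = df.groupby('CODE')['ID']
result = (g.max() - g.min()) - g.size()   # the asker's (max - min) - size, per CODE
passing = result[result == 1].index       # CODEs where the check equals 1
print(result)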

Create new column (pandas dataframe) when duplicate ids have a payment date

I have a pandas dataframe:
pd.DataFrame({'id': [1, 1, 2, 2, 3, 3],
              'payment_count': [1, 2, 1, 2, 1, 2],
              'payment_date': ['2/2/2020', '4/6/2020', '3/20/2020', '3/29/2020', '5/1/2020', '5/30/2020']})
I want to take max('payment_count') by each 'id' and create a new column with the associated 'payment_date'. Desired output:
pd.DataFrame({'id': [1, 2, 3],
              'payment_date_1': ['2/2/2020', '3/20/2020', '5/1/2020'],
              'payment_date_2': ['4/6/2020', '3/29/2020', '5/30/2020']})
You can try with pivot, add_prefix, rename_axis and reset_index:
df.pivot(index='id', columns='payment_count', values='payment_date')\
  .rename_axis(None, axis=1)\
  .add_prefix('payment_date_')\
  .reset_index()
Output:
   id payment_date_1 payment_date_2
0   1       2/2/2020       4/6/2020
1   2      3/20/2020      3/29/2020
2   3       5/1/2020      5/30/2020
Another way, using groupby:
df['paydate'] = df.groupby('id')['payment_date'].cumcount() + 1
df['paydate'] = 'payment_date_' + df['paydate'].astype(str)
df = df.set_index(['paydate', 'id'])['payment_date']
df = df.unstack(0).rename_axis(None)
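If you want id back as a regular column, as in the desired output, replacing the last two lines with the following should work (a sketch):

df = df.set_index(['paydate', 'id'])['payment_date']
df = df.unstack(0).rename_axis(None, axis=1).reset_index()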
Ugly but it does what you asked. pivot sounds better though.
groups = df.groupby('id')
records = []
for k, g in groups:
    order = g['payment_count'].argsort().to_numpy()   # positions within the group
    payments = {f'payment_date_{i + 1}': date
                for i, date in enumerate(g['payment_date'].iloc[order])}
    payments['id'] = k
    records.append(payments)
_df = pd.DataFrame(records)

How to get the position of certain columns in dataframe - Python [duplicate]

In R when you need to retrieve a column index based on the name of the column you could do
idx <- which(names(my_data)==my_colum_name)
Is there a way to do the same with pandas dataframes?
Sure, you can use .get_loc():
In [45]: df = DataFrame({"pear": [1,2,3], "apple": [2,3,4], "orange": [3,4,5]})
In [46]: df.columns
Out[46]: Index([apple, orange, pear], dtype=object)
In [47]: df.columns.get_loc("pear")
Out[47]: 2
although to be honest I don't often need this myself. Usually access by name does what I want it to (df["pear"], df[["apple", "orange"]], or maybe df.columns.isin(["orange", "pear"])), although I can definitely see cases where you'd want the index number.
Here is a solution using a list comprehension; cols is the list of columns to get indices for:
[df.columns.get_loc(c) for c in cols if c in df]
DSM's solution works, but if you wanted a direct equivalent to R's which, you could do (df.columns == name).nonzero().
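For instance (a small sketch; the positions depend on column order):

import pandas as pd

df = pd.DataFrame({"pear": [1, 2, 3], "apple": [2, 3, 4], "orange": [3, 4, 5]})
idx = (df.columns == "pear").nonzero()[0]   # positions where the condition holds
print(idx)   # [0] with this column order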
For returning multiple column indices, I recommend using the pandas.Index method get_indexer, if you have unique labels:
df = pd.DataFrame({"pear": [1, 2, 3], "apple": [2, 3, 4], "orange": [3, 4, 5]})
df.columns.get_indexer(['pear', 'apple'])
# Out: array([0, 1], dtype=int64)
If you have non-unique labels in the index (columns only support unique labels), use get_indexer_for. It takes the same arguments as get_indexer:
df = pd.DataFrame(
{"pear": [1, 2, 3], "apple": [2, 3, 4], "orange": [3, 4, 5]},
index=[0, 1, 1])
df.index.get_indexer_for([0, 1])
# Out: array([0, 1, 2], dtype=int64)
Both methods also support non-exact indexing, e.g. for float values, taking the nearest value within a tolerance. If two indices have the same distance to the specified label, or are duplicates, the index with the larger index value is selected:
df = pd.DataFrame(
{"pear": [1, 2, 3], "apple": [2, 3, 4], "orange": [3, 4, 5]},
index=[0, .9, 1.1])
df.index.get_indexer([0, 1])
# array([ 0, -1], dtype=int64)
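The tie-breaking rule can be seen by asking for the nearest match explicitly (a sketch on the same frame; get_indexer also accepts a tolerance argument):

df.index.get_indexer([0, 1], method="nearest")
# Out: array([0, 2], dtype=int64); 1 is equidistant from 0.9 and 1.1, so the larger position wins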
When you are looking to find multiple column matches, a vectorized solution using the searchsorted method can be used. Thus, with df as the dataframe and query_cols as the column names to be searched for, an implementation would be:
def column_index(df, query_cols):
    cols = df.columns.values
    sidx = np.argsort(cols)
    return sidx[np.searchsorted(cols, query_cols, sorter=sidx)]
Sample run -
In [162]: df
Out[162]:
apple banana pear orange peach
0 8 3 4 4 2
1 4 4 3 0 1
2 1 2 6 8 1
In [163]: column_index(df, ['peach', 'banana', 'apple'])
Out[163]: array([4, 1, 0])
Update: "Deprecated since version 0.25.0: Use np.asarray(..) or DataFrame.values() instead." (pandas docs)
In case you want the column name from the column location (the other way around from the OP's question), you can use:
>>> df.columns.values[location]
Using @DSM's example:
>>> df = DataFrame({"pear": [1,2,3], "apple": [2,3,4], "orange": [3,4,5]})
>>> df.columns
Index(['apple', 'orange', 'pear'], dtype='object')
>>> df.columns.values[1]
'orange'
Other ways:
df.iloc[:,1].name
df.columns[location]  # (thanks to @roobie-nuby for pointing that out in comments)
To modify DSM's answer a bit: get_loc has some quirks depending on the type of index in the current version of pandas (1.1.5), so depending on your Index type you might get back an index, a mask, or a slice. This is somewhat frustrating when all you want is one column's index. Much simpler to avoid the method altogether:
list(df.columns).index('pear')
Very straightforward and probably fairly quick.
how about this:
df = DataFrame({"pear": [1,2,3], "apple": [2,3,4], "orange": [3,4,5]})
out = np.argwhere(df.columns.isin(['apple', 'orange'])).ravel()
print(out)
[1 2]
When the column might or might not exist, the following variant of the above works:
try:
    ix = list(df.columns).index('Col_X')
except ValueError:
    ix = None
if ix is None:
    # do something
    ...
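A shorter equivalent, as a sketch, using a membership test instead of the exception:

ix = df.columns.get_loc('Col_X') if 'Col_X' in df.columns else None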
import random
import pandas as pd

def char_range(c1, c2):  # question 7001144
    for c in range(ord(c1), ord(c2) + 1):
        yield chr(c)

df = pd.DataFrame()
for c in char_range('a', 'z'):
    df[f'{c}'] = random.sample(range(10), 3)  # random data

rearranged = random.sample(range(26), 26)  # random column order
df = df.iloc[:, rearranged]
print(df.iloc[:, :15])  # view of the first 15 columns

for col in df.columns:  # list of indices and columns
    print(str(df.columns.get_loc(col)) + '\t' + col)

