filter pandas data on a specific index - python

I'd like to filter a dataframe based on specific index values.
I've read about query but haven't managed to get it to work.
Here is the code that creates my pivot table. I'd like to filter it on specific members:
df = pd.DataFrame(my_dataframe)
table = pd.pivot_table(df,index=["Date","member","Card"], columns=["Type"],values=["Heure"],aggfunc=[len]) #,fill_value=0)
table.to_excel(writer, sheet_name='TcD')
What should I do?
Thanks

You can use query, or select by a MultiIndex level with slicers:
df = pd.DataFrame({'Card':list('baaaaa'),
                   'Date':['2017-10-01'] * 6,
                   'Heure':[1,3,5,7,1,0],
                   'Type':[5,5,5,9,5,9],
                   'member':list('aaabbb')})
print (df)
Card Date Heure Type member
0 b 2017-10-01 1 5 a
1 a 2017-10-01 3 5 a
2 a 2017-10-01 5 5 a
3 a 2017-10-01 7 9 b
4 a 2017-10-01 1 5 b
5 a 2017-10-01 0 9 b
table = pd.pivot_table(df, index=["Date","member","Card"],
                       columns="Type",
                       values="Heure",
                       aggfunc='size')
print (table)
Type 5 9
Date member Card
2017-10-01 a a 2.0 NaN
b 1.0 NaN
b a 1.0 2.0
table1 = table.query('member == "a"')
print (table1)
Type 5 9
Date member Card
2017-10-01 a a 2.0 NaN
b 1.0 NaN
idx = pd.IndexSlice
table1 = table.loc[idx[:,'a',:],:]
print (table1)
Type 5 9
Date member Card
2017-10-01 a a 2.0 NaN
b 1.0 NaN
EDIT:
To filter by multiple values, use:
table1 = table.query('member in ["a", "b"]')
print (table1)
Type 5 9
Date member Card
2017-10-01 a a 2.0 NaN
b 1.0 NaN
b a 1.0 2.0
idx = pd.IndexSlice
table1 = table.loc[idx[:,['a', 'b'],:],:]
print (table1)
Type 5 9
Date member Card
2017-10-01 a a 2.0 NaN
b 1.0 NaN
b a 1.0 2.0
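A further option, not part of the answer above, is boolean masking on the index level itself; a small sketch, assuming the table pivot created above:
mask = table.index.get_level_values('member') == 'a'
print (table[mask])

# several members
mask = table.index.get_level_values('member').isin(['a', 'b'])
print (table[mask])

# or a cross-section that keeps the member level in place
print (table.xs('a', level='member', drop_level=False))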

Related

Rolling sum of groups by period

I have got this dataframe:
lst=[['01012021','A',10],['01012021','B',20],['02012021','A',12],['02012021','B',23]]
df2=pd.DataFrame(lst,columns=['Date','FN','AuM'])
I would like to get the rolling sum by date and FN. The desired result looks like this:
lst=[['01012021','A',10,''],['01012021','B',20,''],['02012021','A',12,22],['02012021','B',23,43]]
df2=pd.DataFrame(lst,columns=['Date','FN','AuM','Roll2PeriodSum'])
Would you please help me?
Thank you
Solution if the datetimes are consecutive; the Date column is not used for counting per group:
df2['Roll2PeriodSum'] = (df2.groupby('FN').AuM
                            .rolling(2)
                            .sum()
                            .reset_index(level=0, drop=True))
print (df2)
Date FN AuM Roll2PeriodSum
0 01012021 A 10 NaN
1 01012021 B 20 NaN
2 02012021 A 12 22.0
3 02012021 B 23 43.0
Solution with datetimes; the Date column is used for counting:
df2['Date'] = pd.to_datetime(df2['Date'], format='%d%m%Y')
df = (df2.join(df2.set_index('Date')
                  .groupby('FN').AuM
                  .rolling('2D')
                  .sum()
                  .rename('Roll2PeriodSum'), on=['FN','Date']))
print (df)
Date FN AuM Roll2PeriodSum
0 2021-01-01 A 10 10.0
1 2021-01-01 B 20 20.0
2 2021-01-02 A 12 22.0
3 2021-01-02 B 23 43.0
df = (df2.join(df2.set_index('Date')
                  .groupby('FN').AuM
                  .rolling('2D', min_periods=2)
                  .sum()
                  .rename('Roll2PeriodSum'), on=['FN','Date']))
print (df)
Date FN AuM Roll2PeriodSum
0 2021-01-01 A 10 NaN
1 2021-01-01 B 20 NaN
2 2021-01-02 A 12 22.0
3 2021-01-02 B 23 43.0
Use groupby.rolling.sum:
df2['Roll2PeriodSum'] = (
    df2.assign(Date=pd.to_datetime(df2['Date'], format='%d%m%Y'))
       .groupby('FN').rolling(2)['AuM'].sum().droplevel(0)
)
print(df2)
# Output
Date FN AuM Roll2PeriodSum
0 01012021 A 10 NaN
1 01012021 B 20 NaN
2 02012021 A 12 22.0
3 02012021 B 23 43.0
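One caveat, added here rather than taken from the answers: the position-based rolling(2) variants assume each FN group is already in chronological order. If that is not guaranteed, converting and sorting first keeps the two-period window meaningful; a small sketch:
df2['Date'] = pd.to_datetime(df2['Date'], format='%d%m%Y')
df2 = df2.sort_values(['FN', 'Date'])
df2['Roll2PeriodSum'] = (df2.groupby('FN').AuM
                            .rolling(2)
                            .sum()
                            .reset_index(level=0, drop=True))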

Duplicate Quantity as new columns

This is my table
I want to reshape it to the following: for each row, add the Quantity of every other (ShopID, ProductID) combination as a new column named Quantity_shopID_productID.
Following is my code:
from datetime import date
import pandas as pd
df=pd.DataFrame({"Date":[date(2019,10,1),date(2019,10,2),date(2019,10,1),date(2019,10,2),date(2019,10,1),date(2019,10,2),date(2019,10,1),date(2019,10,2)],
"ShopID":[1,1,1,1,2,2,2,2],
"ProductID":[1,1,2,2,1,1,2,2],
"Quantity":[3,3,4,4,5,5,6,6]})
for sid in df.ShopID.unique():
    for pid in df.ProductID.unique():
        col_name = 'Quantity{}_{}'.format(sid, pid)
        print(col_name)
        df1 = df[(df.ShopID==sid) & (df.ProductID==pid)][['Date','Quantity']]
        df1.rename(columns={'Quantity':col_name}, inplace=True)
        display(df1)
        df = df.merge(df1, how="left", on="Date")
        df.loc[(df.ShopID==sid) & (df.ProductID==pid), col_name] = None
print(df)
The problem is that it runs very slowly, as I have over 108 different (ShopID, ProductID) combinations over a 3-year period. Is there any way to make it more efficient?
Method 1: using pivot_table with join (vectorized solution)
We can pivot your quantity values per (ShopID, ProductID) to columns, and then join them back to your original dataframe. This should be much faster than your for-loops since it is a vectorized approach:
piv = df.pivot_table(index=['ShopID', 'ProductID'], columns=['ShopID', 'ProductID'], values='Quantity')
piv2 = piv.ffill().bfill()
piv3 = piv2.mask(piv2.eq(piv))
final = df.set_index(['ShopID', 'ProductID']).join(piv3).reset_index()
Output
ShopID ProductID dt Quantity (1, 1) (1, 2) (2, 1) (2, 2)
0 1 1 2019-10-01 3 NaN 4.0 5.0 6.0
1 1 1 2019-10-02 3 NaN 4.0 5.0 6.0
2 1 2 2019-10-01 4 3.0 NaN 5.0 6.0
3 1 2 2019-10-02 4 3.0 NaN 5.0 6.0
4 2 1 2019-10-01 5 3.0 4.0 NaN 6.0
5 2 1 2019-10-02 5 3.0 4.0 NaN 6.0
6 2 2 2019-10-01 6 3.0 4.0 5.0 NaN
7 2 2 2019-10-02 6 3.0 4.0 5.0 NaN
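If you prefer the Quantity_shopID_productID column names asked for in the question over the tuple labels shown above, they can be renamed afterwards; a small sketch, assuming the joined columns arrive as plain tuples as printed:
final.columns = ['Quantity_{}_{}'.format(*c) if isinstance(c, tuple) else c
                 for c in final.columns]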
Method 2, using GroupBy, mask, where:
We can speed up your code by using GroupBy and mask + where instead of two for-loops:
groups = df.groupby(['ShopID', 'ProductID'])

for grp, data in groups:
    m = df['ShopID'].eq(grp[0]) & df['ProductID'].eq(grp[1])
    values = df['Quantity'].where(m).ffill().bfill()
    df[f'Quantity_{grp[0]}_{grp[1]}'] = values.mask(m)
Output
dt ShopID ProductID Quantity Quantity_1_1 Quantity_1_2 Quantity_2_1 Quantity_2_2
0 2019-10-01 1 1 3 NaN 4.0 5.0 6.0
1 2019-10-02 1 1 3 NaN 4.0 5.0 6.0
2 2019-10-01 1 2 4 3.0 NaN 5.0 6.0
3 2019-10-02 1 2 4 3.0 NaN 5.0 6.0
4 2019-10-01 2 1 5 3.0 4.0 NaN 6.0
5 2019-10-02 2 1 5 3.0 4.0 NaN 6.0
6 2019-10-01 2 2 6 3.0 4.0 5.0 NaN
7 2019-10-02 2 2 6 3.0 4.0 5.0 NaN
This is a pivot and merge problem with a little extra:
# somehow merge only works with pandas datetimes
df['Date'] = pd.to_datetime(df['Date'])

# define the new column names
df['new_col'] = 'Quantity_' + df['ShopID'].astype(str) + '_' + df['ProductID'].astype(str)

# new data to merge:
pivot = df.pivot_table(index='Date',
                       columns='new_col',
                       values='Quantity')

# merge
new_df = df.merge(pivot, left_on='Date', right_index=True)

# mask: each row's own combination column
mask = new_df['new_col'].values[:, None] == pivot.columns.values

# set each row's own combination to NaN
new_df[pivot.columns] = new_df[pivot.columns].mask(mask)
Output:
Date ShopID ProductID Quantity new_col Quantity_1_1 Quantity_1_2 Quantity_2_1 Quantity_2_2
-- ------------------- -------- ----------- ---------- ------------ -------------- -------------- -------------- --------------
0 2019-10-01 00:00:00 1 1 3 Quantity_1_1 nan 4 5 6
1 2019-10-02 00:00:00 1 1 3 Quantity_1_1 nan 4 5 6
2 2019-10-01 00:00:00 1 2 4 Quantity_1_2 3 nan 5 6
3 2019-10-02 00:00:00 1 2 4 Quantity_1_2 3 nan 5 6
4 2019-10-01 00:00:00 2 1 5 Quantity_2_1 3 4 nan 6
5 2019-10-02 00:00:00 2 1 5 Quantity_2_1 3 4 nan 6
6 2019-10-01 00:00:00 2 2 6 Quantity_2_2 3 4 5 nan
7 2019-10-02 00:00:00 2 2 6 Quantity_2_2 3 4 5 nan
Test data with similar size to your actual data:
import numpy as np

# 3 years of dates
dates = pd.date_range('2015-01-01', '2018-12-31', freq='D')

# 12 shops and 9 products
idx = pd.MultiIndex.from_product((dates, range(1,13), range(1,10)),
                                 names=('Date','ShopID', 'ProductID'))

# the test data
np.random.seed(1)
df = pd.DataFrame({'Quantity': np.random.randint(0, 10, len(idx))},
                  index=idx).reset_index()
The above code took about 10 seconds on an i5 laptop :-)

Pandas - Replace NaNs in a column with the mean of specific group

I am working with data like the following. The dataframe is sorted by the date:
category value Date
0 1 24/5/2019
1 NaN 24/5/2019
1 1 26/5/2019
2 2 1/6/2019
1 2 23/7/2019
2 NaN 18/8/2019
2 3 20/8/2019
7 3 1/9/2019
1 NaN 12/9/2019
2 NaN 13/9/2019
I would like to replace the "NaN" values with the previous mean for that specific category.
What is the best way to do this in pandas?
Some approaches I considered:
1) This little riff:
df['mean'] = df.groupby('category')['value'].apply(lambda x: x.shift().expanding().mean())
source
This gets me the correct means, but in another column, and it does not replace the NaNs.
2) This riff replaces the NaNs with the average of the columns:
df = df.groupby(df.columns, axis = 1).transform(lambda x: x.fillna(x.mean()))
Source 2
Neither of these gives exactly what I want. If someone could guide me on this it would be much appreciated!
You can replace the values with a new Series built from shift + expanding + mean; the first NaN in a group is not replaced, because there are no previous values for it:
df['Date'] = pd.to_datetime(df['Date'])
s = df.groupby('category')['value'].apply(lambda x: x.shift().expanding().mean())
df['value'] = df['value'].fillna(s)
print (df)
category value Date
0 0 1.0 2019-05-24
1 1 NaN 2019-05-24
2 1 1.0 2019-05-26
3 2 2.0 2019-01-06
4 1 2.0 2019-07-23
5 2 2.0 2019-08-18
6 2 3.0 2019-08-20
7 7 3.0 2019-01-09
8 1 1.5 2019-12-09
9 2 2.5 2019-09-13
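A side note on the first line above, not part of the original answer: day-first strings such as 24/5/2019 can be parsed inconsistently by a bare pd.to_datetime, which is why some months look swapped in the printed dates. Being explicit avoids that:
df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%Y')
# or: df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)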
You can use pandas.Series.fillna to replace NaN values:
df['value']=df['value'].fillna(df.groupby('category')['value'].transform(lambda x: x.shift().expanding().mean()))
print(df)
category value Date
0 0 1.0 24/5/2019
1 1 NaN 24/5/2019
2 1 1.0 26/5/2019
3 2 2.0 1/6/2019
4 1 2.0 23/7/2019
5 2 2.0 18/8/2019
6 2 3.0 20/8/2019
7 7 3.0 1/9/2019
8 1 1.5 12/9/2019
9 2 2.5 13/9/2019

How to replace a pandas dataframe column A value based on the value of another column B

I have a dataframe, for example:
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2, np.nan], [4, 5, np.nan], [7, 8, 9]],
                  columns=['sku', 'r1', 'r2'])
so it would be:
sku r1 r2
0 1 2 NaN
1 4 5 NaN
2 7 8 9.0
I would like to change the r1 column's values based on r2: if r2 is not NaN, replace r1's value with r2's value; otherwise keep r1 unchanged.
So the result would be:
sku r1 r2
0 1 2 NaN
1 4 5 NaN
2 7 9.0 9.0
So you see, 8 is changed to 9.0 in the third row of this example.
I am new to pandas, and it is taking me a while to find a solution for this.
Thanks for the help.
You can use mask with notnull:
df['r1'] = df['r1'].mask(df['r2'].notnull(), df['r2'])
print (df)
sku r1 r2
0 1 2.0 NaN
1 4 5.0 NaN
2 7 9.0 9.0
Or loc:
df.loc[df['r2'].notnull(), 'r1'] = df['r2']
print (df)
sku r1 r2
0 1 2.0 NaN
1 4 5.0 NaN
2 7 9.0 9.0
Use np.where:
import numpy as np

df['r1'] = np.where(df['r2'].notnull(), df['r2'], df['r1'])
df
Output:
sku r1 r2
0 1 2.0 NaN
1 4 5.0 NaN
2 7 9.0 9.0
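A further option, not given in the answers above, is a plain fillna: take r2 wherever it exists and fall back to r1 otherwise:
df['r1'] = df['r2'].fillna(df['r1'])
print (df)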

(pandas) Fill NaN based on groupby and column condition

Using 'bfill' or 'ffill' on a groupby element is trivial, but what if you need to fill the na with a specific value in a second column, based on a condition in a third column?
For example:
>>> df=pd.DataFrame({'date':['01/10/2017', '02/09/2017', '02/10/2016','01/10/2017', '01/11/2017', '02/10/2016'], 'a':[1,1,1,2,2,2], 'b':[4,np.nan,6, 5, np.nan, 7]})
>>> df
a b date
0 1 4.0 01/10/2017
1 1 NaN 02/09/2017
2 1 6.0 02/10/2016
3 2 5.0 01/10/2017
4 2 NaN 01/11/2017
5 2 7.0 02/10/2016
I need to group by column 'a', and fill the NaN with the column 'b' value where the date for that row is closest to the date in the NaN row.
So the output should look like:
a b date
0 1 4.0 01/10/2017
1 1 6.0 02/09/2017
2 1 6.0 02/10/2016
3 2 5.0 01/10/2017
4 2 5.0 01/11/2017
5 2 7.0 02/10/2016
Assume there is a closest_date() function that takes the NaN date and the list of other dates in that group, and returns the closest date.
I'm trying to find a clean solution that doesn't have to iterate through rows, ideally able to use apply() with lambdas. Any ideas?
This should work:
df['closest_date_by_a'] = df.groupby('a')['date'].apply(closest_date)
df['b'] = df.groupby(['a', 'closest_date_by_a'])['b'].ffill().bfill()
Given a function (closest_date()), you need to apply that function by group so it calculates the closest dates for rows within each group. Then you can group by both the main grouping column (a) and the closest date column (closest_date_by_a) and perform your filling.
Ensure that your date column actually contains dates.
df = pd.DataFrame(
    {'date': ['01/10/2017', '02/09/2017', '02/10/2016', '01/10/2017', '01/11/2017', '02/10/2016'],
     'a': [1,1,1,2,2,2], 'b': [4, np.nan, 6, 5, np.nan, 7]})
df.date = pd.to_datetime(df.date)
print(df)
a b date
0 1 4.0 2017-01-10
1 1 NaN 2017-02-09
2 1 6.0 2016-02-10
3 2 5.0 2017-01-10
4 2 NaN 2017-01-11
5 2 7.0 2016-02-10
Use reindex with method='nearest' after dropna():
def fill_with_nearest(df):
    s = df.set_index('date').b
    s = s.dropna().reindex(s.index, method='nearest')
    s.index = df.index
    return s

df.loc[df.b.isnull(), 'b'] = df.groupby('a').apply(fill_with_nearest).reset_index(0, drop=True)
print(df)
a b date
0 1 4.0 2017-01-10
1 1 4.0 2017-02-09
2 1 6.0 2016-02-10
3 2 5.0 2017-01-10
4 2 5.0 2017-01-11
5 2 7.0 2016-02-10
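An alternative sketch for the same nearest-date idea, added here as an assumption rather than taken from the answers, uses merge_asof with direction='nearest' (it expects date to already be a datetime, as in the setup above):
# rows that need a value vs. rows that can donate one, both sorted by date for merge_asof
missing = df[df.b.isnull()].sort_values('date')
known = df.dropna(subset=['b']).sort_values('date')

# for each missing row, take b from the nearest-dated known row of the same group 'a'
filled = pd.merge_asof(missing, known[['date', 'a', 'b']],
                       on='date', by='a', direction='nearest', suffixes=('', '_near'))
filled.index = missing.index   # merge_asof resets the index; restore it for alignment
df['b'] = df['b'].fillna(filled['b_near'])
print(df)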
