How to find churned customers on a monthly basis? Python Pandas

I have a large customer dataset with things like Customer ID, Service ID, Product, etc. There are two ways we can measure churn: at a Customer-ID level, if the entire customer leaves, and at a Service-ID level, where maybe they cancel 2 out of 5 services.
The data looks like this, and as we can see:
Alligators stops being a customer at the end of Jan, as they don't have any rows in Feb (CustomerChurn)
Aunties stops being a customer at the end of Jan, as they don't have any rows in Feb (CustomerChurn)
Bricks continues with Apples and Bananas in Jan and Feb (ServiceContinue)
Bricks continues being a customer but cancels two services at the end of Jan (ServiceChurn)
I am trying to write some code that creates the 'Churn' column. I have tried:
Manually grabbing sets of CustomerIDs and ServiceIDs from one month (Oct 2019) and comparing them to the next (Nov 2019) to find the ones that churned (see the sketch after the data below). This is not too slow, but it doesn't seem very Pythonic.
Thank you!
import pandas as pd

data = {'CustomerName': ['Alligators', 'Aunties', 'Bricks', 'Bricks', 'Bricks', 'Bricks', 'Bricks', 'Bricks', 'Bricks', 'Bricks'],
        'ServiceID': [1009, 1008, 1001, 1002, 1003, 1004, 1001, 1002, 1001, 1002],
        'Product': ['Apples', 'Apples', 'Apples', 'Bananas', 'Oranges', 'Watermelon', 'Apples', 'Bananas', 'Apples', 'Bananas'],
        'Month': ['Jan', 'Jan', 'Jan', 'Jan', 'Jan', 'Jan', 'Feb', 'Feb', 'Mar', 'Mar'],
        'Year': [2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021],
        'Churn': ['CustomerChurn', 'CustomerChurn', 'ServiceContinue', 'ServiceContinue', 'ServiceChurn', 'ServiceChurn', 'ServiceContinue', 'ServiceContinue', 'NA', 'NA']}
df = pd.DataFrame(data)
df
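For reference, the manual set comparison described above might look roughly like this (a sketch, assuming the df built above; the Jan/Feb pair stands in for any two consecutive months):

# customers present in Jan but gone in Feb -> customer-level churn
jan_customers = set(df.loc[df['Month'] == 'Jan', 'CustomerName'])
feb_customers = set(df.loc[df['Month'] == 'Feb', 'CustomerName'])
customer_churn = jan_customers - feb_customers

# services present in Jan but gone in Feb (note this also picks up the
# services of fully churned customers)
jan_services = set(df.loc[df['Month'] == 'Jan', 'ServiceID'])
feb_services = set(df.loc[df['Month'] == 'Feb', 'ServiceID'])
service_churn = jan_services - feb_services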

I think this gets close to what you want, except for the NA in the last two rows; if you really need those NA values, you can filter by date and change them (see the sketch after the output below).
Because you are really testing two different groupings, I send the first CustomerName grouping through a function and, depending on what I see, send a more refined grouping through a second function. For this data set it seems to work.
I create an actual date column and make sure everything is sorted before grouping. The logic inside the functions tests the max date of each group against a cutoff date; it looks like you are treating March as the current month.
You should be able to adapt it to your needs.
import datetime

df['testdate'] = df.apply(
    lambda x: datetime.datetime.strptime('-'.join((x['Month'], str(x['Year']))), '%b-%Y'),
    axis=1)
df = df.sort_values('testdate')
df1 = df.drop('Churn', axis=1)

def get_customerchurn(x, tdate):
    # no rows on or after the cutoff date -> the whole customer churned
    if x.testdate.max() < tdate:
        x.loc[:, 'Churn'] = 'CustomerChurn'
        return x
    else:
        # customer is still active, so test each product separately
        x = x.groupby(['CustomerName', 'Product']).apply(
            lambda g: get_servicechurn(g, datetime.datetime(2021, 3, 1)))
        return x

def get_servicechurn(x, tdate):
    # no rows on or after the cutoff date -> this service churned
    if x.testdate.max() < tdate:
        x.loc[:, 'Churn'] = 'ServiceChurn'
        return x
    else:
        x.loc[:, 'Churn'] = 'ServiceContinue'
        return x

df2 = df1.groupby(['CustomerName']).apply(
    lambda x: get_customerchurn(x, datetime.datetime(2021, 3, 1)))
df2
Output:
CustomerName ServiceID Product Month Year testdate Churn
0 Alligators 1009 Apples Jan 2021 2021-01-01 CustomerChurn
1 Aunties 1008 Apples Jan 2021 2021-01-01 CustomerChurn
2 Bricks 1001 Apples Jan 2021 2021-01-01 ServiceContinue
3 Bricks 1002 Bananas Jan 2021 2021-01-01 ServiceContinue
4 Bricks 1003 Oranges Jan 2021 2021-01-01 ServiceChurn
5 Bricks 1004 Watermelon Jan 2021 2021-01-01 ServiceChurn
6 Bricks 1001 Apples Feb 2021 2021-02-01 ServiceContinue
7 Bricks 1002 Bananas Feb 2021 2021-02-01 ServiceContinue
8 Bricks 1001 Apples Mar 2021 2021-03-01 ServiceContinue
9 Bricks 1002 Bananas Mar 2021 2021-03-01 ServiceContinue
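If you do need the 'NA' values in the last two rows, a minimal sketch of the "filter by date and change the values" step mentioned above (assuming the current month is simply the max testdate, here March 2021):

df2.loc[df2['testdate'] == df2['testdate'].max(), 'Churn'] = 'NA'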

Related

Python/increase code efficiency about multiple columns filter

I was wondering if someone could help me find a more efficient way to run my code.
I have a dataset containing 7 columns: country, sector, year, month, week, weekday, value.
The year column has only 3 values: 2019, 2020 and 2021.
What I have to do here is subtract the 2019 value from every value in 2020 and 2021.
But it's more complicated than that: I also need to match the weekday column.
For example, I need to take the year 2020, month 1, week 1, weekday 0 (Monday) value and subtract the year 2019, month 1, week 1, weekday 0 (Monday) value; if no match can be found, it is skipped, and so on. That is, the weekdays (Monday, Tuesday, ...) must be matched.
And here is my code. It runs, but it takes hours :(
import itertools

for i in itertools.product(year_list, country_list, sector_list, month_list, week_list, weekday_list):
    try:
        data_2 = df_carbon[(df_carbon['country'] == i[1])
                           & (df_carbon['sector'] == i[2])
                           & (df_carbon['year'] == i[0])
                           & (df_carbon['month'] == i[3])
                           & (df_carbon['week'] == i[4])
                           & (df_carbon['weekday'] == i[5])]['co2'].tolist()[0]
        data_1 = df_carbon[(df_carbon['country'] == i[1])
                           & (df_carbon['sector'] == i[2])
                           & (df_carbon['year'] == 2019)
                           & (df_carbon['month'] == i[3])
                           & (df_carbon['week'] == i[4])
                           & (df_carbon['weekday'] == i[5])]['co2'].tolist()[0]
        co2.append(data_2 - data_1)
        country.append(i[1])
        sector.append(i[2])
        year.append(i[0])
        month.append(i[3])
        week.append(i[4])
        weekday.append(i[5])
    except IndexError:  # no matching row -> skip this combination
        pass
I changed the for loops to itertools.product, but it is still not fast enough. Any other ideas?
Many thanks :)
Here is the sample dataset:
country co2 sector date week weekday year month
Brazil 108.767782 Power 2019-01-01 0 1 2019 1
China 14251.044482 Power 2019-01-01 0 1 2019 1
EU27 & UK 1886.493814 Power 2019-01-01 0 1 2019 1
France 53.856398 Power 2019-01-01 0 1 2019 1
Germany 378.323440 Power 2019-01-01 0 1 2019 1
Japan 21.898788 IA 2021-11-30 48 1 2021 11
Russia 19.773822 IA 2021-11-30 48 1 2021 11
Spain 42.293944 IA 2021-11-30 48 1 2021 11
UK 56.425121 IA 2021-11-30 48 1 2021 11
US 166.425000 IA 2021-11-30 48 1 2021 11
Or this:
import pandas as pd

pd.DataFrame({
    'year': [2019, 2020, 2021],
    'co2': [1, 2, 3],
    'country': ['Brazil', 'Brazil', 'Brazil'],
    'sector': ['power', 'power', 'power'],
    'month': [1, 1, 1],
    'week': [0, 0, 0],
    'weekday': [0, 0, 0]
})
pandas can subtract two DataFrames index-by-index, so the idea is to separate your data into a minuend and a subtrahend, set ['country', 'sector', 'month', 'week', 'weekday'] as their indices, just subtract them, and remove the rows (with dropna) where no match in year 2019 is found.
df_carbon = pd.DataFrame({
    'year': [2019, 2020, 2021],
    'co2': [1, 2, 3],
    'country': ['ab', 'ab', 'bc']
})
index = ['country']
# index = ['country', 'sector', 'month', 'week', 'weekday']
df_2019 = df_carbon[df_carbon['year'] == 2019].set_index(index)
df_rest = df_carbon[df_carbon['year'] != 2019].set_index(index)
ans = (df_rest - df_2019).reset_index().dropna()
ans['year'] += 2019
Two additional points:
In this subtraction the year column is subtracted as well, so I need to add 2019 back.
I created a small example of df_carbon to test my code. If you had provided a more realistic version in text form, I would have tested my code using your data.
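For the question's actual columns, the same pattern might look like this (a sketch, assuming df_carbon has the columns from the sample above; only the numeric columns are kept so that the frame-level subtraction is well defined):

index = ['country', 'sector', 'month', 'week', 'weekday']
df_2019 = df_carbon[df_carbon['year'] == 2019].set_index(index)[['year', 'co2']]
df_rest = df_carbon[df_carbon['year'] != 2019].set_index(index)[['year', 'co2']]
ans = (df_rest - df_2019).dropna().reset_index()
ans['year'] += 2019  # the years were subtracted too, so add 2019 back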

How to fill missing values in a dataframe based on group value counts?

I have a pandas DataFrame with 2 columns: Year (int) and Condition (string). The Condition column contains a NaN value, and I want to replace it based on information from a groupby operation.
import pandas as pd
import numpy as np
year = [2015, 2016, 2017, 2016, 2016, 2017, 2015, 2016, 2015, 2015]
cond = ["good", "good", "excellent", "good", 'excellent','excellent', np.nan, 'good','excellent', 'good']
X = pd.DataFrame({'year': year, 'condition': cond})
stat = X.groupby('year')['condition'].value_counts()
It gives:
print(X)
year condition
0 2015 good
1 2016 good
2 2017 excellent
3 2016 good
4 2016 excellent
5 2017 excellent
6 2015 NaN
7 2016 good
8 2015 excellent
9 2015 good
print(stat)
year condition
2015 good 2
excellent 1
2016 good 3
excellent 1
2017 excellent 2
The NaN value in row 6 has year = 2015, and from stat I get that for 2015 the most frequent condition is 'good', so I want to replace this NaN value with 'good'.
I have tried fillna and the .transform method, but it does not work :(
I would be grateful for any help.
I did a little extra transformation to get stat as a dictionary mapping each year to its most frequent condition (credit to this answer):
In[0]:
fill_dict = stat.unstack().idxmax(axis=1).to_dict()
fill_dict
Out[0]:
{2015: 'good', 2016: 'good', 2017: 'excellent'}
Then use fillna with map based on this dictionary (credit to this answer):
In[0]:
X['condition'] = X['condition'].fillna(X['year'].map(fill_dict))
X
Out[0]:
year condition
0 2015 good
1 2016 good
2 2017 excellent
3 2016 good
4 2016 excellent
5 2017 excellent
6 2015 good
7 2016 good
8 2015 excellent
9 2015 good
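Since the question mentions .transform, here is an alternative that does the fill in one pass with groupby.transform (a sketch; ties between equally frequent conditions would be broken by taking the first mode):

X['condition'] = X['condition'].fillna(
    X.groupby('year')['condition'].transform(lambda s: s.mode().iat[0])
)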

Sort groupby pandas output by Month name and year

df
order_date Month Name Year Days Data
2015-12-20 Dec 2014 1 3
2016-1-21 Jan 2014 2 3
2015-08-20 Aug 2015 1 1
2016-04-12 Apr 2016 4 1
and so on
Code:
df = df.groupby(["Year", "Month Name"], as_index=False)["days"].agg(['min', 'mean'])
df3 = df.groupby(["Year", "Month Name"], as_index=False)["Data"].agg(['count'])
merged_df = pd.merge(df3, df, on=['Year', 'Month Name'])
I have a groupby output as below
Min Mean Count
Year Month Name
2015 Aug 2 11 200
Dec 5 13 130
Feb 3 15 100
Jan 4 20 123
May 1 21 342
Nov 2 12 234
2016 Apr 1 10 200
Dec 2 12 120
Feb 2 13 200
Jan 2 24 200
Sep 1 25 220
Issue:
Basically I am getting the output of groupby sorted by Month Name alphabetically from A to Z, so I get Apr, Aug, Dec, Feb, etc. rather than Jan, Feb, ... through Dec. How do I get the output sorted by month number?
I need output like 2016: Jan, Feb, ... Dec, then 2017: Jan, Feb, Mar, ... Dec.
Please keep the merging of the 2 dfs. I have just presented simplified code here (the real code is different; I need to merge both before I can continue).
EDIT: Your solution should be changed:
df1 = df.groupby(["Year", "Month Name"], as_index=False)["Days"].agg(['min', 'mean'])
df3 = df.groupby(["Year", "Month Name"], as_index=False)["Data"].agg(['count'])
merged_df=pd.merge(df3, df1, on=['Year','Month Name']).reset_index()
cats = ['Jan', 'Feb', 'Mar', 'Apr','May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
merged_df['Month Name'] = pd.Categorical(merged_df['Month Name'],categories=cats, ordered=True)
merged_df = merged_df.sort_values(["Year", "Month Name"])
print (merged_df)
Year Month Name count min mean
1 2014 Jan 1 2 2
0 2014 Dec 1 1 1
2 2015 Aug 1 1 1
3 2016 Apr 1 4 4
Or:
df1 = (df.groupby(["Year", "Month Name"])
         .agg(min_days=("Days", 'min'),
              avg_days=("Days", 'mean'),
              count=('Data', 'count'))
         .reset_index())
cats = ['Jan', 'Feb', 'Mar', 'Apr','May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
df1['Month Name'] = pd.Categorical(df1['Month Name'], categories=cats, ordered=True)
df1 = df1.sort_values(["Year", "Month Name"])
print (df1)
Year Month Name min_days avg_days count
1 2014 Jan 2 2 1
0 2014 Dec 1 1 1
2 2015 Aug 1 1 1
3 2016 Apr 4 4 1
Last solution, with MultiIndex and no categoricals: create a helper dates column and sort by it:
df1 = (df.groupby(["Year", "Month Name"])
         .agg(min_days=("Days", 'min'),
              avg_days=("Days", 'mean'),
              count=('Data', 'count'))
      )
df1['dates'] = pd.to_datetime([f'{y}{m}' for y, m in df1.index], format='%Y%b')
df1 = df1.sort_values('dates')
print (df1)
min_days avg_days count dates
Year Month Name
2014 Jan 2 2 1 2014-01-01
Dec 1 1 1 2014-12-01
2015 Aug 1 1 1 2015-08-01
2016 Apr 4 4 1 2016-04-01
Simply tell groupby you don't want it to sort group keys (by default, that's what it does - see the docs)
df.groupby(["Year", "Month Name"], as_index=False, sort=False)["Days"].agg(
["min", "mean"]
)
NOTE: you should make sure your df is sorted before applying groupby
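For example, a minimal sketch of that pre-sort, assuming order_date can be parsed as a datetime:

df['order_date'] = pd.to_datetime(df['order_date'])
df = df.sort_values('order_date')  # chronological order, so sort=False keeps it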
Here is my solution to sort by month number and return sorted month names for level 1 of the MultiIndex, taking merged_df as the input:
import calendar

# map month abbreviation -> month number ('' maps to 0 and is unused)
d = {i: e for e, i in enumerate([*calendar.month_abbr])}
# for full month names use: d = {i: e for e, i in enumerate([*calendar.month_name])}
merged_df.index = pd.MultiIndex.from_tuples(
    sorted(merged_df.index, key=lambda x: d.get(x[1])))
# sort_remaining=False keeps the month order from the sorted() step above
merged_df = merged_df.sort_index(level=0, sort_remaining=False)
print(merged_df)
count min mean
Year Month Name
2014 Jan 1 2 2
Dec 1 1 1
2015 Aug 1 1 1
2016 Apr 1 4 4

How to groupby in Pandas and keep all columns [duplicate]

This question already has answers here:
Pandas DataFrame Groupby two columns and get counts
(8 answers)
Closed 3 years ago.
I have a data frame like this:
year drug_name avg_number_of_ingredients
0 2019 NEXIUM I.V. 8
1 2016 ZOLADEX 10
2 2017 PRILOSEC 59
3 2017 BYDUREON BCise 24
4 2019 Lynparza 28
And I need to group the drug names and average the number of ingredients by year, like this:
year drug_name avg_number_of_ingredients
0 2019 drug a,b,c.. mean value for column
1 2018 drug a,b,c.. mean value for column
2 2017 drug a,b,c.. mean value for column
If I do df.groupby('year'), I lose drug names. How can I do it?
Let me show you the solution on a simple example. First, I make the same data frame as you have:
>>> df = pd.DataFrame(
    [
        {'year': 2019, 'drug_name': 'NEXIUM I.V.', 'avg_number_of_ingredients': 8},
        {'year': 2016, 'drug_name': 'ZOLADEX', 'avg_number_of_ingredients': 10},
        {'year': 2017, 'drug_name': 'PRILOSEC', 'avg_number_of_ingredients': 59},
        {'year': 2017, 'drug_name': 'BYDUREON BCise', 'avg_number_of_ingredients': 24},
        {'year': 2019, 'drug_name': 'Lynparza', 'avg_number_of_ingredients': 28},
    ]
)
>>> print(df)
year drug_name avg_number_of_ingredients
0 2019 NEXIUM I.V. 8
1 2016 ZOLADEX 10
2 2017 PRILOSEC 59
3 2017 BYDUREON BCise 24
4 2019 Lynparza 28
Now, I make df_grouped, which still keeps the information about the drug names.
>>> df_grouped = df.groupby('year', as_index=False).agg({'drug_name': ', '.join, 'avg_number_of_ingredients': 'mean'})
>>> print(df_grouped)
year drug_name avg_number_of_ingredients
0 2016 ZOLADEX 10.0
1 2017 PRILOSEC, BYDUREON BCise 41.5
2 2019 NEXIUM I.V., Lynparza 18.0

How to save split data in panda in reverse order?

You can use this to create the dataframe:
xyz = pd.DataFrame({'release': ['7 June 2013', '2012', '31 January 2013',
                                'February 2008', '17 June 2014', '2013']})
I am trying to split the data and save it into 3 columns named day, month and year, using this command:
dataframe[['day','month','year']] = dataframe['release'].str.rsplit(expand=True)
The resulting dataframe is:
dataframe
As you can see, it works perfectly when it gets 3 strings, but whenever it gets fewer than 3 strings, it saves the data in the wrong place.
I have tried split and rsplit; both give the same result.
Is there any solution to get the data into the right place?
The year comes last and is present in every row, so it should be the first one saved, then the month if it is present (otherwise nothing), and the day should be stored the same way.
You could:
In [17]: dataframe[['year', 'month', 'day']] = dataframe['release'].apply(
    ...:     lambda x: pd.Series(x.split()[::-1]))
In [18]: dataframe
Out[18]:
release year month day
0 7 June 2013 2013 June 7
1 2012 2012 NaN NaN
2 31 January 2013 2013 January 31
3 February 2008 2008 February NaN
4 17 June 2014 2014 June 17
5 2013 2013 NaN NaN
Try reversing the result of the split. DataFrame has no .reverse() method, but you can reverse each row's token list with .str[::-1] before expanding:
dataframe[['year', 'month', 'day']] = dataframe['release'].str.split().str[::-1].apply(pd.Series)
