I have a data frame as shown below:
customer  organization  currency       volume  revenue  Duration
Peter     XYZ Ltd       CNY, INR       20      3,000    01-Oct-2022
John      abc Ltd       INR            7       184      01-Oct-2022
Mary      aaa Ltd       USD            3       43       03-Oct-2022
John      bbb Ltd       THB            17      2,300    04-Oct-2022
Dany      ccc Ltd       CNY, INR, KRW  45      15,100   04-Oct-2022
If I pivot as shown below
df = pd.pivot_table(df, values=['runs', 'volume', 'revenue'],
                    index=['customer', 'organization', 'currency'],
                    columns=['Duration'],
                    aggfunc='sum',
                    fill_value=0)
then level 0 of the resulting columns is the value name (volume, revenue, runs), each repeated for every Duration at level 1. I would instead like Duration as level 0 and volume, revenue as level 1. How can I achieve this? In other words, I would like to have the date as level 0 with volume, revenue and runs under it.
You can use swaplevel on the columns of your current pivot result; try this:
df1 = df.pivot_table(index=['customer', 'organization', 'currency'],
                     columns=['Duration'],
                     aggfunc='sum',
                     fill_value=0).swaplevel(0, 1, axis=1).sort_index(axis=1)
Hope this helps.
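For reference, here is a minimal runnable sketch of this approach, assuming the sample data from the question (I leave out the runs column since it does not appear in the sample):

import pandas as pd

# Sample rows from the question; 'runs' is omitted because it is not in the sample
df = pd.DataFrame({
    'customer': ['Peter', 'John', 'Mary', 'John', 'Dany'],
    'organization': ['XYZ Ltd', 'abc Ltd', 'aaa Ltd', 'bbb Ltd', 'ccc Ltd'],
    'currency': ['CNY, INR', 'INR', 'USD', 'THB', 'CNY, INR, KRW'],
    'volume': [20, 7, 3, 17, 45],
    'revenue': [3000, 184, 43, 2300, 15100],
    'Duration': ['01-Oct-2022', '01-Oct-2022', '03-Oct-2022',
                 '04-Oct-2022', '04-Oct-2022'],
})

# Pivot, then move Duration from column level 1 to column level 0
df1 = df.pivot_table(index=['customer', 'organization', 'currency'],
                     columns=['Duration'],
                     aggfunc='sum',
                     fill_value=0).swaplevel(0, 1, axis=1).sort_index(axis=1)
print(df1)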
I have two different dataframes: one containing the Net Revenue by SKU and Supplier, and another containing the stock of SKUs in each store. I need to get, per supplier, the average number of stores that carry the SKUs that compose up to 90% of the supplier's net revenue. It's a bit complicated, but I will give an example that I hope makes it clear. Please note that if 3 SKUs compose 89% of the revenue, we need to consider one more.
Example:
Dataframe 1 - Net Revenue
Supplier  SKU   Net Revenue
UNILEVER  1111  10000
UNILEVER  2222  50000
UNILEVER  3333  500
PEPSICO   1313  680
PEPSICO   2424  10000
PEPSICO   2323  450
Dataframe 2 - Stock
Store  SKU   Stock
1      1111  1
1      2222  2
1      3333  1
2      1111  1
2      2222  0
2      3333  1
In this case, for UNILEVER, we need to discard SKU 3333 because its net revenue is not relevant (1111 and 2222 already compose more than 90% of UNILEVER's total net revenue). Coverage in this case will be 1.5: 1111 is stocked in 2 stores and 2222 in 1 store, so (2 + 1) / 2 = 1.5.
Result is something like this:
Supplier  Coverage
UNILEVER  1.5
PEPSICO   ...
Please note that the real dataset has a different number of SKUs per supplier and a huge number of suppliers (around 150), so performance doesn't need to be the priority, but it has to be considered.
Thanks in advance, guys.
Calculate the cumulative sum grouped by Supplier and divide it by the supplier's total revenue.
Then find each supplier's revenue threshold by taking the minimum cumulative revenue percentage at or above 90%, so the SKU that crosses the 90% line is still included.
Then you can get the list of SKUs by supplier and calculate the coverage.
import pandas as pd

df = pd.DataFrame([
    ['UNILEVER', '1111', 10000],
    ['UNILEVER', '2222', 50000],
    ['UNILEVER', '3333', 500],
    ['PEPSICO', '1313', 680],
    ['PEPSICO', '2424', 10000],
    ['PEPSICO', '2323', 450],
], columns=['Supplier', 'SKU', 'Net Revenue'])

# Total revenue per supplier (sum only the numeric column; SKU is a string)
total_revenue_by_supplier = df.groupby('Supplier')['Net Revenue'].sum().reset_index()
total_revenue_by_supplier.columns = ['Supplier', 'Total Revenue']

# Cumulative revenue per supplier, largest SKUs first
df = df.sort_values(['Supplier', 'Net Revenue'], ascending=[True, False])
df['cumsum'] = df.groupby('Supplier')['Net Revenue'].cumsum()
df = df.merge(total_revenue_by_supplier, on='Supplier')
df['cumpercentage'] = df['cumsum'] / df['Total Revenue']

# The threshold is the smallest cumulative percentage at or above 90%,
# so the SKU that crosses the 90% line is kept
threshold = df[df['cumpercentage'] >= 0.9][['Supplier', 'cumpercentage']].groupby('Supplier').min().reset_index()
threshold.columns = ['Supplier', 'Revenue Threshold']
df = df.merge(threshold, on='Supplier')
df = df[df['cumpercentage'] <= df['Revenue Threshold']][['Supplier', 'SKU', 'Net Revenue']]
df
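The code above stops at the list of retained SKUs. A sketch of the final coverage step could look like the following, where stock_df is my name for Dataframe 2 from the question and coverage is the average, over the retained SKUs, of the number of stores holding positive stock:

# A sketch of the remaining coverage step, assuming `df` now holds only the
# retained SKUs and stock_df mirrors Dataframe 2 from the question
stock_df = pd.DataFrame([
    [1, '1111', 1], [1, '2222', 2], [1, '3333', 1],
    [2, '1111', 1], [2, '2222', 0], [2, '3333', 1],
], columns=['Store', 'SKU', 'Stock'])

# Number of stores holding each SKU with positive stock
stores_per_sku = (stock_df[stock_df['Stock'] > 0]
                  .groupby('SKU')['Store'].nunique()
                  .rename('Stores')
                  .reset_index())

# Average store count over the retained SKUs, per supplier
coverage = (df.merge(stores_per_sku, on='SKU', how='left')
              .fillna({'Stores': 0})
              .groupby('Supplier')['Stores'].mean()
              .reset_index(name='Coverage'))
print(coverage)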
I have two dataframes, one with news and the other with stock prices. Both dataframes have a date column, and I want to merge them within a gap of 5 days.
Let's say my news dataframe is df1 and the price dataframe is df2.
My df1 looks like this:
News_Dates News
2018-09-29 Huge blow to ABC Corp. as they lost the 2012 tax case
2018-09-30 ABC Corp. suffers a loss
2018-10-01 ABC Corp to Sell stakes
2018-12-20 We are going to comeback strong said ABC CEO
2018-12-22 Shares are down massively for ABC Corp.
My df2 looks like this:
Dates Price
2018-10-04 120
2018-12-24 131
First method of merging I do is:
pd.merge_asof(df1_zscore.sort_values(by=['Dates']), df_n.sort_values(by=['News_Dates']),
              left_on=['Dates'], right_on=['News_Dates'],
              tolerance=pd.Timedelta('5d'), direction='backward')
The resulting df is:
Dates News_Dates News Price
2018-10-04 2018-10-01 ABC Corp to Sell stakes 120
2018-12-24 2018-12-22 Shares are down massively for ABC Corp. 131
The second way of merging I do is:
pd.merge_asof(df_n.sort_values(by=['News_Dates']), df1_zscore.sort_values(by=['Dates']),
              left_on=['News_Dates'], right_on=['Dates'],
              tolerance=pd.Timedelta('5d'), direction='forward').dropna()
And the resulting df is:
News_Dates News Dates Price
2018-09-29 Huge blow to ABC Corp. as they lost the 2012 tax case 2018-10-04 120
2018-09-30 ABC Corp. suffers a loss 2018-10-04 120
2018-10-01 ABC Corp to Sell stakes 2018-10-04 120
2018-12-22 Shares are down massively for ABC Corp. 2018-12-24 131
Both merges produce results, but in both cases some matches are missing: in the first case, for the 4th October price, the news from 29th and 30th September should also have been merged; and in the second case, for the 24th December price, the 20th December news should also have been merged.
So I'm not quite able to figure out where I am going wrong.
P.S. My objective is to merge the price df with all the news that came within the 5 days before the price date.
merge_asof matches each row of the left frame to at most one row of the right frame, so with the price frame on the left you can never keep more than one news item per price. You can swap the left and right dataframes so that every news row picks up its nearby price:
df = pd.merge_asof(
    df1,
    df2,
    left_on='News_Dates',
    right_on='Dates',
    tolerance=pd.Timedelta('5D'),
    direction='nearest'
)
df = df[['Dates', 'News_Dates', 'News', 'Price']]
print(df)
Dates News_Dates News Price
0 2018-10-04 2018-09-29 Huge blow to ABC Corp. as they lost the 2012 t... 120
1 2018-10-04 2018-09-30 ABC Corp. suffers a loss 120
2 2018-10-04 2018-10-01 ABC Corp to Sell stakes 120
3 2018-12-24 2018-12-20 We are going to comeback strong said ABC CEO 131
4 2018-12-24 2018-12-22 Shares are down massively for ABC Corp. 131
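For reference, a minimal sketch of the input frames the snippet above assumes (here df1 is the news frame and df2 the price frame; merge_asof also requires both frames to be sorted on their date keys, which this sample data already is):

import pandas as pd

# Sample frames from the question; df1 = news, df2 = prices, dates parsed
df1 = pd.DataFrame({
    'News_Dates': pd.to_datetime(['2018-09-29', '2018-09-30', '2018-10-01',
                                  '2018-12-20', '2018-12-22']),
    'News': ['Huge blow to ABC Corp. as they lost the 2012 tax case',
             'ABC Corp. suffers a loss',
             'ABC Corp to Sell stakes',
             'We are going to comeback strong said ABC CEO',
             'Shares are down massively for ABC Corp.'],
})
df2 = pd.DataFrame({
    'Dates': pd.to_datetime(['2018-10-04', '2018-12-24']),
    'Price': [120, 131],
})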
Here is my solution using numpy; it builds the full prices x news date-difference matrix via broadcasting, so memory grows with the product of the two lengths:
import numpy as np
import pandas as pd

df_n = pd.DataFrame([('2018-09-29', 'Huge blow to ABC Corp. as they lost the 2012 tax case'), ('2018-09-30', 'ABC Corp. suffers a loss'), ('2018-10-01', 'ABC Corp to Sell stakes'), ('2018-12-20', 'We are going to comeback strong said ABC CEO'), ('2018-12-22', 'Shares are down massively for ABC Corp.')], columns=('News_Dates', 'News'))
df1_zscore = pd.DataFrame([('2018-10-04', '120'), ('2018-12-24', '131')], columns=('Dates', 'Price'))

df_n["News_Dates"] = pd.to_datetime(df_n["News_Dates"])
df1_zscore["Dates"] = pd.to_datetime(df1_zscore["Dates"])

n_dates = df_n["News_Dates"].values
p_dates = df1_zscore[["Dates"]].values  # 2D so broadcasting builds a matrix

## subtract each pair of p_dates and n_dates to create a (prices x news) matrix
mat_date_compare = (p_dates - n_dates).astype('timedelta64[D]')

## boolean matrix: True where the news precedes the price by 0 to 5 days,
## to be used as an index into the original arrays
comparison = (mat_date_compare <= pd.Timedelta("5d")) & (mat_date_compare >= pd.Timedelta("0d"))

## flat cell numbers (0 to matrix size - 1) of the cells that meet the condition
ind = np.arange(len(n_dates) * len(p_dates))[comparison.ravel()]

## recover row (price) and column (news) indices from the flat cell numbers
pd.concat([df1_zscore.iloc[ind // len(n_dates)].reset_index(drop=True),
           df_n.iloc[ind % len(n_dates)].reset_index(drop=True)], sort=False, axis=1)
Result
Dates Price News_Dates News
0 2018-10-04 120 2018-09-29 Huge blow to ABC Corp. as they lost the 2012 t...
1 2018-10-04 120 2018-09-30 ABC Corp. suffers a loss
2 2018-10-04 120 2018-10-01 ABC Corp to Sell stakes
3 2018-12-24 131 2018-12-20 We are going to comeback strong said ABC CEO
4 2018-12-24 131 2018-12-22 Shares are down massively for ABC Corp.
I have two dataframes: one at a micro level containing all line items purchased across all transactions (df1); the other will be built as a higher-level aggregation that summarizes the revenue generated per transaction, essentially summing up all line items for each transaction (df2).
df1
Out[df1]:
transaction_id item_id amount
0 AJGDO-12304 120 $120
1 AJGDO-12304 40 $10
2 AJGDO-12304 01 $10
3 ODSKF-99130 120 $120
4 ODSKF-99130 44 $30
5 ODSKF-99130 03 $50
df2
Out[df2]
transaction_id location_id customer_id revenue(THIS WILL BE THE ADDED COLUMN!)
0 AJGDO-12304 2131234 1234 $140
1 ODSKF-99130 213124 1345 $200
How would I go about linking the output of a groupby.sum() to df2? The revenue column will essentially be df1['amount'] aggregated by df1['transaction_id'], and I want to link it to df2['transaction_id'].
Here is what I have tried so far, but I am struggling to put it together:
results = df1.groupby('transaction_id')['amount'].sum()
df2['revenue'] = df2['transaction_id'].merge(results,how='left').value
Use map:
lookup = df1.groupby(['transaction_id'])['amount'].sum()
df2['revenue'] = df2.transaction_id.map(lookup)
print(df2)
Output
transaction_id location_id customer_id revenue
0 AJGDO-12304 2131234 1234 140
1 ODSKF-99130 213124 1345 200
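One caveat: the sample df1 shows amounts like $120. If the column really holds such strings, summing would concatenate them instead of adding, so convert to numbers first; a possible cleanup, assuming string amounts, is:

# Hypothetical cleanup, assuming 'amount' holds strings such as '$120'
df1['amount'] = (df1['amount']
                 .str.replace('$', '', regex=False)
                 .str.replace(',', '', regex=False)
                 .astype(float))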
I need to combine 2 pandas dataframes, matching rows where df1.date falls within the 2 months before df2.date. I then want to calculate how many traders had traded the same stock during that period and count the total shares purchased.
I have tried using the approach listed below, but found it far too complicated. I believe there is a smarter/simpler solution.
Pandas: how to merge two dataframes on offset dates?
A sample dataset is below:
DF1 (team_1):
date shares symbol trader
31/12/2013 154 FDX Max
30/06/2016 2367 GOOGL Max
21/07/2015 293 ORCL Max
18/07/2015 304 ORCL Sam
DF2 (team_2):
date shares symbol trader
23/08/2015 345 ORCL John
04/07/2014 567 FB John
06/12/2013 221 ACER Sally
30/11/2012 889 HP John
05/06/2010 445 ABBV Kate
Required output:
date shares symbol trader team_2_traders team_2_shares_bought
23/08/2015 345 ORCL John 2 597
04/07/2014 567 FB John 0 0
06/12/2013 221 ACER Sally 0 0
30/11/2012 889 HP John 0 0
05/06/2010 445 ABBV Kate 0 0
This adds 2 new columns...
'team_2_traders' = the number of traders from team_1 who traded the same stock during the 2 months before the date listed in DF2.
'team_2_shares_bought' = the total shares purchased by team_1 during the 2 months before the date listed in DF2.
If anyone is willing to give this a crack, please use the snippet below to set up the dataframes. Please keep in mind that the actual dataset contains millions of rows and 6,000 company stocks.
import pandas as pd

team_1 = {'symbol': ['FDX', 'GOOGL', 'ORCL', 'ORCL'],
          'date': ['31/12/2013', '30/06/2016', '21/07/2015', '18/07/2015'],
          'shares': [154, 2367, 293, 304],
          'trader': ['Max', 'Max', 'Max', 'Sam']}
df1 = pd.DataFrame(team_1)

team_2 = {'symbol': ['ORCL', 'FB', 'ACER', 'HP', 'ABBV'],
          'date': ['23/08/2015', '04/07/2014', '06/12/2013', '30/11/2012', '05/06/2010'],
          'shares': [345, 567, 221, 889, 445],
          'trader': ['John', 'John', 'Sally', 'John', 'Kate']}
df2 = pd.DataFrame(team_2)
Appreciate the help - thank you.
Please check my solution.
from pandas.tseries.offsets import MonthEnd

df_ = df2.merge(df1, on=['symbol'])
# The sample dates are day-first (dd/mm/yyyy)
df_['date_x'] = pd.to_datetime(df_['date_x'], dayfirst=True)
df_['date_y'] = pd.to_datetime(df_['date_y'], dayfirst=True)

# Keep team_1 trades (date_y) that fall within the roughly two months
# before the team_2 date (date_x); MonthEnd rolls to month ends
df_2m = df_[(df_['date_y'] <= df_['date_x']) &
            (df_['date_x'] < df_['date_y'] + MonthEnd(2))] \
    .loc[:, ['date_y', 'shares_y', 'symbol', 'trader_y']] \
    .groupby('symbol')

# .count() counts matching trades; use .nunique() instead if the same
# trader can appear more than once per symbol
df1_ = pd.concat([df_2m['shares_y'].sum(), df_2m['trader_y'].count()], axis=1)
print(df1_)
shares_y trader_y
symbol
ORCL 597 2
print(df2.merge(df1_.reset_index(), on='symbol', how='left').fillna(0))
date shares symbol trader shares_y trader_y
0 23/08/2015 345 ORCL John 597.0 2.0
1 04/07/2014 567 FB John 0.0 0.0
2 06/12/2013 221 ACER Sally 0.0 0.0
3 30/11/2012 889 HP John 0.0 0.0
4 05/06/2010 445 ABBV Kate 0.0 0.0
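If you need the exact column names from the required output, a final rename (my naming, taken from the question) could be added:

# Optional final step: rename the aggregate columns to the requested names
result = df2.merge(df1_.reset_index(), on='symbol', how='left').fillna(0)
result = result.rename(columns={'trader_y': 'team_2_traders',
                                'shares_y': 'team_2_shares_bought'})
print(result)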