Pandas sum column for each column pair - python

I have a dataframe as follows. I'm attempting to sum the values in the Total column across all dates for each unique pair from columns P_buy and P_sell.
+-------+----------+------+----------+-------+--------+-------+
| Index | Date     | Type | Quantity | P_buy | P_sell | Total |
+-------+----------+------+----------+-------+--------+-------+
| 0     | 1/1/2020 | 1    | 10       | 1     | 1      | 10    |
| 1     | 1/1/2020 | 1    | 10       | 2     | 1      | 20    |
| 2     | 1/1/2020 | 2    | 20       | 3     | 1      | 25    |
| 3     | 1/1/2020 | 2    | 20       | 4     | 1      | 20    |
| 4     | 2/1/2020 | 3    | 30       | 1     | 1      | 35    |
| 5     | 2/1/2020 | 3    | 30       | 2     | 1      | 30    |
| 6     | 2/1/2020 | 1    | 40       | 3     | 1      | 45    |
| 7     | 2/1/2020 | 1    | 40       | 4     | 1      | 40    |
| 8     | 3/1/2020 | 2    | 50       | 1     | 1      | 55    |
| 9     | 3/1/2020 | 2    | 50       | 2     | 1      | 53    |
+-------+----------+------+----------+-------+--------+-------+
My desired output is as follows, where for each unique P_buy/P_sell pair I get the sum of Total across all dates:
+-------+--------+-------+
| P_buy | P_sell | Total |
+-------+--------+-------+
| 1     | 1      | 100   |
| 2     | 1      | 103   |
| 3     | 1      | 70    |
+-------+--------+-------+
My attempts have used the groupby function, but I haven't been able to implement it successfully.

# use a groupby on the desired columns and sum the total
df.groupby(['P_buy','P_sell'], as_index=False)['Total'].sum()
   P_buy  P_sell  Total
0      1       1    100
1      2       1    103
2      3       1     70
3      4       1     60
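The same pattern extends to per-date subtotals, should you want them, by adding Date to the grouping keys (a minimal sketch, not part of the answer above):

# group by date as well to get one subtotal per date per pair
df.groupby(['Date', 'P_buy', 'P_sell'], as_index=False)['Total'].sum()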

Related

How can I generate a financial summary using pandas dataframes?

I'd like to create a table from a dataframe with subtotals per business, totals per business type, and columns summing multiple value columns. The long-term goal is a selection tool based on the ingested Excel sheet, letting me compare whichever month's summary I bring in against other months (e.g. did Minerals item 26 from BA3 disappear the next month?), but I believe that is best saved for another question.
For now, I am having trouble figuring out how to summarize the data.
I have a dataframe in Pandas that contains the following:
Business | Business Type | ID | Value-Q1 | Value-Q2 | Value-Q3 | Value-Q4 | Value-FY |
---------+---------------+----+----------+----------+----------+----------+----------+
BA1 | Widgets | 1 | 7 | 0 | 0 | 8 | 15 |
BA1 | Widgets | 2 | 7 | 0 | 0 | 8 | 15 |
BA1 | Cups | 3 | 9 | 10 | 0 | 0 | 19 |
BA1 | Cups | 4 | 9 | 10 | 0 | 0 | 19 |
BA1 | Cups | 5 | 9 | 10 | 0 | 0 | 19 |
BA1 | Snorkels | 6 | 0 | 0 | 8 | 8 | 16 |
BA1 | Snorkels | 7 | 0 | 0 | 8 | 8 | 16 |
BA1 | Snorkels | 8 | 0 | 0 | 8 | 8 | 16 |
BA2 | Widgets | 9 | 100 | 0 | 7 | 0 | 107 |
BA2 | Widgets | 10 | 100 | 0 | 7 | 0 | 107 |
BA2 | Widgets | 11 | 100 | 0 | 7 | 0 | 107 |
BA2 | Widgets | 12 | 100 | 0 | 7 | 0 | 107 |
BA2 | Bread | 13 | 0 | 0 | 0 | 1 | 1 |
BA2 | Bread | 14 | 0 | 0 | 0 | 1 | 1 |
BA2 | Bread | 15 | 0 | 0 | 0 | 1 | 1 |
BA2 | Bread | 16 | 0 | 0 | 0 | 1 | 1 |
BA2 | Cat Food | 17 | 504 | 0 | 0 | 500 | 1004 |
BA2 | Cat Food | 18 | 504 | 0 | 0 | 500 | 1004 |
BA2 | Cat Food | 19 | 504 | 0 | 0 | 500 | 1004 |
BA2 | Cat Food | 20 | 504 | 0 | 0 | 500 | 1004 |
BA2 | Cat Food | 21 | 504 | 0 | 0 | 500 | 1004 |
BA3 | Gravel | 22 | 7 | 7 | 7 | 7 | 28 |
BA3 | Gravel | 23 | 7 | 7 | 7 | 7 | 28 |
BA3 | Gravel | 24 | 7 | 7 | 7 | 7 | 28 |
BA3 | Rocks | 25 | 3 | 2 | 0 | 0 | 5 |
BA3 | Minerals | 26 | 1 | 1 | 0 | 1 | 3 |
BA3 | Minerals | 27 | 1 | 1 | 0 | 1 | 3 |
BA4 | Widgets | 28 | 6 | 4 | 0 | 0 | 10 |
BA4 | Widgets | 29 | 6 | 4 | 0 | 0 | 10 |
BA4 | Widgets | 30 | 6 | 4 | 0 | 0 | 10 |
BA4 | Widgets | 31 | 6 | 4 | 0 | 0 | 10 |
BA4 | Widgets | 32 | 6 | 4 | 0 | 0 | 10 |
BA4 | Something | 33 | 1000 | 0 | 0 | 2 | 1002 |
BA5 | Bonbons | 34 | 60 | 40 | 10 | 0 | 110 |
BA5 | Bonbons | 35 | 60 | 40 | 10 | 0 | 110 |
BA5 | Gummy Bears | 36 | 7 | 0 | 0 | 9 | 16 |
(Imagine each ID has different values as well)
My goal is to slice the data to get the total occurrences of a given business type (e.g. BA1 has 2 Widgets, 3 Cups, and 3 Snorkels, each with a unique ID) as well as the total values:
            Occurrence | Q1 Sum | Q2 Sum | Q3 Sum | Q4 Sum | FY Sum |
BA1                  8 |     41 |     30 |     24 |     40 |    135 |
  Widgets            2 |     14 |      0 |      0 |     16 |     30 |
  Cups               3 |     27 |     30 |      0 |      0 |     57 |
  Snorkels           3 |      0 |      0 |     24 |     24 |     48 |
BA2         (subtotal of the BA2 items below)
  Widgets   (repeat as above)
  Bread     (repeat as above)
  Cat Food  (repeat as above)
I have more columns that mirror the Q1-FY columns for other fields (e.g. Value 2 Q1-FY) per line, which I would also like to include in the summary, but I imagine I could just repeat whatever process is used for the current Value cuts.
I have a list of unique Businesses
businesses = ['BA1', 'BA2', 'BA3', 'BA4', 'BA5']
and a list of unique Business Types
business_types = ['Widgets', 'Cups', 'Snorkels', 'Bread', 'Cat Food', 'Gravel', 'Rocks', 'Minerals', 'Something', 'Bonbons', 'Gummy Bears']
and finally a list of the Values
values = ['Value-Q1', 'Value-Q2', 'Value-Q3', 'Value-Q4', 'Value-FY']
and I tried doing a for loop over the lists.
Maybe I need to put the dataframe values on their own individual lines? I tried the following for at least the sum of FY:
for b in businesses:
    for bt in business_types:
        df_sums = df.loc[(df['Business'] == b) & (df['Business Type'] == bt), 'Value-FY'].sum()
but it didn't quite give me what I was hoping for.
I'm sure there's a better way to grab the values; I managed to get FY totals per business into a dictionary, but not totals per business per business type (which is also unique per business).
If anyone has any advice or can point me in the right direction I'd really appreciate it!
You should try the groupby method for this. groupby allows for several grouping options. Here is a link to the documentation on the method: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html
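As a minimal sketch of that approach (assuming the column names from the question; by_type and by_business are hypothetical names), two groupby calls give the per-business-type counts and sums plus the per-business subtotals:

import pandas as pd

value_cols = ['Value-Q1', 'Value-Q2', 'Value-Q3', 'Value-Q4', 'Value-FY']

# occurrences and value sums per Business / Business Type pair
by_type = df.groupby(['Business', 'Business Type'])[value_cols].sum()
by_type.insert(0, 'Occurrence', df.groupby(['Business', 'Business Type']).size())

# business-level subtotals
by_business = df.groupby('Business')[value_cols].sum()
by_business.insert(0, 'Occurrence', df.groupby('Business').size())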

Can we alter pandas cross tabulation?

I have loaded raw_data from MySQL using sqlalchemy and pymysql
engine = create_engine('mysql+pymysql://[user]:[passwd]@[host]:[port]/[database]')
df = pd.read_sql_table('data', engine)
df is something like this
| Age Category | Category |
|--------------|----------------|
| 31-26 | Engaged |
| 26-31 | Engaged |
| 31-36 | Not Engaged |
| Above 51 | Engaged |
| 41-46 | Disengaged |
| 46-51 | Nearly Engaged |
| 26-31 | Disengaged |
Then I performed the analysis as follows:
age = pd.crosstab(df['Age Category'], df['Category'])
| Category | A | B | C | D |
|--------------|---|----|----|---|
| Age Category | | | | |
| 21-26 | 2 | 2 | 4 | 1 |
| 26-31 | 7 | 11 | 12 | 5 |
| 31-36 | 3 | 5 | 5 | 2 |
| 36-41 | 2 | 4 | 1 | 7 |
| 41-46 | 0 | 1 | 3 | 2 |
| 46-51 | 0 | 0 | 2 | 3 |
| Above 51 | 0 | 3 | 0 | 6 |
I want to change it to a Pandas DataFrame something like this:
| Age Category | A | B | C | D |
|--------------|---|----|----|---|
| 21-26 | 2 | 2 | 4 | 1 |
| 26-31 | 7 | 11 | 12 | 5 |
| 31-36 | 3 | 5 | 5 | 2 |
| 36-41 | 2 | 4 | 1 | 7 |
| 41-46 | 0 | 1 | 3 | 2 |
| 46-51 | 0 | 0 | 2 | 3 |
| Above 51 | 0 | 3 | 0 | 6 |
Thank you for your time and consideration
Both of those texts are the columns and index names; the solution to change them is to use DataFrame.rename_axis:
age = age.rename_axis(index=None, columns='Age Category')
Or set the columns name from the index name, and then set the index name back to its default, None:
age.columns.name = age.index.name
age.index.name = None
print (age)
Age Category  Disengaged  Engaged  Nearly Engaged  Not Engaged
26-31                  1        1               0            0
31-26                  0        1               0            0
31-36                  0        0               0            1
41-46                  1        0               0            0
46-51                  0        0               1            0
Above 51               0        1               0            0
But these texts are a kind of metadata, so some functions may remove them.

I want to add a new column to cross tabulation data

I have cross tabulation data, which I created using
x = pd.crosstab(a['Age Category'], a['Category'])
| Category | A | B | C | D |
|--------------|---|----|----|---|
| Age Category | | | | |
| 21-26 | 2 | 2 | 4 | 1 |
| 26-31 | 7 | 11 | 12 | 5 |
| 31-36 | 3 | 5 | 5 | 2 |
| 36-41 | 2 | 4 | 1 | 7 |
| 41-46 | 0 | 1 | 3 | 2 |
| 46-51 | 0 | 0 | 2 | 3 |
| Above 51 | 0 | 3 | 0 | 6 |
And I want to add a new column Total to the cross tabulation, containing the row sums, something like this:
| Category | A | B | C | D | Total |
|--------------|---|----|----|---|-------|
| Age Category | | | | | |
| 21-26 | 2 | 2 | 4 | 1 | 9 |
| 26-31 | 7 | 11 | 12 | 5 | 35 |
| 31-36 | 3 | 5 | 5 | 2 | 15 |
| 36-41 | 2 | 4 | 1 | 7 | 14 |
| 41-46 | 0 | 1 | 3 | 2 | 6 |
| 46-51 | 0 | 0 | 2 | 3 | 5 |
| Above 51 | 0 | 3 | 0 | 6 | 9 |
I tried x['Total'] = x.sum(axis=1), but this code gives me TypeError: cannot insert an item into a CategoricalIndex that is not already an existing category.
Thank you for your time and consideration.
Use CategoricalIndex.add_categories to append the new category to the columns first:
x.columns = x.columns.add_categories(['Total'])
x['Total'] = x.sum(axis = 1)
print (x)
          A   B   C  D  Total
Category
21-26     2   2   4  1      9
26-31     7  11  12  5     35
31-36     3   5   5  2     15
36-41     2   4   1  7     14
41-46     0   1   3  2      6
46-51     0   0   2  3      5
Above 51  0   3   0  6      9
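Alternatively, a sketch assuming the same inputs: crosstab can compute the totals itself via its margins arguments, which also adds a Total row that is dropped here:

# margins adds both a Total column and a Total row; drop the row
x = pd.crosstab(a['Age Category'], a['Category'],
                margins=True, margins_name='Total').drop('Total')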

Pandas merge two dataframes and drop extra rows

How can I merge/join these two dataframes ONLY on "sample_id" and drop the extra rows from the second dataframe when merging/joining?
Using pandas in Python.
First dataframe (fdf)
| sample_id | name |
|-----------|-------|
| 1 | Mark |
| 1 | Dart |
| 2 | Julia |
| 2 | Oolia |
| 2 | Talia |
Second dataframe (sdf)
| sample_id | salary | time |
|-----------|--------|------|
| 1 | 20 | 0 |
| 1 | 30 | 5 |
| 1 | 40 | 10 |
| 1 | 50 | 15 |
| 2 | 33 | 0 |
| 2 | 23 | 5 |
| 2 | 24 | 10 |
| 2 | 28 | 15 |
| 2 | 29 | 20 |
So the resulting df would look like this:
| sample_id | name | salary | time |
|-----------|-------|--------|------|
| 1 | Mark | 20 | 0 |
| 1 | Dart | 30 | 5 |
| 2 | Julia | 33 | 0 |
| 2 | Oolia | 23 | 5 |
| 2 | Talia | 24 | 10 |
There are duplicates, so you need a helper column for a correct DataFrame.merge, using GroupBy.cumcount as the counter:
df = (fdf.assign(g=fdf.groupby('sample_id').cumcount())
         .merge(sdf.assign(g=sdf.groupby('sample_id').cumcount()), on=['sample_id', 'g'])
         .drop('g', axis=1))
print (df)
   sample_id   name  salary  time
0          1   Mark      20     0
1          1   Dart      30     5
2          2  Julia      33     0
3          2  Oolia      23     5
4          2  Talia      24    10
An alternative: merge on sample_id alone, then sort and drop the duplicate rows:
final_res = pd.merge(fdf, sdf, on=['sample_id'], how='left')
final_res.sort_values(['sample_id', 'name', 'time'], ascending=[True, True, True], inplace=True)
final_res.drop_duplicates(subset=['sample_id', 'name'], keep='first', inplace=True)

How to add a new column to a PySpark dataframe containing the count of its column values that are greater than 0?

I want to add a new column to a PySpark dataframe that contains the count of all column values greater than 0 in a particular row.
Here is my demo dataframe.
+-----------+----+----+----+----+----+----+
|customer_id|2010|2011|2012|2013|2014|2015|
+-----------+----+----+----+----+----+----+
|          1|   0|   4|   0|  32|   0|  87|
|          2|   5|   5|  56|  23|   0|  09|
|          3|   6|   6|  87|   0|  45|  23|
|          4|   7|   0|  12|  89|  78|   0|
|          6|   0|   0|   0|  23|  45|  64|
+-----------+----+----+----+----+----+----+
The dataframe above holds each customer's visits by year. I want to count how many years a customer visited, so I need a visit_count column holding the count of year columns (2010, 2011, 2012, 2013, 2014, 2015) with a value greater than 0.
+-----------+----+----+----+----+----+----+-----------+
|customer_id|2010|2011|2012|2013|2014|2015|visit_count|
+-----------+----+----+----+----+----+----+-----------+
|          1|   0|   4|   0|  32|   0|  87|          3|
|          2|   5|   5|  56|  23|   0|  09|          5|
|          3|   6|   6|  87|   0|  45|  23|          5|
|          4|   7|   0|  12|  89|  78|   0|          4|
|          6|   0|   0|   0|  23|  45|  64|          3|
+-----------+----+----+----+----+----+----+-----------+
How can I achieve this?
Try this, excluding customer_id from the columns being tested, since the id itself is greater than 0 and would otherwise inflate every count by one:
df.withColumn('visit_count', sum((df[col] > 0).cast('integer') for col in df.columns if col != 'customer_id'))
Python's built-in sum works here because adding PySpark Columns yields a single Column expression.
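A self-contained version of the same idea (the SparkSession setup and sample rows here are illustrative assumptions, not part of the original answer):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(1, 0, 4, 0, 32, 0, 87), (2, 5, 5, 56, 23, 0, 9)],
    ['customer_id', '2010', '2011', '2012', '2013', '2014', '2015'])

# count, per row, how many of the year columns are positive
df = df.withColumn(
    'visit_count',
    sum((df[c] > 0).cast('integer') for c in df.columns if c != 'customer_id'))
df.show()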
