Increment rank each time flag changes - python

I have the following pandas DataFrame, where the first column is the datetime index. I am trying to produce the desired_output column, which increments every time the flag changes from 0 to 1 or from 1 to 0. I have achieved this kind of thing in SQL, but after finding that pandasql's sqldf unexpectedly changes the values of the field being partitioned, I am now trying to do it with regular Python/pandas syntax.
Any help would be much appreciated.
+-------------+------+----------------+
| date(index) | flag | desired_output |
+-------------+------+----------------+
| 1/01/2020   | 0    | 1              |
| 2/01/2020   | 0    | 1              |
| 3/01/2020   | 0    | 1              |
| 4/01/2020   | 1    | 2              |
| 5/01/2020   | 1    | 2              |
| 6/01/2020   | 0    | 3              |
| 7/01/2020   | 1    | 4              |
| 8/01/2020   | 1    | 4              |
| 9/01/2020   | 1    | 4              |
| 10/01/2020  | 1    | 4              |
| 11/01/2020  | 1    | 4              |
| 12/01/2020  | 1    | 4              |
| 13/01/2020  | 0    | 5              |
| 14/01/2020  | 0    | 5              |
| 15/01/2020  | 0    | 5              |
| 16/01/2020  | 0    | 5              |
| 17/01/2020  | 1    | 6              |
| 18/01/2020  | 0    | 7              |
| 19/01/2020  | 0    | 7              |
| 20/01/2020  | 0    | 7              |
| 21/01/2020  | 0    | 7              |
| 22/01/2020  | 1    | 8              |
| 23/01/2020  | 1    | 8              |
+-------------+------+----------------+

Use diff and cumsum: diff is non-zero wherever flag differs from the previous row (the first value becomes NaN, which also compares unequal to 0, so the counter starts at 1), ne(0) turns that into booleans, and cumsum increments the counter at every change:
print (df["flag"].diff().ne(0).cumsum())
0 1
1 1
2 1
3 2
4 2
5 3
6 4
7 4
8 4
9 4
10 4
11 4
12 5
13 5
14 5
15 5
16 6
17 7
18 7
19 7
20 7
21 8
22 8
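To store the result as in the desired_output column, the same expression can simply be assigned back to the frame. A minimal, self-contained sketch, assuming a frame with the flag column shown above (the sample dates are only illustrative):

import pandas as pd

# sample frame with the flag column from the question
df = pd.DataFrame(
    {"flag": [0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1]},
    index=pd.date_range("2020-01-01", periods=23, freq="D"),
)

# diff() is non-zero (or NaN on the first row) whenever the flag changes,
# ne(0) turns that into True/False, and cumsum() counts the changes
df["desired_output"] = df["flag"].diff().ne(0).cumsum()
print(df)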

Related

Pandas sum column for each column pair

I have a dataframe as follows. I'm attempting to sum the values in the Total column, for each date, for each unique pair from columns P_buy and P_sell.
+-------+----------+------+----------+-------+--------+-------+
| Index | Date     | Type | Quantity | P_buy | P_sell | Total |
+-------+----------+------+----------+-------+--------+-------+
| 0     | 1/1/2020 | 1    | 10       | 1     | 1      | 10    |
| 1     | 1/1/2020 | 1    | 10       | 2     | 1      | 20    |
| 2     | 1/1/2020 | 2    | 20       | 3     | 1      | 25    |
| 3     | 1/1/2020 | 2    | 20       | 4     | 1      | 20    |
| 4     | 2/1/2020 | 3    | 30       | 1     | 1      | 35    |
| 5     | 2/1/2020 | 3    | 30       | 2     | 1      | 30    |
| 6     | 2/1/2020 | 1    | 40       | 3     | 1      | 45    |
| 7     | 2/1/2020 | 1    | 40       | 4     | 1      | 40    |
| 8     | 3/1/2020 | 2    | 50       | 1     | 1      | 55    |
| 9     | 3/1/2020 | 2    | 50       | 2     | 1      | 53    |
+-------+----------+------+----------+-------+--------+-------+
My desired output would be as follows, where for each unique P_buy/P_sell pair I get the sum of Total over the dates:
+-------+--------+-------+
| P_buy | P_sell | Total |
+-------+--------+-------+
| 1     | 1      | 100   |
| 2     | 1      | 103   |
| 3     | 1      | 70    |
+-------+--------+-------+
My attempts have been using the groupby function, but I haven't been able to implement it successfully.
# use a groupby on the desired columns and sum the total
df.groupby(['P_buy','P_sell'], as_index=False)['Total'].sum()
P_buy P_sell Total
0 1 1 100
1 2 1 103
2 3 1 70
3 4 1 60
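If the question's "for each date" wording is taken literally, Date can be added to the grouping keys as well; a short sketch of that variant:

# per-date totals for each P_buy/P_sell pair (drop 'Date' to reproduce the table above)
df.groupby(['Date', 'P_buy', 'P_sell'], as_index=False)['Total'].sum()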

Assign a total value of 1 if any number is present in a column, else 0

I have a dataset similar to the sample below:
| id | old_a | old_b | new_a | new_b |
|----|-------|-------|-------|-------|
| 6 | 3 | 0 | 0 | 0 |
| 6 | 9 | 0 | 2 | 0 |
| 13 | 3 | 0 | 0 | 0 |
| 13 | 37 | 0 | 0 | 1 |
| 13 | 30 | 0 | 0 | 6 |
| 13 | 12 | 2 | 0 | 0 |
| 6 | 7 | 0 | 2 | 0 |
| 6 | 8 | 0 | 0 | 0 |
| 6 | 19 | 0 | 3 | 0 |
| 6 | 54 | 0 | 0 | 0 |
| 87 | 6 | 0 | 2 | 0 |
| 87 | 11 | 1 | 1 | 0 |
| 87 | 25 | 0 | 1 | 0 |
| 87 | 10 | 0 | 0 | 0 |
| 9 | 8 | 1 | 0 | 0 |
| 9 | 19 | 0 | 2 | 0 |
| 9 | 1 | 0 | 0 | 0 |
| 9 | 34 | 0 | 7 | 0 |
I'm providing this sample dataset for the above table:
import pandas as pd

data = [[6,3,0,0,0],[6,9,0,2,0],[13,3,0,0,0],[13,37,0,0,1],[13,30,0,0,6],[13,12,2,0,0],[6,7,0,2,0],
        [6,8,0,0,0],[6,19,0,3,0],[6,54,0,0,0],[87,6,0,2,0],[87,11,1,1,0],[87,25,0,1,0],[87,10,0,0,0],
        [9,8,1,0,0],[9,19,0,2,0],[9,1,0,0,0],[9,34,0,7,0]]
data = pd.DataFrame(data, columns=['id','old_a','old_b','new_a','new_b'])
For each id, I want to look at columns 'new_a' and 'new_b': if even a single non-zero value exists in either column for that id, I want to count it as 1, irrespective of how many times values occur, and assign 0 if no value is present. For example, for id '9' there are two distinct values in new_a, but I want to count that as 1. Similarly, for id '13' there are no values in new_a, so I would assign it 0.
My final output should look like:
| id | new_a | new_b |
|----|-------|-------|
| 6 | 1 | 0 |
| 9 | 1 | 0 |
| 13 | 0 | 1 |
| 87 | 1 | 0 |
I would eventually want to calculate the % of clients using new_a and new_b. So from the above table, 75% of clients use new_a and 25% use new_b. I'm a beginner in Python and not sure how to proceed with this.
Use GroupBy.any, because 0 is treated as False, then convert the boolean output to integers:
df = data.groupby('id')[['new_a','new_b']].any().astype(int).reset_index()
print (df)
id new_a new_b
0 6 1 0
1 9 1 0
2 13 0 1
3 87 1 0
For the percentages, take the mean of the output above (the mean of a 0/1 column is the share of 1s) and multiply by 100:
s = df[['new_a','new_b']].mean().mul(100)
print (s)
new_a 75.0
new_b 25.0
dtype: float64
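The two steps can also be chained in a single expression on the sample data above; a minimal sketch:

# True/False per id, then the mean of the boolean columns is the share of ids using each feature
pct = data.groupby('id')[['new_a', 'new_b']].any().mean().mul(100)
print(pct)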

How can I turn the following dataframe into a multi-index dataframe?

How can I achieve the following:
I have a table like so
|----------------------|
| Date | A | B | C | D |
|------+---+---+---+---|
| 2000 | 1 | 2 | 5 | 4 |
|------+---+---+---+---|
| 2001 | 2 | 2 | 7 | 4 |
|------+---+---+---+---|
| 2002 | 3 | 1 | 7 | 7 |
|------+---+---+---+---|
| 2003 | 4 | 1 | 5 | 7 |
|----------------------|
and turn it into a multi-index type dataframe:
|------------------------------------|
| Column Name | Date | Value | C | D |
|-------------+------+-------+---+---|
| A | 2000 | 1 | 5 | 4 |
| |------+-------+---+---|
| | 2001 | 2 | 7 | 4 |
| |------+-------+---+---|
| | 2002 | 3 | 7 | 7 |
| |------+-------+---+---|
| | 2003 | 4 | 5 | 7 |
|-------------+------+-------+---+---|
| B | 2000 | 2 | 5 | 4 |
| |------+-------+---+---|
| | 2001 | 2 | 7 | 4 |
| |------+-------+---+---|
| | 2002 | 1 | 7 | 7 |
| |------+-------+---+---|
| | 2003 | 1 | 5 | 7 |
|------------------------------------|
I have tried using the melt function on the dataframe but could not figure out how to achieve this desired look. I think I would then also have to apply a groupby to the melted dataframe.
You can use melt with set_index. By adding C and D as id_vars, the columns will keep the same structure, then you can just set the columns of interest as index to get a MultiIndex dataframe:
df.melt(id_vars=['Date', 'C', 'D']).set_index(['variable', 'Date'])
C D value
variable Date
A 2000 5 4 1
2001 7 4 2
2002 7 7 3
2003 5 7 4
B 2000 5 4 2
2001 7 4 2
2002 7 7 1
2003 5 7 1
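A self-contained sketch of the same approach, built from the sample table above (sort_index is optional but keeps the MultiIndex sorted):

import pandas as pd

df = pd.DataFrame({
    'Date': [2000, 2001, 2002, 2003],
    'A': [1, 2, 3, 4],
    'B': [2, 2, 1, 1],
    'C': [5, 7, 7, 5],
    'D': [4, 4, 7, 7],
})

# keep C and D as id_vars so they stay regular columns,
# then move the melted 'variable' and Date into the index
out = (df.melt(id_vars=['Date', 'C', 'D'])
         .set_index(['variable', 'Date'])
         .sort_index())
print(out)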

I want to add a new column into cross tabulation data

I have cross-tabulation data, which I created using
x = pd.crosstab(a['Age Category'], a['Category'])
| Category | A | B | C | D |
|--------------|---|----|----|---|
| Age Category | | | | |
| 21-26 | 2 | 2 | 4 | 1 |
| 26-31 | 7 | 11 | 12 | 5 |
| 31-36 | 3 | 5 | 5 | 2 |
| 36-41 | 2 | 4 | 1 | 7 |
| 41-46 | 0 | 1 | 3 | 2 |
| 46-51 | 0 | 0 | 2 | 3 |
| Above 51 | 0 | 3 | 0 | 6 |
And I want to add a new column Total containing the row sums, something like this:
| Category | A | B | C | D | Total |
|--------------|---|----|----|---|-------|
| Age Category | | | | | |
| 21-26 | 2 | 2 | 4 | 1 | 9 |
| 26-31 | 7 | 11 | 12 | 5 | 35 |
| 31-36 | 3 | 5 | 5 | 2 | 15 |
| 36-41 | 2 | 4 | 1 | 7 | 14 |
| 41-46 | 0 | 1 | 3 | 2 | 6 |
| 46-51 | 0 | 0 | 2 | 3 | 5 |
| Above 51 | 0 | 3 | 0 | 6 | 9 |
I tried x['Total'] = x.sum(axis=1) but this code gives me TypeError: cannot insert an item into a CategoricalIndex that is not already an existing category
Thank you for your time and consideration.
Use CategoricalIndex.add_categories to append the new category to the columns before assigning:
x.columns = x.columns.add_categories(['Total'])
x['Total'] = x.sum(axis = 1)
print (x)
Category      A   B   C  D  Total
Age Category
21-26         2   2   4  1      9
26-31         7  11  12  5     35
31-36         3   5   5  2     15
36-41         2   4   1  7     14
41-46         0   1   3  2      6
46-51         0   0   2  3      5
Above 51      0   3   0  6      9
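Alternatively, depending on the pandas version, crosstab can compute the totals itself via its margins arguments; a sketch (this also adds a total row, which can be dropped if only the column is wanted):

x = pd.crosstab(a['Age Category'], a['Category'], margins=True, margins_name='Total')
x = x.drop('Total')  # drop the total row, keeping only the Total column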

How to add a new column to a pySpark dataframe containing the count of column values greater than 0?

I want to add a new column to a pyspark dataframe that contains, for each row, the count of column values greater than 0.
Here is my demo dataframe.
+-----------+----+----+----+----+----+----+
|customer_id|2010|2011|2012|2013|2014|2015|
+-----------+----+----+----+----+----+----+
|          1|   0|   4|   0|  32|   0|  87|
|          2|   5|   5|  56|  23|   0|  09|
|          3|   6|   6|  87|   0|  45|  23|
|          4|   7|   0|  12|  89|  78|   0|
|          6|   0|   0|   0|  23|  45|  64|
+-----------+----+----+----+----+----+----+
The above dataframe holds a customer's visits per year. I want to count how many years a customer visited, so I need a visit_count column containing the number of year columns (2010, 2011, 2012, 2013, 2014, 2015) with a value greater than 0.
+-----------+----+----+----+----+----+----+-----------+
|customer_id|2010|2011|2012|2013|2014|2015|visit_count|
+-----------+----+----+----+----+----+----+-----------+
|          1|   0|   4|   0|  32|   0|  87|          3|
|          2|   5|   5|  56|  23|   0|  09|          5|
|          3|   6|   6|  87|   0|  45|  23|          5|
|          4|   7|   0|  12|  89|  78|   0|          4|
|          6|   0|   0|   0|  23|  45|  64|          3|
+-----------+----+----+----+----+----+----+-----------+
How can I achieve this?
Try this (note that customer_id must be excluded from the counted columns, otherwise it is always greater than 0 and inflates the count by one):
df.withColumn('visit_count', sum((df[col] > 0).cast('integer') for col in df.columns if col != 'customer_id'))
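A slightly fuller sketch of the same idea, assuming the dataframe df from the question and the year columns named as strings:

from pyspark.sql import functions as F

year_cols = ['2010', '2011', '2012', '2013', '2014', '2015']

# one 1/0 flag per year column that is greater than 0, summed into a single column
visit_expr = sum(F.when(F.col(c) > 0, 1).otherwise(0) for c in year_cols)
df = df.withColumn('visit_count', visit_expr)
df.show()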
