Summing on all previous values of a dataframe in Python

I have data that looks like:
Year Month Region Value
1978 1 South 1
1990 1 North 22
1990 2 South 33
1990 2 Mid W 12
1998 1 South 1
1998 1 North 12
1998 2 South 2
1998 3 South 4
1998 1 Mid W 2
.
.
(data continues up to 2010)
My end date is 2010, and I want to sum all Values by Region and Month, adding all previous years' values together.
I don't want just a regular cumulative sum but a monthly cumulative sum by Region, where Month 1 of Region South accumulates all the Month 1 values of Region South that come before it, and so on.
Desired output is something like:
Month Region Cum_Value
1 South 2
2 South 34
3 South 4
.
.
1 North 34
2 North 10
.
.
1 MidW 2
2 MidW 12

Use pd.DataFrame.groupby with pd.DataFrame.cumsum:
df1['cumsum'] = df1.groupby(['Month', 'Region'])['Value'].cumsum()
Result:
Year Month Region Value cumsum
0 1978 1 South 1.0 1.0
1 1990 1 North 22.0 22.0
2 1990 2 South 33.0 33.0
3 1990 2 Mid W 12.0 12.0
4 1998 1 South 1.0 2.0
5 1998 1 North 12.0 34.0
6 1998 2 South 2.0 35.0
7 1998 3 South 4.0 4.0
8 1998 1 Mid W 2.0 2.0
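If you only need the final running total for each Month/Region through the end date, rather than a value on every row, one follow-up sketch (assuming the rows are in chronological order, as above) is to keep the last row of each group:
# Last cumulative value per (Month, Region); assumes chronological row order
totals = df1.groupby(['Month', 'Region']).tail(1)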

Here's another solution that corresponds more closely with your expected output.
import pandas as pd

df = pd.DataFrame({'Year': [1978, 1990, 1990, 1990, 1998, 1998, 1998, 1998, 1998],
                   'Month': [1, 1, 2, 2, 1, 1, 2, 3, 1],
                   'Region': ['South', 'North', 'South', 'Mid West', 'South', 'North', 'South', 'South', 'Mid West'],
                   'Value': [1, 22, 33, 12, 1, 12, 2, 4, 2]})
# DataFrame result:
Year Month Region Value
0 1978 1 South 1
1 1990 1 North 22
2 1990 2 South 33
3 1990 2 Mid West 12
4 1998 1 South 1
5 1998 1 North 12
6 1998 2 South 2
7 1998 3 South 4
8 1998 1 Mid West 2
Code to run:
# Sum across all years for each Month/Region pair
df1 = df.groupby(['Month', 'Region']).sum()
# Drop the Year column, whose sum is meaningless
df1 = df1.drop('Year', axis=1)
df1 = df1.sort_values(['Month', 'Region'])
# Final result:
Month Region Value
1 Mid West 2
1 North 34
1 South 2
2 Mid West 12
2 South 35
3 South 4
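A more compact variant of the same aggregation (a sketch; selecting only the Value column keeps Year out of the sum, so the drop step is not needed):
df1 = df.groupby(['Month', 'Region'], as_index=False)['Value'].sum()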


How to replace the values of a column to other columns only in NaN values?

How can I fill the values of column ["state"] from another column ["country"], only where the value is NaN?
Like in this Pandas DataFrame:
state country sum
0 NaN China 1
1 Assam India 2
2 Odisa India 3
3 Bihar India 4
4 NaN India 5
5 NaN Srilanka 6
6 NaN Malaysia 7
7 NaN Bhutan 8
8 California US 9
9 Texas US 10
10 Newyork US 11
11 NaN US 12
12 NaN Canada 13
What code can I use to fill the state column from the country column only in NaN values, like this:
state country sum
0 China China 1
1 Assam India 2
2 Odisa India 3
3 Bihar India 4
4 India India 5
5 Srilanka Srilanka 6
6 Malaysia Malaysia 7
7 Bhutan Bhutan 8
8 California US 9
9 Texas US 10
10 Newyork US 11
11 US US 12
12 Canada Canada 13
I can use this code:
df.loc[df['state'].isnull(), 'state'] = df[df['state'].isnull()]['country'].replace(df['country'])
But on a very large dataset with 300K rows, it computes for 5-6 minutes and crashes every time, because it replaces one value at a time.
Can anyone help me with efficient code for this?
Please!
Perhaps use fillna, which avoids the isnull() check and the replace():
df['state'].fillna(df['country'], inplace=True)
print(df)
Output
state country sum
0 China China 1
1 Assam India 2
2 Odisa India 3
3 Bihar India 4
4 India India 5
5 Srilanka Srilanka 6
6 Malaysia Malaysia 7
7 Bhutan Bhutan 8
8 California US 9
9 Texas US 10
10 Newyork US 11
11 US US 12
12 Canada Canada 13
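On recent pandas versions, assigning the result back avoids the chained-assignment warnings that column-level inplace=True can trigger (same output either way):
df['state'] = df['state'].fillna(df['country'])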

Iterating through rows in a df and creating a new column based on those values

I want to create a new relative score (column) comparing F1 drivers to their team-mates within a given year and a given team.
My data look like:
stats_df.head()
  driver  year        team  points
0    AIT  2020    Williams     0.0
1    ALB  2019    Red Bull    76.0
2    ALB  2019  AlphaTauri    16.0
3    ALB  2020    Red Bull   105.0
4    ALO  2013     Ferrari   242.0
I tried:
teams = stats_df['team'].unique()
years = stats_df['year'].unique()
drivers = stats_df['driver'].unique()

for year in years:
    for team in teams:
        team_points = stats_df['points'].loc[stats_df['team']==team].loc[stats_df['year']==year].sum()
        for driver in drivers:
            driver_points = stats_df['points'].loc[stats_df['team']==team].loc[stats_df['year']==year].loc[stats_df['driver']==driver]
            power_score = driver_points/(team_points/2)
            stats_df['power_score'].loc[stats_df['team']==team].loc[stats_df['year']==year].loc[stats_df['driver']==driver] = power_score
Resulting in NaNs in the new column ('power_score').
Help would be appreciated.
Looking at your code, the assignment on the last line is a chained-indexing assignment, which writes to a temporary copy rather than back into stats_df, which is why the new column ends up as NaNs. Instead, you can compute team_points with .groupby(["team", "year"]) and then simply divide points by these values:
team_points = df.groupby(["team", "year"])["points"].transform("sum")
df["power_score"] = df["points"] / (team_points / 2)
print(df)
Prints:
driver year team points power_score
0 AIT 2020 Williams 0.0 NaN
1 ALB 2019 Red Bull 76.0 2.0
2 ALB 2019 AlphaTauri 16.0 2.0
3 ALB 2020 Red Bull 105.0 2.0
4 ALO 2013 Ferrari 242.0 2.0
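The NaN for AIT is a 0/0 division, since Williams has 0 points in 2020 in this sample. If that case should read as 0 instead (an assumption about the desired behavior), it can be filled afterwards:
df["power_score"] = df["power_score"].fillna(0)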

Replace last value(s) of group with NaN

My goal is to replace the last value (or the last several values) of each id with NaN. My real dataset is quite large and has groups of different sizes.
Example:
import pandas as pd
ids = [1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3]
year = [2000,2001,2002,2003,2004,2005,1990,1991,1992,1993,1994,1995,2010,2011,2012,2013,2014,2015]
percent = [120,70,37,40,50,110,140,100,90,5,52,80,60,40,70,60,50,110]
dictex ={"id":ids,"year":year,"percent [%]": percent}
dfex = pd.DataFrame(dictex)
print(dfex)
id year percent [%]
0 1 2000 120
1 1 2001 70
2 1 2002 37
3 1 2003 40
4 1 2004 50
5 1 2005 110
6 2 1990 140
7 2 1991 100
8 2 1992 90
9 2 1993 5
10 2 1994 52
11 2 1995 80
12 3 2010 60
13 3 2011 40
14 3 2012 70
15 3 2013 60
16 3 2014 50
17 3 2015 110
My goal is to replace the last 1, 2, or 3 values of the "percent [%]" column for each id (group) with NaN.
The result should look like this: (here: replace the last 2 values of each id)
id year percent [%]
0 1 2000 120
1 1 2001 70
2 1 2002 37
3 1 2003 40
4 1 2004 NaN
5 1 2005 NaN
6 2 1990 140
7 2 1991 100
8 2 1992 90
9 2 1993 5
10 2 1994 NaN
11 2 1995 NaN
12 3 2010 60
13 3 2011 40
14 3 2012 70
15 3 2013 60
16 3 2014 NaN
17 3 2015 NaN
I know there should be a relatively easy solution for this, but I'm new to Python and simply haven't been able to figure out an elegant way.
Thanks for the help!
Try using groupby and tail to find the index of the rows to be modified, then use loc to change the values:
import numpy as np

nrows = 2
idx = dfex.groupby('id').tail(nrows).index  # index labels of the last nrows rows per id
dfex.loc[idx, 'percent [%]'] = np.nan
# Output:
id year percent [%]
0 1 2000 120.0
1 1 2001 70.0
2 1 2002 37.0
3 1 2003 40.0
4 1 2004 NaN
5 1 2005 NaN
6 2 1990 140.0
7 2 1991 100.0
8 2 1992 90.0
9 2 1993 5.0
10 2 1994 NaN
11 2 1995 NaN
12 3 2010 60.0
13 3 2011 40.0
14 3 2012 70.0
15 3 2013 60.0
16 3 2014 NaN
17 3 2015 NaN
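An equivalent sketch that numbers rows from the end of each group with cumcount, rather than going through index labels (assumes the same dfex and nrows as above):
# Position of each row counted from the end of its id group: ..., 2, 1, 0
last_pos = dfex.groupby('id').cumcount(ascending=False)
dfex.loc[last_pos < nrows, 'percent [%]'] = np.nan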

Python Pivot: Can I get the count of columns per row (id/index) and store it in a new column?

Hope you can help me with this.
The df looks like this.
region AMER
country Brazil Canada Columbia Mexico United States
metro Rio de Janeiro Sao Paulo Toronto Bogota Mexico City Monterrey Atlanta Boston Chicago Culpeper Dallas Denver Houston Los Angeles Miami New York Philadelphia Seattle Silicon Valley Washington D.C.
ID
321321 2 1 1 13 15 29 1 2 1 11 6 15 3 2 14 3
23213 3
231 2 2 3 1 5 6 3 3 4 3 3 4
23213 4 1 1 1 4 1 2 27 1
21321 4 2 2 1 14 3 2 4 2
12321 1 2 1 1 1 1 10
123213 2 45 5 1
12321 1
123 1 3 2
I want to get, for each row (ID), the count of metro and country columns per region that have data, and store that count in a new column.
Regards,
RJ
You may want to try (sum(level=..., axis=1) is older pandas syntax for summing across the top level of the column MultiIndex):
df['new'] = df.sum(level=0, axis=1)
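If the goal is the count of metro columns that actually contain data in each row, rather than the sum of their values, a hedged alternative (assuming empty cells are NaN) is:
df['new'] = df.notna().sum(axis=1)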

how to apply unique function and transform and keep the complete columns in the data frame pandas

My goal here is to count, for each PatientNumber and year, the number of unique months, while keeping all the columns in the data frame.
This is the original data frame:
PatientNumber QT Answer Answerdate year month dayofyear count formula
1 1 transferring No 2017-03-03 2017 3 62 2.0 (1/3)
2 1 preparing food No 2017-03-03 2017 3 62 2.0 (1/3)
3 1 medications Yes 2017-03-03 2017 3 62 1.0 (1/3)
4 2 transferring No 2006-10-05 2006 10 275 3.0 0
5 2 preparing food No 2006-10-05 2006 10 275 3.0 0
6 2 medications No 2006-10-05 2006 10 275 3.0 0
7 2 transferring Yes 2007-4-15 2007 4 105 2.0 2/3
8 2 preparing food Yes 2007-4-15 2007 4 105 2.0 2/3
9 2 medications No 2007-4-15 2007 4 105 1.0 2/3
10 2 transferring Yes 2007-12-15 2007 12 345 1.0 1/3
11 2 preparing food No 2007-12-15 2007 12 345 2.0 1/3
12 2 medications No 2007-12-15 2007 12 345 2.0 1/3
13 2 transferring Yes 2008-10-10 2008 10 280 1.0 (1/3)
14 2 preparing food No 2008-10-10 2008 10 280 2.0 (1/3)
15 2 medications No 2008-10-10 2008 10 280 2.0 (1/3)
16 3 medications No 2008-10-10 2008 12 280 …… ………..
So the desired output should be the same as this, with one more column which shows the number of unique [PatientNumber, year, month] rows: for PatientNumber=1 it shows 1; for PatientNumber=2 it shows 1 in year 2006 and 2 in year 2007.
I applied this code:
data=data.groupby(['Clinic Number','year'])["month"].nunique().reset_index(name='counts')
the output of this code looks like:
Clinic Number year counts
0 494383 1999 1
1 494383 2000 2
2 494383 2001 1
3 494383 2002 1
4 494383 2003 1
The counts are correct, except this does not keep the other fields. I want the complete columns because later I have to do some calculations on them.
then I tried this code:
data['counts'] = data.groupby(['Clinic Number','year','month'])['month'].transform('count')
Again it's not right, because it does not show the correct count. The output of this code looks like this:
Clinic Number Question Text Answer Text ... year month counts
1 3529933 bathing No ... 2011 1 10
2 3529933 dressing No ... 2011 1 10
3 3529933 feeding No ... 2011 1 10
4 3529933 housekeeping No ... 2011 1 10
5 3529933 medications No ... 2011 1 10
Here counts should be 1, because for that patient and that year there is just one month.
Use the following modification to your code:
df['counts'] = df.groupby(['PatientNumber','year'])["month"].transform('nunique')
transform returns a Series of the same length as your original dataframe, so you can add it to your dataframe as a column.
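An equivalent sketch using an explicit aggregate-and-merge instead of transform (column names taken from the question's sample frame):
# Count unique months per (PatientNumber, year), then broadcast onto every row
agg = df.groupby(['PatientNumber', 'year'], as_index=False)['month'].nunique()
agg = agg.rename(columns={'month': 'counts'})
df = df.merge(agg, on=['PatientNumber', 'year'], how='left')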
