Pandas map groupby with multiple columns in same dataframe - python

I have a dataframe df which I need to group by multiple columns based on a condition.
df
user_id area_id group_id key year value new
10835 48299 1 5 2011 0 ?
10835 48299 1 2 2010 0
10835 48299 2 102 2013 13100
10835 48299 2 5 2016 0
10836 48299 1 78 2017 67100
10836 48299 1 1 2012 54000
10836 48299 1 12 2018 0
10836 48752 1 7 2014 0
10836 48752 2 103 2015 5000
10837 48752 2 102 2016 5000
10837 48752 1 3 2017 0
10837 48752 1 103 2017 0
10837 49226 1 2 2011 4000
10837 49226 1 83 2011 4000
10838 49226 2 16 2011 0
10838 49226 1 75 2012 0
10838 49226 1 2 2012 4000
10838 49226 1 12 2013 1000
10839 49226 1 3 2015 6500
10839 49226 1 102 2016 7900
10839 49226 1 16 2017 0
10839 49226 2 6 2017 5500
22489 49226 2 89 2017 5000
22489 49226 1 102 2017 5000
My goal is to create a new column df['new'].
Current solution:
df['new'] = df['user_id'].map(df[df['key'].eq(102)].groupby(['user_id', 'area_id', 'group_id', 'year'])['value'].sum())
I get NaN for all df['new'] values. I'm guessing it's not possible to use the map function against a groupby on multiple columns this way. Is there a proper way to accomplish this? Thanks in advance for a tip in the right direction.

You can add as_index=False to get a new DataFrame:
df1 = (df[df['key'].eq(102)]
.groupby(['user_id', 'area_id', 'group_id', 'year'], as_index=False)['value']
.sum())
print (df1)
user_id area_id group_id year value
0 10835 48299 2 2013 13100
1 10837 48752 2 2016 5000
2 10839 49226 1 2016 7900
3 22489 49226 1 2017 5000
Then, if duplicated user_id values are possible, first get unique rows with DataFrame.drop_duplicates, create a Series with DataFrame.set_index, and map:
df['new'] = df['user_id'].map(df1.drop_duplicates('user_id').set_index('user_id')['value'])
#if never duplicates
#df['new'] = df['user_id'].map(df1.set_index('user_id')['value'])
print (df)
user_id area_id group_id key year value new
0 10835 48299 1 5 2011 0 13100.0
1 10835 48299 1 2 2010 0 13100.0
2 10835 48299 2 102 2013 13100 13100.0
3 10835 48299 2 5 2016 0 13100.0
4 10836 48299 1 78 2017 67100 NaN
5 10836 48299 1 1 2012 54000 NaN
6 10836 48299 1 12 2018 0 NaN
7 10836 48752 1 7 2014 0 NaN
8 10836 48752 2 103 2015 5000 NaN
9 10837 48752 2 102 2016 5000 5000.0
10 10837 48752 1 3 2017 0 5000.0
11 10837 48752 1 103 2017 0 5000.0
12 10837 49226 1 2 2011 4000 5000.0
13 10837 49226 1 83 2011 4000 5000.0
14 10838 49226 2 16 2011 0 NaN
15 10838 49226 1 75 2012 0 NaN
16 10838 49226 1 2 2012 4000 NaN
17 10838 49226 1 12 2013 1000 NaN
18 10839 49226 1 3 2015 6500 7900.0
19 10839 49226 1 102 2016 7900 7900.0
20 10839 49226 1 16 2017 0 7900.0
21 10839 49226 2 6 2017 5500 7900.0
22 22489 49226 2 89 2017 5000 5000.0
23 22489 49226 1 102 2017 5000 5000.0
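For reference, the original attempt returned NaN everywhere because the Series produced by the groupby is keyed by a MultiIndex of four columns, while df['user_id'] supplies only a single key, so no lookups ever match. If mapping on all four columns at once is really what is wanted, here is a sketch of the pattern (the new_full_key column name is just for illustration; note this only fills rows whose full key combination appears in the filtered groups, unlike the per-user broadcast above):
keys = ['user_id', 'area_id', 'group_id', 'year']
s = df[df['key'].eq(102)].groupby(keys)['value'].sum()
# build a matching MultiIndex on the left side, then look up per row
df['new_full_key'] = df.set_index(keys).index.map(s)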

Related

How to unpack a list of tuples of various lengths in a pandas dataframe?

ID LIST_OF_TUPLE (2 columns)
1 [('2012','12'), ('2012','33'), ('2014', '82')]
2 NA
3 [('2012','12')]
4 [('2012','12'), ('2012','33'), ('2014', '82'), ('2022', '67')]
Result:
ID TUP_1 TUP_2 (3 columns)
1 2012 12
1 2012 33
1 2014 82
3 2012 12
4 2012 12
4 2012 33
4 2014 82
4 2022 67
Thanks in advance.
This is explode, then create a DataFrame from the exploded tuples, and then join:
s = df['LIST_OF_TUPLE'].explode()
out = (df[['ID']]
       .join(pd.DataFrame(s.tolist(), index=s.index).add_prefix('TUP_'))
       .reset_index(drop=True))  # you can chain a dropna if required
print(out)
ID TUP_0 TUP_1
0 1 2012 12
1 1 2012 33
2 1 2014 82
3 2 NaN None
4 3 2012 12
5 4 2012 12
6 4 2012 33
7 4 2014 82
8 4 2022 67
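If the rows coming from NA lists are not wanted, the dropna hinted at in the comment above can be chained; a small sketch (out_clean is just an illustrative name):
# drop rows where no tuple was unpacked, then renumber
out_clean = out.dropna(subset=['TUP_0']).reset_index(drop=True)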

Replace last value(s) of group with NaN

My goal is to replace the last value (or the last several values) of each id with NaN. My real dataset is quite large and has groups of different sizes.
Example:
import pandas as pd
ids = [1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3]
year = [2000,2001,2002,2003,2004,2005,1990,1991,1992,1993,1994,1995,2010,2011,2012,2013,2014,2015]
percent = [120,70,37,40,50,110,140,100,90,5,52,80,60,40,70,60,50,110]
dictex = {"id": ids, "year": year, "percent [%]": percent}
dfex = pd.DataFrame(dictex)
print(dfex)
id year percent [%]
0 1 2000 120
1 1 2001 70
2 1 2002 37
3 1 2003 40
4 1 2004 50
5 1 2005 110
6 2 1990 140
7 2 1991 100
8 2 1992 90
9 2 1993 5
10 2 1994 52
11 2 1995 80
12 3 2010 60
13 3 2011 40
14 3 2012 70
15 3 2013 60
16 3 2014 50
17 3 2015 110
My goal is to replace the last 1, 2, or 3 values of the "percent [%]" column for each id (group) with NaN.
The result should look like this: (here: replace the last 2 values of each id)
id year percent [%]
0 1 2000 120
1 1 2001 70
2 1 2002 37
3 1 2003 40
4 1 2004 NaN
5 1 2005 NaN
6 2 1990 140
7 2 1991 100
8 2 1992 90
9 2 1993 5
10 2 1994 NaN
11 2 1995 NaN
12 3 2010 60
13 3 2011 40
14 3 2012 70
15 3 2013 60
16 3 2014 NaN
17 3 2015 NaN
I know there should be a relatively easy solution for this, but I'm new to Python and simply haven't been able to figure out an elegant way.
Thanks for the help!
Try using groupby, tail and index to find the index of the rows that will be modified, then use loc to change the values:
import numpy as np

nrows = 2
idx = df.groupby('id').tail(nrows).index
df.loc[idx, 'percent [%]'] = np.nan
#output
id year percent [%]
0 1 2000 120.0
1 1 2001 70.0
2 1 2002 37.0
3 1 2003 40.0
4 1 2004 NaN
5 1 2005 NaN
6 2 1990 140.0
7 2 1991 100.0
8 2 1992 90.0
9 2 1993 5.0
10 2 1994 NaN
11 2 1995 NaN
12 3 2010 60.0
13 3 2011 40.0
14 3 2012 70.0
15 3 2013 60.0
16 3 2014 NaN
17 3 2015 NaN
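An alternative sketch that avoids building an index first, using GroupBy.cumcount counted from the end of each group (with ascending=False the last row of each group gets 0, the second-to-last gets 1, and so on):
# mask the last nrows rows of each id group directly
mask = df.groupby('id').cumcount(ascending=False) < nrows
df.loc[mask, 'percent [%]'] = np.nan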

Pandas transform function to do custom row manipulation

We want to create a column in the dataframe called feature_col, which is the range of the current value and the previous 2 values, i.e. the difference between their max and min. How can we calculate this in pandas?
There are several IDs in the dataset
ID Year percentage
123 2009 0
123 2010 -27
123 2011 0
123 2012 -50
123 2013 3
123 2014 -3
123 2015 0
123 2016 -28
123 2017 -5
Use Series.rolling with the NumPy function np.ptp (peak-to-peak, i.e. max minus min), but first, if necessary, remove the % and convert the values to numbers:
import numpy as np

df['feature_col'] = df['percentage'].str.strip('%').astype(int).rolling(3).apply(np.ptp)
print (df)
ID Year percentage feature_col
0 123 2009 0% NaN
1 123 2010 -27% NaN
2 123 2011 0% 27.0
3 123 2012 -50% 50.0
4 123 2013 3% 53.0
5 123 2014 -3% 53.0
6 123 2015 0% 6.0
7 123 2016 -28% 28.0
8 123 2017 -5% 28.0
If the output needs the % sign, it is possible to use:
df['feature_col'] = (df['percentage'].str.strip('%')
                       .astype(int)
                       .rolling(3)
                       .apply(np.ptp)
                       .mask(lambda x: x.notna(),
                             lambda x: x.astype('Int64').astype(str).add('%')))
print (df)
ID Year percentage feature_col
0 123 2009 0% NaN
1 123 2010 -27% NaN
2 123 2011 0% 27%
3 123 2012 -50% 50%
4 123 2013 3% 53%
5 123 2014 -3% 53%
6 123 2015 0% 6%
7 123 2016 -28% 28%
8 123 2017 -5% 28%
EDIT: If processing per group by ID is needed:
print (df)
ID Year percentage
0 123 2009 0%
1 123 2010 -27%
2 123 2011 0%
3 123 2012 -50%
4 123 2013 3%
5 124 2014 -3%
6 124 2015 0%
7 124 2016 -28%
8 124 2017 -5%
df['feature_col'] = (df['percentage'].str.strip('%')
                       .astype(int)
                       .groupby(df['ID'])
                       .rolling(3)
                       .apply(np.ptp)
                       .reset_index(level=0, drop=True))
print (df)
ID Year percentage feature_col
0 123 2009 0% NaN
1 123 2010 -27% NaN
2 123 2011 0% 27.0
3 123 2012 -50% 50.0
4 123 2013 3% 53.0
5 124 2014 -3% NaN
6 124 2015 0% NaN
7 124 2016 -28% 28.0
8 124 2017 -5% 28.0
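Since the peak-to-peak range is just max minus min, a hedged equivalent without np.ptp is a raw lambda (raw=True passes a plain NumPy array to the function, which is usually faster):
df['feature_col'] = (df['percentage'].str.strip('%')
                       .astype(int)
                       .groupby(df['ID'])
                       .rolling(3)
                       .apply(lambda x: x.max() - x.min(), raw=True)
                       .reset_index(level=0, drop=True))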

Panel data pandas, variation according to a certain condition

I am a Stata user trying to switch to Python, and I am having problems with some code. Suppose I have the following panel data:
id year quarter fecha jobs
1 2007 1 220 10
1 2007 2 221 12
1 2007 3 222 12
1 2007 4 223 12
1 2008 1 224 12
1 2008 2 225 13
1 2008 3 226 14
1 2008 4 227 9
1 2009 1 228 12
1 2009 2 229 15
1 2009 3 230 18
1 2009 4 231 15
1 2010 1 232 15
1 2010 2 233 16
1 2010 3 234 17
1 2010 4 235 18
2 2007 1 220 10
2 2007 2 221 12
2 2007 3 222 12
2 2007 4 223 12
2 2008 1 224 12
2 2008 2 225 13
2 2008 3 226 14
2 2008 4 227 9
2 2009 1 228 12
2 2009 2 229 15
2 2009 3 230 18
2 2009 4 231 15
2 2010 1 232 15
2 2010 2 233 16
2 2010 4 235 18
(My panel data is much bigger than the example; this is just to illustrate my problem.) I want to calculate the variation in jobs relative to the same quarter three years before.
So the result should look like this:
id year quarter fecha jobs jobs_variation
1 2007 1 220 10 Nan
1 2007 2 221 12 Nan
1 2007 3 222 12 Nan
1 2007 4 223 12 Nan
1 2008 1 224 12 Nan
1 2008 2 225 13 Nan
1 2008 3 226 14 Nan
1 2008 4 227 9 Nan
1 2009 1 228 12 Nan
1 2009 2 229 15 Nan
1 2009 3 230 18 Nan
1 2009 4 231 15 Nan
1 2010 1 232 15 0.5
1 2010 2 233 16 0.33
1 2010 3 234 17 0.30769
1 2010 4 235 18 0.5
2 2007 1 220 10 Nan
2 2007 4 223 12 Nan
2 2008 1 224 12 Nan
2 2008 2 225 13 Nan
2 2008 3 226 14 Nan
2 2008 4 227 9 Nan
2 2009 1 228 12 Nan
2 2009 2 229 15 Nan
2 2009 3 230 18 Nan
2 2009 4 231 15 Nan
2 2010 1 232 15 0.5
2 2010 2 233 16 Nan
2 2010 3 234 20 Nan
2 2010 4 235 18 0.5
Note that for the second id, the calculation must not be made for 2010 Q2 and Q3, because that id was not present in 2007Q2 and 2007Q3.
In Stata the code would be:
bys id: gen jobs_variation=jobs/jobs[_n-12]-1 if fecha[_n-12]==fecha-12
IIUC, you need a groupby on id and quarter followed by apply:
df['jobs_variation'] = df.groupby(['id', 'quarter']).jobs \
                         .apply(lambda x: x / x.shift(3) - 1)
df
id year quarter fecha jobs jobs_variation
0 1 2007 1 220 10 NaN
1 1 2007 2 221 12 NaN
2 1 2007 3 222 12 NaN
3 1 2007 4 223 12 NaN
4 1 2008 1 224 12 NaN
5 1 2008 2 225 13 NaN
6 1 2008 3 226 14 NaN
7 1 2008 4 227 9 NaN
8 1 2009 1 228 12 NaN
9 1 2009 2 229 15 NaN
10 1 2009 3 230 18 NaN
11 1 2009 4 231 15 NaN
12 1 2010 1 232 15 0.500000
13 1 2010 2 233 16 0.333333
14 1 2010 3 234 17 0.416667
15 1 2010 4 235 18 0.500000
16 2 2007 1 220 10 NaN
17 2 2007 4 223 12 NaN
18 2 2008 1 224 12 NaN
19 2 2008 2 225 13 NaN
20 2 2008 3 226 14 NaN
21 2 2008 4 227 9 NaN
22 2 2009 1 228 12 NaN
23 2 2009 2 229 15 NaN
24 2 2009 3 230 18 NaN
25 2 2009 4 231 15 NaN
26 2 2010 1 232 15 0.500000
27 2 2010 2 233 16 NaN
28 2 2010 3 234 20 NaN
29 2 2010 4 235 18 0.500000
x / x.shift(3) will divide the current year's job count (for that quarter) by the corresponding value from 3 years ago.
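As a hedged equivalent, pct_change(3) computes x / x.shift(3) - 1 in one call:
df['jobs_variation'] = df.groupby(['id', 'quarter'])['jobs'].pct_change(3)
Like the shift(3) version, this assumes every intermediate year is present for each id and quarter; the fecha guard from the Stata one-liner has no direct analogue here.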

Rolling Mean in Pandas

I have this initial DataFrame in Pandas
A B C D E
0 23 2015 1 14937 16.25
1 23 2015 1 19054 7.50
2 23 2015 2 14937 16.75
3 23 2015 2 19054 17.25
4 23 2015 3 14937 71.75
5 23 2015 3 19054 15.00
6 23 2015 4 14937 13.00
7 23 2015 4 19054 37.75
8 23 2015 5 14937 4.25
9 23 2015 5 19054 18.25
10 23 2015 6 14937 16.50
11 23 2015 6 19054 1.00
If I want to obtain this result, how could I do it?
A B C D E
0 23 2015 1 14937 NaN
1 23 2015 2 14937 NaN
2 23 2015 2 14937 16.6
3 23 2015 1 14937 35.1
4 23 2015 2 14937 33.8
5 23 2015 3 14937 29.7
6 23 2015 4 14937 11.3
7 23 2015 4 19054 NaN
8 23 2015 5 19054 NaN
9 23 2015 5 19054 13.3
10 23 2015 6 19054 23.3
11 23 2015 6 19054 23.7
12 23 2015 6 19054 19.0
I tried a GroupBy but I didn't get it:
DfMean = pd.DataFrame(DfGby.rolling(center=False,window=3)['E'].mean())
I think you can use groupby with rolling (need at least pandas 0.18.1):
s = df.groupby('D').rolling(3)['E'].mean()
print (s)
D
14937 0 NaN
2 NaN
4 34.916667
6 33.833333
8 29.666667
10 11.250000
19054 1 NaN
3 NaN
5 13.250000
7 23.333333
9 23.666667
11 19.000000
Name: E, dtype: float64
Then set_index by D with swaplevel, so the index levels line up for assigning the result:
df = df.set_index('D', append=True).swaplevel(0,1)
df['E'] = s
Last, reset_index and reorder the columns:
df = df.reset_index(level=0).sort_values(['D','C'])
df = df[['A','B','C','D','E']]
print (df)
A B C D E
0 23 2015 1 14937 NaN
2 23 2015 2 14937 NaN
4 23 2015 3 14937 34.916667
6 23 2015 4 14937 33.833333
8 23 2015 5 14937 29.666667
10 23 2015 6 14937 11.250000
1 23 2015 1 19054 NaN
3 23 2015 2 19054 NaN
5 23 2015 3 19054 13.250000
7 23 2015 4 19054 23.333333
9 23 2015 5 19054 23.666667
11 23 2015 6 19054 19.000000
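On recent pandas versions the index juggling can be avoided entirely with transform, which keeps the rolling result aligned to the original rows (a sketch, equivalent to the steps above):
df['E'] = df.groupby('D')['E'].transform(lambda x: x.rolling(3).mean())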
