How can we apply pandas groupby to get the expected output? - python

Col1 Col2 Col3 Result
2 70 1 15
2 71 2 15
2 72 3 15
3 80 4 16
3 81 5 16
3 82 6 16
3 2 15 16
3 3 16 16
I am new to pandas. Can anyone explain how to compute the last column, Result, and add it to my existing data frame?
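For reference, the frame can be reconstructed like this (Result is the expected output, included so the answers below can be checked against it):
import pandas as pd

df = pd.DataFrame({'Col1': [2, 2, 2, 3, 3, 3, 3, 3],
                   'Col2': [70, 71, 72, 80, 81, 82, 2, 3],
                   'Col3': [1, 2, 3, 4, 5, 6, 15, 16],
                   'Result': [15, 15, 15, 16, 16, 16, 16, 16]})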

Use Series.map with a lookup built by DataFrame.drop_duplicates, which keeps the first row for each unique Col2 value:
df['Res'] = df['Col1'].map(df.drop_duplicates('Col2').set_index('Col2')['Col3'])
print (df)
Col1 Col2 Col3 Result Res
0 2 70 1 15 15
1 2 71 2 15 15
2 2 72 3 15 15
3 3 80 4 16 16
4 3 81 5 16 16
5 3 82 6 16 16
6 3 2 15 16 16
7 3 3 16 16 16
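To see why this works: drop_duplicates plus set_index builds a lookup Series from Col2 to Col3, and map then looks up each Col1 value in it:
mapping = df.drop_duplicates('Col2').set_index('Col2')['Col3']
print(mapping.loc[[2, 3]])   # Col2 value 2 -> 15, Col2 value 3 -> 16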

Another option is merge:
df.merge(df[['Col2','Col3']].rename(columns={'Col2':'Col1', 'Col3':'Res'}),
on='Col1', how='left')
Output:
Col1 Col2 Col3 Result Res
0 2 70 1 15 15
1 2 71 2 15 15
2 2 72 3 15 15
3 3 80 4 16 16
4 3 81 5 16 16
5 3 82 6 16 16
6 3 2 15 16 16
7 3 3 16 16 16
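The right-hand side of that merge is just a small lookup table pairing each Col2 value (renamed to Col1) with its Col3 value (renamed to Res); only the rows whose Col1 column matches a value in df['Col1'] actually join:
lookup = df[['Col2', 'Col3']].rename(columns={'Col2': 'Col1', 'Col3': 'Res'})
print(lookup.tail(2))   # Col1=2 -> Res=15, Col1=3 -> Res=16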

Related

How do I classify a dataframe in a specific case?

I have a pandas.DataFrame of the following form; I'll show you a simple example (in reality, it consists of hundreds of millions of rows of data).
I want the number to change whenever the letter code in column '2' changes. The numbers in the remaining columns (columns '1', '3', ...) should not change.
df=
index 1 2 3
0 0 a100 1
1 1.04 a100 2
2 32 a100 3
3 5.05 a105 4
4 1.01 a105 5
5 155 a105 6
6 3155.26 a105 7
7 354.12 a100 8
8 5680.13 a100 9
9 125.55 a100 10
10 13.32 a100 11
11 5656.33 a156 12
12 456.61 a156 13
13 23.52 a1235 14
14 35.35 a1235 15
15 350.20 a100 16
16 30. a100 17
17 13.50 a100 18
18 323.13 a231 19
19 15.11 a1111 20
20 11.22 a1111 21
Here is my expected result:
df=
index 1 2 3
0 0 0 1
1 1.04 0 2
2 32 0 3
3 5.05 1 4
4 1.01 1 5
5 155 1 6
6 3155.26 1 7
7 354.12 2 8
8 5680.13 2 9
9 125.55 2 10
10 13.32 2 11
11 5656.33 3 12
12 456.61 3 13
13 23.52 4 14
14 35.35 4 15
15 350.20 5 16
16 30 5 17
17 13.50 5 18
18 323.13 6 19
19 15.11 7 20
20 11.22 7 21
How do I solve this problem?
Create consecutive group numbers: compare each value with the previous (shifted) one for inequality, take the cumulative sum of the resulting booleans, and subtract 1 so the numbering starts at 0:
#if column is string '2'
df['2'] = df['2'].ne(df['2'].shift()).cumsum().sub(1)
#if column is number 2
df[2] = df[2].ne(df[2].shift()).cumsum().sub(1)
print (df)
index 1 2 3
0 0 0.00 0 1
1 1 1.04 0 2
2 2 32.00 0 3
3 3 5.05 1 4
4 4 1.01 1 5
5 5 155.00 1 6
6 6 3155.26 1 7
7 7 354.12 2 8
8 8 5680.13 2 9
9 9 125.55 2 10
10 10 13.32 2 11
11 11 5656.33 3 12
12 12 456.61 3 13
13 13 23.52 4 14
14 14 35.35 4 15
15 15 350.20 5 16
16 16 30.00 5 17
17 17 13.50 5 18
18 18 323.13 6 19
19 19 15.11 7 20
20 20 11.22 7 21
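An equivalent spelling that makes the grouping explicit — a sketch using ngroup, which numbers the consecutive groups from 0 (equivalent here because the group labels produced by cumsum already appear in increasing order):
groups = df['2'].ne(df['2'].shift()).cumsum()
df['2'] = df.groupby(groups).ngroup()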

Merge dataframes including extreme values

I have 2 data frames, df1 and df2:
df1
Out[66]:
A B
0 1 11
1 1 2
2 1 32
3 1 42
4 1 54
5 1 66
6 2 16
7 2 23
8 3 13
9 3 24
10 3 35
11 3 46
12 3 51
13 4 12
14 4 28
15 4 39
16 4 49
df2
Out[80]:
B
0 32
1 42
2 13
3 24
4 35
5 39
6 49
I want to merge the dataframes, but at the same time also keep the value immediately before the first match and/or immediately after the last match within each column-A group. This is an example of the desired outcome:
df3
Out[93]:
A B
0 1 2
1 1 32
2 1 42
3 1 54
4 3 13
5 3 24
6 3 35
7 3 46
8 4 28
9 4 39
10 4 49
I'm trying to use merge, but that only keeps the portion of the data frames that coincides. Does someone have an idea how to deal with this? Thanks!
Here's one way to do it using merge with indicator, groupby, and rolling; the centered rolling max marks a row to keep if it, or an immediate neighbour within the same A group, found a match:
df1[df1.merge(df2, on='B', how='left', indicator='Ind').eval('Found = Ind == "both"')
    .groupby('A')['Found']
    .apply(lambda x: x.rolling(3, center=True, min_periods=2).max()).astype(bool)]
Output:
A B
1 1 2
2 1 32
3 1 42
4 1 54
8 3 13
9 3 24
10 3 35
11 3 46
14 4 28
15 4 39
16 4 49
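If the one-liner is hard to parse, here is the same idea step by step — a sketch that swaps the merge/indicator trick for isin (equivalent here because df2['B'] has no duplicates) and uses min_periods=1 so single-row groups are handled as well; it reproduces the desired df3:
found = df1['B'].isin(df2['B'])   # rows of df1 whose B value appears in df2
# within each A group, keep a row if it or an adjacent row matched
keep = (found.astype(int)
             .groupby(df1['A'])
             .transform(lambda s: s.rolling(3, center=True, min_periods=1).max())
             .astype(bool))
print(df1[keep])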
(pd.concat([df1.groupby('A').min().reset_index(),
            pd.merge(df1, df2, on="B"),
            df1.groupby('A').max().reset_index()])
   .reset_index(drop=True)
   .drop_duplicates()
   .sort_values(['A','B']))
    A   B
0   1   2
4   1  32
5   1  42
11  1  66
1   2  16
12  2  23
2   3  13
7   3  24
8   3  35
13  3  51
3   4  12
9   4  39
10  4  49
Breaking down each part
#Get Minimum
df1.groupby('A').min().reset_index()
# Merge on B
pd.merge(df1,df2, on="B")
# Get Maximum
df1.groupby('A').max().reset_index()
# Reset the Index and drop duplicated rows since there may be similarities between the Merge and Min/Max. Sort values by 'A' then by 'B'
.reset_index(drop=True).drop_duplicates().sort_values(['A','B'])

insert dataframe into a dataframe - Python/Pandas

The question is pretty self-explanatory: how would you insert a dataframe with a couple of values into a bigger dataframe at a given point (between indexes 10 and 11)? This means .append can't be used.
You can use concat with the DataFrame sliced by loc:
import numpy as np
import pandas as pd

np.random.seed(100)
df1 = pd.DataFrame(np.random.randint(100, size=(5,6)), columns=list('ABCDEF'))
print (df1)
A B C D E F
0 8 24 67 87 79 48
1 10 94 52 98 53 66
2 98 14 34 24 15 60
3 58 16 9 93 86 2
4 27 4 31 1 13 83
df2 = pd.DataFrame({'A':[1,2,3],
                    'B':[4,5,6],
                    'C':[7,8,9],
                    'D':[1,3,5],
                    'E':[5,3,6],
                    'F':[7,4,3]})
print (df2)
A B C D E F
0 1 4 7 1 5 7
1 2 5 8 3 3 4
2 3 6 9 5 6 3
#insert between index values 3 and 4 (loc slicing includes both endpoints)
print (pd.concat([df1.loc[:3], df2, df1.loc[4:]], ignore_index=True))
    A   B   C   D   E   F
0   8  24  67  87  79  48
1  10  94  52  98  53  66
2  98  14  34  24  15  60
3  58  16   9  93  86   2
4   1   4   7   1   5   7
5   2   5   8   3   3   4
6   3   6   9   5   6   3
7  27   4  31   1  13  83
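To answer the question as asked (inserting between indexes 10 and 11), the same idea generalizes to a small helper; insert_frame is a hypothetical name, and a default RangeIndex is assumed:
import pandas as pd

def insert_frame(big, small, pos):
    # return a copy of `big` with `small` inserted before integer position `pos`
    return pd.concat([big.iloc[:pos], small, big.iloc[pos:]], ignore_index=True)

# e.g. insert between indexes 10 and 11:
# result = insert_frame(big_df, small_df, 11)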

Pandas - Replace values based on index

If I create a dataframe like so:
import pandas as pd, numpy as np
df = pd.DataFrame(np.random.randint(0,100,size=(100, 2)), columns=list('AB'))
How would I change the entries in column A to the number 16 for rows 0-15, for example? In other words, how do I replace cells based purely on their index?
Use loc:
df.loc[0:15,'A'] = 16
print (df)
A B
0 16 45
1 16 5
2 16 97
3 16 58
4 16 26
5 16 87
6 16 51
7 16 17
8 16 39
9 16 73
10 16 94
11 16 69
12 16 57
13 16 24
14 16 43
15 16 77
16 41 0
17 3 21
18 0 98
19 45 39
20 66 62
21 8 53
22 69 47
23 48 53
Note that the older ix indexer is deprecated, so loc is the recommended approach.
In addition to the other answers, here is what you can do if you have a list of individual indices:
indices = [0,1,3,6,10,15]
df.loc[indices,'A'] = 16
print(df.head(16))
Output:
A B
0 16 4
1 16 4
2 4 3
3 16 4
4 1 1
5 3 0
6 16 4
7 2 1
8 4 4
9 3 4
10 16 0
11 3 1
12 4 2
13 2 2
14 2 1
15 16 1
One more note: at is designed for scalar (single-cell) access, so passing a slice like 0:15 raises an error in recent pandas versions; for ranges, stick with loc:
df.loc[0:15, 'A'] = 16
print(df.head(20))
OUTPUT:
A B
0 16 44
1 16 86
2 16 97
3 16 79
4 16 94
5 16 24
6 16 88
7 16 43
8 16 64
9 16 39
10 16 84
11 16 42
12 16 8
13 16 72
14 16 23
15 16 28
16 18 11
17 76 15
18 12 38
19 91 6
A very interesting observation: the code below does change the values in the original dataframe
df.loc[0:15,'A'] = 16
But if you use pretty similar code like this
df.loc[0:15]['A'] = 16
then the chained indexing only modifies a copy, and the value in the original df object is left unchanged (pandas usually flags this with a SettingWithCopyWarning).
Hope this saves some time for someone dealing with this issue.
Could you, instead of 16, update the value of that column to -1.0? For me, it returns 255 instead of -1.0:
>>> effect_df.loc[3:5, ['city_SF', 'city_Seattle']] = -1.0
Rent city_SF city_Seattle
0 3999 1 0
1 4000 1 0
2 4001 1 0
3 3499 255 255
4 3500 255 255
5 3501 255 255
6 2499 0 1
7 2500 0 1
8 2501 0 1
To Mad Physicist: it appears that you first need to change the column data types to float. Getting 255 from assigning -1.0 means the columns are stored as unsigned 8-bit integers (uint8), in which -1 wraps around to 255.
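A short demonstration of that fix — a sketch with hypothetical data, assuming the dummy columns were created as uint8:
import numpy as np
import pandas as pd

effect_df = pd.DataFrame({'Rent': [3999, 2499],
                          'city_SF': np.array([1, 0], dtype='uint8'),
                          'city_Seattle': np.array([0, 1], dtype='uint8')})

# assigning -1.0 into a uint8 column wraps around to 255,
# so cast to float first; then the sign survives
cols = ['city_SF', 'city_Seattle']
effect_df[cols] = effect_df[cols].astype(float)
effect_df.loc[0, cols] = -1.0
print(effect_df)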

pandas drop row below each row containing an 'na'

I have a dataframe with, say, 4 columns [['a','b','c','d']], to which I add another column ['total'] containing the sum of all the other columns for each row. I then add another column ['growth of total'] with the growth rate of the total.
Some of the values in [['a','b','c','d']] are blank, rendering the ['total'] column invalid for those rows. I can easily get rid of these rows with df.dropna(how='any').
However, my growth rate will be invalid not only for rows with missing values in [['a','b','c','d']], but also for the following row. How do I drop all of these rows?
IIUC you can use notnull with all to mask off any rows with NaN and any rows that follow NaN rows:
In [43]:
df = pd.DataFrame({'a':[0,np.NaN, 2, 3,np.NaN], 'b':[np.NaN, 1,2,3,4], 'c':[0, np.NaN,2,3,4]})
df
Out[43]:
a b c
0 0 NaN 0
1 NaN 1 NaN
2 2 2 2
3 3 3 3
4 NaN 4 4
In [44]:
df[df.notnull().all(axis=1) & df.shift().notnull().all(axis=1)]
Out[44]:
a b c
3 3 3 3
Here's one option that I think does what you're looking for:
In [76]: df = pd.DataFrame(np.arange(40).reshape(10,4))
In [77]: df.iloc[1, 2] = np.nan
In [78]: df.iloc[6, 1] = np.nan
In [79]: df['total'] = df.sum(axis=1, skipna=False)
In [80]: df
Out[80]:
0 1 2 3 total
0 0 1 2 3 6
1 4 5 NaN 7 NaN
2 8 9 10 11 38
3 12 13 14 15 54
4 16 17 18 19 70
5 20 21 22 23 86
6 24 NaN 26 27 NaN
7 28 29 30 31 118
8 32 33 34 35 134
9 36 37 38 39 150
In [81]: df['growth'] = df['total'].iloc[1:] - df['total'].values[:-1]
In [82]: df
Out[82]:
0 1 2 3 total growth
0 0 1 2 3 6 NaN
1 4 5 NaN 7 NaN NaN
2 8 9 10 11 38 NaN
3 12 13 14 15 54 16
4 16 17 18 19 70 16
5 20 21 22 23 86 16
6 24 NaN 26 27 NaN NaN
7 28 29 30 31 118 NaN
8 32 33 34 35 134 16
9 36 37 38 39 150 16
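For what it's worth, the growth column can also be written with diff, and a single dropna on it then removes both the invalid rows and the rows that follow them — a sketch assuming the frame above:
df['growth'] = df['total'].diff()     # total[i] - total[i-1], NaN where either is missing
print(df.dropna(subset=['growth']))   # keeps only rows 3, 4, 5, 8 and 9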
