Finding the maximum difference for a subset of columns with pandas - python

I have a dataframe:
A B C D E
0 a 34 55 43 aa
1 b 53 77 65 bb
2 c 23 100 34 cc
3 d 54 43 23 dd
4 e 23 67 54 ee
5 f 43 98 23 ff
I need to compute, for each row, the maximum difference between columns B, C and D, and store the result in column A. For example, in the row where A is 'a', the maximum difference between the columns is 55 - 34 = 21. The data is in a DataFrame.
The expected result is
A B C D E
0 21 34 55 43 aa
1 24 53 77 65 bb
2 77 23 100 34 cc
3 31 54 43 23 dd
4 44 23 67 54 ee
5 75 43 98 23 ff

Use np.ptp (peak-to-peak, i.e. max minus min):
import numpy as np

# df['A'] = np.ptp(df.loc[:, 'B':'D'], axis=1)
df['A'] = np.ptp(df[['B', 'C', 'D']], axis=1)
df
A B C D E
0 21 34 55 43 aa
1 24 53 77 65 bb
2 77 23 100 34 cc
3 31 54 43 23 dd
4 44 23 67 54 ee
5 75 43 98 23 ff
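Note: on some newer pandas/NumPy combinations, passing a DataFrame directly to np.ptp can fail or warn. If you hit that, operating on the underlying array is an equivalent, version-safe variant (a sketch):
df['A'] = np.ptp(df[['B', 'C', 'D']].values, axis=1)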
Or, find the max and min yourself:
df['A'] = df[['B', 'C', 'D']].max(axis=1) - df[['B', 'C', 'D']].min(axis=1)
df
A B C D E
0 21 34 55 43 aa
1 24 53 77 65 bb
2 77 23 100 34 cc
3 31 54 43 23 dd
4 44 23 67 54 ee
5 75 43 98 23 ff
If performance is important, you can do this in NumPy space:
v = df[['B', 'C', 'D']].values
df['A'] = v.max(1) - v.min(1)
df
A B C D E
0 21 34 55 43 aa
1 24 53 77 65 bb
2 77 23 100 34 cc
3 31 54 43 23 dd
4 44 23 67 54 ee
5 75 43 98 23 ff
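For a rough sense of the gap, a minimal timing sketch with the standard-library timeit module (absolute numbers depend on machine and frame size; illustrative only, not a benchmark):
import timeit

cols = ['B', 'C', 'D']
pandas_way = lambda: df[cols].max(axis=1) - df[cols].min(axis=1)
numpy_way = lambda: df[cols].values.ptp(axis=1)  # ndarray.ptp: max - min per row

print('pandas:', timeit.timeit(pandas_way, number=1000))
print('numpy :', timeit.timeit(numpy_way, number=1000))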

Related

Add Sum to all grouped rows in pandas dataframe

I have a DataFrame and I want to group it by its "First" and "Second" columns, then produce the expected output shown below:
import numpy as np
import pandas as pd

df = pd.DataFrame({'First': list('abcababcbc'), 'Second': list('qeeeeqqqeq'),
                   'Value_1': np.random.randint(4, 50, 10),
                   'Value_2': np.random.randint(40, 90, 10)})
print(df)
Output:
First Second Value_1 Value_2
0 a q 17 70
1 b e 44 47
2 c e 5 56
3 a e 23 58
4 b e 10 76
5 a q 11 67
6 b q 21 84
7 c q 42 67
8 b e 36 53
9 c q 16 63
When I group this DataFrame using groupby, I get the output below:
def func(arr, columns):
    return arr.sort_values(by=columns).drop(columns, axis=1)

df.groupby(['First', 'Second']).apply(func, columns=['First', 'Second'])
Value_1 Value_2
First Second
a e 3 23 58
q 0 17 70
5 11 67
b e 1 44 47
4 10 76
8 36 53
q 6 21 84
c e 2 5 56
q 7 42 67
9 16 63
However, I want the output below:
Expected output:
Value_1 Value_2
First Second
a e 3 23 58
All 23 58
q 0 17 70
5 11 67
All 28 137
b e 1 44 47
4 10 76
8 36 53
All 90 176
q 6 21 84
All 21 84
c e 2 5 56
All 5 56
q 7 42 67
9 16 63
All 58 130
Printing the literal "All" label is not essential; what matters is appending the sum of each group's rows.
df = pd.DataFrame({'First':list('abcababcbc'), 'Second':list('qeeeeqqqeq'),'Value_1':np.random.randint(4,50,10),'Value_2':np.random.randint(40,90,10)})
First Second Value_1 Value_2
0 a q 4 69
1 b e 20 74
2 c e 13 82
3 a e 9 41
4 b e 11 79
5 a q 32 77
6 b q 6 75
7 c q 39 62
8 b e 26 80
9 c q 26 42
def lambda_t(x):
    df = x.sort_values(['First', 'Second']).drop(['First', 'Second'], axis=1)
    df.loc['all'] = df.sum()
    return df

df.groupby(['First', 'Second']).apply(lambda_t)
Value_1 Value_2
First Second
a e 3 9 41
all 9 41
q 0 4 69
5 32 77
all 36 146
b e 1 20 74
4 11 79
8 26 80
all 57 233
q 6 6 75
all 6 75
c e 2 13 82
all 13 82
q 7 39 62
9 26 42
all 65 104
You can try this:
Reset the index in your groupby result:
d1 = df.groupby(['First','Second']).apply(func, columns = ['First','Second']).reset_index()
Then group by 'First' and 'Second' and sum the value columns:
d2 = d1.groupby(['First', 'Second']).sum().reset_index()
Create the 'level_2' column in the new dataframe and concatenate with the initial one to get the desired result:
d2.loc[:, 'level_2'] = 'All'
pd.concat([d1, d2], axis=0).sort_values(by=['First', 'Second'])
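To recover the MultiIndex layout of the expected output, one more step could follow; a sketch (the level_2 column name is produced by the reset_index above):
result = (pd.concat([d1, d2], axis=0)
          .sort_values(by=['First', 'Second'])
          .set_index(['First', 'Second', 'level_2']))
print(result)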
Not sure about your function; however, you could break this into two steps:
Create an indexed dataframe, where you append the First and Second columns to the existing index:
df.index = df.index.astype(str).rename("Total")
indexed = df.set_index(["First", "Second"], append=True).reorder_levels(
    ["First", "Second", "Total"]
)
indexed
indexed
Value_1 Value_2
First Second Total
a q 0 17 70
b e 1 44 47
c e 2 5 56
a e 3 23 58
b e 4 10 76
a q 5 11 67
b q 6 21 84
c q 7 42 67
b e 8 36 53
c q 9 16 63
Create an aggregation, grouped by First and Second:
summary = (
    df.groupby(["First", "Second"])
    .sum()
    .assign(Total="All")
    .set_index("Total", append=True)
)
summary
summary
Value_1 Value_2
First Second Total
a e All 23 58
q All 28 137
b e All 90 176
q All 21 84
c e All 5 56
q All 58 130
Combine indexed and summary dataframes:
pd.concat([indexed, summary]).sort_index(level=["First", "Second"])
Value_1 Value_2
First Second Total
a e 3 23 58
All 23 58
q 0 17 70
5 11 67
All 28 137
b e 1 44 47
4 10 76
8 36 53
All 90 176
q 6 21 84
All 21 84
c e 2 5 56
All 5 56
q 7 42 67
9 16 63
All 58 130

Shifting columns in grouped pandas dataframe

I have a dataframe which, after grouping it by country and group looks like this:
A B C D
country group
1 a1 10 20 30 40
a2 11 21 31 41
a3 12 22 32 42
a4 13 23 33 43
A B C D
country group
2 a1 50 60 70 80
a2 51 61 71 81
a3 52 62 72 82
a4 53 63 73 83
My goal is to create another column E that would hold column D values shifted up by 1 row like so:
A B C D E
country group
1 a1 10 20 30 40 41
a2 11 21 31 41 42
a3 12 22 32 42 43
a4 13 23 33 43 nan
A B C D E
country group
2 a1 50 60 70 80 81
a2 51 61 71 81 82
a3 52 62 72 82 83
a4 53 63 73 83 nan
What I've tried:
df.groupby(['country','group']).sum().apply(lambda x['E']: x['D'].shift(-1))
but I get invalid syntax.
Afterwards I am trying to delete those bottom lines in each group where nan is present like so:
df = df[~df.isin([np.nan]).any(1)] which works.
How can I add a column E to the df which would hold column D values shifted by -1?
Use DataFrameGroupBy.shift, grouping by the first index level:
df = df.groupby(['country','group']).sum()
df['E'] = df.groupby(level=0)['D'].shift(-1)
And then DataFrame.dropna:
df = df.dropna(subset=['E'])
Sample:
print (df)
country group A B C D
0 1 a1 10 20 30 40
1 1 a1 11 21 31 41
2 1 a1 12 22 32 42
3 1 a2 13 23 33 43
4 1 a2 11 21 31 41
5 1 a2 12 22 32 42
6 1 a3 13 23 33 43
7 1 a3 11 21 31 41
8 1 a3 12 22 32 42
9 1 a4 13 23 33 43
10 1 a4 11 21 31 41
11 1 a5 12 22 32 42
12 1 a5 13 23 33 43
13 2 a2 50 60 70 80
14 2 a3 51 61 71 81
15 2 a4 52 62 72 82
16 2 a5 53 63 73 83
df = df.groupby(['country','group']).sum()
print (df)
A B C D
country group
1 a1 33 63 93 123
a2 36 66 96 126
a3 36 66 96 126
a4 24 44 64 84
a5 25 45 65 85
2 a2 50 60 70 80
a3 51 61 71 81
a4 52 62 72 82
a5 53 63 73 83
df['E'] = df.groupby(level=0)['D'].shift(-1)
print (df)
A B C D E
country group
1 a1 33 63 93 123 126.0
a2 36 66 96 126 126.0
a3 36 66 96 126 84.0
a4 24 44 64 84 85.0
a5 25 45 65 85 NaN
2 a2 50 60 70 80 81.0
a3 51 61 71 81 82.0
a4 52 62 72 82 83.0
a5 53 63 73 83 NaN
df = df.dropna(subset=['E'])
print (df)
A B C D E
country group
1 a1 33 63 93 123 126.0
a2 36 66 96 126 126.0
a3 36 66 96 126 84.0
a4 24 44 64 84 85.0
2 a2 50 60 70 80 81.0
a3 51 61 71 81 82.0
a4 52 62 72 82 83.0
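Side note: shift introduces NaN, which upcasts E to float. If you would rather keep integers, pandas' nullable Int64 dtype (available since 0.24) is one option, sketched here:
df['E'] = df.groupby(level=0)['D'].shift(-1).astype('Int64')  # NaN becomes <NA>, values stay integer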

Split a Pandas Dataframe into multiple Dataframes based on Triangular Number Series

I have a DataFrame (df) and I need to split it into n DataFrames based on column position, following the Triangular Number Series pattern:
df1 = df[[0]]
df2 = df[[1,2]]
df3 = df[[3,4,5]]
df4 = df[[6,7,8,9]]
etc.
Consider the dataframe df
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.arange(100).reshape(10, 10),
    columns=list('ABCDEFGHIJ')
)
df
A B C D E F G H I J
0 0 1 2 3 4 5 6 7 8 9
1 10 11 12 13 14 15 16 17 18 19
2 20 21 22 23 24 25 26 27 28 29
3 30 31 32 33 34 35 36 37 38 39
4 40 41 42 43 44 45 46 47 48 49
5 50 51 52 53 54 55 56 57 58 59
6 60 61 62 63 64 65 66 67 68 69
7 70 71 72 73 74 75 76 77 78 79
8 80 81 82 83 84 85 86 87 88 89
9 90 91 92 93 94 95 96 97 98 99
i_s, j_s = np.arange(4).cumsum(), np.arange(1, 5).cumsum()  # starts [0 1 3 6], stops [1 3 6 10]
df1, df2, df3, df4 = [
    df.iloc[:, i:j] for i, j in zip(i_s, j_s)
]
Verify
pd.concat(dict(enumerate([df.iloc[:, i:j] for i, j in zip(i_s, j_s)])), axis=1)
0 1 2 3
A B C D E F G H I J
0 0 1 2 3 4 5 6 7 8 9
1 10 11 12 13 14 15 16 17 18 19
2 20 21 22 23 24 25 26 27 28 29
3 30 31 32 33 34 35 36 37 38 39
4 40 41 42 43 44 45 46 47 48 49
5 50 51 52 53 54 55 56 57 58 59
6 60 61 62 63 64 65 66 67 68 69
7 70 71 72 73 74 75 76 77 78 79
8 80 81 82 83 84 85 86 87 88 89
9 90 91 92 93 94 95 96 97 98 99
First build the Triangular Number Series boundaries, then apply them to the dataframe:
n = len(df.columns)
res = []
i = 1
end = 0
while end < n:
    begin = end
    end = i * (i + 1) // 2   # i-th triangular number: 1, 3, 6, 10, ...
    res.append((begin, end))
    i += 1
dfs = [df.iloc[:, begin:end] for begin, end in res]
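The same boundaries can also be computed without an explicit loop via cumulative sums; a sketch (unlike the loop above, any incomplete trailing block is dropped here):
import numpy as np

n = len(df.columns)
bounds = np.arange(1, n + 1).cumsum()   # triangular numbers 1, 3, 6, 10, ...
bounds = bounds[bounds <= n]            # keep only complete blocks
starts = np.r_[0, bounds[:-1]]
dfs = [df.iloc[:, i:j] for i, j in zip(starts, bounds)]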

Shuffle DataFrame rows except the first row

I am trying to randomize all rows in a data frame except for the first. I would like for the first row to always appear first, and the remaining rows can be in any randomized order.
My data frame is:
df = pd.DataFrame(np.random.randn(10, 5), columns=['a', 'b', 'c', 'd', 'e'])
Any suggestions as to how I can approach this?
try this:
df = pd.concat([df[:1], df[1:].sample(frac=1)]).reset_index(drop=True)
test:
In [38]: df
Out[38]:
a b c d e
0 2.070074 2.216060 -0.015823 0.686516 -0.738393
1 -1.213517 0.994057 0.634805 0.517844 -0.128375
2 0.937532 0.814923 -0.231120 1.970019 1.438927
3 1.499967 0.105707 1.255207 0.929084 -3.359826
4 0.418702 -0.894226 -1.088968 0.631398 0.152026
5 1.214119 -0.122633 0.983818 -0.445202 -0.807955
6 0.252078 -0.258703 -0.445209 -0.179094 1.180077
7 1.428827 -0.569009 -0.718485 0.161108 1.300349
8 -1.403100 2.154548 -0.492264 -0.544538 -0.061745
9 0.468671 0.004839 -0.738240 -0.385624 -0.532640
In [39]: df = pd.concat([df[:1], df[1:].sample(frac=1)]).reset_index(drop=True)
In [40]: df
Out[40]:
a b c d e
0 2.070074 2.216060 -0.015823 0.686516 -0.738393
1 0.468671 0.004839 -0.738240 -0.385624 -0.532640
2 0.418702 -0.894226 -1.088968 0.631398 0.152026
3 -1.213517 0.994057 0.634805 0.517844 -0.128375
4 1.428827 -0.569009 -0.718485 0.161108 1.300349
5 0.937532 0.814923 -0.231120 1.970019 1.438927
6 0.252078 -0.258703 -0.445209 -0.179094 1.180077
7 1.499967 0.105707 1.255207 0.929084 -3.359826
8 -1.403100 2.154548 -0.492264 -0.544538 -0.061745
9 1.214119 -0.122633 0.983818 -0.445202 -0.807955
Use numpy's shuffle
import pandas as pd
import numpy as np

df = pd.DataFrame(np.arange(100).reshape(20, 5), columns=list('ABCDE'))
np.random.shuffle(df.values[1:, :])
print(df)
A B C D E
0 0 1 2 3 4
1 55 56 57 58 59
2 10 11 12 13 14
3 80 81 82 83 84
4 90 91 92 93 94
5 70 71 72 73 74
6 25 26 27 28 29
7 40 41 42 43 44
8 65 66 67 68 69
9 5 6 7 8 9
10 45 46 47 48 49
11 85 86 87 88 89
12 15 16 17 18 19
13 30 31 32 33 34
14 60 61 62 63 64
15 20 21 22 23 24
16 35 36 37 38 39
17 95 96 97 98 99
18 75 76 77 78 79
19 50 51 52 53 54
np.random.shuffle shuffles an ndarray in place. The dataframe is just a wrapper around an ndarray; you can access that ndarray with the values attribute. To shuffle all but the first row, operate on the array slice [1:, :].
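One caveat: shuffling df.values in place only works when .values returns a view, which requires a single shared dtype across all columns. A copy-safe alternative is to permute row positions and reindex, sketched below:
import numpy as np

# keep row 0 first, permute the rest by position
idx = np.concatenate(([0], np.random.permutation(np.arange(1, len(df)))))
df = df.iloc[idx].reset_index(drop=True)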

Drop range of columns by labels

Suppose I had this large data frame:
In [31]: df
Out[31]:
A B C D E F G H I J ... Q R S T U V W X Y Z
0 0 1 2 3 4 5 6 7 8 9 ... 16 17 18 19 20 21 22 23 24 25
1 26 27 28 29 30 31 32 33 34 35 ... 42 43 44 45 46 47 48 49 50 51
2 52 53 54 55 56 57 58 59 60 61 ... 68 69 70 71 72 73 74 75 76 77
[3 rows x 26 columns]
which you can create using
alphabet = [chr(letter_i) for letter_i in range(ord('A'), ord('Z')+1)]
df = pd.DataFrame(np.arange(3*26).reshape(3, 26), columns=alphabet)
What's the best way to drop all columns between column 'D' and 'R' using the labels of the columns?
I found one ugly way to do it:
df.drop(df.columns[df.columns.get_loc('D'):df.columns.get_loc('R')+1], axis=1)
Here's my entry:
>>> df.drop(df.columns.to_series()["D":"R"], axis=1)
A B C S T U V W X Y Z
0 0 1 2 18 19 20 21 22 23 24 25
1 26 27 28 44 45 46 47 48 49 50 51
2 52 53 54 70 71 72 73 74 75 76 77
By converting df.columns from an Index to a Series, we can take advantage of the ["D":"R"]-style selection:
>>> df.columns.to_series()["D":"R"]
D D
E E
F F
G G
H H
I I
J J
... ...
Q Q
R R
dtype: object
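On pandas versions that support drop(columns=...) (0.21+), the same label range can be expressed with a .loc slice, skipping the Series conversion entirely (a minimal sketch):
df.drop(columns=df.loc[:, 'D':'R'].columns)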
Here you are:
print(df.loc[:, 'A':'C'].join(df.loc[:, 'S':'Z']))
A B C S T U V W X Y Z
0 0 1 2 18 19 20 21 22 23 24 25
1 26 27 28 44 45 46 47 48 49 50 51
2 52 53 54 70 71 72 73 74 75 76 77
Here's another way ...
low, high = df.columns.slice_locs('D', 'R')
drops = df.columns[low:high]
print(df.drop(drops, axis=1))
A B C S T U V W X Y Z
0 0 1 2 18 19 20 21 22 23 24 25
1 26 27 28 44 45 46 47 48 49 50 51
2 52 53 54 70 71 72 73 74 75 76 77
Use numpy for more flexibility ... numpy compares strings elementwise and lexicographically (by character code), so letters can be compared directly:
import numpy as np

b = np.array(['A', 'B', 'C', 'D'])
print(b)
print(b > 'B')
gives:
['A' 'B' 'C' 'D']
[False False  True  True]
More difficult selections are also easily possible:
b[np.logical_and(b>'B', b<'D')]
gives:
array(['C'], dtype='<U1')
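Tying this back to the question, the same elementwise comparison can build a boolean mask over the column labels (a sketch; df is the frame from the question above):
import numpy as np

cols = np.asarray(df.columns)
mask = (cols >= 'D') & (cols <= 'R')   # True for labels 'D' through 'R'
print(df.loc[:, ~mask])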
