Preserving multiindex column structure after performing a groupby summation - python

I have a three-level multiindex column. At the lowest level, I want to add a subtotal column.
So in the example here, I would expect a new column zone: day, person: dave, find: 'subtotal' with value = 49+27+63 = 139, and similarly for all the other combinations of zone and person.
import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_product([['day', 'night'], ['dave', 'matt', 'mike'], ['gems', 'rocks', 'paper']])
rows = pd.date_range(start='20191201', periods=5, freq="d")
data = np.random.randint(0, high=100, size=(len(rows), len(cols)))
xf = pd.DataFrame(data, index=rows, columns=cols)
xf.columns.names = ['zone', 'person', 'find']
I can generate the correct subtotal data with xf.groupby(level=[0,1], axis="columns").sum(), but then I lose the find level of the columns; only the zone and person levels remain. I need that third level, called subtotal, so that I can join the result back with the original xf dataframe. But I cannot figure out a nice pythonic way to add a third level back into the multiindex.

You can use sum first and then MultiIndex.from_product with the new level:
df = xf.sum(level=[0,1], axis="columns")
df.columns = pd.MultiIndex.from_product(df.columns.levels + [['subtotal']])
print(df)
                 day                       night
                dave     matt     mike      dave     matt     mike
            subtotal subtotal subtotal  subtotal subtotal subtotal
2019-12-01        85       99      163       210       93      252
2019-12-02        38      113      101       211      110      135
2019-12-03       145       75      122       181      165      176
2019-12-04       220      184      173       179      134      192
2019-12-05       126       77       29       184      178      199
And then join them together with concat and DataFrame.sort_index:
df = pd.concat([xf, df], axis=1).sort_index(axis=1)
print(df)
zone         day                                                           \
person      dave                       matt                       mike
find        gems  paper  rocks  subtotal  gems  paper  rocks  subtotal  gems  paper
2019-12-01    33     96     24       153    34     89     90       213    15     51
2019-12-02    74     48     61       183    94     83      2       179    75      4
2019-12-03    88     85     51       224    65      3     52       120    95     80
2019-12-04    43     28     60       131    43     14     77       134    88     54
2019-12-05    41     72     44       157    63     77     37       177     8     66

zone        ...                  night                                     \
person      ...   dave            matt                       mike
find        ...  rocks  subtotal  gems  paper  rocks  subtotal  gems  paper  rocks
2019-12-01  ...     24       102    19     49      4        72    43     57     92
2019-12-02  ...     90       206    96     55     92       243    75     58     68
2019-12-03  ...     29       182    11     90     85       186     9     20     46
2019-12-04  ...     30        84    25     55     89       169    98     41     85
2019-12-05  ...     73       167    52     90     49       191    51     80     37

zone
person
find        subtotal
2019-12-01       192
2019-12-02       201
2019-12-03        75
2019-12-04       224
2019-12-05       168

[5 rows x 24 columns]
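Note that sum(level=...) and axis="columns" grouping have since been deprecated and removed in recent pandas. A sketch of the same idea for newer versions (assuming pandas >= 2.0), transposing so the grouping happens on the row index:

import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_product(
    [['day', 'night'], ['dave', 'matt', 'mike'], ['gems', 'rocks', 'paper']],
    names=['zone', 'person', 'find'])
rows = pd.date_range(start='20191201', periods=5, freq='d')
xf = pd.DataFrame(np.random.randint(0, high=100, size=(len(rows), len(cols))),
                  index=rows, columns=cols)

# transpose, sum over the 'find' level, transpose back
sub = xf.T.groupby(level=['zone', 'person']).sum().T

# re-attach the third level as 'subtotal' and join with the original
sub.columns = pd.MultiIndex.from_tuples(
    [(z, p, 'subtotal') for z, p in sub.columns], names=xf.columns.names)
out = pd.concat([xf, sub], axis=1).sort_index(axis=1)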

Related

Using apply on a pandas groupby object with a custom function

I have a multi-index df and I want to add a new column by applying an operation:
              height  time
class weight
A     45         150    85
      50         160    80
      55         155    74
B     78         180    90
      51         158    65
      40         155    68
C     80         185    90
      86         175    81
      52         162    73
def operation(col):
    concat = ''
    for i in col:
        concat += str(i)
    return concat
Using
df['new'] = df.groupby(level=0)['height'].apply(operation)
the result df should look like:
              height  time        new
class weight
A     45         150    85  150160155
      50         160    80
      55         155    74
B     78         180    90  180158155
      51         158    65
      40         155    68
C     80         185    90  185175162
      86         175    81
      52         162    73
However, the resulting df actually has NaN in the new column. What am I doing wrong?
IIUC, use transform instead of apply:
df['new'] = df.groupby(level=0)['height'].transform(operation)
Output:
              height  time        new
class weight
A     45         150    85  150160155
      50         160    80  150160155
      55         155    74  150160155
B     78         180    90  180158155
      51         158    65  180158155
      40         155    68  180158155
C     80         185    90  185175162
      86         175    81  185175162
      52         162    73  185175162
OR
df['new'] = df.groupby(level=0)['height'].transform(operation).drop_duplicates()
Output:
              height  time        new
class weight
A     45         150    85  150160155
      50         160    80        NaN
      55         155    74        NaN
B     78         180    90  180158155
      51         158    65        NaN
      40         155    68        NaN
C     80         185    90  185175162
      86         175    81        NaN
      52         162    73        NaN
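For context on why apply produced NaN: the groupby-apply returns one concatenated string per group, indexed by the group label only, while the frame has a (class, weight) MultiIndex, so assigning the result back finds no matching labels. transform broadcasts the group result to every original row, so the indexes line up. A minimal sketch reproducing this (the sample data is rebuilt from the question):

import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('A', 45), ('A', 50), ('A', 55), ('B', 78), ('B', 51), ('B', 40)],
    names=['class', 'weight'])
df = pd.DataFrame({'height': [150, 160, 155, 180, 158, 155],
                   'time': [85, 80, 74, 90, 65, 68]}, index=idx)

def operation(col):
    return ''.join(str(i) for i in col)

# apply returns one value per group, indexed by the group label only ...
print(df.groupby(level=0)['height'].apply(operation))
# ... while transform repeats that value for every row of the group,
# keeping the original MultiIndex, so assignment aligns
df['new'] = df.groupby(level=0)['height'].transform(operation)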
# Concatenate height within each class, put it in a dict, and map it back via the class column
df['new'] = df['class'].map(df.groupby('class').height.apply(lambda x: x.astype(str).str.cat()).to_dict())
# Select duplicated(keep='first') and invert it in np.where to blank out the repeats
df['new'] = np.where(~df['new'].duplicated(keep='first'), df['new'], '')
print(df)
  class  weight  height  time        new
0     A      45     150    85  150160155
1     A      50     160    80
2     A      55     155    74
3     B      78     180    90  180158155
4     B      51     158    65
5     B      40     155    68
6     C      80     185    90  185175162
7     C      86     175    81
8     C      52     162    73

How can I transform a DataFrame into many temporal features in Python?

I have this dataframe:
        Timestamp  DATA0  DATA1  DATA2  DATA3  DATA4  DATA5  DATA6  DATA7
0    1.478196e+09    219    128    220     27    141    193     95     50
1    1.478196e+09     95    237     27    121     90    194    232    137
2    1.478196e+09    193     22    103    217    138    195    153    172
3    1.478196e+09    181    120    186     73    120    239    121    218
4    1.478196e+09     70    194     36     16     81    129     95    217
..            ...    ...    ...    ...    ...    ...    ...    ...    ...
242  1.478198e+09     15    133    112      2    236     81     94    252
243  1.478198e+09      0    123    163    160     13    156    145     32
244  1.478198e+09     83    147     61     61     33    199    147    110
245  1.478198e+09    172     95     87    220    226     99    108    176
246  1.478198e+09    123    240    180    145    132    213     47     60
I need to create temporal features like this:
      Timestamp  DATA0  DATA1  DATA2  DATA3  DATA4  DATA5  DATA6  DATA7
0  1.478196e+09    219    128    220     27    141    193     95     50
1  1.478196e+09     95    237     27    121     90    194    232    137
2  1.478196e+09    193     22    103    217    138    195    153    172
3  1.478196e+09    181    120    186     73    120    239    121    218
4  1.478196e+09     70    194     36     16     81    129     95    217

      Timestamp  DATA0  DATA1  DATA2  DATA3  DATA4  DATA5  DATA6  DATA7
1  1.478196e+09     95    237     27    121     90    194    232    137
2  1.478196e+09    193     22    103    217    138    195    153    172
3  1.478196e+09    181    120    186     73    120    239    121    218
4  1.478196e+09     70    194     36     16     81    129     95    217
5  1.478196e+09    121     69    111    204    134     92     51    190

      Timestamp  DATA0  DATA1  DATA2  DATA3  DATA4  DATA5  DATA6  DATA7
2  1.478196e+09    193     22    103    217    138    195    153    172
3  1.478196e+09    181    120    186     73    120    239    121    218
4  1.478196e+09     70    194     36     16     81    129     95    217
5  1.478196e+09    121     69    111    204    134     92     51    190
6  1.478196e+09    199    132     39    197    159    242    153    104
How can I do this automatically? What structure should I use, and what functions? I was told that the dataframe should become an array of arrays, but it's not very clear to me.
If I understand it correctly, you want e.g. a list of dataframes, where each dataframe is a progressing slice of the original frame. This example would give you a list of dataframes:
import pandas as pd

# dummy dataframe
df = pd.DataFrame({'col_1': range(10), 'col_2': range(10)})

# returns slices of size slice_length with step size 1
slice_length = 5
lst = [df.iloc[i:i + slice_length, :] for i in range(df.shape[0] - slice_length)]
Please note that you are duplicating a lot of data and thus increasing memory usage. If you merely have to perform an operation on subsequent slices, it is better to loop over the dataframe and apply your function. Even better, if possible, you should try to vectorize your operation, as this will likely make a huge difference in performance.
EDIT: saving the slices to file:
If you're only interested in saving the slices to file (e.g. in a csv), you don't need to first create a list of all slices (with the associated memory usage). Instead, loop over the slices (by looping over the starting indices that define each slice), and save each slice to file.
slice_length = 5
# loop over the starting indices that define each slice
for idx_from in range(df.shape[0] - slice_length):
    # create the slice and write it to file
    df.iloc[idx_from:idx_from + slice_length, :].to_csv(
        f'slice_starting_idx_{idx_from}.csv', sep=';', index=False)
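If the slices only feed into array-based computation, a zero-copy alternative is NumPy's sliding window view (a sketch, assuming NumPy >= 1.20, where sliding_window_view is available):

import numpy as np
import pandas as pd
from numpy.lib.stride_tricks import sliding_window_view

df = pd.DataFrame({'col_1': range(10), 'col_2': range(10)})
slice_length = 5

# shape (n_slices, n_columns, slice_length): a view into the original data
windows = sliding_window_view(df.to_numpy(), slice_length, axis=0)
# reorder to (n_slices, slice_length, n_columns) so each window looks like a frame slice
windows = windows.transpose(0, 2, 1)
print(windows.shape)  # (6, 5, 2)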
Hi, I have tried this, which might produce your expected result, based on indexes:
import numpy as np
import pandas as pd

x = np.array([[8, 9], [2, 3], [9, 10], [25, 78], [56, 67], [56, 67], [72, 12], [98, 24],
              [8, 9], [2, 3], [9, 10], [25, 78], [56, 67], [56, 67], [72, 12], [98, 24]])
df = pd.DataFrame(np.reshape(x, (16, 2)), columns=['Col1', 'Col2'])
print(df)
print("**********************************")

count = df['Col1'].count()  # number of rows in the dataframe
i = 0       # start index for each iteration
n = 4       # end index for each iteration
count2 = 3  # important: for 4 rows per chunk set count2 to 4-1 i.e. 3; for 5 rows, 5-1 i.e. 4
while count != 0:   # loop until count reaches 0
    df1 = df[i:n]   # first iteration i=0, n=4 (four rows); second iteration i=4, n=8; and so on
    if i > 0:
        print(df1.set_index(np.arange(i - count2, n - count2)))
        count2 = count2 + 3  # increment count2 so the index runs 0-3, then 1-4, and so on
    else:
        print(df1.set_index(np.arange(i, n)))
    i = n
    count = count - 4
    n = n + 4
First output of the dataframe:
    Col1  Col2
0      8     9
1      2     3
2      9    10
3     25    78
4     56    67
5     56    67
6     72    12
7     98    24
8      8     9
9      2     3
10     9    10
11    25    78
12    56    67
13    56    67
14    72    12
15    98    24
Final output:
   Col1  Col2
0     8     9
1     2     3
2     9    10
3    25    78

   Col1  Col2
1    56    67
2    56    67
3    72    12
4    98    24

   Col1  Col2
2     8     9
3     2     3
4     9    10
5    25    78

   Col1  Col2
3    56    67
4    56    67
5    72    12
6    98    24
Note: I am also new to Python; there may be shorter ways to achieve the expected output.
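For reference, a more compact way to produce the same non-overlapping chunks of four rows with the shifting index (a sketch reusing the df built above):

import pandas as pd

chunk = 4
for start in range(0, len(df), chunk):
    part = df.iloc[start:start + chunk]
    offset = start // chunk  # each chunk's index starts one higher than the last
    print(part.set_index(pd.RangeIndex(offset, offset + len(part))))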

Transpose .csv File: changing Header Time Stamps to Line TimeStamp

My data looks like this:
statnr datum ele h01 h02 h03 h04 h05 h06 h07 h08 h09 h10 h11 h12 h13 h14 h15 h16 h17 h18 h19 h20 h21 h22 h23 h24
----------- ----------- --- ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------ ------
20101 20020401 D6K 103 126 115 114 105 101 118 118 130 129 126 128 132 133 131 130 130 131 130 130 125 117 122 124
20101 20020402 D6K 126 118 119 120 114 111 107 119 124 126 122 130 130 130 128 128 126 119 129 134 132 127 112 118
........
20101 20150909 D6K 72 82 75 76 82 93 91 96 99 101 108 108 103 100 94 90 82 92 88 79 77 89 94 92
20101 20020401 FLP 54 61 58 61 66 67 65 56 47 46 40 40 39 32 34 34 37 43 45 45 50 54 59 63
20101 20020402 FLP 64 61 67 66 68 69 67 56 50 46 42 39 33 32 33 34 39 48 55 58 61 62 65 68
........
20101 20150909 FLP 93 95 92 94 94 96 95 92 90 84 87 75 81 75 75 74 83 87 89 96 94 92 91 94
20101 20070906 GSE 32700 0 0 0 0 0 3 10 17 30 28 27 37 44 37 25 16 5 1 0 0 0 0 0
20101 20070907 GSE 0 0 0 0 0 0 11 48 72 107 257 264 290 216 255 178 122 57 6 0 0 0 0 0
........
20101 20150909 GSE 0 0 0 0 0 1 17 51 71 118 82 200 116 130 142 156 48 15 1 0 0 0 0 0
20101 20020101 SUV 0 0 0 0 0 0 0 0 9 10 10 10 10 10 10 10 2 0 0 0 0 0 0 0
........
20101 20150909 SUV 0 0 0 0 0 0 0 0 0 1 0 5 1 4 4 9 2 0 0 0 0 0 0 0
20101 20020401 TEX 30 18 21 18 9 10 18 42 69 91 114 117 126 135 133 127 114 87 58 47 39 33 27 24
........
20101 20150909 TEX 50 46 48 50 50 49 57 67 77 85 80 111 95 100 101 92 74 67 59 53 49 49 49 47
20101 20020401 QVX 6 10 9 8 13 25 19 15 16 19 24 24 19 23 24 22 24 23 19 13 12 16 16 18
........
20101 20150909 QVX 40 42 37 34 30 34 22 22 27 31 26 28 37 38 42 43 52 54 59 81 80 69 78 60
As you can see, it is a huge sheet with a statnr column, a date column, ele for the parameter, and, as you can imagine, h01-h24 for the hours.
I need to adjust the format of that sheet to the format of the other files I'm working with (for plotting and processing reasons).
I'm currently trying to bring this file into this format:
Date Time D6K FLP GSE SUV TEX QVX
01.04.2002 01:00 103 54 0 30 6
.....
09.09.2015 23:59 92 94 0 0 47 60
So what I'm trying to do is:
1) Get rid of the statnr column (field[0])
2) Switch the header with the ele column (field[2]) so that all parameters are in the header, linked to the new date/time rows
3) Convert the time format from %H%M%D to %D%M%Y %H:%M
Since I'm new to Python and coding, I thought I'd ask if there's a package out there that deals with this kind of problem, and if there's a term for the problem in general (switching header with lines) --> thanks (Peter Wood), I changed the title to Transpose.
Thanks for suggestions
For Clarification:
the ........ indicates that I left some rows out
the ----------- are in the file
Because you may have missing data, this isn't a simple case of transposing blocks. I think what you need to do is read the input file into a data structure from which you can then look up the values as required to generate your output. In Python you can use a dictionary whose key is a tuple of your element type, date, and hour:
mydict = {}
with open(r'F:\myfile.txt') as f:
    z = f.readline()  # discard headings
    z = f.readline()  # discard the row of dashes
    for line in f:
        fields = line.split()
        date = fields[1]
        ele = fields[2]
        for hour, value in enumerate(fields[3:27]):
            mydict[(ele, date, hour)] = value
Now you have all the data in a big dictionary that's addressable by ele, date and hour. I'm going to guess that the ele values are fixed and you can hardcode them, but you'll want to build a list of the unique dates you actually found in the input file, and put them in ascending order:
dateset = set()
for k in mydict.keys():
    dateset.add(k[1])
dates = list(dateset)
dates.sort()
Now you're ready to build your output file.
for date in dates:
    for hour in range(24):
        output = date + '\t' + str(hour)
        for ele in ['D6K', 'FLP', 'GSE', 'SUV', 'TEX', 'QVX']:
            output = output + '\t' + mydict.get((ele, date, hour), '')
        print(output)
Using the get method on the dictionary allows you to specify a default value to be returned if the key you supplied isn't in the dictionary.
I haven't dealt with the date formatting (note that 'hour' ranges from 0 to 23), or writing the output to a file, but the above should get you going.
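Since the other questions here lean on pandas, here is a sketch of the same reshaping with pandas (assuming a whitespace-separated file like the sample, without the '........' elision rows; 'myfile.txt' is a placeholder path):

import pandas as pd

# read the table, skipping the row of dashes under the header
df = pd.read_csv('myfile.txt', sep=r'\s+', skiprows=[1])

# melt the 24 hour columns into rows, then pivot the parameters into columns
long = df.melt(id_vars=['statnr', 'datum', 'ele'],
               value_vars=[f'h{i:02d}' for i in range(1, 25)],
               var_name='hour', value_name='value')
long['hour'] = long['hour'].str[1:].astype(int) - 1  # h01 -> hour 0
wide = (long.pivot_table(index=['datum', 'hour'], columns='ele',
                         values='value', aggfunc='first')
            .reset_index())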

Conditional summing of columns in pandas

I have the following dataframe in Pandas:
Student-ID Last-name First-name HW1 HW2 HW3 HW4 HW5 M1 M2 Final
59118211 Alf Brian 96 90 88 93 96 78 60 59.0
59260567 Anderson Jill 73 83 96 80 84 80 52 42.5
59402923 Archangel Michael 99 80 60 94 98 41 56 0.0
59545279 Astor John 93 88 97 100 55 53 53 88.9
59687635 Attach Zach 69 75 61 65 91 90 63 69.0
I want to sum only those columns which have "HW" in their name. Any suggestions on how I can do that?
Note: the number of columns containing HW may differ, so I can't reference them directly.
You could call df.filter(regex='HW') to select the columns whose names contain 'HW', and then sum row-wise via sum(axis=1):
In [23]: df
Out[23]:
   StudentID   Lastname Firstname  HW1  HW2  HW3  HW4  HW5  HW6  HW7   M1
0   59118211        Alf     Brian   96   90   88   93   96   97   88   10
1   59260567   Anderson      Jill   73   83   96   80   84   99   80  100
2   59402923  Archangel   Michael   99   80   60   94   98   73   97   50
3   59545279      Astor      John   93   88   97  100   55   96   86   60
4   59687635     Attach      Zach   69   75   61   65   91   89   82   55
5   59829991       Bake      Jake   56    0   77   78    0   79    0   10

In [24]: df.filter(regex='HW').sum(axis=1)
Out[24]:
0    648
1    595
2    601
3    615
4    532
5    290
dtype: int64
John's solution - using df.filter() - is more elegant, but you could also consider a list comprehension ...
df[[x for x in df.columns if 'HW' in x]].sum(axis=1)
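As a side note, if a plain substring match is enough, filter also accepts like=; a small sketch ('HW_total' is just an illustrative column name):

# like= does a simple substring match instead of a regex
df['HW_total'] = df.filter(like='HW').sum(axis=1)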

Issue combining columns in Dataframe?

I have the following dataframe:
Obj BIT BIT BIT GAS GAS GAS OIL OIL OIL
Date
2007-01-03 18 7 0 184 35 2 52 14 0
2007-01-09 43 3 0 249 35 2 68 11 1
2007-01-16 60 6 0 254 35 5 72 13 1
2007-01-23 69 11 1 255 43 2 81 6 0
2007-01-30 74 8 0 263 29 4 69 9 0
2007-02-06 78 6 1 259 34 2 79 6 0
2007-02-14 76 9 1 263 24 2 70 10 1
2007-02-20 85 7 0 241 20 6 72 4 0
2007-02-27 79 6 0 242 35 3 68 7 0
2007-03-06 68 14 0 225 26 2 57 10 1
How can I sum each group of the 9 columns into 3 columns: "BIT", "GAS" and "OIL"?
This is the code for the dataframe; it basically just takes the cross-section I want from a larger df:
ABrigsA = ndfAB.xs(['BIT','GAS','OIL'],axis=1)
Any suggestions?
Assuming that you want to sum similarly-named columns, you can use groupby [tutorial docs]:
>>> df.groupby(level=0, axis='columns').sum()
Obj BIT GAS OIL
Date
2007-01-03 25 221 66
2007-01-09 46 286 80
2007-01-16 66 294 86
2007-01-23 81 300 87
2007-01-30 82 296 78
2007-02-06 85 295 85
2007-02-14 86 289 81
2007-02-20 92 267 76
2007-02-27 85 280 75
2007-03-06 82 253 68
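Note that groupby(..., axis='columns') is deprecated in recent pandas; a sketch of the same sum that avoids it, given the duplicated column labels above:

# transpose, group the (now row) labels, transpose back
summed = df.T.groupby(level=0).sum().T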
