Padding and reshaping pandas dataframe - python
I have a dataframe with the following form:
import pandas as pd

data = pd.DataFrame({'ID': [1,1,1,2,2,2,2,3,3],
                     'Time': [0,1,2,0,1,2,3,0,1],
                     'sig': [2,3,1,4,2,0,2,3,5],
                     'sig2': [9,2,8,0,4,5,1,1,0],
                     'group': ['A','A','A','B','B','B','B','A','A']})
print(data)
ID Time sig sig2 group
0 1 0 2 9 A
1 1 1 3 2 A
2 1 2 1 8 A
3 2 0 4 0 B
4 2 1 2 4 B
5 2 2 0 5 B
6 2 3 2 1 B
7 3 0 3 1 A
8 3 1 5 0 A
I want to reshape and pad so that each 'ID' has the same number of Time values, the sig and sig2 columns are padded with zeros (or with the mean value within the ID), and group carries the same letter value. The output after padding would be:
data_pad = pd.DataFrame({'ID': [1,1,1,1,2,2,2,2,3,3,3,3],
                         'Time': [0,1,2,3,0,1,2,3,0,1,2,3],
                         'sig': [2,3,1,0,4,2,0,2,3,5,0,0],
                         'sig2': [9,2,8,0,0,4,5,1,1,0,0,0],
                         'group': ['A','A','A','A','B','B','B','B','A','A','A','A']})
print(data_pad)
ID Time sig sig2 group
0 1 0 2 9 A
1 1 1 3 2 A
2 1 2 1 8 A
3 1 3 0 0 A
4 2 0 4 0 B
5 2 1 2 4 B
6 2 2 0 5 B
7 2 3 2 1 B
8 3 0 3 1 A
9 3 1 5 0 A
10 3 2 0 0 A
11 3 3 0 0 A
My end goal is to reshape this into an array with shape (number of IDs, number of time points, number of signals; 2 here).
It seems that if I pivot the data, it fills in with NaN values, which is fine for the signal values but not for group. I am also hoping to avoid looping over data.groupby('ID'), since my actual data has a large number of IDs and the looping would likely be very slow.
Here's one approach: create the new index with pd.MultiIndex.from_product and use it to reindex on the Time level:
df = data.set_index(['ID', 'Time'])

# define the new index as the product of all IDs and all Times
ix = pd.MultiIndex.from_product([df.index.levels[0],
                                 df.index.levels[1]],
                                names=['ID', 'Time'])

# reindex using the above MultiIndex
df = df.reindex(ix, fill_value=0)

# fill_value=0 also lands in the group column; mask those zeros and forward fill
df['group'] = df.group.mask(df.group.eq(0)).ffill()
print(df.reset_index())
ID Time sig sig2 group
0 1 0 2 9 A
1 1 1 3 2 A
2 1 2 1 8 A
3 1 3 0 0 A
4 2 0 4 0 B
5 2 1 2 4 B
6 2 2 0 5 B
7 2 3 2 1 B
8 3 0 3 1 A
9 3 1 5 0 A
10 3 2 0 0 A
11 3 3 0 0 A
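Since the question's end goal is an array of shape (number of IDs, number of time points, number of signals), the padded frame converts directly; a minimal sketch, assuming the reindexed df from above is still indexed by ['ID', 'Time']:

# the MultiIndex levels give the target dimensions
n_ids = df.index.levels[0].size     # 3 IDs
n_times = df.index.levels[1].size   # 4 time points
arr = df[['sig', 'sig2']].to_numpy().reshape(n_ids, n_times, 2)
print(arr.shape)  # (3, 4, 2)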
IIUC, this can also be done in a single chain:
(data.pivot_table(columns='Time', index=['ID', 'group'], fill_value=0)  # pad missing Times with 0
     .stack('Time')                      # back to long format
     .sort_index(level=['ID', 'Time'])
     .reset_index()
)
Output:
ID group Time sig sig2
0 1 A 0 2 9
1 1 A 1 3 2
2 1 A 2 1 8
3 1 A 3 0 0
4 2 B 0 4 0
5 2 B 1 2 4
6 2 B 2 0 5
7 2 B 3 2 1
8 3 A 0 3 1
9 3 A 1 5 0
10 3 A 2 0 0
11 3 A 3 0 0
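One small follow-up: pivot_table moves group into the index, so it comes back as the second column. If the exact column order from the question matters, reorder after resetting the index (a minimal sketch, assuming the chain above is assigned to out):

out = out[['ID', 'Time', 'sig', 'sig2', 'group']]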
Related
pandas explode column only non zero values
How to filter the tolist or values below to get only the non-zeros?

import pandas as pd
df = pd.DataFrame({'A': [0, 2, 3], 'B': [2, 0, 4], 'C': [3, 4, 0]})
df['D'] = df[['A', 'B', 'C']].values.tolist()
df.explode('D')

Data:

   A  B  C
0  0  2  3
1  2  0  4
2  3  4  0

On exploding column D the frame becomes 9 rows, but I only want the rows where D is non-zero.

Expected result:

   A  B  C  D
0  0  2  3  2
0  0  2  3  3
1  2  0  4  2
1  2  0  4  4
2  3  4  0  3
2  3  4  0  4

I got list(filter(None, [1,0,2,3,0])) to return only non-zeros, but I am not sure how to apply that in the above code.
index.repeat:

m = df.ne(0)
df.loc[df.index.repeat(m.sum(1))].assign(D=df.values[m])

   A  B  C  D
0  0  2  3  2
0  0  2  3  3
1  2  0  4  2
1  2  0  4  4
2  3  4  0  3
2  3  4  0  4
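The reason the assign lines up is that boolean indexing with a 2-D mask returns values in row-major order, matching the repeated index; a quick standalone check:

import pandas as pd

df = pd.DataFrame({'A': [0, 2, 3], 'B': [2, 0, 4], 'C': [3, 4, 0]})
m = df.ne(0)
print(df.values[m])  # [2 3 2 4 3 4] -- the non-zeros, row by row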
Simplest is query:

df['D'] = df[['A', 'B', 'C']].values.tolist()
df.explode('D').query('D != 0')

Output:

   A  B  C  D
0  0  2  3  2
0  0  2  3  3
1  2  0  4  2
1  2  0  4  4
2  3  4  0  3
2  3  4  0  4
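Worth noting with either approach: explode produces an object-dtype column, so a cast may be wanted afterwards; a minimal sketch building on the query answer:

import pandas as pd

df = pd.DataFrame({'A': [0, 2, 3], 'B': [2, 0, 4], 'C': [3, 4, 0]})
df['D'] = df[['A', 'B', 'C']].values.tolist()
out = df.explode('D').query('D != 0')
out['D'] = out['D'].astype(int)  # explode leaves D as object dtype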
All possible permutations within groups of column pandas
I have a df:

   a  b  c  d
1  0  1  2  4
2  0  1  3  5
3  0  2  1  7
4  1  3  2  5

Within groups, grouped by 'a' and 'b', I want all possible permutations of 'c':

   a  b  c  d
1  0  1  2  4
   0  1  3  5
   0  2  1  7
2  0  1  3  5
   0  1  2  4
   0  2  1  7
3  1  3  2  5
   ...
...

I tried:

s = pd.Series({x: list(it.permutations(y)) for x, y in df.groupby(['a', 'b']).c})

0  1    [(3, 2), (2, 3)]
   2    [(1,)]
1  3    [(2,)]

explode() alone does not do what I need, since I need all combinations of groups within subgroups. For example, in this case there are 2 different ways to combine rows 1 and 2. If row 2 had 2 different permutations as well, there would be 2*2 = 4 ways. Does anybody have an idea?
Fix your code with groupby and explode:

s = (pd.Series({x: list(itertools.permutations(y)) for x, y in df.groupby('a').b})
       .explode()
       .explode()
       .reset_index())

    index  0
0       0  1
1       0  2
2       0  3
3       0  1
4       0  3
5       0  2
6       0  2
7       0  1
8       0  3
9       0  2
10      0  3
11      0  1
12      0  3
13      0  1
14      0  2
15      0  3
16      0  2
17      0  1
18      1  1
19      1  2
20      1  2
21      1  1
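The answer above enumerates the per-group permutations, but the question also asks for every combination of permutations across subgroups (the 2*2 = 4 counting). itertools.product over the per-group permutations gives exactly that; a minimal sketch using the question's df and grouping, not part of the answer above:

import itertools as it
import pandas as pd

df = pd.DataFrame({'a': [0, 0, 0, 1], 'b': [1, 1, 2, 3],
                   'c': [2, 3, 1, 2], 'd': [4, 5, 7, 5]})

# for each (a, b) group, every ordering of its row labels
perms = [list(it.permutations(idx)) for _, idx in df.groupby(['a', 'b']).groups.items()]

# one full arrangement per element of the cartesian product (2*1*1 = 2 here)
for combo in it.product(*perms):
    order = list(it.chain.from_iterable(combo))
    print(df.loc[order], end='\n\n')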
Creating a column that assigns max value of set of rows by condition to all rows in that group
I have a dataframe that looks like this:

data  metadata
A     0
A     1
A     2
A     3
A     4
B     0
B     1
B     2
A     0
A     1
B     0
A     0
A     1
B     0

df.data contains two different categories, A and B. df.metadata stores a running count of the number of times a category appears consecutively before the category changes. I want to create a column consecutive_count that assigns the max value of metadata per consecutive group to every row in that group. It should look like this:

data  metadata  consecutive_count
A     0         4
A     1         4
A     2         4
A     3         4
A     4         4
B     0         2
B     1         2
B     2         2
A     0         1
A     1         1
B     0         0
A     0         1
A     1         1
B     0         0

Please advise. Thank you.
Method 1: you may try transform('max') on a groupby over each consecutive run of data:

s = df.data.ne(df.data.shift()).cumsum()
df['consecutive_count'] = df.groupby(s).metadata.transform('max')

Out[96]:
   data  metadata  consecutive_count
0     A         0                  4
1     A         1                  4
2     A         2                  4
3     A         3                  4
4     A         4                  4
5     B         0                  2
6     B         1                  2
7     B         2                  2
8     A         0                  1
9     A         1                  1
10    B         0                  0
11    A         0                  1
12    A         1                  1
13    B         0                  0

Method 2: since metadata is already sorted within each run, you may reverse the dataframe and do a groupby cummax:

s = df.data.ne(df.data.shift()).cumsum()
df['consecutive_count'] = df[::-1].groupby(s).metadata.cummax()

Out[101]:
   data  metadata  consecutive_count
0     A         0                  4
1     A         1                  4
2     A         2                  4
3     A         3                  4
4     A         4                  4
5     B         0                  2
6     B         1                  2
7     B         2                  2
8     A         0                  1
9     A         1                  1
10    B         0                  0
11    A         0                  1
12    A         1                  1
13    B         0                  0
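The key device in both methods is s = df.data.ne(df.data.shift()).cumsum(): the counter increments exactly where the category changes, so equal values of s label one consecutive run. A tiny standalone illustration on made-up data:

import pandas as pd

data = pd.Series(['A', 'A', 'B', 'B', 'A'])
s = data.ne(data.shift()).cumsum()
print(s.tolist())  # [1, 1, 2, 2, 3] -- one label per consecutive run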
Merge multiple group ids to form a single consolidated group id?
I have the following dataset in a pandas DataFrame:

group_id  sub_group_id
0         0
0         1
1         0
2         0
2         1
2         2
3         0
3         0

I want to merge those group ids to form a consolidated group id:

group_id  sub_group_id  consolidated_group_id
0         0             0
0         1             1
1         0             2
2         0             3
2         1             4
2         2             5
3         0             6
3         0             6

Is there any generic or mathematical way to do it?
cols = ['group_id', 'sub_group_id']
df.assign(
    consolidated_group_id=pd.factorize(
        pd.Series(list(zip(*df[cols].values.T.tolist())))
    )[0]
)

   group_id  sub_group_id  consolidated_group_id
0         0             0                      0
1         0             1                      1
2         1             0                      2
3         2             0                      3
4         2             1                      4
5         2             2                      5
6         3             0                      6
7         3             0                      6
You need to convert the values to tuples and then use factorize:

df['consolidated_group_id'] = pd.factorize(df.apply(tuple, axis=1))[0]
print(df)

   group_id  sub_group_id  consolidated_group_id
0         0             0                      0
1         0             1                      1
2         1             0                      2
3         2             0                      3
4         2             1                      4
5         2             2                      5
6         3             0                      6
7         3             0                      6

The numpy solutions slightly modify this answer: change the ordering with [::-1] and select with [0] to return the inverse array (numpy.unique):

a = df.values

def unique_return_inverse_2D(a):  # a is a 2D array
    a1D = a.dot(np.append((a.max(0) + 1)[:0:-1].cumprod()[::-1], 1))
    return np.unique(a1D, return_inverse=1)[::-1][0]

def unique_return_inverse_2D_viewbased(a):  # a is a 2D array
    a = np.ascontiguousarray(a)
    void_dt = np.dtype((np.void, a.dtype.itemsize * np.prod(a.shape[1:])))
    return np.unique(a.view(void_dt).ravel(), return_inverse=1)[::-1][0]

df['consolidated_group_id'] = unique_return_inverse_2D(a)
df['consolidated_group_id1'] = unique_return_inverse_2D_viewbased(a)
print(df)

   group_id  sub_group_id  consolidated_group_id  consolidated_group_id1
0         0             0                      0                       0
1         0             1                      1                       1
2         1             0                      2                       2
3         2             0                      3                       3
4         2             1                      4                       4
5         2             2                      5                       5
6         3             0                      6                       6
7         3             0                      6                       6
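For completeness, groupby can hand back the same labels directly via ngroup; a minimal alternative sketch (sort=False keeps the ids in order of appearance, matching factorize):

df['consolidated_group_id'] = df.groupby(['group_id', 'sub_group_id'], sort=False).ngroup()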
Transform dataframe to have a row for every observation at every time point
I have the following short dataframe:

A  B  C
1  1  3
2  1  3
3  2  3
4  2  3
5  0  0

I want the output to look like this:

A  B  C
1  1  3
2  1  3
3  0  0
4  0  0
5  0  0
1  1  3
2  1  3
3  2  3
4  2  3
5  0  0
Use pd.MultiIndex.from_product with the unique As and Bs, then reindex:

cols = list('AB')
mux = pd.MultiIndex.from_product([df.A.unique(), df.B.unique()], names=cols)
df.set_index(cols).reindex(mux, fill_value=0).reset_index()

    A  B  C
0   1  1  3
1   1  2  0
2   1  0  0
3   2  1  3
4   2  2  0
5   2  0  0
6   3  1  0
7   3  2  3
8   3  0  0
9   4  1  0
10  4  2  3
11  4  0  0
12  5  1  0
13  5  2  0
14  5  0  0
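For reference, a self-contained version of the above, with the question's frame reconstructed by hand (column values assumed from the printed table):

import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4, 5],
                   'B': [1, 1, 2, 2, 0],
                   'C': [3, 3, 3, 3, 0]})
cols = list('AB')
mux = pd.MultiIndex.from_product([df.A.unique(), df.B.unique()], names=cols)
print(df.set_index(cols).reindex(mux, fill_value=0).reset_index())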