Given the following:
import numpy as np
import pandas as pd
arrays = [['bar', 'bar', 'bar', 'baz', 'baz', 'baz', 'baz'],
['total', 'two', 'one', 'two', 'four', 'total', 'five']]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
s = pd.Series(np.random.randn(7), index=index)
s
first second
bar total 0.334158
two -0.267854
one 1.161727
baz two -0.748685
four -0.888634
total 0.383310
five 0.506120
dtype: float64
How do I ensure that the 'total' rows (in the second index level) always sort to the bottom of each group, like this?
first second
bar one 0.210911
two 0.628357
total -0.911331
baz two 0.315396
four -0.195451
five 0.060159
total 0.638313
dtype: float64
Solution 1
I'm not happy with this. I'm working on a different solution.
unstacked = s.unstack(0)
total = unstacked.loc['total']
# DataFrame.append was removed in pandas 2.0; pd.concat is the modern equivalent
pd.concat([unstacked.drop('total'), total.to_frame().T.rename_axis('second')]).unstack().dropna()
first second
bar one 1.682996
two 0.343783
total 1.287503
baz five 0.360170
four 1.113498
two 0.083691
total -0.377132
dtype: float64
Solution 2
I feel better about this one.
second = pd.Categorical(
    s.index.levels[1].values,
    categories=['one', 'two', 'three', 'four', 'five', 'total'],
    ordered=True
)
# set_levels' inplace argument was removed in pandas 2.0; reassign the index instead
s.index = s.index.set_levels(second, level='second')
cols = s.index.names
s.reset_index().sort_values(cols).set_index(cols)
0
first second
bar one 1.682996
two 0.343783
total 1.287503
baz two 0.083691
four 1.113498
five 0.360170
total -0.377132
Use unstack to create a DataFrame whose columns come from the second level of the MultiIndex, then reorder the columns so that total is the last column, and finally convert the columns to an ordered CategoricalIndex. That way, after stack, the total level sorts last within each group.
np.random.seed(123)
arrays = [['bar', 'bar', 'bar', 'baz', 'baz', 'baz', 'baz'],
['total', 'two', 'one', 'two', 'four', 'total', 'five']]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
s = pd.Series(np.random.randn(7), index=index)
print (s)
first second
bar total -1.085631
two 0.997345
one 0.282978
baz two -1.506295
four -0.578600
total 1.651437
five -2.426679
dtype: float64
df = s.unstack()
df = df[df.columns[df.columns != 'total'].tolist() + ['total']]
# pass categories explicitly so the current column order (total last) defines the ordering
df.columns = pd.CategoricalIndex(df.columns, categories=df.columns, ordered=True)
print (df)
second five four one two total
first
bar NaN NaN 0.282978 0.997345 -1.085631
baz -2.426679 -0.5786 NaN -1.506295 1.651437
s1 = df.stack()
print (s1)
first second
bar one 0.282978
two 0.997345
total -1.085631
baz five -2.426679
four -0.578600
two -1.506295
total 1.651437
dtype: float64
print (s1.sort_index())
first second
bar one 0.282978
two 0.997345
total -1.085631
baz five -2.426679
four -0.578600
two -1.506295
total 1.651437
dtype: float64
Here is my dataframe. I want to drop the 'YTD2017' columns in level 1 that are shown in red, excluding the green ones, because those numbers are what I need.
I know the drop function and tried it in my program. However, all the 'YTD2017' columns were dropped, including the green area.
So, how do I drop the red area and keep the green area? In other words, is there a way to drop columns by a passed column name without affecting the other columns?
Thanks.
overall.drop('YTD2017',axis=1,level = 1 ,inplace = True)
You can specify values in Index.isin for dropping:
arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['YTD2017', 'two', 'YTD2017', 'two', 'YTD2017', 'two', 'YTD2017', 'two']]
tuples = list(zip(*arrays))
mux = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(index=[0], columns=mux)
print (df)
first bar baz foo qux
second YTD2017 two YTD2017 two YTD2017 two YTD2017 two
0 NaN NaN NaN NaN NaN NaN NaN NaN
df = (df.loc[:, ~df.columns.get_level_values(0).isin(['foo','qux']) |
(df.columns.get_level_values(1) != 'YTD2017')])
print (df)
first bar baz foo qux
second YTD2017 two YTD2017 two two two
0 NaN NaN NaN NaN NaN NaN
Or pass tuples of values from both levels to DataFrame.drop:
df = df.drop([('foo','YTD2017'), ('qux','YTD2017')], axis=1)
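If there are many first-level groups to target, the tuple list can be built programmatically instead of hardcoded (a sketch, assuming the original df and that 'foo' and 'qux' are the groups whose 'YTD2017' column should go):
targets = ['foo', 'qux']  # hypothetical list of first-level groups to strip
df = df.drop([(t, 'YTD2017') for t in targets], axis=1)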
I've got the following MultiIndex DataFrame:
first bar baz foo
second one two one two one two
first second
bar one NaN -0.056213 0.988634 0.103149 1.5858 -0.101334
two -0.47464 -0.010561 2.679586 -0.080154 <LQ -0.422063
baz one <LQ 0.220080 1.495349 0.302883 -0.205234 0.781887
two 0.638597 0.276678 -0.408217 -0.083598 -1.15187 -1.724097
foo one 0.275549 -1.088070 0.259929 -0.782472 -1.1825 -1.346999
two 0.857858 0.783795 -0.655590 -1.969776 -0.964557 -0.220568
I would like to extract the max along one level. Expected result:
first bar baz foo
second
one 0.275549 1.495349 1.5858
two 0.857858 2.679586 -0.964557
Here is what I tried:
df.xs('one', level=1, axis = 1).max(axis=0, level=1, skipna = True, numeric_only = False)
And the obtained result:
first baz
second
one 1.495349
two 2.679586
How do I get Pandas to not ignore the whole column if one cell contains a string?
(created like this:)
arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(6, 6), index=index[:6], columns=index[:6])
# assign via .loc with tuple keys so the frame itself is modified
# (chained indexing like df['bar','one'].loc[...] may hit a copy);
# the string values upcast those columns to object dtype
df.loc[('bar', 'one'), ('bar', 'one')] = np.nan
df.loc[('baz', 'one'), ('bar', 'one')] = '<LQ'
df.loc[('bar', 'two'), ('foo', 'one')] = '<LQ'
I guess you would need to replace the non-numeric values with NaN first:
(df.xs('one', level=1, axis=1)
   .apply(pd.to_numeric, errors='coerce')  # '<LQ' becomes NaN
   .groupby(level=1).max()  # max(level=...) was removed in pandas 2.0; NaN is skipped by default
)
Output (with np.random.seed(1)):
first bar baz foo
second
one 0.900856 1.133769 0.865408
two 1.744812 0.319039 0.901591
I have a pandas dataframe that looks like this:
import pandas as pd
import numpy as np
arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])]
df = pd.DataFrame(np.random.randn(8,4),index=arrays,columns=['A','B','C','D'])
I want to add a column E such that df.loc[(slice(None),'one'),'E'] = 1 and df.loc[(slice(None),'two'),'E'] = 2, and I want to do this without iterating over ['one', 'two']. I tried the following:
df.loc[(slice(None),slice('one','two')),'E'] = pd.Series([1,2],index=['one','two'])
but it just adds a column E with NaN. What's the right way to do this?
Here is one way, using reindex (the .values strips the intermediate index so the assignment doesn't try to align on it):
df.loc[:,'E']=pd.Series([1,2],index=['one','two']).reindex(df.index.get_level_values(1)).values
df
A B C D E
bar one -0.856175 -0.383711 -0.646510 0.110204 1
two 1.640114 0.099713 0.406629 0.774960 2
baz one 0.097198 -0.814920 0.234416 -0.057340 1
two -0.155276 0.788130 0.761469 0.770709 2
foo one 1.593564 -1.048519 -1.194868 0.191314 1
two -0.755624 0.678036 -0.899805 1.070639 2
qux one -0.560672 0.317915 -0.858048 0.418655 1
two 1.198208 0.662354 -1.353606 -0.184258 2
Methinks this is a good use case for Index.map:
df['E'] = df.index.get_level_values(1).map({'one':1, 'two':2})
df
A B C D E
bar one 0.956122 -0.705841 1.192686 -0.237942 1
two 1.155288 0.438166 1.122328 -0.997020 2
baz one -0.106794 1.451429 -0.618037 -2.037201 1
two -1.942589 -2.506441 -2.114164 -0.411639 2
foo one 1.278528 -0.442229 0.323527 -0.109991 1
two 0.008549 -0.168199 -0.174180 0.461164 2
qux one -1.175983 1.010127 0.920018 -0.195057 1
two 0.805393 -0.701344 -0.537223 0.156264 2
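Note that labels absent from the mapping become NaN, so a default can be filled in afterwards (a sketch with a hypothetical default of 0):
df['E'] = df.index.get_level_values(1).map({'one': 1}).fillna(0)  # 'two' rows get 0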
You can just get it from df.index.codes (called labels before pandas 0.24):
df['E'] = df.index.codes[1] + 1
print(df)
Output:
A B C D E
bar one 0.746123 1.264906 0.169694 -0.180074 1
two -1.439730 -0.100075 0.929750 0.511201 2
baz one 0.833037 1.547624 -1.116807 0.425093 1
two 0.969887 -0.705240 -2.100482 0.728977 2
foo one -0.977623 -0.800136 -0.361394 0.396451 1
two 1.158378 -1.892137 -0.987366 -0.081511 2
qux one 0.155531 0.275015 0.571397 -0.663358 1
two 0.710313 -0.255876 0.420092 -0.116537 2
Thanks to coldspeed, if you want different values (e.g. x and y), use:
df['E'] = pd.Series(df.index.codes[1]).map({0: 'x', 1: 'y'}).tolist()
print(df)
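Equivalently, you can map the labels themselves rather than their integer codes, which doesn't depend on how the codes happen to be numbered:
df['E'] = df.index.get_level_values(1).map({'one': 'x', 'two': 'y'})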
I have the following Series s:
arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
[1, 2, 1, 2, 1, 2, 3, 2,]]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
s = pd.Series(np.random.randn(8), index=index)
first second
bar 1 -0.493897
2 -0.274826
baz 1 -0.337298
2 -0.564097
foo 1 -1.545826
2 0.159494
qux 3 -0.876819
2 0.780388
dtype: float64
I would like to convert it to:
first second
bar 2 -0.274826
baz 2 -0.564097
foo 2 0.159494
qux 3 -0.876819
dtype: float64
By taking the max second of every first.
I tried doing s.groupby(level=1).apply(max), but this returns:
second
1 -0.337298
2 0.780388
dtype: float64
Clearly my attempt returns the max for each group in second, instead of the max second for each first.
Any idea how to do this?
Use groupby with idxmax, then index into s with the result:
s[s.groupby(level=0).idxmax()]
Output:
first second
bar 2 0.482328
baz 1 0.244788
foo 2 1.310233
qux 2 0.297813
dtype: float64
Using sort_values + tail
s.sort_values().groupby(level=0).tail(1)
Out[33]:
first second
bar 2 -1.806466
baz 2 -0.776890
foo 1 -0.641193
qux 2 -0.455319
dtype: float64
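Note that both answers above keep the row with the maximum value within each group, whereas the expected output keeps the row whose second label is largest (qux keeps 3, even though qux 2 has the larger value). If the latter is what you want, a minimal sketch: sort by the index so the largest second label comes last within each first group, then take each group's last row.
s.sort_index().groupby(level=0).tail(1)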
I have the following pandas dataframe:
data = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                  'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                  'C': [2, 1, 2, 1, 2, 1, 2, 1]})
that looks like:
A B C
0 foo one 2
1 bar one 1
2 foo two 2
3 bar three 1
4 foo two 2
5 bar two 1
6 foo one 2
7 foo three 1
What I need is to compute the mean of each unique combination of A and B, i.e.:
A B C
foo one 2
foo two 2
foo three 1
mean = 1.66666667
and then have as output the means computed per value of A, i.e.:
foo 1.666667
bar 1
I tried:
data.groupby(['A'], sort=False, as_index=False).mean()
but it returns me:
foo 1.8
bar 1
Is there a way to compute the mean of only the unique combinations? How?
This is essentially the same as #S_A's answer, but a bit more concise.
You can calculate the means across A and B with:
In [41]: data.groupby(['A', 'B']).mean()
Out[41]:
C
A B
bar one 1
three 1
two 1
foo one 2
three 1
two 2
And then calculate the mean of these over A with:
In [42]: data.groupby(['A', 'B']).mean().groupby(level='A').mean()
Out[42]:
C
A
bar 1.000000
foo 1.666667
Yes, here is a solution that does what you want. First, group by columns A and B to collapse the data to the unique A/B combinations. Then, from that grouped result, take the mean per value of column A.
You can do this like:
import pandas as pd

data = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                     'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                     'C': [2.0, 1, 2, 1, 2, 1, 2, 1]})
data = data.groupby(['A', 'B'], sort=False, as_index=False).mean()
# select C so the string column B doesn't break mean() in newer pandas
print(data.groupby('A', sort=False, as_index=False)['C'].mean())
Output:
A C
0 foo 1.666667
1 bar 1.000000
When you do data.groupby(['A'], sort=False, as_index=False).mean(), it averages every value in column C for each value of column A. That's why it returns
foo 1.8 (9/5)
bar 1.0 (3/3)
I think you should find your answer :) :)
This worked for me (it relies on repeated A/B combinations carrying identical C values, so drop_duplicates collapses each combination to a single row):
test = data.drop_duplicates()
# numeric_only=True avoids trying to average the string column B in newer pandas
test = test.groupby(['A']).mean(numeric_only=True)
Output:
C
A
bar 1.000000
foo 1.666667