Consider this dataframe:
import pandas as pd
import numpy as np
iterables = [['bar', 'baz', 'foo'], ['one', 'two']]
index = pd.MultiIndex.from_product(iterables, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(3, 6), index=['A', 'B', 'C'], columns=index)
print(df)
first bar baz foo
second one two one two one two
A -1.954583 -1.347156 -1.117026 -1.253150 0.057197 -1.520180
B 0.253937 1.267758 -0.805287 0.337042 0.650892 -0.379811
C 0.354798 -0.835234 1.172324 -0.663353 1.145299 0.651343
I would like to drop 'one' from each column group, while retaining the rest of the structure.
The end result would look something like this:
first bar baz foo
second two two two
A -1.347156 -1.253150 -1.520180
B 1.267758 0.337042 -0.379811
C -0.835234 -0.663353 0.651343
Use drop:
df.drop('one', axis=1, level=1)
first bar baz foo
second two two two
A 0.127419 -0.319655 -0.878161
B -0.563335 1.193819 -0.469539
C -1.324932 -0.550495 1.378335
This should work as well:
df.loc[:, df.columns.get_level_values(1) != 'one']
Try:
print(df.loc[:, (slice(None), "two")])
Prints:
first bar baz foo
second two two two
A -1.104831 0.286379 1.121148
B -1.637677 -2.297138 0.381137
C -1.556391 0.779042 2.316628
Use pd.IndexSlice:
indx = pd.IndexSlice
df.loc[:, indx[:, 'two']]
Output:
first bar baz foo
second two two two
A 1.169699 1.434761 0.917152
B -0.732991 -0.086613 -0.803092
C -0.813872 -0.706504 0.227000
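Another option, offered here only as a hedged sketch (it is not one of the answers above), is a cross-section with DataFrame.xs, keeping the column level via drop_level=False:
# Select the 'two' slice of the second column level while keeping
# the two-level column structure intact.
df.xs('two', axis=1, level=1, drop_level=False)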
In pandas I have a function that takes the value of one column and returns results for two columns. Now I also want to restrict the rows that are used / that the results are written to. I know I can write to multiple columns like this:
df = pd.DataFrame({'a': ['foo bar', 'bar foo', 'foo foo', 'bar baz'],
                   'b': [''] * 4,
                   'c': [''] * 4})

def func(text):
    return pd.Series(text.split(' ', maxsplit=1))

df[['b', 'c']] = df.a.apply(func)
If I had a function that returns only to a single column this works:
df.loc[:2, 'b'] = df.loc[:2, 'a'].apply(lambda x: x+' baz')
However, if I combine both approaches, only NaN is produced in the respective cells, as you can see from this piece of code:
df.loc[:2, ['b', 'c']] = df.loc[:2, 'a'].apply(func)
You need to add to_numpy() to strip off the index and column labels. Both DataFrame and Series assignments are alignment-sensitive, so if the index or columns don't match, the output will be NaN:
df.loc[:2, ['b', 'c']] = df.loc[:2, 'a'].apply(func).to_numpy()
df
Out[227]:
a b c
0 foo bar foo bar
1 bar foo bar foo
2 foo foo foo foo
3 bar baz
Check your output
df.loc[:2, 'a'].apply(func)
Out[228]:
0 1
0 foo bar
1 bar foo
2 foo foo
The columns are [0, 1], not ['b', 'c'], which makes the aligned assignment fail and fill the cells with NaN.
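A hedged alternative to to_numpy(), sketched under the same setup: rename the intermediate result's columns so that the assignment can align on both index and columns.
# Give the applied result the destination column names so that
# label alignment succeeds instead of producing NaN.
result = df.loc[:2, 'a'].apply(func)
result.columns = ['b', 'c']
df.loc[:2, ['b', 'c']] = result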
I've got the following multiindex dataframe:
first bar baz foo
second one two one two one two
first second
bar one NaN -0.056213 0.988634 0.103149 1.5858 -0.101334
two -0.47464 -0.010561 2.679586 -0.080154 <LQ -0.422063
baz one <LQ 0.220080 1.495349 0.302883 -0.205234 0.781887
two 0.638597 0.276678 -0.408217 -0.083598 -1.15187 -1.724097
foo one 0.275549 -1.088070 0.259929 -0.782472 -1.1825 -1.346999
two 0.857858 0.783795 -0.655590 -1.969776 -0.964557 -0.220568
I would like to extract the max along one level. Expected result:
first bar baz foo
second
one 0.275549 1.495349 1.5858
two 0.857858 2.679586 -0.964557
Here is what I tried:
df.xs('one', level=1, axis=1).max(axis=0, level=1, skipna=True, numeric_only=False)
And the obtained result:
first baz
second
one 1.495349
two 2.679586
How do I get Pandas to not ignore the whole column if one cell contains a string?
(created like this:)
arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
df = pd.DataFrame(np.random.randn(6, 6), index=index[:6], columns=index[:6])
df.loc[('bar', 'one'), ('bar', 'one')] = np.nan
df.loc[('baz', 'one'), ('bar', 'one')] = '<LQ'
df.loc[('bar', 'two'), ('foo', 'one')] = '<LQ'
I guess you would need to replace the non-numeric values with NaN first:
(df.xs('one', level=1, axis=1)
   .apply(pd.to_numeric, errors='coerce')
   .max(level=1, skipna=True)
)
Output (with np.random.seed(1)):
first bar baz foo
second
one 0.900856 1.133769 0.865408
two 1.744812 0.319039 0.901591
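A hedged note for newer pandas: the level argument of aggregations like max was deprecated around 1.3 and later removed, so under that assumption the same idea can be written with groupby instead:
# Equivalent sketch for recent pandas versions: group the rows by the
# second index level and take the max of the coerced numeric values.
(df.xs('one', level=1, axis=1)
   .apply(pd.to_numeric, errors='coerce')
   .groupby(level=1)
   .max()
)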
I have a pandas dataframe that looks like this:
import pandas as pd
import numpy as np
arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
          np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])]
df = pd.DataFrame(np.random.randn(8, 4), index=arrays, columns=['A', 'B', 'C', 'D'])
I want to add a column E such that df.loc[(slice(None),'one'),'E'] = 1 and df.loc[(slice(None),'two'),'E'] = 2, and I want to do this without iterating over ['one', 'two']. I tried the following:
df.loc[(slice(None),slice('one','two')),'E'] = pd.Series([1,2],index=['one','two'])
but it just adds a column E with NaN. What's the right way to do this?
Here is one way, using reindex:
df.loc[:, 'E'] = pd.Series([1, 2], index=['one', 'two']).reindex(df.index.get_level_values(1)).values
df
A B C D E
bar one -0.856175 -0.383711 -0.646510 0.110204 1
two 1.640114 0.099713 0.406629 0.774960 2
baz one 0.097198 -0.814920 0.234416 -0.057340 1
two -0.155276 0.788130 0.761469 0.770709 2
foo one 1.593564 -1.048519 -1.194868 0.191314 1
two -0.755624 0.678036 -0.899805 1.070639 2
qux one -0.560672 0.317915 -0.858048 0.418655 1
two 1.198208 0.662354 -1.353606 -0.184258 2
Methinks this is a good use case for Index.map:
df['E'] = df.index.get_level_values(1).map({'one':1, 'two':2})
df
A B C D E
bar one 0.956122 -0.705841 1.192686 -0.237942 1
two 1.155288 0.438166 1.122328 -0.997020 2
baz one -0.106794 1.451429 -0.618037 -2.037201 1
two -1.942589 -2.506441 -2.114164 -0.411639 2
foo one 1.278528 -0.442229 0.323527 -0.109991 1
two 0.008549 -0.168199 -0.174180 0.461164 2
qux one -1.175983 1.010127 0.920018 -0.195057 1
two 0.805393 -0.701344 -0.537223 0.156264 2
You can just get it from df.index.labels:
df['E'] = df.index.labels[1] + 1
print(df)
Output:
A B C D E
bar one 0.746123 1.264906 0.169694 -0.180074 1
two -1.439730 -0.100075 0.929750 0.511201 2
baz one 0.833037 1.547624 -1.116807 0.425093 1
two 0.969887 -0.705240 -2.100482 0.728977 2
foo one -0.977623 -0.800136 -0.361394 0.396451 1
two 1.158378 -1.892137 -0.987366 -0.081511 2
qux one 0.155531 0.275015 0.571397 -0.663358 1
two 0.710313 -0.255876 0.420092 -0.116537 2
Thanks to coldspeed: if you want different values (e.g. 'x' and 'y'), use:
df['E'] = pd.Series(df.index.labels[1]).map({0: 'x', 1: 'y'}).tolist()
print(df)
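A hedged note for newer pandas: MultiIndex.labels was renamed to MultiIndex.codes (and labels was later removed), so the same approach would be written as:
# codes[1] holds the integer positions into the second level ['one', 'two'],
# i.e. 0 for 'one' and 1 for 'two'.
df['E'] = df.index.codes[1] + 1
# and for arbitrary values:
df['E'] = pd.Series(df.index.codes[1]).map({0: 'x', 1: 'y'}).tolist()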
I'm trying to join two dataframes: one with MultiIndex columns and the other with a single column name. They share the same index.
I get the following warning:
"UserWarning: merging between different levels can give an unintended result (3 levels on the left, 1 on the right)"
For example:
import pandas as pd
import numpy as np
arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
tuples = list(zip(*arrays))
index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
np.random.seed(2022) # so the data is the same each time
df = pd.DataFrame(np.random.randn(3, 8), index=['A', 'B', 'C'], columns=index)
df2 = pd.DataFrame(np.random.randn(3), index=['A', 'B', 'C'],columns=['w'])
df3 = df.join(df2)
DataFrame Views
df
first bar baz foo qux
second one two one two one two one two
A -0.000528 -0.274901 -0.139286 1.984686 0.282109 0.760809 0.300982 0.540297
B 0.373497 0.377813 -0.090213 -2.305943 1.142760 -1.535654 -0.863752 1.016545
C 1.033964 -0.824492 0.018905 -0.383344 -0.304185 0.997292 -0.127274 -1.475886
df2
w
A -1.940906
B 0.833649
C -0.567218
df3 - Result
(bar, one) (bar, two) (baz, one) (baz, two) (foo, one) (foo, two) (qux, one) (qux, two) w
A -0.000528 -0.274901 -0.139286 1.984686 0.282109 0.760809 0.300982 0.540297 -1.940906
B 0.373497 0.377813 -0.090213 -2.305943 1.142760 -1.535654 -0.863752 1.016545 0.833649
C 1.033964 -0.824492 0.018905 -0.383344 -0.304185 0.997292 -0.127274 -1.475886 -0.567218
As of pandas v1.3.0, df.join(df2) results in a FutureWarning:
FutureWarning: merging between different levels is deprecated and will be removed in a future version. (2 levels on the left, 1 on the right) df3 = df.join(df2).
What is the best way to join these two dataframes?
It depends on what you want! Do you want the column from df2 to be aligned with the first or the second level of columns from df?
You have to add a level to the columns of df2.
Super cheezy with pd.concat
df.join(pd.concat([df2], axis=1, keys=['a']))
Better way
df2.columns = pd.MultiIndex.from_product([['a'], df2.columns])
df.join(df2)
I think the simplest way is to convert df2's columns to a MultiIndex, and then use concat or join:
df2.columns = pd.MultiIndex.from_tuples([('a','w')])
print (df2)
a
w
A -1.940906
B 0.833649
C -0.567218
Or:
df2.columns = [['a'], df2.columns]
print (df2)
a
w
A -1.940906
B 0.833649
C -0.567218
df3 = pd.concat([df, df2], axis=1)
Or:
df3 = df.join(df2)
Result:
print (df3)
first bar baz foo qux a
second one two one two one two one two w
A -0.000528 -0.274901 -0.139286 1.984686 0.282109 0.760809 0.300982 0.540297 -1.940906
B 0.373497 0.377813 -0.090213 -2.305943 1.142760 -1.535654 -0.863752 1.016545 0.833649
C 1.033964 -0.824492 0.018905 -0.383344 -0.304185 0.997292 -0.127274 -1.475886 -0.567218
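A hedged variant of the approaches above: when building the extra column level with from_product, the level names can be set to match df's column level names ('first' and 'second' in this example), so the joined result keeps consistent level names.
# Match df's column level names so the joined columns share one naming scheme.
df2.columns = pd.MultiIndex.from_product([['a'], ['w']],
                                         names=['first', 'second'])
df3 = df.join(df2)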
Additional Resources
pandas docs: Joining a single Index to a MultiIndex
pandas docs: Joining with two MultiIndexes
Say we have the following dataframe:
import pandas as pd
from numpy.random import randn

df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
                          'foo', 'bar', 'foo', 'foo'],
                   'B' : ['one', 'one', 'two', 'three',
                          'two', 'two', 'one', 'three'],
                   'C' : randn(8), 'D' : randn(8)})
shown below:
> df
A B C D
0 foo one 0.846192 0.478651
1 bar one 2.352421 0.141416
2 foo two -1.413699 -0.577435
3 bar three 0.569572 -0.508984
4 foo two -1.384092 0.659098
5 bar two 0.845167 -0.381740
6 foo one 3.355336 -0.791471
7 foo three 0.303303 0.452966
And then I do the following:
df2 = df
df = df[df['C']>0]
If you now look at df and df2 you will see that df2 holds the original data, whereas df was updated to only keep the values where C was greater than 0.
I thought Pandas wasn't supposed to make a copy in an assignment like df2 = df and that it would only make copies with either:
df2 = df.copy(deep=True)
df2 = copy.deepcopy(df)
What happened above then? Did df2 = df make a copy? I presume the answer is no, so it must have been df = df[df['C']>0] that made the copy. And I presume that, if I didn't have df2 = df above, the copy would still have been made, leaving the original data in memory without any reference to it. Is that correct?
Note: I read through Returning a view versus a copy and I wonder if the following:
Whenever an array of labels or a boolean vector are involved in the indexing operation, the result will be a copy.
explains this behavior.
It's not df2 = df that makes a copy; it's df = df[df['C'] > 0] that returns a copy.
Just print out the ids and you'll see:
print(id(df))
df2 = df
print(id(df2))
df = df[df['C'] > 0]
print(id(df))
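A hedged illustration of the same point with identity checks, using the example frame from the question:
# After the plain assignment, both names refer to the same object,
# so no copy has been made yet.
df2 = df
assert df2 is df

# Boolean indexing builds a new DataFrame; rebinding df to it leaves
# df2 still referring to the original data.
df = df[df['C'] > 0]
assert df is not df2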