use column name as condition for where on pandas DataFrame - python

Say I have the following DataFrame:
import numpy as np
import pandas as pd

arrays = [['foo', 'foo', 'bar', 'bar'],
          ['A', 'B', 'C', 'D']]
tuples = list(zip(*arrays))
columnValues = pd.MultiIndex.from_tuples(tuples)
df = pd.DataFrame(np.random.rand(4, 4), columns=columnValues)
print(df)
        foo                 bar
          A         B         C         D
0  0.037362  0.470010  0.315396  0.333798
1  0.339038  0.396307  0.487242  0.064883
2  0.691654  0.793609  0.044490  0.384154
3  0.605801  0.967021  0.156839  0.123816
I want to produce the following output:
  foo         bar
    A  B        C         D
0   0  0  0.315396  0.333798
1   0  0  0.487242  0.064883
2   0  0  0.044490  0.384154
3   0  0  0.156839  0.123816
I think I can use pd.DataFrame.where() for this; however, I don't see how to pass the column name bar as a condition.
EDIT: I'm looking for a way to specifically use bar rather than foo to produce the desired outcome, as foo would actually stand for many columns.
EDIT2: Unfortunately the list comprehension breaks if the list contains all of the column labels. Explicitly writing out the for loop does work, though.
So instead of this:
df.loc[:, [col for col in df.columns.levels[0] if col != 'bar']] = 0
I use this:
for col in df.columns.levels[0]:
    if col not in nameList:
        df.loc[:, col] = 0
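Alternatively, a boolean mask built from get_level_values sidesteps the list-comprehension edge case entirely. A minimal sketch, assuming nameList holds the top-level labels you want to leave untouched (df is the frame from above):
mask = ~df.columns.get_level_values(0).isin(nameList)  # True for every column to zero out
df.loc[:, mask] = 0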

Use slicing to set your data. Here, you can access the sub-columns A and B under foo.
In [12]: df
Out[12]:
        foo                 bar
          A         B         C         D
0  0.040251  0.119267  0.170111  0.582362
1  0.978192  0.592043  0.515702  0.630627
2  0.762532  0.667234  0.450505  0.103858
3  0.871375  0.397503  0.966837  0.870184
In [13]: df.loc[:, 'foo'] = 0
In [14]: df
Out[14]:
  foo         bar
    A  B        C         D
0   0  0  0.170111  0.582362
1   0  0  0.515702  0.630627
2   0  0  0.450505  0.103858
3   0  0  0.966837  0.870184
If you want to set all columns except bar, you could do:
In [15]: df.loc[:, [col for col in df.columns.levels[0] if col != 'bar']] = 0

You could use get_level_values, I guess:
>>> df
        foo                 bar
          A         B         C         D
0  0.039728  0.065875  0.825380  0.240403
1  0.617857  0.895751  0.484237  0.506315
2  0.332381  0.047287  0.011291  0.346073
3  0.216224  0.024978  0.834353  0.500970
>>> df.loc[:, df.columns.get_level_values(0) != "bar"] = 0
>>> df
  foo         bar
    A  B        C         D
0   0  0  0.825380  0.240403
1   0  0  0.484237  0.506315
2   0  0  0.011291  0.346073
3   0  0  0.834353  0.500970
df.columns.droplevel(1) != "bar" should also work, although I don't like it as much, even though it's shorter, because it inverts the selection logic.

Easier, without loc:
df['foo'] = 0

If you happen not to have this MultiIndex, you can use .loc (the older .ix indexer is deprecated and has been removed from recent pandas):
df.loc[:, ['A', 'B']] = 0
This automatically replaces the values in your columns 'A' and 'B' with 0.

Related

How to delete "heading above headings" in pandas dataframe [duplicate]

If I've got a multi-level column index:
>>> cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
>>> pd.DataFrame([[1, 2], [3, 4]], columns=cols)
   a
   b  c
0  1  2
1  3  4
How can I drop the "a" level of that index, so I end up with:
   b  c
0  1  2
1  3  4
You can use MultiIndex.droplevel:
>>> cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
>>> df = pd.DataFrame([[1,2], [3,4]], columns=cols)
>>> df
   a
   b  c
0  1  2
1  3  4

[2 rows x 2 columns]
>>> df.columns = df.columns.droplevel()
>>> df
   b  c
0  1  2
1  3  4

[2 rows x 2 columns]
As of Pandas 0.24.0, we can now use DataFrame.droplevel():
cols = pd.MultiIndex.from_tuples([("a", "b"), ("a", "c")])
df = pd.DataFrame([[1,2], [3,4]], columns=cols)
df.droplevel(0, axis=1)
#    b  c
# 0  1  2
# 1  3  4
This is very useful if you want to keep your DataFrame method-chain rolling.
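For instance, a hypothetical aggregation chain (the frame, column, and function names here are made up for illustration) where agg produces a MultiIndex that droplevel flattens without leaving the chain:
out = (pd.DataFrame({'g': ['x', 'x', 'y'], 'v': [1, 2, 3]})
         .groupby('g')
         .agg({'v': ['sum', 'mean']})  # columns become ('v', 'sum'), ('v', 'mean')
         .droplevel(0, axis=1))        # flattened to 'sum', 'mean', chain intact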
Another way to drop the level is to use a list comprehension:
df.columns = [col[1] for col in df.columns]
   b  c
0  1  2
1  3  4
This strategy is also useful if you want to combine the names from both levels like in the example below where the bottom level contains two 'y's:
cols = pd.MultiIndex.from_tuples([("A", "x"), ("A", "y"), ("B", "y")])
df = pd.DataFrame([[1, 2, 8], [3, 4, 9]], columns=cols)
   A     B
   x  y  y
0  1  2  8
1  3  4  9
Dropping the top level would leave two columns with the index 'y'. That can be avoided by joining the names with the list comprehension.
df.columns = ['_'.join(col) for col in df.columns]
   A_x  A_y  B_y
0    1    2    8
1    3    4    9
This was a problem I ran into after a groupby, and it took a while to find this other question that solved it. I adapted that solution to the specific case here.
Another way to do this is to reassign df based on a cross section of df, using the .xs method.
>>> df
   a
   b  c
0  1  2
1  3  4
>>> df = df.xs('a', axis=1, drop_level=True)
# 'a'             : key on which to take the cross section
# axis=1          : take the cross section of the columns
# drop_level=True : return the cross section without the outer level
>>> df
   b  c
0  1  2
1  3  4
A small trick using sum with level=1 (this works when the level-1 labels are all unique):
df.sum(level=1, axis=1)
Out[202]:
   b  c
0  1  2
1  3  4
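Note that sum(level=...) has since been deprecated; the equivalent groupby spelling (assuming a pandas version where axis=1 grouping is still supported) is:
df.groupby(level=1, axis=1).sum()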
A more common solution is get_level_values:
df.columns = df.columns.get_level_values(1)
df
Out[206]:
   b  c
0  1  2
1  3  4
You could also achieve that by renaming the columns:
df.columns = ['a', 'b']
This involves a manual step, but it could be an option, especially if you would eventually rename your data frame columns anyway.
I struggled with this problem because I couldn't figure out why droplevel() didn't work for me. Working through several examples, I learned that in my table 'a' was the name of the columns rather than a level, and 'b', 'c' were the index. Doing this helped:
df.columns.name = None
df.reset_index()  # turn the index back into a regular column
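To illustrate that situation, a minimal sketch where 'a' is only the name of a flat column Index, so there is no level for droplevel() to remove:
df = pd.DataFrame({'b': [1, 3], 'c': [2, 4]})
df.columns.name = 'a'   # 'a' now shows up in the header row, but it is not a MultiIndex level
df.columns.name = None  # clearing the name removes it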

Mapping string list to pandas column?

Just curious: is there a better way to map a pandas column against a list?
import pandas as pd
ref_list = ['a', 'b', 'c', 'd']
lst = [0, 2, 1]
df = pd.DataFrame(lst, columns=['no'])
Expected output:
   no map
0   0   a
1   2   c
2   1   b
Use map with an enumerated dictionary:
df['map_'] = df['no'].map(dict(enumerate(ref_list)))
# or, with NumPy: df['map_'] = np.array(ref_list)[lst]
print(df)
   no map_
0   0    a
1   2    c
2   1    b
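For reference, dict(enumerate(ref_list)) is simply the position-to-label lookup table that map consumes:
dict(enumerate(['a', 'b', 'c', 'd']))
# {0: 'a', 1: 'b', 2: 'c', 3: 'd'}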
import numpy as np
df = pd.DataFrame(zip(lst, np.array(ref_list)[lst]), columns=["no", "map"])
print(df)
Prints:
   no map
0   0   a
1   2   c
2   1   b

Most efficient way to return Column name in a pandas df

I have a pandas df that contains 4 different columns. In every row there's one value of importance, and I want to return the name of the column in which that value appears. So for the df below, I want to return the column name wherever the value 2 is found.
d = {'A': [2, 0, 0, 2],
     'B': [0, 0, 2, 0],
     'C': [0, 2, 0, 0],
     'D': [0, 0, 0, 0]}
df = pd.DataFrame(data=d)
Output:
   A  B  C  D
0  2  0  0  0
1  0  0  2  0
2  0  2  0  0
3  2  0  0  0
So it would be A, C, B, A.
I'm doing this via
m = (df == 2).idxmax(axis=1)[0]
and then changing the row, but this isn't very efficient. I'm also hoping to produce the output as a Series from the pandas df.
Use DataFrame.dot:
df.astype(bool).dot(df.columns).str.cat(sep=',')
Or,
','.join(df.astype(bool).dot(df.columns))
'A,C,B,A'
Or, as a list:
df.astype(bool).dot(df.columns).tolist()
['A', 'C', 'B', 'A']
...or a Series:
df.astype(bool).dot(df.columns)
0    A
1    C
2    B
3    A
dtype: object
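As an aside (not from the original answer): if every row is guaranteed to contain the marker value exactly once, the vectorized idxmax from the question also yields the Series directly, without the per-row indexing. Note it would mislabel a row containing no 2 at all, since idxmax then returns the first column:
df.eq(2).idxmax(axis=1)
# 0    A
# 1    C
# 2    B
# 3    A
# dtype: object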

Pandas: set the value of a column in a row to be the value stored in a different df at the index of its other rows

>>> df
   0  1
0  0  0
1  1  1
2  2  1
>>> df1
   0  1  2
0  A  B  C
1  D  E  F
>>> crazy_magic()
>>> df
   0  1  3
0  0  0  A  # df1[0][0]
1  1  1  E  # df1[1][1]
2  2  1  F  # df1[2][1]
Is there a way to achieve this without a for loop?
import pandas as pd

df = pd.DataFrame([[0, 0], [1, 1], [2, 1]])
df1 = pd.DataFrame([['A', 'B', 'C'], ['D', 'E', 'F']])

df2 = df1.reset_index(drop=False)
#    index  0  1  2
# 0      0  A  B  C
# 1      1  D  E  F

df3 = pd.melt(df2, id_vars=['index'])
#    index variable value
# 0      0        0     A
# 1      1        0     D
# 2      0        1     B
# 3      1        1     E
# 4      0        2     C
# 5      1        2     F

result = pd.merge(df, df3, left_on=[0, 1], right_on=['variable', 'index'])
result = result[[0, 1, 'value']]
print(result)
yields
   0  1 value
0  0  0     A
1  1  1     E
2  2  1     F
My reasoning goes as follows: we want to use two columns of df as coordinates. The word "coordinates" reminds me of pivot, since if you have two columns whose values represent coordinates and a third column representing values, and you want to convert that to a grid, then pivot is the tool to use.
But df does not have a third column of values. The values are in df1. In fact, df1 looks like the result of a pivot operation. So instead of pivoting df, we want to unpivot df1, and pd.melt is the function to use when you want to unpivot.
So I tried melting df1. Comparison with other uses of pd.melt led me to conclude that df1 needed its index as a column; that's the reason for defining df2. So we melt df2. Once you get that far, visually comparing df3 to df leads naturally to pd.merge.
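A shorter alternative, not part of the original answer and assuming the values in df really are valid integer positions into df1: NumPy fancy indexing does the whole lookup in one step.
# column 1 of df supplies the row positions, column 0 the column positions
df['value'] = df1.to_numpy()[df[1], df[0]]
#    0  1 value
# 0  0  0     A
# 1  1  1     E
# 2  2  1     F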

Assign to selection in pandas

I have a pandas dataframe and I want to create a new column, that is computed differently for different groups of rows. Here is a quick example:
import pandas as pd
data = {'foo': list('aaade'), 'bar': range(5)}
df = pd.DataFrame(data)
The dataframe looks like this:
   bar foo
0    0   a
1    1   a
2    2   a
3    3   d
4    4   e
Now I am adding a new column and try to assign some values to selected rows:
df['xyz'] = 0
df.loc[(df['foo'] == 'a'), 'xyz'] = df.loc[(df['foo'] == 'a')].apply(lambda x: x['bar'] * 2, axis=1)
The dataframe has not changed. What I would expect is the dataframe to look like this:
   bar foo  xyz
0    0   a    0
1    1   a    2
2    2   a    4
3    3   d    0
4    4   e    0
In my real-world problem, the 'xyz' column is also computed for the other rows, but using a different function. In fact, I am also using different columns for the computation. So my questions:
Why does the assignment in the above example not work?
Is it necessary to write df.loc[df['foo'] == 'a'] twice (as I am doing now)?
You're changing a copy of df (a boolean mask of the DataFrame is a copy, see docs).
Another way to achieve the desired result is as follows:
In [11]: df.apply(lambda row: (row['bar'] * 2 if row['foo'] == 'a' else row['xyz']), axis=1)
Out[11]:
0    0
1    2
2    4
3    0
4    0
dtype: int64
In [12]: df['xyz'] = df.apply(lambda row: (row['bar'] * 2 if row['foo'] == 'a' else row['xyz']), axis=1)
In [13]: df
Out[13]:
   bar foo  xyz
0    0   a    0
1    1   a    2
2    2   a    4
3    3   d    0
4    4   e    0
Perhaps a neater way is just:
In [21]: 2 * df.bar * (df.foo == 'a')
Out[21]:
0    0
1    2
2    4
3    0
4    0
dtype: int64
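When each group of rows gets its own formula, as in the real-world problem described above, np.where (or np.select for more than two groups) keeps the whole computation vectorized. A sketch, assuming the example frame from the question:
import numpy as np

df['xyz'] = np.where(df['foo'] == 'a', df['bar'] * 2, 0)  # bar*2 for group 'a', 0 otherwise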
