Pandas multiindex boolean indexing - python

Given a multi-indexed dataframe, I would like to return only the rows of those outer-level groups in which every row satisfies a condition. Here is a small working example:
df = pd.DataFrame({'a': [1, 1, 2, 2], 'b': [1, 2, 3, 4], 'c': [0, 2, 2, 2]})
df = df.set_index(['a', 'b'])
print(df)
out:
     c
a b
1 1  0
  2  2
2 3  2
  4  2
Now, I would like to return the entries for which c > 1. For instance, I would like to do something like
df[df['c'] > 1]
out:
     c
a b
1 2  2
2 3  2
  4  2
But I want to get
out:
     c
a b
2 3  2
  4  2
Any thoughts on how to do this in the most efficient way?

I ended up using groupby:
df.groupby(level=0).filter(lambda x: all(v > 1 for v in x['c']))
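For reference, the Python-level loop can be replaced by a vectorized check inside the filter; a minimal equivalent sketch:

# Keep only the level-0 groups in which every 'c' value exceeds 1.
df.groupby(level=0).filter(lambda g: (g['c'] > 1).all())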

Related

How to get all unique combinations of values in one column that are in another column

Starting with a dataframe like this:
df = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': ['a', 'b', 'b', 'b', 'a']})
   A  B
0  1  a
1  2  b
2  3  b
3  4  b
4  5  a
What is the best way of getting to a dataframe like this?
pd.DataFrame({'source': [1, 2, 2, 3], 'target': [5, 3, 4, 4]})
   source  target
0       1       5
1       2       3
2       2       4
3       3       4
Whenever two rows in column A share the same value in column B, I want to record that pair of A values (each unique pair once) in a new dataframe.
This is pretty close:
df.groupby('B')['A'].unique()
B
a       [1, 5]
b    [2, 3, 4]
Name: A, dtype: object
But I'd ideally convert it into a single dataframe now and my brain has gone kaput.
In your case, you can use itertools.combinations:
import itertools
# Within each B group, collect all unordered pairs of A values; set() drops duplicates.
s = df.groupby('B')['A'].apply(lambda x: set(itertools.combinations(x, 2))).explode().tolist()
out = pd.DataFrame(s, columns=['source', 'target'])
out
Out[312]:
   source  target
0       1       5
1       3       4
2       2       3
3       2       4
Use a self-merge on B:
df.merge(df, how = "outer", on = ["B"]).query("A_x < A_y")
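To match the asked-for column names, the merge result can be tidied up afterwards; a sketch (the renaming step is my addition):

out = (df.merge(df, how='outer', on='B')      # pair up rows that share a B value
         .query('A_x < A_y')                  # keep each unordered pair once, drop self-pairs
         .rename(columns={'A_x': 'source', 'A_y': 'target'})
         [['source', 'target']]
         .reset_index(drop=True))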

Pandas: MultiIndex from Nested Dictionary

Suppose I have a nested dictionary of the format:
dictionary = {
    "A": [1, 2],
    "B": [2, 3],
    "Coords": [{
        "X": [1, 2, 3],
        "Y": [1, 2, 3],
        "Z": [1, 2, 3],
    }, {
        "X": [2, 3],
        "Y": [2, 3],
        "Z": [2, 3],
    }]
}
How can I turn this into a pandas MultiIndex DataFrame?
Equivalently, how can I produce a DataFrame where the information in the row is not duplicated for every coordinate?
In what I imagine, the two rows of output DataFrame should appear as follows:
Index  A  B  Coords
---------------------
0      1  2  X Y Z
             1 1 1
             2 2 2
             3 3 3
---------------------
1      2  3  X Y Z
             2 2 2
             3 3 3
---------------------
From your dictionary:
>>> import pandas as pd
>>> df = pd.DataFrame.from_dict(dictionary)
>>> df
   A  B                                             Coords
0  1  2  {'X': [1, 2, 3], 'Y': [1, 2, 3], 'Z': [1, 2, 3]}
1  2  3           {'X': [2, 3], 'Y': [2, 3], 'Z': [2, 3]}
Then we can use pd.Series to expand the dicts in the Coords column into their own columns, like so:
df_concat = pd.concat([df.drop(['Coords'], axis=1), df['Coords'].apply(pd.Series)], axis=1)
>>> df_concat
   A  B          X          Y          Z
0  1  2  [1, 2, 3]  [1, 2, 3]  [1, 2, 3]
1  2  3     [2, 3]     [2, 3]     [2, 3]
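As a side note (my suggestion, not part of the original answer), apply(pd.Series) can be slow on large frames; building the new columns directly from the list of dicts is equivalent and usually faster:

# Equivalent to df['Coords'].apply(pd.Series): one column per dict key.
coords = pd.DataFrame(df['Coords'].tolist(), index=df.index)
df_concat = pd.concat([df.drop(columns='Coords'), coords], axis=1)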
To finish, we use the explode method to turn the lists into rows, and set the index to the original row index plus columns A and B to get the expected result:
>>> df_concat.explode(['X', 'Y', 'Z']).reset_index().set_index(['index', 'A', 'B'])
           X  Y  Z
index A B
0     1 2  1  1  1
           2  2  2
           3  3  3
1     2 3  2  2  2
           3  3  3
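With the MultiIndex in place, the coordinates of a single record can then be pulled out with loc; a small usage sketch, assuming the result above is stored in a variable:

result = df_concat.explode(['X', 'Y', 'Z']).reset_index().set_index(['index', 'A', 'B'])
# All coordinate rows of the first record (index=0, A=1, B=2):
print(result.loc[(0, 1, 2)])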
UPDATE:
If you are using a version of pandas older than 1.3.0 (where explode only accepts a single column), we can use the trick given by @MillerMrosek in this answer:
def explode(df, columns):
    # Zip the lists in the target columns row-wise into tuples...
    df['tmp'] = df.apply(lambda row: list(zip(*[row[_clm] for _clm in columns])), axis=1)
    # ...explode the tuples into separate rows...
    df = df.explode('tmp')
    # ...then unpack each tuple back into the original columns.
    df[columns] = pd.DataFrame(df['tmp'].tolist(), index=df.index)
    df.drop(columns='tmp', inplace=True)
    return df
explode(df_concat, ["X", "Y", "Z"]).reset_index().set_index(['index', 'A', 'B'])
Output:
           X  Y  Z
index A B
0     1 2  1  1  1
           2  2  2
           3  3  3
1     2 3  2  2  2
           3  3  3

Remove duplicate columns in pandas

I am trying to delete columns with duplicate data in pandas. For example, in the following data, columns one and three hold the same data under different names:
df1 = pd.DataFrame({'one': [1, 2, 3, 4], 'two': ['a', 'b', 'c', 'd'], 'three': [1, 2, 3, 4]})
   one two  three
0    1   a      1
1    2   b      2
2    3   c      3
3    4   d      4
I hope to get this result:
   one two
0    1   a
1    2   b
2    3   c
3    4   d
The method I use now is:
df2 = df1.T.drop_duplicates().T
But this is too inefficient; is there a better way?
Hope to get your help, thanks.
I tried to improve efficiency a little, like this:
In [935]: df_int = df1.select_dtypes(include=['int'])
In [933]: df_other = df1.select_dtypes(exclude=['int'])
In [949]: if df_int.T.drop_duplicates().shape[0] == 1:
     ...:     res = pd.concat([df_int.iloc[:, 0], df_other], axis=1)
     ...:
In [950]: res
Out[950]:
   one two
0    1   a
1    2   b
2    3   c
3    4   d
To avoid the transpose completely, you can do something like this:
In [995]: import numpy as np
In [997]: if (pd.DataFrame(np.diff(df_int.values)).sum() == 0).all():
     ...:     res = pd.concat([df_int.iloc[:, 0], df_other], axis=1)
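A more general, transpose-free sketch (my addition, not from the answer above): hash each column, then keep only the first of each group of identical columns, confirming real equality to guard against hash collisions:

from collections import defaultdict
import pandas as pd

def drop_duplicate_columns(df):
    seen = defaultdict(list)   # column hash -> columns already kept
    keep = []
    for col in df.columns:
        h = pd.util.hash_pandas_object(df[col], index=False).values.tobytes()
        # Only skip the column if it is genuinely equal to one we kept.
        if any(df[col].equals(df[kept]) for kept in seen[h]):
            continue
        seen[h].append(col)
        keep.append(col)
    return df[keep]

df2 = drop_duplicate_columns(df1)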

Python Pandas - concatenate two column values into a single column with label name [duplicate]

I have a dataframe like this where the columns are the scores of some metrics:
A B C D
4 3 3 1
2 5 2 2
3 5 2 4
I want to create a new column to summarize which metrics each row scored over a set threshold in, using the column name as a string. So if the threshold was A > 2, B > 3, C > 1, D > 3, I would want the new column to look like this:
A B C D NewCol
4 3 3 1 AC
2 5 2 2 BC
3 5 2 4 ABCD
I tried using a series of np.where calls:
df['NewCol'] = np.where(df['A'] > 2, 'A', '')
df['NewCol'] = np.where(df['B'] > 3, 'B', '')
etc.
but realized that each assignment overwrites the previous one, so any time a row didn't meet all four conditions the column only reflected the last matching metric, like so:
A B C D NewCol
4 3 3 1 C
2 5 2 2 C
3 5 2 4 ABCD
I am pretty sure there is an easier and correct way to do this.
You could do:
import pandas as pd

data = [[4, 3, 3, 1],
        [2, 5, 2, 2],
        [3, 5, 2, 4]]

df = pd.DataFrame(data=data, columns=['A', 'B', 'C', 'D'])
th = {'A': 2, 'B': 3, 'C': 1, 'D': 3}
df['result'] = [''.join(k for k in df.columns if record[k] > th[k])
                for record in df.to_dict('records')]
print(df)
Output
   A  B  C  D result
0  4  3  3  1     AC
1  2  5  2  2     BC
2  3  5  2  4   ABCD
Using dot
s = pd.Series([2, 3, 1, 3], index=df.columns)
df.gt(s, axis=1).dot(df.columns)
Out[179]:
0      AC
1      BC
2    ABCD
dtype: object
# df['New'] = df.gt(s, axis=1).dot(df.columns)
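Why this works: multiplying a boolean by a string in Python yields either the string or '' (True * 'A' == 'A', False * 'A' == ''), so the dot product of a boolean mask with the column labels concatenates exactly the labels of the True entries. A minimal sketch:

import numpy as np
# Booleans act as 0/1 under object-dtype multiplication.
mask = np.array([[True, False, True, False]], dtype=object)
labels = np.array(['A', 'B', 'C', 'D'], dtype=object)
print(mask.dot(labels))  # -> ['AC']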
Another option that operates in an array-oriented fashion; it would be interesting to compare performance.
import pandas as pd
import numpy as np
# Data to test.
data = pd.DataFrame(
    [
        [4, 3, 3, 1],
        [2, 5, 2, 2],
        [3, 5, 2, 4]
    ],
    columns=['A', 'B', 'C', 'D']
)
# Series to hold the thresholds.
thresholds = pd.Series([2, 3, 1, 3], index=['A', 'B', 'C', 'D'])
# Subtract the series from the data, broadcasting, and then use sum to concatenate the strings.
data['result'] = np.where(data - thresholds > 0, data.columns, '').sum(axis=1)
print(data)
Gives:
   A  B  C  D result
0  4  3  3  1     AC
1  2  5  2  2     BC
2  3  5  2  4   ABCD
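A note on why the final sum works (my observation): data.columns is an object-dtype Index, so np.where returns an object array, and summing an object array row-wise falls back to Python string concatenation. A quick check, assuming the frame above:

labelled = np.where(data - thresholds > 0, data.columns, '')
print(labelled.dtype)        # object, inherited from the object-dtype columns Index
print(labelled.sum(axis=1))  # ['AC' 'BC' 'ABCD']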

update column value of pandas groupby().last()

Given dataframe:
dfd = pd.DataFrame({'A': [1, 1, 2, 2, 3, 3],
                    'B': [4, 5, 6, 7, 8, 9],
                    'C': ['a', 'b', 'c', 'c', 'd', 'e']})
I can find the last C value of each A group by using
dfd.groupby('A').last()['C']
However, I want to update those last C values to np.nan in the original dataframe. I don't know how to do that. A method such as:
def replace(df):
    df['C'] = np.nan
    return replace

dfd.groupby('A').last().apply(lambda dfd: replace(dfd))
Does not work.
I want the result like:
dfd_result = pd.DataFrame({'A': [1, 1, 2, 2, 3, 3],
                           'B': [4, 5, 6, 7, 8, 9],
                           'C': ['a', np.nan, 'c', np.nan, 'd', np.nan]})
IIUC, you need loc. Get the index of each group's last row using tail:
In [1145]: dfd.loc[dfd.groupby('A')['C'].tail(1).index, 'C'] = np.nan
In [1146]: dfd
Out[1146]:
   A  B    C
0  1  4    a
1  1  5  NaN
2  2  6    c
3  2  7  NaN
4  3  8    d
5  3  9  NaN
dfd.loc[dfd.groupby('A').tail(1).index, 'C'] = np.nan should be fine too.
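An alternative without groupby (my suggestion): duplicated(keep='last') marks every row except each A group's last one, so its negation selects exactly the rows to overwrite:

# True only on the last row of each A group (and on singleton groups).
dfd.loc[~dfd.duplicated('A', keep='last'), 'C'] = np.nan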
