I don't understand why apply and transform return different dtypes when called on the same data frame. My working explanation of the two functions has been something along the lines of "apply collapses the data, and transform does exactly the same thing as apply but preserves the original index and doesn't collapse." Consider the following.
df = pd.DataFrame({'id': [1,1,1,2,2,2,2,3,3,4],
                   'cat': [1,1,0,0,1,0,0,0,0,1]})
Let's identify those ids which have a nonzero entry in the cat column.
>>> df.groupby('id')['cat'].apply(lambda x: (x == 1).any())
id
1 True
2 True
3 False
4 True
Name: cat, dtype: bool
Great. If we wanted to create an indicator column, however, we could do the following.
>>> df.groupby('id')['cat'].transform(lambda x: (x == 1).any())
0 1
1 1
2 1
3 1
4 1
5 1
6 1
7 0
8 0
9 1
Name: cat, dtype: int64
I don't understand why the dtype is now int64 instead of the boolean returned by the any() function.
When I change the original data frame to contain some booleans (note that the zeros remain), the transform approach returns booleans in an object column. This is an extra mystery to me since all of the values are boolean, but it's listed as object apparently to match the dtype of the original mixed-type column of integers and booleans.
df = pd.DataFrame({'id': [1,1,1,2,2,2,2,3,3,4],
                   'cat': [True,True,0,0,True,0,0,0,0,True]})
>>> df.groupby('id')['cat'].transform(lambda x: (x == 1).any())
0 True
1 True
2 True
3 True
4 True
5 True
6 True
7 False
8 False
9 True
Name: cat, dtype: object
However, when I use all booleans, the transform function returns a boolean column.
df = pd.DataFrame({'id': [1,1,1,2,2,2,2,3,3,4],
                   'cat': [True,True,False,False,True,False,False,False,False,True]})
>>> df.groupby('id')['cat'].transform(lambda x: (x == 1).any())
0 True
1 True
2 True
3 True
4 True
5 True
6 True
7 False
8 False
9 True
Name: cat, dtype: bool
Using my acute pattern-recognition skills, it appears that the dtype of the resulting column mirrors that of the original column. I would appreciate any hints about why this occurs or what's going on under the hood in the transform function. Cheers.
It looks like SeriesGroupBy.transform() tries to cast the result dtype to the same one as the original column has, but DataFrameGroupBy.transform() doesn't seem to do that:
In [139]: df.groupby('id')['cat'].transform(lambda x: (x == 1).any())
Out[139]:
0 1
1 1
2 1
3 1
4 1
5 1
6 1
7 0
8 0
9 1
Name: cat, dtype: int64
# note the double brackets below, which give a DataFrameGroupBy:
In [140]: df.groupby('id')[['cat']].transform(lambda x: (x == 1).any())
Out[140]:
cat
0 True
1 True
2 True
3 True
4 True
5 True
6 True
7 False
8 False
9 True
In [141]: df.dtypes
Out[141]:
cat int64
id int64
dtype: object
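If you need a boolean column regardless of the original dtype, one workaround (a sketch, not documented transform behaviour) is to cast the result explicitly:

df.groupby('id')['cat'].transform(lambda x: (x == 1).any()).astype(bool)  # force bool after transform casts back to int64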
Just adding another illustrative example with sum, as I find it more explicit:
import numpy as np

df = (
    pd.DataFrame(np.random.rand(10, 3), columns=['a', 'b', 'c'])
    .assign(a=lambda df: df.a > 0.5)
)
Out[70]:
a b c
0 False 0.126448 0.487302
1 False 0.615451 0.735246
2 False 0.314604 0.585689
3 False 0.442784 0.626908
4 False 0.706729 0.508398
5 False 0.847688 0.300392
6 False 0.596089 0.414652
7 False 0.039695 0.965996
8 True 0.489024 0.161974
9 False 0.928978 0.332414
df.groupby('a').apply(sum)  # collapses each group to one row
a b c
a
False 0.0 4.618465 4.956997
True 1.0 0.489024 0.161974
df.groupby('a').transform(sum)  # keeps the original shape
b c
0 4.618465 4.956997
1 4.618465 4.956997
2 4.618465 4.956997
3 4.618465 4.956997
4 4.618465 4.956997
5 4.618465 4.956997
6 4.618465 4.956997
7 4.618465 4.956997
8 0.489024 0.161974
9 4.618465 4.956997
However, when applied to a plain pd.DataFrame rather than a GroupBy object, I was not able to see any difference.
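For what it's worth, on a plain DataFrame the two only agree for non-aggregating functions; a quick sketch of that case:

# a shape-preserving function: apply and transform give identical frames
centered_apply = df[['b', 'c']].apply(lambda x: x - x.mean())
centered_transform = df[['b', 'c']].transform(lambda x: x - x.mean())
assert centered_apply.equals(centered_transform)

With an aggregating function like sum, apply reduces each column to a scalar, while DataFrame.transform raises in recent pandas versions because the result does not keep the original length.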
I have the following dataframe as below.
0 1 2 3 4 5 6 7
True False False False False False False False
[1 rows x 8 columns]
As you can see, there is one True value, and it is in the first column.
Therefore, I want to get 0, the index of the column that holds the True element.
In another case, where the True value sits at column index 4, I would like to get 4 instead, as in the dataframe below.
0 1 2 3 4 5 6 7
False False False False True False False False
[1 rows x 8 columns]
I tried to google it but failed to find what I want.
Note that, by assumption, the columns have no designated names in this case.
Looking forward to your help.
Thanks.
IIUC, you are looking for idxmax:
>>> df
0 1 2 3 4 5 6 7
0 True False False False False False False False
>>> df.idxmax(axis=1)
0 0
dtype: object
>>> df
0 1 2 3 4 5 6 7
0 False False False False True False False False
>>> df.idxmax(axis=1)
0 4
dtype: object
Caveat: if all values are False, Pandas returns the first index because index 0 is the lowest index of the highest value:
>>> df
0 1 2 3 4 5 6 7
0 False False False False False False False False
>>> df.idxmax(axis=1)
0 0
dtype: object
Workaround: replace False by np.nan:
>>> df.replace(False, np.nan).idxmax(axis=1)
0 NaN
dtype: float64
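Another way to guard against the all-False case (a sketch): blank out the idxmax result wherever the row has no True at all:

df.idxmax(axis=1).where(df.any(axis=1))  # NaN for rows without any True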
If you want every field that is true:
cols_true = []
for idx, row in df.iterrows():
    for col in df.columns:   # iterate over all column labels
        if row[col]:
            cols_true.append(col)
print(cols_true)
Use boolean indexing:
df.columns[df.iloc[0]]
output:
Index(['0'], dtype='object')
Or numpy.where
np.where(df)[1]
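For the second example frame (True at column position 4), this should give the positions directly:

>>> import numpy as np
>>> np.where(df)[1]
array([4])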
You may want to index the dataframe's index by a column itself (0 in this case), as follows:
df.index[df[0]]
You'll get:
Int64Index([0], dtype='int64')
df.loc[:, df.any()].columns[0]
# 4
If you have several True values, you can also get all of them with .columns.
Generalization
Imagine we have the following dataframe (several True values in positions 4, 6 and 7):
0 1 2 3 4 5 6 7
0 False False False False True False True True
With the formula above:
df.loc[:, df.any()].columns
# Int64Index([4, 6, 7], dtype='int64')
df1.apply(lambda ss: ss.loc[ss].index.min(), axis=1).squeeze()
out:
0
or
df1.loc[:, df1.iloc[0]].columns.min()
I have a boolean column in a dataframe that looks like the following:
True
False
False
False
False
True
False
False
False
I want to forward propagate/fill the True values n times, e.g. 2 times:
True
True
True
False
False
True
True
True
False
ffill does something similar for NaN values, but I can't find anything for a specific value as described. Is the easiest way to do this just a standard loop that iterates over the rows and modifies the column in question with a counter?
Each row is an equidistant time-series entry.
EDIT:
The current answers all solve my specific problem with a bool column, but one answer can be modified to be more general purpose:
>>> s = pd.Series([1, 2, 3, 4, 5, 1, 2, 3])
0 1
1 2
2 3
3 4
4 5
5 1
6 2
7 3
>>> condition_mask = s == 2
>>> s.mask(~condition_mask).ffill(limit=2).fillna(s).astype(int)
0 1
1 2
2 2
3 2
4 5
5 1
6 2
7 2
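Wrapping that up, a small general helper might look like this (a sketch; the name fill_forward_n is mine):

def fill_forward_n(s, condition_mask, n):
    # propagate the values selected by condition_mask forward n rows,
    # restoring the original values everywhere else
    return s.mask(~condition_mask).ffill(limit=n).fillna(s)

fill_forward_n(s, s == 2, 2)  # reproduces the output above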
You can still use ffill, but first you have to mask the False values:
s.mask(~s).ffill(limit=2).fillna(s)
0 True
1 True
2 True
3 False
4 False
5 True
6 True
7 True
8 False
Name: 0, dtype: bool
For 2 times you could have:
s = s | s.shift(1) | s.shift(2)
You could generalize to n-times from there.
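A loop version of that generalization (a sketch; fill_value=False needs pandas >= 0.24 and keeps the dtype boolean):

n = 2
out = s.copy()
for k in range(1, n + 1):
    out |= s.shift(k, fill_value=False)  # OR in the value from k rows back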
Try with rolling:
n = 3  # window of 3 = the current row plus the 2 rows being filled
s.rolling(n, min_periods=1).max().astype(bool)
Out[147]:
0 True
1 True
2 True
3 False
4 False
5 True
6 True
7 True
8 False
Name: s, dtype: bool
I have been trying to read a CSV file into a dataframe; the file has "?" values in some of the rows.
I want to find the rows which contain these "?" values, across all the columns.
I tried using loc, but it returns an empty DataFrame:
test_df.loc[test_df['rbc'] == "?"]
test_df.loc[test_df['rbc'] == None]
This returns an empty DataFrame.
I want to iterate over all the columns of the dataframe.
Can someone suggest a way to do this?
If you want to check for exact '?' values in all columns:
df1 = df.loc[:, (df.astype(str) == '?').any()]
More generally, if you want to check for '?' as a substring anywhere in all columns:
df2 = df.loc[:, df.apply(lambda x: x.astype(str).str.contains(r'\?')).any()]
EDIT:
df = pd.DataFrame({'A':list('abcdef'),
                   'B':[4,5,4,5,5,4],
                   'C':[7,8,9,'?',2,3],
                   'D':['?',3,5,7,1,0],
                   'E':[5,3,6,9,2,'?'],
                   'F':list('aaabbb')})
print (df)
A B C D E F
0 a 4 7 ? 5 a
1 b 5 8 3 3 a
2 c 4 9 5 6 a
3 d 5 ? 7 9 b
4 e 5 2 1 2 b
5 f 4 3 0 ? b
You can create a boolean DataFrame first and then check for any True per row and per column for filtering:
mask = df.apply(lambda x: x.astype(str).str.contains(r'\?'))
df2 = df.loc[mask.any(axis=1), mask.any()]
print (df2)
C D E
0 7 ? 5
3 ? 7 9
5 3 0 ?
Detail:
print (mask)
A B C D E F
0 False False False True False False
1 False False False False False False
2 False False False False False False
3 False False True False False False
4 False False False False False False
5 False False False False True False
print (mask.any(axis=1))
0 True
1 False
2 False
3 True
4 False
5 True
dtype: bool
print (mask.any())
A False
B False
C True
D True
E True
F False
dtype: bool
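As an aside, if the "?" strings come straight from the CSV, you could also treat them as missing values at read time via read_csv's na_values parameter (a sketch; the file name is hypothetical):

import pandas as pd

test_df = pd.read_csv('data.csv', na_values='?')         # '?' cells become NaN on load
rows_with_missing = test_df[test_df.isna().any(axis=1)]  # rows that contained '?'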
This will work; note regex=False, since ? is a regex metacharacter:
result = test_df[test_df['rbc'].str.contains('?', regex=False)]
Consider the dataframe df
A B C D match?
0 x y 1 1 true
1 x y 1 2 false
2 x y 2 1 false
3 x y 2 2 true
4 x y 3 4 false
5 x y 5 6 false
I would like to drop the unmatched rows whose (C, D) values mirror another row's (i.e. appear elsewhere in reversed order).
A B C D match?
0 x y 1 1 true
3 x y 2 2 true
4 x y 3 4 false
5 x y 5 6 false
How can I do that with Pandas?
You could sort those two columns so that the ordering within each (C, D) pair is the same throughout. Then drop all such duplicated entries by passing keep=False to DF.drop_duplicates():
df[['C','D']] = np.sort(df[['C','D']].values)
df.drop_duplicates(keep=False)
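Put together on the question's data, that looks like this (a sketch; the match? column is omitted for brevity):

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': list('xxxxxx'), 'B': list('yyyyyy'),
                   'C': [1, 1, 2, 2, 3, 5], 'D': [1, 2, 1, 2, 4, 6]})
df[['C', 'D']] = np.sort(df[['C', 'D']].values, axis=1)  # order each (C, D) pair consistently
df.drop_duplicates(keep=False)                           # the mirrored (1, 2) / (2, 1) rows drop out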
You can compare the two columns with:
df.C == df.D
0 True
1 False
2 False
3 True
4 False
dtype: bool
Then shift the series down.
0 NaN
1 True
2 False
3 False
4 True
dtype: object
Each True value indicates the start of a new group. We can use cumsum to create the groupings we need for groupby:
(df.C == df.D).shift().fillna(False).cumsum()
0 0
1 1
2 1
3 1
4 2
dtype: int64
Then use groupby + last:
df.groupby(df.C.eq(df.D).shift().fillna(False).cumsum()).last()
A B C D
0 x y 1 1
1 x y 2 2
2 x y 3 4
If you would like to remove the rows where "C" and "D" matched, boolean indexing with .loc will help you (.ix is deprecated and has been removed from recent pandas):
df = df.loc[df['C'] != df['D']]
Here, df['C'] != df['D'] generates a boolean Series, and .loc extracts the corresponding rows of the DataFrame :)
I need something similar to
.str.startswith()
.str.endswith()
but for the middle part of a string.
For example, given the following pd.DataFrame
str_name
0 aaabaa
1 aabbcb
2 baabba
3 aacbba
4 baccaa
5 ababaa
I need to throw away rows 1, 3 and 4, which contain (at least one) letter 'c'.
The position of the specific letter ('c') is not known.
The task is to remove all rows which contain at least one occurrence of a specific letter.
You want df['string_column'].str.contains('c')
>>> df
str_name
0 aaabaa
1 aabbcb
2 baabba
3 aacbba
4 baccaa
5 ababaa
>>> df['str_name'].str.contains('c')
0 False
1 True
2 False
3 True
4 True
5 False
Name: str_name, dtype: bool
Now, you can "delete" like this
>>> df = df[~df['str_name'].str.contains('c')]
>>> df
str_name
0 aaabaa
2 baabba
5 ababaa
>>>
Edited to add:
If you only want to check the first k characters, you can slice. Suppose k=3:
>>> df.str_name.str.slice(0,3)
0 aaa
1 aab
2 baa
3 aac
4 bac
5 aba
Name: str_name, dtype: object
>>> df.str_name.str.slice(0,3).str.contains('c')
0 False
1 False
2 False
3 True
4 True
5 False
Name: str_name, dtype: bool
Note, Series.str.slice does not behave like a typical Python slice.
You can use numpy:
df[np.core.chararray.find(df.str_name.values.astype(str), 'c') < 0]
str_name
0 aaabaa
2 baabba
5 ababaa
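On newer NumPy versions the same idea can also be written with np.char.find (a sketch):

import numpy as np

df[np.char.find(df.str_name.values.astype(str), 'c') < 0]  # find returns -1 where 'c' is absent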
You can use str.contains()
str_name = pd.Series(['aaabaa', 'aabbcb', 'baabba', 'aacbba', 'baccaa','ababaa'])
str_name.str.contains('c')
This will return a boolean Series.
The following will return the inverse of the above
~str_name.str.contains('c')
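which you can use directly as a mask to keep only the rows without a 'c':

str_name[~str_name.str.contains('c')]  # leaves 'aaabaa', 'baabba', 'ababaa'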