I have a pandas dataframe like below.
   id  A  B  C
0   1  1  1  1
1   1  5  7  2
2   2  6  9  3
3   3  1  5  4
4   3  4  6  2
After evaluating the conditions, the dataframe looks like this:
   id  A  B  C  a_greater_than_b  b_greater_than_c  c_greater_than_a
0   1  1  1  1             False             False             False
1   1  5  7  2             False              True             False
2   2  6  9  3             False              True             False
3   3  1  5  4             False              True              True
4   3  4  6  2             False              True             False
I then want to aggregate the results per id:
    a_greater_than_b  b_greater_than_c  c_greater_than_a
id
1              False             False             False
2              False              True             False
3              False              True             False
The logic is not fully clear, but since the condition columns are already boolean you can simply aggregate them per group (here I am assuming min, as your example shows that True/False -> False and True/True -> True, but you can use other logics, e.g. last if you want the last row per group after sorting by date):
cols = ['a_greater_than_b', 'b_greater_than_c', 'c_greater_than_a']
out = df.groupby('id')[cols].min()
print(out)
Output:
    a_greater_than_b  b_greater_than_c  c_greater_than_a
id
1              False             False             False
2              False              True             False
3              False              True             False
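A self-contained version of this approach, with the condition columns recomputed from A, B and C as the question implies (a sketch; the data is reconstructed from the tables above):
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 2, 3, 3],
                   'A': [1, 5, 6, 1, 4],
                   'B': [1, 7, 9, 5, 6],
                   'C': [1, 2, 3, 4, 2]})

# evaluate the pairwise conditions
df['a_greater_than_b'] = df['A'] > df['B']
df['b_greater_than_c'] = df['B'] > df['C']
df['c_greater_than_a'] = df['C'] > df['A']

# min per group: True only if the condition holds for every row of that id
cols = ['a_greater_than_b', 'b_greater_than_c', 'c_greater_than_a']
print(df.groupby('id')[cols].min())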
I am trying to create a column 'count' on a pandas DataFrame that cumulatively counts while the field 'boolean' is True, but resets to 0 and stays at 0 while 'boolean' is False. It also needs to be grouped by the ID column, so the count resets whenever a new ID starts. No loops please, as I am working with a big data set.
I used the code from the following question, which works, but I need to add a groupby to include the ID column grouping:
Pandas Dataframe - Row Iteration with Resetting Count-Value by Condition without loop
Expected output below (the ID and Boolean columns already exist; I just need to create Count):
ID Boolean Count
1 True 1
1 True 2
1 True 3
1 True 4
1 True 5
1 False 0
1 False 0
1 False 0
1 False 0
1 True 1
1 True 2
1 True 3
2 True 1
2 True 2
2 True 3
2 True 4
2 False 0
2 False 0
2 False 0
2 True 1
2 True 2
2 True 3
Identify blocks by using cumsum on the inverted boolean mask, then group the dataframe by ID and the blocks, and use cumsum on Boolean to create the counter:
# every False starts a new block; each following run of Trues shares it
b = (~df['Boolean']).cumsum()
df['Count'] = df.groupby(['ID', b])['Boolean'].cumsum()
ID Boolean Count
0 1 True 1
1 1 True 2
2 1 True 3
3 1 True 4
4 1 True 5
5 1 False 0
6 1 False 0
7 1 False 0
8 1 False 0
9 1 True 1
10 1 True 2
11 1 True 3
12 2 True 1
13 2 True 2
14 2 True 3
15 2 True 4
16 2 False 0
17 2 False 0
18 2 False 0
19 2 True 1
20 2 True 2
21 2 True 3
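For reference, a self-contained version of this answer that also shows the intermediate block ids (a sketch; the sample data is reconstructed from the expected output):
import pandas as pd

# sample data reconstructed from the expected output above
df = pd.DataFrame({
    'ID': [1] * 12 + [2] * 10,
    'Boolean': [True] * 5 + [False] * 4 + [True] * 3
             + [True] * 4 + [False] * 3 + [True] * 3,
})

# every False increments the block id, so each run of Trues shares the
# block id of the False that precedes it
b = (~df['Boolean']).cumsum()
df['Count'] = df.groupby(['ID', b])['Boolean'].cumsum()
print(df.assign(block=b))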
# diff marks the rows where Boolean changes within each ID
df['Count'] = df.groupby('ID')['Boolean'].diff()
df = df.fillna(False)
# cumsum over the change markers gives one label per run of equal values
df['Count'] = df.groupby('ID')['Count'].cumsum()
# count the Trues within each (ID, run) group
df['Count'] = df.groupby(['ID', 'Count'])['Boolean'].cumsum()
df
ID Boolean Count
0 1 True 1
1 1 True 2
2 1 True 3
3 1 True 4
4 1 True 5
5 1 False 0
6 1 False 0
7 1 False 0
8 1 False 0
9 1 True 1
10 1 True 2
11 1 True 3
12 2 True 1
13 2 True 2
14 2 True 3
15 2 True 4
16 2 False 0
17 2 False 0
18 2 False 0
19 2 True 1
20 2 True 2
21 2 True 3
You can use a column shift on the ID and Boolean columns to identify the groups to do the groupby on, then do a cumsum within each of those groups:
# start a new group whenever ID or Boolean changes from the previous row
groups = ((df['ID'] != df['ID'].shift()) | (df['Boolean'] != df['Boolean'].shift())).cumsum()
df.assign(Count2=df.groupby(groups)['Boolean'].cumsum())
Result
ID Boolean Count Count2
0 1 True 1 1
1 1 True 2 2
2 1 True 3 3
3 1 True 4 4
4 1 True 5 5
5 1 False 0 0
6 1 False 0 0
7 1 False 0 0
8 1 False 0 0
9 1 True 1 1
10 1 True 2 2
11 1 True 3 3
12 2 True 1 1
13 2 True 2 2
14 2 True 3 3
15 2 True 4 4
16 2 False 0 0
17 2 False 0 0
18 2 False 0 0
19 2 True 1 1
20 2 True 2 2
21 2 True 3 3
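All three answers rely on the same idea: build a run or block label whose value changes wherever the count should reset, then apply cumsum to the booleans within each (ID, label) group.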
Given the following dataframe:
col_1 col_2
False 1
False 1
False 1
False 1
False 1
False 1
False 1
False 1
False 1
False 1
False 1
False 1
False 1
False 1
False 2
True 2
False 2
False 2
True 2
False 2
False 2
False 2
False 2
False 2
False 2
False 2
False 2
False 2
False 2
False 2
How can I create a new index that helps to identify when a True value is present in col_1? That is, whenever a True value appears in the first column, I would like to backfill the new column with a counter that starts at one and increases after each True. For example, this is the expected output for the above dataframe:
col_1 col_2 new_id
False 1 1
False 1 1
False 1 1
False 1 1
False 1 1
False 1 1
False 1 1
False 1 1
False 1 1
False 1 1
False 1 1
False 1 1
False 1 1
False 1 1
False 2 1
True 2 1 --------- ^ (fill with 1 and increase the counter)
False 2 2
False 2 2
True 2 2 --------- ^ (fill with 2 and increase the counter)
False 2 3
False 2 3
False 2 3
False 2 3
False 2 3
False 2 3
False 2 3
False 2 3
False 2 3
False 2 3
False 2 3
True 2 3 --------- ^ (fill with 3 and increase the counter)
The problem is that I do not know how to create the id, although I know that pandas provides bfill, which may help to achieve this. So far I have tried to iterate with a simple for loop:
count = 0
for index, row in df.iterrows():
    if row['col_1'] == False:
        print(count + 1)
    else:
        print(row['col_2'] + 1)
However, I do not know how to advance the counter to the next number. I also tried to create a function and then apply it to the dataframe:
def create_id(col_1, col_2):
    counter = 0
    if col_1 == True and col_2.bool() == True:
        return counter + 1
    else:
        pass
Nevertheless, I lose control of filling the column backward.
Just do it with cumsum:
df['new_id'] = (df.col_1.cumsum().shift().fillna(0) + 1).astype(int)
df
Out[210]:
col_1 col_2 new_id
0 False 1 1
1 False 1 1
2 False 1 1
3 False 1 1
4 False 1 1
5 False 1 1
6 False 1 1
7 False 1 1
8 False 1 1
9 False 1 1
10 False 1 1
11 False 1 1
12 False 1 1
13 False 1 1
14 False 2 1
15 True 2 1
16 False 2 2
17 False 2 2
18 True 2 2
19 False 2 3
20 False 2 3
21 False 2 3
22 False 2 3
23 False 2 3
24 False 2 3
25 False 2 3
26 False 2 3
27 False 2 3
28 False 2 3
29 False 2 3
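Why the shift is needed: cumsum counts the Trues up to and including the current row, but the True row itself should still carry the previous counter value. A minimal sketch on a toy series:
import pandas as pd

s = pd.Series([False, True, False, True, False])
print(s.cumsum().to_list())
# [0, 1, 1, 2, 2]  -> the True rows have already jumped to the new counter
print((s.cumsum().shift().fillna(0) + 1).astype(int).to_list())
# [1, 1, 2, 2, 3]  -> shift delays the jump to the row after each True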
If you aim to append the new_id column to your dataframe:
new_id=[]
counter=1
for index, row in df.iterrows():
new_id+= [counter]
if row['col_1']==True:
counter+=1
df['new_id']=new_id
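Note that this explicit loop gives the same result, but the vectorized cumsum version above will be considerably faster on a big data set.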
I have a dataframe like this:
A B C
1 1 1
2 2 2
3 3 3
4 1 1
I want to merge the three columns into a new column D, with the rule: if there is at least one 1 in the row, then the value of D is 1, otherwise it is 0. How can I achieve this?
Use DataFrame.eq to compare the values, DataFrame.any to check for at least one True per row, and finally cast the boolean mask to integers:
df['D'] = df.eq(1).any(axis=1).astype(int)
print (df)
A B C D
0 1 1 1 1
1 2 2 2 0
2 3 3 3 0
3 4 1 1 1
Details:
print (df.eq(1))
A B C
0 True True True
1 False False False
2 False False False
3 False True True
print (df.eq(1).any(axis=1))
0 True
1 False
2 False
3 True
dtype: bool
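If several values should count as a match, DataFrame.isin generalizes the comparison (a sketch; the value list [1, 2] is hypothetical):
df['D'] = df[['A', 'B', 'C']].isin([1, 2]).any(axis=1).astype(int)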
I have the following dataframe:
df1 = pd.DataFrame({1:[1,2,3,4], 2:[1,2,4,5], 3:[8,1,5,6]})
df1
Out[7]:
1 2 3
0 1 1 8
1 2 2 1
2 3 4 5
3 4 5 6
and I would like to create a new column that shows the distance of the last column containing a particular value (2 in this case) from the reference column (3 in this example), or returns NaN if no such value is found in the row. The output would be something like:
df1
Out[11]:
1 2 3 dist
0 1 1 8 NaN
1 2 2 1 1
2 3 4 5 NaN
3 4 5 6 NaN
What would be an effective way of accomplishing this task?
You need to subtract the column name of the last 2 from 3 (the last column), because the reference column is the maximum column name:
df1.columns = df1.columns.astype(int)
print((df1.columns.max() - df1.eq(2).iloc[:,::-1].idxmax(axis=1)).mask(lambda x: x == 0))
0 NaN
1 1.0
2 NaN
3 NaN
dtype: float64
Details:
Compare with 2:
print (df1.eq(2))
1 2 3
0 False False False
1 True True False
2 False False False
3 False False False
Invert the order of the columns:
print (df1.eq(2).iloc[:,::-1])
3 2 1
0 False False False
1 False True True
2 False False False
3 False False False
Get the column name of the first True (because the columns are inverted, this is actually the last True):
print (df1.eq(2).iloc[:,::-1].idxmax(axis=1))
0 3
1 2
2 3
3 3
dtype: int64
Subtract from the max column name; this also returns 0 when the match is in the reference column itself or when no value matches, which is why those zeros are masked to NaN:
print (df1.columns.max() - df1.eq(2).iloc[:,::-1].idxmax(1))
0 0
1 1
2 0
3 0
dtype: int64
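Put together as a runnable sketch (assuming the integer column labels above):
import pandas as pd

df1 = pd.DataFrame({1: [1, 2, 3, 4], 2: [1, 2, 4, 5], 3: [8, 1, 5, 6]})
df1.columns = df1.columns.astype(int)

# column label of the last 2 in each row (falls back to the max label
# if the row contains no 2)
last = df1.eq(2).iloc[:, ::-1].idxmax(axis=1)

# distance from the reference column; 0 means no match (or a match in
# the reference column itself), so mask it to NaN
df1['dist'] = (df1.columns.max() - last).mask(lambda x: x == 0)
print(df1)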
I have a dataframe
x c
0 0 1
1 3 2
2 1 1
3 2 1
4 3 1
5 4 1
6 1 0
7 3 1
8 2 1
9 1 2
I would like to produce
c x duplicated
0 1 0 False
1 2 3 False
2 1 1 False
3 1 2 True
4 1 3 True
5 1 4 False
6 0 1 False
7 1 3 True
8 1 2 True
9 2 1 False
that is, to group by c first and mark all duplicated rows within each group.
My current approach is
c = np.random.randint(0, 3, 10)
x = np.random.randint(0, 5, 10)
d = pd.DataFrame({'x': x, 'c': c})
d['duplicated'] = d.groupby('c').apply(
    lambda x: x.duplicated(keep=False)
).reset_index(level=0, drop=True)
Is there any better way?
Use duplicated only; by default it verifies all columns:
d['duplicated'] = d.duplicated(keep=False)
print (d)
x c duplicated
0 0 1 False
1 3 2 False
2 1 1 False
3 2 1 True
4 3 1 True
5 4 1 False
6 1 0 False
7 3 1 True
8 2 1 True
9 1 2 False
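If you want to be explicit about which columns are checked, pass them via the subset parameter; the result is the same here: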
d['duplicated'] = d.duplicated(subset=['c','x'],keep=False)
print (d)
x c duplicated
0 0 1 False
1 3 2 False
2 1 1 False
3 2 1 True
4 3 1 True
5 4 1 False
6 1 0 False
7 3 1 True
8 2 1 True
9 1 2 False
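Checking duplicates over both columns at once is equivalent to grouping by c and marking duplicated x within each group, because a row duplicates another only when both its c and x values match.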