pandas: select a certain number of rows based on column ranking using a loop - python

I have a dataframe which looks like this
pd.DataFrame({'a':['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
'b':['N', 'Y', 'Y', 'N', 'Y', 'N', 'Y', 'N', 'N', 'Y'],
'c':[4, 5, 9, 8, 1, 3, 7, 2, 6, 10]})
a b c
0 A N 4
1 B Y 5
2 C Y 9
3 D N 8
4 E Y 1
5 F N 3
6 G Y 7
7 H N 2
8 I N 6
9 J Y 10
Out of the 10 rows I want to select 5 rows based on the following criteria:
column 'c' is my rank column.
select the rows with the 2 lowest ranks (rows 4 and 7 selected)
select all rows where column 'b' = 'Y' AND rank <= 5 (row 1 selected)
in the event fewer than 5 rows are selected using the above criteria, the remaining open positions should be filled in rank order (lowest first) with rows where 'b' = 'Y' which have rank <= 7 (row 6 selected)
in the event fewer than 5 rows pass the first 3 criteria, fill the remaining positions in rank order (lowest first) with rows where 'b' = 'N'
I have tried this (which covers rules 1 & 2) but am struggling with how to go on from there:
df['selected'] = ''
df.loc[(df.c <= 2), 'selected'] = 'rule_1'
df.loc[((df.c <= 5) & (df.b == 'Y')), 'selected'] = 'rule_2'
My resulting dataframe should look like this:
a b c selected
0 A N 4 False
1 B Y 5 rule_2
2 C Y 9 False
3 D N 8 rule_4
4 E Y 1 rule_1
5 F N 3 False
6 G Y 7 rule_3
7 H N 2 rule_1
8 I N 6 False
9 J Y 10 False
Based on one of the solutions provided by Vinod Karantothu below, I went for the following, which seems to work:
import pandas as pd

def solution(df):
    def sol(df, b='Y'):
        result_df_rule1 = df.sort_values('c')[:2]
        result_df_rule1['action'] = 'rule_1'
        result_df_rule2 = df.sort_values('c')[2:].loc[df['b'] == b].loc[df['c'] <= 5]
        result_df_rule2['action'] = 'rule_2'
        result = pd.concat([result_df_rule1, result_df_rule2]).head(5)
        # compute the leftover rows up front so they are defined even when
        # the first two rules already fill all 5 positions
        remaining_rows = pd.concat([df, result, result]).drop_duplicates(subset='a', keep=False)
        if len(result) < 5:
            result_df_rule3 = remaining_rows.loc[df['b'] == b].loc[df['c'] <= 7]
            result_df_rule3['action'] = 'rule_3'
            result = pd.concat([result, result_df_rule3]).head(5)
        return result, pd.concat([remaining_rows, result, result]).drop_duplicates(subset='a', keep=False)

    result, remaining_data = sol(df)
    if len(result) < 5:
        result1, remaining_data = sol(remaining_data, 'N')
        result1['action'] = 'rule_4'
        result = pd.concat([result, result1]).head(5).drop_duplicates(subset='a', keep=False).merge(df, how='outer', on='a')
    return result

if __name__ == '__main__':
    df = pd.DataFrame({'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
                       'b': ['N', 'Y', 'Y', 'N', 'Y', 'N', 'Y', 'N', 'N', 'Y'],
                       'c': [4, 5, 9, 8, 1, 3, 7, 2, 6, 10]})
    result = solution(df)
    print(result)

import pandas as pd

def solution(df):
    def sol(df, b='Y'):
        result_df_rule1 = df.sort_values('c')[:2]
        result_df_rule2 = df.sort_values('c')[2:].loc[df['b'] == b].loc[df['c'] <= 5]
        result = pd.concat([result_df_rule1, result_df_rule2]).head(5)
        # define the leftover rows before the check so the return below
        # works even when the first two rules already yield 5 rows
        remaining_rows = pd.concat([df, result, result]).drop_duplicates(keep=False)
        if len(result) < 5:
            result_df_rule3 = remaining_rows.loc[df['b'] == b].loc[df['c'] <= 7]
            result = pd.concat([result, result_df_rule3]).head(5)
        return result, pd.concat([remaining_rows, result, result]).drop_duplicates(keep=False)

    result, remaining_data = sol(df)
    if len(result) < 5:
        result1, remaining_data = sol(remaining_data, 'N')
        result = pd.concat([result, result1]).head(5)
    return result

if __name__ == '__main__':
    df = pd.DataFrame({'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
                       'b': ['N', 'Y', 'Y', 'N', 'Y', 'N', 'Y', 'N', 'N', 'Y'],
                       'c': [4, 5, 9, 8, 1, 3, 7, 2, 6, 10]})
    result = solution(df)
    print(result)
Result:
a b c
4 E Y 1
7 H N 2
1 B Y 5
6 G Y 7
5 F N 3

Regarding your 4th rule: in your expected dataframe you show row index 3 being selected, but it has a rank of 8, which is not the lowest. According to the rules you have given, row index 5 should come instead:
import pandas as pd

data = pd.DataFrame({'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
                     'b': ['N', 'Y', 'Y', 'N', 'Y', 'N', 'Y', 'N', 'N', 'Y'],
                     'c': [4, 5, 9, 8, 1, 3, 7, 2, 6, 10]})

# rule 1: the two lowest ranks
data1 = data.nsmallest(2, ['c'])
dataX = data.drop(data1.index)
# rule 2: b == 'Y' and rank <= 5, among the rows still available
data2 = dataX[(dataX.b == "Y") & (dataX.c <= 5)]
dataX = dataX.drop(data2.index)
# rule 3: b == 'Y' and rank <= 7
data3 = dataX[(dataX.b == "Y") & (dataX.c <= 7)]
dataX = dataX.drop(data3.index)
# rule 4: fill the last open position with the lowest-ranked b == 'N' row
data4 = dataX[dataX.b == "N"].nsmallest(1, ['c'])

resultfinal = pd.concat([data1, data2, data3, data4])
print(resultfinal)
And here is the output:
a b c
4 E Y 1
7 H N 2
1 B Y 5
6 G Y 7
5 F N 3

You can create extra columns for the rules, then sort and take the head. IIUC from the comments, rule 3 already covers rule 2, so there is no need to calculate it separately.
df['r1'] = df.c < 3
df['r3'] = (df.c <= 7) & (df.b == 'Y')
print(df.sort_values(['r1', 'r3', 'c'], ascending=[False, False, True])[['a', 'b', 'c']].head(5))
a b c
4 E Y 1
7 H N 2
1 B Y 5
6 G Y 7
5 F N 3
Sorting on a boolean column works because True > False.
Note: you might need to tweak the code to your expectations for different datasets. For example, your last row (9 J Y 10) is currently not covered by any of the rules. You can take this approach and extend it if needed, as sketched below.
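For instance, here is a minimal sketch of one possible extension, assuming the 'b' = 'N' fallback of rule 4 should act as one more, lowest-priority sort key (the r4 column is an addition, not part of the answer above):

import pandas as pd

df = pd.DataFrame({'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
                   'b': ['N', 'Y', 'Y', 'N', 'Y', 'N', 'Y', 'N', 'N', 'Y'],
                   'c': [4, 5, 9, 8, 1, 3, 7, 2, 6, 10]})

df['r1'] = df.c < 3                      # rule 1: the two lowest ranks
df['r3'] = (df.c <= 7) & (df.b == 'Y')   # rules 2 and 3: b == 'Y' and rank <= 7
df['r4'] = df.b == 'N'                   # rule 4 fallback: b == 'N', by rank

picked = df.sort_values(['r1', 'r3', 'r4', 'c'],
                        ascending=[False, False, False, True]).head(5)
print(picked[['a', 'b', 'c']])
#    a  b  c
# 4  E  Y  1
# 7  H  N  2
# 1  B  Y  5
# 6  G  Y  7
# 5  F  N  3

With the extra key, leftover 'b' = 'Y' rows with rank > 7 (like row 9) sort last, which matches rule 4 as stated.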

Related

Aggregate values by multiple columns

My dataframe looks like this.
df = pd.DataFrame({
'ID': [1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3],
'text': ['a', 'a', 'b', 'b', 'c', 'c', 'c', 'd', 'd', 'e', 'e', 'e', 'f', 'g'] ,
'out_text': ['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8', 'x9', 'x10', 'x11', 'x12', 'x13', 'x14'] ,
'Rule_1': ['N', 'N', 'N', 'Y', 'N', 'N', 'N', 'N', 'N', 'N','N', 'N', 'Y', 'Y'],
'Rule_2': ['Y', 'N', 'N', 'N', 'Y', 'N', 'N', 'N', 'N', 'N','N', 'N', 'Y', 'N'],
'Rule_3': ['N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N','N', 'N', 'Y', 'Y']})
ID text out_text Rule_1 Rule_2 Rule_3
0 1 a x1 N Y N
1 1 a x2 N N N
2 1 b x3 N N N
3 1 b x4 Y N N
4 2 c x5 N Y N
5 2 c x6 N N N
6 2 c x7 N N N
7 2 d x8 N N N
8 2 d x9 N N N
9 2 e x10 N N N
10 2 e x11 N N N
11 2 e x12 N N N
12 3 f x13 Y Y Y
13 3 g x14 Y N Y
I have to aggregate Rule_1, Rule_2 and Rule_3 such that if a combination of ID and text has a 'Y' in any of these columns, the overall result is a 'Y' for that combination. In our example, 1-a and 1-b are Y overall, while 2-d and 2-e are 'N'. How do I aggregate multiple columns?
Let's try using max(axis=1) to aggregate the rules by rows, then groupby().any() to check if any row in the group has a Y:
(df[['Rule_1', 'Rule_2', 'Rule_3']].eq('Y')
   .max(axis=1)
   .groupby([df['ID'], df['text']])
   .any()
)
Output:
ID text
1 a True
b True
2 c True
d False
e False
3 f True
g True
dtype: bool
Or if you want Y/N output, we can replace any with max and drop the .eq('Y') comparison (string max works because 'Y' > 'N'):
(df[['Rule_1', 'Rule_2', 'Rule_3']]
   .max(axis=1)
   .groupby([df['ID'], df['text']])
   .max()
)
Output:
ID text
1 a Y
b Y
2 c Y
d N
e N
3 f Y
g Y
dtype: object
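If you also want the aggregated answer back on every row of the original frame rather than as a separate series, here is a small sketch using transform (the overall column name is made up for the example):

import pandas as pd

df = pd.DataFrame({
    'ID': [1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3],
    'text': ['a', 'a', 'b', 'b', 'c', 'c', 'c', 'd', 'd', 'e', 'e', 'e', 'f', 'g'],
    'out_text': ['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8', 'x9', 'x10', 'x11', 'x12', 'x13', 'x14'],
    'Rule_1': ['N', 'N', 'N', 'Y', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'Y', 'Y'],
    'Rule_2': ['Y', 'N', 'N', 'N', 'Y', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'Y', 'N'],
    'Rule_3': ['N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'Y', 'Y']})

# flag rows where any rule is 'Y', then broadcast each group's answer back
row_flag = df[['Rule_1', 'Rule_2', 'Rule_3']].eq('Y').any(axis=1)
df['overall'] = (row_flag.groupby([df['ID'], df['text']])
                         .transform('any')
                         .map({True: 'Y', False: 'N'}))
print(df[['ID', 'text', 'overall']].drop_duplicates())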

How can I stack rows that share any column value in common?

I'm not sure the wording of the title is optimal, because the problem I have is a little tricky to explain. In code, I have a df that looks something like this:
import pandas as pd
import numpy as np
a = ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'E', 'E']
b = [3, 1, 2, 3, 12, 4, 7, 8, 3, 10, 12]
df = pd.DataFrame([a, b]).T
df
Yields
0 1
0 A 3
1 A 1
2 A 2
3 B 3
4 B 12
5 B 4
6 C 7
7 C 8
8 D 3
9 E 10
10 E 12
I'm aware of groupby methods to group by values in a column, but that's not exactly what I want. I kind of want to go a step past that, where any intersection in column 1 between groups of column 0 are grouped together. My wording is terrible (which is probably why I'm having trouble putting this into code), but here's basically what I want as output:
0 1
0 A-B-D-E 3
1 A-B-D-E 1
2 A-B-D-E 2
3 A-B-D-E 3
4 A-B-D-E 12
5 A-B-D-E 4
6 C 7
7 C 8
8 A-B-D-E 3
9 A-B-D-E 10
10 A-B-D-E 12
Basically, A, B, and D all share the value 3 in column 1, so their labels get grouped together in column 0. Now, because B and E share value 12 in column 1, and B shares the value 3 in column 1 with A and D, E gets grouped in with A, B, and D as well. The only value in column 0 that remained independent is C, because it has no intersections with any other group.
In my head this ends up being a recursive loop, but I can't seem to figure out the exact logic. Any help would be appreciated.
If anyone in the future is experiencing the same thing, this works (it's probably not the best solution in the world, though):
import pandas as pd
import numpy as np

a = ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'E', 'E']
b = ['3', '1', '2', '3', '12', '4', '7', '8', '3', '10', '12']
df = pd.DataFrame([a, b]).T
df.columns = 'a', 'b'
df2 = df.copy()

def flatten(container):
    for i in container:
        if isinstance(i, (list, tuple)):
            for j in flatten(i):
                yield j
        else:
            yield i

bad = True
i = 1
while bad:
    print("Round " + str(i))
    i = i + 1
    len_checker = []
    for variant in list(set(df.a)):
        eGenes = list(set(df.loc[df.a == variant, 'b']))
        inter_variants = []
        for gene in eGenes:
            inter_variants.append(list(set(df.loc[df.b == gene, 'a'])))
        if type(inter_variants[0]) is not str:
            inter_variants = [x for x in flatten(inter_variants)]
        inter_variants = list(set(inter_variants))
        len_checker.append(inter_variants)
        if len(inter_variants) != 1:
            df2.loc[df2.a.isin(inter_variants), 'a'] = '-'.join(inter_variants)
    good_checker = max([len(x) for x in len_checker])
    df['a'] = df2.a
    if good_checker == 1:
        bad = False

df.a = df.a.apply(lambda x: '-'.join(list(set(x.split('-')))))
df.drop_duplicates(inplace=True)
The following creates the output you want, without recursion. I have not tested it with other constellations, though (different order, more combinations, etc.).
a = ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'E', 'E']
b = [3, 1, 2, 3, 12, 4, 7, 8, 3, 10, 12]
df = list(zip(a, b))
print(df)

class Bucket:
    def __init__(self, keys, values):
        self.keys = set(keys)
        self.values = set(values)

    def contains_key(self, key):
        return key in self.keys

    def add_if_contained(self, key, value):
        if value in self.values:
            self.keys.add(key)
            return True
        elif key in self.keys:
            self.values.add(value)
            return True
        return False

    def merge(self, bucket):
        self.keys.update(bucket.keys)
        self.values.update(bucket.values)

    def __str__(self):
        return f'{self.keys} :: {self.values}>'

    def __repr__(self):
        return str(self)

res = []
for tup in df:
    added = False
    if res:
        selected_bucket = None
        remove_idx = None
        for idx, bucket in enumerate(res):
            if not added:
                added = bucket.add_if_contained(tup[0], tup[1])
                selected_bucket = bucket
            elif bucket.contains_key(tup[0]):
                selected_bucket.merge(bucket)
                remove_idx = idx
        if remove_idx is not None:
            res.pop(remove_idx)
    if not added:
        res.append(Bucket({tup[0]}, {tup[1]}))

print(res)
Generates the following output:
$ python test.py
[('A', 3), ('A', 1), ('A', 2), ('B', 3), ('B', 12), ('B', 4), ('C', 7), ('C', 8), ('D', 3), ('E', 10), ('E', 12)]
[{'B', 'D', 'A', 'E'} :: {1, 2, 3, 4, 10, 12}>, {'C'} :: {8, 7}>]
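Stepping back, this is the classic connected-components problem: treat the labels in column 0 as nodes and join two labels whenever they share a value in column 1. Here is a minimal sketch with a hand-rolled union-find (disjoint set), using named columns for clarity and assuming the merged group name should be the sorted member labels joined with '-':

import pandas as pd

a = ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'E', 'E']
b = [3, 1, 2, 3, 12, 4, 7, 8, 3, 10, 12]
df = pd.DataFrame({'a': a, 'b': b})

# union-find over the labels in column 'a'
parent = {lab: lab for lab in df['a'].unique()}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving keeps lookups fast
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# labels that appear with the same value belong to the same component
for _, labels in df.groupby('b')['a']:
    for other in labels.iloc[1:]:
        union(labels.iloc[0], other)

# rename every label to the sorted, joined members of its component
roots = df['a'].map(find)
names = {r: '-'.join(sorted(df.loc[roots == r, 'a'].unique()))
         for r in roots.unique()}
df['a'] = roots.map(names)
print(df)

This avoids both the recursion the question worries about and the repeated re-scanning of the while-loop approach.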

How do you slice a DF using .loc with a list which might have elements not found in index / column

I have a couple of lists which might have elements that are not found in the index / columns of a DataFrame. I want to select rows / columns using these lists such that if an element of a list is not found in the index / columns, it is ignored.
df1 = pd.DataFrame({"x":[1, 2, 3, 4, 5],
"y":[3, 4, 5, 6, 7]},
index=['a', 'b', 'c', 'd', 'e'])
df1.loc[['c', 'd', 'e', 'f'], ['x', 'z']]
I want to get:
x
c 3.0
d 4.0
e 5.0
instead of:
x z
c 3.0 NaN
d 4.0 NaN
e 5.0 NaN
f NaN NaN
I think you need Index.intersection:
a = ['c', 'd', 'e', 'f']
b = ['x', 'z']
print (df1.index.intersection(a))
Index(['c', 'd', 'e'], dtype='object')
print (df1.columns.intersection(b))
Index(['x'], dtype='object')
df2 = df1.loc[df1.index.intersection(a),df1.columns.intersection(b)]
print (df2)
x
c 3
d 4
e 5
Using the filter function:
row_index = ['c', 'd', 'e', 'f']
col_index = ['x', 'z']
df1.filter(row_index, axis=0).filter(col_index, axis=1)
# x
#c 3
#d 4
#e 5
I believe you can just drop rows and columns containing all null values.
>>> df1.loc[['c', 'd', 'e', 'f'], ['x', 'z']].dropna(how='all').dropna(how='all', axis=1)
x
c 3
d 4
e 5
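One caveat to the last variant: since pandas 1.0, passing a list with any missing labels to .loc raises a KeyError instead of returning NaN rows, so the dropna trick only works on older versions, while the intersection and filter approaches keep working. A quick sketch:

import pandas as pd

df1 = pd.DataFrame({"x": [1, 2, 3, 4, 5],
                    "y": [3, 4, 5, 6, 7]},
                   index=['a', 'b', 'c', 'd', 'e'])

rows, cols = ['c', 'd', 'e', 'f'], ['x', 'z']

# df1.loc[rows, cols]   # KeyError on pandas >= 1.0: 'f' and 'z' do not exist
print(df1.loc[df1.index.intersection(rows), df1.columns.intersection(cols)])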

Creating a new column based on selecting by multiple conditions between two pandas dataframes

I have two dataframes that contain (some) common columns (A,B,C), but are ordered differently and have different values for C.
I'd like to replace the 'C' values in the first dataframe with those from the second.
I can create a toy example like this:
A = [ 1, 1, 1, 2, 2, 2, 3, 3, 3 ]
B = [ 'x', 'y', 'z', 'x', 'y', 'y', 'x', 'x', 'x' ]
C = [ 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i' ]
df1 = pd.DataFrame( { 'A' : A,
'B' : B,
'C' : C } )
A.reverse()
B.reverse()
C = [ c.upper() for c in reversed(C) ]
df2 = pd.DataFrame( { 'A' : A,
'B' : B,
'C' : C } )
I'd like to update df1 so that it looks like this - i.e. it has the 'C' values from df2:
A = [ 1, 1, 1, 2, 2, 2, 3, 3, 3 ]
B = [ 'x', 'y', 'z', 'x', 'y', 'y', 'x', 'x', 'x' ]
C = [ 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I' ]
I've tried:
df1['C'] = df2[ (df2['A'] == df1['A']) & (df2['B'] == df1['B']) ]['C']
but that doesn't work because, I think, the order of A and B is different.
merge_df = pd.merge(df1, df2, on=['A', 'B'])
df1['C'] = merge_df['C_y']
I think your toy code has a problem in [ c.upper() for c in C.reverse() ]:
C.reverse() returns None.
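A quick check in the REPL shows why:

>>> C = ['a', 'b', 'c']
>>> print(C.reverse())   # list.reverse() mutates in place and returns None
None
>>> C
['c', 'b', 'a']

so the comprehension would try to iterate over None and raise a TypeError.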
It is not easy, because of the duplicates in columns A and B (the pair (3, 'x') appears three times).
So I create a new column D by cumcount and then use
merge, and last remove the unnecessary columns:
df1['D'] = df1.groupby(['A','B']).C.cumcount()
df2['D'] = df2.groupby(['A','B']).C.cumcount(ascending=False)
df3 = pd.merge(df1, df2, on=['A','B','D'], how='right', suffixes=('_',''))
df3 = df3.drop(['C_', 'D'], axis=1)
print (df3)
A B C
0 1 x A
1 1 y B
2 1 z C
3 2 x D
4 2 y E
5 2 y F
6 3 x G
7 3 x H
8 3 x I
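To see why the extra D column makes the merge one-to-one, here is a small sketch (same toy data) showing the counter cumcount assigns to the duplicated (3, 'x') pairs:

import pandas as pd

A = [1, 1, 1, 2, 2, 2, 3, 3, 3]
B = ['x', 'y', 'z', 'x', 'y', 'y', 'x', 'x', 'x']
C = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i']
df1 = pd.DataFrame({'A': A, 'B': B, 'C': C})

# cumcount numbers repeated (A, B) pairs 0, 1, 2, ..., so every
# (A, B, D) triple is unique and the merge becomes one-to-one
df1['D'] = df1.groupby(['A', 'B']).C.cumcount()
print(df1[df1.A.eq(3) & df1.B.eq('x')])
#    A  B  C  D
# 6  3  x  g  0
# 7  3  x  h  1
# 8  3  x  i  2

df2 gets the same counter with ascending=False because its rows run in reverse order, so equal counters line up exactly the intended pairs.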

Python: Create vectors of same length using two DataFrames

I have two dataframes as follows:
d1 = {'person' : ['1', '1', '1', '2', '2', '3', '3', '4', '4'],
'category' : ['A', 'B', 'C', 'B', 'D', 'E', 'F', 'F', 'D'],
'value' : [2, 3, 1, 2, 1, 4, 2, 1, 3]}
d2 = {'group' : [100, 100, 100, 200, 200, 300, 300],
'category' : ['A', 'D', 'F', 'B', 'C', 'A', 'F'],
'value' : [10, 8, 8, 6, 7, 8, 5]}
I want to get vectors of the same length out of the column category (i.e. indexed by category) for each person and group. In other words, I want to transform these long dataframes into wide format, where the names of the new columns are the values of the column category.
What is the best way to do this? This is an example of what I need:
id type A B C D E F
0 100 group 10 0 0 8 0 8
1 200 group 0 6 7 0 0 0
2 300 group 8 0 0 0 0 5
3 1 person 2 3 1 0 0 0
4 2 person 0 2 0 1 0 0
5 3 person 0 0 0 0 4 2
6 4 person 0 0 0 3 0 1
My current script appends both dataframes and then builds a pivot table. My concern is that in this case the types of the id columns are different.
I do this because sometimes not all the categories are in each dataframe (e.g. 'E' is not in df2).
This is what I have:
import pandas as pd
d1 = {'person' : ['1', '1', '1', '2', '2', '3', '3', '4', '4'],
'category' : ['A', 'B', 'C', 'B', 'D', 'E', 'F', 'F', 'D'],
'value' : [2, 3, 1, 2, 1, 4, 2, 1, 3]}
d2 = {'group' : [100, 100, 100, 200, 200, 300, 300],
'category' : ['A', 'D', 'F', 'B', 'C', 'A', 'F'],
'value' : [10, 8, 8, 6, 7, 8, 5]}
df1 = pd.DataFrame(d1)
df2 = pd.DataFrame(d2)
df1['type'] = 'person'
df2['type'] = 'group'
df1.rename(columns={'person': 'id'}, inplace = True)
df2.rename(columns={'group': 'id'}, inplace = True)
rawpivot = pd.DataFrame([])
rawpivot = rawpivot.append(df1)
rawpivot = rawpivot.append(df2)
pivot = rawpivot.pivot_table(index=['id','type'], columns='category', values='value', aggfunc='sum', fill_value=0)
pivot.reset_index(inplace = True)
import pandas as pd

d1 = {'person': ['1', '1', '1', '2', '2', '3', '3', '4', '4'],
      'category': ['A', 'B', 'C', 'B', 'D', 'E', 'F', 'F', 'D'],
      'value': [2, 3, 1, 2, 1, 4, 2, 1, 3]}
d2 = {'group': [100, 100, 100, 200, 200, 300, 300],
      'category': ['A', 'D', 'F', 'B', 'C', 'A', 'F'],
      'value': [10, 8, 8, 6, 7, 8, 5]}

cols = ['idx', 'type', 'A', 'B', 'C', 'D', 'E', 'F']
df1 = pd.DataFrame(columns=cols)

def add_data(type_, data):
    global df1
    for id_, category, value in zip(data[type_], data['category'], data['value']):
        if id_ not in df1.idx.values:
            row = pd.DataFrame({'idx': id_, 'type': type_}, columns=cols, index=[0])
            df1 = df1.append(row, ignore_index=True)
        df1.loc[df1['idx'] == id_, category] = value

add_data('group', d2)
add_data('person', d1)
df1 = df1.fillna(0)
df1 now holds the following values
idx type A B C D E F
0 100 group 10 0 0 8 0 8
1 200 group 0 6 7 0 0 0
2 300 group 8 0 0 0 0 5
3 1 person 2 3 1 0 0 0
4 2 person 0 2 0 1 0 0
5 3 person 0 0 0 0 4 2
6 4 person 0 0 0 3 0 1
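For comparison, the concat-plus-pivot_table route from the question can be written compactly and avoids the row-by-row loop. One thing to watch: DataFrame.append, used in the answer above, was removed in pandas 2.0, so pd.concat is the durable spelling. A sketch of the same result:

import pandas as pd

d1 = {'person': ['1', '1', '1', '2', '2', '3', '3', '4', '4'],
      'category': ['A', 'B', 'C', 'B', 'D', 'E', 'F', 'F', 'D'],
      'value': [2, 3, 1, 2, 1, 4, 2, 1, 3]}
d2 = {'group': [100, 100, 100, 200, 200, 300, 300],
      'category': ['A', 'D', 'F', 'B', 'C', 'A', 'F'],
      'value': [10, 8, 8, 6, 7, 8, 5]}

df1 = pd.DataFrame(d1).rename(columns={'person': 'id'}).assign(type='person')
df2 = pd.DataFrame(d2).rename(columns={'group': 'id'}).assign(type='group')

# pivot_table fills category/id combinations missing from one frame with 0
pivot = (pd.concat([df1, df2], ignore_index=True)
           .pivot_table(index=['id', 'type'], columns='category',
                        values='value', aggfunc='sum', fill_value=0)
           .reset_index())
print(pivot)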
