Pythonic way to conditionally count across multiple columns - python

I'm trying to write an ordinary loop under specific conditions.
I want to iterate over rows, checking a condition, and then iterate over columns counting how many times the condition was met.
This count should go into a new column in my dataframe holding the total for each row.
I tried to use apply and applymap with no success.
I did manage to write the following code to reach my goal.
But I bet there are more efficient ways, or even built-in pandas functions, to do it.
Does anyone know how?
sample code:
import pandas as pd
df = pd.DataFrame({'1column': [11, 22, 33, 44],
                   '2column': [32, 42, 15, 35],
                   '3column': [33, 77, 26, 64],
                   '4column': [99, 11, 110, 22],
                   '5column': [20, 64, 55, 33],
                   '6column': [10, 77, 77, 10]})
check_columns = ['3column', '5column', '6column']
df1 = df.copy()
df1['bignum_count'] = 0
for column in check_columns:
    inner_loop_count = []
    bigseries = df[column] >= 50
    for big in bigseries:
        if big:
            inner_loop_count.append(1)
        else:
            inner_loop_count.append(0)
    df1['bignum_count'] += inner_loop_count
# View the dataframe
df1
results:
1column 2column 3column 4column 5column 6column bignum_count
0 11 32 33 99 20 10 0
1 22 42 77 11 64 77 3
2 33 15 26 110 55 77 2
3 44 35 64 22 33 10 1

Index into the columns of interest and check which values are greater than or equal (ge) to a threshold:
df['bignum_count'] = df[check_columns].ge(50).sum(axis=1)
print(df)
1column 2column 3column 4column 5column 6column bignum_count
0 11 32 33 99 20 10 0
1 22 42 77 11 64 77 3
2 33 15 26 110 55 77 2
3 44 35 64 22 33 10 1

Use DataFrame.ge for >= and count the True values with sum:
df['bignum_count'] = df[check_columns].ge(50).sum(axis=1)
#alternative
#df['bignum_count'] = (df[check_columns]>=50).sum(axis=1)
print(df)
1column 2column 3column 4column 5column 6column bignum_count
0 11 32 33 99 20 10 0
1 22 42 77 11 64 77 3
2 33 15 26 110 55 77 2
3 44 35 64 22 33 10 1
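For completeness, a minimal sketch of the intermediate step both answers rely on: ge(50) produces a boolean DataFrame, and summing across axis=1 counts the Trues per row (column names as in the sample above):
mask = df[check_columns].ge(50)  # boolean DataFrame: True where value >= 50
print(mask)
#    3column  5column  6column
# 0    False    False    False
# 1     True     True     True
# 2    False     True     True
# 3     True    False    False
print(mask.sum(axis=1))  # row-wise count of True -> 0, 3, 2, 1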

Related

split column header number and string in python

Here is my toy example. My question is how to create a new column called trial = {2, 3}, where 2 and 3 come from the number part of the column names 2.0__sum_values, 3.0__sum_values, etc.
my code is:
import pandas as pd
before_spliting = {"ID": [1, 2,3], "2.0__sum_values": [33,28,40],"2.0__mediane": [33,70,20],"2.0__root_mean_square":[33,4,30],"3.0__sum_values": [33,28,40],"3.0__mediane": [33,70,20],"3.0__root_mean_square":[33,4,30]}
before_spliting = pd.DataFrame(before_spliting)
print(before_spliting)
ID 2.0__sum_values 2.0__mediane 2.0__root_mean_square 3.0__sum_values \
0 1 33 33 33 33
1 2 28 70 4 28
2 3 40 20 30 40
3.0__mediane 3.0__root_mean_square
0 33 33
1 70 4
2 20 30
after_spliting = { "ID": [1,1,2, 2,3,3], "trial": [2, 3,2,3,2,3],"sum_values": [33,33,28,28,40,40],"mediane": [33,33,70,70,20,20],"root_mean_square":[33,33,4,4,30,30]}
after_spliting = pd.DataFrame(after_spliting)
print(after_spliting)
ID trial sum_values mediane root_mean_square
0 1 2 33 33 33
1 1 3 33 33 33
2 2 2 28 70 4
3 2 3 28 70 4
4 3 2 40 20 30
5 3 3 40 20 30
You could try:
res = df.melt(id_vars="ID")
res[["trial", "columns"]] = res["variable"].str.split("__", expand=True)
res = (
    res
    .pivot_table(
        index=["ID", "trial"], columns="columns", values="value", aggfunc=list
    )
    .explode(sorted(set(res["columns"])))
    .reset_index()
)
Result for the following input dataframe
data = {
    "ID": [1, 2, 3],
    "2.0__sum_values": [33, 28, 40], "2.0__mediane": [43, 80, 30], "2.0__root_mean_square": [37, 4, 39],
    "3.0__sum_values": [34, 29, 41], "3.0__mediane": [44, 81, 31], "3.0__root_mean_square": [38, 5, 40]
}
df = pd.DataFrame(data)
is
columns ID trial mediane root_mean_square sum_values
0 1 2.0 43 37 33
1 1 3.0 44 38 34
2 2 2.0 80 4 28
3 2 3.0 81 5 29
4 3 2.0 30 39 40
5 3 3.0 31 40 41
Alternative solution with the same output:
res = df.melt(id_vars="ID")
res[["trial", "columns"]] = res["variable"].str.split("__", expand=True)
res = res.set_index(["ID", "trial"]).drop(columns="variable").sort_index()
res = pd.concat(
    (group[["value"]].rename(columns={"value": key})
     for key, group in res.groupby("columns")),
    axis=1
).reset_index()
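A third route, sketched here under the same "<trial>__<stat>" naming assumption: split the headers into a MultiIndex and stack the trial level (before_spliting is the dataframe built in the question):
tmp = before_spliting.set_index("ID")
tmp.columns = pd.MultiIndex.from_tuples(
    [tuple(c.split("__")) for c in tmp.columns], names=["trial", None]
)
res = tmp.stack(level="trial").reset_index()
print(res)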
Because you are using curly brackets {} (a dictionary), it is not possible to have duplicate keys inside them, so you can instead build a list [] for creating the new column:
trial = []
for i in range(len(d1)):  # d1 is the original wide dataframe
    trial.append([d1['2.0__sum_values'][i], d1['3.0__sum_values'][i]])
d1['trial'] = trial
Best of Luck

1 to 2 matching in two dataframes with different sizes in Python/R

Please help me with this problem, I've been struggling with it all day! A solution in either Python or R is fine.
I have two dataframes - df1 has 44 rows, df2 has 100 rows, they both have these columns:
ID, status (0,1), Age, Gender, Race, Ethnicity, Height, Weight
For each row in df1, I need to find an age match in df2:
It can be an exact age match, but the criterion to use is df2[age] - 5 <= df1[age] <= df2[age] + 5.
I need a list/dictionary to store which df2 rows are the age matches for df1, and their IDs.
Then I need to randomly select 2 IDs from df2 as the final match for the df1 age.
I also need to make sure the 2 df2 matches share the same gender and race as the df1 row.
I have tried R and Python, and both got stuck on the nested-loops part.
I'm not sure how to loop through each record in both df1 and df2, compare the df1 age with df2 age - 5 and df2 age + 5, and store the matches.
Here is the sample data format for df1 and df2:
| ID     | sex    | age  | race |
| ------ | ------ | ---- | ---- |
| 284336 | female | 42.8 | 2    |
| 294123 | male   | 48.5 | 1    |
Here is what I've attempted in R:
id_match <- NULL
for (i in 1:nrow(gwi_case)) {
  age <- gwi_case$age[i]
  gender <- gwi_case$gender[i]
  ethnicity <- gwi_case$hispanic_non[i]
  race <- gwi_case$race[i]
  x <- which(gwi_control$gender == gender &
             gwi_control$age >= age - 5 & gwi_control$age <= age + 5 &
             gwi_control$hispanic_non == ethnicity & gwi_control$race == race)
  y <- sample(x, min(2, length(x)))
  id_match <- c(id_match, y)
}
id_match <- id_match[!duplicated(id_match)]
length(id_match)
The question asks this:
for each row in df1, find an age match in df2 such that df2[age] - 5 <= df1[age] <= df2[age] + 5
create a list/dictionary to hold age matches and IDs for df1
randomly select 2 IDs from df2 as the final match for df1 age
Here is some Python code that:
uses the criteria to populate list of lists ageMatches with a list of unique df2 ages matching each unique df1 age
calls DataFrame.query() on df2 for each age in df1 to populate idMatches with a list of df2 IDs with age matching each unique df1 age
populates age1ToID2 with unique df1 age keys and with values that are lists of 2 (or fewer if available number < 2) randomly selected df2 IDs of matching age
adds a column to df1 containing the pair of selected df2 IDs corresponding to each row's age (i.e., the values in age1ToID2)
import pandas as pd
import numpy as np

df1 = pd.DataFrame({'ID': list(range(101, 145)),
                    'Age': [v % 11 + 21 for v in range(44)],
                    'Height': [67] * 44})
df2 = pd.DataFrame({'ID': list(range(1, 101)),
                    'Age': [v % 10 + 14 for v in range(50)] + [v % 20 + 25 for v in range(0, 100, 2)],
                    'Height': [67] * 100})

ages1 = np.sort(df1['Age'].unique())
ages2 = np.sort(df2['Age'].unique())

# two-pointer sweep over the sorted unique ages to collect df2 ages within +/- 5
ageMatches = [[] for _ in ages1]
j1, j2 = 0, 0
for i, age1 in enumerate(ages1):
    while j1 < len(ages2) and ages2[j1] < age1 - 5:
        j1 += 1
    if j2 <= j1:
        j2 = j1 + 1
    while j2 < len(ages2) and ages2[j2] <= age1 + 5:
        j2 += 1
    ageMatches[i] += list(ages2[j1:j2])

# the @m syntax lets query() reference the local variable m
idMatches = [df2.query('Age in @m')['ID'].to_list() for m in ageMatches]

# select random pair of df2 IDs for each unique df1 age and put them into a new df1 column
from random import sample
age1ToID2 = {ages1[i]: m if len(m) < 2 else sample(m, 2) for i, m in enumerate(idMatches)}
df1['df2_matches'] = df1['Age'].apply(lambda x: age1ToID2[x])
print(df1)
Output:
ID Age Height df2_matches
0 101 21 67 [24, 30]
1 102 22 67 [50, 72]
2 103 23 67 [10, 37]
3 104 24 67 [63, 83]
4 105 25 67 [83, 49]
5 106 26 67 [20, 52]
6 107 27 67 [49, 84]
7 108 28 67 [54, 55]
8 109 29 67 [91, 55]
9 110 30 67 [65, 51]
10 111 31 67 [75, 72]
11 112 21 67 [24, 30]
...
42 143 30 67 [65, 51]
43 144 31 67 [75, 72]
This hopefully provides the result and intermediate collections that OP is asking for, or something close enough to get to the desired result.
Alternatively, to have the random selection be different for each row in df1, we can do this:
# select random pair of df2 IDs for each df1 row and put them into a new df1 column
from random import sample

age1ToID2 = {ages1[i]: m for i, m in enumerate(idMatches)}

def foo(x):
    m = age1ToID2[x]
    return m if len(m) < 2 else sample(m, 2)

df1['df2_matches'] = df1['Age'].apply(foo)
print(df1)
Output:
ID Age Height df2_matches
0 101 21 67 [71, 38]
1 102 22 67 [71, 5]
2 103 23 67 [9, 38]
3 104 24 67 [49, 61]
4 105 25 67 [27, 93]
5 106 26 67 [40, 20]
6 107 27 67 [9, 19]
7 108 28 67 [53, 72]
8 109 29 67 [82, 53]
9 110 30 67 [74, 62]
10 111 31 67 [52, 62]
11 112 21 67 [71, 39]
...
42 143 30 67 [96, 66]
43 144 31 67 [63, 83]
Not sure I fully understand the requirement, but in Python you can use apply on the dataframe with a lambda function to perform some funky things:
df1['age_matched_ids'] = df1.apply(
    lambda x: list(df2.loc[(df2['Age'] >= x['Age'] - 5) & (df2['Age'] <= x['Age'] + 5), 'ID']),
    axis=1
)
This will store in column 'age_matched_ids' the list of IDs from df2 that fall within Age +/- 5; note the parentheses around each comparison, which are required because & binds more tightly than the comparisons. You can do #2 and #3 from here.
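Building on that column, a hedged sketch of steps 2 and 3 (the column and dataframe names follow this thread; the sampling mirrors the sample(x, min(2, length(x))) idea from the R attempt):
from random import sample

# pick 2 random matching IDs per row, or fewer when fewer matches exist
df1['final_match'] = df1['age_matched_ids'].apply(
    lambda ids: sample(ids, min(2, len(ids)))
)
To also enforce the gender/race requirement, extend the boolean condition above with, e.g., (df2['sex'] == x['sex']) & (df2['race'] == x['race']) before sampling.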

Select columns and create new dataframe

I have a dataframe with more than 5000 columns but here is an example what it looks like:
data = {'AST_0-1': [1, 2, 3],
        'AST_0-45': [4, 5, 6],
        'AST_0-135': [7, 8, 20],
        'AST_10-1': [10, 20, 32],
        'AST_10-45': [47, 56, 67],
        'AST_10-135': [48, 57, 64],
        'AST_110-1': [100, 85, 93],
        'AST_110-45': [100, 25, 37],
        'AST_110-135': [44, 55, 67]}
I want to create multiple new dataframes based on the numbers after the "-" in the column names. For example, a dataframe with all the columns that end with "1" [df1 = (AST_0-1; AST_10-1; AST_110-1)], another that ends with "45", and another that ends with "135". I know I will need a loop to do that, but I am having trouble selecting the columns to then create the dataframes.
You can use str.extract on the column names to get the wanted ID, then groupby on axis=1.
Here creating a dictionary of dataframes.
group = df.columns.str.extract(r'(\d+)$', expand=False)
out = dict(list(df.groupby(group, axis=1)))
Output:
{'1': AST_0-1 AST_10-1 AST_110-1
0 1 10 100
1 2 20 85
2 3 32 93,
'135': AST_0-135 AST_10-135 AST_110-135
0 7 48 44
1 8 57 55
2 20 64 67,
'45': AST_0-45 AST_10-45 AST_110-45
0 4 47 100
1 5 56 25
2 6 67 37}
Accessing ID 135:
out['135']
AST_0-135 AST_10-135 AST_110-135
0 7 48 44
1 8 57 55
2 20 64 67
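Note that groupby(..., axis=1) is deprecated in recent pandas (2.1+); under that assumption, a dict comprehension over the unique suffixes builds the same dictionary of dataframes:
out = {g: df.loc[:, group == g] for g in group.unique()}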
Use:
df = pd.DataFrame(data)
dfs = dict(list(df.groupby(df.columns.str.rsplit('-', n=1).str[1], axis=1)))
Output:
>>> dfs
{'1': AST_0-1 AST_10-1 AST_110-1
0 1 10 100
1 2 20 85
2 3 32 93,
'135': AST_0-135 AST_10-135 AST_110-135
0 7 48 44
1 8 57 55
2 20 64 67,
'45': AST_0-45 AST_10-45 AST_110-45
0 4 47 100
1 5 56 25
2 6 67 37}
I know it's strongly discouraged but maybe you want to create dataframes like df1, df135, df45. In this case, you can use:
for name, df in dfs.items():
    locals()[f'df{name}'] = df
>>> df1
AST_0-1 AST_10-1 AST_110-1
0 1 10 100
1 2 20 85
2 3 32 93
>>> df135
AST_0-135 AST_10-135 AST_110-135
0 7 48 44
1 8 57 55
2 20 64 67
>>> df45
AST_0-45 AST_10-45 AST_110-45
0 4 47 100
1 5 56 25
2 6 67 37
data = {'AST_0-1': [1, 2, 3],
        'AST_0-45': [4, 5, 6],
        'AST_0-135': [7, 8, 20],
        'AST_10-1': [10, 20, 32],
        'AST_10-45': [47, 56, 67],
        'AST_10-135': [48, 57, 64],
        'AST_110-1': [100, 85, 93],
        'AST_110-45': [100, 25, 37],
        'AST_110-135': [44, 55, 67]}
import pandas as pd
df = pd.DataFrame(data)
value_list = ["1", "45", "135"]
for value in value_list:
    interest_columns = [col for col in df.columns if col.split("-")[1] == value]
    df_filtered = df[interest_columns]
    print(df_filtered)
Output:
AST_0-1 AST_10-1 AST_110-1
0 1 10 100
1 2 20 85
2 3 32 93
AST_0-45 AST_10-45 AST_110-45
0 4 47 100
1 5 56 25
2 6 67 37
AST_0-135 AST_10-135 AST_110-135
0 7 48 44
1 8 57 55
2 20 64 67
I assume your problem is with the keys of the dictionary. You can get a list of the keys with data.keys() and then iterate over it. For example:
df1 = pd.DataFrame()
df45 = pd.DataFrame()
df135 = pd.DataFrame()
for i in list(data.keys()):
    the_key = i.split('-')
    if the_key[1] == '1':
        df1[i] = data[i]
    elif the_key[1] == '45':
        df45[i] = data[i]
    elif the_key[1] == '135':
        df135[i] = data[i]

Pandas first 5 and last 5 rows in single iloc operation

I need to check df.head() and df.tail() many times.
When using df.head() and df.tail() separately, Jupyter Notebook displays two ugly outputs.
Is there a single-line command to select only the first 5 and last 5 rows, something like:
df.iloc[:5 | -5:] ?
Test example:
df = pd.DataFrame(np.random.rand(20,2))
df.iloc[:5]
Update
Ugly but working ways:
df.iloc[(np.where((df.index < 5) | (df.index >= len(df) - 5)))[0]]
or,
df.iloc[np.r_[np.arange(5), np.arange(df.shape[0]-5, df.shape[0])]]
Try looking at numpy.r_:
df.iloc[np.r_[0:5, -5:0]]
Out[358]:
0 1
0 0.899673 0.584707
1 0.443328 0.126370
2 0.203212 0.206542
3 0.562156 0.401226
4 0.085070 0.206960
15 0.082846 0.548997
16 0.435308 0.669673
17 0.426955 0.030303
18 0.327725 0.340572
19 0.250246 0.162993
Also, head + tail is not a bad solution:
df.head(5).append(df.tail(5))
Out[362]:
0 1
0 0.899673 0.584707
1 0.443328 0.126370
2 0.203212 0.206542
3 0.562156 0.401226
4 0.085070 0.206960
15 0.082846 0.548997
16 0.435308 0.669673
17 0.426955 0.030303
18 0.327725 0.340572
19 0.250246 0.162993
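If you are on pandas 2.x, where DataFrame.append was removed, the same idea works with pd.concat:
pd.concat([df.head(5), df.tail(5)])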
df.query("index<5 | index>"+str(len(df)-5))
Here's a way to query the index. You can change the values to whatever you want.
Another approach (per this SO post)
uses only Pandas .isin()
Generate some dummy/demo data
df = pd.DataFrame({'a':range(10,100)})
print(df.head())
a
0 10
1 11
2 12
3 13
4 14
print(df.tail())
a
85 95
86 96
87 97
88 98
89 99
print(df.shape)
(90, 1)
Generate list of required indexes
ls = list(range(5)) + list(range(len(df)-5, len(df)))
print(ls)
[0, 1, 2, 3, 4, 85, 86, 87, 88, 89]
Slice DataFrame using list of indexes
df_first_last_5 = df[df.index.isin(ls)]
print(df_first_last_5)
a
0 10
1 11
2 12
3 13
4 14
85 95
86 96
87 97
88 98
89 99

new python pandas dataframe column based on value of variable, using function

I have a variable, 'ImageName' which ranges from 0-1600. I want to create a new variable, 'LocationCode', based on the value of 'ImageName'.
If 'ImageName' is less than 70, I want 'LocationCode' to be 1. if 'ImageName' is between 71 and 90, I want 'LocationCode' to be 2. I have 13 different codes in all. I'm not sure how to write this in python pandas. Here's what I tried:
def spatLoc(ImageName):
    if ImageName <= 70:
        LocationCode = 1
    elif ImageName > 70 and ImageName <= 90:
        LocationCode = 2
    return LocationCode

df['test'] = df.apply(spatLoc(df['ImageName'])
but it returned an error. I'm clearly not defining things the right way but I can't figure out how to.
You can just use 2 boolean masks:
df.loc[df['ImageName'] <= 70, 'Test'] = 1
df.loc[(df['ImageName'] > 70) & (df['ImageName'] <= 90), 'Test'] = 2
By using the masks you only set the value where the boolean condition is met. For the second mask you need the & operator to combine the two conditions, and you must enclose each condition in parentheses due to operator precedence.
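With 13 codes, chaining 13 masks gets verbose. Here is a sketch of the same pattern using np.select; only the first two bins from the question are filled in, the rest are placeholders to extend:
import numpy as np

conditions = [
    df['ImageName'] <= 70,
    (df['ImageName'] > 70) & (df['ImageName'] <= 90),
    # ... the remaining 11 conditions
]
choices = [1, 2]  # ... the matching location codes
df['Test'] = np.select(conditions, choices, default=-1)  # -1 marks "no bin matched"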
Actually I think it would be better to define your bin values and call cut, example:
In [20]:
df = pd.DataFrame({'ImageName': np.random.randint(0, 100, 20)})
df
Out[20]:
ImageName
0 48
1 78
2 5
3 4
4 9
5 81
6 49
7 11
8 57
9 17
10 92
11 30
12 74
13 62
14 83
15 21
16 97
17 11
18 34
19 78
In [22]:
df['group'] = pd.cut(df['ImageName'], range(0, 105, 10), right=False)
df
Out[22]:
ImageName group
0 48 [40, 50)
1 78 [70, 80)
2 5 [0, 10)
3 4 [0, 10)
4 9 [0, 10)
5 81 [80, 90)
6 49 [40, 50)
7 11 [10, 20)
8 57 [50, 60)
9 17 [10, 20)
10 92 [90, 100)
11 30 [30, 40)
12 74 [70, 80)
13 62 [60, 70)
14 83 [80, 90)
15 21 [20, 30)
16 97 [90, 100)
17 11 [10, 20)
18 34 [30, 40)
19 78 [70, 80)
Here the bin values were generated using range, but you could pass your own list of bin values. Once you have the bins, you can define a lookup dict:
In [32]:
d = dict(zip(df['group'].unique(), range(len(df['group'].unique()))))
d
Out[32]:
{'[0, 10)': 2,
'[10, 20)': 4,
'[20, 30)': 9,
'[30, 40)': 7,
'[40, 50)': 0,
'[50, 60)': 5,
'[60, 70)': 8,
'[70, 80)': 1,
'[80, 90)': 3,
'[90, 100)': 6}
You can now call map and add your new column:
In [33]:
df['test'] = df['group'].map(d)
df
Out[33]:
ImageName group test
0 48 [40, 50) 0
1 78 [70, 80) 1
2 5 [0, 10) 2
3 4 [0, 10) 2
4 9 [0, 10) 2
5 81 [80, 90) 3
6 49 [40, 50) 0
7 11 [10, 20) 4
8 57 [50, 60) 5
9 17 [10, 20) 4
10 92 [90, 100) 6
11 30 [30, 40) 7
12 74 [70, 80) 1
13 62 [60, 70) 8
14 83 [80, 90) 3
15 21 [20, 30) 9
16 97 [90, 100) 6
17 11 [10, 20) 4
18 34 [30, 40) 7
19 78 [70, 80) 1
The above can be modified to suit your needs but it's just to demonstrate an approach which should be fast and without the need to iterate over your df.
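If the 13 codes correspond to ordered, non-overlapping ranges, cut can also assign them directly via labels, skipping the lookup dict; the bin edges beyond 90 below are placeholders for your real boundaries:
df['LocationCode'] = pd.cut(df['ImageName'], bins=[0, 70, 90, 1600],
                            labels=[1, 2, 3], include_lowest=True)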
In Python, you use the dictionary lookup notation to find a field within a row. The field name is ImageName. In the spatLoc() function below, the parameter row is a dictionary containing the entire row, and you would find an individual column by using the field name as key to the dictionary.
def spatLoc(row):
    if row['ImageName'] <= 70:
        LocationCode = 1
    elif row['ImageName'] > 70 and row['ImageName'] <= 90:
        LocationCode = 2
    return LocationCode

df['test'] = df.apply(spatLoc, axis=1)
