Subset original dataframe based on grouped quantiles - python

This is my df:
NAME DEPTH A1 A2 A3 AA4 AA5 AI4 AC5 Surface
0 Ron 2800.04 8440.53 1330.99 466.77 70.19 56.79 175.96 77.83 C
1 Ron 2801.04 6084.15 997.13 383.31 64.68 51.09 154.59 73.88 C
2 Ron 2802.04 4496.09 819.93 224.12 62.18 47.61 108.25 63.86 C
3 Ben 2803.04 5766.04 927.69 228.41 65.51 49.94 106.02 62.61 L
4 Ron 2804.04 6782.89 863.88 223.79 63.68 47.69 101.95 61.83 L
... ... ... ... ... ... ... ... ... ... ...
So, my first problem has been answered here:
Find percentile in pandas dataframe based on groups
Using:
df.groupby('Surface')['DEPTH'].quantile([.1, .9])
I can get the percentiles [.1,.9] from DEPTH grouped by Surface, which is what I need:
Surface
C 0.1 2800.24
0.9 2801.84
L 0.1 3799.74
0.9 3960.36
N 0.1 2818.24
0.9 2972.86
P 0.1 3834.94
0.9 4001.16
Q 0.1 3970.64
0.9 3978.62
R 0.1 3946.14
0.9 4115.96
S 0.1 3902.03
0.9 4073.26
T 0.1 3858.14
0.9 4029.96
U 0.1 3583.01
0.9 3843.76
V 0.1 3286.01
0.9 3551.06
Y 0.1 2917.00
0.9 3135.86
X 0.1 3100.01
0.9 3345.76
Z 0.1 4128.56
0.9 4132.56
Name: DEPTH, dtype: float64
Now, I believe that was already the hardest part. What is left is subsetting the original df to include only the values in between those DEPTH percentiles .1 & .9. So for example: DEPTH values in Surface group "Z" have to be greater than 4128.56 and less than 4132.56.
Note that I need df again, not df.groupby("Surface"): the final df would be exactly the same, but the rows whose depths are outside the borders should be dropped.
This seems so easy ... any ideas?
Thanks!

When you need to filter rows within groups, it's often simpler and faster to use groupby + transform to broadcast the group result to every row, and then filter the original DataFrame. In this case we can check whether 'DEPTH' is between those two quantiles.
Sample Data
import pandas as pd
import numpy as np
np.random.seed(42)
df = pd.DataFrame({'DEPTH': np.random.normal(0, 1, 100),
                   'Surface': np.random.choice(list('abcde'), 100)})
Code
gp = df.groupby('Surface')['DEPTH']
df1 = df[df['DEPTH'].between(gp.transform('quantile', 0.1),
                             gp.transform('quantile', 0.9))]
For clarity, here you can see that transform will broadcast the scalar result to every row that belongs to the group, in this case defined by 'Surface':
pd.concat([df['Surface'], gp.transform('quantile', 0.1).rename('q = 0.1')], axis=1)
# Surface q = 0.1
#0 a -1.164557
#1 e -0.967809
#2 a -1.164557
#3 c -1.426986
#4 b -1.544816
#.. ... ...
#95 a -1.164557
#96 e -0.967809
#97 b -1.544816
#98 b -1.544816
#99 b -1.544816
#
#[100 rows x 2 columns]
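If you prefer to start from the quantile Series you already computed in the question, a minimal alternative sketch (assuming your original df with the DEPTH and Surface columns) is to unstack it and map the bounds back onto the rows:
# Sketch only: reuse the Series from df.groupby('Surface')['DEPTH'].quantile([.1, .9])
q = df.groupby('Surface')['DEPTH'].quantile([.1, .9]).unstack()  # columns 0.1 and 0.9
lo = df['Surface'].map(q[0.1])   # lower bound per row
hi = df['Surface'].map(q[0.9])   # upper bound per row
out = df[df['DEPTH'].between(lo, hi)]
This selects the same rows as df1 above.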

Related

Filling dataframe with average of previous columns values

I have a dataframe with 5 columns that have missing values.
How do I fill the missing values by taking the average of the previous two columns' values?
Here is the sample code for the same.
coh0 = [0.5, 0.3, 0.1, 0.2,0.2]
coh1 = [0.4,0.3,0.6,0.5]
coh2 = [0.2,0.2,0.3]
coh3 = [0.8,0.8]
coh4 = [0.5]
df = pd.DataFrame({'coh0': pd.Series(coh0), 'coh1': pd.Series(coh1),
                   'coh2': pd.Series(coh2), 'coh3': pd.Series(coh3),
                   'coh4': pd.Series(coh4)})
df
Here is the sample output
  coh0 coh1 coh2 coh3 coh4
0 0.5 0.4 0.2 0.8 0.5
1 0.3 0.3 0.2 0.8 NaN
2 0.1 0.6 0.3 NaN NaN
3 0.2 0.5 NaN NaN NaN
4 0.2 NaN NaN NaN NaN
Here is the desired result i am looking for.
The NaN values in each column should be replaced by the average of the previous two columns' values at the same position. However, the first NaN value in the second column should take the last value of the first column by default.
The sample desired output would be like below.
For the exception you named, the first NaN, you can do
df.iloc[1, -1] = df.iloc[0, -1]
though it doesn't make a difference in this case as the mean of .2 and .8 is .5, anyway.
Either way, the rest is something like a rolling window calculation, except it has to be computed incrementally. Normally, you want to vectorize your operations and avoid iterating over the dataframe, but IMHO this is one of the rarer cases where it's actually appropriate to loop over the columns (cf. this excellent post), i.e.,
compute the row-wise (axis=1) mean of up to two columns left of the current one (df.iloc[:, max(0, i-2):i]),
and fill its NaN values from the resulting series.
for i in range(1, df.shape[1]):
    mean_df = df.iloc[:, max(0, i-2):i].mean(axis=1)
    df.iloc[:, i] = df.iloc[:, i].fillna(mean_df)
which results in
coh0 coh1 coh2 coh3 coh4
0 0.5 0.4 0.20 0.800 0.5000
1 0.3 0.3 0.20 0.800 0.5000
2 0.1 0.6 0.30 0.450 0.3750
3 0.2 0.5 0.35 0.425 0.3875
4 0.2 0.2 0.20 0.200 0.2000

calculate cosine similarity for all columns in a group by in a dataframe

I have a dataframe df, where the APerc columns range from 0 to 60:
ID FID APerc0 ... APerc60
0 X 0.2 ... 0.5
1 Z 0.1 ... 0.3
2 Y 0.4 ... 0.9
3 X 0.2 ... 0.3
4 Z 0.9 ... 0.1
5 Z 0.1 ... 0.2
6 Y 0.8 ... 0.3
7 W 0.5 ... 0.4
8 X 0.6 ... 0.3
I want to calculate the cosine similarity of the values for all APerc columns between each row. So the result for the above should be:
ID CosSim
1 0,2,4 0.997
2 1,8,7 0.514
1 3,5,6 0.925
I know how to generate cosine similarity for the whole df:
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(df)
But I want to find the similarity for each ID and group them together (or create a separate df). How can I do this fast for a big dataset?
One possible solution could be to get the particular rows you want to use for the cosine similarity computation and do the following.
Here, combinations is the list of row-index pairs you want to consider for the computation.
import torch
import torch.nn as nn

cos = nn.CosineSimilarity(dim=0)
for i in range(len(combinations)):
    # select the APerc columns of each row and convert them to tensors
    row1 = torch.tensor(df.loc[combinations[i][0], 'APerc0':'APerc60'].to_numpy(dtype=float))
    row2 = torch.tensor(df.loc[combinations[i][1], 'APerc0':'APerc60'].to_numpy(dtype=float))
    sim = cos(row1, row2)
    print(sim)
You can then use the result in whatever way you want.
Create a function for the calculation, then use df.apply(cosine_similarity_function); it is said that using apply can perform hundreds of times faster than going row by row.
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html
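One rough way to sketch that idea, assuming the df shown in the question with columns APerc0 ... APerc60, is to group by FID and let cosine_similarity build the pairwise matrix for each group in one vectorized call:
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

aperc_cols = [c for c in df.columns if c.startswith('APerc')]  # APerc0 ... APerc60

def group_cos_sim(g):
    # pairwise cosine similarity between all rows of one FID group
    return pd.DataFrame(cosine_similarity(g[aperc_cols].to_numpy()),
                        index=g.index, columns=g.index)

sims = {fid: group_cos_sim(g) for fid, g in df.groupby('FID')}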

correlation matrix filtering based on high variables correlation with selection of least correlated with target variable at scale using vectors

I have this resulting correlation matrix:
id  row  col  corr  target_corr
0   a    b    0.95  0.2
1   a    c    0.7   0.2
2   a    d    0.2   0.2
3   b    a    0.95  0.7
4   b    c    0.35  0.7
5   b    d    0.65  0.7
6   c    a    0.7   0.6
7   c    b    0.35  0.6
8   c    d    0.02  0.6
9   d    a    0.2   0.3
10  d    b    0.65  0.3
11  d    c    0.02  0.3
After filtering the highly correlated variable pairs based on the "corr" column, I try to add a new column that marks "keep" for the variable in "row" that is least correlated with the target and "drop" for the variable that is most correlated with it, using the "target_corr" column. In other words, from the correlated pairs matching cut > 0.5, select the one least correlated to "target_corr":
Expected result:
id  row  col  corr  target_corr  drop/keep
0   a    b    0.95  0.2          keep
1   a    c    0.7   0.2          keep
2   b    a    0.95  0.7          drop
3   b    d    0.65  0.7          drop
4   c    a    0.7   0.6          drop
5   d    b    0.65  0.3          keep
This approach has to handle very large dataframes, so the resulting correlation matrix can be larger than 100k x 100k and is generated using pyspark:
def corrwith_matrix_no_save(df, data_cols=None, select_targets=None, method='pearson'):
    import time
    import numpy as np
    import pandas as pd
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.stat import Correlation

    start_time = time.time()
    vector_col = "corr_features"
    if data_cols is None and select_targets is None:
        data_cols = df.columns
        select_targets = list(df.columns)
    assembler = VectorAssembler(inputCols=data_cols, outputCol=vector_col)
    df_vector = assembler.transform(df).select(vector_col)
    matrix = Correlation.corr(df_vector, vector_col, method)
    result = matrix.collect()[0]["pearson({})".format(vector_col)].values
    final_df = pd.DataFrame(result.reshape(-1, len(data_cols)), columns=data_cols, index=data_cols)
    final_df = final_df.apply(lambda x: x.abs() if np.issubdtype(x.dtype, np.number) else x)
    corr_df = final_df[select_targets]
    #corr_df.columns = [str(col) + '_corr' for col in corr_df.columns]
    corr_df['column_names'] = corr_df.index
    print('Execution time for correlation_matrix function:', time.time() - start_time)
    return corr_df
I created the dataframe from the upper triangle with numpy.triu and numpy.stack, and added the target column by merging the two resulting dataframes (I can provide that code if required, but it would make the post much longer, so I will add it only if clarification is needed).
def corrX_to_ls(corr_mtx):
    # Get correlation matrix and upper triangle
    df_target = corr_mtx['target']
    corr_df = corr_mtx.drop(columns='target')
    up = corr_df.where(np.triu(np.ones(corr_df.shape), k=1).astype(bool))
    print('This is triu: \n', up)
    df = up.stack().reset_index()
    df.columns = ['row', 'col', 'corr']
    df_lsDF = df.query("row != col")
    df_target_corr = df_target.reset_index()
    df_target_corr.columns = ['target_col', 'target_corr']
    sample_df = df_lsDF.merge(df_target_corr, how='left', left_on='row', right_on='target_col')
    sample_df = sample_df.drop(columns='target_col')
    return sample_df
Now, after filtering the resulting dataframe on corr > cut, where cut > 0.50, I got stuck at marking which variable to keep and which to drop (I only want to mark them, then collect the variables into a list) ... so help on solving it will be greatly appreciated and will also benefit the community when working on distributed systems.
Note: I am looking for an example/solution that scales, so I can distribute the operations on executors; processing lists or groups/subsets of the dataframe in parallel while avoiding loops is what I am after, so numpy.vectorize, threading and/or multiprocessing approaches are what I am looking for.
Additional "thinking" from top of my mind: I do think on grouping by
"row" column so can distribute processing each group on executors or
by using lists distribute processing in parallel on executors so each
list will generate a job for each thread from ThreadPool ( I done
done this approach for column vectors but for very large
matrix/dataframes can become inefficient so for rows I think will
work).
Given final_df as the sample input, you can try:
# filter
output = final_df.query('corr>target_corr').copy()
# assign drop/keep
output['drop_keep'] = np.where(output['corr'] > 2*output['target_corr'],
                               'keep', 'drop')
Output:
id row col corr target_corr drop_keep
0 0 a b 0.95 0.2 keep
1 1 a c 0.70 0.2 keep
3 3 b a 0.95 0.7 drop
6 6 c a 0.70 0.6 drop
10 10 d b 0.65 0.3 keep
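If the next step is to collect the marked variables into a list (as mentioned in the question), a small sketch on top of output could be:
# variables marked 'drop' in the filtered pairs
to_drop = output.loc[output['drop_keep'] == 'drop', 'row'].unique().tolist()
# ['b', 'c'] for the sample above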

Slice multiple dataframes based in different ranges values in a specific column and categorize them in new columns

Is there any way to select values within 5 given ranges for a certain column and, for each different dataframe, apply a label in a new column?
I mean, I have a list of dataframes. All dataframes have 2 columns and share the same first column, but differ in the second (header and values). For example:
>> df1
GeneID A
1 0.3
2 0.0
3 143
4 9
5 0.6
>> df2
GeneID B
1 0.2
2 0.3
3 0.1
4 0.7
5 0.4
....
I would like to:
For each dataframe in the list, perform a calculation which gives the probability of a value occurring within 1 of 5 different ranges, and append a new column with those values;
For each dataframe in the list, attach the respective range label in another new column.
Where the ranges are:
*Range_Values* -> *Range_Label*
**[0]** -> 'l1'
**]0,1]** -> 'l2'
**]1,10]** -> 'l3'
**]10,100]** -> 'l4'
**>100** -> 'l5'
This 2-step approach would lead to something like:
>> list_dfs[df1]
GeneID A Prob_val Exp_prof
1 0.3 0.4 'l2'
2 0.0 0.2 'l1'
3 143 0.2 'l5'
4 9 0.2 'l3'
5 0.6 0.4 'l2'
You have to first define the bins and labels -
bins = [-float("inf"), 0, 1, 10, 100, float("inf")]  # six edges -> five right-closed bins
labels = ['l1', 'l2', 'l3', 'l4', 'l5']
Then use pd.cut() -
pd.cut(df1['A'], bins)
There is a labels parameter in pd.cut() that you can use to get the labels -
pd.cut(df1['A'], bins, labels=labels)
You can use the generated labels to compute the probabilities; a sketch follows after the loop below.
You can do this for the rest of the dfs in a loop and finally assign them to a list -
list_dfs = [df1, df2, ...]
If you have a dynamic number of dfs, use a loop -
Framework
for df in dfs:
    df['bins'] = pd.cut(df['A'], bins)
    df['label'] = pd.cut(df['A'], bins, labels=labels)
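For the probability column, one minimal sketch (assuming df1 with the 'label' column just created) is to map each label to its relative frequency:
# relative frequency of each label within df1
df1['Prob_val'] = df1['label'].map((df1['label'].value_counts() / len(df1)).to_dict())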
For the labels and bins, you can use pandas.cut. Note that you can't use a singleton as a bin in this function. Therefore you will have to create it afterwards. Here is how you can do this.
First I recreate one of your dataframes:
import io
import numpy as np
import pandas as pd
temp = u"""
GeneID A
1 0.3
2 0.0
3 143
4 9
5 0.6"""
foo = pd.read_csv(io.StringIO(temp),delim_whitespace = True)
Then I create the new column and fill the NaN values with the label l1 which corresponds to the singleton [0].
foo['Exp_prof'] = pd.cut(foo.A,bins = [0,1,10,100,np.inf],labels = ['l2','l3','l4','l5'])
foo['Exp_prof'] = foo['Exp_prof'].cat.add_categories(['l1'])
foo['Exp_prof'] = foo['Exp_prof'].fillna('l1')
And I use this new column to compute the probabilities:
foo['Prob_val'] = foo.Exp_prof.map((foo.Exp_prof.value_counts()/len(foo)).to_dict())
And the output is:
GeneID A Exp_prof Prob_val
0 1 0.3 l2 0.4
1 2 0.0 l1 0.2
2 3 143.0 l5 0.2
3 4 9.0 l3 0.2
4 5 0.6 l2 0.4

calculate cosine similarity for two columns in a group by in a dataframe

I have a dataframe df:
AID VID FID APerc VPerc
1 A X 0.2 0.5
1 A Z 0.1 0.3
1 A Y 0.4 0.9
2 A X 0.2 0.3
2 A Z 0.9 0.1
1 B Z 0.1 0.2
1 B Y 0.8 0.3
1 B W 0.5 0.4
1 B X 0.6 0.3
I want to calculate the cosine similarity of the values APerc and VPerc for all pairs of AID and VID. So the result for the above should be:
AID VID CosSim
1 A 0.997
2 A 0.514
1 B 0.925
I know how to groupby: df.groupby(['AID','VID'])
and I know how to generate cosine similarity for the whole column:
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(df['APerc'], df['VPerc'])
What's the best and fastest way to do this, given that I have a really large file?
Not sure if it is the fastest, groupby.apply is usually the way to do this:
(df.groupby(['AID','VID'])
   .apply(lambda g: cosine_similarity(g['APerc'], g['VPerc'])[0][0]))
#AID VID
#1 A 0.997097
# B 0.924917
#2 A 0.514496
#dtype: float64
Pairwise cosine_similarity is designed for 2D arrays so you'll need to do some reshaping before and after. Instead of that, use scipy's cosine distance:
from scipy.spatial.distance import cosine
df.groupby(['AID','VID']).apply(lambda x: 1 - cosine(x['APerc'], x['VPerc']))
Out:
AID VID
1 A 0.997097
B 0.924917
2 A 0.514496
dtype: float64
Timing on a df of shape (10k, 5) gives 2.87ms for scipy and 4.08ms for sklearn. A fair amount of that 4.08ms is probably due to the warnings it outputs because with Alexander's version it drops down to 3.31ms. I suspect sklearn version becomes much faster when called on a single 2D array.
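Put together, a self-contained sketch of the scipy variant on the sample data from the question (reproducing the output shown above):
import pandas as pd
from scipy.spatial.distance import cosine

df = pd.DataFrame({
    'AID':   [1, 1, 1, 2, 2, 1, 1, 1, 1],
    'VID':   list('AAAAABBBB'),
    'FID':   list('XZYXZZYWX'),
    'APerc': [0.2, 0.1, 0.4, 0.2, 0.9, 0.1, 0.8, 0.5, 0.6],
    'VPerc': [0.5, 0.3, 0.9, 0.3, 0.1, 0.2, 0.3, 0.4, 0.3],
})

print(df.groupby(['AID', 'VID']).apply(lambda g: 1 - cosine(g['APerc'], g['VPerc'])))
# AID  VID
# 1    A      0.997097
#      B      0.924917
# 2    A      0.514496
# dtype: float64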
Extend the solution of #Psidom to convert the series to numpy arrays before calculating cosine_similarity and also reshape:
(df.groupby(['AID','VID'])
   .apply(lambda g: cosine_similarity(g['APerc'].values.reshape(1, -1),
                                      g['VPerc'].values.reshape(1, -1))[0][0]))
