Statistical significance test after matching - python

I have a dataframe consisting of four columns and around 20000 rows, like this.
import pandas as pd
import numpy as np
d = {'x': [1,1,0,1,0,0,1],'BPM':[70,55,45,np.nan,35,25,np.nan],'AGE': [50, 47,21, 50,24,47,16], 'WEIGHT': [50,100,50,np.nan,np.nan,100,27]}
df = pd.DataFrame(data=d)
x   BPM  AGE  WEIGHT
1  70.0   50    50.0
1  55.0   47   100.0
0  45.0   21    50.0
1   NaN   50     NaN
0  35.0   24     NaN
0  25.0   47   100.0
1   NaN   16    27.0
Is there any significant difference in "BPM" between class '1' and class '0' after matching AGE and WEIGHT?
There are two classes, 0 and 1, and the number of samples in each class is not equal. I understand that I first have to match on AGE and WEIGHT, and then I can apply a t-test. I am new to this field, so I do not understand how to proceed.

You could calculate the t-scores by hand.
# mean BPM per (AGE, WEIGHT) group, split by class (x = 0 vs x = 1)
mean_bpm_df = df.groupby(['AGE','WEIGHT','x']).mean().unstack(level=-1)
mean_bpm_df.columns = ['mean_bpm_0','mean_bpm_1']

# standard deviation and sample count of BPM per (AGE, WEIGHT) group
std_count_df = df.drop(columns='x').groupby(['AGE','WEIGHT']).agg(['std','count'])
std_count_df.columns = ['std_bpm','count_bpm']

# t-score: difference of the class means over the standard error
t_df = (mean_bpm_df.mean_bpm_0 - mean_bpm_df.mean_bpm_1) / (std_count_df.std_bpm / np.sqrt(std_count_df.count_bpm))
Now, if you also want the p-values, those can be calculated by hand too. Assume a 2-sided t-test (you can modify this if needed).
from scipy.stats import t
p_df = pd.DataFrame(index=t_df.index, data=2*(1 - t.cdf(abs(t_df), std_count_df.count_bpm-1)))
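If you'd rather not compute the statistics by hand, a rough cross-check (my own sketch, not part of the answer above) is to run scipy's two-sided t-test within each matched (AGE, WEIGHT) group that contains both classes. With only one observation per class, as in the toy data, the test is degenerate, so this is mainly useful on the full 20000-row data.
from scipy import stats

for (age, weight), grp in df.dropna(subset=['BPM']).groupby(['AGE', 'WEIGHT']):
    bpm_0 = grp.loc[grp.x == 0, 'BPM']  # class 0 BPM values in this matched group
    bpm_1 = grp.loc[grp.x == 1, 'BPM']  # class 1 BPM values in this matched group
    if len(bpm_0) and len(bpm_1):
        t_stat, p_val = stats.ttest_ind(bpm_0, bpm_1)
        print(f'AGE={age}, WEIGHT={weight}: t={t_stat:.3f}, p={p_val:.3f}')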

Related

How can I build a model that will predict duplicate records in my dataset?

What are the algorithms that will predict duplicates in a dataset?
For example -
Name Marks
A 100
B 90
C 80
A 100
I need something like this -
Name Marks S/D
A 100 Single
B 90 Single
C 80 Single
A 100 Duplicate
I'm looking for some algorithms that can help in this case.
IIUC, you need this:
import pandas as pd
df = pd.DataFrame({'Name':['A','B','C','A'],'Marks': [100, 90, 80, 100]})
df['res'] = df.duplicated().map({False:"Single", True:"Duplicated"})
Output:
>>> df
Name Marks res
0 A 100 Single
1 B 90 Single
2 C 80 Single
3 A 100 Duplicated
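A small variation (not asked for in the question, but often useful): if you want every occurrence of a repeated row flagged rather than only the later ones, pass keep=False to duplicated():
# keep=False marks all occurrences of a duplicated row, including the first one
df['res_all'] = df.duplicated(keep=False).map({False: "Single", True: "Duplicated"})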

Dividing Dataframes with Different Dimensions

I prefer to use matrix operations in my code because they're so much more efficient than iterating, but I'm curious how to do this when the dimensions are different.
I have two different dataframes
A:
Orig_vintage
Q12018    185
Q22018    200
and B:
default_month   1   2   3
orig_vintage
Q12018          0  25  35
Q22018          0  15  45
Q32018          0  35  65
and I'm trying to divide A through the columns of B, so the B dataframe becomes (note I've rounded random percentages):
default_month   1    2    3
orig_vintage
Q12018          0  .03  .04
Q22018          0  .04  .05
Q32018          0  .06  .07
The bottom line is that I want to divide the monthly defaults by the total origination figure to get a monthly default %.
The first step is to get the data side by side with a right join().
Then divide all columns by the required value (see Divide multiple columns by another column in pandas).
The required value, as I understand it, is the sum whenever the join did not provide a value.
import pandas as pd
import io
df1 = pd.read_csv(
    io.StringIO("""Orig_vintage,Unnamed: 1\nQ12018,185\nQ22018,200\n"""), sep=","
)
df2 = pd.read_csv(
    io.StringIO(
        """default_month,1,2,3\nQ12018,0.0,25.0,35.0\nQ22018,0.0,15.0,45.0\nQ32018,0.0,35.0,65.0\n"""
    ),
    sep=",",
)
df1.set_index("Orig_vintage").join(df2.set_index("default_month"), how="right").pipe(
    lambda d: d.div(d["Unnamed: 1"].fillna(d["Unnamed: 1"].sum()), axis=0)
)
default_month  Unnamed: 1    1          2          3
Q12018                  1    0   0.135135   0.189189
Q22018                  1    0   0.075      0.225
Q32018                nan    0   0.0909091  0.168831
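For what it's worth, here is a more direct sketch of the same idea (my own rearrangement, using the same column names and the same fill-with-the-sum assumption as the answer above): align B's rows on A's totals via the index and divide with div(..., axis=0).
totals = df1.set_index("Orig_vintage")["Unnamed: 1"]  # origination totals per vintage
result = df2.set_index("default_month").div(
    # fall back to the overall total for vintages missing from A (e.g. Q32018)
    totals.reindex(df2["default_month"]).fillna(totals.sum()).values,
    axis=0,
)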

How to loop through a pandas dataframe to run an independent ttest for each of the variables?

I have a dataset that consists of around 33 variables. The dataset contains patient information and the outcome of interest is binary in nature. Below is a snippet of the data.
The dataset is stored as a pandas dataframe
df.head()
ID Age GAD PHQ Outcome
1 23 17 23 1
2 54 19 21 1
3 61 23 19 0
4 63 16 13 1
5 37 14 8 0
I want to run independent t-tests looking at the differences in patient information based on outcome. So, if I were to run a t-test for each alone, I would do:
from scipy import stats

age_neg_outcome = df.loc[df.Outcome == 0, ['Age']]
age_pos_outcome = df.loc[df.Outcome == 1, ['Age']]
t_age, p_age = stats.ttest_ind(age_neg_outcome, age_pos_outcome, unequal=True)
print('\t Age: t= ', t_age, 'with p-value= ', p_age)
How can I do this in a for loop for each of the variables?
I've seen this post which is slightly similar but couldn't manage to use it.
Python : T test ind looping over columns of df
You are almost there. ttest_ind accepts multi-dimensional arrays too:
cols = ['Age', 'GAD', 'PHQ']
cond = df['Outcome'] == 0
neg_outcome = df.loc[cond, cols]
pos_outcome = df.loc[~cond, cols]

# The unequal parameter is invalid so I'm leaving it out
t, p = stats.ttest_ind(neg_outcome, pos_outcome)

for i, col in enumerate(cols):
    print(f'\t{col}: t = {t[i]:.5f}, with p-value = {p[i]:.5f}')
Output:
Age: t = 0.12950, with p-value = 0.90515
GAD: t = 0.32937, with p-value = 0.76353
PHQ: t = -0.96683, with p-value = 0.40495
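If you do want the unequal-variance (Welch's) test that the question's invalid unequal=True was presumably aiming for, scipy exposes it through equal_var=False; a minimal sketch building on the code above:
# Welch's t-test (does not assume equal variances), run per column
t_w, p_w = stats.ttest_ind(neg_outcome, pos_outcome, equal_var=False)
for i, col in enumerate(cols):
    print(f'\t{col} (Welch): t = {t_w[i]:.5f}, with p-value = {p_w[i]:.5f}')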

Pandas DataFrame with Function: Columns Varying

Given the following DataFrame:
import pandas as pd
import numpy as np
d = pd.DataFrame({'Label': ['a','a','b','b'], 'Count1': [10,20,30,40], 'Count2': [20,45,10,35],
                  'Count3': [40,30,np.nan,22], 'Nobs1': [30,30,70,70], 'Nobs2': [65,65,45,45],
                  'Nobs3': [70,70,22,32]})
d
Label Count1 Count2 Count3 Nobs1 Nobs2 Nobs3
0 a 10 20 40.0 30 65 70
1 a 20 45 30.0 30 65 70
2 b 30 10 NaN 70 45 22
3 b 40 35 22.0 70 45 32
I would like to apply the z test for proportions on each combination of column groups (1 and 2, 1 and 3, 2 and 3) per row. By column group, I mean, for example, "Count1" and "Nobs1".
For example, one such test would be:
from statsmodels.stats.proportion import proportions_ztest

count = np.array([10, 20])  # from the first row of Count1 and Count2, respectively
nobs = np.array([30, 65])   # from the first row of Nobs1 and Nobs2, respectively
pv = proportions_ztest(count=count, nobs=nobs, value=0, alternative='two-sided')[1]  # this returns just the p-value, which is of interest
pv
0.80265091465415639
I would want the result (pv) to go into a new column (first row) called "p_1_2" or something logical that corresponds to its respective columns.
In summary, here are the challenges I'm facing:
1. How to apply this per row.
2. ...for each paired combination, mentioned above.
3. ...where the column names and number of pairs of "Count" and "Nobs" columns may vary (assuming that there will always be a "Nobs" column for each "Count" column).
Related to 3: For example, I might have a column called "18-24" and another called "18-24_Nobs".
Thanks in advance!
For 1) and 2), here is one test; additional tests can be coded similarly, or within an additional loop.
for i, row in d.iterrows():
    d.loc[i, 'test'] = proportions_ztest(count=row['Count1':'Count2'].values,
                                         nobs=row['Nobs1':'Nobs2'].values,
                                         value=0, alternative='two-sided')[1]
For 3), it should be possible to handle these cases with pure Python inside the loop.
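To sketch 2) and 3) as well (my own extension, not part of the answer, and it assumes every "Count" column has a matching "Nobs" column with the same suffix): build the column pairs with itertools.combinations and write one p-value column per pair.
from itertools import combinations

count_cols = [c for c in d.columns if c.startswith('Count')]

for c1, c2 in combinations(count_cols, 2):
    n1, n2 = c1.replace('Count', 'Nobs'), c2.replace('Count', 'Nobs')
    col_name = f"p_{c1[-1]}_{c2[-1]}"  # e.g. 'p_1_2' for Count1 vs Count2
    for i, row in d.iterrows():
        counts = np.array([row[c1], row[c2]])
        nobs = np.array([row[n1], row[n2]])
        if np.isnan(counts).any() or np.isnan(nobs).any():
            d.loc[i, col_name] = np.nan  # skip rows with missing counts
        else:
            d.loc[i, col_name] = proportions_ztest(count=counts, nobs=nobs,
                                                   value=0, alternative='two-sided')[1]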

How to check correlation between matching columns of two data sets?

If we have the data set:
import pandas as pd
a = pd.DataFrame({"A":[34,12,78,84,26], "B":[54,87,35,25,82], "C":[56,78,0,14,13], "D":[0,23,72,56,14], "E":[78,12,31,0,34]})
b = pd.DataFrame({"A":[45,24,65,65,65], "B":[45,87,65,52,12], "C":[98,52,32,32,12], "D":[0,23,1,365,53], "E":[24,12,65,3,65]})
How does one create a correlation matrix, in which the y-axis represents "a" and the x-axis represents "b"?
The aim is to see the correlations between the matching columns of the two datasets.
If you don't mind a NumPy based vectorized solution, here is one based on this solution post to Computing the correlation coefficient between two multi-dimensional arrays -
corr2_coeff(a.values.T,b.values.T).T # func from linked solution post.
Sample run -
In [621]: a
Out[621]:
A B C D E
0 34 54 56 0 78
1 12 87 78 23 12
2 78 35 0 72 31
3 84 25 14 56 0
4 26 82 13 14 34
In [622]: b
Out[622]:
A B C D E
0 45 45 98 0 24
1 24 87 52 23 12
2 65 65 32 1 65
3 65 52 32 365 3
4 65 12 12 53 65
In [623]: corr2_coeff(a.values.T,b.values.T).T
Out[623]:
array([[ 0.71318502, -0.5923714 , -0.9704441 , 0.48775228, -0.07401011],
[ 0.0306753 , -0.0705457 , 0.48801177, 0.34685977, -0.33942737],
[-0.26626431, -0.01983468, 0.66110713, -0.50872017, 0.68350413],
[ 0.58095645, -0.55231196, -0.32053858, 0.38416478, -0.62403866],
[ 0.01652716, 0.14000468, -0.58238879, 0.12936016, 0.28602349]])
This achieves exactly what you want:
from scipy.stats import pearsonr
# create a new DataFrame where the values for the indices and columns
# align on the diagonals
c = pd.DataFrame(columns=a.columns, index=a.columns)

# since we know set(a.columns) == set(b.columns), we can just iterate
# through the columns in a (although a more robust way would be to iterate
# through the intersection of the two sets of columns, in case your actual
# dataframes' columns don't match up)
for col in a.columns:
    correl_signif = pearsonr(a[col], b[col])  # correlation of those two Series
    correl = correl_signif[0]  # grab the actual Pearson R value from the tuple above
    c.loc[col, col] = correl   # locate the diagonal for that column and assign the correlation coefficient
Edit: Well, it achieved exactly what you wanted, until the question was modified. Although this can easily be changed:
c = pd.DataFrame(columns=a.columns, index=a.columns)
for col in c.columns:
    for idx in c.index:
        correl_signif = pearsonr(a[col], b[idx])
        correl = correl_signif[0]
        c.loc[idx, col] = correl
c is now this:
Out[16]:
A B C D E
A 0.713185 -0.592371 -0.970444 0.487752 -0.0740101
B 0.0306753 -0.0705457 0.488012 0.34686 -0.339427
C -0.266264 -0.0198347 0.661107 -0.50872 0.683504
D 0.580956 -0.552312 -0.320539 0.384165 -0.624039
E 0.0165272 0.140005 -0.582389 0.12936 0.286023
I use this function, which breaks it down with NumPy:
import numpy as np

def corr_ab(a, b):
    a_ = a.values
    b_ = b.values
    ab = a_.T.dot(b_)
    n = len(a)
    sums_squared = np.outer(a_.sum(0), b_.sum(0))
    stds_squared = np.outer(a_.std(0), b_.std(0))
    return pd.DataFrame((ab - sums_squared / n) / stds_squared / n,
                        a.columns, b.columns)
demo
corr_ab(a, b)
Do you have to use Pandas? This seems like it can be done via NumPy rather easily. Did I understand the task incorrectly?
import numpy

X = {"A":[34,12,78,84,26], "B":[54,87,35,25,82], "C":[56,78,0,14,13], "D":[0,23,72,56,14], "E":[78,12,31,0,34]}
Y = {"A":[45,24,65,65,65], "B":[45,87,65,52,12], "C":[98,52,32,32,12], "D":[0,23,1,365,53], "E":[24,12,65,3,65]}

for key, value in X.items():
    print("correlation stats for %s is %s" % (key, numpy.corrcoef(value, Y[key])))
