Correlation between a pandas Series and a whole DataFrame - python

I have a Series of values and I'm looking to compute the Pearson correlation with every row of a given table.
How do I do that?
Example:
import pandas as pd
v = [-1, 5, 0, 0, 10, 0, -7]
v1 = [1, 0, 0, 0, 0, 0, 0]
v2 = [0, 1, 0, 0, 1, 0, 0]
v3 = [1, 1, 0, 0, 0, 0, 1]
s = pd.Series(v)
df = pd.DataFrame([v1, v2, v3], columns=['a', 'b', 'c', 'd', 'e', 'f', 'g'])
# Here I expected to do df.corrwith(s) - but it won't work
Calculating row by row with Series.corr(), the expected output is
-0.1666666666666666 # correlation with the first row
0.83914639167827343 # correlation with the second row
-0.35355339059327379 # correlation with the third row
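For reference, a quick check of those numbers, correlating s with each row taken as its own Series:
print(s.corr(pd.Series(v1)))   # -0.166667
print(s.corr(pd.Series(v2)))   # 0.839146
print(s.corr(pd.Series(v3)))   # -0.353553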

You need the Series' index to match the DataFrame's columns so that the Series aligns with the DataFrame, and you need axis=1 in corrwith for a row-wise correlation:
s1 = pd.Series(s.values, index=df.columns)
print (s1)
a -1
b 5
c 0
d 0
e 10
f 0
g -7
dtype: int64
print (df.corrwith(s1, axis=1))
0 -0.166667
1 0.839146
2 -0.353553
dtype: float64
print (df.corrwith(pd.Series(v, index=df.columns), axis=1))
0 -0.166667
1 0.839146
2 -0.353553
dtype: float64
EDIT:
You can also restrict the calculation to a subset of columns:
cols = ['a','b','e']
print (df[cols])
a b e
0 1 0 0
1 0 1 1
2 1 1 0
print (df[cols].corrwith(pd.Series(v, index=df.columns), axis=1))
0 -0.891042
1 0.891042
2 -0.838628
dtype: float64

This might be useful to those concerned with performance.
I have found this runs in half the time compared to pandas corrwith.
Your data:
import pandas as pd
v = [-1, 5, 0, 0, 10, 0, -7]
v1 = [1, 0, 0, 0, 0, 0, 0]
v2 = [0, 1, 0, 0, 1, 0, 0]
v3 = [1, 1, 0, 0, 0, 0, 1]
df = pd.DataFrame([v1, v2, v3], columns=['a', 'b', 'c', 'd', 'e', 'f', 'g'])
The solution (note that v is not transformed into a Series):
from scipy.stats import pearsonr
s_corrs = df.apply(lambda x: pearsonr(x.values, v)[0], axis=1)
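If you want to check the timing claim on your own data, a rough sketch (results will vary with data size and machine; it reuses df, v and pearsonr from above):
import timeit
t_corrwith = timeit.timeit(
    lambda: df.corrwith(pd.Series(v, index=df.columns), axis=1), number=1000)
t_pearsonr = timeit.timeit(
    lambda: df.apply(lambda x: pearsonr(x.values, v)[0], axis=1), number=1000)
print(f"corrwith: {t_corrwith:.3f}s  apply+pearsonr: {t_pearsonr:.3f}s")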

Related

Insert value in numpy array with conditions

I want to fill the values in the NumPy array as follows:
If the Nth row is the same as the (N-1)th row, insert 1 for the Nth and (N-1)th rows in the current column and 0 for the rest.
If the Nth row is different from the (N-1)th row, then move to the next column and repeat the condition.
Here is an example:
import numpy as np
import pandas as pd

d = {'col1': [2, 2, 3, 3, 3, 4, 4, 5, 5, 5],
     'col2': [3, 3, 4, 4, 4, 1, 1, 0, 0, 0]}
df = pd.DataFrame(data=d)
np.zeros((10, 4))
###########################################################
OUTPUT MATRIX
1 0 0 0   First two rows are the same, so 1, 1 in the first column
1 0 0 0
0 1 0 0   Next three rows are the same, so 1, 1, 1 in the second column
0 1 0 0
0 1 0 0
0 0 1 0   Again two rows are the same: 1, 1
0 0 1 0
0 0 0 1   Again three rows are the same: 1, 1, 1
0 0 0 1
0 0 0 1
IIUC, you can achieve this simply with numpy indexing:
# group by successive identical values
group = df.ne(df.shift()).all(1).cumsum().sub(1)
# craft the numpy array
a = np.zeros((len(group), group.max()+1), dtype=int)
a[np.arange(len(df)), group] = 1
print(a)
Alternative with numpy.identity:
# group by successive identical values
group = df.ne(df.shift()).all(1).cumsum().sub(1)
shape = df.groupby(group).size()
# craft the numpy array
a = np.repeat(np.identity(len(shape), dtype=int), shape, axis=0)
print(a)
output:
array([[1, 0, 0, 0],
[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 1, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 0, 1]])
intermediates:
group
0 0
1 0
2 1
3 1
4 1
5 2
6 2
7 3
8 3
9 3
dtype: int64
shape
0 2
1 3
2 2
3 3
dtype: int64
other option
for fun, likely not so efficient on large inputs:
a = pd.get_dummies(df.agg(tuple, axis=1)).to_numpy()
Note that this second option groups by identical values anywhere in the frame, not by successive identical values. To get that behaviour with the first (numpy) approach, you would need group = df.groupby(list(df)).ngroup() together with the numpy indexing option (it wouldn't work with the repeated-identity variant).
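A minimal sketch of that ngroup variant, reusing df and np from the question (note that ngroup numbers the groups in sorted key order, which for this data coincides with the order of appearance):
group = df.groupby(list(df)).ngroup()
a = np.zeros((len(group), group.max() + 1), dtype=int)
a[np.arange(len(df)), group] = 1
print(a)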

If a row in multiple columns contains 1, how to add a new column to the dataframe containing the column names where the value is 1

My dataframe looks like the one below.
My question is: under the Locations column, how can I add the column names where the row value is 1? For example, for Japan/US, "Newyork,Osaka" should be printed under the Locations column. Please advise how to solve this in Python.
Try this:
import pandas as pd

DF = pd.DataFrame
S = pd.Series

def construct_df() -> DF:
    data = {
        "Countries": ["Japan/US", "Australia & NZ", "America & India"],
        "Portugal": [0, 0, 0],
        "Newyork": [1, 0, 1],
        "Delhi": [0, 0, 0],
        "Osaka": [1, 0, 1],
        "Bangalore": [0, 0, 0],
        "Sydney": [0, 0, 0],
        "Mexico": [0, 0, 0],
    }
    return pd.DataFrame(data)

def calc_locations(x: DF) -> S:
    x__location_cols_only = x.select_dtypes("integer")
    x__ones_as_location_col_name = x__location_cols_only.apply(
        lambda ser: ser.replace({0: "", 1: ser.name})
    )
    location_cols = x__location_cols_only.columns.tolist()
    ret = x__ones_as_location_col_name[location_cols[0]]
    for colname in location_cols[1:]:
        col = x__ones_as_location_col_name[colname]
        ret = ret.str.cat(col, sep=",")
    ret = ret.str.replace(r",+", ",", regex=True).str.strip(",")
    return ret

df_final = construct_df().assign(Locations=calc_locations)
assert df_final["Locations"].tolist() == ["Newyork,Osaka", "", "Newyork,Osaka"]
You can do:
data = {
    "Countries": ["Japan/US", "Australia & NZ", "America & India"],
    "Portugal": [0, 0, 0],
    "Newyork": [1, 0, 1],
    "Delhi": [0, 0, 0],
    "Osaka": [1, 0, 1],
    "Bangalore": [0, 0, 0],
    "Sydney": [0, 0, 0],
    "Mexico": [0, 0, 0],
}
df = pd.DataFrame(data)
df1 = df.iloc[:, 1:] * df.iloc[:, 1:].columns
df['Location'] = df1.values.tolist()
df['Location'] = df['Location'].apply(lambda x: ','.join([y for y in x if len(y) > 1]))
Let's say your data is like below:
import pandas as pd

data = {'Countries': ['JP/US', 'Aus/NZ', 'America/India'],
        'Portugal': [0, 0, 0],
        'Newyork': [1, 0, 1],
        'Delhi': [0, 0, 1],
        'Osaka': [1, 0, 0],
        'Sydney': [0, 0, 0],
        'Mexico': [0, 0, 0],
        }
data_df = pd.DataFrame(data)
The DF looks like this (please share a data set like the one above so we can build a DF and show you results):
Countries Delhi Mexico Newyork Osaka Portugal Sydney
0 JP/US 0 0 1 1 0 0
1 Aus/NZ 0 0 0 0 0 0
2 America/India 1 0 1 0 0 0
If you execute the statements below,
data_df = data_df.set_index('Countries')
data_df['Locations'] = data_df.apply(lambda x: ", ".join(x[x!=0].index.tolist()), axis=1)
your output will look like the following:
Delhi Mexico Newyork Osaka Portugal Sydney Locations
Countries
JP/US 0 0 1 1 0 0 Newyork, Osaka
Aus/NZ 0 0 0 0 0 0
America/India 1 0 1 0 0 0 Delhi, Newyork
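For what it's worth, a compact alternative (my own sketch, not part of the answers above) multiplies the 0/1 values into the column names with DataFrame.dot, reusing the data dict defined above:
alt = pd.DataFrame(data).set_index('Countries')
alt['Locations'] = alt.dot(alt.columns + ', ').str.rstrip(', ')
print(alt['Locations'])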

Count occurrences of feature pairs for a given set of mappings

I'm trying to aggregate a DataFrame such that for each from and to pair given in the mappings table (e.g. .iloc[0], where a maps to b), we take the corresponding f# (feature) columns from the labels table and count the number of times that feature mapping occurred.
The expected output is given in the output table.
Example: in the output table we can see there are 4 times when a from element with an f1 feature mapped to a to element with an f2 feature. We can deduce these as being a->b, a->c, d->e, and d->g.
Mappings
from to
0 a b
1 a c
2 d e
3 d f
4 d g
Labels
name f1 f2 f3
0 a 1 0 0
1 b 0 1 0
2 c 0 1 0
3 d 1 1 0
4 e 0 1 0
5 f 0 0 1
6 g 1 1 0
Output
f1 f2 f3
f1 1 4 1
f2 1 2 1
f3 0 0 0
Table construction code
# dataframe 1 - the mappings
mappings = pd.DataFrame({
    'from': ['a', 'a', 'd', 'd', 'd'],
    'to': ['b', 'c', 'e', 'f', 'g']
})

# dataframe 2 - the labels
labels = pd.DataFrame({
    'name': ['a', 'b', 'c', 'd', 'e', 'f', 'g'],
    'f1': [1, 0, 0, 1, 0, 0, 1],
    'f2': [0, 1, 1, 1, 1, 0, 1],
    'f3': [0, 0, 0, 0, 0, 1, 0],
})

# dataframe 3 - the expected output
output = pd.DataFrame(
    index=['f1', 'f2', 'f3'],
    data={
        'f1': [1, 1, 0],
        'f2': [4, 2, 0],
        'f3': [1, 1, 0],
    })
First we melt your labels dataframe from columns to rows, so we can easily match on them. Then we merge these values onto our mappings and finally use crosstab to get your final result:
labels = labels.set_index('name').where(lambda x: x > 0).melt(ignore_index=False).dropna()

df = (
    mappings.merge(labels.add_suffix('_from'), left_on='from', right_on='name')
            .merge(labels.add_suffix('_to'), left_on='to', right_on='name')
)

final = pd.crosstab(index=df['variable_from'], columns=df['variable_to'])

final = (
    final.reindex(index=final.columns, fill_value=0)
         .rename_axis(index=None, columns=None)
).convert_dtypes()
Output
f1 f2 f3
f1 1 4 1
f2 1 2 1
f3 0 0 0
Note:
melt(ignore_index=False) requires pandas >= 1.1.0
convert_dtypes requires pandas >= 1.0.0
For pandas < 1.1.0 we can use stack instead of melt:
(
    labels.set_index('name')
          .where(lambda x: x > 0)
          .stack()
          .reset_index(level=1)
          .rename(columns={'level_1': 'variable', 0: 'value'})
)

Sampling from within Pandas groups with defined probabilities

Consider the following Pandas dataframe,
df = pd.DataFrame(
    [
        ['X', 0, 0.5],
        ['X', 1, 0.5],
        ['Y', 0, 0.25],
        ['Y', 1, 0.3],
        ['Y', 2, 0.45],
        ['Z', 0, 0.6],
        ['Z', 1, 0.1],
        ['Z', 2, 0.3]
    ], columns=['NAME', 'POSITION', 'PROB'])
Notice that df defines a discrete probability distribution for each unique NAME value i.e.
assert ((df.groupby('NAME')['PROB'].sum() - 1)**2 < 1e-10).all()
What I would like to do is sample from these probability distributions.
We can think of POSITION as being the values corresponding to the probabilities. So when considering X the sample will be 0 with probability 0.5 and 1 with probability 0.5.
I would like to create a new dataframe with columns ['NAME', 'POSITION', 'PROB', 'SAMPLE'] representing these samples. Each unique SAMPLE value represents a new sample. The PROB column is now always 0 or 1, representing whether the given row was selected in the given sample. For example, if I were to draw 3 samples, one possible outcome is below:
df_samples = pd.DataFrame(
    [
        ['X', 0, 1, 0],
        ['X', 1, 0, 0],
        ['X', 0, 0, 1],
        ['X', 1, 1, 1],
        ['X', 0, 1, 2],
        ['X', 1, 0, 2],
        ['Y', 0, 1, 0],
        ['Y', 1, 0, 0],
        ['Y', 2, 0, 0],
        ['Y', 0, 0, 1],
        ['Y', 1, 0, 1],
        ['Y', 2, 1, 1],
        ['Y', 0, 1, 2],
        ['Y', 1, 0, 2],
        ['Y', 2, 0, 2],
        ['Z', 0, 0, 0],
        ['Z', 1, 0, 0],
        ['Z', 2, 1, 0],
        ['Z', 0, 0, 1],
        ['Z', 1, 0, 1],
        ['Z', 2, 1, 1],
        ['Z', 0, 1, 2],
        ['Z', 1, 0, 2],
        ['Z', 2, 0, 2],
    ], columns=['NAME', 'POSITION', 'PROB', 'SAMPLE'])
Of course due to the randomness involved, this is just one of a number of possible outcomes.
A unit test for the program would be that, as the number of samples increases, by the law of large numbers the sample mean for each (NAME, POSITION) pair should tend to the actual probability. One could calculate a confidence region based on the total number of samples used and then make sure the true probability lies within it. For example, using a normal approximation to binomial outcomes (which requires the total number of samples n_samples to be 'large'), a (-4 sd, 4 sd) region test would be:
z = 4
p_est = df_samples.groupby(['NAME', 'POSITION'])['PROB'].mean()
p_true = df.set_index(['NAME', 'POSITION'])['PROB']
CI_lower = p_est - z * np.sqrt(p_est * (1 - p_est) / n_samples)
CI_upper = p_est + z * np.sqrt(p_est * (1 - p_est) / n_samples)
assert (p_true < CI_upper).all()
assert (p_true > CI_lower).all()
What is the most efficient way to do this in Pandas? I feel like I want to apply some sample function to the df.groupby('NAME') object.
P.S.
To be even more explicit, here is a very long-winded way of doing this using NumPy.
n_samples = 3
df_list = []
for name in ['X', 'Y', 'Z']:
    idx = df['NAME'] == name
    position_samples = np.random.choice(df.loc[idx, 'POSITION'],
                                        n_samples,
                                        p=df.loc[idx, 'PROB'])
    prob = np.zeros([idx.sum(), n_samples])
    prob[position_samples, np.arange(n_samples)] = 1
    position = np.tile(np.arange(idx.sum())[:, None], n_samples)
    sample = np.tile(np.arange(n_samples)[:, None], idx.sum()).T
    df_list.append(pd.DataFrame(
        [[name, prob.ravel()[i], position.ravel()[i],
          sample.ravel()[i]]
         for i in range(n_samples * idx.sum())],
        columns=['NAME', 'PROB', 'POSITION', 'SAMPLE']))
df_samples = pd.concat(df_list)
If I understand correctly, you're looking for groupby + sample and then some indexing stuff
First, sample by the probabilities:
n_samples = 3
df_samples = df.groupby('NAME').apply(
    lambda x: x[['NAME', 'POSITION']].sample(n_samples, replace=True,
                                             weights=x.PROB)
).reset_index(drop=True)
Now add the extra columns:
df_samples['SAMPLE'] = df_samples.groupby('NAME').cumcount()
df_samples['PROB'] = 1
print(df_samples)
NAME POSITION SAMPLE PROB
0 X 1 0 1
1 X 0 1 1
2 X 1 2 1
3 Y 1 0 1
4 Y 1 1 1
5 Y 1 2 1
6 Z 2 0 1
7 Z 0 1 1
8 Z 0 2 1
Note that this doesn't include the 0 probability positions for each sample as requested in the initial question but it is a more concise way of storing the information.
If we want to also include the 0 probability positions we can merge in the other positions as follows:
domain = df[['NAME', 'POSITION']].drop_duplicates()
df_samples.drop('PROB', axis=1, inplace=True)
df_samples = pd.merge(df_samples, domain, on='NAME',
                      suffixes=['_sample', ''])
df_samples['PROB'] = (df_samples['POSITION'] ==
                      df_samples['POSITION_sample']).astype(int)
df_samples.drop('POSITION_sample', axis=1, inplace=True)
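As a quick sanity check in the spirit of the question's law-of-large-numbers test (a sketch only; the 10,000 samples is an arbitrary choice, and P_EST should land close to PROB):
n_samples = 10_000
rows = df.groupby('NAME', group_keys=False).apply(
    lambda g: g.sample(n_samples, replace=True, weights=g['PROB']))
p_est = (rows.groupby(['NAME', 'POSITION']).size() / n_samples).rename('P_EST')
print(df.set_index(['NAME', 'POSITION']).join(p_est).fillna(0))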

Pandas DataFrame column concatenation

I have a pandas DataFrame y with 1 million rows and 5 columns.
np.shape(y)
(1037889, 5)
The column values are all 0 or 1. Looks something like this:
y.head()
a, b, c, d, e
0, 0, 1, 0, 0
1, 0, 0, 1, 1
0, 1, 1, 1, 1
0, 0, 0, 0, 0
I want a DataFrame with 1 million rows and 1 column.
np.shape(y)
(1037889, )
where the column is just the 5 columns concatenated together.
New column
0, 0, 1, 0, 0
1, 0, 0, 1, 1
0, 1, 1, 1, 1
0, 0, 0, 0, 0
I keep trying different things like merge, concat, dstack, etc...
but can't seem to figure this out.
If you want the new column to have all the data concatenated into a string, it's a good case for the apply() function:
>>> df = pd.DataFrame({'a': [0,1,0,0], 'b': [0,0,1,0], 'c': [1,0,1,0], 'd': [0,1,1,0], 'e': [0,1,1,0]})
>>> df
   a  b  c  d  e
0  0  0  1  0  0
1  1  0  0  1  1
2  0  1  1  1  1
3  0  0  0  0  0
>>> df2 = df.apply(lambda row: ','.join(map(str, row)), axis=1)
>>> df2
0    0,0,1,0,0
1    1,0,0,1,1
2    0,1,1,1,1
3    0,0,0,0,0
dtype: object
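On a frame of this size (around a million rows), a hedged alternative worth trying is casting to string once and then joining each row, which avoids the per-row map(str, ...) (my own sketch, not from the answer above):
y_joined = y.astype(str).agg(','.join, axis=1)   # one string per row, e.g. '0,0,1,0,0'
print(y_joined.head())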
