Get the counts and unique counts of every column in python - python

Suppose I have a pandas dataframe as follows,
data
id    A   B    C    D   E
 1  NaN   1  NaN    1   1
 1  NaN   2  NaN    2   2
 1  NaN   3  NaN  NaN   3
 1  NaN   4  NaN  NaN   4
 1  NaN   5  NaN  NaN   5
 2  NaN   6  NaN  NaN   6
 2  NaN   7    5  NaN   7
 2  NaN   8    6    2   8
 2  NaN   9  NaN  NaN   9
 2  NaN  10  NaN    4  10
 3  NaN  11  NaN  NaN  11
 3  NaN  12  NaN  NaN  12
 3  NaN  13  NaN  NaN  13
 3  NaN  14  NaN  NaN  14
 3  NaN  15  NaN  NaN  15
 3  NaN  16  NaN  NaN  16
I am using the following command,
pd.DataFrame(data.count().sort_values(ascending=False)).reset_index()
and get the following output,
  index   0
0     E  16
1     B  16
2    id  16
3     D   4
4     C   2
5     A   0
I want the following output,
columns  count  unique(id) count
E           16                 3
B           16                 3
D            4                 2
C            2                 1
A            0                 0
where count is the same as before, and unique(id) count is the number of unique ids that have a non-null value in each column. I want both as two separate fields.
Can anybody help me do this?
Thanks

Starting with:
In [7]: df
Out[7]:
    id   A   B    C    D   E
0    1 NaN   1  NaN  1.0   1
1    1 NaN   2  NaN  2.0   2
2    1 NaN   3  NaN  NaN   3
3    1 NaN   4  NaN  NaN   4
4    1 NaN   5  NaN  NaN   5
5    2 NaN   6  NaN  NaN   6
6    2 NaN   7  5.0  NaN   7
7    2 NaN   8  6.0  2.0   8
8    2 NaN   9  NaN  NaN   9
9    2 NaN  10  NaN  4.0  10
10   3 NaN  11  NaN  NaN  11
11   3 NaN  12  NaN  NaN  12
12   3 NaN  13  NaN  NaN  13
13   3 NaN  14  NaN  NaN  14
14   3 NaN  15  NaN  NaN  15
15   3 NaN  16  NaN  NaN  16
Here is a rather inelegant way:
In [8]: (df.groupby('id').count() > 0).sum()
Out[8]:
A 0
B 3
C 1
D 2
E 3
dtype: int64
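Here groupby('id').count() gives the per-id non-null counts for each column, > 0 flags the ids that contribute at least one value, and the outer sum() counts those ids per column. An equivalent spelling, as a sanity check (a sketch assuming the df above):
# Per column: how many ids have at least one non-null value?
df.drop(columns='id').notna().groupby(df['id']).any().sum()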
So, simply make your base DataFrame as you've specified:
In [9]: counts = (df[['A','B','C','D','E']].count().sort_values(ascending=False)).to_frame(name='counts')
In [10]: counts
Out[10]:
counts
E 16
B 16
D 4
C 2
A 0
And then simply:
In [11]: counts['unique(id) counts'] = (df.groupby('id').count() > 0).sum()
In [12]: counts
Out[12]:
   counts  unique(id) counts
E      16                  3
B      16                  3
D       4                  2
C       2                  1
A       0                  0
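For reference, here is the whole recipe as a self-contained script; the DataFrame literal is reconstructed from the question's display, so treat the values as illustrative:
import numpy as np
import pandas as pd

# Rebuilt from the question's table; values are illustrative
df = pd.DataFrame({
    'id': [1]*5 + [2]*5 + [3]*6,
    'A': [np.nan]*16,
    'B': list(range(1, 17)),
    'C': [np.nan]*6 + [5, 6] + [np.nan]*8,
    'D': [1, 2] + [np.nan]*5 + [2, np.nan, 4] + [np.nan]*6,
    'E': list(range(1, 17)),
})

value_cols = ['A', 'B', 'C', 'D', 'E']
counts = pd.DataFrame({
    'counts': df[value_cols].count(),
    'unique(id) counts': (df.groupby('id')[value_cols].count() > 0).sum(),
}).sort_values('counts', ascending=False)
print(counts)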

Related

Pandas assign series to another Series based on index

I have three Pandas DataFrames:
df1:
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
9 NaN
df2:
0 1
3 7
6 5
9 2
df3:
1 2
4 6
7 6
My goal is to assign the values of df2 and df3 to df1 based on the index.
df1 should then become:
0 1
1 2
2 NaN
3 7
4 6
5 NaN
6 5
7 6
8 NaN
9 2
I tried a simple assignment:
df1.loc[df2.index] = df2.values
or
df1.loc[df2.index] = df2
but this gives me a ValueError:
ValueError: Must have equal len keys and value when setting with an iterable
Thanks for your help!
You can do concat with combine_first:
pd.concat([df2,df3]).combine_first(df1)
Or reindex_like:
pd.concat([df2,df3]).reindex_like(df1)
0 1.0
1 2.0
2 NaN
3 7.0
4 6.0
5 NaN
6 5.0
7 6.0
8 NaN
9 2.0
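For completeness, a self-contained version of both approaches; the inputs are rebuilt as Series from the question's single-column display, so adjust if yours are true one-column DataFrames:
import numpy as np
import pandas as pd

df1 = pd.Series([np.nan] * 10)                     # index 0..9, all NaN
df2 = pd.Series([1, 7, 5, 2], index=[0, 3, 6, 9])
df3 = pd.Series([2, 6, 6], index=[1, 4, 7])

# combine_first: take the concatenated values, fall back to df1 elsewhere
out = pd.concat([df2, df3]).combine_first(df1)

# reindex_like: conform the concatenated Series to df1's exact index
out2 = pd.concat([df2, df3]).reindex_like(df1)
print(out2)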

Pandas ffill for certain values in a column

I have a df like this:
time data
0 1
1 1
2 nan
3 nan
4 6
5 nan
6 nan
7 nan
8 5
9 4
10 nan
Is there a way to use pd.Series.ffill() to forward fill only certain occurrences of values? Specifically, I want to forward fill only where the values in df.data are == 1 or 4. It should look like this:
time data
0 1
1 1
2 1
3 1
4 6
5 nan
6 nan
7 nan
8 5
9 4
10 4
One option would be to forward fill (ffill) the whole column, then populate only where the filled values are 1 or 4, using isin and mask:
s = df['data'].ffill()
df['data'] = df['data'].mask(s.isin([1, 4]), s)
df:
time data
0 0 1.0
1 1 1.0
2 2 1.0
3 3 1.0
4 4 6.0
5 5 NaN
6 6 NaN
7 7 NaN
8 8 5.0
9 9 4.0
10 10 4.0
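Put together as a runnable sketch (data rebuilt from the question):
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'time': range(11),
    'data': [1, 1, np.nan, np.nan, 6, np.nan, np.nan, np.nan, 5, 4, np.nan],
})

s = df['data'].ffill()                           # carry every value forward
df['data'] = df['data'].mask(s.isin([1, 4]), s)  # keep the fill only where the carried value is 1 or 4
print(df)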

How to loop through each row in pandas dataframe and set values equal to nan after a threshold is surpassed?

If I have a pandas dataframe like this:
0 1 2 3 4 5
A 5 5 10 9 4 5
B 10 10 10 8 1 1
C 8 8 0 9 6 3
D 10 10 11 4 2 9
E 0 9 1 5 8 3
If I set a threshold of 7, how do I loop through each row and set every value after the last one exceeding the threshold to np.nan, such that I get a dataframe like this:
0 1 2 3 4 5
A 5 5 10 9 NaN NaN
B 10 10 10 8 NaN NaN
C 8 8 0 9 NaN NaN
D 10 10 11 4 2 9
E 0 9 1 5 8 NaN
Where everything after the last number greater than 7 is set equal to np.nan.
Let's try this:
df.where(df.where(df > 7).bfill(axis=1).notna())
Output:
0 1 2 3 4 5
A 5 5 10 9 NaN NaN
B 10 10 10 8 NaN NaN
C 8 8 0 9 NaN NaN
D 10 10 11 4 2.0 9.0
E 0 9 1 5 8.0 NaN
Create a mask m by using df.where on df.gt(7), then bfill and notna. Finally, index df with m:
m = df.where(df.gt(7)).bfill(axis=1).notna()
df[m]
Out[24]:
0 1 2 3 4 5
A 5 5 10 9 NaN NaN
B 10 10 10 8 NaN NaN
C 8 8 0 9 NaN NaN
D 10 10 11 4 2.0 9.0
E 0 9 1 5 8.0 NaN
A very nice question. Reverse the column order, then cumsum: any position where the cumulative sum is still 0 should be NaN:
df.where(df.iloc[:,::-1].gt(7).cumsum(1).ne(0))
0 1 2 3 4 5
A 5 5 10 9 NaN NaN
B 10 10 10 8 NaN NaN
C 8 8 0 9 NaN NaN
D 10 10 11 4 2.0 9.0
E 0 9 1 5 8.0 NaN
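All three answers build the same row-wise mask. Here is the last one as a self-contained script (data typed in from the question):
import numpy as np
import pandas as pd

df = pd.DataFrame(
    [[5, 5, 10, 9, 4, 5],
     [10, 10, 10, 8, 1, 1],
     [8, 8, 0, 9, 6, 3],
     [10, 10, 11, 4, 2, 9],
     [0, 9, 1, 5, 8, 3]],
    index=list('ABCDE'))

# Reverse the columns, flag values > 7, and take a running count along each
# row: a cell survives only if some value > 7 sits at or after its position
# in the original column order (where() aligns the mask back by label).
result = df.where(df.iloc[:, ::-1].gt(7).cumsum(axis=1).ne(0))
print(result)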

Rolling sum of all values associated with the ids from two columns

Basically, I want it to group by all ids from the two columns (['id1','id2']) and get the rolling sum (over the past 2 rows) of their respective values from columns ['value1','value2'].
df:
id1  id2  value1  value2
------------------------
  a    b      10       5
  c    a       5      10
  b    c       0       0
  c    d       2       1
  d    a      10      20
  a    c       5      10
  b    a      10       5
  a    b       5       2
  c    a       2       5
  d    b       5       2
df filtered for id 'a' (just to simplify, I'm showing only id 'a'):
id1  id2  value1  value2  a.rolling.sum(2)
------------------------------------------
  a    b      10       5               NaN
  c    a       5      10                20
  d    a      10      20                30
  a    c       5      10                25
  b    a      10       5                10
  a    b       5       2                10
  c    a       2       5                10
expected df (including all ids in columns ['id1','id2']):
id1  id2  value1  value2  a.rolling.sum(2)  b.rolling.sum(2)  c.rolling.sum(2)
------------------------------------------------------------------------------
  a    b      10       5               NaN               NaN               NaN
  c    a       5      10                20               NaN               NaN
  b    c       0       0               NaN                 5                 5
  c    d       2       1               NaN               NaN                 2
  d    a      10      20                30               NaN               NaN
  a    c       5      10                25               NaN                12
  b    a      10       5                10                10               NaN
  a    b       5       2                10                12               NaN
  c    a       2       5                10               NaN                12
  d    b       5       2               NaN                 4               NaN
Preferably I need a groupby-based approach that computes x.rolling(2) for every id involved, as the original dataset has hundreds of ids.
Reconfigure
i = df.filter(like='id')
i.columns = [i.columns.str[:2], i.columns.str[2:]]
v = df.filter(like='va')
v.columns = [v.columns.str[:5], v.columns.str[5:]]
d = i.join(v)
d
  id     value
   1  2      1   2
0  a  b     10   5
1  c  a      5  10
2  b  c      0   0
3  c  d      2   1
4  d  a     10  20
5  a  c      5  10
6  b  a     10   5
7  a  b      5   2
8  c  a      2   5
9  d  b      5   2
Shuffle Stuff About
def modified_roll(x):
    # For one id's column: drop rows where the id is absent, then roll
    return x.dropna().rolling(2).sum()

extra_bit = d.stack().set_index('id', append=True).unstack().value \
    .apply(modified_roll).groupby(level=0).first()
extra_bit
id a b c d
0 NaN NaN NaN NaN
1 20.0 NaN NaN NaN
2 NaN 5.0 5.0 NaN
3 NaN NaN 2.0 NaN
4 30.0 NaN NaN 11.0
5 25.0 NaN 12.0 NaN
6 10.0 10.0 NaN NaN
7 10.0 12.0 NaN NaN
8 10.0 NaN 12.0 NaN
9 NaN 4.0 NaN 15.0
join
df.join(extra_bit)
id1 id2 value1 value2 a b c d
0 a b 10 5 NaN NaN NaN NaN
1 c a 5 10 20.0 NaN NaN NaN
2 b c 0 0 NaN 5.0 5.0 NaN
3 c d 2 1 NaN NaN 2.0 NaN
4 d a 10 20 30.0 NaN NaN 11.0
5 a c 5 10 25.0 NaN 12.0 NaN
6 b a 10 5 10.0 10.0 NaN NaN
7 a b 5 2 10.0 12.0 NaN NaN
8 c a 2 5 10.0 NaN 12.0 NaN
9 d b 5 2 NaN 4.0 NaN 15.0
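An alternative sketch of the same idea in long form: stack the two (id, value) pairs into single columns, roll per id, and pivot back. This is a rewrite rather than the answer above, so verify it against your data:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'id1':    list('acbcdabacd'),
    'id2':    list('bacdacabab'),
    'value1': [10, 5, 0, 2, 10, 5, 10, 5, 2, 5],
    'value2': [5, 10, 0, 1, 20, 10, 5, 2, 5, 2],
})

# One row per (original row, id) pair, kept in original row order
long_df = pd.concat([
    df[['id1', 'value1']].set_axis(['id', 'value'], axis=1),
    df[['id2', 'value2']].set_axis(['id', 'value'], axis=1),
]).sort_index(kind='stable')

# Rolling sum of the last 2 occurrences per id, then back to wide form
rolled = long_df.groupby('id')['value'].rolling(2).sum().reset_index(level=0)
wide = rolled.pivot(columns='id', values='value')
print(df.join(wide))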

Getting every combination of two Pandas DataFrames?

Let's say I have two dataframes:
import pandas as pd
import numpy as np
df1 = pd.DataFrame({'person':[1,1,2,2,3], 'sub_id':[20,21,21,21,21], 'otherval':[np.nan, np.nan, np.nan, np.nan, np.nan], 'other_stuff':[1,1,1,1,1]}, columns=['person','sub_id','otherval','other_stuff'])
df2 = pd.DataFrame({'sub_id':[20,21,22,23,24,25], 'otherval':[8,9,10,11,12,13]})
I want each level of person in df1 to have all levels of sub_id (including any duplicates) and their respective otherval from df2. In other words, my merged result should look like:
person sub_id otherval other_stuff
1 20 8 1
1 21 9 NaN
1 22 10 NaN
1 23 11 NaN
1 24 12 NaN
1 25 13 NaN
2 20 8 NaN
2 21 9 1
2 21 9 1
2 22 10 NaN
2 23 11 NaN
2 24 12 NaN
2 25 13 NaN
3 20 8 NaN
3 21 9 1
3 22 10 NaN
3 23 11 NaN
3 24 12 NaN
3 25 13 NaN
Notice how person==2 has two rows where sub_id==21.
You can get your desired output with the following:
df3 = df1.groupby('person').apply(lambda x: pd.merge(x, df2, on='sub_id', how='right')).reset_index(level=[0, 1], drop=True)
df3.person = df3.person.ffill().astype(int)
print(df3)
That should yield:
# person sub_id otherval_x other_stuff otherval_y
# 0 1 20 NaN 1.0 8
# 1 1 21 NaN 1.0 9
# 2 1 22 NaN NaN 10
# 3 1 23 NaN NaN 11
# 4 1 24 NaN NaN 12
# 5 1 25 NaN NaN 13
# 6 2 21 NaN 1.0 9
# 7 2 21 NaN 1.0 9
# 8 2 20 NaN NaN 8
# 9 2 22 NaN NaN 10
# 10 2 23 NaN NaN 11
# 11 2 24 NaN NaN 12
# 12 2 25 NaN NaN 13
# 13 3 21 NaN 1.0 9
# 14 3 20 NaN NaN 8
# 15 3 22 NaN NaN 10
# 16 3 23 NaN NaN 11
# 17 3 24 NaN NaN 12
# 18 3 25 NaN NaN 13
I hope that helps.
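On pandas 1.2+ you can sketch the same cross-product without groupby.apply, using how='cross'; note how the duplicate (person 2, sub_id 21) row comes back through the left merge:
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'person': [1, 1, 2, 2, 3],
                    'sub_id': [20, 21, 21, 21, 21],
                    'otherval': [np.nan] * 5,
                    'other_stuff': [1, 1, 1, 1, 1]})
df2 = pd.DataFrame({'sub_id': [20, 21, 22, 23, 24, 25],
                    'otherval': [8, 9, 10, 11, 12, 13]})

# Every person paired with every sub_id, otherval taken from df2
base = df1[['person']].drop_duplicates().merge(df2, how='cross')

# Left-merging df1 back duplicates any (person, sub_id) that df1 repeats
out = base.merge(df1[['person', 'sub_id', 'other_stuff']],
                 on=['person', 'sub_id'], how='left')
print(out.sort_values(['person', 'sub_id']))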
