I hope I can describe clearly what I need. I have a data frame where each 'ID' shares the same columns, plus a column that works as an index. The data frame looks as follows:
import pandas as pd

df = pd.DataFrame({'ID':[1,1,1,1,1,2,2,2,3,3,3,3],'X':[1,2,3,4,5,2,3,4,1,3,4,5],'Y':[1,2,3,4,5,2,3,4,5,4,3,2]})
df
Out[21]:
ID X Y
0 1 1 1
1 1 2 2
2 1 3 3
3 1 4 4
4 1 5 5
5 2 2 2
6 2 3 3
7 2 4 4
8 3 1 5
9 3 3 4
10 3 4 3
11 3 5 2
My intention is to keep X as an index or a regular column (it doesn't matter) and append the Y values of each 'ID' as a separate column, so that every ID gets its own Y column.
You can try:
# give each ID its own Y column, then strip the numeric suffix so they all read 'Y'
out = pd.concat([group.rename(columns={'Y': f'Y{name}'}) for name, group in df.groupby('ID')])
out.columns = out.columns.str.replace(r'\d+$', '', regex=True)
print(out)
ID X Y Y Y
0 1 1 1.0 NaN NaN
1 1 2 2.0 NaN NaN
2 1 3 3.0 NaN NaN
3 1 4 4.0 NaN NaN
4 1 5 5.0 NaN NaN
5 2 2 NaN 2.0 NaN
6 2 3 NaN 3.0 NaN
7 2 4 NaN 4.0 NaN
8 3 1 NaN NaN 5.0
9 3 3 NaN NaN 4.0
10 3 4 NaN NaN 3.0
11 3 5 NaN NaN 2.0
Here's another way to do it:
df_org = pd.DataFrame({'ID':[1,1,1,1,1,2,2,2,3,3,3,3],
                       'X':[1,2,3,4,5,2,3,4,1,3,4,5],
                       'Y':[1,2,3,4,5,2,3,4,5,4,3,2]})
df = df_org.drop(columns='Y')
for i in sorted(set(df_org['ID'])):
    col = 'Y' + str(i)
    # keep only this ID's Y values, renamed to a per-ID column
    df1 = df_org.loc[df_org['ID'] == i, ['Y']].rename(columns={'Y': col})
    df = pd.concat([df, df1], axis=1)
df.columns = df.columns.str.replace(r'\d+$', '', regex=True)
print(df)
Output:
ID X Y Y Y
0 1 1 1.0 NaN NaN
1 1 2 2.0 NaN NaN
2 1 3 3.0 NaN NaN
3 1 4 4.0 NaN NaN
4 1 5 5.0 NaN NaN
5 2 2 NaN 2.0 NaN
6 2 3 NaN 3.0 NaN
7 2 4 NaN 4.0 NaN
8 3 1 NaN NaN 5.0
9 3 3 NaN NaN 4.0
10 3 4 NaN NaN 3.0
11 3 5 NaN NaN 2.0
Another solution could be as follows.
Get unique values for column ID (stored in array s).
Use np.transpose to repeat column ID n times (n == len(s)) and evaluate the array's matches with s.
Use np.where to replace True with values from df.Y and False with NaN.
Finally, drop the original df.Y and rename the new columns as required.
import pandas as pd
import numpy as np
df = pd.DataFrame({'ID':[1,1,1,1,1,2,2,2,3,3,3,3],
'X':[1,2,3,4,5,2,3,4,1,3,4,5],
'Y':[1,2,3,4,5,2,3,4,5,4,3,2]})
s = df.ID.unique()
df[s] = np.where(np.transpose([df.ID]*len(s)) == s,
                 np.transpose([df.Y]*len(s)),
                 np.nan)
df.drop('Y', axis=1, inplace=True)
df.rename(columns={k:'Y' for k in s}, inplace=True)
print(df)
ID X Y Y Y
0 1 1 1.0 NaN NaN
1 1 2 2.0 NaN NaN
2 1 3 3.0 NaN NaN
3 1 4 4.0 NaN NaN
4 1 5 5.0 NaN NaN
5 2 2 NaN 2.0 NaN
6 2 3 NaN 3.0 NaN
7 2 4 NaN 4.0 NaN
8 3 1 NaN NaN 5.0
9 3 3 NaN NaN 4.0
10 3 4 NaN NaN 3.0
11 3 5 NaN NaN 2.0
If performance is an issue, this method should be faster than the other approaches above, especially as the number of unique ID values increases.
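If you want to check this claim on data shaped like yours, here is a rough benchmark sketch; the frame size and the number of distinct IDs below are arbitrary placeholders, and no timing results are claimed here:
from timeit import timeit

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_rows, n_ids = 50_000, 200  # arbitrary sizes, adjust to your data
big = pd.DataFrame({'ID': rng.integers(0, n_ids, n_rows),
                    'X': rng.integers(0, 10, n_rows),
                    'Y': rng.integers(0, 10, n_rows)})

def groupby_concat(df):
    # the groupby + concat approach from the first answer
    out = pd.concat([g.rename(columns={'Y': f'Y{k}'}) for k, g in df.groupby('ID')])
    out.columns = out.columns.str.replace(r'\d+$', '', regex=True)
    return out

def numpy_where(df):
    # the np.where approach from this answer
    df = df.copy()
    s = df.ID.unique()
    df[s] = np.where(np.transpose([df.ID] * len(s)) == s,
                     np.transpose([df.Y] * len(s)),
                     np.nan)
    return df.drop(columns='Y').rename(columns={k: 'Y' for k in s})

print('groupby + concat:', timeit(lambda: groupby_concat(big), number=3))
print('np.where        :', timeit(lambda: numpy_where(big), number=3))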
I have a csv that I import as a dataframe with pandas. The columns are like:
Step1:A Step1:B Step1:C Step1:D Step2:A Step2:B Step2:D Step3:B Step3:D Step3:E
0 1 2 3 4 5 6 7 8 9
Where the step and parameter are separated by ':'. I want to reshape the dataframe to look like this:
Step1 Step2 Step3
A 0 4 nan
B 1 5 7
C 2 nan nan
D 3 6 8
E nan nan 9
Now, if I instead want to maintain the original column order, such that I have this case:
Step2:A Step2:B Step2:C Step2:D Step1:A Step1:B Step1:D AStep3:B AStep3:D AStep3:E
0 1 2 3 4 5 6 7 8 9
Again the step and parameter are separated by ':', and I want the reshaped dataframe to look like this:
Step2 Step1 AStep3
A 0 4 nan
B 1 5 7
C 2 nan nan
D 3 6 8
E nan nan 9
Try read_csv with delim_whitespace (on newer pandas versions, where delim_whitespace is deprecated, use sep=r'\s+' instead), then split the column names on ':' and stack:
df = pd.read_csv('file.csv', delim_whitespace=True)
# turn 'Step1:A'-style labels into a two-level column MultiIndex
df.columns = df.columns.str.split(':', expand=True)
# stack the parameter level into the index and drop the original row numbers
df.stack().reset_index(level=0, drop=True)
Output:
Step1 Step2 Step3
A 0.0 4.0 NaN
B 1.0 5.0 7.0
C 2.0 NaN NaN
D 3.0 6.0 8.0
E NaN NaN 9.0
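For the second example, where the original left-to-right order of the steps (Step2, Step1, AStep3) should be kept, stack may not preserve the column order, so you can reorder the result explicitly. A small sketch, assuming the same read and split steps as above:
df = pd.read_csv('file.csv', delim_whitespace=True)
df.columns = df.columns.str.split(':', expand=True)

# remember the step order as it appears in the file, then reorder after stacking
step_order = pd.unique(df.columns.get_level_values(0))
out = df.stack().reset_index(level=0, drop=True)
print(out[step_order])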
I want to add a list as a column to the df dataframe. The list has fewer elements than the dataframe has rows.
df =
A B C
1 2 3
5 6 9
4
6 6
8 4
2 3
4
6 6
8 4
D = [11,17,18]
I want the following output
df =
A B C D
1 2 3 11
5 6 9 17
4 18
6 6
8 4
2 3
4
6 6
8 4
I am doing the following to extend the list to the length of the dataframe by adding "nan":
# number of nan values required for the list to match the column length
extend_length = df.shape[0]-len(D)
# extend the list
D.extend(extend_length * ['nan'])
# add to the dataframe
df["D"] = D
A B C D
1 2 3 11
5 6 9 17
4 18
6 6 nan
8 4 nan
2 3 nan
4 nan
6 6 nan
8 4 nan
Where "nan" is treated like string but I want it to be empty ot "nan", thus, if I search for number of valid cell in D column it will provide output of 3.
Adding the list as a Series will handle this directly.
D = [11,17,18]
df.loc[:, 'D'] = pd.Series(D)
A simple pd.concat of df and a Series built from D also works:
pd.concat([df, pd.Series(D, name='D')], axis=1)
or
df.assign(D=pd.Series(D))
Out[654]:
A B C D
0 1 2.0 3.0 11.0
1 5 6.0 9.0 17.0
2 4 NaN NaN 18.0
3 6 NaN 6.0 NaN
4 8 NaN 4.0 NaN
5 2 NaN 3.0 NaN
6 4 NaN NaN NaN
7 6 NaN 6.0 NaN
8 8 NaN 4.0 NaN
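Because the missing entries are now real NaN values rather than the string 'nan', counting the valid cells in column D gives 3 (assuming the result above is assigned back to df):
print(df['D'].count())        # 3 non-null values
print(df['D'].notna().sum())  # equivalent check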
I have a DataFrame with more than 450 variables and more than 500,000 rows. However, some variables are more than 90% null. I would like to drop the features with more than 90% empty rows.
Here is a description of my variables.
DataFrame:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A':list('abcdefghij'),
    'B':[4,np.nan,np.nan,np.nan,np.nan,np.nan, np.nan, np.nan, np.nan, np.nan],
    'C':[7,8,np.nan,4,2,3,6,5, 4, 6],
    'D':[1,3,5,np.nan,1,0,10,7, np.nan, 5],
    'E':[5,3,6,9,2,4,7,3, 5, 9],
    'F':list('aaabbbckfr'),
    'G':[np.nan,8,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan, np.nan, np.nan]})
print(df)
A B C D E F G
0 a 4.0 7 1 5 a NaN
1 b NaN 8 3 3 a 8.0
2 c NaN NaN 5 6 a NaN
3 d NaN 4 NaN 9 b NaN
4 e NaN 2 1 2 b NaN
5 f NaN 3 0 4 b NaN
6 g NaN 6 10 7 c NaN
7 h NaN 5 7 3 k NaN
8 i NaN 4 NaN 5 f NaN
9 j NaN 6 5 9 r NaN
Describe:
desc = df.describe(include = 'all')
d1 = desc.loc['varType'] = desc.dtypes
d3 = desc.loc['rowsNull'] = df.isnull().sum()
d4 = desc.loc['%rowsNull'] = round((d3/len(df))*100, 2)
print(desc)
A B C D E F G
count 10 1 10 10 10 10 1
unique 10 NaN NaN NaN NaN 6 NaN
top i NaN NaN NaN NaN b NaN
freq 1 NaN NaN NaN NaN 3 NaN
mean NaN 4 5.4 4.3 5.3 NaN 8
std NaN NaN 2.22111 3.16403 2.45176 NaN NaN
min NaN 4 2 0 2 NaN 8
25% NaN 4 4 1.5 3.25 NaN 8
50% NaN 4 5.5 4.5 5 NaN 8
75% NaN 4 6.75 6.5 6.75 NaN 8
max NaN 4 9 10 9 NaN 8
varType object float64 float64 float64 float64 object float64
rowsNull 0 9 1 2 0 0 9
%rowsNull 0 90 10 20 0 0 90
In this example we have just 2 features to delete, 'B' and 'G'.
But in my case I found 40 variables whose '%rowsNull' is greater than 90%. How can I exclude these variables from my modeling?
I have no idea how to do this.
Please help me.
Thanks.
First compare missing values, then take the mean per column (this works because True values are treated as 1s), and finally filter with boolean indexing via loc, since we are removing columns:
df = df.loc[:, df.isnull().mean() <.9]
print (df)
A C D E F
0 a 7.0 1.0 5 a
1 b 8.0 3.0 3 a
2 c NaN 5.0 6 a
3 d 4.0 NaN 9 b
4 e 2.0 1.0 2 b
5 f 3.0 0.0 4 b
6 g 6.0 10.0 7 c
7 h 5.0 7.0 3 k
8 i 4.0 NaN 5 f
9 j 6.0 5.0 9 r
Detail:
print (df.isnull().mean())
A 0.0
B 0.9
C 0.1
D 0.2
E 0.0
F 0.0
G 0.9
dtype: float64
You can find the columns with at least 90% null values and drop them:
cols_to_drop = df.columns[df.isnull().sum()/len(df) >= .90]
df.drop(cols_to_drop, axis = 1, inplace = True)
A C D E F
0 a 7.0 1.0 5 a
1 b 8.0 3.0 3 a
2 c NaN 5.0 6 a
3 d 4.0 NaN 9 b
4 e 2.0 1.0 2 b
5 f 3.0 0.0 4 b
6 g 6.0 10.0 7 c
7 h 5.0 7.0 3 k
8 i 4.0 NaN 5 f
9 j 6.0 5.0 9 r
Based on your code, you could do something like:
keepCols = desc.columns[desc.loc['%rowsNull'] < 90]
df = df[keepCols]
I have a dataframe in which I want to apply a rolling mean over a column of numbers that come in runs of three, where I only want 4 unique values to go into each mean.
Let's say my dataframe looks like:
Group Column to roll
1 9
2 5
2 5
2 4
2 4
2 4
2 3
2 3
2 3
2 6
2 6
2 6
2 8
Since I want 4 unique values to go into each mean, with all values weighted equally and taken from the same group, my expected output would be:
Group Output
1 nan
2 nan
2 nan
2 nan
2 nan
2 nan
2 nan
2 nan
2 nan
2 (6+3+4+5)/4
2 (6+3+4+5)/4
2 (6+3+4+5)/4
2 (8+6+3+4)/4
Any ideas how to do this?
You could try something like this:
df['Column to roll'].drop_duplicates().rolling(4).mean().reindex(df.index).ffill()
Output:
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 4.50
9 4.50
10 4.50
11 5.25
Name: Column to roll, dtype: float64
Edit: the question changed, so here is a grouped version:
df_out = df.groupby('Group')['Column to roll']\
           .apply(lambda x: x.drop_duplicates().rolling(4).mean()).rename('Output')
df.set_index('Group', append=True).swaplevel(0, 1)\
  .join(df_out, how='left').ffill().reset_index(level=1, drop=True)
Output:
Column to roll Output
Group
1 9 NaN
2 5 NaN
2 5 NaN
2 4 NaN
2 4 NaN
2 4 NaN
2 3 NaN
2 3 NaN
2 3 NaN
2 6 4.50
2 6 4.50
2 6 4.50
2 8 5.25
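One caveat, not needed for the sample data but worth keeping in mind: the plain .ffill() above fills forward across group boundaries. A sketch of a variant that forward-fills only within each Group, reusing the same df_out:
result = (df.set_index('Group', append=True).swaplevel(0, 1)
            .join(df_out, how='left'))
# forward-fill the rolling means within each Group only
result['Output'] = result.groupby(level=0)['Output'].ffill()
print(result.reset_index(level=1, drop=True))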