I'm importing data from Excel where some rows may have notes in a column and are not truly part of the dataframe. Dummy example below:
H1 H2 H3
*highlighted cols are PII
sam red 5
pam blue 3
rod green 11
* this is the end of the data
When the above file is imported into dfPA it looks like:
dfPA:
Index H1 H2 H3
1 *highlighted cols are PII
2 sam red 5
3 pam blue 3
4 rod green 11
5 * this is the end of the data
I want to delete the first and last row. This is what I've done.
# get count of cols in df
input: cntcols = dfPA.shape[1]
output: 3
# get count of NaNs in each row of df
input: a = dfPA.shape[1] - dfPA.count(axis=1)
output:
0 2
1 3
2 3
4 3
5 2
(where a is a series)
#convert a from series to df
dfa = a.to_frame()
# delete rows where the number of NaNs is greater than 'n'
n = 1
for r, row in dfa.iterrows():
    if (cntcols - dfa.iloc[r][0]) > n:
        i = row.name
        dfPA = dfPA.drop(index=i)
This doesn't work. Is there a way to do this?
You should use the pandas.DataFrame.dropna method. Its thresh parameter lets you define the minimum number of non-NaN values a row/column needs in order to be kept.
Imagine the following dataframe:
>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame([[1,np.nan,1,np.nan], [1,1,1,1], [1,np.nan,1,1], [np.nan,1,1,1]], columns=list('ABCD'))
A B C D
0 1.0 NaN 1 NaN
1 1.0 1.0 1 1.0
2 1.0 NaN 1 1.0
3 NaN 1.0 1 1.0
You can drop columns with NaN using:
>>> df.dropna(axis=1)
C
0 1
1 1
2 1
3 1
The thresh parameter defines the minimum number of non-NaN values to keep the column:
>>> df.dropna(thresh=3, axis=1)
A C D
0 1.0 1 NaN
1 1.0 1 1.0
2 1.0 1 1.0
3 NaN 1 1.0
If you want to reason in terms of the number of NaN instead (here, dropping a column as soon as it contains at least 2 NaN), note that with axis=1 the non-NaN values are counted down the rows, so the threshold is expressed from len(df):
# example for a minimum of 2 NaN to drop the column
>>> df.dropna(thresh=len(df)-(2-1), axis=1)
If the rows rather than the columns need to be filtered, remove the axis parameter or use axis=0:
>>> df.dropna(thresh=3)
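Applied back to the question's dfPA, a minimal sketch (assuming the note rows are the only ones that leave all but one of the three columns empty):
# keep rows with at least 2 of the 3 columns filled,
# i.e. drop rows with 2 or more NaN (the note rows)
dfPA = dfPA.dropna(thresh=dfPA.shape[1] - 1)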
What is the most pandastic way to create running total columns at various levels (without iterating over the rows)?
input:
import pandas as pd
import numpy as np
df = pd.DataFrame()
df['test'] = np.nan,np.nan,'X','X','X','X',np.nan,'X','X','X','X','X','X',np.nan,np.nan,'X','X'
df['desired_output_level_1'] = np.nan,np.nan,'1','1','1','1',np.nan,'2','2','2','2','2','2',np.nan,np.nan,'3','3'
df['desired_output_level_2'] = np.nan,np.nan,'1','2','3','4',np.nan,'1','2','3','4','5','6',np.nan,np.nan,'1','2'
output:
test desired_output_level_1 desired_output_level_2
0 NaN NaN NaN
1 NaN NaN NaN
2 X 1 1
3 X 1 2
4 X 1 3
5 X 1 4
6 NaN NaN NaN
7 X 2 1
8 X 2 2
9 X 2 3
10 X 2 4
11 X 2 5
12 X 2 6
13 NaN NaN NaN
14 NaN NaN NaN
15 X 3 1
16 X 3 2
The test column can only contain X's or NaNs.
The number of consecutive X's is random.
In the 'desired_output_level_1' column, I'm trying to number each series of X's.
In the 'desired_output_level_2' column, I'm trying to track the running position (duration so far) within each series.
Can anyone help? Thanks in advance.
Perhaps not the most pandastic way, but seems to yield what you are after.
Three key points:
we are operating on only rows that are not NaN, so let's create a mask:
mask = df['test'].notna()
For the level 1 computation, a change from NaN to not-NaN is easy to detect by comparing each row with the row shifted by one; a cumulative sum then numbers the runs:
df.loc[mask, "level_1"] = (df["test"].isna() & df["test"].shift(-1).notna()).cumsum()
For the level 2 computation, it's a bit trickier. One way to do it is to run a cumulative sum within each level_1 group and use .transform to preserve the indexing:
df.loc[mask, "level_2"] = (
    df.loc[mask, ["level_1"]]
    .assign(level_2=1)
    .groupby("level_1")["level_2"]
    .transform("cumsum")
)
The last step (if needed) is to convert the columns to strings:
df['level_1'] = df['level_1'].astype('Int64').astype('str')
df['level_2'] = df['level_2'].astype('Int64').astype('str')
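For comparison, a slightly more compact variant of the same two steps using cumcount (my own sketch, not part of the original answer; it assumes the same df as in the question):
mask = df['test'].notna()
groups = (df['test'].isna() & df['test'].shift(-1).notna()).cumsum()
df.loc[mask, 'level_1'] = groups[mask]
# position within each run of X's, starting at 1
df.loc[mask, 'level_2'] = df[mask].groupby(groups[mask]).cumcount() + 1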
I have a pandas DataFrame with some labels for n classes. Now I want to add a column and store how many items are between two elements of the same class.
Class
0 0
1 1
2 1
3 1
4 0
and I want to get this:
Class Shift
0 0 NaN
1 1 NaN
2 1 1.0
3 1 1.0
4 0 4.0
This is the code I used:
df = pd.DataFrame({'Class':[0,1,1,1,0]})
df['Shift'] = np.nan
for item in df.Class.unique():
    _df = df[df['Class'] == item]
    _df = _df.reset_index().rename({'index':'idx'}, axis=1)
    df.loc[_df.idx, 'Shift'] = _df['idx'].diff().values
df
This seems circuitous to me. Is there a more elegant way of producing this output?
You could do:
df['shift'] = np.arange(len(df))
df['shift'] = df.groupby('Class')['shift'].diff()
print(df)
Output
Class shift
0 0 NaN
1 1 NaN
2 1 1.0
3 1 1.0
4 0 4.0
As an alternative:
df['shift'] = df.assign(shift=np.arange(len(df))).groupby('Class')['shift'].diff()
The idea is to create a column with consecutive values, group by the Class column and compute the diff on the new column.
If there is a default RangeIndex, use Index.to_series, group by the df['Class'] column and apply DataFrameGroupBy.diff:
df['Shift'] = df.index.to_series().groupby(df['Class']).diff()
A similar alternative is to create a helper column:
df['Shift'] = df.assign(tmp = df.index).groupby('Class')['tmp'].diff()
print (df)
Class Shift
0 0 NaN
1 1 NaN
2 1 1.0
3 1 1.0
4 0 4.0
Your solution with resetting the index can be simplified to:
df['Shift'] = df.reset_index().groupby('Class')['index'].diff().to_numpy()
I need to sum up the values of column 'D' for every row with the same combination of values in columns 'A', 'B' and 'C'. Eventually I need to create a DataFrame with the unique combinations of values from columns 'A', 'B' and 'C' and the corresponding sum in column 'D'.
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0,3,size=(10,4)),columns=list('ABCD'))
df
Output:
A B C D
0 0 2 0 2
1 0 1 2 1
2 0 0 2 0
3 1 2 2 2
4 0 2 2 2
5 0 2 2 2
6 2 2 2 1
7 2 1 1 1
8 1 0 2 0
9 1 2 0 0
I've tried to create a temporary dataframe with empty cells:
D = pd.DataFrame([i for i in range(len(df))]).rename(columns = {0:'D'})
D['D'] = ''
D
Output:
D
0
1
2
3
4
5
6
7
8
9
And I used apply() to sum up all 'D' column values for each unique combination of columns 'A', 'B' and 'C'. For example, the line below returns the sum of values from column 'D' for 'A'=0, 'B'=2, 'C'=2:
df[(df['A']==0) & (df['B']==2) & (df['C']==2)]['D'].sum()
Output:
4
function:
def Sumup(cols):
    A = cols[0]
    B = cols[1]
    C = cols[2]
    D = cols[3]
    sum = df[(df['A']==A) & (df['B']==B) & (df['C']==C)]['D'].sum()
    return sum
Applied it on df and saved the result in the temporary df D['D']:
D['D'] = df[['A','B','C','D']].apply(Sumup)
Later I wanted to use drop_duplicates, but I receive a dataframe consisting of NaNs.
D
Output:
D
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
9 NaN
Could anyone give me a hint on how to manage the NaN problem, or what other approach I can apply to solve the original problem?
df.groupby(['A','B','C']).sum()
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randint(0,3,size=(10,4)),columns=list('ABCD'))
df.groupby(["A", "B", "C"])["D"].sum()
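To get the result back as a regular DataFrame with one row per unique A/B/C combination (as the question asks for), pass as_index=False or reset the index afterwards:
result = df.groupby(['A', 'B', 'C'], as_index=False)['D'].sum()
# equivalent: df.groupby(['A', 'B', 'C'])['D'].sum().reset_index()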
I would like to interpolate missing values within groups in a dataframe, using the preceding and following rows' values.
Here is the df (there are more records within a group but for this example I left 3 per group):
import numpy as np
import pandas as pd
df = pd.DataFrame({'Group': ['a','a','a','b','b','b','c','c','c'],'Yval': [1,np.nan,5,2,np.nan,8,5,np.nan,10],'Xval': [0,3,2,4,5,8,3,1,9],'PTC': [0,1,0,0,1,0,0,1,0]})
df:
Group Yval Xval PTC
0 a 1.0 0 0
1 a NaN 3 1
2 a 5.0 2 0
3 b 2.0 4 0
4 b NaN 5 1
5 b 8.0 8 0
6 c 5.0 3 0
7 c NaN 1 1
8 c 10.0 9 0
For PTC (point to calculate) rows I need to interpolate Yval using the Xval, Yval of the previous (-1) and next (+1) rows.
I.e. for group 'a' I would like:
df.iloc[1,1]=np.interp(3, [0,2], [1,5])
Here is what I tried to do, using loc, the shift method and the interp function found in this post:
df.loc[(df['PTC'] == 1), ['Yval']] = \
    np.interp(df['Xval'], (df['Xval'].shift(+1), df['Xval'].shift(-1)), (df['Yval'].shift(+1), df['Yval'].shift(-1)))
Error I get:
ValueError: object too deep for desired array
# next row's values (shift(-1)) and previous row's values (shift(+1))
df['Xval-1'] = df['Xval'].shift(-1)
df['Xval+1'] = df['Xval'].shift(+1)
df['Yval-1'] = df['Yval'].shift(-1)
df['Yval+1'] = df['Yval'].shift(+1)
# np.interp needs increasing x-coordinates, so pass [previous, next]
df["PTC_interpol"] = df.apply(lambda x: np.interp(x['Xval'], [x['Xval+1'], x['Xval-1']], [x['Yval+1'], x['Yval-1']]), axis=1)
# fill the missing Yval on the PTC rows with the interpolated value
df['Yval'] = np.where(df['Yval'].isnull(), df["PTC_interpol"], df['Yval'])
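If the shifts should not cross group boundaries (the snippet above shifts over the whole frame, which happens to be safe here because every PTC row sits strictly inside its group), the same idea can be applied per group. A hedged sketch, assuming the previous row's Xval is not larger than the next row's (np.interp expects increasing x-coordinates):
prev_x = df.groupby('Group')['Xval'].shift(1)
next_x = df.groupby('Group')['Xval'].shift(-1)
prev_y = df.groupby('Group')['Yval'].shift(1)
next_y = df.groupby('Group')['Yval'].shift(-1)
ptc = df['PTC'] == 1
df.loc[ptc, 'Yval'] = [
    np.interp(x, [x0, x1], [y0, y1])
    for x, x0, x1, y0, y1 in zip(df.loc[ptc, 'Xval'], prev_x[ptc], next_x[ptc], prev_y[ptc], next_y[ptc])
]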
Assume that we have the following pandas dataframe:
import pandas as pd
df = pd.DataFrame({'x':[0,0,1,0,0,0,0],'y':[1,1,1,1,1,1,0],'z':[0,1,1,1,0,0,1]})
x y z
0 0 1 0
1 0 1 1
2 1 1 1
3 0 1 1
4 0 1 0
5 0 1 0
6 0 0 1
The whole dataframe is filled with either 1 or 0. Looking at each column separately, if the current row's value is different from the previous value, I need to count the number of previous consecutive values:
x y z
0
1 1
2 2
3 1
4 3
5
6 6 2
I tried to write a lambda function and apply it to the entire dataframe, but I failed. Any ideas?
Let's try this:
def f(col):
    x = (col != col.shift().bfill())
    s = x.cumsum()
    return s.groupby(s).transform('count').shift().where(x)

df.apply(f).fillna('')
Output:
x y z
0
1 1
2 2
3 1
4 3
5
6 6 2
Details:
Use apply to run a custom function on each column of the dataframe.
Find the change spots in the column, then use cumsum to create groups of consecutive values; groupby and transform give a count for each record, shift moves that count onto the row where the change happens, and where keeps it only at the change spots.
You can try the following, where you identify the "runs" first and then get the run lengths. An entry is only written where the value switches, so what is needed is the length of every run except the last one.
import pandas as pd
import numpy as np
def func(x, missing=np.nan):
    # run id for each position (increments whenever the value changes)
    runs = np.cumsum(np.append(0, np.diff(x) != 0))
    # positions where the value switches
    switches = np.where(np.diff(x) != 0)[0] + 1
    out = np.repeat(missing, len(x))
    # at every switch, write the length of the run that just ended
    out[switches] = np.bincount(runs)[:-1]
    # alternative using pandas (thanks to Scott):
    ## out[switches] = pd.value_counts(runs, sort=False)[:-1]
    return out
df.apply(func)
x y z
0 NaN NaN NaN
1 NaN NaN 1.0
2 2.0 NaN NaN
3 1.0 NaN NaN
4 NaN NaN 3.0
5 NaN NaN NaN
6 NaN 6.0 2.0
It might be faster with a good implementation of run-length encoding, but I am not too familiar with it in Python.
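For reference, here is a run-length-encoding flavour of the same idea as my own sketch (not from the original answer): compute run starts and lengths explicitly, then write each run's length at the start of the following run.
import numpy as np
import pandas as pd

def rle_counts(col):
    vals = col.to_numpy()
    # index of the first element of every run (position 0 always starts a run)
    starts = np.r_[0, np.flatnonzero(np.diff(vals) != 0) + 1]
    # length of every run
    lengths = np.diff(np.r_[starts, len(vals)])
    out = np.full(len(vals), np.nan)
    # at the start of each run (except the first), write the previous run's length
    out[starts[1:]] = lengths[:-1]
    return pd.Series(out, index=col.index)

df.apply(rle_counts)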