Say I have a DataFrame where the data is ordered with respect to time. I have a weights column and I want to find, for each row, the maximum weight from that row's position onward. For example, the max value for the 10th row would be found from elements 11 to the end.
I ended up writing this function, but performance is a big concern.
import pandas as pd

df = pd.DataFrame({"time": [100, 200, 300, 400, 500, 600, 700, 800],
                   "weights": [120, 160, 190, 110, 34, 55, 66, 33]})
totalRows = df['time'].count()

def findMaximumValRelativeToCurrentRow(row):
    index = row.name
    if index != totalRows:
        # max over the slice from the current row to the end
        val = df[index:totalRows]['weights'].max()
        df.at[index, 'max'] = val  # set_value is removed in modern pandas
    else:
        df.at[index, 'max'] = row['weights']

df.apply(findMaximumValRelativeToCurrentRow, axis=1)
print(df)
Is there any better way to do the operation than this?
You can use cummax on the Series reversed with iloc:
print (df['weights'].iloc[::-1])
7 33
6 66
5 55
4 34
3 110
2 190
1 160
0 120
Name: weights, dtype: int64
df['max1'] = df['weights'].iloc[::-1].cummax()
print (df)
time weights max max1
0 100 120 190.0 190
1 200 160 190.0 190
2 300 190 190.0 190
3 400 110 110.0 110
4 500 34 66.0 66
5 600 55 66.0 66
6 700 66 66.0 66
7 800 33 33.0 33
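If you instead want the maximum strictly after the current row (the literal "elements 11 to the end" reading, excluding the row itself), shifting the inclusive result up by one is a small follow-on sketch; the last row then has no successor and gets NaN:
# max of all rows strictly below the current one (assumes max1 from above)
df['max_after'] = df['max1'].shift(-1)
print(df[['weights', 'max1', 'max_after']])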
Related
I have a dataframe which has NaN or empty cells in a specific column, for example the column at index 2. Unfortunately I don't have a subset (column names); I just have the index. I want to delete the rows which have this feature. On Stack Overflow there are too many solutions which use subset.
This is the dataframe for example:
12 125 36 45 665
15 212 12 65 62
65 9 nan 98 84
21 54 78 5 654
211 65 58 26 65
...
output:
12 125 36 45 665
15 212 12 65 62
21 54 78 5 654
211 65 58 26 65
If you need to test the third column (index=2), use boolean indexing; this handles nan whether it is the missing value np.nan or the string 'nan':
idx = 2
df1 = df[df.iloc[:, idx].notna() & df.iloc[:, idx].ne('nan')]
#if no value is empty string or nan string or missing value NaN/None
#df1 = df[df.iloc[:, idx].notna() & ~df.iloc[:, idx].isin(['nan',''])]
print (df1)
0 1 2 3 4
0 12 125 36.0 45 665
1 15 212 12.0 65 62
3 21 54 78.0 5 654
4 211 65 58.0 26 65
If the nans are actual missing values (NaN):
df1 = df.dropna(subset=df.columns[[idx]])
print (df1)
0 1 2 3 4
0 12 125 36.0 45 665
1 15 212 12.0 65 62
3 21 54 78.0 5 654
4 211 65 58.0 26 65
Not sure what you mean by
there are too many solutions which use subset
but the way to do this would be
df[~df.isna().any(axis=1)]
You can use notnull()
df = df.loc[df[df.columns[idx]].notnull()]
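For reference, a self-contained sketch reproducing the sample data with a purely positional filter (the integer column labels are assumed, since the question shows only values):
import numpy as np
import pandas as pd

df = pd.DataFrame([[12, 125, 36, 45, 665],
                   [15, 212, 12, 65, 62],
                   [65, 9, np.nan, 98, 84],
                   [21, 54, 78, 5, 654],
                   [211, 65, 58, 26, 65]])

idx = 2
# keep only rows where the column at position idx is not missing
print(df[df.iloc[:, idx].notna()])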
I have the following problem and do not know how to solve it in a performant way:
Input Pandas DataFrame:
timestep  article  volume
      35        1      20
      37        2       5
     123        2      12
     155        3      10
     178        2      23
     234        1      17
     478        1      28
Output Pandas DataFrame:
timestep  volume
      35      20
      37      25
     123      32
     178      53
     234      50
     478      61
Calculation Example for timestep 478:
28 (last article 1 volume) + 23 (last article 2 volume) + 10 (last article 3 volume) = 61
What is the best way to do this in pandas?
Try with ffill:
#sort if needed
df = df.sort_values("timestep")
df["volume"] = (df["volume"].where(df["article"].eq(1)).ffill().fillna(0) +
df["volume"].where(df["article"].eq(2)).ffill().fillna(0))
output = df.drop("article", axis=1)
>>> output
timestep volume
0 35 20.0
1 37 25.0
2 123 32.0
3 178 43.0
4 234 40.0
5 478 51.0
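The expression above tracks only articles 1 and 2 explicitly; a hedged generalization of the same forward-fill idea that handles any set of articles (including article 3 from the question) pivots the last-seen volume per article and sums across each row. A sketch starting again from the original input frame, assuming timesteps are unique as in the sample:
import pandas as pd

df = pd.DataFrame({"timestep": [35, 37, 123, 155, 178, 234, 478],
                   "article":  [1, 2, 2, 3, 2, 1, 1],
                   "volume":   [20, 5, 12, 10, 23, 17, 28]})

# one column per article holding its most recent volume at each timestep
wide = df.pivot(index="timestep", columns="article", values="volume").ffill()
out = wide.sum(axis=1).reset_index(name="volume")
print(out)
Note this also emits a row for timestep 155 (value 42), which the question's expected output omits.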
Group By article & Take last element & Sum
df.groupby(['article']).tail(1)["volume"].sum()
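Note this yields a single number rather than the per-timestep series: the last volume of each article summed, i.e. the expected value for the final timestep only.
# last rows per article carry volumes 28, 23 and 10 -> 61 (timestep 478)
df.groupby('article').tail(1)['volume'].sum()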
You can assign a group number to each run of consecutive article values with .cumsum(). Then get the previous group's last value with .map() over GroupBy.last(). Finally, add this previous last value to volume, as follows:
# Get group number of consecutive `article`
g = df['article'].ne(df['article'].shift()).cumsum()
# Add `volume` to previous group last
df['volume'] += g.sub(1).map(df.groupby(g)['volume'].last()).fillna(0, downcast='infer')
Result:
print(df)
timestep article volume
0 35 1 20
1 37 2 25
2 123 2 32
3 178 2 43
4 234 1 40
5 478 1 51
Breakdown of steps
Previous group last values:
g.sub(1).map(df.groupby(g)['volume'].last()).fillna(0, downcast='infer')
0 0
1 20
2 20
3 20
4 43
5 43
Name: article, dtype: int64
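For reference, the grouping key g itself on the answer's 6-row frame (a reproduction; the article-3 row from the question is absent there), so g.sub(1) maps each row to the previous run's group number:
print(g)
0    1
1    2
2    2
3    2
4    3
5    3
Name: article, dtype: int64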
Try:
df["new_volume"] = (
df.loc[df["article"] != df["article"].shift(-1), "volume"]
.reindex(df.index, method='ffill')
.shift()
+ df["volume"]
).fillna(df["volume"])
df
Output:
timestep article volume new_volume
0 35 1 20 20.0
1 37 2 5 25.0
2 123 2 12 32.0
3 178 2 23 43.0
4 234 1 17 40.0
5 478 1 28 51.0
Explained:
Find the last record of each run by comparing 'article' with the next row, reindex that series to align with the original dataframe while forward-filling, then shift so each row sees the previous run's last 'volume'. Add this to the current row's 'volume' and fill the first value (which has no previous run) with the original 'volume'.
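A step-by-step decomposition of the chained expression, with each intermediate named (same logic, just unrolled):
# rows where the next 'article' differs, i.e. the last row of each run
last_of_run = df.loc[df["article"] != df["article"].shift(-1), "volume"]
# carry each run-last value forward over the rows that follow it
filled = last_of_run.reindex(df.index, method="ffill")
# shift down one row so each row sees the previous run's last volume
prev_last = filled.shift()
df["new_volume"] = (prev_last + df["volume"]).fillna(df["volume"])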
I am new to Python and I am trying to understand how to work with aggregation and data manipulation.
I have a dataframe:
df3
Out[122]:
SBK SSC CountRecs
0 99 22 9
1 99 12 10
2 99 121 11
3 99 138 12
4 99 123 8
... ... ...
160247 184 1318 1
160248 394 2659 1
160249 412 757 1
160250 357 1312 1
160251 202 106 1
I want to understand, across the entire data frame, what percentage of the CountRecs each row contributes within its SBK.
For example, the first row has SBK 99 and CountRecs 9, and the CountRecs for SBK 99 sum to 50, so the percentage is 9/50 * 100. But I want this to be done automatically for all rows. How can I go about this?
1. You need to group by the column you want.
2. Merge on the grouped column.
   2.1 You can change the name of the new column.
3. Add the percentage column.
a = df3.merge(pd.DataFrame(df3.groupby('SBK')['CountRecs'].sum()), on='SBK')
df3['percent'] = (a['CountRecs_x'] / a['CountRecs_y']) * 100
df3
Use GroupBy.transform to get a Series the same size as the original DataFrame, filled with the per-group sums, so you can divide the original column by it:
df3['percent'] = df3['CountRecs'] / df3.groupby('SBK')['CountRecs'].transform('sum') * 100
print (df3)
SBK SSC CountRecs percent
0 99 22 9 18.0
1 99 12 10 20.0
2 99 121 11 22.0
3 99 138 12 24.0
4 99 123 8 16.0
160247 184 1318 1 100.0
160248 394 2659 1 100.0
160249 412 757 1 100.0
160250 357 1312 1 100.0
160251 202 106 1 100.0
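To see why transform works here, it broadcasts each group's sum back to a Series the same length as df3 — for the five SBK 99 rows the value is 9+10+11+12+8 = 50 in every position, so the division lines up row by row:
# per-row divisor, aligned with df3's index
print(df3.groupby('SBK')['CountRecs'].transform('sum').head())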
My aim is to get the percentages of multiple columns relative to another column (the divisor). The resulting columns should be kept in the same dataframe.
A B Divisor
2000 8 31 166
2001 39 64 108
2002 68 8 142
2003 28 2 130
2004 55 61 150
result:
A B Divisor perc_A perc_B
2000 8 31 166 4.8 18.7
2001 39 64 108 36.1 59.3
2002 68 8 142 47.9 5.6
2003 28 2 130 21.5 1.5
2004 55 61 150 36.7 40.7
My solution:
def percentage(divisor, columns, heading, dframe):
    for col in columns:
        heading_new = heading + col
        dframe[heading_new] = (dframe.loc[:, col] / dframe.loc[:, divisor]) * 100
    return dframe

df_new = percentage("Divisor", df.columns.values[:2], "perc_", df)
The solution above works, but is there a more effective way to get the result?
(I know there are already similar questions, but I couldn't find one where the results are saved in the same dataframe without losing the original columns.)
Thanks
Use DataFrame.join to add new columns created by DataFrame.div from the first 2 columns selected with DataFrame.iloc, multiplied by 100 with DataFrame.mul and renamed with DataFrame.add_prefix:
df = df.join(df.iloc[:, :2].div(df['Divisor'], axis=0).mul(100).add_prefix('perc_'))
print (df)
A B Divisor perc_A perc_B
2000 8 31 166 4.819277 18.674699
2001 39 64 108 36.111111 59.259259
2002 68 8 142 47.887324 5.633803
2003 28 2 130 21.538462 1.538462
2004 55 61 150 36.666667 40.666667
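If you want one decimal place as in the question's expected result, the same chain takes a round before the prefix:
df = df.join(df.iloc[:, :2].div(df['Divisor'], axis=0).mul(100).round(1).add_prefix('perc_'))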
Your function should be changed:
def percentage(divisor, columns, heading, dframe):
    return dframe.join(dframe[columns].div(dframe[divisor], axis=0).mul(100).add_prefix(heading))

df_new = percentage("Divisor", df.columns.values[:2], "perc_", df)
You can reshape the divisor:
df[['perc_A', 'perc_B']] = df[['A', 'B']] / df['Divisor'].values[:,None] * 100
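A minor variant of the same idea stays in pandas and aligns on the index instead of dropping to a raw NumPy array:
df[['perc_A', 'perc_B']] = df[['A', 'B']].div(df['Divisor'], axis=0) * 100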
I am new to numpy and need some help in solving my problem.
I read records from a binary file using dtypes, then I select 3 columns:
df = pd.DataFrame(np.array([(124,90,5),(125,90,5),(126,90,5),(127,90,0),(128,91,5),(129,91,5),(130,91,5),(131,91,0)]), columns = ['atype','btype','ctype'] )
which gives
atype btype ctype
0 124 90 5
1 125 90 5
2 126 90 5
3 127 90 0
4 128 91 5
5 129 91 5
6 130 91 5
7 131 91 0
'atype' is of no interest to me for now. But what I want is the row numbers when:
(x, 90, 5) appears in the 2nd and 3rd columns
(x, 90, 0) appears in the 2nd and 3rd columns
(x, 91, 5) appears in the 2nd and 3rd columns
(x, 91, 0) appears in the 2nd and 3rd columns
etc.
There are 7 values like 90, 91, 92, 93, 94, 95, 96 in the 2nd column and, correspondingly, values of either 5 or 0 in the 3rd column. There are 1 million entries, so is there any way to find these without a for loop?
Using pandas you could try the following.
df[(df['btype'].between(90, 96)) & (df['ctype'].isin([0, 5]))]
Using your example: if some of the values are changed such that df is
atype btype ctype
0 124 90 5
1 125 90 5
2 126 0 5
3 127 90 100
4 128 91 5
5 129 0 5
6 130 91 5
7 131 91 0
then using the solution above, the following is returned.
atype btype ctype
0 124 90 5
1 125 90 5
4 128 91 5
6 130 91 5
7 131 91 0
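If you need the actual row numbers per (btype, ctype) pair rather than the filtered frame, groupby exposes them without a Python-level loop; a sketch against the original 8-row frame:
groups = df.groupby(['btype', 'ctype']).groups
print(groups[(90, 5)])  # rows 0, 1, 2
print(groups[(91, 0)])  # row 7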