How to normalize values in a dataframe column in different ranges - python

I have a dataframe like this:
T data
0 0 10
1 1 20
2 2 30
3 3 40
4 4 50
5 0 5
6 1 13
7 2 21
8 0 3
9 1 7
10 2 11
11 3 15
12 4 19
The values in T form sequences that each run from 0 up to some maximum, and that maximum can differ between sequences.
In reality the values in data are NOT equally spaced; the even spacing here is just for demonstration purposes.
What I want to achieve is to add a third column called dataDiv where each value in data of a certain sequence is divided by the value at T = 0 that belongs to the respective sequence. In my case, I have 3 sequences and for the first sequence I want to divide each value by 10, in the second sequence each value should be divided by 5 and in the third by 3.
So the expected outcome would look like this:
T data dataDiv
0 0 10 1.000000
1 1 20 2.000000
2 2 30 3.000000
3 3 40 4.000000
4 4 50 5.000000
5 0 5 1.000000
6 1 13 2.600000
7 2 21 4.200000
8 0 3 1.000000
9 1 7 2.333333
10 2 11 3.666667
11 3 15 5.000000
12 4 19 6.333333
The way I currently implement it is as follows:
I first determine the indices at which T = 0. Then I loop through these indices and divide the values in data by the value at T = 0 of the respective sequence, which gives me the desired output (shown above). The code looks as follows:
import pandas as pd

df = pd.DataFrame({'T': list(range(5)) + list(range(3)) + list(range(5)),
                   'data': (list(range(10, 60, 10)) + list(range(5, 25, 8))
                            + list(range(3, 21, 4)))})
# get indices where T = 0
idZE = df[df['T'] == 0].index.tolist()
# append one past the last index as a sentinel
idZE.append(max(df.index) + 1)
# add the column that will hold the normalized values
df['dataDiv'] = df['data']
# loop through the indices where T = 0 and normalize each sequence
for ix, indi in enumerate(idZE[:-1]):
    df.loc[indi:idZE[ix + 1] - 1, 'dataDiv'] = (df['data'].iloc[indi:idZE[ix + 1]]
                                                / df['data'].iloc[indi])
My question is: Is there any smarter solution than this which avoids the loop?

The following approach avoids loops in favour of vectorized computations and should perform faster. The basic idea is to label runs of integers in column 'T', find the first value in each of these groups, and then divide the values in 'data' by the appropriate first value.
df['grp'] = (df['T'] == 0).cumsum() # label consecutive runs of integers
x = df.groupby('grp')['data'].first() # first value in each group
df['dataDiv'] = df['data'] / df['grp'].map(x) # divide
This gives the DataFrame with the desired column:
T data grp dataDiv
0 0 10 1 1.000000
1 1 20 1 2.000000
2 2 30 1 3.000000
3 3 40 1 4.000000
4 4 50 1 5.000000
5 0 5 2 1.000000
6 1 13 2 2.600000
7 2 21 2 4.200000
8 0 3 3 1.000000
9 1 7 3 2.333333
10 2 11 3 3.666667
11 3 15 3 5.000000
12 4 19 3 6.333333
(You can then drop the 'grp' column if you wish: df.drop('grp', axis=1).)
As @DSM points out, the three lines of code can be collapsed into one using groupby.transform:
df['dataDiv'] = df['data'] / df.groupby((df['T'] == 0).cumsum())['data'].transform('first')
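For reference, transform('first') returns a Series aligned with the original index, broadcasting each group's first value onto every row of that group. A minimal end-to-end check (a sketch rebuilding the example frame):
import pandas as pd

df = pd.DataFrame({'T': list(range(5)) + list(range(3)) + list(range(5)),
                   'data': (list(range(10, 60, 10)) + list(range(5, 25, 8))
                            + list(range(3, 21, 4)))})

# label each run starting at T == 0, broadcast the run's first 'data'
# value to every row, and divide
first = df.groupby((df['T'] == 0).cumsum())['data'].transform('first')
df['dataDiv'] = df['data'] / first
print(df)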

Related

Why does pd.rolling and .apply() return multiple outputs from a function returning a single value?

I'm trying to create a rolling function that:
Divides two DataFrames with 3 columns in each df.
Calculates the mean of each row from the output in step 1.
Sums the averages from step 2.
This could be done with df.iterrows(), looping through each row. However, that would be inefficient on larger datasets, so my objective is to build a pd.rolling-based function that does this much faster.
What I would need help with is to understand why my approach below returns multiple values while the function I'm using only returns a single value.
EDIT : I have updated the question with the code that produces my desired output.
This is the test dataset I'm working with:
#import libraries
import pandas as pd
import numpy as np
#create two dataframes
values = {'column1': [7, 2, 3, 1, 3, 2, 5, 3, 2, 4, 6, 8, 1, 3, 7, 3, 7, 2, 6, 3, 8],
          'column2': [1, 5, 2, 4, 1, 5, 5, 3, 1, 5, 3, 5, 8, 1, 6, 4, 2, 3, 9, 1, 4],
          'column3': [3, 6, 3, 9, 7, 1, 2, 3, 7, 5, 4, 1, 4, 2, 9, 6, 5, 1, 4, 1, 3]}
df1 = pd.DataFrame(values)
df2 = pd.DataFrame([[2,3,4],[3,4,1],[3,6,1]])
print(df1)
print(df2)
column1 column2 column3
0 7 1 3
1 2 5 6
2 3 2 3
3 1 4 9
4 3 1 7
5 2 5 1
6 5 5 2
7 3 3 3
8 2 1 7
9 4 5 5
10 6 3 4
11 8 5 1
12 1 8 4
13 3 1 2
14 7 6 9
15 3 4 6
16 7 2 5
17 2 3 1
18 6 9 4
19 3 1 1
20 8 4 3
0 1 2
0 2 3 4
1 3 4 1
2 3 6 1
One method to achieve my desired output by looping through each row:
RunningSum = []
for index, rows in df1.iterrows():
    if index > 3:
        Div = abs(((df2 / df1.iloc[index - 3 + 1:index + 1].reset_index(drop=True).values) - 1) * 100)
        Average = Div.mean(axis=0)
        SumOfAverages = np.sum(Average)
        RunningSum.append(SumOfAverages)

# printing my desired output values
print(RunningSum)
[330.42328042328046,
212.0899470899471,
152.06349206349208,
205.55555555555554,
311.9047619047619,
209.1269841269841,
197.61904761904765,
116.94444444444444,
149.72222222222223,
430.0,
219.51058201058203,
215.34391534391537,
199.15343915343914,
159.6031746031746,
127.6984126984127,
326.85185185185185,
204.16666666666669]
However, this is slow when working with large datasets. Therefore, I've tried to create a function that can be applied to a pd.rolling() object.
def SumOfAverageFunction(vals):
    Div = df2 / vals.reset_index(drop=True)
    Average = Div.mean(axis=0)
    SumOfAverages = np.sum(Average)
    return SumOfAverages

RunningSum = df1.rolling(window=3, axis=0).apply(SumOfAverageFunction)
The problem here is that my function returns multiple outputs. How can I solve this?
print(RunningSum)
column1 column2 column3
0 NaN NaN NaN
1 NaN NaN NaN
2 3.214286 4.533333 2.277778
3 4.777778 3.200000 2.111111
4 5.888889 4.416667 1.656085
5 5.111111 5.400000 2.915344
6 3.455556 3.933333 5.714286
7 2.866667 2.066667 5.500000
8 2.977778 3.977778 3.063492
9 3.555556 5.622222 1.907937
10 2.750000 4.200000 1.747619
11 1.638889 2.377778 3.616667
12 2.986111 2.005556 5.500000
13 5.333333 3.075000 4.750000
14 4.396825 5.000000 3.055556
15 2.174603 3.888889 2.148148
16 2.111111 2.527778 1.418519
17 2.507937 3.500000 3.311111
18 2.880952 3.000000 5.366667
19 2.722222 3.370370 5.750000
20 2.138889 5.129630 5.666667
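Why the multiple outputs: DataFrame.rolling(...).apply(func) calls func once per window per column, handing it a one-dimensional window, so the single return value lands in every column separately, which is exactly the DataFrame shown above. A minimal probe illustrating this (a sketch, not from the original thread):
import pandas as pd

demo = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [10, 20, 30, 40]})

def probe(window):
    # window holds a single column's values for the current window
    print(window.tolist())
    return window.sum()

# probe runs once per window per column, so the result has both columns
print(demo.rolling(window=2).apply(probe, raw=False))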
After reordering the operations, your calculation can be simplified:
BASE = df2.sum(axis=0) / 3                 # column sums of df2 over the window length
BASE_series = pd.Series({k: v for k, v in zip(df1.columns, BASE)})  # re-key onto df1's columns
result = df1.rdiv(BASE_series, axis=1).sum(axis=1)  # BASE / df1, summed across columns
print(np.around(result[4:], 3))
Outputs:
4 5.508
5 4.200
6 2.400
7 3.000
...
If you don't want to calculate anything before index 4, then change:
df1.iloc[4:].rdiv(...

Division of multiple dimension data in pandas using groupby

Since pandas can't work in multi-dimensions, I usually stack the data row-wise and use a dummy column to mark the data dimensions. Now, I need to divide one dimension by another.
For example, given this dataframe where key defines the dimensions:
index key value
0 a 10
1 b 12
2 a 20
3 b 15
4 a 8
5 b 9
I want to achieve this:
index key value ratio_a_b
0 a 10 0.833333
1 b 12 NaN
2 a 20 1.333333
3 b 15 NaN
4 a 8 0.888889
5 b 9 NaN
Is there a way to do it using groupby?
You don't really need (and should not use) groupby for this:
# interpolate the b values
s = df['value'].where(df['key'].eq('b')).bfill()
# mask the a values and divide
# change to df['key'].ne('b') if you have many values of a
df['ratio'] = df['value'].where(df['key'].eq('a')).div(s)
Output:
index key value ratio
0 0 a 10 0.833333
1 1 b 12 NaN
2 2 a 20 1.333333
3 3 b 15 NaN
4 4 a 8 0.888889
5 5 b 9 NaN
Using eq, cumsum and GroupBy.apply with shift.
We use .eq to get a boolean mask that is True where the value is 'a', then take cumsum to build a unique identifier for each a, b pair.
Then we group by that identifier and divide each value by the value one row below it, obtained with shift(-1):
s = df['key'].eq('a').cumsum()
df['ratio_a_b'] = df.groupby(s)['value'].apply(lambda x: x.div(x.shift(-1)))
Output
key value ratio_a_b
0 a 10 0.833333
1 b 12 NaN
2 a 20 1.333333
3 b 15 NaN
4 a 8 0.888889
5 b 9 NaN
This is what s returns, our unique identifier for each a,b pair:
print(s)
0 1
1 1
2 2
3 2
4 3
5 3
Name: key, dtype: int32
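For completeness, a further option (a sketch, not from the original answers): since 'key' really encodes a dimension, you can unstack it into real columns with pivot, divide once, and map the result back onto the 'a' rows:
import pandas as pd

df = pd.DataFrame({'key': list('ababab'), 'value': [10, 12, 20, 15, 8, 9]})

# label each a/b pair, then unstack 'key' into real columns
df['pair'] = df['key'].eq('a').cumsum()
wide = df.pivot(index='pair', columns='key', values='value')
ratio = wide['a'] / wide['b']            # one ratio per pair
# write the ratio back onto the 'a' rows only, as in the expected output
df.loc[df['key'].eq('a'), 'ratio_a_b'] = df['pair'].map(ratio)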

Replacing values with the next unique one

In my pandas dataframe I have a column of non-unique values.
I want to add a second column that contains the next unique value, i.e.:
col
1
5
5
2
2
4
col addedCol
1 5
5 2
5 2
2 4
2 4
4 (last value doesn't matter)
How can I achieve this using pandas?
To clarify: I want each row to contain the next value that is different from that row's value. I hope that explains it better.
IIUC, you need the next value which is different from the current value.
df.loc[:, 'col2'] = df.drop_duplicates().shift(-1).col
df['col2'].ffill(inplace=True)
col col2
0 1 5.0
1 5 2.0
2 5 2.0
3 2 2.0
(Notice that the last 2.0 value doesn't matter.) As suggested by @MartijnPieters,
df['col2'] = df['col2'].astype(int)
can convert the values back to the original integers if needed (this works once no NaN remains in the column).
Adding another good solution from @piRSquared:
df.assign(addedcol=df.index.to_series().shift(-1).map(df.col.drop_duplicates()).bfill())
col addedcol
0 1 5.0
1 5 2.0
2 5 2.0
3 2 NaN
Another example, if df is
col
0 1
1 5
2 5
3 2
4 3
5 3
6 10
7 9
Then
df.loc[:, 'col2'] = df.drop_duplicates().shift(-1).col
df = df.ffill()
yields
col col2
0 1 5.0
1 5 2.0
2 5 2.0
3 2 3.0
4 3 10.0
5 3 10.0
6 10 9.0
7 9 9.0
Using factorize
s = pd.factorize(df.col)[0]
pd.Series(s + 1).map(dict(zip(s, df.col)))
Out[242]:
0 5.0
1 2.0
2 2.0
3 NaN
dtype: float64
#df['newadd']=pd.Series(s+1).map(dict(zip(s,df.col))).values
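To see why this works, a small sketch of what the mapping holds: factorize labels each distinct value 0, 1, 2, ... in order of appearance, so looking up label + 1 in {label: value} returns the next distinct value.
import pandas as pd

df = pd.DataFrame({'col': [1, 5, 5, 2]})
s = pd.factorize(df.col)[0]     # array([0, 1, 1, 2]): one label per distinct value
print(dict(zip(s, df.col)))     # {0: 1, 1: 5, 2: 2}: label -> value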
Under @MartijnPieters's condition:
s = df.col.diff().ne(0).cumsum()
(s + 1).map(dict(zip(s, df.col)))
Out[260]:
0 5.0
1 2.0
2 2.0
3 4.0
4 4.0
5 5.0
6 NaN
7 NaN
Name: col, dtype: float64
Setup
Added additional data with multiple clusters
df = pd.DataFrame({'col': [*map(int, '1552554442')]})
Two interpretations
We have to consider what happens when there are non-contiguous clusters:
df
col
0 1 # First instance of `1`; next unique is `5`
1 5 # First instance of `5`; next unique is `2`
2 5 # Next unique is `2`
3 2 # First instance of `2`; next unique is `4` because `5` is not new
4 5 # Next unique is `4`
5 5 # Next unique is `4`
6 4 # First instance of `4`; next unique is null
7 4 # Next unique is null
8 4 # Next unique is null
9 2 # Second time `2` is seen; should its next unique be null, or what it was before (`4`)?
Allowed to look back
Use factorize and add 1. This is very much in the spirit of @Wen's answer.
i, u = df.col.factorize()
u_ = np.append(u, -1) # Append an integer value to represent null
df.assign(addedcol=u_[i + 1])
col addedcol
0 1 5
1 5 2
2 5 2
3 2 4
4 5 2
5 5 2
6 4 -1
7 4 -1
8 4 -1
9 2 4
Only Forward
Similar to before, except we track the cumulative maximum of the factorized values.
i, u = df.col.factorize()
u_ = np.append(u, -1) # Append an integer value to represent null
x = np.maximum.accumulate(i)
df.assign(addedcol=u_[x + 1])
col addedcol
0 1 5
1 5 2
2 5 2
3 2 4
4 5 4
5 5 4
6 4 -1
7 4 -1
8 4 -1
9 2 -1
You'll notice that the difference is in the last value. When we can only look forward, we see that there is no next unique value.
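If you would rather have a real NaN than the -1 sentinel, a small follow-up sketch (a presentation choice, not part of the original answer):
import numpy as np
import pandas as pd

df = pd.DataFrame({'col': [*map(int, '1552554442')]})
i, u = df.col.factorize()
u_ = np.append(u, -1)               # -1 stands in for "no next unique"
out = df.assign(addedcol=u_[i + 1])
# swap the sentinel for NaN (this forces the column to float)
out['addedcol'] = out['addedcol'].astype('float').replace(-1, np.nan)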

Conditional sum from rows into a new column in pandas

I am looking to create a new column in pandas based on the value in the row. My sample data:
df = pd.DataFrame({'A': ['a'] * 6 + ['b'] * 4,
                   'Sales': [2, 3, 7, 1, 4, 3, 5, 6, 9, 10],
                   'Week': [1, 2, 3, 4, 5, 11, 1, 2, 3, 4]})
I want a new column "Last3WeekSales" corresponding to each week, having the sum of sales for the previous 3 weeks.
NOTE: Shift() won't work here as data for some weeks is missing.
The logic I had in mind: check the week number in each row, then sum the data from weeks w-1, w-2 and w-3.
Output required:
A Week Last3WeekSales
0 a 1 0
1 a 2 2
2 a 3 5
3 a 4 12
4 a 5 11
5 a 11 0
6 b 1 0
7 b 2 5
8 b 3 11
9 b 4 20
Use groupby, shift and rolling:
df['Last3WeekSales'] = df.groupby('A')['Sales']\
.apply(lambda x: x.shift(1)
.rolling(3, min_periods=1)
.sum())\
.fillna(0)
Output:
A Sales Week Last3WeekSales
0 a 2 1 0.0
1 a 3 2 2.0
2 a 7 3 5.0
3 a 1 4 12.0
4 a 4 5 11.0
5 a 3 6 12.0
6 b 5 1 0.0
7 b 6 2 5.0
8 b 9 3 11.0
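Note that shift(1)/rolling(3) treats rows as consecutive weeks, so it will not produce 0 for a gap such as week 11 in the question's data. If the gaps matter, one gap-aware variant (a sketch, not from the original answers; the helper last3 is illustrative) is to reindex each group onto a gap-free week axis first:
import pandas as pd

df = pd.DataFrame({'A': ['a'] * 6 + ['b'] * 4,
                   'Sales': [2, 3, 7, 1, 4, 3, 5, 6, 9, 10],
                   'Week': [1, 2, 3, 4, 5, 11, 1, 2, 3, 4]})

def last3(g):
    # place sales on a gap-free week axis so missing weeks count as 0
    full = g.set_index('Week')['Sales'].reindex(
        range(g['Week'].min(), g['Week'].max() + 1), fill_value=0)
    # sum the three weeks before each week, then keep the weeks we have
    return full.shift(1).rolling(3, min_periods=1).sum().reindex(g['Week']).fillna(0)

df['Last3WeekSales'] = df.groupby('A', group_keys=False).apply(last3).values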
You can also use a rolling sum over the last 3 values together with shift(n), which shifts the column by n rows (1 in your case). The original answer used pd.rolling_sum, which has been removed from modern pandas; with the current .rolling(3).sum() API, and supposing you have a column 'Sales' with the sales of each week, the code would be:
df["Last3WeekSales"] = df.groupby("A")["Sales"].apply(lambda x: x.shift(1).rolling(3).sum())

How can I fill in missing values in a range with Pandas?

I have a dataset with a number of values like below.
>>> a.head()
value freq
3 9 1
2 11 1
0 12 4
1 15 2
I need to fill in the values between the integers in the value column. For example, I need to insert one new row between 9 and 11 filled with zeroes, then another two between 12 and 15. The end result should be the dataset covering 9-15, with the 'missing' rows as zeroes across the board.
Is there any way to insert a new row at a specific location without replacing data? The only methods I've found involve slicing the dataframe at a location, appending a new row, and concatenating the remainder.
UPDATE: The index is completely irrelevant so don't worry about that.
You didn't say what should happen to your Index, so I'm assuming it's unimportant.
In [11]: import numpy as np
In [12]: df.index = df['value']
In [15]: df.reindex(np.arange(df.value.min(), df.value.max() + 1)).fillna(0)
Out[15]:
value freq
value
9 9 1
10 0 0
11 11 1
12 12 4
13 0 0
14 0 0
15 15 2
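A small variation on the same idea (a sketch): using set_index/reset_index keeps value as a real, gap-free column instead of the zero-filled one shown above:
import numpy as np
import pandas as pd

a = pd.DataFrame({'value': [9, 11, 12, 15], 'freq': [1, 1, 4, 2]})
out = (a.set_index('value')
        .reindex(np.arange(a['value'].min(), a['value'].max() + 1), fill_value=0)
        .reset_index())   # 'value' comes back as a column, now without gaps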
Another option is to create a second dataframe with values from min to max, and outer join this to your dataframe:
import pandas as pd
a = pd.DataFrame({'value':[9,11,12,15], 'freq':[1,1,4,2]})
# value freq
#0 9 1
#1 11 1
#2 12 4
#3 15 2
b = pd.DataFrame({'value': list(range(a.value.min(), a.value.max() + 1))})
value
0 9
1 10
2 11
3 12
4 13
5 14
6 15
a = pd.merge(left=a, right=b, on='value', how='outer').fillna(0).sort_values(by='value')
# value freq
#0 9 1.0
#4 10 0.0
#1 11 1.0
#2 12 4.0
#5 13 0.0
#6 14 0.0
#3 15 2.0
