How can I fill in missing values in a range with Pandas? - python

I have a dataset with a number of values like below.
>>> a.head()
   value  freq
3      9     1
2     11     1
0     12     4
1     15     2
I need to fill in the values between the integers in the value column. For example, I need to insert one new row between 9 and 11 filled with zeroes, then another two between 12 and 15. The end result should be a dataset covering 9-15, with the 'missing' rows zeroed across the board.
Is there any way to insert a new row at a specific location without replacing data? The only methods I've found involve slicing the dataframe at a location, appending a new row, and concatenating the remainder.
UPDATE: The index is completely irrelevant so don't worry about that.

You didn't say what should happen to your Index, so I'm assuming it's unimportant.
In [12]: df.index = df['value']
In [15]: df.reindex(np.arange(df.value.min(), df.value.max() + 1)).fillna(0)
Out[15]:
       value  freq
value
9          9     1
10         0     0
11        11     1
12        12     4
13         0     0
14         0     0
15        15     2
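The same approach as a self-contained sketch; the explicitly named RangeIndex and the fill_value/reset_index steps are my additions, which also keep the value column intact for the inserted rows instead of zeroing it:

```python
import pandas as pd

df = pd.DataFrame({'value': [9, 11, 12, 15], 'freq': [1, 1, 4, 2]})

# build the full range of values as a named index, so reset_index
# restores it as an ordinary 'value' column afterwards
new_index = pd.RangeIndex(df['value'].min(), df['value'].max() + 1, name='value')
full = (df.set_index('value')
          .reindex(new_index, fill_value=0)   # fill the gaps with 0
          .reset_index())
```

Using fill_value=0 instead of a later fillna also keeps freq as an integer column, since no NaN is ever introduced.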

Another option is to create a second dataframe with values from min to max, and outer join this to your dataframe:
import pandas as pd
a = pd.DataFrame({'value':[9,11,12,15], 'freq':[1,1,4,2]})
#    value  freq
# 0      9     1
# 1     11     1
# 2     12     4
# 3     15     2
b = pd.DataFrame({'value':[x for x in range(a.value.min(), a.value.max()+1)]})
   value
0      9
1     10
2     11
3     12
4     13
5     14
6     15
a = pd.merge(left=a, right=b, on='value', how='outer').fillna(0).sort_values(by='value')
#    value  freq
# 0      9   1.0
# 4     10   0.0
# 1     11   1.0
# 2     12   4.0
# 5     13   0.0
# 6     14   0.0
# 3     15   2.0
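One wrinkle with the merge route: the NaNs introduced by the outer join force freq to float, hence the 1.0/0.0 above. A self-contained sketch of the same approach (the astype and reset_index steps are my additions) that casts back to integers:

```python
import pandas as pd

a = pd.DataFrame({'value': [9, 11, 12, 15], 'freq': [1, 1, 4, 2]})
b = pd.DataFrame({'value': range(a['value'].min(), a['value'].max() + 1)})

merged = (pd.merge(a, b, on='value', how='outer')
            .fillna(0)                 # the unmatched rows get freq = NaN
            .sort_values('value')
            .astype({'freq': int})     # undo the float upcast
            .reset_index(drop=True))
```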

Related

Dataframe fill column with previous value until condition

I have a dataframe that looks like this:
Step  Text      Parameter
15    print     1
16    control   2
17    printout  3
18    print2    1
19    Nan       2
20    Nan       3
21    Nan       4
22    Nan       1
23    Nan       2
24    Nan       1
And I want my dataframe to look like this:
Step  Text      Parameter
15    print     1
15    print     2
15    print     3
16    control   1
16    control   2
17    control   3
17    control   4
18    printout  1
18    printout  2
19    print2    1
So basically, whenever Parameter is "1" a new group starts, and I need the next Step and Text values to fill in that group.
Any ideas? :)
You can use repeat on a custom group:
# ensure NaN
df['Text'] = df['Text'].replace('Nan', pd.NA)
# get the number of rows per group starting with 1
n = df.groupby(df['Parameter'].eq(1).cumsum()).size()
# repeat the index of the non NaN values as many times
idx = df['Text'].dropna().index.repeat(n)
# replace the values ignoring the index
# (using the underlying numpy array)
df[['Step', 'Text']] = df.loc[idx, ['Step', 'Text']].to_numpy()
output:
   Step      Text  Parameter
0    15     print          1
1    15     print          2
2    15     print          3
3    16   control          1
4    16   control          2
5    16   control          3
6    16   control          4
7    17  printout          1
8    17  printout          2
9    18    print2          1
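Put together as a runnable sketch (the frame is reconstructed from the question's table, which is an assumption on my part):

```python
import pandas as pd

df = pd.DataFrame({
    'Step': [15, 16, 17, 18, 19, 20, 21, 22, 23, 24],
    'Text': ['print', 'control', 'printout', 'print2',
             'Nan', 'Nan', 'Nan', 'Nan', 'Nan', 'Nan'],
    'Parameter': [1, 2, 3, 1, 2, 3, 4, 1, 2, 1],
})

# ensure real missing values, not the string 'Nan'
df['Text'] = df['Text'].replace('Nan', pd.NA)

# a new group starts at every Parameter == 1; count rows per group
n = df.groupby(df['Parameter'].eq(1).cumsum()).size()

# repeat each non-NaN row's index by its group size
idx = df['Text'].dropna().index.repeat(n)

# overwrite Step/Text positionally, ignoring index alignment
df[['Step', 'Text']] = df.loc[idx, ['Step', 'Text']].to_numpy()
```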

Why do pd.rolling and .apply() return multiple outputs from a function returning a single value?

I'm trying to create a rolling function that:
Divides two DataFrames with 3 columns each.
Calculates the mean of each row from the output of step 1.
Sums the averages from step 2.
This could be done with df.iterrows(), i.e. looping through each row. However, that would be inefficient when working with larger datasets. Therefore, my objective is to create a pd.rolling function that could do this much faster.
What I would need help with is to understand why my approach below returns multiple values while the function I'm using only returns a single value.
EDIT: I have updated the question with the code that produces my desired output.
This is the test dataset I'm working with:
#import libraries
import pandas as pd
import numpy as np
#create two dataframes
values = {'column1': [7,2,3,1,3,2,5,3,2,4,6,8,1,3,7,3,7,2,6,3,8],
          'column2': [1,5,2,4,1,5,5,3,1,5,3,5,8,1,6,4,2,3,9,1,4],
          'column3': [3,6,3,9,7,1,2,3,7,5,4,1,4,2,9,6,5,1,4,1,3]}
df1 = pd.DataFrame(values)
df2 = pd.DataFrame([[2,3,4],[3,4,1],[3,6,1]])
print(df1)
print(df2)
column1 column2 column3
0 7 1 3
1 2 5 6
2 3 2 3
3 1 4 9
4 3 1 7
5 2 5 1
6 5 5 2
7 3 3 3
8 2 1 7
9 4 5 5
10 6 3 4
11 8 5 1
12 1 8 4
13 3 1 2
14 7 6 9
15 3 4 6
16 7 2 5
17 2 3 1
18 6 9 4
19 3 1 1
20 8 4 3
0 1 2
0 2 3 4
1 3 4 1
2 3 6 1
One method to achieve my desired output is by looping through each row:
RunningSum = []
for index, rows in df1.iterrows():
    if index > 3:
        Div = abs(((df2 / df1.iloc[index-3+1:index+1].reset_index(drop=True).values) - 1) * 100)
        Average = Div.mean(axis=0)
        SumOfAverages = np.sum(Average)
        RunningSum.append(SumOfAverages)
#printing my desired output values
print(RunningSum)
[330.42328042328046,
212.0899470899471,
152.06349206349208,
205.55555555555554,
311.9047619047619,
209.1269841269841,
197.61904761904765,
116.94444444444444,
149.72222222222223,
430.0,
219.51058201058203,
215.34391534391537,
199.15343915343914,
159.6031746031746,
127.6984126984127,
326.85185185185185,
204.16666666666669]
However, this is time-consuming when working with large datasets. Therefore, I've tried to create a function which applies to a pd.rolling() object.
def SumOfAverageFunction(vals):
    Div = df2 / vals.reset_index(drop=True)
    Average = Div.mean(axis=0)
    SumOfAverages = np.sum(Average)
    return SumOfAverages

RunningSum = df1.rolling(window=3, axis=0).apply(SumOfAverageFunction)
The problem here is that my function returns multiple outputs. How can I solve this?
print(RunningSum)
column1 column2 column3
0 NaN NaN NaN
1 NaN NaN NaN
2 3.214286 4.533333 2.277778
3 4.777778 3.200000 2.111111
4 5.888889 4.416667 1.656085
5 5.111111 5.400000 2.915344
6 3.455556 3.933333 5.714286
7 2.866667 2.066667 5.500000
8 2.977778 3.977778 3.063492
9 3.555556 5.622222 1.907937
10 2.750000 4.200000 1.747619
11 1.638889 2.377778 3.616667
12 2.986111 2.005556 5.500000
13 5.333333 3.075000 4.750000
14 4.396825 5.000000 3.055556
15 2.174603 3.888889 2.148148
16 2.111111 2.527778 1.418519
17 2.507937 3.500000 3.311111
18 2.880952 3.000000 5.366667
19 2.722222 3.370370 5.750000
20 2.138889 5.129630 5.666667
After reordering the operations, your calculation can be simplified:
BASE = df2.sum(axis=0) /3
BASE_series = pd.Series({k: v for k, v in zip(df1.columns, BASE)})
result = df1.rdiv(BASE_series, axis=1).sum(axis=1)
print(np.around(result[4:], 3))
Outputs:
4 5.508
5 4.200
6 2.400
7 3.000
...
If you don't want to calculate anything before index 4, then change the first term to:
df1.iloc[4:].rdiv(...
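As to the "why multiple outputs" part of the question: a DataFrame's rolling(...).apply calls the function once per column, handing it each column's window as a Series, so a function that returns a single value still yields one value per column. A tiny demonstration of that behaviour (the toy frame and the function are mine):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [10, 20, 30, 40]})

seen = []
def f(window):
    seen.append(list(window))  # each call receives one column's window
    return window.sum()

# out has one result column per input column, not a single series
out = df.rolling(window=2).apply(f, raw=False)
```

This is why the question's SumOfAverageFunction runs once for column1, once for column2, and once for column3, producing a three-column result.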

Replace by previous values

I have a dataframe like the one built below. The goal of this program is to replace specific values with the previous value.
import pandas as pd
test = pd.DataFrame([2,2,3,1,1,2,4,6,43,23,4,1,3,3,1,1,1,4,5], columns = ['A'])
If one wants to replace all 1s by the previous values, a possible solution is:
for li in test[test['A'] == 1].index:
    test['A'].iloc[li] = test['A'].iloc[li-1]
However, it is very inefficient. Can you suggest a more efficient solution?
IIUC, replace 1 with np.nan, then ffill:
import numpy as np
test.replace(1, np.nan).ffill().astype(int)
Out[881]:
A
0 2
1 2
2 3
3 3
4 3
5 2
6 4
7 6
8 43
9 23
10 4
11 4
12 3
13 3
14 3
15 3
16 3
17 4
18 5
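An equivalent that keeps the operation on a single column is mask, which sets the matching positions to NaN in one step before forward-filling; the mask/eq combination here is my substitution for replace, sketched on the question's data:

```python
import pandas as pd

test = pd.DataFrame([2, 2, 3, 1, 1, 2, 4, 6, 43, 23, 4, 1, 3, 3, 1, 1, 1, 4, 5],
                    columns=['A'])

# hide the 1s as NaN, carry the last valid value forward, restore int dtype
test['A'] = test['A'].mask(test['A'].eq(1)).ffill().astype(int)
```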

How to normalize values in a dataframe column in different ranges

I have a dataframe like this:
T data
0 0 10
1 1 20
2 2 30
3 3 40
4 4 50
5 0 5
6 1 13
7 2 21
8 0 3
9 1 7
10 2 11
11 3 15
12 4 19
The values in T form sequences that each run from 0 up to some value, where the maximum can differ between sequences.
Normally, the values in data are NOT equally spaced, that is now just for demonstration purposes.
What I want to achieve is to add a third column called dataDiv where each value in data of a certain sequence is divided by the value at T = 0 that belongs to the respective sequence. In my case, I have 3 sequences and for the first sequence I want to divide each value by 10, in the second sequence each value should be divided by 5 and in the third by 3.
So the expected outcome would look like this:
T data dataDiv
0 0 10 1.000000
1 1 20 2.000000
2 2 30 3.000000
3 3 40 4.000000
4 4 50 5.000000
5 0 5 1.000000
6 1 13 2.600000
7 2 21 4.200000
8 0 3 1.000000
9 1 7 2.333333
10 2 11 3.666667
11 3 15 5.000000
12 4 19 6.333333
The way I currently implement it is as follows:
I first determine the indices at which T = 0. Then I loop through these indices and divide the data in data by the value at T=0 of the respective sequence which gives me the desired output (which is shown above). The code looks as follows:
import pandas as pd
df = pd.DataFrame({'T': list(range(5)) + list(range(3)) + list(range(5)),
                   'data': list(range(10, 60, 10)) + list(range(5, 25, 8)) + list(range(3, 21, 4))})
# get indices where T = 0
idZE = df[df['T'] == 0].index.tolist()
# last index of dataframe
idZE.append(max(df.index)+1)
# add the column with normalized values
df['dataDiv'] = df['data']
# loop through indices where T = 0 and normalize values
for ix, indi in enumerate(idZE[:-1]):
    df['dataDiv'].iloc[indi:idZE[ix + 1]] = df['data'].iloc[indi:idZE[ix + 1]] / df['data'].iloc[indi]
My question is: Is there any smarter solution than this which avoids the loop?
The following approach avoids loops in favour of vectorized computations and should perform faster. The basic idea is to label runs of integers in column 'T', find the first value in each of these groups, and then divide the values in 'data' by the appropriate first value.
df['grp'] = (df['T'] == 0).cumsum() # label consecutive runs of integers
x = df.groupby('grp')['data'].first() # first value in each group
df['dataDiv'] = df['data'] / df['grp'].map(x) # divide
This gives the DataFrame with the desired column:
T data grp dataDiv
0 0 10 1 1.000000
1 1 20 1 2.000000
2 2 30 1 3.000000
3 3 40 1 4.000000
4 4 50 1 5.000000
5 0 5 2 1.000000
6 1 13 2 2.600000
7 2 21 2 4.200000
8 0 3 3 1.000000
9 1 7 3 2.333333
10 2 11 3 3.666667
11 3 15 3 5.000000
12 4 19 3 6.333333
(You can then drop the 'grp' column if you wish: df.drop('grp', axis=1).)
As @DSM points out, the three lines of code can be collapsed into one with the use of groupby.transform:
df['dataDiv'] = df['data'] / df.groupby((df['T'] == 0).cumsum())['data'].transform('first')
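The one-liner, made self-contained (the literal columns below reproduce the question's example frame in Python 3 form):

```python
import pandas as pd

df = pd.DataFrame({'T':    [0, 1, 2, 3, 4, 0, 1, 2, 0, 1, 2, 3, 4],
                   'data': [10, 20, 30, 40, 50, 5, 13, 21, 3, 7, 11, 15, 19]})

# label each run starting at T == 0, then divide by that run's first value
df['dataDiv'] = df['data'] / df.groupby((df['T'] == 0).cumsum())['data'].transform('first')
```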

I want to get the relative index of a column in a pandas dataframe

I want to make a new column of the 5-day return for a stock, let's say. I am using a pandas dataframe. I computed a moving average using the rolling_mean function, but I'm not sure how to reference rows the way I would in a spreadsheet (B6-B1, for example). Does anyone know how I can do this index reference and subtraction?
sample data frame:
day  price  5-day-return
1    10     -
2    11     -
3    15     -
4    14     -
5    12     -
6    18     I want to find this ((day 5 price) - (day 1 price))
7    20     then continue this down the list
8    19
9    21
10   22
Are you wanting this:
In [10]:
df['5-day-return'] = (df['price'] - df['price'].shift(5)).fillna(0)
df
Out[10]:
day price 5-day-return
0 1 10 0
1 2 11 0
2 3 15 0
3 4 14 0
4 5 12 0
5 6 18 8
6 7 20 9
7 8 19 4
8 9 21 7
9 10 22 10
shift returns the row at a given offset; we use it to subtract the price from 5 rows earlier from the current row. fillna fills the NaN values that occur before the first valid calculation.
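If "return" is meant in the percentage sense rather than as a raw price difference, pct_change performs the same windowed comparison directly; a sketch on the question's data (the 5-day-pct column name is mine):

```python
import pandas as pd

df = pd.DataFrame({'day': range(1, 11),
                   'price': [10, 11, 15, 14, 12, 18, 20, 19, 21, 22]})

# raw difference, as in the answer above
df['5-day-return'] = (df['price'] - df['price'].shift(5)).fillna(0)

# fractional change over the same 5-row window
df['5-day-pct'] = df['price'].pct_change(periods=5).fillna(0)
```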
