I need to add three columns to a pandas dataframe, computed from existing data.
df
>>
n a b
0 3 1.2 1.4
1 2 2.8 3.8
2 3 2.3 2.0
3 3 1.7 5.7
4 2 6.9 4.9
5 1 3.9 19.0
6 9 2.3 8.3
7 5 8.5 3.1
8 18 6.7 7.0
9 10 5.6 6.4
I have done the following:
import pandas
import numpy

def add_tests(add_df):
    new_tests = """
    (a+b)/n
    (a*b)/n
    ((a+b)/n)**-1
    """.split()
    rows = add_df.shape[0]
    cols = len(new_tests)
    U = pandas.DataFrame(numpy.empty([rows, cols]), columns=new_tests)
    add_df = pandas.concat([df, U], axis=1)
    for i, row in add_df.iterrows():
        # 1) good calculation:
        add_df['(a+b)/n'].loc[i] = (add_df['a'].loc[i] + add_df['b'].loc[i]) / add_df['n'].loc[i]
        # 2) good calculation (both ways):
        add_df['(a*b)/n'].loc[i] = (row['a'] * row['b']) / row['n']
        # 3) bad calculation:
        add_df['((a+b)/n)**-1'].loc[i] = row['(a+b)/n'] ** -1
    return add_df
I get the following warning message:
df = add_tests(df)
df
>>
C:...\pandas\core\indexing.py:141: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
n a b (a+b)/n (a*b)/n ((a+b)/n)**-1
0 3 1.2 1.4 0.866667 0.560000 0.833333
1 2 2.8 3.8 3.300000 5.320000 0.588235
2 3 2.3 2.0 1.433333 1.533333 0.434783
3 3 1.7 5.7 2.466667 3.230000 0.178571
4 2 6.9 4.9 5.900000 16.905000 0.500000
5 1 3.9 19.0 22.900000 74.100000 0.052632
6 9 2.3 8.3 1.177778 2.121111 0.142857
7 5 8.5 3.1 2.320000 5.270000 0.263158
8 18 6.7 7.0 0.761111 2.605556 0.111111
9 10 5.6 6.4 1.200000 3.584000 0.666667
Obviously step 3 does not work properly. What is the right way to do this?
Fun with eval:
define tuples of (temporary column name, formula)
create a newline-separated string of formulas to pass to eval
use a dictionary to rename the temporary columns to the formula strings
ftups = [('aa', '(a+b)/n'), ('bb', '(a*b)/n'), ('cc', '((a+b)/n)**-1')]
forms = '\n'.join([' = '.join(tup) for tup in ftups])
fdict = dict(ftups)
df.eval(forms, inplace=False).rename(columns=fdict)
n a b (a+b)/n (a*b)/n ((a+b)/n)**-1
0 3 1.2 1.4 0.866667 0.560000 1.153846
1 2 2.8 3.8 3.300000 5.320000 0.303030
2 3 2.3 2.0 1.433333 1.533333 0.697674
3 3 1.7 5.7 2.466667 3.230000 0.405405
4 2 6.9 4.9 5.900000 16.905000 0.169492
5 1 3.9 19.0 22.900000 74.100000 0.043668
6 9 2.3 8.3 1.177778 2.121111 0.849057
7 5 8.5 3.1 2.320000 5.270000 0.431034
8 18 6.7 7.0 0.761111 2.605556 1.313869
9 10 5.6 6.4 1.200000 3.584000 0.833333
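For comparison, the same three columns can also be added with plain vectorized assignments and no eval at all; a minimal sketch, using the question's column names:
# each statement computes a whole column at once: no iterrows loop,
# no chained indexing, so no SettingWithCopyWarning
df['(a+b)/n'] = (df['a'] + df['b']) / df['n']
df['(a*b)/n'] = (df['a'] * df['b']) / df['n']
df['((a+b)/n)**-1'] = 1 / df['(a+b)/n']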
I have to consider the nth row and check whether each of rows n+1 to n+3 is within the range (nth row value) - 0.5 to (nth row value) + 0.5, then AND (&) the results of the 3 checks.
A result
0 1.1 1 # 1.2 1.3 and 1.5 are in range of 0.6 to 1.6, ( 1 & 1 & 1)
1 1.2 0 # 1.3 and 1.5 are in range of 0.7 to 1.7, but not 2, hence ( 1 & 1 & 0)
2 1.3 0 # 1.5 and 1 are in range of 0.8 to 1.8, but not 2 ( 1 & 0 & 1)
3 1.5
4 2.0
5 1.0
6 2.5
7 1.8
8 4.0
9 4.2
10 4.5
11 3.9
df = pd.DataFrame({
    'A': [1.1, 1.2, 1.3, 1.5, 2.0, 1.0, 2.5, 1.8, 4.0, 4.2, 4.5, 3.9]
})
I have done some research on the site, but couldn't find the exact syntax. I tried using the rolling function to take 3 rows, the between function to check the range, and then ANDing the results. Could you please help?
s = pd.Series([1, 2, 3, 4])
s.rolling(2).between(s-1,s+1)
I get the error:
AttributeError: 'Rolling' object has no attribute 'between'
You can also achieve the result without using rolling() while still using .between(), as follows:
df['result'] = (
(df['A'].shift(-1).between(df['A'] - 0.5, df['A'] + 0.5)) &
(df['A'].shift(-2).between(df['A'] - 0.5, df['A'] + 0.5)) &
(df['A'].shift(-3).between(df['A'] - 0.5, df['A'] + 0.5))
).astype(int)
Result:
print(df)
A result
0 1.1 1
1 1.2 0
2 1.3 0
3 1.5 0
4 2.0 0
5 1.0 0
6 2.5 0
7 1.8 0
8 4.0 1
9 4.2 0
10 4.5 0
11 3.9 0
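If the number of look-ahead rows may vary, the three chained conditions can be folded into a loop; a sketch under the same semantics (in_range_next_k is a hypothetical helper name):
import functools
import pandas as pd

def in_range_next_k(s: pd.Series, k: int = 3, tol: float = 0.5) -> pd.Series:
    # compare each of the next k values against the current row's +/- tol band
    checks = [s.shift(-i).between(s - tol, s + tol) for i in range(1, k + 1)]
    # AND the k boolean Series together
    return functools.reduce(lambda a, b: a & b, checks).astype(int)

df['result'] = in_range_next_k(df['A'])  # k=3 reproduces the result above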
Rolling windows tend to be quite slow in pandas. One quick alternative is to generate a dataframe holding the window values per row:
df_temp = pd.concat([df['A'].shift(i) for i in range(-1, 2)], axis=1)
df_temp
A A A
0 1.2 1.1 NaN
1 1.3 1.2 1.1
2 1.5 1.3 1.2
3 2.0 1.5 1.3
4 1.0 2.0 1.5
5 2.5 1.0 2.0
6 1.8 2.5 1.0
7 4.0 1.8 2.5
8 4.2 4.0 1.8
9 4.5 4.2 4.0
10 3.9 4.5 4.2
11 NaN 3.9 4.5
Then you can check per row if the value is in the desired range:
df['result'] = df_temp.apply(lambda x: (x - x.iloc[0]).between(-0.5, 0.5), axis=1).all(axis=1).astype(int)
A result
0 1.1 0
1 1.2 1
2 1.3 1
3 1.5 0
4 2.0 0
5 1.0 0
6 2.5 0
7 1.8 0
8 4.0 0
9 4.2 1
10 4.5 0
11 3.9 0
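Note that this window spans shifts -1, 0 and 1, so each row is compared against its immediate neighbours rather than against rows n+1 to n+3 as the question asks. A sketch of the same concat idea with the question's exact semantics (it reproduces the first answer's result column):
# row i of df_next holds the three look-ahead values A[i+1], A[i+2], A[i+3]
df_next = pd.concat([df['A'].shift(i) for i in range(-3, 0)], axis=1)
# 1 where all three look-ahead values are within 0.5 of A[i]; NaN rows give 0
df['result'] = df_next.sub(df['A'], axis=0).abs().le(0.5).all(axis=1).astype(int)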
I have a pandas dataframe where the first header row has multiple entries and the second header row has repeating column names.
A B C
Date open r close open r close open r close
2000-07-03 19.7 5 17.1 66.26 4 6.22 23.26 1 9.9
2000-07-05 49.8 2 8.3 78.81 6 4.34 39.81 5 5.1
2000-07-15 89.5 3 4.1 43.45 7 2.45 29.3 8 1.2
2000-08-13 74.7 6 7.4 34.26 8 6.4 72.26 9 5.4
2000-08-25 39.84 1 8.4 95.43 3 4.3 69.81 0 5.2
2000-08-28 61.8 4 4.2 43.81 1 2.2 129.81 6 1.3
2000-09-11 82.79 7 7.4 66.26 1 6.5 72.25 6 5.6
2000-09-16 64.8 8 8.7 73.45 5 4.7 69.45 4 5.4
2000-09-22 58.5 9 3.3 13.81 8 2.9 777.8 8 1.4
I want to extract the data for the 7th month of 2000 and find out which of A, B, or C has the lowest (open - close).
MY PLAN:
s=data.stack(level=0)
values = s[s.index.get_level_values(1)]['open', 'close'].reset_index()
values['Date'] = pd.to_datetime(values['Date'])
start_date = 2000-07-01
end_date = 2000-08-01
mask = (data['date'] > start_date) & (data['date'] <= end_date)
df = data.loc[mask]
df['Val_Diff'] = df['open'] - df['close']
print(df['Val_Diff'].max())
I get the error
KeyError: "None of [Index are in the [columns]"
Why is the MultiIndex a problem for this code?
I think it's caused by the unnamed columns in the MultiIndex when stack reshapes the frame vertically.
Process flow:
Flatten the multi-index column names.
Reshape from wide to long using the wide_to_long function.
Convert the date column to datetime format for conditional extraction.
import pandas as pd
import numpy as np
import io
import datetime
data = '''
Date open r close open r close open r close
2000-07-03 19.7 5 17.1 66.26 4 6.22 23.26 1 9.9
2000-07-05 49.8 2 8.3 78.81 6 4.34 39.81 5 5.1
2000-07-15 89.5 3 4.1 43.45 7 2.45 29.3 8 1.2
2000-08-13 74.7 6 7.4 34.26 8 6.4 72.26 9 5.4
2000-08-25 39.84 1 8.4 95.43 3 4.3 69.81 0 5.2
2000-08-28 61.8 4 4.2 43.81 1 2.2 129.81 6 1.3
2000-09-11 82.79 7 7.4 66.26 1 6.5 72.25 6 5.6
2000-09-16 64.8 8 8.7 73.45 5 4.7 69.45 4 5.4
2000-09-22 58.5 9 3.3 13.81 8 2.9 777.8 8 1.4
'''
data = pd.read_csv(io.StringIO(data), sep=r'\s+')
idx = pd.MultiIndex.from_arrays([['','A','A','A','B','B','B','C','C','C'], ['Date','open','r','close','open','r','close','open','r','close']])
data.columns = idx
new_cols = [k[1]+'_'+k[0] for k in data.columns[1:]]
new_cols.insert(0, 'Date')
data.columns = new_cols
data = pd.wide_to_long(data,['open','r','close'], i='Date', j='item', sep='_', suffix='\\w+')
data.reset_index(inplace=True)
data['Date'] = pd.to_datetime(data['Date'])
start_date = datetime.datetime(2000,7,1)
end_date = datetime.datetime(2000,8,1)
mask = (data.Date > start_date) & (data.Date <= end_date)
data = data.loc[mask]
data
Date item open r close
0 2000-07-03 A 19.70 5 17.10
1 2000-07-05 A 49.80 2 8.30
2 2000-07-15 A 89.50 3 4.10
9 2000-07-03 B 66.26 4 6.22
10 2000-07-05 B 78.81 6 4.34
11 2000-07-15 B 43.45 7 2.45
18 2000-07-03 C 23.26 1 9.90
19 2000-07-05 C 39.81 5 5.10
20 2000-07-15 C 29.30 8 1.20
data['Val_Diff'] = data['open'] - data['close']
print(data['Val_Diff'].max())
85.4
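Since the question asks for the lowest (open - close) rather than the highest, swapping max for min/idxmin on the same masked frame gives that; a sketch:
print(data['Val_Diff'].min())  # 2.6
print(data.loc[data['Val_Diff'].idxmin(), ['Date', 'item']])  # 2000-07-03, item A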
Using pandas, I'd like to use groupby with an aggregate function, e.g. mean, and then put the results back into the original dataframe, but in the next group rather than in the group itself. How can I do this in a vectorized way?
I have a pandas dataframe like this:
data = {'Group': ['A','A','B','B','B','B', 'C','C', 'D','D'],
'Value': [1.1,1.3,9.1,9.2,9.5,9.4,6.2,6.4,2.2,2.3]
}
df = pd.DataFrame(data, columns = ['Group','Value'])
print (df)
Group Value
0 A 1.1
1 A 1.3
2 B 9.1
3 B 9.2
4 B 9.5
5 B 9.4
6 C 6.2
7 C 6.4
8 D 2.2
9 D 2.3
I'd like to get this, where each group has the mean value of the previous group:
Group Value
0 A NaN
1 A NaN
2 B 1.2
3 B 1.2
4 B 1.2
5 B 1.2
6 C 9.3
7 C 9.3
8 D 6.3
9 D 6.3
I tried this, but it is missing the shift to the next group:
df.groupby('Group')['Value'].transform('mean')
Easy, use map on a groupby result:
df['Value'] = df['Group'].map(df.groupby('Group')['Value'].mean().shift())
df
Group Value
0 A NaN
1 A NaN
2 B 1.2
3 B 1.2
4 B 1.2
5 B 1.2
6 C 9.3
7 C 9.3
8 D 6.3
9 D 6.3
How It Works
Get the mean
df.groupby('Group')['Value'].mean()
Group
A 1.20
B 9.30
C 6.30
D 2.25
Name: Value, dtype: float64
Shift it down by 1
df.groupby('Group')['Value'].mean().shift()
Group
A NaN
B 1.2
C 9.3
D 6.3
Name: Value, dtype: float64
Map it back.
df['Group'].map(df.groupby('Group')['Value'].mean().shift())
0 NaN
1 NaN
2 1.2
3 1.2
4 1.2
5 1.2
6 9.3
7 9.3
8 6.3
9 6.3
Name: Group, dtype: float64
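If you would rather keep the original Value column intact, assign the mapped result to a new column instead (prev_mean is just an illustrative name):
# same mapping as above, stored alongside the original values
df['prev_mean'] = df['Group'].map(df.groupby('Group')['Value'].mean().shift())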
You can calculate the aggregated GroupBy.mean of each group, shift it with pd.Series.shift, and take advantage of pandas index alignment:
df.set_index('Group').assign(value=df.groupby('Group')['Value'].mean().shift()).reset_index()
Group Value value
0 A 1.1 NaN
1 A 1.3 NaN
2 B 9.1 1.2
3 B 9.2 1.2
4 B 9.5 1.2
5 B 9.4 1.2
6 C 6.2 9.3
7 C 6.4 9.3
8 D 2.2 6.3
9 D 2.3 6.3
I am trying to calculate rolling averages within groups. For this task I want a rolling average of the rows above, so I thought the easiest way would be to use shift() and then do rolling(). The problem is that shift() shifts in data from the previous group, which makes the first row of groups 2 and 3 incorrect. Column 'ma' should have NaN in rows 3 and 6. How can I achieve this?
import pandas as pd
df = pd.DataFrame(
{"Group": [1, 2, 3, 1, 2, 3, 1, 2, 3],
"Value": [2.5, 2.9, 1.6, 9.1, 5.7, 8.2, 4.9, 3.1, 7.5]
})
df = df.sort_values(['Group'])
df.reset_index(inplace=True)
df['ma'] = df.groupby('Group', as_index=False)['Value'].shift(1).rolling(3, min_periods=1).mean()
print(df)
I get this:
index Group Value ma
0 0 1 2.5 NaN
1 3 1 9.1 2.50
2 6 1 4.9 5.80
3 1 2 2.9 5.80
4 4 2 5.7 6.00
5 7 2 3.1 4.30
6 2 3 1.6 4.30
7 5 3 8.2 3.65
8 8 3 7.5 4.90
I tried the answers from a couple of similar questions, but nothing seems to work.
If I understand the question correctly, then the solution you require can be achieved in 2 steps using the following:
df['sa'] = df.groupby('Group', as_index=False)['Value'].transform(lambda x: x.shift(1))
df['ma'] = df.groupby('Group', as_index=False)['sa'].transform(lambda x: x.rolling(3, min_periods=1).mean())
I get the output below, where 'ma' is the desired column:
index Group Value sa ma
0 0 1 2.5 NaN NaN
1 3 1 9.1 2.5 2.5
2 6 1 4.9 9.1 5.8
3 1 2 2.9 NaN NaN
4 4 2 5.7 2.9 2.9
5 7 2 3.1 5.7 4.3
6 2 3 1.6 NaN NaN
7 5 3 8.2 1.6 1.6
8 8 3 7.5 8.2 4.9
Edit: Example with one groupby
def shift_ma(x):
    return x.shift(1).rolling(3, min_periods=1).mean()

df['ma'] = df.groupby('Group', as_index=False)['Value'].apply(shift_ma).reset_index(drop=True)
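A hedged alternative is transform, which hands each group's values to the function and aligns the result back to the original index, so the reset_index step is unnecessary:
df['ma'] = df.groupby('Group')['Value'].transform(
    lambda x: x.shift(1).rolling(3, min_periods=1).mean()
)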
I need to interpolate a multi-index dataframe. For example, this is the main dataframe:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
I need to find the result for:
1.3 1.7 1.55
What I've been doing so far is appending a pd.Series with NaN for each index individually and interpolating. As you can see, this seems like a VERY inefficient way to do it. I would be happy if someone could enlighten me.
P.S.
I spent some time looking over SO, and if the answer is in there, I missed it:
Fill multi-index Pandas DataFrame with interpolation
Resampling Within a Pandas MultiIndex
pandas multiindex dataframe, ND interpolation for missing values
Algorithm:
stage 1:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
1.3 1 1 6.3
1.3 1 2 9.3
1.3 2 1 8.3
1.3 2 2 11.3
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
stage 2:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
1.3 1 1 6.3
1.3 1 2 9.3
1.3 1.7 1 7.7
1.3 1.7 2 10.7
1.3 2 1 8.3
1.3 2 2 11.3
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
stage 3:
a b c result
1 1 1 6
1 1 2 9
1 2 1 8
1 2 2 11
1.3 1 1 6.3
1.3 1 2 9.3
1.3 1.7 1 7.7
1.3 1.7 1.55 9.35
1.3 1.7 2 10.7
1.3 2 1 8.3
1.3 2 2 11.3
2 1 1 7
2 1 2 10
2 2 1 9
2 2 2 12
You can use scipy.interpolate.LinearNDInterpolator to do what you want. If the dataframe has a MultiIndex built from the columns 'a', 'b' and 'c', then:
from scipy.interpolate import LinearNDInterpolator as lNDI
print (lNDI(points=df.index.to_frame().values, values=df.result.values)([1.3, 1.7, 1.55]))
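If you start instead from the flat table shown in the question (ordinary columns a, b, c and result), the MultiIndex can be built first; a minimal sketch, where df_flat stands for the question's table:
# move the coordinate columns into a MultiIndex, leaving 'result' as data
df = df_flat.set_index(['a', 'b', 'c'])
print(lNDI(points=df.index.to_frame().values, values=df.result.values)([1.3, 1.7, 1.55]))
# -> [9.35], matching stage 3 of the algorithm above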
Now, if you have a dataframe whose index contains all the (a, b, c) tuples you want to calculate, you can do, for example:
def pd_interpolate_MI(df_input, df_toInterpolate):
    from scipy.interpolate import LinearNDInterpolator as lNDI
    # create the interpolation function from the known points
    func_interp = lNDI(points=df_input.index.to_frame().values, values=df_input.result.values)
    # calculate the values for the unknown index
    df_toInterpolate['result'] = func_interp(df_toInterpolate.index.to_frame().values)
    # return the combined dataframe with the new values
    return pd.concat([df_input, df_toInterpolate]).sort_index()
Then, for example, with your df and
df_toI = pd.DataFrame(index=pd.MultiIndex.from_tuples([(1.3, 1.7, 1.55), (1.7, 1.4, 1.9)], names=df.index.names))
you get:
print (pd_interpolate_MI(df, df_toI))
result
a b c
1.0 1.0 1.00 6.00
2.00 9.00
2.0 1.00 8.00
2.00 11.00
1.3 1.7 1.55 9.35
1.7 1.4 1.90 10.20
2.0 1.0 1.00 7.00
2.00 10.00
2.0 1.00 9.00
2.00 12.00
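One caveat: LinearNDInterpolator only interpolates inside the convex hull of the known points; queries outside it return NaN, its default fill_value. A quick check, rebuilding the interpolator as in the answer:
from scipy.interpolate import LinearNDInterpolator as lNDI
func_interp = lNDI(points=df.index.to_frame().values, values=df.result.values)
print(func_interp([0.5, 1.0, 1.0]))  # -> [nan]: (0.5, 1, 1) lies outside the sampled cube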