I'm trying to get the expanding mean of the 3 largest values:
import pandas as pd
import numpy as np
np.random.seed(seed=10)
df = pd.DataFrame({'ID': ['foo', 'bar'] * 10,
                   'ORDER': np.arange(20),
                   'VAL': np.random.randn(20)})
df = df.sort_values(by=['ID', 'ORDER'])  # df.sort(columns=...) in older pandas
I have tried the expanding_apply function:
pd.expanding_apply(df['VAL'], lambda x: np.mean(np.sort(np.array(x))[-3:]))
It works, but over all IDs at once; I need it per ID, so I tried something with groupby, with no success.
I have tried:
df['AVG_MAX3'] = df.groupby('ID')['VAL'].apply(pd.expanding_apply(lambda x: np.mean(np.sort(np.array(x))[-3:])))
The expanding mean has to restart for each ID.
How can I do that? Any suggestions?
Desired output:
ID ORDER VAL exp_mean
bar 1 0.715278974 0.715278974
bar 3 -0.00838385 0.353447562
bar 5 -0.720085561 -0.004396812
bar 7 0.108548526 0.27181455
bar 9 -0.174600211 0.27181455
bar 11 1.203037374 0.675621625
bar 13 1.028274078 0.982196809
bar 15 0.445137613 0.982196809
bar 17 0.135136878 0.982196809
bar 19 -1.079804886 0.982196809
foo 0 1.331586504 1.331586504
foo 2 -1.545400292 -0.106906894
foo 4 0.621335974 0.135840729
foo 6 0.265511586 0.739478021
foo 8 0.004291431 0.739478021
foo 10 0.43302619 0.795316223
foo 12 -0.965065671 0.795316223
foo 14 0.22863013 0.795316223
foo 16 -1.136602212 0.795316223
foo 18 1.484537002 1.145819827
You're close, but you're missing the first argument to pd.expanding_apply when you call it inside the groupby operation. I pulled your expanding mean into a separate function to make it a little clearer.
In [158]: def expanding_max_mean(x, size=3):
     ...:     return np.mean(np.sort(np.array(x))[-size:])

In [158]: df['exp_mean'] = df.groupby('ID')['VAL'].apply(lambda x: pd.expanding_apply(x, expanding_max_mean))
In [159]: df
Out[159]:
ID ORDER VAL exp_mean
1 bar 1 0.715279 0.715279
3 bar 3 -0.008384 0.353448
5 bar 5 -0.720086 -0.004397
7 bar 7 0.108549 0.271815
9 bar 9 -0.174600 0.271815
11 bar 11 1.203037 0.675622
13 bar 13 1.028274 0.982197
15 bar 15 0.445138 0.982197
17 bar 17 0.135137 0.982197
19 bar 19 -1.079805 0.982197
0 foo 0 1.331587 1.331587
2 foo 2 -1.545400 -0.106907
4 foo 4 0.621336 0.135841
6 foo 6 0.265512 0.739478
8 foo 8 0.004291 0.739478
10 foo 10 0.433026 0.795316
12 foo 12 -0.965066 0.795316
14 foo 14 0.228630 0.795316
16 foo 16 -1.136602 0.795316
18 foo 18 1.484537 1.145820
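Note that pd.expanding_apply was deprecated in favour of the .expanding() method and removed in later pandas releases. A minimal sketch of the same per-ID computation on a recent pandas version (an assumption on my part, not part of the original answer):
import numpy as np
import pandas as pd

def expanding_max_mean(x, size=3):
    # mean of the (up to) `size` largest values seen so far
    return np.mean(np.sort(np.asarray(x))[-size:])

# .expanding() restarts for every group; dropping the group index level
# aligns the result back with the original rows
df['exp_mean'] = (df.groupby('ID')['VAL']
                    .expanding()
                    .apply(expanding_max_mean, raw=True)
                    .reset_index(level=0, drop=True))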
I have the following Series:
Month
1 -0.075844
2 -0.089111
3 0.042705
4 0.002147
5 -0.010528
6 0.109443
7 0.198334
8 0.209830
9 0.075139
10 -0.062405
11 -0.211774
12 -0.109167
1 -0.075844
2 -0.089111
3 0.042705
4 0.002147
5 -0.010528
6 0.109443
7 0.198334
8 0.209830
9 0.075139
10 -0.062405
11 -0.211774
12 -0.109167
Name: Passengers, dtype: float64
As you can see, the index runs 1-12 twice; instead, I would like the index to run 1-24. The problem shows up when I plot it:
plt.figure(figsize=(15,5))
plt.plot(esta2,color='orange')
plt.show()
I would like to see a continuous line from 1 to 24.
esta2 = esta2.reset_index() will get you 0-23. If you need 1-24 then you could just do esta2.index = np.arange(1, len(esta2) + 1).
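A small sketch of both options, assuming esta2 is the Series shown above:
import numpy as np

esta2 = esta2.reset_index(drop=True)         # index becomes 0-23 (drop=True keeps it a Series)
esta2.index = np.arange(1, len(esta2) + 1)   # or reassign directly to get 1-24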
Quite simply:
df.index = [i for i in range(1,len(df.index)+1)]
df.index.name = 'Month'
print(df)
Val
Month
1 -0.075844
2 -0.089111
3 0.042705
4 0.002147
5 -0.010528
6 0.109443
7 0.198334
8 0.209830
9 0.075139
10 -0.062405
11 -0.211774
12 -0.109167
13 -0.075844
14 -0.089111
15 0.042705
16 0.002147
17 -0.010528
18 0.109443
19 0.198334
20 0.209830
21 0.075139
22 -0.062405
23 -0.211774
24 -0.109167
Just reassign the index:
df.index = pd.Index(range(1, len(df) + 1), name='Month')
I have a pandas Series like this:
0 $233.94
1 $214.14
2 $208.74
3 $232.14
4 $187.15
5 $262.73
6 $176.35
7 $266.33
8 $174.55
9 $221.34
10 $199.74
11 $228.54
12 $228.54
13 $196.15
14 $269.93
15 $257.33
16 $246.53
17 $226.74
I want to get rid of the dollar sign so I can convert the values to numeric. I wrote a function to do this:
def strip_dollar(series):
    for number in dollar:
        if number[0] == '$':
            number[0].replace('$', ' ')
    return dollar
This function returns the original series untouched; nothing changes, and I don't know why.
Any ideas about how to get this right?
Thanks in advance
Use lstrip and convert to floats:
s = s.str.lstrip('$').astype(float)
print (s)
0 233.94
1 214.14
2 208.74
3 232.14
4 187.15
5 262.73
6 176.35
7 266.33
8 174.55
9 221.34
10 199.74
11 228.54
12 228.54
13 196.15
14 269.93
15 257.33
16 246.53
17 226.74
Name: A, dtype: float64
Setup:
s = pd.Series(['$233.94', '$214.14', '$208.74', '$232.14', '$187.15', '$262.73', '$176.35', '$266.33', '$174.55', '$221.34', '$199.74', '$228.54', '$228.54', '$196.15', '$269.93', '$257.33', '$246.53', '$226.74'])
print (s)
0 $233.94
1 $214.14
2 $208.74
3 $232.14
4 $187.15
5 $262.73
6 $176.35
7 $266.33
8 $174.55
9 $221.34
10 $199.74
11 $228.54
12 $228.54
13 $196.15
14 $269.93
15 $257.33
16 $246.53
17 $226.74
dtype: object
Using str.replace("$", "")
Ex:
import pandas as pd
df = pd.DataFrame({"Col" : ["$233.94", "$214.14"]})
df["Col"] = pd.to_numeric(df["Col"].str.replace("$", ""))
print(df)
Output:
Col
0 233.94
1 214.14
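One version-dependent caveat: older pandas treated the pat argument of str.replace as a regular expression by default (the default became regex=False in pandas 2.0), and '$' is a regex metacharacter. Passing regex=False makes the literal replacement explicit; a hedged variant of the line above, assuming a pandas version that accepts the regex keyword:
df["Col"] = pd.to_numeric(df["Col"].str.replace("$", "", regex=False))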
CODE:
ser = pd.Series(data=['$123', '$234', '$232', '$6767'])
def rmDollar(x):
    return x[1:]
serWithoutDollar = ser.apply(rmDollar)
serWithoutDollar
OUTPUT:
0 123
1 234
2 232
3 6767
dtype: object
Hope it helps!
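Note that the slicing approach leaves the values as strings (dtype: object). A small follow-up step, assuming you want actual numbers, is to run the result through pd.to_numeric:
serWithoutDollar = pd.to_numeric(serWithoutDollar)  # dtype becomes int64 for these values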
I am trying to do a rolling sum across partitioned data based on a moving 2 business day window. It feels like it should be both easy and widely used, but the solution is beyond me.
#generate sample data
import pandas as pd
import numpy as np
import datetime
vals = [-4,17,-4,-16,2,20,3,10,-17,-8,-21,2,0,-11,16,-24,-10,-21,5,12,14,9,-15,-15]
grp = ['X']*6 + ['Y'] * 6 + ['X']*6 + ['Y'] * 6
typ = ['foo']*12+['bar']*12
dat = ['19/01/18','19/01/18','22/01/18','22/01/18','23/01/18','24/01/18'] * 4
#create dataframe with sample data
df = pd.DataFrame({'group': grp,'type':typ,'value':vals,'date':dat})
df.date = pd.to_datetime(df.date)
df.head(12)
gives the following (note this is just the first 12 rows):
date group type value
0 19/01/2018 X foo -4
1 19/01/2018 X foo 17
2 22/01/2018 X foo -4
3 22/01/2018 X foo -16
4 23/01/2018 X foo 2
5 24/01/2018 X foo 20
6 19/01/2018 Y foo 3
7 19/01/2018 Y foo 10
8 22/01/2018 Y foo -17
9 22/01/2018 Y foo -8
10 23/01/2018 Y foo -21
11 24/01/2018 Y foo 2
The desired results are (all rows shown here):
date group type 2BD Sum
1 19/01/2018 X foo 13
2 22/01/2018 X foo -7
3 23/01/2018 X foo -18
4 24/01/2018 X foo 22
5 19/01/2018 Y foo 13
6 22/01/2018 Y foo -12
7 23/01/2018 Y foo -46
8 24/01/2018 Y foo -19
9 19/01/2018 X bar -11
10 22/01/2018 X bar -19
11 23/01/2018 X bar -18
12 24/01/2018 X bar -31
13 19/01/2018 Y bar 17
14 22/01/2018 Y bar 40
15 23/01/2018 Y bar 8
16 24/01/2018 Y bar -30
I have viewed a similar question and tried
df.groupby(['group','type']).rolling('2d',on='date').agg({'value':'sum'}
).reset_index().groupby(['group','type','date']).agg({'value':'sum'}).reset_index()
which would work fine if 'value' were always positive, but that is not the case here. I have tried many other approaches that raised errors, which I can list if useful. Can anyone help?
I expected the following to work:
g = lambda ts: ts.rolling('2B', on='date')['value'].sum()
df.groupby(['group', 'type']).apply(g)
However, I get an error as a business day is not a fixed frequency.
This brings me to suggest the following solution, which is a lot uglier:
value_per_bday = lambda df: df.resample('B', on='date')['value'].sum()
df = df.groupby(['group', 'type']).apply(value_per_bday).stack()
value_2_bdays = lambda x: x.rolling(2, min_periods=1).sum()
df = df.groupby(axis=0, level=['group', 'type']).apply(value_2_bdays)
It may read better wrapped in a function; your pick.
def resample_and_sum(x):
    x = x.resample('B', on='date')['value'].sum()
    x = x.rolling(2, min_periods=1).sum()
    return x
df = df.groupby(['group', 'type']).apply(resample_and_sum).stack()
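To get back to a flat table like the desired output, the stacked Series can be renamed and its index reset. A minimal sketch, assuming df is still the original sample frame; note that the name of the innermost (date) index level may differ across pandas versions:
result = df.groupby(['group', 'type']).apply(resample_and_sum).stack()
flat = result.rename('2BD Sum').reset_index()
# expected columns: group, type, the business-day date level and '2BD Sum'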
I am trying to create a pandas DataFrame that attaches label values to 2D DataFrames. This is what I have done so far:
I am reading CSV files using pd.read_csv and appending them to a list; for the purpose of this question, let's consider the following code:
import numpy as np
import pandas as pd
raw_sample = []
labels = [1,1,1,2,2,2]
samples = np.random.randn(6, 5, 4)
for contents in range(samples.shape[0]):
    raw_sample.append(pd.DataFrame(samples[contents]))
Then I wrapped raw_sample with df = pd.DataFrame(raw_sample) and added the labels to df by doing the following:
df = df.set_index([df.index, labels])
df.index = df.index.set_names('index', level=0)
df.index = df.index.set_names('labels', level=1)
I tried printing this and I got
0
index labels
0 1 0 1 2 3
0 0...
1 1 0 1 2 3
0 0...
2 1 0 1 2 3
0 1...
3 2 0 1 2 3
0 -0...
4 2 0 1 2 3
0 0...
5 2 0 1 2 3
0 -0...
I have also tried printing df[0], and I still got the same thing.
I wanted to know whether it is in the form of:
index labels 0
0 1 1 2 3 4 5 6 7
3 5 6 7 9 5 4
3 4 5 6 7 8 9
1 1 4 3 2 4 5 6 7
3 5 6 7 4 5 6
2 3 4 3 4 5 3
...
I know that a DataFrame cannot take a 2D array as a value, so the alternative was to use pd.Panel. For this I converted all the contents of raw_sample to NumPy arrays, converted raw_sample itself to a NumPy array, and did the following:
p1 = pd.Panel(samples, items=map(str, labels))
but when I print this, I get
<class 'pandas.core.panel.Panel'>
Dimensions: 6 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: 1 to 2
Major_axis axis: 0 to 4
Minor_axis axis: 0 to 3
Looking at the Items, it looks like all the common values are grouped together.
I am not sure what to do at this point. Help!!
Update
Inputs:
labels = [1,1,1,2,2,2]
samples = [5x4 pd.DataFrame, 5x4 pd.DataFrame, 5x4 pd.DataFrame, 5x4 pd.DataFrame, 5x4 pd.DataFrame, 5x4 pd.DataFrame]
Desired Output:
index labels samples
0 1 1 2 3 4 5 6 7
3 5 6 7 9 5 4
3 4 5 6 7 8 9
1 1 4 3 2 4 5 6 7
3 5 6 7 4 5 6
2 3 4 3 4 5 3
...
If you select with non-unique items, you get another Panel:
np.random.seed(10)
labels = [1,1,1,2,2,2]
samples = np.random.randn(6, 5, 4)
p1 = pd.Panel(samples, items=map(str, labels))
print (p1)
<class 'pandas.core.panel.Panel'>
Dimensions: 6 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: 1 to 2
Major_axis axis: 0 to 4
Minor_axis axis: 0 to 3
print (p1['1'])
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: 1 to 1
Major_axis axis: 0 to 4
Minor_axis axis: 0 to 3
print (p1.to_frame())
1 1 1 2 2 2
major minor
0 0 1.331587 1.331587 1.331587 -0.232182 -0.232182 -0.232182
1 0.715279 0.715279 0.715279 -0.501729 -0.501729 -0.501729
2 -1.545400 -1.545400 -1.545400 1.128785 1.128785 1.128785
3 -0.008384 -0.008384 -0.008384 -0.697810 -0.697810 -0.697810
1 0 0.621336 0.621336 0.621336 -0.081122 -0.081122 -0.081122
1 -0.720086 -0.720086 -0.720086 -0.529296 -0.529296 -0.529296
2 0.265512 0.265512 0.265512 1.046183 1.046183 1.046183
3 0.108549 0.108549 0.108549 -1.418556 -1.418556 -1.418556
2 0 0.004291 0.004291 0.004291 -0.362499 -0.362499 -0.362499
1 -0.174600 -0.174600 -0.174600 -0.121906 -0.121906 -0.121906
2 0.433026 0.433026 0.433026 0.319356 0.319356 0.319356
3 1.203037 1.203037 1.203037 0.460903 0.460903 0.460903
3 0 -0.965066 -0.965066 -0.965066 -0.215790 -0.215790 -0.215790
1 1.028274 1.028274 1.028274 0.989072 0.989072 0.989072
2 0.228630 0.228630 0.228630 0.314754 0.314754 0.314754
3 0.445138 0.445138 0.445138 2.467651 2.467651 2.467651
4 0 -1.136602 -1.136602 -1.136602 -1.508321 -1.508321 -1.508321
1 0.135137 0.135137 0.135137 0.620601 0.620601 0.620601
2 1.484537 1.484537 1.484537 -1.045133 -1.045133 -1.045133
3 -1.079805 -1.079805 -1.079805 -0.798009 -0.798009 -0.798009
But if you have unique items, you get a DataFrame:
np.random.seed(10)
labels = list('abcdef')
samples = np.random.randn(6, 5, 4)
p1 = pd.Panel(samples, items=labels)
print (p1)
<class 'pandas.core.panel.Panel'>
Dimensions: 6 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: a to f
Major_axis axis: 0 to 4
Minor_axis axis: 0 to 3
print (p1['a'])
0 1 2 3
0 1.331587 0.715279 -1.545400 -0.008384
1 0.621336 -0.720086 0.265512 0.108549
2 0.004291 -0.174600 0.433026 1.203037
3 -0.965066 1.028274 0.228630 0.445138
4 -1.136602 0.135137 1.484537 -1.079805
print (p1.to_frame())
a b c d e f
major minor
0 0 1.331587 -1.977728 0.660232 -0.232182 1.985085 0.117476
1 0.715279 -1.743372 -0.350872 -0.501729 1.744814 -1.907457
2 -1.545400 0.266070 -0.939433 1.128785 -1.856185 -0.922909
3 -0.008384 2.384967 -0.489337 -0.697810 -0.222774 0.469751
1 0 0.621336 1.123691 -0.804591 -0.081122 -0.065848 -0.144367
1 -0.720086 1.672622 -0.212698 -0.529296 -2.131712 -0.400138
2 0.265512 0.099149 -0.339140 1.046183 -0.048831 -0.295984
3 0.108549 1.397996 0.312170 -1.418556 0.393341 0.848209
2 0 0.004291 -0.271248 0.565153 -0.362499 0.217265 0.706830
1 -0.174600 0.613204 -0.147420 -0.121906 -1.994394 -0.787269
2 0.433026 -0.267317 -0.025905 0.319356 1.107708 0.292941
3 1.203037 -0.549309 0.289094 0.460903 0.244544 -0.470807
3 0 -0.965066 0.132708 -0.539879 -0.215790 -0.061912 2.404326
1 1.028274 -0.476142 0.708160 0.989072 -0.753893 -0.739357
2 0.228630 1.308473 0.842225 0.314754 0.711959 -0.312829
3 0.445138 0.195013 0.203581 2.467651 0.918269 -0.348882
4 0 -1.136602 0.400210 2.394704 -1.508321 -0.482093 -0.439026
1 0.135137 -0.337632 0.917459 0.620601 0.089588 0.141104
2 1.484537 1.256472 -0.112272 -1.045133 0.826999 0.273049
3 -1.079805 -0.731970 -0.362180 -0.798009 -1.954512 -1.618571
It is the same as a DataFrame with non-unique columns:
samples = np.random.randn(6, 5)
df = pd.DataFrame(samples, columns=list('11122'))
print (df)
1 1 1 2 2
0 0.346338 -0.855797 -0.932463 -2.289259 0.634696
1 0.272794 -0.924357 -1.898270 -0.743083 -1.587480
2 -0.519975 -0.136836 0.530178 -0.730629 2.520821
3 0.137530 -1.232763 0.508548 -0.480384 -1.213064
4 -0.157787 -1.600004 -1.287620 0.384642 -0.568072
5 -0.649427 -0.659585 -0.813359 -1.487412 -0.044206
print (df['1'])
1 1 1
0 0.346338 -0.855797 -0.932463
1 0.272794 -0.924357 -1.898270
2 -0.519975 -0.136836 0.530178
3 0.137530 -1.232763 0.508548
4 -0.157787 -1.600004 -1.287620
5 -0.649427 -0.659585 -0.813359
EDIT:
Also, to create the df from the list you need unique labels (non-unique labels raise an error); use the concat function with the keys parameter, and for a Panel call to_panel:
np.random.seed(100)
raw_sample = []
labels = list('abcdef')
samples = np.random.randn(6, 5, 4)
for contents in range(samples.shape[0]):
    raw_sample.append(pd.DataFrame(samples[contents]))
df = pd.concat(raw_sample, keys=labels)
print (df)
0 1 2 3
a 0 -1.749765 0.342680 1.153036 -0.252436
1 0.981321 0.514219 0.221180 -1.070043
2 -0.189496 0.255001 -0.458027 0.435163
3 -0.583595 0.816847 0.672721 -0.104411
4 -0.531280 1.029733 -0.438136 -1.118318
b 0 1.618982 1.541605 -0.251879 -0.842436
1 0.184519 0.937082 0.731000 1.361556
2 -0.326238 0.055676 0.222400 -1.443217
3 -0.756352 0.816454 0.750445 -0.455947
4 1.189622 -1.690617 -1.356399 -1.232435
c 0 -0.544439 -0.668172 0.007315 -0.612939
1 1.299748 -1.733096 -0.983310 0.357508
2 -1.613579 1.470714 -1.188018 -0.549746
3 -0.940046 -0.827932 0.108863 0.507810
4 -0.862227 1.249470 -0.079611 -0.889731
d 0 -0.881798 0.018639 0.237845 0.013549
1 -1.635529 -1.044210 0.613039 0.736205
2 1.026921 -1.432191 -1.841188 0.366093
3 -0.331777 -0.689218 2.034608 -0.550714
4 0.750453 -1.306992 0.580573 -1.104523
e 0 0.690121 0.686890 -1.566688 0.904974
1 0.778822 0.428233 0.108872 0.028284
2 -0.578826 -1.199451 -1.705952 0.369164
3 1.876573 -0.376903 1.831936 0.003017
4 -0.076023 0.003958 -0.185014 -2.487152
f 0 -1.704651 -1.136261 -2.973315 0.033317
1 -0.248889 -0.450176 0.132428 0.022214
2 0.317368 -0.752414 -1.296392 0.095139
3 -0.423715 -1.185984 -0.365462 -1.271023
4 1.586171 0.693391 -1.958081 -0.134801
p1 = df.to_panel()
print (p1)
<class 'pandas.core.panel.Panel'>
Dimensions: 4 (items) x 6 (major_axis) x 5 (minor_axis)
Items axis: 0 to 3
Major_axis axis: a to f
Minor_axis axis: 0 to 4
EDIT1:
If you need a MultiIndex DataFrame, it is possible to create a helper range for the unique values, use concat, and finally remove the helper level of the MultiIndex:
np.random.seed(100)
raw_sample = []
labels = [1,1,1,2,2,2]
mux = pd.MultiIndex.from_arrays([labels, range(len(labels))])
samples = np.random.randn(6, 5, 4)
for contents in range(samples.shape[0]):
    raw_sample.append(pd.DataFrame(samples[contents]))
df = pd.concat(raw_sample, keys=mux)
df = df.reset_index(level=1, drop=True)
print (df)
0 1 2 3
1 0 -1.749765 0.342680 1.153036 -0.252436
1 0.981321 0.514219 0.221180 -1.070043
2 -0.189496 0.255001 -0.458027 0.435163
3 -0.583595 0.816847 0.672721 -0.104411
4 -0.531280 1.029733 -0.438136 -1.118318
0 1.618982 1.541605 -0.251879 -0.842436
1 0.184519 0.937082 0.731000 1.361556
2 -0.326238 0.055676 0.222400 -1.443217
3 -0.756352 0.816454 0.750445 -0.455947
4 1.189622 -1.690617 -1.356399 -1.232435
0 -0.544439 -0.668172 0.007315 -0.612939
1 1.299748 -1.733096 -0.983310 0.357508
2 -1.613579 1.470714 -1.188018 -0.549746
3 -0.940046 -0.827932 0.108863 0.507810
4 -0.862227 1.249470 -0.079611 -0.889731
2 0 -0.881798 0.018639 0.237845 0.013549
1 -1.635529 -1.044210 0.613039 0.736205
2 1.026921 -1.432191 -1.841188 0.366093
3 -0.331777 -0.689218 2.034608 -0.550714
4 0.750453 -1.306992 0.580573 -1.104523
0 0.690121 0.686890 -1.566688 0.904974
1 0.778822 0.428233 0.108872 0.028284
2 -0.578826 -1.199451 -1.705952 0.369164
3 1.876573 -0.376903 1.831936 0.003017
4 -0.076023 0.003958 -0.185014 -2.487152
0 -1.704651 -1.136261 -2.973315 0.033317
1 -0.248889 -0.450176 0.132428 0.022214
2 0.317368 -0.752414 -1.296392 0.095139
3 -0.423715 -1.185984 -0.365462 -1.271023
4 1.586171 0.693391 -1.958081 -0.134801
But creating a Panel is not possible:
p1 = df.to_panel()
print (p1)
ValueError: Can't convert non-uniquely indexed DataFrame to Panel
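Side note, not part of the original answer: Panel was deprecated in pandas 0.20 and removed in 0.25, so on current pandas the MultiIndex DataFrame built above is the recommended structure. Selecting a single label already returns all of its stacked blocks; a minimal sketch, assuming the df from EDIT1:
print (df.loc[1])        # the three 5x4 blocks labelled 1, stacked into 15 rows x 4 columns
print (df.loc[1].shape)  # (15, 4)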
Let's say I want to construct a dummy variable that is true if a number is between 1 and 10. I can do:
df['numdum'] = df['number'].isin(range(1,11))
Is there a way to do that for a continuous interval? So, create a dummy variable that is true if a number is in a range, allowing for non-integers.
Series objects (including dataframe columns) have a between method:
>>> s = pd.Series(np.linspace(0, 20, 8))
>>> s
0 0.000000
1 2.857143
2 5.714286
3 8.571429
4 11.428571
5 14.285714
6 17.142857
7 20.000000
dtype: float64
>>> s.between(1, 14.5)
0 False
1 True
2 True
3 True
4 True
5 True
6 False
7 False
dtype: bool
This works:
df['numdum'] = (df.number >= 1) & (df.number <= 10)
You could also do the same thing with cut(). No real advantage if there are just two categories:
>>> df['numdum'] = pd.cut( df['number'], [-99,10,99], labels=[1,0] )
number numdum
0 8 1
1 9 1
2 10 1
3 11 0
4 12 0
5 13 0
6 14 0
But it's nice if you have multiple categories:
>>> df['numdum'] = pd.cut( df['number'], [-99,8,10,99], labels=[1,2,3] )
number numdum
0 8 1
1 9 2
2 10 2
3 11 3
4 12 3
5 13 3
6 14 3
Labels can be True and False if that is preferred, or you can omit labels entirely, in which case the resulting categories show the cutoff intervals.
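For example, with boolean labels, or with no labels at all (a small sketch, assuming the same df as above):
>>> df['numdum'] = pd.cut(df['number'], [-99, 10, 99], labels=[True, False])
>>> df['interval'] = pd.cut(df['number'], [-99, 10, 99])   # no labels: Interval categories such as (-99, 10]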