A minute rate time series graph using pandas - python

I have a file which looks like this
Times Code505 Code200 Code404
1543714067 855 86123 1840
1543714077 869 87327 1857
1543714087 882 88522 1883
1543714097 890 89764 1901
1543714107 904 90735 1924
1543714117 914 91963 1956
except it has a lot more data than this.
What I want to do is plot a graph that looks like this.
When I plot my graph, I get something more like this.
What I am doing to get my graph (the second one) is:
data['Times'] = pd.to_datetime(data['Times'], unit='s')
data.set_index('Times', inplace=True)
data.plot()
I know I am missing something to make my graph look like a time series, but I am unsure what I have to pass to pandas to get the graph to look right.
I am collecting the data for a total of an hour, and I collect a record which looks like this
1543714067 855 86123 1840
every 10 seconds.

>>> df
Times Code505 Code200 Code404
0 1543714067 855 86123 1840
1 1543714077 869 87327 1857
2 1543714087 882 88522 1883
3 1543714097 890 89764 1901
4 1543714107 904 90735 1924
5 1543714117 914 91963 1956
>>>
This will calculate the requests per second (RPS) over twenty-second intervals.
Shift the data up two rows and subtract the original DataFrame:
>>> df.shift(-2)
Times Code505 Code200 Code404
0 1.543714e+09 882.0 88522.0 1883.0
1 1.543714e+09 890.0 89764.0 1901.0
2 1.543714e+09 904.0 90735.0 1924.0
3 1.543714e+09 914.0 91963.0 1956.0
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
>>>
>>> deltas = df.shift(-2) - df
>>> deltas
Times Code505 Code200 Code404
0 20.0 27.0 2399.0 43.0
1 20.0 21.0 2437.0 44.0
2 20.0 22.0 2213.0 41.0
3 20.0 24.0 2199.0 55.0
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
>>>
Divide the deltas by twenty (seconds), then re-establish the times.
>>> rates = deltas / 20
>>> rates
Times Code505 Code200 Code404
0 1.0 1.35 119.95 2.15
1 1.0 1.05 121.85 2.20
2 1.0 1.10 110.65 2.05
3 1.0 1.20 109.95 2.75
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
>>> rates['Times'] = df['Times']
>>> rates
Times Code505 Code200 Code404
0 1543714067 1.35 119.95 2.15
1 1543714077 1.05 121.85 2.20
2 1543714087 1.10 110.65 2.05
3 1543714097 1.20 109.95 2.75
4 1543714107 NaN NaN NaN
5 1543714117 NaN NaN NaN
>>>
You can preserve the timestamps throughout the process if you make the Times column the index first.
>>> df
Times Code505 Code200 Code404
0 1543714067 855 86123 1840
1 1543714077 869 87327 1857
2 1543714087 882 88522 1883
3 1543714097 890 89764 1901
4 1543714107 904 90735 1924
5 1543714117 914 91963 1956
>>> df = df.set_index('Times')
>>> df
Code505 Code200 Code404
Times
1543714067 855 86123 1840
1543714077 869 87327 1857
1543714087 882 88522 1883
1543714097 890 89764 1901
1543714107 904 90735 1924
1543714117 914 91963 1956
>>>
>>> deltas = df.shift(-2) - df
>>> rates = deltas / 20
>>> rates
Code505 Code200 Code404
Times
1543714067 1.35 119.95 2.15
1543714077 1.05 121.85 2.20
1543714087 1.10 110.65 2.05
1543714097 1.20 109.95 2.75
1543714107 NaN NaN NaN
1543714117 NaN NaN NaN
>>>
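Putting the answer together end to end, a minimal runnable sketch (column values taken from the sample above; the plot call is commented out so the snippet runs headless):

```python
import pandas as pd

# Sample of the counter data from the question
data = pd.DataFrame({
    'Times':   [1543714067, 1543714077, 1543714087, 1543714097, 1543714107, 1543714117],
    'Code505': [855, 869, 882, 890, 904, 914],
    'Code200': [86123, 87327, 88522, 89764, 90735, 91963],
    'Code404': [1840, 1857, 1883, 1901, 1924, 1956],
})

# Use the epoch timestamps as a DatetimeIndex so the x-axis is a time series
data['Times'] = pd.to_datetime(data['Times'], unit='s')
data = data.set_index('Times')

# The counters are cumulative, so differentiate over 20 s (two 10 s samples)
rates = (data.shift(-2) - data) / 20

# rates.plot()  # each line is now requests per second
print(rates['Code200'].iloc[0])  # 119.95
```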


Apply rolling custom function with pandas

There are a few similar questions on this site, but I couldn't find a solution to my particular question.
I have a dataframe that I want to process with a custom function (the real function has a bit more pre-processing, but the gist is contained in the toy example fun).
import statsmodels.api as sm
import numpy as np
import pandas as pd

mtcars = pd.DataFrame(sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data)

def fun(col1, col2, w1=10, w2=2):
    return np.mean(w1 * col1 + w2 * col2)

# This is the behavior I would expect for the full dataset, currently working
mtcars.apply(lambda x: fun(x.cyl, x.mpg), axis=1)
# This was my approach to do the same with a rolling function
mtcars.rolling(3).apply(lambda x: fun(x.cyl, x.mpg))
The rolling version returns this error:
AttributeError: 'Series' object has no attribute 'cyl'
I figured I don't fully understand how rolling works, since adding a print statement at the beginning of my function shows that fun is not getting the full dataset but an unnamed Series of length 3. What is the right approach to apply this rolling function in pandas?
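The behavior can be made visible with a small illustrative snippet (toy data, not from the question): DataFrame.rolling(...).apply slides over each column independently, so the callback only ever sees one column's window at a time.

```python
import pandas as pd

df = pd.DataFrame({'cyl': [6, 6, 4], 'mpg': [21.0, 21.0, 22.8]})

windows = []
# raw=False passes each window as a Series; the callback must return a scalar
df.rolling(2).apply(lambda s: windows.append(list(s)) or 0, raw=False)

# All 'cyl' windows come first, then all 'mpg' windows: the callback never
# sees both columns together, hence the AttributeError on .cyl
print(windows)
```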
Just in case, I am running
>>> pd.__version__
'1.5.2'
Update
Looks like there is a very similar question here which might partially overlap with what I'm trying to do.
For completeness, here's how I would do this in R with the expected output.
library(dplyr)

fun <- function(col1, col2, w1=10, w2=2) {
  return(mean(w1*col1 + w2*col2))
}

mtcars %>%
  mutate(roll = slider::slide2(.x = cyl,
                               .y = mpg,
                               .f = fun,
                               .before = 1,
                               .after = 1))
mpg cyl disp hp drat wt qsec vs am gear carb roll
Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4 102
Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4 96.53333
Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1 96.8
Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1 101.9333
Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2 105.4667
Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1 107.4
Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4 97.86667
Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2 94.33333
Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2 90.93333
Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4 93.2
Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4 102.2667
Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3 107.6667
Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3 112.6
Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3 108.6
Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4 104
Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4 103.6667
Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4 105
Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1 105
Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2 104.4667
Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1 97.2
Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1 100.6
Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2 101.4667
AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2 109.3333
Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4 111.8
Pontiac Firebird 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2 106.5333
Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1 101.6667
Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2 95.8
Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2 101.4667
Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4 103.9333
Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.50 0 1 5 6 107
Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8 97.4
Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.60 1 1 4 2 96.4
There is no really elegant way to do this. Here is a suggestion:
First, install numpy_ext (use pip install numpy_ext or pip install numpy_ext --user).
Second, you'll need to compute your column separately and join it to your original dataframe:
import statsmodels.api as sm
import pandas as pd
from numpy_ext import rolling_apply as rolling_apply_ext
import numpy as np

mtcars = pd.DataFrame(sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data).reset_index()

def fun(col1, col2, w1=10, w2=2):
    return w1 * col1 + w2 * col2

Col = pd.DataFrame(rolling_apply_ext(fun, 3, mtcars.cyl.values, mtcars.mpg.values)).rename(columns={2: 'rolling'})
mtcars.join(Col["rolling"])
to get:
index mpg cyl disp hp drat wt qsec vs am \
0 Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1
1 Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1
2 Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1
3 Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0
4 Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0
5 Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0
6 Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0
7 Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0
8 Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0
9 Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0
10 Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0
11 Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0
12 Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0
13 Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0
14 Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0
15 Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0
16 Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0
17 Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1
18 Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1
19 Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1
20 Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0
21 Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0
22 AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0
23 Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0
24 Pontiac Firebird 19.2 8 400.0 175 3.08 3.845 17.05 0 0
25 Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1
26 Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1
27 Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1
28 Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1
29 Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.50 0 1
30 Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1
31 Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.60 1 1
gear carb rolling
0 4 4 NaN
1 4 4 NaN
2 4 1 85.6
3 3 1 102.8
4 3 2 117.4
5 3 1 96.2
6 3 4 108.6
7 4 2 88.8
8 4 2 85.6
9 4 4 98.4
10 4 4 95.6
11 3 3 112.8
12 3 3 114.6
13 3 3 110.4
14 3 4 100.8
15 3 4 100.8
16 3 4 109.4
17 4 1 104.8
18 4 2 100.8
19 4 1 107.8
20 3 1 83.0
21 3 2 111.0
22 3 2 110.4
23 3 4 106.6
24 3 2 118.4
25 4 1 94.6
26 5 2 92.0
27 5 2 100.8
28 5 4 111.6
29 5 6 99.4
30 5 8 110.0
31 4 2 82.8
You can use the function below for rolling apply. It might be slower than pandas' built-in rolling in some situations, but it has additional functionality.
The arguments win_size and min_periods work like their pandas counterparts (integer input only). In addition, an after parameter controls the window: it shifts the window so that it also includes observations after the current row.
def roll_apply(df, fn, win_size, min_periods=None, after=None):
    if min_periods is None:
        min_periods = win_size
    else:
        assert min_periods >= 1
    if after is None:
        after = 0
    before = win_size - 1 - after
    i = np.arange(df.shape[0])
    s = np.maximum(i - before, 0)
    # clamp to the last row index so window sizes are counted correctly at the end
    e = np.minimum(i + after, df.shape[0] - 1) + 1
    res = [fn(df.iloc[si:ei]) for si, ei in zip(s, e) if (ei - si) >= min_periods]
    idx = df.index[(e - s) >= min_periods]
    types = {type(ri) for ri in res}
    if len(types) != 1:
        return pd.Series(res, index=idx)
    t = list(types)[0]
    if t == pd.Series:
        return pd.DataFrame(res, index=idx)
    elif t == pd.DataFrame:
        return pd.concat(res, keys=idx)
    else:
        return pd.Series(res, index=idx)

mtcars['roll'] = roll_apply(mtcars, lambda x: fun(x.cyl, x.mpg), win_size=3, min_periods=1, after=1)
index                 mpg  cyl   disp   hp  drat     wt   qsec  vs  am  gear  carb        roll
Mazda RX4            21.0    6  160.0  110  3.90  2.620  16.46   0   1     4     4  102.000000
Mazda RX4 Wag        21.0    6  160.0  110  3.90  2.875  17.02   0   1     4     4   96.533333
Datsun 710           22.8    4  108.0   93  3.85  2.320  18.61   1   1     4     1   96.800000
Hornet 4 Drive       21.4    6  258.0  110  3.08  3.215  19.44   1   0     3     1  101.933333
Hornet Sportabout    18.7    8  360.0  175  3.15  3.440  17.02   0   0     3     2  105.466667
Valiant              18.1    6  225.0  105  2.76  3.460  20.22   1   0     3     1  107.400000
Duster 360           14.3    8  360.0  245  3.21  3.570  15.84   0   0     3     4   97.866667
Merc 240D            24.4    4  146.7   62  3.69  3.190  20.00   1   0     4     2   94.333333
Merc 230             22.8    4  140.8   95  3.92  3.150  22.90   1   0     4     2   90.933333
Merc 280             19.2    6  167.6  123  3.92  3.440  18.30   1   0     4     4   93.200000
Merc 280C            17.8    6  167.6  123  3.92  3.440  18.90   1   0     4     4  102.266667
Merc 450SE           16.4    8  275.8  180  3.07  4.070  17.40   0   0     3     3  107.666667
Merc 450SL           17.3    8  275.8  180  3.07  3.730  17.60   0   0     3     3  112.600000
Merc 450SLC          15.2    8  275.8  180  3.07  3.780  18.00   0   0     3     3  108.600000
Cadillac Fleetwood   10.4    8  472.0  205  2.93  5.250  17.98   0   0     3     4  104.000000
Lincoln Continental  10.4    8  460.0  215  3.00  5.424  17.82   0   0     3     4  103.666667
Chrysler Imperial    14.7    8  440.0  230  3.23  5.345  17.42   0   0     3     4  105.000000
Fiat 128             32.4    4   78.7   66  4.08  2.200  19.47   1   1     4     1  105.000000
Honda Civic          30.4    4   75.7   52  4.93  1.615  18.52   1   1     4     2  104.466667
Toyota Corolla       33.9    4   71.1   65  4.22  1.835  19.90   1   1     4     1   97.200000
Toyota Corona        21.5    4  120.1   97  3.70  2.465  20.01   1   0     3     1  100.600000
Dodge Challenger     15.5    8  318.0  150  2.76  3.520  16.87   0   0     3     2  101.466667
AMC Javelin          15.2    8  304.0  150  3.15  3.435  17.30   0   0     3     2  109.333333
Camaro Z28           13.3    8  350.0  245  3.73  3.840  15.41   0   0     3     4  111.800000
Pontiac Firebird     19.2    8  400.0  175  3.08  3.845  17.05   0   0     3     2  106.533333
Fiat X1-9            27.3    4   79.0   66  4.08  1.935  18.90   1   1     4     1  101.666667
Porsche 914-2        26.0    4  120.3   91  4.43  2.140  16.70   0   1     5     2   95.800000
Lotus Europa         30.4    4   95.1  113  3.77  1.513  16.90   1   1     5     2  101.466667
Ford Pantera L       15.8    8  351.0  264  4.22  3.170  14.50   0   1     5     4  103.933333
Ferrari Dino         19.7    6  145.0  175  3.62  2.770  15.50   0   1     5     6  107.000000
Maserati Bora        15.0    8  301.0  335  3.54  3.570  14.60   0   1     5     8   97.400000
Volvo 142E           21.4    4  121.0  109  4.11  2.780  18.60   1   1     4     2   96.400000
You can pass more complex functions to roll_apply. Below are a few examples:
roll_apply(mtcars, lambda d: pd.Series({'A': d.sum().sum(), 'B': d.std().std()}), win_size=3, min_periods=1, after=1)  # simple example to illustrate the use case
roll_apply(mtcars, lambda d: d, win_size=3, min_periods=3, after=1)  # this will return a rolling dataframe
I'm not aware of a way to do this calculation easily and efficiently by applying a single function to a pandas dataframe, because you're calculating values across multiple rows and columns. An efficient way is to first calculate the column you want the rolling average of, then calculate the rolling average:
import statsmodels.api as sm
import pandas as pd

mtcars = pd.DataFrame(sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data)

# Create column
def df_fun(df, col1, col2, w1=10, w2=2):
    return w1 * df[col1] + w2 * df[col2]

mtcars['fun_val'] = df_fun(mtcars, 'cyl', 'mpg')
# Calculate rolling average
mtcars['fun_val_r3m'] = mtcars['fun_val'].rolling(3, center=True, min_periods=0).mean()
This gives the correct answer and is efficient, since each step is optimized for performance. I found that separating the row and column calculations like this is about 10 times faster than the latest approach you proposed, with no need to import numpy. If you don't want to keep the intermediate calculation, fun_val, you can overwrite it with the rolling average value, fun_val_r3m.
If you really need to do this in one line with apply, I'm not aware of another way other than what you've done in your latest post. numpy array based approaches may perform better, though they are less readable.
After much searching and fighting with the arguments, I found an approach inspired by this answer:
def fun(series, w1=10, w2=2):
    col1 = mtcars.loc[series.index, 'cyl']
    col2 = mtcars.loc[series.index, 'mpg']
    return np.mean(w1 * col1 + w2 * col2)

mtcars['roll'] = (mtcars.rolling(3, center=True, min_periods=0)['mpg']
                  .apply(fun, raw=False))
mtcars
mpg cyl disp hp ... am gear carb roll
Mazda RX4 21.0 6 160.0 110 ... 1 4 4 102.000000
Mazda RX4 Wag 21.0 6 160.0 110 ... 1 4 4 96.533333
Datsun 710 22.8 4 108.0 93 ... 1 4 1 96.800000
Hornet 4 Drive 21.4 6 258.0 110 ... 0 3 1 101.933333
Hornet Sportabout 18.7 8 360.0 175 ... 0 3 2 105.466667
Valiant 18.1 6 225.0 105 ... 0 3 1 107.400000
Duster 360 14.3 8 360.0 245 ... 0 3 4 97.866667
Merc 240D 24.4 4 146.7 62 ... 0 4 2 94.333333
Merc 230 22.8 4 140.8 95 ... 0 4 2 90.933333
Merc 280 19.2 6 167.6 123 ... 0 4 4 93.200000
Merc 280C 17.8 6 167.6 123 ... 0 4 4 102.266667
Merc 450SE 16.4 8 275.8 180 ... 0 3 3 107.666667
Merc 450SL 17.3 8 275.8 180 ... 0 3 3 112.600000
Merc 450SLC 15.2 8 275.8 180 ... 0 3 3 108.600000
Cadillac Fleetwood 10.4 8 472.0 205 ... 0 3 4 104.000000
Lincoln Continental 10.4 8 460.0 215 ... 0 3 4 103.666667
Chrysler Imperial 14.7 8 440.0 230 ... 0 3 4 105.000000
Fiat 128 32.4 4 78.7 66 ... 1 4 1 105.000000
Honda Civic 30.4 4 75.7 52 ... 1 4 2 104.466667
Toyota Corolla 33.9 4 71.1 65 ... 1 4 1 97.200000
Toyota Corona 21.5 4 120.1 97 ... 0 3 1 100.600000
Dodge Challenger 15.5 8 318.0 150 ... 0 3 2 101.466667
AMC Javelin 15.2 8 304.0 150 ... 0 3 2 109.333333
Camaro Z28 13.3 8 350.0 245 ... 0 3 4 111.800000
Pontiac Firebird 19.2 8 400.0 175 ... 0 3 2 106.533333
Fiat X1-9 27.3 4 79.0 66 ... 1 4 1 101.666667
Porsche 914-2 26.0 4 120.3 91 ... 1 5 2 95.800000
Lotus Europa 30.4 4 95.1 113 ... 1 5 2 101.466667
Ford Pantera L 15.8 8 351.0 264 ... 1 5 4 103.933333
Ferrari Dino 19.7 6 145.0 175 ... 1 5 6 107.000000
Maserati Bora 15.0 8 301.0 335 ... 1 5 8 97.400000
Volvo 142E 21.4 4 121.0 109 ... 1 4 2 96.400000
[32 rows x 12 columns]
There are several things that are needed for this to perform as I wanted. raw=False gives fun access to the Series, if only to call .index (the docs say: "False: passes each row or column as a Series to the function"). This is dumb and inefficient, but it works. I needed my window centered, so center=True. I also needed the NaN filled with available info, so I set min_periods=0.
There are a few things that I don't like about this approach:
It seems to me that calling mtcars from outside the fun scope is potentially dangerous and might cause bugs.
Indexing with .loc line by line does not scale well and probably has worse performance (the work is repeated more times than needed).
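Both concerns can be mitigated by rolling over row positions and passing the frame in explicitly, so the callback captures nothing global. A sketch under those assumptions (the helper name rolling_multi is made up here):

```python
import numpy as np
import pandas as pd

def rolling_multi(df, fn, window, center=True, min_periods=0):
    """Apply fn to a rolling multi-column slice of df.

    Rolls over row positions, then hands fn the corresponding row-slice of
    df, so fn sees all columns and df is passed explicitly, not via globals.
    """
    pos = pd.Series(np.arange(len(df)), index=df.index)
    return pos.rolling(window, center=center, min_periods=min_periods).apply(
        lambda p: fn(df.iloc[int(p.iloc[0]):int(p.iloc[-1]) + 1]), raw=False
    )

# Toy data with the question's weighting; edge windows are partial,
# so e.g. the first value averages rows 0-1 only
df = pd.DataFrame({'cyl': [6, 6, 4, 6], 'mpg': [21.0, 21.0, 22.8, 21.4]})
roll = rolling_multi(df, lambda d: np.mean(10 * d.cyl + 2 * d.mpg), 3)
print(roll.tolist())
```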

Why is pandas showing "?" instead of NaN

I'm learning pandas, and when I display the data frame, it displays ? instead of NaN.
Why is that?
CODE:
import pandas as pd

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data"
df = pd.read_csv(url, header=None)
print(df.head())

headers = ["symboling", "normalized-losses", "make", "fuel-type", "aspiration",
           "num-of-doors", "body-style", "drive-wheels", "engine-location",
           "wheel-base", "length", "width", "height", "curb-weight",
           "engine-type", "num-of-cylinders", "engine-size", "fuel-system",
           "bore", "stroke", "compression-ratio", "hoursepower", "peak-rpm",
           "city-mpg", "highway-mpg", "price"]
df.columns = headers
print(df.head(30))
The missing values in the data are represented by ?, so you can convert them with the na_values parameter. The names parameter of read_csv also assigns the columns from a list, so a separate assignment is not necessary:
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data"
headers = ["symboling", "normalized-losses", "make", "fuel-type", "aspiration",
           "num-of-doors", "body-style", "drive-wheels", "engine-location",
           "wheel-base", "length", "width", "height", "curb-weight",
           "engine-type", "num-of-cylinders", "engine-size", "fuel-system",
           "bore", "stroke", "compression-ratio", "hoursepower", "peak-rpm",
           "city-mpg", "highway-mpg", "price"]
df = pd.read_csv(url, header=None, names=headers, na_values='?')
print(df.head(10))
symboling normalized-losses make fuel-type aspiration \
0 3 NaN alfa-romero gas std
1 3 NaN alfa-romero gas std
2 1 NaN alfa-romero gas std
3 2 164.0 audi gas std
4 2 164.0 audi gas std
5 2 NaN audi gas std
6 1 158.0 audi gas std
7 1 NaN audi gas std
8 1 158.0 audi gas turbo
9 0 NaN audi gas turbo
num-of-doors body-style drive-wheels engine-location wheel-base ... \
0 two convertible rwd front 88.6 ...
1 two convertible rwd front 88.6 ...
2 two hatchback rwd front 94.5 ...
3 four sedan fwd front 99.8 ...
4 four sedan 4wd front 99.4 ...
5 two sedan fwd front 99.8 ...
6 four sedan fwd front 105.8 ...
7 four wagon fwd front 105.8 ...
8 four sedan fwd front 105.8 ...
9 two hatchback 4wd front 99.5 ...
engine-size fuel-system bore stroke compression-ratio hoursepower \
0 130 mpfi 3.47 2.68 9.0 111.0
1 130 mpfi 3.47 2.68 9.0 111.0
2 152 mpfi 2.68 3.47 9.0 154.0
3 109 mpfi 3.19 3.40 10.0 102.0
4 136 mpfi 3.19 3.40 8.0 115.0
5 136 mpfi 3.19 3.40 8.5 110.0
6 136 mpfi 3.19 3.40 8.5 110.0
7 136 mpfi 3.19 3.40 8.5 110.0
8 131 mpfi 3.13 3.40 8.3 140.0
9 131 mpfi 3.13 3.40 7.0 160.0
peak-rpm city-mpg highway-mpg price
0 5000.0 21 27 13495.0
1 5000.0 21 27 16500.0
2 5000.0 19 26 16500.0
3 5500.0 24 30 13950.0
4 5500.0 18 22 17450.0
5 5500.0 19 25 15250.0
6 5500.0 19 25 17710.0
7 5500.0 19 25 18920.0
8 5500.0 17 20 23875.0
9 5500.0 16 22 NaN
[10 rows x 26 columns]
This information is here:
https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.names:
Missing Attribute Values: (denoted by "?")
Another solution: if you want to replace ? with NaN after reading the data, you can do this (with numpy imported as np):
df_new = df.replace({'?': np.nan})

Dropping multiple columns in pandas at once

I have a data set consisting of 135 columns. I am trying to drop the columns which have more than 60% empty data; there are approximately 40 of them. So I wrote a function to drop these empty columns, but I am getting a "Not contained in axis" error. Could someone help me solve this, or suggest another way to drop these 40 columns at once?
My function:
list_drop = df.isnull().sum() / len(df)
def empty(df):
    if list_drop > 0.5:
        df.drop(list_drop, axis=1, inplace=True)
    return df
Other method i tried:
df.drop(df.count()/len(df)<0.5,axis=1,inplace=True)
You could use isnull + sum and then use the mask to filter df.columns.
m = df.isnull().sum(0) / len(df) < 0.6
df = df[df.columns[m]]
Demo
df
A B C
0 29.0 NaN 26.6
1 NaN NaN 23.3
2 23.0 94.0 28.1
3 35.0 168.0 43.1
4 NaN NaN 25.6
5 32.0 88.0 31.0
6 NaN NaN 35.3
7 45.0 543.0 30.5
8 NaN NaN NaN
9 NaN NaN 37.6
10 NaN NaN 38.0
11 NaN NaN 27.1
12 23.0 846.0 30.1
13 19.0 175.0 25.8
14 NaN NaN 30.0
15 47.0 230.0 45.8
16 NaN NaN 29.6
17 38.0 83.0 43.3
18 30.0 96.0 34.6
m = df.isnull().sum(0) / len(df) < 0.3 # 0.3 as an example
m
A False
B False
C True
dtype: bool
df[df.columns[m]]
C
0 26.6
1 23.3
2 28.1
3 43.1
4 25.6
5 31.0
6 35.3
7 30.5
8 NaN
9 37.6
10 38.0
11 27.1
12 30.1
13 25.8
14 30.0
15 45.8
16 29.6
17 43.3
18 34.6
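An alternative that avoids building the mask by hand is DataFrame.dropna with its thresh parameter, which counts the required non-null values per column (toy data for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, np.nan, np.nan, np.nan],
                   'B': [np.nan, np.nan, np.nan, 4],
                   'C': [1, 2, 3, 4]})

# Drop columns with more than 60% missing values, i.e. keep only
# columns that have at least 40% non-null values
thresh = int(np.ceil(len(df) * 0.4))
kept = df.dropna(axis=1, thresh=thresh)
print(list(kept.columns))  # ['C']
```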

Using pandas to calculate over-the-month and over-the-year change

I can't wrap my head around how to do this, but I want to go from this DataFrame:
Date Value
Jan-15 300
Feb-15 302
Mar-15 303
Apr-15 305
May-15 307
Jun-15 307
Jul-15 305
Aug-15 306
Sep-15 308
Oct-15 310
Nov-15 309
Dec-15 312
Jan-16 315
Feb-16 317
Mar-16 315
Apr-16 315
May-16 312
Jun-16 314
Jul-16 312
Aug-16 313
Sep-16 316
Oct-16 316
Nov-16 316
Dec-16 312
To this one by calculating over-the-month and over-the-year change:
Date Value otm oty
Jan-15 300 na na
Feb-15 302 2 na
Mar-15 303 1 na
Apr-15 305 2 na
May-15 307 2 na
Jun-15 307 0 na
Jul-15 305 -2 na
Aug-15 306 1 na
Sep-15 308 2 na
Oct-15 310 2 na
Nov-15 309 -1 na
Dec-15 312 3 na
Jan-16 315 3 15
Feb-16 317 2 15
Mar-16 315 -2 12
Apr-16 315 0 10
May-16 312 -3 5
Jun-16 314 2 7
Jul-16 312 -2 7
Aug-16 313 1 7
Sep-16 316 3 8
Oct-16 316 0 6
Nov-16 316 0 7
Dec-16 312 -4 0
So otm is calculated from the value in the row above, and oty is calculated from the value 12 rows above.
I think you need diff, but it is necessary that no months are missing from the index:
df['otm'] = df.Value.diff()
df['oty'] = df.Value.diff(12)
print (df)
Date Value otm oty
0 Jan-15 300 NaN NaN
1 Feb-15 302 2.0 NaN
2 Mar-15 303 1.0 NaN
3 Apr-15 305 2.0 NaN
4 May-15 307 2.0 NaN
5 Jun-15 307 0.0 NaN
6 Jul-15 305 -2.0 NaN
7 Aug-15 306 1.0 NaN
8 Sep-15 308 2.0 NaN
9 Oct-15 310 2.0 NaN
10 Nov-15 309 -1.0 NaN
11 Dec-15 312 3.0 NaN
12 Jan-16 315 3.0 15.0
13 Feb-16 317 2.0 15.0
14 Mar-16 315 -2.0 12.0
15 Apr-16 315 0.0 10.0
16 May-16 312 -3.0 5.0
17 Jun-16 314 2.0 7.0
18 Jul-16 312 -2.0 7.0
19 Aug-16 313 1.0 7.0
20 Sep-16 316 3.0 8.0
21 Oct-16 316 0.0 6.0
22 Nov-16 316 0.0 7.0
23 Dec-16 312 -4.0 0.0
If some months are missing, it is a bit more complicated:
convert Date with to_datetime + to_period
set_index + reindex (if the first Jan or last Dec values are missing, it is better to set them manually, not by min and max)
change the format of the index by strftime
reset_index
df['Date'] = pd.to_datetime(df['Date'], format='%b-%y').dt.to_period('M')
df = df.set_index('Date')
df = df.reindex(pd.period_range(df.index.min(), df.index.max(), freq='M'))
df.index = df.index.strftime('%b-%y')
df = df.rename_axis('date').reset_index()
df['otm'] = df.Value.diff()
df['oty'] = df.Value.diff(12)
print (df)
date Value otm oty
0 Jan-15 300.0 NaN NaN
1 Feb-15 302.0 2.0 NaN
2 Mar-15 NaN NaN NaN
3 Apr-15 NaN NaN NaN
4 May-15 307.0 NaN NaN
5 Jun-15 307.0 0.0 NaN
6 Jul-15 305.0 -2.0 NaN
7 Aug-15 306.0 1.0 NaN
8 Sep-15 308.0 2.0 NaN
9 Oct-15 310.0 2.0 NaN
10 Nov-15 309.0 -1.0 NaN
11 Dec-15 312.0 3.0 NaN
12 Jan-16 315.0 3.0 15.0
13 Feb-16 317.0 2.0 15.0
14 Mar-16 315.0 -2.0 NaN
15 Apr-16 315.0 0.0 NaN
16 May-16 312.0 -3.0 5.0
17 Jun-16 314.0 2.0 7.0
18 Jul-16 312.0 -2.0 7.0
19 Aug-16 313.0 1.0 7.0
20 Sep-16 316.0 3.0 8.0
21 Oct-16 316.0 0.0 6.0
22 Nov-16 316.0 0.0 7.0
23 Dec-16 312.0 -4.0 0.0
A more correct solution is to shift by month frequency:
#Create datetime column
df['DateTime'] = pd.to_datetime(df['Date'], format='%b-%y')
#Set it as index
df.set_index('DateTime', inplace=True)
#Then shift by month frequency:
df['otm'] = df['Value'] - df['Value'].shift(1, freq='MS')
df['oty'] = df['Value'] - df['Value'].shift(12, freq='MS')
Without the freq argument, a plain positional shift gives the same result only when no months are missing:
df['otm'] = df['Value'] - df['Value'].shift(1)
df['oty'] = df['Value'] - df['Value'].shift(12)
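A self-contained sketch of the frequency-aware version on toy data with a missing month (Mar-16 absent) shows why the freq argument matters: a plain shift(1) would wrongly pair Apr-16 with Feb-16.

```python
import pandas as pd

# Toy series with Mar-16 missing
df = pd.DataFrame({'Date': ['Jan-16', 'Feb-16', 'Apr-16', 'May-16'],
                   'Value': [315, 317, 315, 312]})
df['DateTime'] = pd.to_datetime(df['Date'], format='%b-%y')
df = df.set_index('DateTime')

# shift(1, freq='MS') moves the index forward one month instead of moving
# the values one row, so Apr-16 has no Mar-16 partner and gets NaN
df['otm'] = df['Value'] - df['Value'].shift(1, freq='MS')
print(df['otm'].tolist())  # [nan, 2.0, nan, -3.0]
```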

Pandas DataFrame: How to calculate the difference by first row and last row in group?

Here is my pandas DataFrame:
import pandas as pd
import numpy as np
data = {"column1": [338, 519, 871, 1731, 2693, 2963, 3379, 3789, 3910, 4109, 4307, 4800, 4912, 5111, 5341, 5820, 6003, ...],
        "column2": [np.nan, 1, 1, 1, 1, np.nan, np.nan, 2, 2, np.nan, np.nan, 3, 3, 3, 3, 3, np.nan, np.nan], ...}
df = pd.DataFrame(data)
df
>>> column1 column2
0 338 NaN
1 519 1.0
2 871 1.0
3 1731 1.0
4 2693 1.0
5 2963 NaN
6 3379 NaN
7 3789 2.0
8 3910 2.0
9 4109 NaN
10 4307 NaN
11 4800 3.0
12 4912 3.0
13 5111 3.0
14 5341 3.0
15 5820 3.0
16 6003 NaN
17 .... ....
The integers in column2 denote "groups" in column1, e.g. rows 1-4 is group "1", rows 7-8 is group "2", rows 11-15 is group "3", etc.
I would like to calculate the difference between the first row and last row in each group. The resulting dataframe would look like this:
df
>>> column1 column2 column3
0 338 NaN NaN
1 519 1.0 2174
2 871 1.0 2174
3 1731 1.0 2174
4 2693 1.0 2174
5 2963 NaN NaN
6 3379 NaN NaN
7 3789 2.0 121
8 3910 2.0 121
9 4109 NaN NaN
10 4307 NaN NaN
11 4800 3.0 1020
12 4912 3.0 1020
13 5111 3.0 1020
14 5341 3.0 1020
15 5820 3.0 1020
16 6003 NaN NaN
17 .... .... ...
because:
2693-519 = 2174
3910-3789 = 121
5820-4800 = 1020
What is the "pandas way" to calculate column3? Somehow, one must scan column2, looking for consecutive groups of rows where df.column2 is not NaN.
EDIT: I realized my example may lead readers to assume the values in column1 are only increasing. Actually, the data is divided into intervals, given by an interval column:
df = pd.DataFrame(data)
df
>>> interval column1 column2
0 interval1 338 NaN
1 interval1 519 1.0
2 interval1 871 1.0
3 interval1 1731 1.0
4 interval1 2693 1.0
5 interval1 2963 NaN
6 interval1 3379 NaN
7 interval1 3789 2.0
8 interval1 3910 2.0
9 interval1 4109 NaN
10 interval1 4307 NaN
11 interval1 4800 3.0
12 interval1 4912 3.0
13 interval1 5111 3.0
14 interval1 5341 3.0
15 interval1 5820 3.0
16 interval1 6003 NaN
17 .... ....
18 interval2 12 13
19 interval2 115 13
20 interval2 275 NaN
....
You can filter first and then get the difference between the first and last values in transform:
df['col3'] = (df[df.column2.notnull()]
              .groupby('column2')['column1']
              .transform(lambda x: x.iat[-1] - x.iat[0]))
print (df)
column1 column2 col3
0 338 NaN NaN
1 519 1.0 2174.0
2 871 1.0 2174.0
3 1731 1.0 2174.0
4 2693 1.0 2174.0
5 2963 NaN NaN
6 3379 NaN NaN
7 3789 2.0 121.0
8 3910 2.0 121.0
9 4109 NaN NaN
10 4307 NaN NaN
11 4800 3.0 1020.0
12 4912 3.0 1020.0
13 5111 3.0 1020.0
14 5341 3.0 1020.0
15 5820 3.0 1020.0
16 6003 NaN NaN
EDIT1, with your new data:
df['col3'] = (df[df.column2.notnull()]
              .groupby('column2')['column1']
              .transform(lambda x: x.iat[-1] - x.iat[0]))
print (df)
interval column1 column2 col3
0 interval1 338 NaN NaN
1 interval1 519 1.0 2174.0
2 interval1 871 1.0 2174.0
3 interval1 1731 1.0 2174.0
4 interval1 2693 1.0 2174.0
5 interval1 2963 NaN NaN
6 interval1 3379 NaN NaN
7 interval1 3789 2.0 121.0
8 interval1 3910 2.0 121.0
9 interval1 4109 NaN NaN
10 interval1 4307 NaN NaN
11 interval1 4800 3.0 1020.0
12 interval1 4912 3.0 1020.0
13 interval1 5111 3.0 1020.0
14 interval1 5341 3.0 1020.0
15 interval1 5820 3.0 1020.0
16 interval1 6003 NaN NaN
18 interval2 12 13.0 103.0
19 interval2 115 13.0 103.0
20 interval2 275 NaN NaN
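If the same group label could recur in a later interval, grouping on both keys keeps those groups separate; a sketch with made-up data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'interval': ['interval1'] * 5 + ['interval2'] * 3,
    'column1':  [338, 519, 2693, 3789, 3910, 12, 115, 275],
    'column2':  [np.nan, 1, 1, 2, 2, 13, 13, np.nan],
})

# Group on (interval, column2) so a label reused across intervals
# does not merge rows from different intervals
df['column3'] = (df[df.column2.notnull()]
                 .groupby(['interval', 'column2'])['column1']
                 .transform(lambda x: x.iat[-1] - x.iat[0]))
print(df['column3'].tolist())  # NaN rows stay NaN
```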
