Why is pandas showing "?" instead of NaN - python

I'm learning pandas, and when I display the DataFrame it shows ? instead of NaN. Why is that?
Code:
import pandas as pd
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data"
df = pd.read_csv(url, header=None)
print(df.head())
headers = ["symboling", "normalized-losses", "make", "fuel-type", "aspiration",
           "num-of-doors", "body-style", "drive-wheels", "engine-location",
           "wheel-base", "length", "width", "height", "curb-weight",
           "engine-type", "num-of-cylinders", "engine-size", "fuel-system",
           "bore", "stroke", "compression-ratio", "horsepower", "peak-rpm",
           "city-mpg", "highway-mpg", "price"]
df.columns = headers
print(df.head(30))

Missing values in the data are represented by ?, so you can convert them with the na_values parameter. The names parameter of read_csv also assigns the column names from a list, so the separate assignment is not necessary:
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data"
headers = ["symboling", "normalized-losses", "make", "fuel-type", "aspiration",
           "num-of-doors", "body-style", "drive-wheels", "engine-location",
           "wheel-base", "length", "width", "height", "curb-weight",
           "engine-type", "num-of-cylinders", "engine-size", "fuel-system",
           "bore", "stroke", "compression-ratio", "horsepower", "peak-rpm",
           "city-mpg", "highway-mpg", "price"]
df = pd.read_csv(url, header=None, names=headers, na_values='?')
print(df.head(10))
symboling normalized-losses make fuel-type aspiration \
0 3 NaN alfa-romero gas std
1 3 NaN alfa-romero gas std
2 1 NaN alfa-romero gas std
3 2 164.0 audi gas std
4 2 164.0 audi gas std
5 2 NaN audi gas std
6 1 158.0 audi gas std
7 1 NaN audi gas std
8 1 158.0 audi gas turbo
9 0 NaN audi gas turbo
num-of-doors body-style drive-wheels engine-location wheel-base ... \
0 two convertible rwd front 88.6 ...
1 two convertible rwd front 88.6 ...
2 two hatchback rwd front 94.5 ...
3 four sedan fwd front 99.8 ...
4 four sedan 4wd front 99.4 ...
5 two sedan fwd front 99.8 ...
6 four sedan fwd front 105.8 ...
7 four wagon fwd front 105.8 ...
8 four sedan fwd front 105.8 ...
9 two hatchback 4wd front 99.5 ...
engine-size fuel-system bore stroke compression-ratio horsepower \
0 130 mpfi 3.47 2.68 9.0 111.0
1 130 mpfi 3.47 2.68 9.0 111.0
2 152 mpfi 2.68 3.47 9.0 154.0
3 109 mpfi 3.19 3.40 10.0 102.0
4 136 mpfi 3.19 3.40 8.0 115.0
5 136 mpfi 3.19 3.40 8.5 110.0
6 136 mpfi 3.19 3.40 8.5 110.0
7 136 mpfi 3.19 3.40 8.5 110.0
8 131 mpfi 3.13 3.40 8.3 140.0
9 131 mpfi 3.13 3.40 7.0 160.0
peak-rpm city-mpg highway-mpg price
0 5000.0 21 27 13495.0
1 5000.0 21 27 16500.0
2 5000.0 19 26 16500.0
3 5500.0 24 30 13950.0
4 5500.0 18 22 17450.0
5 5500.0 19 25 15250.0
6 5500.0 19 25 17710.0
7 5500.0 19 25 18920.0
8 5500.0 17 20 23875.0
9 5500.0 16 22 NaN
[10 rows x 26 columns]
This is documented in the dataset description at
https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.names:
Missing Attribute Values: (denoted by "?")
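To confirm the conversion worked, you can count the NaN values and check the dtype. A minimal, self-contained sketch using a tiny inline sample standing in for the real file (the three rows below are made up, not actual dataset rows):

```python
import io
import pandas as pd

# A tiny stand-in for the imports-85 file: "?" marks missing values,
# just like the real dataset.
raw = "3,?,alfa-romero\n2,164,audi\n1,?,audi\n"
df = pd.read_csv(io.StringIO(raw), header=None,
                 names=["symboling", "normalized-losses", "make"],
                 na_values="?")
print(df["normalized-losses"].isna().sum())  # 2 missing values
print(df["normalized-losses"].dtype)         # float64, not object
```

Because the ? cells become NaN at read time, the column parses as numeric instead of staying object dtype.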

Another solution: if you want to replace ? with NaN after reading the data, you can do this:
import numpy as np
df_new = df.replace({'?': np.nan})
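One caveat with the replace approach: since the file was read with ? still present, the affected columns come in as object dtype, and replace alone does not convert them to numbers. A small sketch on made-up data showing the extra pd.to_numeric step:

```python
import numpy as np
import pandas as pd

# Toy example: the column was read as strings because of the "?" marker.
df = pd.DataFrame({"price": ["13495", "16500", "?"]})
df_new = df.replace({"?": np.nan})
# The column is still object dtype; convert it explicitly before numeric work.
df_new["price"] = pd.to_numeric(df_new["price"])
print(df_new["price"].dtype)  # float64
```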

Apply rolling custom function with pandas

There are a few similar questions on this site, but I couldn't find a solution to my particular question.
I have a dataframe that I want to process with a custom function (the real function has a bit more pre-processing, but the gist is contained in the toy example fun).
import statsmodels.api as sm
import numpy as np
import pandas as pd
mtcars=pd.DataFrame(sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data)
def fun(col1, col2, w1=10, w2=2):
    return np.mean(w1 * col1 + w2 * col2)
# This is the behavior I would expect for the full dataset, currently working
mtcars.apply(lambda x: fun(x.cyl, x.mpg), axis=1)
# This was my approach to do the same with a rolling function
mtcars.rolling(3).apply(lambda x: fun(x.cyl, x.mpg))
The rolling version returns this error:
AttributeError: 'Series' object has no attribute 'cyl'
I figured I don't fully understand how rolling works, since adding a print statement to the beginning of my function shows that fun is not getting the full dataset but an unnamed Series of length 3. What is the approach to apply this rolling function in pandas?
Just in case, I am running
>>> pd.__version__
'1.5.2'
Update
Looks like there is a very similar question here which might partially overlap with what I'm trying to do.
For completeness, here's how I would do this in R with the expected output.
library(dplyr)
fun <- function(col1, col2, w1=10, w2=2){
  return(mean(w1*col1 + w2*col2))
}
mtcars %>%
  mutate(roll = slider::slide2(.x = cyl,
                               .y = mpg,
                               .f = fun,
                               .before = 1,
                               .after = 1))
mpg cyl disp hp drat wt qsec vs am gear carb roll
Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1 4 4 102
Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1 4 4 96.53333
Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1 4 1 96.8
Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0 3 1 101.9333
Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0 3 2 105.4667
Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0 3 1 107.4
Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0 3 4 97.86667
Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0 4 2 94.33333
Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0 4 2 90.93333
Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0 4 4 93.2
Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0 4 4 102.2667
Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0 3 3 107.6667
Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0 3 3 112.6
Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0 3 3 108.6
Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0 3 4 104
Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0 3 4 103.6667
Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0 3 4 105
Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1 4 1 105
Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1 4 2 104.4667
Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1 4 1 97.2
Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0 3 1 100.6
Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0 3 2 101.4667
AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0 3 2 109.3333
Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0 3 4 111.8
Pontiac Firebird 19.2 8 400.0 175 3.08 3.845 17.05 0 0 3 2 106.5333
Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1 4 1 101.6667
Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1 5 2 95.8
Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1 5 2 101.4667
Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1 5 4 103.9333
Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.50 0 1 5 6 107
Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1 5 8 97.4
Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.60 1 1 4 2 96.4
There is no really elegant way to do this. Here is a suggestion:
First, install numpy_ext (use pip install numpy_ext or pip install numpy_ext --user).
Second, you'll need to compute your column separately and concatenate it to your original dataframe:
import statsmodels.api as sm
import pandas as pd
from numpy_ext import rolling_apply as rolling_apply_ext
import numpy as np
mtcars=pd.DataFrame(sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data).reset_index()
def fun(col1, col2, w1=10, w2=2):
    return w1 * col1 + w2 * col2

Col = pd.DataFrame(rolling_apply_ext(fun, 3, mtcars.cyl.values, mtcars.mpg.values)).rename(columns={2: 'rolling'})
mtcars.join(Col["rolling"])
to get:
index mpg cyl disp hp drat wt qsec vs am \
0 Mazda RX4 21.0 6 160.0 110 3.90 2.620 16.46 0 1
1 Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.875 17.02 0 1
2 Datsun 710 22.8 4 108.0 93 3.85 2.320 18.61 1 1
3 Hornet 4 Drive 21.4 6 258.0 110 3.08 3.215 19.44 1 0
4 Hornet Sportabout 18.7 8 360.0 175 3.15 3.440 17.02 0 0
5 Valiant 18.1 6 225.0 105 2.76 3.460 20.22 1 0
6 Duster 360 14.3 8 360.0 245 3.21 3.570 15.84 0 0
7 Merc 240D 24.4 4 146.7 62 3.69 3.190 20.00 1 0
8 Merc 230 22.8 4 140.8 95 3.92 3.150 22.90 1 0
9 Merc 280 19.2 6 167.6 123 3.92 3.440 18.30 1 0
10 Merc 280C 17.8 6 167.6 123 3.92 3.440 18.90 1 0
11 Merc 450SE 16.4 8 275.8 180 3.07 4.070 17.40 0 0
12 Merc 450SL 17.3 8 275.8 180 3.07 3.730 17.60 0 0
13 Merc 450SLC 15.2 8 275.8 180 3.07 3.780 18.00 0 0
14 Cadillac Fleetwood 10.4 8 472.0 205 2.93 5.250 17.98 0 0
15 Lincoln Continental 10.4 8 460.0 215 3.00 5.424 17.82 0 0
16 Chrysler Imperial 14.7 8 440.0 230 3.23 5.345 17.42 0 0
17 Fiat 128 32.4 4 78.7 66 4.08 2.200 19.47 1 1
18 Honda Civic 30.4 4 75.7 52 4.93 1.615 18.52 1 1
19 Toyota Corolla 33.9 4 71.1 65 4.22 1.835 19.90 1 1
20 Toyota Corona 21.5 4 120.1 97 3.70 2.465 20.01 1 0
21 Dodge Challenger 15.5 8 318.0 150 2.76 3.520 16.87 0 0
22 AMC Javelin 15.2 8 304.0 150 3.15 3.435 17.30 0 0
23 Camaro Z28 13.3 8 350.0 245 3.73 3.840 15.41 0 0
24 Pontiac Firebird 19.2 8 400.0 175 3.08 3.845 17.05 0 0
25 Fiat X1-9 27.3 4 79.0 66 4.08 1.935 18.90 1 1
26 Porsche 914-2 26.0 4 120.3 91 4.43 2.140 16.70 0 1
27 Lotus Europa 30.4 4 95.1 113 3.77 1.513 16.90 1 1
28 Ford Pantera L 15.8 8 351.0 264 4.22 3.170 14.50 0 1
29 Ferrari Dino 19.7 6 145.0 175 3.62 2.770 15.50 0 1
30 Maserati Bora 15.0 8 301.0 335 3.54 3.570 14.60 0 1
31 Volvo 142E 21.4 4 121.0 109 4.11 2.780 18.60 1 1
gear carb rolling
0 4 4 NaN
1 4 4 NaN
2 4 1 85.6
3 3 1 102.8
4 3 2 117.4
5 3 1 96.2
6 3 4 108.6
7 4 2 88.8
8 4 2 85.6
9 4 4 98.4
10 4 4 95.6
11 3 3 112.8
12 3 3 114.6
13 3 3 110.4
14 3 4 100.8
15 3 4 100.8
16 3 4 109.4
17 4 1 104.8
18 4 2 100.8
19 4 1 107.8
20 3 1 83.0
21 3 2 111.0
22 3 2 110.4
23 3 4 106.6
24 3 2 118.4
25 4 1 94.6
26 5 2 92.0
27 5 2 100.8
28 5 4 111.6
29 5 6 99.4
30 5 8 110.0
31 4 2 82.8
You can use the function below for a rolling apply. It may be slower than pandas' built-in rolling in certain situations, but it has additional functionality.
Its arguments win_size and min_periods work like their pandas counterparts (integer input only). In addition, the after parameter also controls the window: it shifts the window to include observations after the current row.
def roll_apply(df, fn, win_size, min_periods=None, after=None):
    if min_periods is None:
        min_periods = win_size
    else:
        assert min_periods >= 1
    if after is None:
        after = 0
    before = win_size - 1 - after
    i = np.arange(df.shape[0])
    s = np.maximum(i - before, 0)
    e = np.minimum(i + after, df.shape[0]) + 1
    res = [fn(df.iloc[si:ei]) for si, ei in zip(s, e) if (ei - si) >= min_periods]
    idx = df.index[(e - s) >= min_periods]
    types = {type(ri) for ri in res}
    if len(types) != 1:
        return pd.Series(res, index=idx)
    t = list(types)[0]
    if t == pd.Series:
        return pd.DataFrame(res, index=idx)
    elif t == pd.DataFrame:
        return pd.concat(res, keys=idx)
    else:
        return pd.Series(res, index=idx)

mtcars['roll'] = roll_apply(mtcars, lambda x: fun(x.cyl, x.mpg), win_size=3, min_periods=1, after=1)
                      mpg  cyl   disp   hp  drat     wt   qsec  vs  am  gear  carb        roll
Mazda RX4            21.0    6  160.0  110  3.90  2.620  16.46   0   1     4     4  102.000000
Mazda RX4 Wag        21.0    6  160.0  110  3.90  2.875  17.02   0   1     4     4   96.533333
Datsun 710           22.8    4  108.0   93  3.85  2.320  18.61   1   1     4     1   96.800000
Hornet 4 Drive       21.4    6  258.0  110  3.08  3.215  19.44   1   0     3     1  101.933333
Hornet Sportabout    18.7    8  360.0  175  3.15  3.440  17.02   0   0     3     2  105.466667
Valiant              18.1    6  225.0  105  2.76  3.460  20.22   1   0     3     1  107.400000
Duster 360           14.3    8  360.0  245  3.21  3.570  15.84   0   0     3     4   97.866667
Merc 240D            24.4    4  146.7   62  3.69  3.190  20.00   1   0     4     2   94.333333
Merc 230             22.8    4  140.8   95  3.92  3.150  22.90   1   0     4     2   90.933333
Merc 280             19.2    6  167.6  123  3.92  3.440  18.30   1   0     4     4   93.200000
Merc 280C            17.8    6  167.6  123  3.92  3.440  18.90   1   0     4     4  102.266667
Merc 450SE           16.4    8  275.8  180  3.07  4.070  17.40   0   0     3     3  107.666667
Merc 450SL           17.3    8  275.8  180  3.07  3.730  17.60   0   0     3     3  112.600000
Merc 450SLC          15.2    8  275.8  180  3.07  3.780  18.00   0   0     3     3  108.600000
Cadillac Fleetwood   10.4    8  472.0  205  2.93  5.250  17.98   0   0     3     4  104.000000
Lincoln Continental  10.4    8  460.0  215  3.00  5.424  17.82   0   0     3     4  103.666667
Chrysler Imperial    14.7    8  440.0  230  3.23  5.345  17.42   0   0     3     4  105.000000
Fiat 128             32.4    4   78.7   66  4.08  2.200  19.47   1   1     4     1  105.000000
Honda Civic          30.4    4   75.7   52  4.93  1.615  18.52   1   1     4     2  104.466667
Toyota Corolla       33.9    4   71.1   65  4.22  1.835  19.90   1   1     4     1   97.200000
Toyota Corona        21.5    4  120.1   97  3.70  2.465  20.01   1   0     3     1  100.600000
Dodge Challenger     15.5    8  318.0  150  2.76  3.520  16.87   0   0     3     2  101.466667
AMC Javelin          15.2    8  304.0  150  3.15  3.435  17.30   0   0     3     2  109.333333
Camaro Z28           13.3    8  350.0  245  3.73  3.840  15.41   0   0     3     4  111.800000
Pontiac Firebird     19.2    8  400.0  175  3.08  3.845  17.05   0   0     3     2  106.533333
Fiat X1-9            27.3    4   79.0   66  4.08  1.935  18.90   1   1     4     1  101.666667
Porsche 914-2        26.0    4  120.3   91  4.43  2.140  16.70   0   1     5     2   95.800000
Lotus Europa         30.4    4   95.1  113  3.77  1.513  16.90   1   1     5     2  101.466667
Ford Pantera L       15.8    8  351.0  264  4.22  3.170  14.50   0   1     5     4  103.933333
Ferrari Dino         19.7    6  145.0  175  3.62  2.770  15.50   0   1     5     6  107.000000
Maserati Bora        15.0    8  301.0  335  3.54  3.570  14.60   0   1     5     8   97.400000
Volvo 142E           21.4    4  121.0  109  4.11  2.780  18.60   1   1     4     2   96.400000
You can pass more complex functions to the roll_apply function. Below are a few examples:
roll_apply(mtcars, lambda d: pd.Series({'A': d.sum().sum(), 'B': d.std().std()}), win_size=3, min_periods=1, after=1)  # Simple example to illustrate the use case
roll_apply(mtcars, lambda d: d, win_size=3, min_periods=3, after=1)  # Returns a rolling DataFrame
I'm not aware of a way to do this calculation easily and efficiently by applying a single function to a pandas dataframe, because you're calculating values across multiple rows and columns. An efficient way is to first calculate the column you want the rolling average of, then calculate the rolling average:
import statsmodels.api as sm
import pandas as pd
mtcars=pd.DataFrame(sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data)
# Create column
def df_fun(df, col1, col2, w1=10, w2=2):
    return w1 * df[col1] + w2 * df[col2]
mtcars['fun_val'] = df_fun(mtcars, 'cyl', 'mpg')
# Calculate rolling average
mtcars['fun_val_r3m'] = mtcars['fun_val'].rolling(3, center=True, min_periods=0).mean()
This gives the correct answer and is efficient, since each step should be optimized for performance. I found that separating the row and column calculations like this is about 10 times faster than the latest approach you proposed, with no need to import numpy. If you don't want to keep the intermediate calculation, fun_val, you can overwrite it with the rolling average value, fun_val_r3m.
If you really need to do this in one line with apply, I'm not aware of another way other than what you've done in your latest post. numpy array based approaches may be able to perform better, though less readable.
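The compute-then-roll pattern above can be checked on a toy frame. The sketch below hardcodes the first five mtcars rows (values copied by hand, so treat them as an assumption) and reproduces the first rolling values shown in the expected R output:

```python
import pandas as pd

# First five mtcars rows, hardcoded so the sketch is self-contained.
df = pd.DataFrame({"cyl": [6, 6, 4, 6, 8],
                   "mpg": [21.0, 21.0, 22.8, 21.4, 18.7]})

# Step 1: the row-wise value (vectorized, no apply needed).
df["fun_val"] = 10 * df["cyl"] + 2 * df["mpg"]
# Step 2: a centered 3-row rolling mean over that column.
df["fun_val_r3m"] = df["fun_val"].rolling(3, center=True, min_periods=0).mean()
```

The first three results come out as 102.0, 96.533333, and 96.8, matching the slider output in the question.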
After much searching and fighting with arguments, I found an approach inspired by this answer:
def fun(series, w1=10, w2=2):
    col1 = mtcars.loc[series.index, 'cyl']
    col2 = mtcars.loc[series.index, 'mpg']
    return np.mean(w1 * col1 + w2 * col2)

mtcars['roll'] = mtcars.rolling(3, center=True, min_periods=0)['mpg'] \
    .apply(fun, raw=False)
mtcars
mtcars
mpg cyl disp hp ... am gear carb roll
Mazda RX4 21.0 6 160.0 110 ... 1 4 4 102.000000
Mazda RX4 Wag 21.0 6 160.0 110 ... 1 4 4 96.533333
Datsun 710 22.8 4 108.0 93 ... 1 4 1 96.800000
Hornet 4 Drive 21.4 6 258.0 110 ... 0 3 1 101.933333
Hornet Sportabout 18.7 8 360.0 175 ... 0 3 2 105.466667
Valiant 18.1 6 225.0 105 ... 0 3 1 107.400000
Duster 360 14.3 8 360.0 245 ... 0 3 4 97.866667
Merc 240D 24.4 4 146.7 62 ... 0 4 2 94.333333
Merc 230 22.8 4 140.8 95 ... 0 4 2 90.933333
Merc 280 19.2 6 167.6 123 ... 0 4 4 93.200000
Merc 280C 17.8 6 167.6 123 ... 0 4 4 102.266667
Merc 450SE 16.4 8 275.8 180 ... 0 3 3 107.666667
Merc 450SL 17.3 8 275.8 180 ... 0 3 3 112.600000
Merc 450SLC 15.2 8 275.8 180 ... 0 3 3 108.600000
Cadillac Fleetwood 10.4 8 472.0 205 ... 0 3 4 104.000000
Lincoln Continental 10.4 8 460.0 215 ... 0 3 4 103.666667
Chrysler Imperial 14.7 8 440.0 230 ... 0 3 4 105.000000
Fiat 128 32.4 4 78.7 66 ... 1 4 1 105.000000
Honda Civic 30.4 4 75.7 52 ... 1 4 2 104.466667
Toyota Corolla 33.9 4 71.1 65 ... 1 4 1 97.200000
Toyota Corona 21.5 4 120.1 97 ... 0 3 1 100.600000
Dodge Challenger 15.5 8 318.0 150 ... 0 3 2 101.466667
AMC Javelin 15.2 8 304.0 150 ... 0 3 2 109.333333
Camaro Z28 13.3 8 350.0 245 ... 0 3 4 111.800000
Pontiac Firebird 19.2 8 400.0 175 ... 0 3 2 106.533333
Fiat X1-9 27.3 4 79.0 66 ... 1 4 1 101.666667
Porsche 914-2 26.0 4 120.3 91 ... 1 5 2 95.800000
Lotus Europa 30.4 4 95.1 113 ... 1 5 2 101.466667
Ford Pantera L 15.8 8 351.0 264 ... 1 5 4 103.933333
Ferrari Dino 19.7 6 145.0 175 ... 1 5 6 107.000000
Maserati Bora 15.0 8 301.0 335 ... 1 5 8 97.400000
Volvo 142E 21.4 4 121.0 109 ... 1 4 2 96.400000
[32 rows x 12 columns]
There are several things needed for this to perform as I wanted. raw=False gives fun access to the Series, if only to call .index (per the docs: "False: passes each row or column as a Series to the function"). This is dumb and inefficient, but it works. I needed my window centered, so center=True. I also needed the NaN values filled with whatever info was available, so I set min_periods=0.
There are a few things that I don't like about this approach:
It seems to me that calling mtcars from outside the fun scope is potentially dangerous and might cause bugs.
Multiple indexing with .loc line by line does not scale well and probably has worse performance (doing the rolling more times than needed)
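One variation that avoids the repeated label-based .loc lookups is to roll over integer row positions and slice the frame with .iloc inside the callback. This is a sketch on toy data (the five hardcoded rows are an assumption, not the full dataset), not a definitive fix for the scaling concern:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"cyl": [6, 6, 4, 6, 8],
                   "mpg": [21.0, 21.0, 22.8, 21.4, 18.7]})

def fun(col1, col2, w1=10, w2=2):
    return np.mean(w1 * col1 + w2 * col2)

# Roll over row positions rather than a data column, then slice the
# frame positionally with .iloc inside the callback.
pos = pd.Series(np.arange(len(df)))
df["roll"] = pos.rolling(3, center=True, min_periods=0).apply(
    lambda w: fun(df["cyl"].iloc[w.astype(int)],
                  df["mpg"].iloc[w.astype(int)]),
    raw=False,
).to_numpy()
```

This still closes over df, but it works even when the frame's index is non-numeric or unsorted, since only positions are rolled.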

How do I get all the tables from a website using pandas

I am trying to get 3 tables from a particular website, but only the first two are showing up. I have even tried getting the data using BeautifulSoup, but the third seems to be hidden somehow. Is there something I am missing?
url = "https://fbref.com/en/comps/9/keepersadv/Premier-League-Stats"
html = pd.read_html(url, header=1)
print(html[0])
print(html[1])
print(html[2]) # This raises an error that the table does not exist
The first two tables are the squad tables. The table not showing up is the individual player table. This also happens with similar pages from the same site.
You could use Selenium as suggested, but I think it is a bit overkill. The table is available in the static HTML, just within the comments, so you need to pull the comments out with BeautifulSoup to get those tables.
To get all the tables:
import pandas as pd
import requests
from bs4 import BeautifulSoup, Comment
url = 'https://fbref.com/en/comps/9/keepersadv/Premier-League-Stats'
response = requests.get(url)
tables = pd.read_html(response.text, header=1)
# Get the tables within the Comments
soup = BeautifulSoup(response.text, 'html.parser')
comments = soup.find_all(string=lambda text: isinstance(text, Comment))
for each in comments:
    if 'table' in str(each):
        try:
            table = pd.read_html(str(each), header=1)[0]
            table = table[table['Rk'].ne('Rk')].reset_index(drop=True)
            tables.append(table)
        except Exception:
            continue
Output:
for table in tables:
print(table)
Squad # Pl 90s GA PKA ... Stp Stp% #OPA #OPA/90 AvgDist
0 Arsenal 2 12.0 17 0 ... 10 8.8 6 0.50 14.6
1 Aston Villa 2 12.0 20 0 ... 6 6.8 13 1.08 16.2
2 Brentford 2 12.0 17 1 ... 10 9.9 18 1.50 15.6
3 Brighton 2 12.0 14 2 ... 17 16.2 13 1.08 15.3
4 Burnley 1 12.0 20 0 ... 14 11.7 17 1.42 16.6
5 Chelsea 2 12.0 4 2 ... 8 8.5 5 0.42 14.0
6 Crystal Palace 1 12.0 17 0 ... 7 7.5 6 0.50 13.5
7 Everton 2 12.0 19 0 ... 8 7.4 7 0.58 13.7
8 Leeds United 1 12.0 20 1 ... 8 12.5 15 1.25 16.3
9 Leicester City 1 12.0 21 2 ... 9 8.4 7 0.58 13.0
10 Liverpool 2 12.0 11 0 ... 9 9.7 16 1.33 17.0
11 Manchester City 2 12.0 6 1 ... 5 8.1 16 1.33 17.5
12 Manchester Utd 1 12.0 21 0 ... 4 4.4 2 0.17 13.3
13 Newcastle Utd 2 12.0 27 4 ... 10 9.8 4 0.33 13.9
14 Norwich City 1 12.0 27 2 ... 6 5.1 5 0.42 12.4
15 Southampton 1 12.0 14 0 ... 16 13.9 2 0.17 12.9
16 Tottenham 1 12.0 17 1 ... 3 2.7 5 0.42 14.1
17 Watford 2 12.0 20 1 ... 6 5.5 9 0.75 15.4
18 West Ham 1 12.0 14 0 ... 6 5.3 1 0.08 11.9
19 Wolves 1 12.0 12 3 ... 9 10.0 10 0.83 15.5
[20 rows x 28 columns]
Squad # Pl 90s GA PKA ... Stp Stp% #OPA #OPA/90 AvgDist
0 vs Arsenal 2 12.0 13 0 ... 4 5.9 11 0.92 15.5
1 vs Aston Villa 2 12.0 16 2 ... 11 8.0 7 0.58 14.8
2 vs Brentford 2 12.0 16 1 ... 16 14.0 9 0.75 15.7
3 vs Brighton 2 12.0 12 3 ... 11 12.5 8 0.67 15.9
4 vs Burnley 1 12.0 14 0 ... 16 10.7 12 1.00 15.1
5 vs Chelsea 2 12.0 30 2 ... 10 11.1 11 0.92 14.2
6 vs Crystal Palace 1 12.0 18 2 ... 7 7.2 9 0.75 14.4
7 vs Everton 2 12.0 16 3 ... 7 7.6 7 0.58 13.8
8 vs Leeds United 1 12.0 12 1 ... 8 7.3 5 0.42 14.2
9 vs Leicester City 1 12.0 16 0 ... 2 3.3 7 0.58 14.3
10 vs Liverpool 2 12.0 35 1 ... 12 9.9 14 1.17 13.7
11 vs Manchester City 2 12.0 25 0 ... 8 6.7 4 0.33 13.1
12 vs Manchester Utd 1 12.0 20 0 ... 7 7.8 7 0.58 14.7
13 vs Newcastle Utd 2 12.0 15 0 ... 8 8.0 8 0.67 15.3
14 vs Norwich City 1 12.0 7 2 ... 5 5.7 16 1.33 17.3
15 vs Southampton 1 12.0 11 2 ... 4 3.7 9 0.75 14.0
16 vs Tottenham 1 12.0 11 1 ... 9 12.2 9 0.75 16.0
17 vs Watford 2 12.0 16 0 ... 8 8.2 9 0.75 15.3
18 vs West Ham 1 12.0 23 0 ... 13 10.5 6 0.50 13.8
19 vs Wolves 1 12.0 12 0 ... 5 6.8 9 0.75 15.3
[20 rows x 28 columns]
Rk Player Nation Pos ... #OPA #OPA/90 AvgDist Matches
0 1 Alisson br BRA GK ... 15 1.36 17.1 Matches
1 2 Kepa Arrizabalaga es ESP GK ... 1 1.00 18.8 Matches
2 3 Daniel Bachmann at AUT GK ... 1 0.25 12.2 Matches
3 4 Asmir Begović ba BIH GK ... 0 0.00 15.0 Matches
4 5 Karl Darlow eng ENG GK ... 4 0.50 14.9 Matches
5 6 Ederson br BRA GK ... 14 1.27 17.5 Matches
6 7 Łukasz Fabiański pl POL GK ... 1 0.08 11.9 Matches
7 8 Álvaro Fernández es ESP GK ... 5 1.67 15.3 Matches
8 9 Ben Foster eng ENG GK ... 8 1.00 16.8 Matches
9 10 David de Gea es ESP GK ... 2 0.17 13.3 Matches
10 11 Vicente Guaita es ESP GK ... 6 0.50 13.5 Matches
11 12 Caoimhín Kelleher ie IRL GK ... 1 1.00 14.6 Matches
12 13 Tim Krul nl NED GK ... 5 0.42 12.4 Matches
13 14 Bernd Leno de GER GK ... 1 0.33 13.1 Matches
14 15 Hugo Lloris fr FRA GK ... 5 0.42 14.1 Matches
15 16 Emiliano Martínez ar ARG GK ... 12 1.09 16.4 Matches
16 17 Alex McCarthy eng ENG GK ... 2 0.17 12.9 Matches
17 18 Edouard Mendy sn SEN GK ... 4 0.36 13.3 Matches
18 19 Illan Meslier fr FRA GK ... 15 1.25 16.3 Matches
19 20 Jordan Pickford eng ENG GK ... 7 0.64 13.6 Matches
20 21 Nick Pope eng ENG GK ... 17 1.42 16.6 Matches
21 22 Aaron Ramsdale eng ENG GK ... 5 0.56 14.9 Matches
22 23 David Raya es ESP GK ... 13 1.44 15.7 Matches
23 24 José Sá pt POR GK ... 10 0.83 15.5 Matches
24 25 Robert Sánchez es ESP GK ... 13 1.18 15.4 Matches
25 26 Kasper Schmeichel dk DEN GK ... 7 0.58 13.0 Matches
26 27 Jason Steele eng ENG GK ... 0 0.00 13.0 Matches
27 28 Jed Steer eng ENG GK ... 1 1.00 14.3 Matches
28 29 Zack Steffen us USA GK ... 2 2.00 17.8 Matches
29 30 Freddie Woodman eng ENG GK ... 0 0.00 11.6 Matches
[30 rows x 34 columns]
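A note on the table['Rk'].ne('Rk') line: these sites repeat the header row inside the table body, and read_html keeps the repeats as ordinary data rows, which is why they are filtered out by value. A toy sketch of just that filter (the frame below is made up):

```python
import pandas as pd

# Toy frame imitating a table whose header row repeats in the body.
table = pd.DataFrame({"Rk": ["1", "2", "Rk", "3"],
                      "Player": ["Alisson", "Ederson", "Player", "Pope"]})
clean = table[table["Rk"].ne("Rk")].reset_index(drop=True)
print(len(clean))  # 3
```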

I can't figure out why my web scraping code isn't working

I am very new to coding and I am trying to build a web scraper for Excel so that I can transfer the data to Google Sheets. Unfortunately, the code that I have written works for other people, but not for me.
This is the code I have written:
import requests
from bs4 import BeautifulSoup, Comment
import pandas as pd
URL = 'https://www.hockey-reference.com/leagues/NHL_2021.html'
csv_name = 'nhl_season_stats.csv'
def get_nhl_stats(URL):
    headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
    pageTree = requests.get(URL, headers=headers)
    pageSoup = BeautifulSoup(pageTree.content, 'html.parser')
    comments = pageSoup.find_all(string=lambda text: isinstance(text, Comment))
    tables = []
    for each in comments:
        if 'table' in each:
            try:
                tables.append(pd.read_html(each, header=1)[0])
            except:
                continue
    df = tables[0]
    df = df.rename(columns={'Unnamed: 1': 'Team'})
    df.to_csv(csv_name, index=False)
    print(df)

get_nhl_stats(URL)
After running it, I receive this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 13, in get_nhl_stats
IndexError: list index out of range
Sorry for my bad jargon, as I am very new and very confused, but any help would be greatly appreciated!
This code works for me; maybe the problem is in the declaration of the Comment class, or the server is not giving you the requested values:
import requests
from bs4 import BeautifulSoup
import pandas as pd
URL = 'https://www.hockey-reference.com/leagues/NHL_2021.html'
csv_name = 'nhl_season_stats.csv'
def get_nhl_stats(URL):
    headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
    pageTree = requests.get(URL, headers=headers)
    pageSoup = BeautifulSoup(pageTree.content, 'html.parser')
    comments = pageSoup.find_all(string=lambda text: isinstance(text, str))
    tables = []
    for each in comments:
        if 'table' in each:
            try:
                tables.append(pd.read_html(each, header=1)[0])
            except:
                continue
    df = tables[0]
    df = df.rename(columns={'Unnamed: 1': 'Team'})
    df.to_csv(csv_name, index=False)
    print(df)

get_nhl_stats(URL)
output:
Rk Team AvAge GP W L OL PTS PTS% GF GA SOW SOL SRS SOS TG/G EVGF EVGA PP PPO PP% PPA PPOA PK% SH SHA PIM/G oPIM/G S S% SA SV% SO
0 1.0 Toronto Maple Leafs 29.0 6 4 2 0 8 0.667 19 17 0.0 0.0 0.33 -0.01 6.00 11 12 8 18 44.44 4 22 81.82 0 1 10.5 7.5 190 10.0 157 0.892 0
1 2.0 Montreal Canadiens 28.6 5 3 0 2 8 0.800 24 15 0.0 1.0 0.77 -0.83 7.80 14 8 6 20 30.00 6 25 76.00 4 1 11.4 10.6 180 13.3 140 0.893 0
2 3.0 Vegas Golden Knights 28.9 5 4 1 0 8 0.800 18 12 0.0 0.0 1.12 -0.08 6.00 15 8 2 18 11.11 3 18 83.33 1 1 7.2 7.2 150 12.0 125 0.904 0
3 4.0 Minnesota Wild 29.1 5 4 1 0 8 0.800 15 10 0.0 0.0 0.86 -0.14 5.00 13 9 1 23 4.35 1 16 93.75 1 0 7.6 10.4 166 9.0 147 0.932 0
4 5.0 Washington Capitals 30.1 5 3 0 2 8 0.800 18 16 1.0 1.0 0.10 -0.30 6.80 16 12 2 9 22.22 3 18 83.33 0 1 8.6 5.0 130 13.8 141 0.887 0
5 6.0 Philadelphia Flyers 27.0 5 3 1 1 7 0.700 19 15 0.0 1.0 0.36 -0.24 6.80 14 10 5 17 29.41 5 18 72.22 0 0 7.2 6.8 125 15.2 187 0.920 1
6 7.0 Colorado Avalanche 26.9 5 3 2 0 6 0.600 17 12 0.0 0.0 0.47 -0.53 5.80 7 9 10 25 40.00 3 19 84.21 0 0 8.0 10.4 147 11.6 143 0.916 1
7 8.0 Winnipeg Jets 27.9 4 3 1 0 6 0.750 13 10 0.0 0.0 1.10 0.35 5.75 11 6 2 20 10.00 4 12 66.67 0 0 10.3 14.3 119 10.9 134 0.925 0
8 9.0 New York Islanders 28.9 4 3 1 0 6 0.750 9 6 0.0 0.0 0.61 -0.14 3.75 5 5 4 20 20.00 1 15 93.33 0 0 11.5 11.0 108 8.3 114 0.947 2
9 10.0 Tampa Bay Lightning 27.7 3 3 0 0 6 1.000 13 5 0.0 0.0 1.70 -0.97 6.00 11 2 2 8 25.00 3 11 72.73 0 0 9.0 7.0 107 12.1 85 0.941 0
10 11.0 Pittsburgh Penguins 28.6 5 3 2 0 6 0.600 16 21 2.0 0.0 -0.43 0.17 7.40 10 16 5 18 27.78 5 19 73.68 1 0 7.6 7.2 152 10.5 130 0.838 0
11 12.0 New Jersey Devils 26.2 4 2 1 1 5 0.625 9 10 0.0 1.0 -0.35 0.15 4.75 8 3 1 11 9.09 6 16 62.50 0 1 9.8 7.3 112 8.0 150 0.933 0
12 13.0 St. Louis Blues 28.3 4 2 1 1 5 0.625 10 14 0.0 1.0 -1.66 -0.41 6.00 10 6 0 14 0.00 8 21 61.90 0 0 11.0 7.5 109 9.2 129 0.891 0
13 14.0 Boston Bruins 28.8 4 2 1 1 5 0.625 7 9 2.0 0.0 0.07 0.07 4.00 3 7 3 13 23.08 2 18 88.89 1 0 11.3 8.8 135 5.2 96 0.906 0
14 15.0 Arizona Coyotes 28.4 5 2 2 1 5 0.500 17 17 0.0 1.0 -0.04 0.16 6.80 11 11 5 22 22.73 5 24 79.17 1 1 10.4 9.6 144 11.8 157 0.892 0
15 16.0 Calgary Flames 28.1 3 2 0 1 5 0.833 11 6 0.0 0.0 1.14 -0.52 5.67 5 4 6 16 37.50 1 12 91.67 0 1 8.7 11.3 93 11.8 93 0.935 1
16 17.0 Edmonton Oilers 27.9 6 2 4 0 4 0.333 15 20 0.0 0.0 -0.91 -0.08 5.83 10 14 3 23 13.04 4 18 77.78 2 2 7.7 9.3 192 7.8 200 0.900 0
17 18.0 Vancouver Canucks 27.3 6 2 4 0 4 0.333 17 28 1.0 0.0 -1.34 0.33 7.50 12 17 4 26 15.38 9 31 70.97 1 2 13.3 10.7 179 9.5 222 0.874 0
18 19.0 Anaheim Ducks 28.6 5 1 2 2 4 0.400 8 13 0.0 0.0 -0.10 0.90 4.20 8 10 0 12 0.00 2 15 86.67 0 1 6.4 5.2 133 6.0 160 0.919 1
19 20.0 Columbus Blue Jackets 26.6 5 1 2 2 4 0.400 10 16 0.0 0.0 -1.19 0.01 5.20 9 15 1 11 9.09 1 10 90.00 0 0 9.0 9.4 152 6.6 169 0.905 0
20 21.0 Los Angeles Kings 28.3 4 1 1 2 4 0.500 12 13 0.0 0.0 0.43 0.68 6.25 8 10 4 17 23.53 3 21 85.71 0 0 11.0 9.0 119 10.1 121 0.893 0
21 22.0 Detroit Red Wings 29.3 5 2 3 0 4 0.400 10 14 0.0 0.0 -1.54 -0.74 4.80 9 9 1 12 8.33 4 16 75.00 0 1 11.4 9.8 130 7.7 155 0.910 0
22 23.0 San Jose Sharks 29.4 5 2 3 0 4 0.400 12 18 2.0 0.0 -1.32 -0.52 6.00 7 16 5 21 23.81 2 18 88.89 0 0 8.4 9.6 162 7.4 148 0.878 0
23 24.0 Carolina Hurricanes 27.0 3 2 1 0 4 0.667 9 6 0.0 0.0 0.26 -0.74 5.00 6 5 3 12 25.00 1 9 88.89 0 0 7.7 9.7 98 9.2 68 0.912 1
24 25.0 Florida Panthers 27.8 2 2 0 0 4 1.000 10 6 0.0 0.0 1.29 -0.71 8.00 7 3 3 8 37.50 3 5 40.00 0 0 5.0 8.0 66 15.2 66 0.909 0
25 26.0 Nashville Predators 28.7 4 2 2 0 4 0.500 10 14 0.0 0.0 0.01 1.01 6.00 9 7 1 16 6.25 6 16 62.50 0 1 8.0 8.0 135 7.4 126 0.889 0
26 27.0 Buffalo Sabres 27.2 5 1 3 1 3 0.300 14 15 0.0 1.0 -0.18 0.22 5.80 11 14 3 17 17.65 1 6 83.33 0 0 3.8 8.2 161 8.7 133 0.887 0
27 28.0 New York Rangers 25.6 4 1 2 1 3 0.375 11 11 0.0 1.0 -0.15 0.11 5.50 7 7 4 21 19.05 4 16 75.00 0 0 8.5 14.0 140 7.9 112 0.902 1
28 29.0 Chicago Blackhawks 26.9 5 1 3 1 3 0.300 13 21 0.0 0.0 -0.43 1.17 6.80 5 16 7 17 41.18 5 20 75.00 1 0 8.0 6.8 154 8.4 167 0.874 0
29 30.0 Ottawa Senators 27.0 4 1 2 1 3 0.375 11 14 0.0 0.0 -0.04 0.71 6.25 8 10 3 18 16.67 4 21 80.95 0 0 14.3 15.3 113 9.7 120 0.883 0
30 31.0 Dallas Stars 28.8 1 1 0 0 2 1.000 7 0 0.0 0.0 7.30 0.30 7.00 1 0 5 8 62.50 0 5 100.00 1 0 10.0 16.0 28 25.0 34 1.000 1
31 NaN League Average 28.0 4 2 2 1 5 0.574 13 13 NaN NaN NaN NaN 5.94 9 9 4 16 21.33 4 16 78.67 0 0 8.0 8.0 133 9.8 133 0.902 0

How to sum certain values in a pandas column DataFrame in a specific date range

I have a large DataFrame that looks something like this:
df =
UPC Unit_Sales Price Price_Change Date
0 22 15 1.99 NaN 2017-10-10
1 22 7 2.19 True 2017-10-12
2 22 6 2.19 NaN 2017-10-13
3 22 7 1.99 True 2017-10-16
4 22 4 1.99 NaN 2017-10-17
5 35 15 3.99 NaN 2017-10-09
6 35 17 3.99 NaN 2017-10-11
7 35 5 4.29 True 2017-10-13
8 35 8 4.29 NaN 2017-10-15
9 35 2 4.29 NaN 2017-10-15
Basically I am trying to record how the sales of a product (UPC) reacted once the price changed, for the following 7 days. I want to create a new column ['Reaction'] which records the sum of the unit sales from the day of the price change through 7 days forward. Keep in mind, sometimes a UPC has more than 2 price changes, so I want a separate sum for each price change.
So I want to see this:
UPC Unit_Sales Price Price_Change Date Reaction
0 22 15 1.99 NaN 2017-10-10 NaN
1 22 7 2.19 True 2017-10-12 13
2 22 6 2.19 NaN 2017-10-13 NaN
3 22 7 1.99 True 2017-10-16 11
4 22 4 1.99 NaN 2017-10-19 NaN
5 35 15 3.99 NaN 2017-10-09 NaN
6 35 17 3.99 NaN 2017-10-11 NaN
7 35 5 4.29 True 2017-10-13 15
8 35 8 4.29 NaN 2017-10-15 NaN
9 35 2 4.29 NaN 2017-10-18 NaN
What is difficult is how the dates are set up in my data. Sometimes (like for UPC 35) the dates don't range past 7 days. So I would want it to default to the next nearest date, or however many dates there are (if there are less than 7 days).
Here's what I've tried:
I set the date to a datetime and I'm thinking of counting days by .days method.
This is how I'm thinking of setting a code up (rough draft):
x = df.loc[df['Price_Change'] == 'True']
for x in df:
    df['Reaction'] = sum(df.Unit_Sales[1day : 8days])
Is there an easier way to do this, maybe without a for loop?
You just need ffill with groupby
df.loc[df.Price_Change==True,'Reaction']=df.groupby('UPC').apply(lambda x : (x['Price_Change'].ffill()*x['Unit_Sales']).sum()).values
df
Out[807]:
UPC Unit_Sales Price Price_Change Date Reaction
0 22 15 1.99 NaN 2017-10-10 NaN
1 22 7 2.19 True 2017-10-12 24.0
2 22 6 2.19 NaN 2017-10-13 NaN
3 22 7 2.19 NaN 2017-10-16 NaN
4 22 4 2.19 NaN 2017-10-17 NaN
5 35 15 3.99 NaN 2017-10-09 NaN
6 35 17 3.99 NaN 2017-10-11 NaN
7 35 5 4.29 True 2017-10-13 15.0
8 35 8 4.29 NaN 2017-10-15 NaN
9 35 2 4.29 NaN 2017-10-15 NaN
Update
df['New']=df.groupby('UPC').apply(lambda x : x['Price_Change']==True).cumsum().values
v1=df.groupby(['UPC','New']).apply(lambda x : (x['Price_Change'].ffill()*x['Unit_Sales']).sum())
df=df.merge(v1.reset_index())
df[0]=df[0].mask(df['Price_Change']!=True)
df
Out[927]:
UPC Unit_Sales Price Price_Change Date New 0
0 22 15 1.99 NaN 2017-10-10 0 NaN
1 22 7 2.19 True 2017-10-12 1 13.0
2 22 6 2.19 NaN 2017-10-13 1 NaN
3 22 7 1.99 True 2017-10-16 2 11.0
4 22 4 1.99 NaN 2017-10-17 2 NaN
5 35 15 3.99 NaN 2017-10-09 2 NaN
6 35 17 3.99 NaN 2017-10-11 2 NaN
7 35 5 4.29 True 2017-10-13 3 15.0
8 35 8 4.29 NaN 2017-10-15 3 NaN
9 35 2 4.29 NaN 2017-10-15 3 NaN
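For completeness, here is a hedged sketch that works directly from the dates: it sums sales from each change date up to 7 days later, but stops early at the next price change, which is what the expected output above implies. The reaction helper is a name I made up for illustration, not part of the accepted solution:

```python
import pandas as pd

df = pd.DataFrame({
    "UPC": [22, 22, 22, 22, 22, 35, 35, 35, 35, 35],
    "Unit_Sales": [15, 7, 6, 7, 4, 15, 17, 5, 8, 2],
    "Price_Change": [None, True, None, True, None, None, None, True, None, None],
    "Date": pd.to_datetime([
        "2017-10-10", "2017-10-12", "2017-10-13", "2017-10-16", "2017-10-17",
        "2017-10-09", "2017-10-11", "2017-10-13", "2017-10-15", "2017-10-15"]),
})

def reaction(i, frame, days=7):
    # Sum sales for this UPC from the change date up to `days` later,
    # stopping early if another price change falls inside the window.
    row = frame.loc[i]
    same = frame[frame["UPC"] == row["UPC"]]
    end = row["Date"] + pd.Timedelta(days=days)
    later = same[(same.index > i) & same["Price_Change"].eq(True)]
    if not later.empty:
        end = min(end, later["Date"].iloc[0])
    window = (same["Date"] >= row["Date"]) & (same["Date"] < end)
    return same.loc[window, "Unit_Sales"].sum()

changes = df.index[df["Price_Change"].eq(True)]
df.loc[changes, "Reaction"] = [reaction(i, df) for i in changes]
```

On the sample data this yields 13, 11, and 15 for the three change rows, matching the desired output in the question.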

Dropping multiple columns in pandas at once

I have a data set consisting of 135 columns. I am trying to drop the columns which have more than 60% of their data empty; there are approximately 40 such columns. So I wrote a function to drop these empty columns, but I am getting a "Not contained in axis" error. Could someone help me solve this? Or is there any other way to drop these 40 columns at once?
My function:
list_drop = df.isnull().sum() / len(df)
def empty(df):
    if list_drop > 0.5:
        df.drop(list_drop, axis=1, inplace=True)
    return df
The other method I tried:
df.drop(df.count()/len(df) < 0.5, axis=1, inplace=True)
You could use isnull + sum and then use the mask to filter df.columns.
m = df.isnull().sum(0) / len(df) < 0.6
df = df[df.columns[m]]
Demo
df
A B C
0 29.0 NaN 26.6
1 NaN NaN 23.3
2 23.0 94.0 28.1
3 35.0 168.0 43.1
4 NaN NaN 25.6
5 32.0 88.0 31.0
6 NaN NaN 35.3
7 45.0 543.0 30.5
8 NaN NaN NaN
9 NaN NaN 37.6
10 NaN NaN 38.0
11 NaN NaN 27.1
12 23.0 846.0 30.1
13 19.0 175.0 25.8
14 NaN NaN 30.0
15 47.0 230.0 45.8
16 NaN NaN 29.6
17 38.0 83.0 43.3
18 30.0 96.0 34.6
m = df.isnull().sum(0) / len(df) < 0.3 # 0.3 as an example
m
A False
B False
C True
dtype: bool
df[df.columns[m]]
C
0 26.6
1 23.3
2 28.1
3 43.1
4 25.6
5 31.0
6 35.3
7 30.5
8 NaN
9 37.6
10 38.0
11 27.1
12 30.1
13 25.8
14 30.0
15 45.8
16 29.6
17 43.3
18 34.6
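An alternative worth knowing: DataFrame.dropna can do this directly through its thresh parameter, which is the minimum number of non-null values a column must have to survive. A small sketch on made-up data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "A": [1, np.nan, np.nan, np.nan, np.nan],  # 80% missing
    "B": [1, 2, np.nan, np.nan, np.nan],       # 60% missing
    "C": [1, 2, 3, 4, np.nan],                 # 20% missing
})

# Keep columns with at least 40% non-null values, i.e. drop anything
# more than 60% empty. Note thresh counts non-null values, not nulls.
out = df.dropna(axis=1, thresh=int(len(df) * 0.4))
print(list(out.columns))  # ['B', 'C']
```

This avoids building the boolean mask by hand, though the mask approach gives you more control over strict vs. non-strict thresholds.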
