invalid literal for float() - python
I'm new to Python, so maybe there is something really basic here that I'm missing, but I can't figure it out. For my work I'm trying to read a txt file and apply KNN to it.
The file content is as follows; it has three columns, with the third one as the class, and the separator is a space.
0.85 17.45 2
0.75 15.6 2
3.3 15.45 2
5.25 14.2 2
4.9 15.65 2
5.35 15.85 2
5.1 17.9 2
4.6 18.25 2
4.05 18.75 2
3.4 19.7 2
2.9 21.15 2
3.1 21.85 2
3.9 21.85 2
4.4 20.05 2
7.2 14.5 2
7.65 16.5 2
7.1 18.65 2
7.05 19.9 2
5.85 20.55 2
5.5 21.8 2
6.55 21.8 2
6.05 22.3 2
5.2 23.4 2
4.55 23.9 2
5.1 24.4 2
8.1 26.35 2
10.15 27.7 2
9.75 25.5 2
9.2 21.1 2
11.2 22.8 2
12.6 23.1 2
13.25 23.5 2
11.65 26.85 2
12.45 27.55 2
13.3 27.85 2
13.7 27.75 2
14.15 26.9 2
14.05 26.55 2
15.15 24.2 2
15.2 24.75 2
12.2 20.9 2
12.15 21.45 2
12.75 22.05 2
13.15 21.85 2
13.75 22 2
13.95 22.7 2
14.4 22.65 2
14.2 22.15 2
14.1 21.75 2
14.05 21.4 2
17.2 24.8 2
17.7 24.85 2
17.55 25.2 2
17 26.85 2
16.55 27.1 2
19.15 25.35 2
18.8 24.7 2
21.4 25.85 2
15.8 21.35 2
16.6 21.15 2
17.45 20.75 2
18 20.95 2
18.25 20.2 2
18 22.3 2
18.6 22.25 2
19.2 21.95 2
19.45 22.1 2
20.1 21.6 2
20.1 20.9 2
19.9 20.35 2
19.45 19.05 2
19.25 18.7 2
21.3 22.3 2
22.9 23.65 2
23.15 24.1 2
24.25 22.85 2
22.05 20.25 2
20.95 18.25 2
21.65 17.25 2
21.55 16.7 2
21.6 16.3 2
21.5 15.5 2
22.4 16.5 2
22.25 18.1 2
23.15 19.05 2
23.5 19.8 2
23.75 20.2 2
25.15 19.8 2
25.5 19.45 2
23 18 2
23.95 17.75 2
25.9 17.55 2
27.65 15.65 2
23.1 14.6 2
23.5 15.2 2
24.05 14.9 2
24.5 14.7 2
14.15 17.35 1
14.3 16.8 1
14.3 15.75 1
14.75 15.1 1
15.35 15.5 1
15.95 16.45 1
16.5 17.05 1
17.35 17.05 1
17.15 16.3 1
16.65 16.1 1
16.5 15.15 1
16.25 14.95 1
16 14.25 1
15.9 13.2 1
15.15 12.05 1
15.2 11.7 1
17 15.65 1
16.9 15.35 1
17.35 15.45 1
17.15 15.1 1
17.3 14.9 1
17.7 15 1
17 14.6 1
16.85 14.3 1
16.6 14.05 1
17.1 14 1
17.45 14.15 1
17.8 14.2 1
17.6 13.85 1
17.2 13.5 1
17.25 13.15 1
17.1 12.75 1
16.95 12.35 1
16.5 12.2 1
16.25 12.5 1
16.05 11.9 1
16.65 10.9 1
16.7 11.4 1
16.95 11.25 1
17.3 11.2 1
18.05 11.9 1
18.6 12.5 1
18.9 12.05 1
18.7 11.25 1
17.95 10.9 1
18.4 10.05 1
17.45 10.4 1
17.6 10.15 1
17.7 9.85 1
17.3 9.7 1
16.95 9.7 1
16.75 9.65 1
19.8 9.95 1
19.1 9.55 1
17.5 8.3 1
17.55 8.1 1
17.85 7.55 1
18.2 8.35 1
19.3 9.1 1
19.4 8.85 1
19.05 8.85 1
18.9 8.5 1
18.6 7.85 1
18.7 7.65 1
19.35 8.2 1
19.95 8.3 1
20 8.9 1
20.3 8.9 1
20.55 8.8 1
18.35 6.95 1
18.65 6.9 1
19.3 7 1
19.1 6.85 1
19.15 6.65 1
21.2 8.8 1
21.4 8.8 1
21.1 8 1
20.4 7 1
20.5 6.35 1
20.1 6.05 1
20.45 5.15 1
20.95 5.55 1
20.95 6.2 1
20.9 6.6 1
21.05 7 1
21.85 8.5 1
21.9 8.2 1
22.3 7.7 1
21.85 6.65 1
21.3 5.05 1
22.6 6.7 1
22.5 6.15 1
23.65 7.2 1
24.1 7 1
21.95 4.8 1
22.15 5.05 1
22.45 5.3 1
22.45 4.9 1
22.7 5.5 1
23 5.6 1
23.2 5.3 1
23.45 5.95 1
23.75 5.95 1
24.45 6.15 1
24.6 6.45 1
25.2 6.55 1
26.05 6.4 1
25.3 5.75 1
24.35 5.35 1
23.3 4.9 1
22.95 4.75 1
22.4 4.55 1
22.8 4.1 1
22.9 4 1
23.25 3.85 1
23.45 3.6 1
23.55 4.2 1
23.8 3.65 1
23.8 4.75 1
24.2 4 1
24.55 4 1
24.7 3.85 1
24.7 4.3 1
24.9 4.75 1
26.4 5.7 1
27.15 5.95 1
27.3 5.45 1
27.5 5.45 1
27.55 5.1 1
26.85 4.95 1
26.6 4.9 1
26.85 4.4 1
26.2 4.4 1
26 4.25 1
25.15 4.1 1
25.6 3.9 1
25.85 3.6 1
24.95 3.35 1
25.1 3.25 1
25.45 3.15 1
26.85 2.95 1
27.15 3.15 1
27.2 3 1
27.95 3.25 1
27.95 3.5 1
28.8 4.05 1
28.8 4.7 1
28.75 5.45 1
28.6 5.75 1
29.25 6.3 1
30 6.55 1
30.6 3.4 1
30.05 3.45 1
29.75 3.45 1
29.2 4 1
29.45 4.05 1
29.05 4.55 1
29.4 4.85 1
29.5 4.7 1
29.9 4.45 1
30.75 4.45 1
30.4 4.05 1
30.8 3.95 1
31.05 3.95 1
30.9 5.2 1
30.65 5.85 1
30.7 6.15 1
31.5 6.25 1
31.65 6.55 1
32 7 1
32.5 7.95 1
33.35 7.45 1
32.6 6.95 1
32.65 6.6 1
32.55 6.35 1
32.35 6.1 1
32.55 5.8 1
32.2 5.05 1
32.35 4.25 1
32.9 4.15 1
32.7 4.6 1
32.75 4.85 1
34.1 4.6 1
34.1 5 1
33.6 5.25 1
33.35 5.65 1
33.75 5.95 1
33.4 6.2 1
34.45 5.8 1
34.65 5.65 1
34.65 6.25 1
35.25 6.25 1
34.35 6.8 1
34.1 7.15 1
34.45 7.3 1
34.7 7.2 1
34.85 7 1
34.35 7.75 1
34.55 7.85 1
35.05 8 1
35.5 8.05 1
35.8 7.1 1
36.6 6.7 1
36.75 7.25 1
36.5 7.4 1
35.95 7.9 1
36.1 8.1 1
36.15 8.4 1
37.6 7.35 1
37.9 7.65 1
29.15 4.4 1
34.9 9 1
35.3 9.4 1
35.9 9.35 1
36 9.65 1
35.75 10 1
36.7 9.15 1
36.6 9.8 1
36.9 9.75 1
37.25 10.15 1
36.4 10.15 1
36.3 10.7 1
36.75 10.85 1
38.15 9.7 1
38.4 9.45 1
38.35 10.5 1
37.7 10.8 1
37.45 11.15 1
37.35 11.4 1
37 11.75 1
36.8 12.2 1
37.15 12.55 1
37.25 12.15 1
37.65 11.95 1
37.95 11.85 1
38.6 11.75 1
38.5 12.2 1
38 12.95 1
37.3 13 1
37.5 13.4 1
37.85 14.5 1
38.3 14.6 1
38.05 14.45 1
38.35 14.35 1
38.5 14.25 1
39.3 14.2 1
39 13.2 1
38.95 12.9 1
39.2 12.35 1
39.5 11.8 1
39.55 12.3 1
39.75 12.75 1
40.2 12.8 1
40.4 12.05 1
40.45 12.5 1
40.55 13.15 1
40.45 14.5 1
40.2 14.8 1
40.65 14.9 1
40.6 15.25 1
41.3 15.3 1
40.95 15.7 1
41.25 16.8 1
40.95 17.05 1
40.7 16.45 1
40.45 16.3 1
39.9 16.2 1
39.65 16.2 1
39.25 15.5 1
38.85 15.5 1
38.3 16.5 1
38.75 16.85 1
39 16.6 1
38.25 17.35 1
39.5 16.95 1
39.9 17.05 1
My Code:
import csv
import random
import math
import operator

def loadDataset(filename, split, trainingSet=[], testSet=[]):
    with open(filename, 'rb') as csvfile:
        lines = csv.reader(csvfile)
        dataset = list(lines)
        for x in range(len(dataset)-1):
            for y in range(3):
                dataset[x][y] = float(dataset[x][y])
            if random.random() < split:
                trainingSet.append(dataset[x])
            else:
                testSet.append(dataset[x])

def euclideanDistance(instance1, instance2, length):
    distance = 0
    for x in range(length):
        distance += pow((instance1[x] - instance2[x]), 2)
    return math.sqrt(distance)

def getNeighbors(trainingSet, testInstance, k):
    distances = []
    length = len(testInstance)-1
    for x in range(len(trainingSet)):
        dist = euclideanDistance(testInstance, trainingSet[x], length)
        distances.append((trainingSet[x], dist))
    distances.sort(key=operator.itemgetter(1))
    neighbors = []
    for x in range(k):
        neighbors.append(distances[x][0])
    return neighbors

def getResponse(neighbors):
    classVotes = {}
    for x in range(len(neighbors)):
        response = neighbors[x][-1]
        if response in classVotes:
            classVotes[response] += 1
        else:
            classVotes[response] = 1
    sortedVotes = sorted(classVotes.iteritems(), key=operator.itemgetter(1), reverse=True)
    return sortedVotes[0][0]

def getAccuracy(testSet, predictions):
    correct = 0
    for x in range(len(testSet)):
        if testSet[x][-1] == predictions[x]:
            correct += 1
    return (correct/float(len(testSet))) * 100.0

def main():
    # prepare data
    trainingSet = []
    testSet = []
    split = 0.67
    loadDataset('Jain.txt', split, trainingSet, testSet)
    print 'Train set: ' + repr(len(trainingSet))
    print 'Test set: ' + repr(len(testSet))
    # generate predictions
    predictions = []
    k = 3
    for x in range(len(testSet)):
        neighbors = getNeighbors(trainingSet, testSet[x], k)
        result = getResponse(neighbors)
        predictions.append(result)
        print('> predicted=' + repr(result) + ', actual=' + repr(testSet[x][-1]))
    accuracy = getAccuracy(testSet, predictions)
    print('Accuracy: ' + repr(accuracy) + '%')

main()
Here:
lines = csv.reader(csvfile)
You have to tell csv.reader what separator to use, otherwise it will use the default Excel ',' separator. Note that in the example you posted, the separator might actually NOT be "a space", but either a tab ("\t" in Python) or just a variable number of spaces, in which case it's not a csv-like format and you'll have to parse the lines yourself.
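If it does turn out to be a variable number of spaces, a plain str.split() with no argument handles that case, since it splits on any run of whitespace. This is just a sketch of the hand-parsing option mentioned above, not part of the original code:

```python
# Sketch: parse whitespace-separated lines without the csv module.
# str.split() with no argument splits on any run of spaces or tabs,
# so it works whether the separator is one space, several, or a tab.
def parse_line(line):
    return tuple(float(tok) for tok in line.split())

print(parse_line("0.85 17.45 2"))       # one space   -> (0.85, 17.45, 2.0)
print(parse_line("0.85\t17.45\t2"))     # tabs        -> (0.85, 17.45, 2.0)
print(parse_line("  0.85   17.45  2"))  # messy runs  -> (0.85, 17.45, 2.0)
```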
Also, your code is far from pythonic. First things first: Python's 'for' loops are really "for each" loops, i.e. they directly yield values from the object you iterate over. The proper way to iterate over a list is:
lst = ["a", "b", "c"]
for item in lst:
    print(item)
so there is no need for range() and indexed access here. Note that if you want the index too, you can use enumerate(sequence), which yields (index, item) pairs, i.e.:
lst = ["a", "b", "c"]
for index, item in enumerate(lst):
    print("item at {} is {}".format(index, item))
So your loadDataset() function could be rewritten as:
def loadDataset(filename, split, trainingSet=None, testSet=None):
    # fix the mutable default argument gotcha
    # cf https://docs.python-guide.org/writing/gotchas/#mutable-default-arguments
    if trainingSet is None:
        trainingSet = []
    if testSet is None:
        testSet = []
    with open(filename, 'rb') as csvfile:
        reader = csv.reader(csvfile, delimiter="\t")
        for row in reader:
            row = tuple(float(x) for x in row)
            if random.random() < split:
                trainingSet.append(row)
            else:
                testSet.append(row)
    # so the caller can get the values back
    return trainingSet, testSet
Note that if any value in your file is not a proper representation of a float, you'll still get a ValueError in row = tuple(float(x) for x in row). The solution here is to catch the error and handle it one way or another: either by reraising it with additional debugging info (which value is wrong and which line of the file it belongs to), or by logging the error and ignoring the row, or however it makes sense in the context of your app / lib:
for row in reader:
    try:
        row = tuple(float(x) for x in row)
    except ValueError as e:
        # here we choose to just log the error
        # and ignore the row, but you may want
        # to do otherwise, your choice...
        print("wrong value in line {}: {}".format(reader.line_num, row))
        continue
    if random.random() < split:
        trainingSet.append(row)
    else:
        testSet.append(row)
Also, if you want to iterate over two lists in parallel (getting list1[x], list2[x] pairs), you can use zip():
lst1 = ["a", "b", "c"]
lst2 = ["x", "y", "z"]
for pair in zip(lst1, lst2):
    print(pair)
and there is a built-in sum() to add up the values from an iterable, i.e.:
lst = [1, 2, 3]
print(sum(lst))
so your euclideanDistance function can be rewritten as:
def euclideanDistance(instance1, instance2, length):
    pairs = zip(instance1[:length], instance2[:length])
    return math.sqrt(sum((x - y) ** 2 for x, y in pairs))
etc etc...
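In the same spirit (a sketch, not the only option): getResponse relies on classVotes.iteritems(), which only exists in Python 2; under Python 3 you'd use .items(), or let collections.Counter do the counting and the sorting for you. Note the snake_case name get_response is just to mark this as a rewrite, not the original function:

```python
from collections import Counter

# Sketch of getResponse using Counter: most_common(1) returns
# [(most_frequent_class, count)], which replaces both the manual
# vote dict and the Python-2-only iteritems() sort.
def get_response(neighbors):
    votes = Counter(neighbor[-1] for neighbor in neighbors)
    return votes.most_common(1)[0][0]

print(get_response([(1.0, 2.0, 2), (1.1, 2.1, 2), (5.0, 5.0, 1)]))  # 2
```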
Related
Apply rolling custom function with pandas
There are a few similar questions on this site, but I couldn't find a solution to my particular question. I have a dataframe that I want to process with a custom function (the real function has a bit more pre-processing, but the gist is contained in the toy example fun).

import statsmodels.api as sm
import numpy as np
import pandas as pd

mtcars = pd.DataFrame(sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data)

def fun(col1, col2, w1=10, w2=2):
    return(np.mean(w1 * col1 + w2 * col2))

# This is the behavior I would expect for the full dataset, currently working
mtcars.apply(lambda x: fun(x.cyl, x.mpg), axis=1)

# This was my approach to do the same with a rolling function
mtcars.rolling(3).apply(lambda x: fun(x.cyl, x.mpg))

The rolling version returns this error:

AttributeError: 'Series' object has no attribute 'cyl'

I figured I don't fully understand how rolling works, since adding a print statement to the beginning of my function shows that fun is not getting the full dataset but an unnamed series of 3. What is the approach to apply this rolling function in pandas? Just in case, I am running

>>> pd.__version__
'1.5.2'

Update
Looks like there is a very similar question here which might partially overlap with what I'm trying to do. For completeness, here's how I would do this in R with the expected output.
library(dplyr)

fun <- function(col1, col2, w1=10, w2=2){
  return(mean(w1*col1 + w2*col2))
}

mtcars %>%
  mutate(roll = slider::slide2(.x = cyl, .y = mpg, .f = fun, .before = 1, .after = 1))

                     mpg cyl  disp  hp drat    wt  qsec vs am gear carb     roll
Mazda RX4           21.0   6 160.0 110 3.90 2.620 16.46  0  1    4    4      102
Mazda RX4 Wag       21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4 96.53333
Datsun 710          22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1     96.8
Hornet 4 Drive      21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1 101.9333
Hornet Sportabout   18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2 105.4667
Valiant             18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1    107.4
Duster 360          14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4 97.86667
Merc 240D           24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2 94.33333
Merc 230            22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2 90.93333
Merc 280            19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4     93.2
Merc 280C           17.8   6 167.6 123 3.92 3.440 18.90  1  0    4    4 102.2667
Merc 450SE          16.4   8 275.8 180 3.07 4.070 17.40  0  0    3    3 107.6667
Merc 450SL          17.3   8 275.8 180 3.07 3.730 17.60  0  0    3    3    112.6
Merc 450SLC         15.2   8 275.8 180 3.07 3.780 18.00  0  0    3    3    108.6
Cadillac Fleetwood  10.4   8 472.0 205 2.93 5.250 17.98  0  0    3    4      104
Lincoln Continental 10.4   8 460.0 215 3.00 5.424 17.82  0  0    3    4 103.6667
Chrysler Imperial   14.7   8 440.0 230 3.23 5.345 17.42  0  0    3    4      105
Fiat 128            32.4   4  78.7  66 4.08 2.200 19.47  1  1    4    1      105
Honda Civic         30.4   4  75.7  52 4.93 1.615 18.52  1  1    4    2 104.4667
Toyota Corolla      33.9   4  71.1  65 4.22 1.835 19.90  1  1    4    1     97.2
Toyota Corona       21.5   4 120.1  97 3.70 2.465 20.01  1  0    3    1    100.6
Dodge Challenger    15.5   8 318.0 150 2.76 3.520 16.87  0  0    3    2 101.4667
AMC Javelin         15.2   8 304.0 150 3.15 3.435 17.30  0  0    3    2 109.3333
Camaro Z28          13.3   8 350.0 245 3.73 3.840 15.41  0  0    3    4    111.8
Pontiac Firebird    19.2   8 400.0 175 3.08 3.845 17.05  0  0    3    2 106.5333
Fiat X1-9           27.3   4  79.0  66 4.08 1.935 18.90  1  1    4    1 101.6667
Porsche 914-2       26.0   4 120.3  91 4.43 2.140 16.70  0  1    5    2     95.8
Lotus Europa        30.4   4  95.1 113 3.77 1.513 16.90  1  1    5    2 101.4667
Ford Pantera L      15.8   8 351.0 264 4.22 3.170 14.50  0  1    5    4 103.9333
Ferrari Dino        19.7   6 145.0 175 3.62 2.770 15.50  0  1    5    6      107
Maserati Bora       15.0   8 301.0 335 3.54 3.570 14.60  0  1    5    8     97.4
Volvo 142E          21.4   4 121.0 109 4.11 2.780 18.60  1  1    4    2     96.4
There is no really elegant way to do this. Here is a suggestion. First, install numpy_ext (use pip install numpy_ext or pip install numpy_ext --user). Second, you'll need to compute your column separately and concat it to your original dataframe:

import statsmodels.api as sm
import pandas as pd
from numpy_ext import rolling_apply as rolling_apply_ext
import numpy as np

mtcars = pd.DataFrame(sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data).reset_index()

def fun(col1, col2, w1=10, w2=2):
    return(w1 * col1 + w2 * col2)

Col = pd.DataFrame(rolling_apply_ext(fun, 3, mtcars.cyl.values, mtcars.mpg.values)).rename(columns={2: 'rolling'})
mtcars.join(Col["rolling"])

to get:

                  index   mpg  cyl   disp   hp  drat     wt   qsec  vs  am
0             Mazda RX4  21.0    6  160.0  110  3.90  2.620  16.46   0   1
1         Mazda RX4 Wag  21.0    6  160.0  110  3.90  2.875  17.02   0   1
2            Datsun 710  22.8    4  108.0   93  3.85  2.320  18.61   1   1
3        Hornet 4 Drive  21.4    6  258.0  110  3.08  3.215  19.44   1   0
4     Hornet Sportabout  18.7    8  360.0  175  3.15  3.440  17.02   0   0
5               Valiant  18.1    6  225.0  105  2.76  3.460  20.22   1   0
6            Duster 360  14.3    8  360.0  245  3.21  3.570  15.84   0   0
7             Merc 240D  24.4    4  146.7   62  3.69  3.190  20.00   1   0
8              Merc 230  22.8    4  140.8   95  3.92  3.150  22.90   1   0
9              Merc 280  19.2    6  167.6  123  3.92  3.440  18.30   1   0
10            Merc 280C  17.8    6  167.6  123  3.92  3.440  18.90   1   0
11           Merc 450SE  16.4    8  275.8  180  3.07  4.070  17.40   0   0
12           Merc 450SL  17.3    8  275.8  180  3.07  3.730  17.60   0   0
13          Merc 450SLC  15.2    8  275.8  180  3.07  3.780  18.00   0   0
14   Cadillac Fleetwood  10.4    8  472.0  205  2.93  5.250  17.98   0   0
15  Lincoln Continental  10.4    8  460.0  215  3.00  5.424  17.82   0   0
16    Chrysler Imperial  14.7    8  440.0  230  3.23  5.345  17.42   0   0
17             Fiat 128  32.4    4   78.7   66  4.08  2.200  19.47   1   1
18          Honda Civic  30.4    4   75.7   52  4.93  1.615  18.52   1   1
19       Toyota Corolla  33.9    4   71.1   65  4.22  1.835  19.90   1   1
20        Toyota Corona  21.5    4  120.1   97  3.70  2.465  20.01   1   0
21     Dodge Challenger  15.5    8  318.0  150  2.76  3.520  16.87   0   0
22          AMC Javelin  15.2    8  304.0  150  3.15  3.435  17.30   0   0
23           Camaro Z28  13.3    8  350.0  245  3.73  3.840  15.41   0   0
24     Pontiac Firebird  19.2    8  400.0  175  3.08  3.845  17.05   0   0
25            Fiat X1-9  27.3    4   79.0   66  4.08  1.935  18.90   1   1
26        Porsche 914-2  26.0    4  120.3   91  4.43  2.140  16.70   0   1
27         Lotus Europa  30.4    4   95.1  113  3.77  1.513  16.90   1   1
28       Ford Pantera L  15.8    8  351.0  264  4.22  3.170  14.50   0   1
29         Ferrari Dino  19.7    6  145.0  175  3.62  2.770  15.50   0   1
30        Maserati Bora  15.0    8  301.0  335  3.54  3.570  14.60   0   1
31           Volvo 142E  21.4    4  121.0  109  4.11  2.780  18.60   1   1

    gear  carb  rolling
0      4     4      NaN
1      4     4      NaN
2      4     1     85.6
3      3     1    102.8
4      3     2    117.4
5      3     1     96.2
6      3     4    108.6
7      4     2     88.8
8      4     2     85.6
9      4     4     98.4
10     4     4     95.6
11     3     3    112.8
12     3     3    114.6
13     3     3    110.4
14     3     4    100.8
15     3     4    100.8
16     3     4    109.4
17     4     1    104.8
18     4     2    100.8
19     4     1    107.8
20     3     1     83.0
21     3     2    111.0
22     3     2    110.4
23     3     4    106.6
24     3     2    118.4
25     4     1     94.6
26     5     2     92.0
27     5     2    100.8
28     5     4    111.6
29     5     6     99.4
30     5     8    110.0
31     4     2     82.8
You can use the function below for a rolling apply. It might be slow compared to pandas' built-in rolling in certain situations, but it has additional functionality. The arguments win_size and min_periods are similar to pandas' (integer input only). In addition, the after parameter is also used to control the window: it shifts the window to include observations after the current row.

def roll_apply(df, fn, win_size, min_periods=None, after=None):
    if min_periods is None:
        min_periods = win_size
    else:
        assert min_periods >= 1
    if after is None:
        after = 0
    before = win_size - 1 - after
    i = np.arange(df.shape[0])
    s = np.maximum(i - before, 0)
    e = np.minimum(i + after, df.shape[0]) + 1
    res = [fn(df.iloc[si:ei]) for si, ei in zip(s, e) if (ei-si) >= min_periods]
    idx = df.index[(e-s) >= min_periods]
    types = {type(ri) for ri in res}
    if len(types) != 1:
        return pd.Series(res, index=idx)
    t = list(types)[0]
    if t == pd.Series:
        return pd.DataFrame(res, index=idx)
    elif t == pd.DataFrame:
        return pd.concat(res, keys=idx)
    else:
        return pd.Series(res, index=idx)

mtcars['roll'] = roll_apply(mtcars, lambda x: fun(x.cyl, x.mpg), win_size=3, min_periods=1, after=1)

index                mpg  cyl  disp   hp  drat  wt     qsec   vs  am  gear  carb  roll
Mazda RX4            21.0 6    160.0  110 3.9   2.62   16.46  0   1   4     4     102.0
Mazda RX4 Wag        21.0 6    160.0  110 3.9   2.875  17.02  0   1   4     4     96.53333333333335
Datsun 710           22.8 4    108.0  93  3.85  2.32   18.61  1   1   4     1     96.8
Hornet 4 Drive       21.4 6    258.0  110 3.08  3.215  19.44  1   0   3     1     101.93333333333332
Hornet Sportabout    18.7 8    360.0  175 3.15  3.44   17.02  0   0   3     2     105.46666666666665
Valiant              18.1 6    225.0  105 2.76  3.46   20.22  1   0   3     1     107.40000000000002
Duster 360           14.3 8    360.0  245 3.21  3.57   15.84  0   0   3     4     97.86666666666667
Merc 240D            24.4 4    146.7  62  3.69  3.19   20.0   1   0   4     2     94.33333333333333
Merc 230             22.8 4    140.8  95  3.92  3.15   22.9   1   0   4     2     90.93333333333332
Merc 280             19.2 6    167.6  123 3.92  3.44   18.3   1   0   4     4     93.2
Merc 280C            17.8 6    167.6  123 3.92  3.44   18.9   1   0   4     4     102.26666666666667
Merc 450SE           16.4 8    275.8  180 3.07  4.07   17.4   0   0   3     3     107.66666666666667
Merc 450SL           17.3 8    275.8  180 3.07  3.73   17.6   0   0   3     3     112.59999999999998
Merc 450SLC          15.2 8    275.8  180 3.07  3.78   18.0   0   0   3     3     108.60000000000001
Cadillac Fleetwood   10.4 8    472.0  205 2.93  5.25   17.98  0   0   3     4     104.0
Lincoln Continental  10.4 8    460.0  215 3.0   5.424  17.82  0   0   3     4     103.66666666666667
Chrysler Imperial    14.7 8    440.0  230 3.23  5.345  17.42  0   0   3     4     105.0
Fiat 128             32.4 4    78.7   66  4.08  2.2    19.47  1   1   4     1     105.0
Honda Civic          30.4 4    75.7   52  4.93  1.615  18.52  1   1   4     2     104.46666666666665
Toyota Corolla       33.9 4    71.1   65  4.22  1.835  19.9   1   1   4     1     97.2
Toyota Corona        21.5 4    120.1  97  3.7   2.465  20.01  1   0   3     1     100.60000000000001
Dodge Challenger     15.5 8    318.0  150 2.76  3.52   16.87  0   0   3     2     101.46666666666665
AMC Javelin          15.2 8    304.0  150 3.15  3.435  17.3   0   0   3     2     109.33333333333333
Camaro Z28           13.3 8    350.0  245 3.73  3.84   15.41  0   0   3     4     111.8
Pontiac Firebird     19.2 8    400.0  175 3.08  3.845  17.05  0   0   3     2     106.53333333333335
Fiat X1-9            27.3 4    79.0   66  4.08  1.935  18.9   1   1   4     1     101.66666666666667
Porsche 914-2        26.0 4    120.3  91  4.43  2.14   16.7   0   1   5     2     95.8
Lotus Europa         30.4 4    95.1   113 3.77  1.513  16.9   1   1   5     2     101.46666666666665
Ford Pantera L       15.8 8    351.0  264 4.22  3.17   14.5   0   1   5     4     103.93333333333332
Ferrari Dino         19.7 6    145.0  175 3.62  2.77   15.5   0   1   5     6     107.0
Maserati Bora        15.0 8    301.0  335 3.54  3.57   14.6   0   1   5     8     97.39999999999999
Volvo 142E           21.4 4    121.0  109 4.11  2.78   18.6   1   1   4     2     96.4

You can pass more complex functions to roll_apply. Below are a few examples:

# Simple example to illustrate the use case
roll_apply(mtcars, lambda d: pd.Series({'A': d.sum().sum(), 'B': d.std().std()}), win_size=3, min_periods=1, after=1)

# This will return a rolling dataframe
roll_apply(mtcars, lambda d: d, win_size=3, min_periods=3, after=1)
I'm not aware of a way to do this calculation easily and efficiently by applying a single function to a pandas dataframe, because you're calculating values across multiple rows and columns. An efficient way is to first calculate the column you want the rolling average of, then calculate the rolling average:

import statsmodels.api as sm
import pandas as pd

mtcars = pd.DataFrame(sm.datasets.get_rdataset("mtcars", "datasets", cache=True).data)

# Create column
def df_fun(df, col1, col2, w1=10, w2=2):
    return w1 * df[col1] + w2 * df[col2]

mtcars['fun_val'] = df_fun(mtcars, 'cyl', 'mpg')

# Calculate rolling average
mtcars['fun_val_r3m'] = mtcars['fun_val'].rolling(3, center=True, min_periods=0).mean()

This gives the correct answer and is efficient, since each step should be optimized for performance. I found that separating the row and column calculations like this is about 10 times faster than the latest approach you proposed, with no need to import numpy. If you don't want to keep the intermediate calculation, fun_val, you can overwrite it with the rolling average value, fun_val_r3m. If you really need to do this in one line with apply, I'm not aware of another way other than what you've done in your latest post. numpy-array-based approaches may perform better, though they are less readable.
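The reason this two-step split is valid is that the mean is linear: averaging w1*col1 + w2*col2 over a window equals applying the weights row-wise first and then taking a plain rolling mean. A minimal self-contained check of that equivalence (toy numbers, not the mtcars data; min_periods=1 stands in for the answer's min_periods=0):

```python
import pandas as pd

# Toy frame standing in for mtcars
df = pd.DataFrame({"cyl": [1, 2, 3], "mpg": [10.0, 20.0, 30.0]})

# Step 1: row-wise weighted combination (vectorized, no apply)
df["fun_val"] = 10 * df["cyl"] + 2 * df["mpg"]   # [30, 60, 90]

# Step 2: centered rolling mean over 3 rows; edges average whatever is available
df["fun_val_r3m"] = df["fun_val"].rolling(3, center=True, min_periods=1).mean()

print(df["fun_val_r3m"].tolist())  # [45.0, 60.0, 75.0]
```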
After much searching and fighting against arguments, I found an approach inspired by this answer:

def fun(series, w1=10, w2=2):
    col1 = mtcars.loc[series.index, 'cyl']
    col2 = mtcars.loc[series.index, 'mpg']
    return(np.mean(w1 * col1 + w2 * col2))

mtcars['roll'] = mtcars.rolling(3, center=True, min_periods=0)['mpg'] \
    .apply(fun, raw=False)

mtcars

                     mpg  cyl   disp   hp  ...  am  gear  carb        roll
Mazda RX4           21.0    6  160.0  110  ...   1     4     4  102.000000
Mazda RX4 Wag       21.0    6  160.0  110  ...   1     4     4   96.533333
Datsun 710          22.8    4  108.0   93  ...   1     4     1   96.800000
Hornet 4 Drive      21.4    6  258.0  110  ...   0     3     1  101.933333
Hornet Sportabout   18.7    8  360.0  175  ...   0     3     2  105.466667
Valiant             18.1    6  225.0  105  ...   0     3     1  107.400000
Duster 360          14.3    8  360.0  245  ...   0     3     4   97.866667
Merc 240D           24.4    4  146.7   62  ...   0     4     2   94.333333
Merc 230            22.8    4  140.8   95  ...   0     4     2   90.933333
Merc 280            19.2    6  167.6  123  ...   0     4     4   93.200000
Merc 280C           17.8    6  167.6  123  ...   0     4     4  102.266667
Merc 450SE          16.4    8  275.8  180  ...   0     3     3  107.666667
Merc 450SL          17.3    8  275.8  180  ...   0     3     3  112.600000
Merc 450SLC         15.2    8  275.8  180  ...   0     3     3  108.600000
Cadillac Fleetwood  10.4    8  472.0  205  ...   0     3     4  104.000000
Lincoln Continental 10.4    8  460.0  215  ...   0     3     4  103.666667
Chrysler Imperial   14.7    8  440.0  230  ...   0     3     4  105.000000
Fiat 128            32.4    4   78.7   66  ...   1     4     1  105.000000
Honda Civic         30.4    4   75.7   52  ...   1     4     2  104.466667
Toyota Corolla      33.9    4   71.1   65  ...   1     4     1   97.200000
Toyota Corona       21.5    4  120.1   97  ...   0     3     1  100.600000
Dodge Challenger    15.5    8  318.0  150  ...   0     3     2  101.466667
AMC Javelin         15.2    8  304.0  150  ...   0     3     2  109.333333
Camaro Z28          13.3    8  350.0  245  ...   0     3     4  111.800000
Pontiac Firebird    19.2    8  400.0  175  ...   0     3     2  106.533333
Fiat X1-9           27.3    4   79.0   66  ...   1     4     1  101.666667
Porsche 914-2       26.0    4  120.3   91  ...   1     5     2   95.800000
Lotus Europa        30.4    4   95.1  113  ...   1     5     2  101.466667
Ford Pantera L      15.8    8  351.0  264  ...   1     5     4  103.933333
Ferrari Dino        19.7    6  145.0  175  ...   1     5     6  107.000000
Maserati Bora       15.0    8  301.0  335  ...   1     5     8   97.400000
Volvo 142E          21.4    4  121.0  109  ...   1     4     2   96.400000

[32 rows x 12 columns]

There are several things that are needed for this to perform as I wanted:
- raw=False gives fun access to the Series, if only to call .index ("False: passes each row or column as a Series to the function."). This is dumb and inefficient, but it works.
- I needed my window to be center=True.
- I also needed the NaN filled with the available info, so I set min_periods=0.

There are a few things that I don't like about this approach:
- It seems to me that calling mtcars from outside the fun scope is potentially dangerous and might cause bugs.
- Multiple indexing with .loc line by line does not scale well and probably has worse performance (doing the rolling more times than needed).
Issue with connecting points for 3d line plot in plotly
I have a dataframe of the following format:

       x    y    z aa_num frame_num cluster
1   1.86 3.11 8.62      1         1       1
2   1.77 3.32 8.31      2         1       1
3   1.59 3.17 8.00      3         1       1
4   1.67 3.49 7.81      4         1       1
5   2.04 3.59 7.81      5         1       1
6   2.20 3.34 7.57      6         1       1
7   2.09 3.19 7.25      7         1       1
8   2.13 3.30 6.89      8         1       1
9   2.17 3.63 6.70      9         1       1
10  2.22 3.63 6.33     10         1       1
11  2.06 3.83 6.04     11         1       1
12  2.31 3.75 5.76     12         1       1
13  2.15 3.45 5.59     13         1       1
14  2.21 3.28 5.26     14         1       1
15  2.00 3.13 4.98     15         1       1
16  2.13 2.86 4.74     16         1       1
17  1.97 2.78 4.41     17         1       1
18  2.20 2.76 4.10     18         1       1
19  2.43 2.46 4.14     19         1       1
20  2.34 2.23 3.85     20         1       1
21  2.61 2.16 3.59     21         1       1
22  2.42 1.92 3.36     22         1       1
23  2.44 1.95 2.98     23         1       1
24  2.26 1.62 2.94     24         1       1
25  2.19 1.35 3.20     25         1       1
26  1.92 1.11 3.08     26         1       1
27  1.93 0.83 3.33     27         1       1
28  1.83 0.72 3.68     28         1       1
29  1.95 0.47 3.95     29         1       1
30  1.84 0.36 4.29     30         1       1
31  0.56 3.93 7.07      1         2       1
32  0.66 3.84 7.42      2         2       1
33  0.87 3.54 7.49      3         2       1
34  0.84 3.19 7.33      4         2       1
35  0.76 3.32 6.98      5         2       1
36  0.88 3.23 6.63      6         2       1
37  1.10 3.46 6.43      7         2       1
38  1.35 3.49 6.15      8         2       1
39  1.72 3.50 6.23      9         2       1
40  1.88 3.67 5.93     10         2       1
41  2.25 3.72 5.97     11         2       1
42  2.43 3.48 5.74     12         2       1
43  2.23 3.35 5.44     13         2       1
44  2.23 3.38 5.06     14         2       1
45  2.01 3.38 4.76     15         2       1
46  2.02 3.44 4.38     16         2       1
47  1.98 3.10 4.20     17         2       1
48  2.05 3.13 3.83     18         2       1
49  2.28 2.85 3.72     19         2       1
50  2.09 2.56 3.58     20         2       1
51  2.21 2.37 3.27     21         2       1
52  2.06 2.04 3.15     22         2       1
53  1.93 2.01 2.80     23         2       1
54  1.86 1.64 2.83     24         2       1
55  1.95 1.38 3.10     25         2       1
56  1.78 1.04 3.04     26         2       1
57  1.90 0.84 3.34     27         2       1
58  1.83 0.74 3.70     28         2       1
59  1.95 0.48 3.95     29         2       1
60  1.84 0.36 4.29     30         2       1
etc..

I'm trying to create a 3d line plot of this data, where a line consisting of 30 <x,y,z> points will be plotted for each frame_num and the points would be connected in the order of aa_num.
The code to do this is as follows:

fig = plot_ly(output_cl, x = ~x, y = ~y, z = ~z,
              type = 'scatter3d', mode = 'lines+markers',
              opacity = 1,
              line = list(width = 1, color = ~frame_num, colorscale = 'Viridis'),
              marker = list(size = 2, color = ~frame_num, colorscale = 'Viridis'))

When I plot a single frame, it works fine. However, a strange issue arises when I try to plot multiple instances. For some reason, when I try to plot frames 1 and 2, points 1 and 30 connect to each other for frame 2, but this doesn't happen for frame 1. Any ideas why? Is there some way to specify the ordering of points in 3d in plotly?
If you want to create two different traces based on the column frame_num, you need to pass it as a categorical variable by using factor. As an alternative, you can use name = ~frame_num or split = ~frame_num to create multiple traces.

library(plotly)

output_cl <- data.frame(
  x = c(1.86, 1.77, 1.59, 1.67, 2.04, 2.2, 2.09, 2.13, 2.17, 2.22, 2.06, 2.31,
        2.15, 2.21, 2, 2.13, 1.97, 2.2, 2.43, 2.34, 2.61, 2.42, 2.44, 2.26, 2.19,
        1.92, 1.93, 1.83, 1.95, 1.84, 0.56, 0.66, 0.87, 0.84, 0.76, 0.88, 1.1,
        1.35, 1.72, 1.88, 2.25, 2.43, 2.23, 2.23, 2.01, 2.02, 1.98, 2.05, 2.28,
        2.09, 2.21, 2.06, 1.93, 1.86, 1.95, 1.78, 1.9, 1.83, 1.95, 1.84),
  y = c(3.11, 3.32, 3.17, 3.49, 3.59, 3.34, 3.19, 3.3, 3.63, 3.63, 3.83, 3.75,
        3.45, 3.28, 3.13, 2.86, 2.78, 2.76, 2.46, 2.23, 2.16, 1.92, 1.95, 1.62,
        1.35, 1.11, 0.83, 0.72, 0.47, 0.36, 3.93, 3.84, 3.54, 3.19, 3.32, 3.23,
        3.46, 3.49, 3.5, 3.67, 3.72, 3.48, 3.35, 3.38, 3.38, 3.44, 3.1, 3.13,
        2.85, 2.56, 2.37, 2.04, 2.01, 1.64, 1.38, 1.04, 0.84, 0.74, 0.48, 0.36),
  z = c(8.62, 8.31, 8, 7.81, 7.81, 7.57, 7.25, 6.89, 6.7, 6.33, 6.04, 5.76, 5.59,
        5.26, 4.98, 4.74, 4.41, 4.1, 4.14, 3.85, 3.59, 3.36, 2.98, 2.94, 3.2,
        3.08, 3.33, 3.68, 3.95, 4.29, 7.07, 7.42, 7.49, 7.33, 6.98, 6.63, 6.43,
        6.15, 6.23, 5.93, 5.97, 5.74, 5.44, 5.06, 4.76, 4.38, 4.2, 3.83, 3.72,
        3.58, 3.27, 3.15, 2.8, 2.83, 3.1, 3.04, 3.34, 3.7, 3.95, 4.29),
  aa_num = c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L,
             16L, 17L, 18L, 19L, 20L, 21L, 22L, 23L, 24L, 25L, 26L, 27L, 28L,
             29L, 30L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L,
             14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 23L, 24L, 25L, 26L,
             27L, 28L, 29L, 30L),
  frame_num = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
                1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L,
                2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
                2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L),
  cluster = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
              1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
              1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
              1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L)
)

fig = plot_ly(output_cl, x = ~x, y = ~y, z = ~z,
              type = 'scatter3d', mode = 'lines+markers',
              color = ~factor(frame_num),  # or name = ~frame_num or split = ~frame_num
              colors = "Set2",
              opacity = 1,
              line = list(width = 1),
              marker = list(size = 2))
fig
Calculate Positive Streak for Pandas Rows in reverse
I want to calculate a positive streak for the numbers in a row, in reverse fashion. I tried using cumsum() but that's not helping me. The DataFrame looks as follows, with the expected output:

country    score_1  score_2  score_3  score_4  score_5  expected_streak
U.S.          12.4     13.6     19.9     22       28.7                4
Africa        11.1     15.5      9.2      7       34.2                1
India         13.9      6.6     16.3     21.8     30.9                3
Australia     25.4     36.9     18.9     29        NaN                0
Malaysia      12.8      NaN     -6.2     28.6     31.7                2
Argentina     40.7      NaN     16.3     20.1     39                  2
Canada        56.4      NaN      NaN     -2       -1                  1

So, basically, score_5 should be greater than score_4 and so on... to get a count of the streak. If a number is greater than score_5, the streak count ends.
One way, using diff with cummin:

df2 = df.filter(like="score_").loc[:, ::-1]
df["expected"] = df2.diff(-1, axis=1).gt(0).cummin(1).sum(1)
print(df)

Output:

     country  score_1  score_2  score_3  score_4  score_5  expected
0       U.S.     12.4     13.6     19.9     22.0     28.7         4
1     Africa     11.1     15.5      9.2      7.0     34.2         1
2      India     13.9      6.6     16.3     21.8     30.9         3
3  Australia     25.4     36.9     18.9     29.0      NaN         0
4   Malaysia     12.8      NaN     -6.2     28.6     31.7         2
5  Argentina     40.7      NaN     16.3     20.1     39.0         2
6     Canada     56.4      NaN      NaN     -2.0     -1.0         1
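To see why the one-liner works, it helps to unpack it step by step (a sketch on the question's data; the column order is reversed so the most recent score comes first, and the trailing NaN from diff breaks every streak at the end):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "country": ["U.S.", "Africa", "India", "Australia", "Malaysia", "Argentina", "Canada"],
    "score_1": [12.4, 11.1, 13.9, 25.4, 12.8, 40.7, 56.4],
    "score_2": [13.6, 15.5, 6.6, 36.9, np.nan, np.nan, np.nan],
    "score_3": [19.9, 9.2, 16.3, 18.9, -6.2, 16.3, np.nan],
    "score_4": [22.0, 7.0, 21.8, 29.0, 28.6, 20.1, -2.0],
    "score_5": [28.7, 34.2, 30.9, np.nan, 31.7, 39.0, -1.0],
})

df2 = df.filter(like="score_").loc[:, ::-1]   # score_5, score_4, ..., score_1
increasing = df2.diff(-1, axis=1).gt(0)       # True where score_n > score_{n-1}
streak = increasing.cummin(axis=1)            # stays True until the first break
df["expected"] = streak.sum(axis=1)           # length of the leading run of Trues

print(df["expected"].tolist())  # [4, 1, 3, 0, 2, 2, 1]
```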
How to dynamically remove rows from a DataFrame Pandas
I have the following NFL tracking data:

    Event  PlayId  FrameId  x-coord  y-coord
0   Start       1        1     20.2     20.0
1     NaN       1        2     21.0     19.1
2     NaN       1        3     21.3     18.3
3     NaN       1        4     22.0     17.5
4     End       1        5     22.5     17.2
4     NaN       1        6     22.5     17.2
4     NaN       1        7     22.5     17.2
4     NaN       1        8     22.5     17.2
4     NaN       1        9     22.5     17.2
4     NaN       1       10     22.5     17.2
5     NaN       2        1     23.0     16.9
6   Start       2        2     23.6     16.7
7     End       2        3     25.1     34.1
8     NaN       2        4     25.9     34.2
10    NaN       3        1     22.7     34.2
11    Nan       3        2     21.5     34.5
12    NaN       3        3     21.1     37.3
13  Start       3        4     21.2     44.3
14    NaN       3        5     20.4     44.6
15    End       3        6     21.9     42.7

How can I filter this list to only get the rows in between the "Start" and "End" values for the Event column? To clarify, this is the data I want to filter for:

    Event  PlayId  FrameId  x-coord  y-coord
0   Start       1        1     20.2     20.0
1     NaN       1        2     21.0     19.1
2     NaN       1        3     21.3     18.3
3     NaN       1        4     22.0     17.5
4     End       1        5     22.5     17.2
6   Start       2        2     23.6     16.7
7     End       2        3     25.1     34.1
13  Start       3        4     21.2     44.3
14    NaN       3        5     20.4     44.6
15    End       3        6     21.9     42.7

An explicit solution will not work because the actual dataset is very large and there is no way to predict where the Start and End values fall.
Doing it with slice and ffill, then concat back. Also, you have Nan in your df; should it be NaN?

df1 = df.copy()
newdf = pd.concat([df1[df.Event.ffill() == 'Start'], df1[df.Event == 'End']]).sort_index()
newdf

    Event  PlayId  FrameId  x-coord  y-coord
0   Start       1        1     20.2     20.0
1     NaN       1        2     21.0     19.1
2     NaN       1        3     21.3     18.3
3     NaN       1        4     22.0     17.5
4     End       1        5     22.5     17.2
6   Start       2        2     23.6     16.7
7     End       2        3     25.1     34.1
13  Start       3        4     21.2     44.3
14    NaN       3        5     20.4     44.6
15    End       3        6     21.9     42.7

Or

newdf = df[~((df.Event.ffill() == 'End') & (df.Event.isna()))]
newdf

    Event  PlayId  FrameId  x-coord  y-coord
0   Start       1        1     20.2     20.0
1     NaN       1        2     21.0     19.1
2     NaN       1        3     21.3     18.3
3     NaN       1        4     22.0     17.5
4     End       1        5     22.5     17.2
6   Start       2        2     23.6     16.7
7     End       2        3     25.1     34.1
13  Start       3        4     21.2     44.3
14    NaN       3        5     20.4     44.6
15    End       3        6     21.9     42.7
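The trick in both variants is that Event.ffill() labels every row with its most recent non-null marker, so the rows trailing an 'End' (up to the next 'Start') are exactly those where the forward-filled value is 'End' and the original value is null. A minimal sketch on a toy version of the data:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "Event":  ["Start", np.nan, "End", np.nan, np.nan, "Start", "End", np.nan],
    "PlayId": [1, 1, 1, 1, 1, 2, 2, 2],
})

# Rows whose most recent marker is 'End' but which are not the 'End' row itself
trailing = (df.Event.ffill() == "End") & (df.Event.isna())
newdf = df[~trailing]

print(newdf.index.tolist())  # [0, 1, 2, 5, 6]
```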
Dropping NaNs from selected data in pandas
Continuing on my previous question link (things are explained there), I now have obtained an array. However, I don't know how to use this array, but that is a further question. The point of this question is: there are NaN values in the 63 x 2 column that I created, and I want the rows with NaN values deleted so that I can use the data (once I ask another question on how to graph and export as x, y arrays). Here's what I have; this code works:

import pandas as pd
df = pd.read_csv("~/Truncated raw data hcl.csv")
data1 = [df.iloc[:, [0, 1]]]

The sample of the .csv file is located in the link. I tried inputting data1.dropna() but it didn't work. I want the NaN values/rows to drop so that I'm left with a 28 x 2 array (I am using the first column with actual values as an example). Thank you.
Try:

```python
import pandas as pd

df = pd.read_csv("~/Truncated raw data hcl.csv")
data1 = df.iloc[:, [0, 1]]      # note: no surrounding brackets
cleaned_data = data1.dropna()
```

You were probably getting an exception like AttributeError: 'list' object has no attribute 'dropna'. That's because your data1 was not a pandas DataFrame but a list, and inside that list was a DataFrame.
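The difference is easy to reproduce without the CSV; the frame below is a stand-in with made-up column names and values:

```python
import numpy as np
import pandas as pd

# Illustrative frame with NaNs, standing in for the CSV (names are made up).
df = pd.DataFrame({"time":   [0.0, 0.5, 1.0, 1.5],
                   "trial1": [23.2, 23.2, np.nan, 23.2],
                   "trial2": [24.3, np.nan, 24.1, 24.1]})

data1 = [df.iloc[:, [0, 1]]]   # a list containing a DataFrame
# data1.dropna()               # AttributeError: 'list' object has no attribute 'dropna'

data1 = df.iloc[:, [0, 1]]     # without the brackets: a DataFrame
cleaned = data1.dropna()       # drops the one row where trial1 is NaN
print(cleaned.shape)           # (3, 2)
```

dropna() here only sees the two selected columns, so the NaN in trial2 does not cost a row.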
The answer is already given, though I would like to put some thoughts across. Taking the example dataset you provided in your earlier post:

```
>>> import pandas as pd
>>> df = pd.read_csv("so.csv")
>>> df
    time  1mnaoh trial 1  1mnaoh trial 2  1mnaoh trial 3  ...  5mnaoh trial 1  5mnaoh trial 2  5mnaoh trial 3  5mnaoh trial 4
 0   0.0  23.2  23.1  23.1  ...  23.3  24.3  24.1  24.1
 1   0.5  23.2  23.1  23.1  ...  23.4  24.3  24.1  24.1
 2   1.0  23.2  23.1  23.1  ...  23.5  24.3  24.1  24.1
 3   1.5  23.2  23.1  23.1  ...  23.6  24.3  24.1  24.1
 4   2.0  23.3  23.2  23.2  ...  23.7  24.5  24.7  25.1
 5   2.5  24.0  23.5  23.5  ...  23.8  27.2  26.7  28.1
 6   3.0  25.4  24.4  24.1  ...  23.9  31.4  29.8  31.3
 7   3.5  26.9  25.5  25.1  ...  23.9  35.1  33.2  34.4
 8   4.0  27.8  26.5  26.2  ...  24.0  37.7  35.9  36.8
 9   4.5  28.5  27.3  27.0  ...  24.0  39.7  38.0  38.7
10   5.0  28.9  27.9  27.7  ...  24.0  40.9  39.6  40.2
11   5.5  29.2  28.2  28.3  ...  24.0  41.9  40.7  41.0
12   6.0  29.4  28.5  28.6  ...  24.1  42.5  41.6  41.2
13   6.5  29.5  28.8  28.9  ...  24.1  43.1  42.3  41.7
14   7.0  29.6  29.0  29.1  ...  24.1  43.4  42.8  42.3
15   7.5  29.7  29.2  29.2  ...  24.0  43.7  43.1  42.9
16   8.0  29.8  29.3  29.3  ...  24.2  43.8  43.3  43.3
17   8.5  29.8  29.4  29.4  ...  27.0  43.9  43.5  43.6
18   9.0  29.9  29.5  29.5  ...  30.8  44.0  43.6  43.8
19   9.5  29.9  29.6  29.5  ...  33.9  44.0  43.7  44.0
20  10.0  30.0  29.7  29.6  ...  36.2  44.0  43.7  44.1
21  10.5  30.0  29.7  29.6  ...  37.9  44.0  43.8  44.2
22  11.0  30.0  29.7  29.6  ...  39.3   NaN  43.8  44.3
23  11.5  30.0  29.8  29.7  ...  40.2   NaN  43.8  44.3
24  12.0  30.0  29.8  29.7  ...  40.9   NaN  43.9  44.3
25  12.5  30.1  29.8  29.7  ...  41.4   NaN  43.9  44.3
26  13.0  30.1  29.8  29.8  ...  41.8   NaN  43.9  44.4
27  13.5  30.1  29.9  29.8  ...  42.0   NaN  43.9  44.4
28  14.0  30.1  29.9  29.8  ...  42.1   NaN   NaN  44.4
29  14.5   NaN  29.9  29.8  ...  42.3   NaN   NaN  44.4
30  15.0   NaN  29.9   NaN  ...  42.4   NaN   NaN   NaN
31  15.5   NaN   NaN   NaN  ...  42.4   NaN   NaN   NaN
```

However, it is good to clean the data beforehand and then process it as desired, so dropping the NA values during the import itself is significantly useful.
```
>>> df = pd.read_csv("so.csv").dropna()   # <-- dropping the NA here itself
>>> df
    time  1mnaoh trial 1  1mnaoh trial 2  1mnaoh trial 3  ...  5mnaoh trial 1  5mnaoh trial 2  5mnaoh trial 3  5mnaoh trial 4
 0   0.0  23.2  23.1  23.1  ...  23.3  24.3  24.1  24.1
 1   0.5  23.2  23.1  23.1  ...  23.4  24.3  24.1  24.1
 2   1.0  23.2  23.1  23.1  ...  23.5  24.3  24.1  24.1
 3   1.5  23.2  23.1  23.1  ...  23.6  24.3  24.1  24.1
 4   2.0  23.3  23.2  23.2  ...  23.7  24.5  24.7  25.1
 5   2.5  24.0  23.5  23.5  ...  23.8  27.2  26.7  28.1
 6   3.0  25.4  24.4  24.1  ...  23.9  31.4  29.8  31.3
 7   3.5  26.9  25.5  25.1  ...  23.9  35.1  33.2  34.4
 8   4.0  27.8  26.5  26.2  ...  24.0  37.7  35.9  36.8
 9   4.5  28.5  27.3  27.0  ...  24.0  39.7  38.0  38.7
10   5.0  28.9  27.9  27.7  ...  24.0  40.9  39.6  40.2
11   5.5  29.2  28.2  28.3  ...  24.0  41.9  40.7  41.0
12   6.0  29.4  28.5  28.6  ...  24.1  42.5  41.6  41.2
13   6.5  29.5  28.8  28.9  ...  24.1  43.1  42.3  41.7
14   7.0  29.6  29.0  29.1  ...  24.1  43.4  42.8  42.3
15   7.5  29.7  29.2  29.2  ...  24.0  43.7  43.1  42.9
16   8.0  29.8  29.3  29.3  ...  24.2  43.8  43.3  43.3
17   8.5  29.8  29.4  29.4  ...  27.0  43.9  43.5  43.6
18   9.0  29.9  29.5  29.5  ...  30.8  44.0  43.6  43.8
19   9.5  29.9  29.6  29.5  ...  33.9  44.0  43.7  44.0
20  10.0  30.0  29.7  29.6  ...  36.2  44.0  43.7  44.1
21  10.5  30.0  29.7  29.6  ...  37.9  44.0  43.8  44.2
```

And lastly, cast your DataFrame as you wish:

```
>>> df = [df.iloc[:, [0, 1]]]  # new_df = [df.iloc[:, [0, 1]]]  <-- if you don't want to alter the actual DataFrame
>>> df
[    time  1mnaoh trial 1
 0   0.0  23.2
 1   0.5  23.2
 2   1.0  23.2
 3   1.5  23.2
 4   2.0  23.3
 5   2.5  24.0
 6   3.0  25.4
 7   3.5  26.9
 8   4.0  27.8
 9   4.5  28.5
10   5.0  28.9
11   5.5  29.2
12   6.0  29.4
13   6.5  29.5
14   7.0  29.6
15   7.5  29.7
16   8.0  29.8
17   8.5  29.8
18   9.0  29.9
19   9.5  29.9
20  10.0  30.0
21  10.5  30.0]
```

Better solution: looking at the end result, you are only concerned with the columns 'time' and '1mnaoh trial 1', so the ideal approach is to use the usecols option, which reduces the memory footprint because you load only the columns that are useful to you, and then apply dropna(), which I believe gives you what you wanted:

```
>>> df = pd.read_csv("so.csv", usecols=['time', '1mnaoh trial 1']).dropna()
>>> df
    time  1mnaoh trial 1
 0   0.0  23.2
 1   0.5  23.2
 2   1.0  23.2
 3   1.5  23.2
 4   2.0  23.3
 5   2.5  24.0
 6   3.0  25.4
 7   3.5  26.9
 8   4.0  27.8
 9   4.5  28.5
10   5.0  28.9
11   5.5  29.2
12   6.0  29.4
13   6.5  29.5
14   7.0  29.6
15   7.5  29.7
16   8.0  29.8
17   8.5  29.8
18   9.0  29.9
19   9.5  29.9
20  10.0  30.0
21  10.5  30.0
22  11.0  30.0
23  11.5  30.0
24  12.0  30.0
25  12.5  30.1
26  13.0  30.1
27  13.5  30.1
28  14.0  30.1
```
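The usecols-then-dropna pattern can be sanity-checked without the original file; the inline CSV below is a made-up stand-in for "so.csv", and the key point is that dropna() only considers the columns that were actually loaded:

```python
import io

import pandas as pd

# Inline CSV standing in for "so.csv" (column names follow the answer,
# values are invented for the demo).
csv = io.StringIO(
    "time,1mnaoh trial 1,1mnaoh trial 2\n"
    "0.0,23.2,23.1\n"
    "0.5,23.2,\n"    # NaN only in a column we will NOT load
    "1.0,,23.1\n"    # NaN in a column we DO load
)

# Loading only the needed columns first means dropna() ignores NaNs
# elsewhere, so the second row survives and only the third is dropped.
df = pd.read_csv(csv, usecols=["time", "1mnaoh trial 1"]).dropna()
print(df.shape)  # (2, 2)
```

Had the full file been loaded first, the same dropna() would have discarded the second row as well, which is why selecting columns before dropping rows preserves more data.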