Update function based on categorical values in Python

MatchId ExpectedGoals_Team1 ExpectedGoals_Team2 Timestamp Stages Home Away
0 698085 0.8585339288573895 1.4819072820614578 2016-08-13 11:30:00 0 [92, 112] [94]
1 698086 1.097064295289673 1.0923520385902274 2016-09-12 14:00:00 0 [] [164]
2 698087 1.2752442136224664 0.8687263006179976 2016-11-25 14:00:00 1 [90] [147]
3 698088 1.0571269856980154 1.4323522262211752 2016-02-16 14:00:00 2 [10, 66, 101] [50, 118]
4 698089 1.2680212913301165 0.918961072480616 2016-05-10 14:00:00 2 [21] [134, 167]
Here is the function that needs to update the outcomes based on the categorical column 'Stages'.
import numpy as np

x1 = np.array([1, 0, 0])
x2 = np.array([0, 1, 0])
x3 = np.array([0, 0, 1])
total_timeslot = 196
m = 1

def squared_diff(row):
    ssd = []
    Home = row.Home
    Away = row.Away
    y = np.array([1 - (row.ExpectedGoals_Team1*m + row.ExpectedGoals_Team2*m), row.ExpectedGoals_Team1*m, row.ExpectedGoals_Team2*m])
    for k in range(total_timeslot):
        if k in Home:
            ssd.append(sum((x2 - y) ** 2))
        elif k in Away:
            ssd.append(sum((x3 - y) ** 2))
        else:
            ssd.append(sum((x1 - y) ** 2))
    return sum(ssd)

sum(df.apply(squared_diff, axis=1))
For m=1, Out[400]: 7636.305551658377
By assigning an arbitrary value of m to each category in Stages, I want to test a cost function. Let m1 = 2 and m2 = 3.
Here is how I attempted it.
def stages(row):
    Stages = row.Stages
    if Stages == 0:
        return np.array([1 - (row.ExpectedGoals_Team1*m + row.ExpectedGoals_Team2*m), row.ExpectedGoals_Team1*m, row.ExpectedGoals_Team2*m])
    elif Stages == 1:
        return np.array([1 - (row.ExpectedGoals_Team1*m1 + row.ExpectedGoals_Team2*m1), row.ExpectedGoals_Team1*m1, row.ExpectedGoals_Team2*m1])
    else:
        return np.array([1 - (row.ExpectedGoals_Team1*m2 + row.ExpectedGoals_Team2*m2), row.ExpectedGoals_Team1*m2, row.ExpectedGoals_Team2*m2])

df.apply(squared_diff, Stages, axis=1)
TypeError: apply() got multiple values for argument 'axis'

df.apply(squared_diff, Stages, axis=1) raises an error because the second positional parameter of apply is axis, so Stages is interpreted as axis=Stages, and then axis=1 is passed a second time as a keyword argument.
To address the problem, you can first store the desired m in a separate column:
df['m'] = df.Stages.apply(lambda x: 1 if x == 0 else 2 if x == 1 else 3)
Then replace this line in your squared_diff function
y = np.array([1 - (row.ExpectedGoals_Team1*m + row.ExpectedGoals_Team2*m), row.ExpectedGoals_Team1*m, row.ExpectedGoals_Team2*m])
with
y = np.array([1 - (row.ExpectedGoals_Team1*row.m + row.ExpectedGoals_Team2*row.m), row.ExpectedGoals_Team1*row.m, row.ExpectedGoals_Team2*row.m])
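For reference, a minimal sketch of the full revised squared_diff, assuming df['m'] has been created as above (everything else is unchanged from the question):
def squared_diff(row):
    ssd = []
    Home = row.Home
    Away = row.Away
    # use the per-row multiplier stored in the 'm' column instead of the global m
    y = np.array([1 - (row.ExpectedGoals_Team1*row.m + row.ExpectedGoals_Team2*row.m),
                  row.ExpectedGoals_Team1*row.m,
                  row.ExpectedGoals_Team2*row.m])
    for k in range(total_timeslot):
        if k in Home:
            ssd.append(sum((x2 - y) ** 2))
        elif k in Away:
            ssd.append(sum((x3 - y) ** 2))
        else:
            ssd.append(sum((x1 - y) ** 2))
    return sum(ssd)

sum(df.apply(squared_diff, axis=1))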


Converting pandas.core.series.Series to dataframe with multiple column names

My toy example is as follows:
import numpy as np
from sklearn.datasets import load_iris
import pandas as pd
### prepare data
Xy = np.c_[load_iris(return_X_y=True)]
mycol = ['x1','x2','x3','x4','group']
df = pd.DataFrame(data=Xy, columns=mycol)
dat = df.iloc[:100,:] #only consider two species
dat['group'] = dat.group.apply(lambda x: 1 if x == 0 else 2)  # two species means two groups
dat.shape
dat.head()
### Linear discriminant analysis procedure
G1 = dat.iloc[:50,:-1]; x1_bar = G1.mean(); S1 = G1.cov(); n1 = G1.shape[0]
G2 = dat.iloc[50:,:-1]; x2_bar = G2.mean(); S2 = G2.cov(); n2 = G2.shape[0]
Sp = (n1-1)/(n1+n2-2)*S1 + (n2-1)/(n1+n2-2)*S2
a = np.linalg.inv(Sp).dot(x1_bar-x2_bar); u_bar = (x1_bar + x2_bar)/2
m = a.T.dot(u_bar); print("Linear discriminant boundary is {} ".format(m))
def my_lda(x):
    y = a.T.dot(x)
    pred = 1 if y >= m else 2
    return y.round(4), pred
xx = dat.iloc[:,:-1]
xxa = xx.agg(my_lda, axis=1)
xxa.shape
type(xxa)
xxa is a pandas.core.series.Series with shape (100,). Note that each element of xxa is a tuple of two values; I want to convert xxa to a pd.DataFrame with 100 rows x 2 columns, so I try
xxa_df1 = pd.DataFrame(data=xxa, columns=['y','pred'])
which gives ValueError: Shape of passed values is (100, 1), indices imply (100, 2).
Then I continue to try
xxa2 = xxa.to_frame()
# xxa2 = pd.DataFrame(xxa) #equals `xxa.to_frame()`
xxa_df2 = pd.DataFrame(data=xxa2, columns=['y','pred'])
but xxa_df2 is all NaN, with 100 rows x 2 columns. What should I do next?
Let's try Series.tolist()
xxa_df1 = pd.DataFrame(data=xxa.tolist(), columns=['y','pred'])
print(xxa_df1)
y pred
0 42.0080 1
1 32.3859 1
2 37.5566 1
3 31.0958 1
4 43.5050 1
.. ... ...
95 -56.9613 2
96 -61.8481 2
97 -62.4983 2
98 -38.6006 2
99 -61.4737 2
[100 rows x 2 columns]
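An alternative sketch, assuming the same xxa series of (y, pred) tuples: let pandas expand each tuple into columns and then rename them. Unlike tolist(), this also keeps the original index.
xxa_df_alt = xxa.apply(pd.Series)   # expand each (y, pred) tuple into two columns
xxa_df_alt.columns = ['y', 'pred']  # assign meaningful column names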

2D Gaussian oversampling over large dataframe

I currently have a dataframe in the following format:
step tag_id x_pos y_pos
1 1 5 3
1 2 3 4
2 1 2 2
2 3 1 6
.........................
.........................
N 1 5 7
For each row in the df, I am aiming to add an additional m rows oversampled from a Gaussian distribution for the x and y values (independently). Thus, a df with N = 100 and m = 10 would result in a df of length 1,100, including the original and oversampled values.
The code I have for this works, but it is extremely slow over a large dataset (N > 100k). There are many operations (creating new arrays/ dfs, use of itertuples, etc.) that I'm sure are hampering performance; I would appreciate any help as to how I can improve the performance so I can generate higher m values over the whole dataset. For instance: input data is from a pandas dataframe, but the multi-variate normal function operates on numpy arrays. Is there a more natural way to implement this through pandas without the copying between numpy arrays and dataframes? Thanks!
Reproducible example:
import pandas as pd
import numpy as np
import random
def gaussianOversample2(row, n):
    sigma = 2
    mean_x = float(getattr(row, 'x_pos'))
    mean_y = float(getattr(row, 'y_pos'))
    step = getattr(row, 'step')
    tag_id = getattr(row, 'tag_id')
    sigma = np.array([1, 1])
    cov = np.diag(sigma ** 2)
    x, y = np.random.multivariate_normal([mean_x, mean_y], cov, n).T
    x = np.concatenate(([mean_x], x))
    y = np.concatenate(([mean_y], y))
    steps = np.empty(n+1)
    tags = np.empty(n+1)
    steps.fill(step)
    tags.fill(tag_id)
    return x, y, steps, tags

def oversampleDf(df, n):
    oversampled_arr = np.empty((0, 4), float)
    # input df has columns step, tag_id, x_pos, y_pos
    data = pd.DataFrame(columns=df.columns)
    count = 0
    for row in df.itertuples(index=False):
        count = count + 1
        temp = np.zeros((len(row), n+1))
        oversample_x, oversample_y, steps, tags = gaussianOversample2(row, n)
        temp[0] = steps
        temp[1] = tags
        temp[2] = oversample_x
        temp[3] = oversample_y
        temp = pd.DataFrame(temp.T, columns=df.columns)
        data = data.append(temp)
        if count % 1000 == 0:
            print("Row: ", count)
    return data
df = pd.DataFrame([[1, 1, 5, 3], [1, 2, 3, 4], [2, 1, 2, 2], [2, 3, 1, 6]], columns=['step', 'tag_id', 'x_pos', 'y_pos'])
res = oversampleDf(df, 20)
"""
# Result should be:
step tag_id x_pos y_pos
0 1.0 1.0 5.000000 3.000000
1 1.0 1.0 3.423492 3.886602
2 1.0 1.0 5.404581 2.177559
3 1.0 1.0 4.023274 2.883737
4 1.0 1.0 3.390710 3.038782
.. ... ... ... ...
16 2.0 3.0 1.894151 5.510321
17 2.0 3.0 1.110932 5.281578
18 2.0 3.0 1.623538 4.529825
19 2.0 3.0 -0.576756 7.476872
20 2.0 3.0 -0.866123 5.898048
"""
This is the solution I have found for myself; it is more of a workaround than a faster technique. Instead of appending DataFrames, I write out to a csv file, which I then read back in once complete, like so:
def gaussianOversample3(row, n):
    mean_x = float(getattr(row, 'x_pos'))
    mean_y = float(getattr(row, 'y_pos'))
    step = getattr(row, 'step')
    tag_id = getattr(row, 'tag_id')
    sigma = np.array([1, 1])
    cov = np.diag(sigma ** 2)
    x, y = np.random.multivariate_normal([mean_x, mean_y], cov, n).T
    x = np.concatenate(([mean_x], x))
    y = np.concatenate(([mean_y], y))
    steps = np.empty(n+1)
    tags = np.empty(n+1)
    steps.fill(step)
    tags.fill(tag_id)
    pd.DataFrame(data=np.column_stack((steps, tags, x, y))).to_csv("oversample.csv", mode='a', header=False)

def oversampleDf2(df, n):
    filename = "oversample.csv"
    d = pd.DataFrame(list())
    d.to_csv(filename)
    #count = 0
    for row in df.itertuples(index=False):
        #count = count + 1
        gaussianOversample3(row, n)
        #if count % 10000 == 0:
        #    print("Row: ", count)
Because of how it is reading the file, I have to do the following:
oversampleDf2(defensive_df2, num_oversamples)
oversampled_df = pd.read_csv("oversample_10.csv", sep= ' ')
oversampled_df.columns = ['col']
oversampled_df = oversampled_df.col.str.split(",",expand=True)
oversampled_df.columns = ['temp', 'step', 'tag_id', 'x_pos', 'y_pos']
oversampled_df = oversampled_df.drop(['temp'], axis = 1)
oversampled_df = oversampled_df.astype(float)
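For comparison, a fully vectorized sketch of the oversampling (my own suggestion, not from the original post, and assuming the same column names as the example): repeat every row m times with index.repeat, add independent unit-variance Gaussian noise to the repeated x/y columns in a single call, and concatenate with the original frame. This avoids both the per-row loop and the repeated DataFrame appends.
def oversample_vectorized(df, m, sigma=1.0):
    # repeat every row m times, then jitter x_pos/y_pos with independent Gaussian noise
    reps = df.loc[df.index.repeat(m)].copy()
    noise = np.random.normal(scale=sigma, size=(len(reps), 2))
    reps[['x_pos', 'y_pos']] = reps[['x_pos', 'y_pos']].to_numpy() + noise
    # keep the originals plus the m oversampled copies of each row
    return pd.concat([df, reps], ignore_index=True)

oversampled = oversample_vectorized(df, 20)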

How to optimize code such as the following in Python?

I have a user-defined metric to implement as follows:
from datetime import timedelta
import pandas as pd

def metric(pred: pd.DataFrame, valid: pd.DataFrame):
    date_begin = valid.dt.min()
    date_end = valid.dt.max()
    x = valid[valid.label == 1].dt.min()
    # p
    p_n_tpp_df = valid[(valid.dt >= x) &
                       (valid.dt <= x + timedelta(days=30)) &
                       (valid.label == 1)]
    p_n_pp_df = valid[(valid.dt >= date_begin + timedelta(days=30)) &
                      (valid.dt <= date_end + timedelta(days=30)) &
                      (valid.label == 1)]
    p_n_tpp = len([x for x in pred.serial_number.values
                   if x in p_n_tpp_df.serial_number.unique()])
    p_n_pp = len([x for x in pred.serial_number.values
                  if x in p_n_pp_df.serial_number.unique()])
    p = p_n_tpp / p_n_pp
    print('p: ', p)
    # r
    p_n_tpr_df = valid[(valid.dt >= date_begin - timedelta(days=30)) &
                       (valid.dt <= date_end - timedelta(days=30)) &
                       (valid.label == 1)]
    p_n_pr_df = valid[(valid.dt >= date_begin) &
                      (valid.dt <= date_end) &
                      (valid.label == 1)]
    p_n_tpr = len([x for x in pred.serial_number.values
                   if x in p_n_tpr_df.serial_number.unique()])
    p_n_pr = len([x for x in pred.serial_number.values
                  if x in p_n_pr_df.serial_number.unique()])
    r = p_n_tpr / p_n_pr
    print('r: ', r)
    m = 2 * p * r / (p + r)
    return m
pred and valid have the same columns, and their dt values have no intersection.
All of the serial_number values in valid are a subset of the serial_number values in pred.
The label column has only 2 values: 0 or 1.
Here are samples of pred and valid:
print(pred.head(3))
serial_number dt label
0 123 2011-03-21 1
1 52 2011-03-22 0
2 12 2011-03-01 1
..., ...
print(pred.info())
Int64Index: 10000000 entries,
Data columns (total 3 columns):
serial_number int32
dt datetime64[ns]
label int8
..., ...
print(valid.head(3))
serial_number dt label
0 324 2011-04-22 1
1 52 2011-04-22 0
2 14 2011-04-01 1
..., ...
print(valid.info())
Int64Index: 10000000 entries,
Data columns (total 3 columns):
serial_number int32
dt datetime64[ns]
label int8
Each input pd.DataFrame has about 10,000,000 samples and 3 features.
When I use this metric, it is really slow: it takes more than 2 hours on an Intel 9600KF.
So I am wondering how to optimize such code on time cost.
Thanks in advance.
Here is the biggest performance win in the code that you have:
Numpy set logic
len([x for x in pred.serial_number.values\
if x in p_n_tpr_df.serial_number.unique()])
Any line that looks like this is getting the size of the set intersection of pred.serial_number and p_n_tpr_df.serial_number. Using numpy rather than the list comprehension and the unique call will save substantial compute time:
intersect_size = np.intersect1d(pred.serial_number.values,
p_n_tpr_df.serial_number.values).shape[0]
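As a sketch of how that substitution would look inside the metric (reusing the variable names from the question), each of the four counts can be computed the same way:
pred_serials = pred.serial_number.values

p_n_tpp = np.intersect1d(pred_serials, p_n_tpp_df.serial_number.values).shape[0]
p_n_pp = np.intersect1d(pred_serials, p_n_pp_df.serial_number.values).shape[0]
p_n_tpr = np.intersect1d(pred_serials, p_n_tpr_df.serial_number.values).shape[0]
p_n_pr = np.intersect1d(pred_serials, p_n_pr_df.serial_number.values).shape[0]

# Note: np.intersect1d counts unique matches. If pred.serial_number contains
# duplicates that should each be counted (as in the original list comprehension),
# use np.isin(pred_serials, p_n_tpp_df.serial_number.unique()).sum() instead.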

How to apply euclidean distance function to a groupby object in pandas dataframe?

I have a set of objects and their positions over time. I would like to get the average distance between objects for each time point. An example dataframe is as follows:
time = [0, 0, 0, 1, 1, 2, 2]
x = [216, 218, 217, 280, 290, 130, 132]
y = [13, 12, 12, 110, 109, 3, 56]
car = [1, 2, 3, 1, 3, 4, 5]
df = pd.DataFrame({'time': time, 'x': x, 'y': y, 'car': car})
df
x y car
time
0 216 13 1
0 218 12 2
0 217 12 3
1 280 110 1
1 290 109 3
2 130 3 4
2 132 56 5
The end result I would like to have is:
df2
average distance
between cars
time
0 1.55
1 10.05
2 53.04
Any idea on how to proceed? I've been trying to apply the scipy.spatial.distance functions to the dataframe, but I'm not sure how to apply them to df.groupby('time') and then get the mean value of all those distances.
Any help appreciated!
You could pass an array of the points to scipy.spatial.distance.pdist and it will calculate all pair-wise distances between Xi and Xj for i > j. Then take the mean.
import numpy as np
from scipy import spatial
df.groupby('time').apply(lambda x: spatial.distance.pdist(np.array(list(zip(x.x, x.y)))).mean())
Outputs:
time
0 1.550094
1 10.049876
2 53.037722
dtype: float64
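A slightly more direct way to build the point array, equivalent to the line above (a sketch on the same df), is to slice the two columns instead of zipping them:
df.groupby('time').apply(lambda g: spatial.distance.pdist(g[['x', 'y']].to_numpy()).mean())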
For me, using apply or a for loop does not make much difference:
l1 = []
l2 = []
for y, x in df.groupby('time'):
    v = np.triu(spatial.distance.cdist(x[['x', 'y']].values, x[['x', 'y']].values), k=0)
    v = np.ma.masked_equal(v, 0)
    l2.append(np.mean(v))
    l1.append(y)
pd.DataFrame({'ave': l2}, index=l1)
Out[250]:
ave
0 1.550094
1 10.049876
2 53.037722
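A small variant of the loop above (my sketch, not part of the original answer) avoids the masking step by taking only the strict upper triangle of the distance matrix, so each pair is counted exactly once:
l1 = []
l2 = []
for t, g in df.groupby('time'):
    d = spatial.distance.cdist(g[['x', 'y']].values, g[['x', 'y']].values)
    iu = np.triu_indices(len(g), k=1)  # strict upper triangle: each pair once
    l2.append(d[iu].mean())
    l1.append(t)
pd.DataFrame({'ave': l2}, index=l1)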
Building this up from first principles:
For each point at index n, it is necessary to compute the distance to all the points with index > n.
If the distance between two points is given by the formula:
np.sqrt((x0 - x1)**2 + (y0 - y1)**2)
then for an array of points in a dataframe, we can get all the distances & then calculate its mean:
distances = []
for i in range(len(df)-1):
    distances += np.sqrt((df.x[i+1:] - df.x[i])**2 + (df.y[i+1:] - df.y[i])**2).tolist()
np.mean(distances)
Expressing the same logic using pd.concat and a couple of helper functions:
def diff_sq(x, i):
    return (x.iloc[i+1:] - x.iloc[i])**2

def dist_df(x, y, i):
    d_sq = diff_sq(x, i) + diff_sq(y, i)
    return np.sqrt(d_sq)

def avg_dist(df):
    return pd.concat([dist_df(df.x, df.y, i) for i in range(len(df)-1)]).mean()
Then it is possible to use the avg_dist function with groupby:
df.groupby('time').apply(avg_dist)
# outputs:
time
0 1.550094
1 10.049876
2 53.037722
dtype: float64
You could also use the itertools package to define your own function as follows:
import itertools
import numpy as np
def combinations(series):
    l = list()
    for item in itertools.combinations(series, 2):
        l.append((item[0] - item[1])**2)
    return l
df2 = df.groupby('time').agg(combinations)
df2['avg_distance'] = [np.mean(np.sqrt(pd.Series(df2.iloc[k,0]) +
pd.Series(df2.iloc[k,1]))) for k in range(len(df2))]
df2.avg_distance.to_frame()
Then, the output is:
avg_distance
time
0 1.550094
1 10.049876
2 53.037722

Take the sum of every N rows in a pandas series

Suppose
s = pd.Series(range(50))
0 0
1 1
2 2
3 3
...
48 48
49 49
How can I get a new series that consists of the sum of every n rows?
The expected result, when n = 5, is as below:
0 10
1 35
2 60
3 85
...
8 210
9 235
It can of course be accomplished using loc or iloc and a plain Python loop, but I believe it could be done more simply in a pandas way.
Also, this is a very simplified example (I don't expect an explanation of the sequence :). The actual series I'm working with has a time index, with the number of events that occurred in each second as the values.
GroupBy.sum
N = 5
s.groupby(s.index // N).sum()
0 10
1 35
2 60
3 85
4 110
5 135
6 160
7 185
8 210
9 235
dtype: int64
Chunk the index into groups of 5 and group accordingly.
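Since the question mentions that the real data has a time index with per-second event counts, a resample-based variant may read more naturally (a sketch, assuming a DatetimeIndex at 1-second frequency; not part of the original answer):
s_time = pd.Series(range(50), index=pd.date_range('2021-01-01', periods=50, freq='S'))
s_time.resample('5S').sum()   # sum of events in each 5-second bucket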
numpy.reshape + sum
If the size is a multiple of N (or 5), you can reshape and add:
s.values.reshape(-1, N).sum(1)
# array([ 10, 35, 60, 85, 110, 135, 160, 185, 210, 235])
numpy.add.at
b = np.zeros(len(s) // N)
np.add.at(b, s.index // N, s.values)
b
# array([ 10., 35., 60., 85., 110., 135., 160., 185., 210., 235.])
The most efficient solution I can think of is f1() in my example below. It is orders of magnitude faster than using the groupby in the other answer.
Note that f1() doesn't work when the length of the array is not an exact multiple, e.g. if you want to sum a 3-item array every 2 items.
For those cases, you can use f1v2():
f1v2( [0,1,2,3,4] ,2 ) = [1,5,4]
My code is below. I have used timeit for the comparisons:
import timeit
import numpy as np
import pandas as pd
def f1(a, x):
    if isinstance(a, pd.Series):
        a = a.to_numpy()
    return a.reshape((int(a.shape[0]/x), int(x))).sum(1)

def f2(myarray, x):
    return [sum(myarray[n: n+x]) for n in range(0, len(myarray), x)]

def f3(myarray, x):
    s = pd.Series(myarray)
    out = s.groupby(s.index // x).sum()
    return out

def f1v2(a, x):
    if isinstance(a, pd.Series):
        a = a.to_numpy()
    mod = a.shape[0] % x
    if mod != 0:
        excl = a[-mod:]
        keep = a[: len(a) - mod]
        out = keep.reshape((int(keep.shape[0]/x), int(x))).sum(1)
        out = np.hstack((out, excl.sum()))  # the leftover elements are summed last
    else:
        out = a.reshape((int(a.shape[0]/x), int(x))).sum(1)
    return out
a = np.arange(0,1e6)
out1 = f1(a,2)
out2 = f2(a,2)
out3 = f3(a,2)
t1 = timeit.Timer( "f1(a,2)" , globals = globals() ).repeat(repeat = 5, number = 2)
t1v2 = timeit.Timer( "f1v2(a,2)" , globals = globals() ).repeat(repeat = 5, number = 2)
t2 = timeit.Timer( "f2(a,2)" , globals = globals() ).repeat(repeat = 5, number = 2)
t3 = timeit.Timer( "f3(a,2)" , globals = globals() ).repeat(repeat = 5, number = 2)
resdf = pd.DataFrame(index = ['min time'])
resdf['f1'] = [min(t1)]
resdf['f1v2'] = [min(t1v2)]
resdf['f2'] = [min(t2)]
resdf['f3'] = [min(t3)]
#the docs explain why it makes more sense to take the min than the avg
resdf = resdf.transpose()
resdf['% difference vs fastest'] = (resdf / resdf.min() - 1) * 100
b = np.array( [0,1,2,4,5,6,7] )
out1v2 = f1v2(b,2)
