So I have a dataframe df_hist from which I'm sampling one row per group pat_mrn_id. It's pretty slow, and there has to be a vectorized way of doing this.
Code example below:
import random
import numpy as np
import pandas as pd
N = int(1e8)
A_list = np.random.randint(1, 100, N)
B_list = np.random.randint(1, 100, N)
mrns = [random.randint(0,1000) for i in range(N)]
d = {'pat_mrn_id':mrns,'a_list':A_list,'b_list':B_list}
df_hist = pd.DataFrame(data=d)
df_hist.groupby('pat_mrn_id').apply(lambda x: x.sample(1)).reset_index(drop=True)
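A couple of approaches that are usually much faster than groupby().apply() (a sketch; GroupBy.sample assumes pandas >= 1.1):
# Option 1: draw one row per group directly
sampled = df_hist.groupby('pat_mrn_id').sample(n=1)
# Option 2: shuffle the whole frame once, then keep the first row of each group
sampled = df_hist.sample(frac=1).groupby('pat_mrn_id').head(1)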
I have a pandas dataframe and I am trying to estimate a new timeseries V(t) based on the values of an existing timeseries B(t). I have written a minimal reproducible example to generate a sample dataframe as follows:
import pandas as pd
import numpy as np
lenb = 5000
lenv = 200
l = 5
B = pd.DataFrame({'a': np.arange(0, lenb, 1), 'b': np.arange(0, lenb, 1)},
index=pd.date_range('2022-01-01', periods=lenb, freq='2s'))
I want to calculate V(t) for all times 't' in the timeseries B as:
V(t) = (B(t-2*l) + 4*B(t-l) + 6*B(t) + 4*B(t+l) + 1*B(t+2*l)) / 16
How can I perform this calculation in a vectorized manner in pandas? Let's say that l=5.
Would this be the correct way to do it:
def V_t(B, l):
    V = (B.shift(-2*l) + 4*B.shift(-l) + 6*B + 4*B.shift(l) + B.shift(2*l)) / 16
    return V
I would have done it as you suggested in your latest edit. Here is an alternative that avoids having to type all the shift commands for an arbitrarily long list of factors/multipliers:
import numpy as np
import pandas as pd

def V_t(B, l):
    X = [1, 4, 6, 4, 1]        # coefficients from the formula (they sum to 16)
    Y = [-2*l, -l, 0, l, 2*l]  # corresponding shifts
    return pd.DataFrame(np.add.reduce([x*B.shift(y) for x, y in zip(X, Y)])/16,
                        index=B.index, columns=B.columns)
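As a quick sanity check (a sketch reusing B and l=5 from the question), the generalized version should match the explicit shifts:
def V_t_explicit(B, l):
    return (B.shift(-2*l) + 4*B.shift(-l) + 6*B + 4*B.shift(l) + B.shift(2*l)) / 16

# allclose with equal_nan=True treats the identical NaN edges as equal
print(np.allclose(V_t(B, 5), V_t_explicit(B, 5), equal_nan=True))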
I have a pandas dataframe which has columns A & B.
I just want to plot a distribution graph of the percentage difference between columns A & B.
A B
1 1.051990e+10 1.051990e+04
2 1.051990e+10 1.051990e+04
5 4.841800e+10 1.200000e+10
8 2.327700e+10 2.716000e+10
9 1.204900e+10 2.100000e+08
The distribution graph should show, for example, how many records have a 10% difference, how many have a 20% difference, and so on.
I tried the following:
def percCal(x,y):
    return (x-y)*100/x

df['perc'] = df.apply(lambda x: percCal(df['A'], df['B']), axis=1)
This is not working; as I'm a newbie, please help.
You don't need the lambda operation.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df1 = pd.DataFrame(np.random.randint(1, 10, (20, 2)), columns=['A', 'B'])
def percCal(x,y):
    return (x-y)*100/x
Alternatively, just manipulate the columns directly:
df1['diff'] = (df1['A'] - df1['B']) * 100 / df1['A']
Apply the function and plot:
df1['diff'] = percCal(df1['A'], df1['B'])
df1['diff'].plot(kind='density')
df['perc'] = (df['A'] - df['B']) * 100 / df['A']
def percCal(x,y):
    return (x-y)*100/x
df['perc'] = df.apply(lambda x: percCal(x['A'], x['B']), axis=1)
Change df to x in the lambda. In this case you are giving the function the data x, which means percCal receives what is in the row of the dataframe; when you use df you are actually passing the whole dataframe, and the function returns a dataframe rather than a value. But please check your code: if x in the function can be 0, that is a problem (division by zero).
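For example, a minimal sketch of one way to guard against a zero denominator (the numbers here are made up):
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [10.0, 0.0, 20.0], 'B': [8.0, 5.0, 25.0]})
# Replace a zero denominator with NaN so the division yields NaN instead of inf
df['perc'] = (df['A'] - df['B']) * 100 / df['A'].replace(0, np.nan)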
Think this is what you are looking for:
# Dummy df
import pandas as pd

data = [
[1.051990e+10, 1.051990e+04],
[1.051990e+10, 1.051990e+04],
[4.841800e+10, 1.200000e+10],
[2.327700e+10, 2.716000e+10],
[1.204900e+10, 2.100000e+08],
]
cols = ['A', 'B']
df2 = pd.DataFrame(data, columns=cols)
# Solution
import seaborn as sns
df2['pct_diff'] = (df2['A'] - df2['B']) / df2['A']
sns.distplot(df2['pct_diff']);
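Note that distplot is deprecated in newer seaborn releases; if it is unavailable, something like this should give a similar plot:
sns.histplot(df2['pct_diff'], kde=True)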
I am trying to subset a pandas dataframe using two conditions. However, I am not getting the same results as when done with numpy. What am I doing wrong?
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(20,120,101)
y = np.linspace(-45,25,101)
xs,ys = np.meshgrid(x,y)
idx = (xs >=100) & (ys >= 0)
plt.scatter(xs,ys,s=2,c='b')
plt.scatter(xs[idx],ys[idx],s=2,c='r')
I need to remove the red block from my dataset, which I can do with numpy by using:
plt.scatter(xs[~idx],ys[~idx],s=2,c='b')
How do I replicate this with a pandas dataframe?
I've tried using the same logic as I used above:
data = {'x':x,'y':y}
df = pd.DataFrame(data)
mask = (df.x >=100) & (df.y >= 0)
df2 = df[~mask]
I've also tried using loc:
df.loc[(df.x >=100) & (df.y >= 0),['x','y']] = np.nan
Both of these methods give the following result:
How do I replicate the results from numpy?
Many thanks.
You don't obtain the same result because you didn't create all the coordinate pairs before passing them to pandas. Here is a quick solution:
data = {'x':xs.flatten(),'y':ys.flatten()}
df = pd.DataFrame(data)
mask = (df.x >=100) & (df.y >= 0)
df2 = df[~mask]
plt.scatter(df2.x,df2.y,s=2,c='b')
Flatten reshapes your arrays to a single dimension so that they can be used to construct a DataFrame containing coordinate pairs rather than lists.
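As a small illustration of what flatten does here (an example, not part of the original answer):
xs, ys = np.meshgrid(np.arange(3), np.arange(2))
print(xs.shape)            # (2, 3)
print(xs.flatten().shape)  # (6,) - one entry per (x, y) pair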
Output:
Edit: same result, but with a dataframe containing x and y.
Split the df into chunks:
data_x = np.linspace(20,120,101)
data_y = np.linspace(-45,25,101)
dataframe = pd.DataFrame({'x':data_x,'y':data_y})
chunk_size = 25
dfs = [dataframe[i:i+chunk_size] for i in range(0,dataframe.shape[0],chunk_size)]
Define the function that will give you the points you are interested in. Two loops are needed because you have to cover every combination of x and y values:
def generatorPoints(dfs):
    for i in range(len(dfs)):
        x = dfs[i].x
        for j in range(len(dfs)):
            y = dfs[j].y
            xs, ys = np.meshgrid(x, y)
            idx = (xs >= 100) & (ys >= 0)
            yield xs[~idx], ys[~idx]
x, y = [], []
for xs, ys in generatorPoints(dfs):
    x.extend(xs)
    y.extend(ys)
plt.scatter(x,y,s=2,c='b')
This gives the same result as the previous code. There is certainly room for optimization, but it is a start for your request :).
I have this code which works fine and gives me the result I am looking for. It loops through a list of window sizes to create rolling aggregates for each metric in sum_metrics_list, min_metrics_list and max_metrics_list.
# create the rolling aggregations for each window
for window in constants.AGGREGATION_WINDOW:
# get the sum and count sums
sum_metrics_names_list = [x[6:] + "_1_" + str(window) for x in sum_metrics_list]
adt_df[sum_metrics_names_list] = adt_df.groupby('athlete_id')[sum_metrics_list].apply(lambda x : x.rolling(center = False, window = window, min_periods = 1).sum())
# get the min of mins
min_metrics_names_list = [x[6:] + "_1_" + str(window) for x in min_metrics_list]
adt_df[min_metrics_names_list] = adt_df.groupby('athlete_id')[min_metrics_list].apply(lambda x : x.rolling(center = False, window = window, min_periods = 1).min())
# get the max of max
max_metrics_names_list = [x[6:] + "_1_" + str(window) for x in max_metrics_list]
adt_df[max_metrics_names_list] = adt_df.groupby('athlete_id')[max_metrics_list].apply(lambda x : x.rolling(center = False, window = window, min_periods = 1).max())
It works well on small datasets but as soon as I run it on my full data with >3000 metrics and 40 windows it becomes very slow. Is there any way to optimise this code?
The benchmark (and code) below suggests that you can save a significant amount of time by using
df.groupby(...).rolling()
instead of
df.groupby(...)[col].apply(lambda x: x.rolling(...))
The main time-saving idea here is to try to apply vectorized functions (such as sum) to the largest possible array (or DataFrame) at one time (with one function call) instead of many tiny function calls.
df.groupby(...).rolling().sum() calls sum on each (grouped) sub-DataFrame. It
can compute the rolling sums for all the columns with one call.
You could use df[sum_metrics_list+[key]].groupby(key).rolling().sum() to compute the rolling sum on the sum_metrics_list columns.
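For example, a rough sketch of how the sum metrics could be computed this way (names taken from your code, not tested against your data):
for window in constants.AGGREGATION_WINDOW:
    sum_metrics_names_list = [x[6:] + "_1_" + str(window) for x in sum_metrics_list]
    rolled = (adt_df.groupby('athlete_id')[sum_metrics_list]
                    .rolling(window=window, min_periods=1)
                    .sum()
                    .reset_index(level=0, drop=True))  # drop the athlete_id level to realign on the original index
    adt_df[sum_metrics_names_list] = rolled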
In contrast, df.groupby(...)[col].apply(lambda x: x.rolling(...).sum()) calls sum on a single column of each (grouped) sub-DataFrame. Since you have >3000 metrics, you end up calling df.groupby(...)[col].rolling().sum() (or min or max) more than 3000 times.
Of course, this pseudo-logic of counting the number of calls is only a heuristic which may guide you in the direction of faster code. The proof is in the pudding:
import collections
import timeit
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def make_df(nrows=100, ncols=3):
    seed = 2018
    np.random.seed(seed)
    df = pd.DataFrame(np.random.randint(10, size=(nrows, ncols)))
    df['athlete_id'] = np.random.randint(10, size=nrows)
    return df
def orig(df, key='athlete_id'):
    columns = list(df.columns.difference([key]))
    result = pd.DataFrame(index=df.index)
    for window in range(2, 4):
        for col in columns:
            colname = 'sum_col{}_winsize{}'.format(col, window)
            result[colname] = df.groupby(key)[col].apply(lambda x: x.rolling(
                center=False, window=window, min_periods=1).sum())
            colname = 'min_col{}_winsize{}'.format(col, window)
            result[colname] = df.groupby(key)[col].apply(lambda x: x.rolling(
                center=False, window=window, min_periods=1).min())
            colname = 'max_col{}_winsize{}'.format(col, window)
            result[colname] = df.groupby(key)[col].apply(lambda x: x.rolling(
                center=False, window=window, min_periods=1).max())
    result = pd.concat([df, result], axis=1)
    return result
def alt(df, key='athlete_id'):
    """
    Call rolling on the whole DataFrame, not each column separately
    """
    columns = list(df.columns.difference([key]))
    result = [df]
    for window in range(2, 4):
        rolled = df.groupby(key, group_keys=False).rolling(
            center=False, window=window, min_periods=1)
        new_df = rolled.sum().drop(key, axis=1)
        new_df.columns = ['sum_col{}_winsize{}'.format(col, window) for col in columns]
        result.append(new_df)
        new_df = rolled.min().drop(key, axis=1)
        new_df.columns = ['min_col{}_winsize{}'.format(col, window) for col in columns]
        result.append(new_df)
        new_df = rolled.max().drop(key, axis=1)
        new_df.columns = ['max_col{}_winsize{}'.format(col, window) for col in columns]
        result.append(new_df)
    df = pd.concat(result, axis=1)
    return df
timing = collections.defaultdict(list)
ncols = [3, 10, 20, 50, 100]
for n in ncols:
    df = make_df(ncols=n)
    timing['orig'].append(timeit.timeit(
        'orig(df)',
        'from __main__ import orig, alt, df',
        number=10))
    timing['alt'].append(timeit.timeit(
        'alt(df)',
        'from __main__ import orig, alt, df',
        number=10))
plt.plot(ncols, timing['orig'], label='using groupby/apply (orig)')
plt.plot(ncols, timing['alt'], label='using groupby/rolling (alternative)')
plt.legend(loc='best')
plt.xlabel('number of columns')
plt.ylabel('seconds')
print(pd.DataFrame(timing, index=pd.Series(ncols, name='ncols')))
plt.show()
and yields these timeit benchmarks:
            alt       orig
ncols
3      0.871695   0.996862
10     0.991617   3.307021
20     1.168522   6.602289
50     1.676441  16.558673
100    2.521121  33.261957
The speed advantage of alt compared to orig seems to increase as the number of columns increases.
Is there an idiomatic way of getting the slope of a linear trend line fitted to the values in a DataFrame column? The data is indexed with a DatetimeIndex.
This should do it:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(100, 5), pd.date_range('2012-01-01', periods=100))
def trend(df):
    df = df.copy().sort_index()
    dates = df.index.to_julian_date().values[:, None]
    x = np.concatenate([np.ones_like(dates), dates], axis=1)
    y = df.values
    return pd.DataFrame(np.linalg.pinv(x.T.dot(x)).dot(x.T).dot(y).T,
                        df.columns, ['Constant', 'Trend'])
trend(df)
Using the same df above for its index:
df_sample = pd.DataFrame((df.index.to_julian_date() * 10 + 2) + np.random.rand(100) * 1e3,
df.index)
coef = trend(df_sample)
df_sample['trend'] = (coef.iloc[0, 1] * df_sample.index.to_julian_date() + coef.iloc[0, 0])
df_sample.plot(style=['.', '-'])
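If you only need the slope of a single column, a simpler sketch (assuming the same kind of DatetimeIndex) is to fit a first-degree polynomial with np.polyfit:
s = df_sample.iloc[:, 0]
x = s.index.to_julian_date()
# polyfit returns the coefficients highest power first: [slope, intercept]
slope, intercept = np.polyfit(x, s.values, 1)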