Sliding Window over Pandas Dataframe - python

I have a large pandas dataframe of time-series data.
I currently manipulate this dataframe to create a new, smaller dataframe that is the rolling average of every 10 rows, i.e. a rolling window technique, like this:
import numpy as np
import pandas as pd

def create_new_df(df):
    features = []
    x = df['X'].astype(float)
    i = x.index.values
    time_sequence = [i] * 10
    idx = np.array(time_sequence).T.flatten()[:len(x)]
    x = x.groupby(idx).mean()
    x.name = 'X'
    features.append(x)
    new_df = pd.concat(features, axis=1)
    return new_df
Code to test:
columns = ['X']
df_ = pd.DataFrame(columns=columns)
df_ = df_.fillna(0) # with 0s rather than NaNs
data = np.array([np.arange(20)]*1).T
df = pd.DataFrame(data, columns=columns)
test = create_new_df(df)
print test
Output:
X
0 4.5
1 14.5
However, I want the function to build the new dataframe using a sliding window with a 50% overlap.
So the output would look like this:
X
0 4.5
1 9.5
2 14.5
How can I do this?
Here's what I've tried:
from itertools import tee, izip

def window(iterable, size):
    iters = tee(iterable, size)
    for i in xrange(1, size):
        for each in iters[i:]:
            next(each, None)
    return izip(*iters)

for each in window(df, 20):
    print list(each)  # doesn't have the desired sliding window effect
Some might also suggest using the pandas rolling_mean() method, but if so, I can't see how to use this function with window overlap.
Any help would be much appreciated.

I think pandas rolling techniques are fine here. Note that starting with version 0.18.0 of pandas, you would use rolling().mean() instead of rolling_mean().
>>> df = pd.DataFrame({'x': range(30)})
>>> df = df.rolling(10).mean()  # version 0.18.0 syntax
>>> df[4::5]  # take every 5th row
       x
4    NaN
9    4.5
14   9.5
19  14.5
24  19.5
29  24.5
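If the leading NaN from the incomplete first window is unwanted and you prefer the fresh 0-based index shown in the question, a minimal follow-up sketch (using the same df as above) is to start slicing at the first full window and reset the index:
>>> df[9::5].reset_index(drop=True)
      x
0   4.5
1   9.5
2  14.5
3  19.5
4  24.5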

Is there a faster way to split a pandas dataframe into two complementary parts?

Good evening all,
I have a situation where I need to split a dataframe into two complementary parts based on the value of one feature.
What I mean by this is that for every row in dataframe 1, I need a complementary row in dataframe 2 that takes on the opposite value of that specific feature.
In my source dataframe, the feature I'm referring to is stored under column "773", and it can take on values of either 0.0 or 1.0.
I came up with the following code that does this sufficiently, but it is remarkably slow. It takes about a minute to split 10,000 rows, even on my all-powerful EC2 instance.
data = chunk.iloc[:, 1:776]
listy1 = []
listy2 = []
for i in range(0, len(data)):
    random_row = data.sample(n=1).iloc[0]
    listy1.append(random_row.tolist())
    if random_row["773"] == 0.0:
        x = data[data["773"] == 1.0].sample(n=1).iloc[0]
        listy2.append(x.tolist())
    else:
        x = data[data["773"] == 0.0].sample(n=1).iloc[0]
        listy2.append(x.tolist())
df1 = pd.DataFrame(listy1)
df2 = pd.DataFrame(listy2)
Note: I don't care about duplicate rows, because this data is being used to train a model that compares two objects to tell which one is "better."
Do you have some insight into why this is so slow, or any suggestions as to make this faster?
A key concept in efficient numpy/scipy/pandas coding is to use the library's vectorized functions whenever possible: process many rows at once instead of iterating over them explicitly, i.e. avoid for loops and .iterrows().
The implementation below is a little subtle in terms of indexing, but the vectorized thinking is straightforward:
1. Draw the main dataset at once.
2. For the complementary dataset: draw the 0-rows at once, draw the complementary 1-rows at once, and then put them into the corresponding rows at once.
Code:
import pandas as pd
import numpy as np
from datetime import datetime

np.random.seed(52)  # reproducibility

n = 10000
df = pd.DataFrame(
    data={
        "773": [0, 1] * int(n / 2),
        "dummy1": list(range(n)),
        "dummy2": list(range(0, 10 * n, 10))
    }
)

t0 = datetime.now()
print("Program begins...")

# 1. draw the main dataset
draw_idx = np.random.choice(n, n)  # repeatable draw
df_main = df.iloc[draw_idx, :].reset_index(drop=True)

# 2. draw the complementary dataset
# (1) count number of 1's and 0's
n_1 = np.count_nonzero(df["773"][draw_idx].values)
n_0 = n - n_1
# (2) split data for drawing
df_0 = df[df["773"] == 0].reset_index(drop=True)
df_1 = df[df["773"] == 1].reset_index(drop=True)
# (3) draw n_1 indexes in df_0 and n_0 indexes in df_1
idx_0 = np.random.choice(len(df_0), n_1)
idx_1 = np.random.choice(len(df_1), n_0)
# (4) broadcast the drawn rows into the complementary dataset
df_comp = df_main.copy()
mask_0 = (df_main["773"] == 0).values
df_comp.iloc[mask_0, :] = df_1.iloc[idx_1, :].values   # df_1 into mask_0
df_comp.iloc[~mask_0, :] = df_0.iloc[idx_0, :].values  # df_0 into ~mask_0

print(f"Program ends in {(datetime.now() - t0).total_seconds():.3f}s...")
Check:
print(df_main.head(5))

   773  dummy1  dummy2
0    0      28     280
1    1      11     110
2    1      13     130
3    1      23     230
4    0      86     860

print(df_comp.head(5))

   773  dummy1  dummy2
0    1      19     190
1    0      74     740
2    0      28     280   <- this row is complementary to df_main
3    0      60     600
4    1      37     370
Efficiency gain: 14.23 s -> 0.011 s (roughly 1300x).
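As a quick sanity check, you can verify that every row of df_comp takes the opposite "773" value of the corresponding row of df_main:
# each drawn pair must disagree on column "773"
assert (df_main["773"].values != df_comp["773"].values).all()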

combine pandas apply results as multiple columns in a single dataframe

Summary
Suppose you apply a function to a groupby object, so that g.apply(func) for every group g in df.groupby(...) gives you a series/dataframe. How do I combine these results into a single dataframe, with the group names as columns?
Details
I have a dataframe event_df that looks like this:
index event note time
0 on C 0.5
1 on D 0.75
2 off C 1.0
...
I want to create a sampling of the event for every note, and the sampling is done at times as given by t_df:
index t
0 0
1 0.5
2 1.0
...
So that I'd get something like this.
t C D
0 off off
0.5 on off
1.0 off on
...
What I've done so far:
def get_t_note_series(notedata_row, t_arr):
    """Return the time index in the sampling that corresponds to the event."""
    t_idx = np.argwhere(t_arr >= notedata_row['time']).flatten()[0]
    return t_idx

def get_t_for_gb(group, **kwargs):
    t_idxs = group.apply(get_t_note_series, args=(t_arr,), axis=1)
    t_idxs.rename('t_arr_idx', inplace=True)
    group_with_t = pd.concat([group, t_idxs], axis=1).set_index('t_arr_idx')
    print(group_with_t)
    return group_with_t

t_arr = np.arange(0, 10, 0.5)
t_df = pd.DataFrame({'t': t_arr}).rename_axis('t_arr_idx')
gb = event_df.groupby('note')
gb.apply(get_t_for_gb, **kwargs)
So what I get is a number of dataframes for each note, all of the same size (same as t_df):
t event
0 on
0.5 off
...
t event
0 off
0.5 on
...
How do I go from here to my desired dataframe, with each group corresponding to a column in a new dataframe, and the index being t?
EDIT: Sorry, I didn't take into account below that you rescale your time column, and I can't present a whole solution right now because I have to leave. But I think you could do the rescaling by using pandas.merge_asof on your two dataframes to get the nearest "rescaled" time, and then try the code below on the merged dataframe. I hope this is what you wanted.
import pandas as pd
import io

sio = io.StringIO("""index event note time
0 on C 0.5
1 on D 0.75
2 off C 1.0""")
df = pd.read_csv(sio, sep=r'\s+', index_col=0)

df.groupby(['time', 'note']).agg({'event': 'first'}).unstack(-1).fillna('off')
Take the first row in each time-note group with agg({'event': 'first'}), then unstack the note index level so the note values become columns, and finally fill all cells for which no data point was found with 'off' via fillna.
This outputs:
Out[28]:
event
note C D
time
0.50 on off
0.75 off on
1.00 off off
You might also want to try min or max instead of first, in case on/off is not unambiguous for a given time/note combination (i.e. there are multiple rows for the same time/note, some on and some off) and you prefer one of these values (say, if there is a single on, you want on no matter how many offs there are). If you want something like a majority vote, I would suggest adding a majority-vote column to the aggregated dataframe (before the unstack()).
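For the rescaling step mentioned at the top of this answer, here is a minimal merge_asof sketch of what I have in mind (assuming the event_df and t_df from the question; both key columns must be sorted):
import numpy as np
import pandas as pd

event_df = pd.DataFrame({'event': ['on', 'on', 'off'],
                         'note':  ['C', 'D', 'C'],
                         'time':  [0.5, 0.75, 1.0]})
t_df = pd.DataFrame({'t': np.arange(0, 10, 0.5)})

# snap each event time to the next sampling time (t >= time); the
# groupby/unstack above can then be applied on 't' instead of 'time'
merged = pd.merge_asof(event_df.sort_values('time'), t_df,
                       left_on='time', right_on='t',
                       direction='forward')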
Oh so I found it! All I had to do was to unstack the groupby results. Going back to generating the groupby result:
def get_t_note_series(notedata_row, t_arr):
    """Return the time index in the sampling that corresponds to the event."""
    t_idx = np.argwhere(t_arr >= notedata_row['time']).flatten()[0]
    return t_idx

def get_t_for_gb(group, **kwargs):
    t_idxs = group.apply(get_t_note_series, args=(t_arr,), axis=1)
    t_idxs.rename('t_arr_idx', inplace=True)
    group_with_t = pd.concat([group, t_idxs], axis=1).set_index('t_arr_idx')
    # print(group_with_t)  # unnecessary!
    return group_with_t

t_arr = np.arange(0, 10, 0.5)
t_df = pd.DataFrame({'t': t_arr}).rename_axis('t_arr_idx')
gb = event_df.groupby('note')
result = gb.apply(get_t_for_gb, **kwargs)
At this point, result is a dataframe with note as an index:
>> print(result)
event
note t
C 0 off
0.5 on
1.0 off
....
D 0 off
0.5 off
1.0 on
....
Doing result = result.unstack('note') does the trick:
>> result = result.unstack('note')
>> print(result)
       event
note       C    D
t
0        off  off
0.5       on  off
1.0      off   on
....
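If you prefer the flat t / C / D layout shown in the question (without the extra event column level), one further optional step could be to drop the outer column level:
result.columns = result.columns.droplevel(0)  # keep only the note names as columns
result = result.rename_axis(None, axis=1)     # drop the leftover 'note' columns label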

How to "melt" `pandas.DataFrame` objects in Python 3?

I'm trying to melt certain columns of a pd.DataFrame while preserving the others. In this case, I want to melt the sine and cosine columns into values, record which column each value came from (i.e. sine or cosine) in a new column entitled data_type, and preserve the original desc column.
How can I use pd.melt to achieve this without melting and concatenating each component manually?
# Data
a = np.linspace(0, 2*np.pi, 100)
DF_data = pd.DataFrame([a, np.sin(np.pi*a), np.cos(np.pi*a)],
                       index=["t", "sine", "cosine"],
                       columns=["t_%d" % _ for _ in range(100)]).T
DF_data["desc"] = ["info about this" for _ in DF_data.index]
The roundabout way I did it:
# Melt each part
DF_melt_A = pd.DataFrame([DF_data["t"],
                          DF_data["sine"],
                          pd.Series(DF_data.shape[0]*["sine"], index=DF_data.index, name="data_type"),
                          DF_data["desc"]]).T.reset_index()
DF_melt_A.columns = ["idx", "t", "values", "data_type", "desc"]

DF_melt_B = pd.DataFrame([DF_data["t"],
                          DF_data["cosine"],
                          pd.Series(DF_data.shape[0]*["cosine"], index=DF_data.index, name="data_type"),
                          DF_data["desc"]]).T.reset_index()
DF_melt_B.columns = ["idx", "t", "values", "data_type", "desc"]

# Merge
pd.concat([DF_melt_A, DF_melt_B], axis=0, ignore_index=True)
If I do pd.melt(DF_data) I get a complete meltdown.
In response to the comments:
Alright, so I had to create a similar df because I did not have access to your a variable. I changed your a variable to a list from 0 to 99, so t will be 0 to 99.
You could do this:
a = range(0, 100)
DF_data = pd.DataFrame([a, [np.sin(x) for x in a], [np.cos(x) for x in a]],
                       index=["t", "sine", "cosine"],
                       columns=["t_%d" % _ for _ in range(100)]).T
DF_data["desc"] = ["info about this" for _ in DF_data.index]

df = pd.melt(DF_data, id_vars=['t', 'desc'])
df.head(5)
This should return what you are looking for:
t desc variable value
0 0.0 info about this sine 0.000000
1 1.0 info about this sine 0.841471
2 2.0 info about this sine 0.909297
3 3.0 info about this sine 0.141120
4 4.0 info about this sine -0.756802
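If you also want the exact column names described in the question (data_type for the source column and values for the melted values), pd.melt accepts var_name and value_name arguments; a small variation of the melt call above:
df = pd.melt(DF_data, id_vars=['t', 'desc'],
             value_vars=['sine', 'cosine'],
             var_name='data_type', value_name='values')
df.head(5)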

Iterate through data frame to generate random number in python

Starting with this dataframe, I want to generate 100 random numbers for each row, using the hmean column for loc and the hstd column for scale.
I am starting with a data frame that I change to an array. I want to iterate through the entire data frame and produce the following output.
My code below will only return the answer for row zero.
Name amax hmean hstd amin
0 Bill 22.924545 22.515861 0.375822 22.110000
1 Bob 26.118182 24.713880 0.721507 23.738400
2 Becky 23.178606 22.722464 0.454028 22.096752
This code provides one row of output, instead of three
from scipy import stats
import pandas as pd

def h2f(df, n):
    for index, row in df.iterrows():
        list1 = []
        nr = df.as_matrix()
        ff = stats.norm.rvs(loc=nr[index, 2], scale=nr[index, 3], size=n)
        list1.append(ff)
        return list1

df2 = h2f(data, 100)
pd.DataFrame(df2)
This is the output of my code
0 1 2 3 4 ... 99 100
0 22.723833 22.208324 22.280701 22.416486 22.620035 22.55817
This is the desired output
0 1 2 3 ... 99 100
0 22.723833 22.208324 22.280701 22.416486 22.620035
1 21.585776 22.190145 22.206638 21.927285 22.561882
2 22.357906 22.680952 21.4789 22.641407 22.341165
Dedent return list1 so it is not in the for-loop.
Otherwise, the function returns after only one pass through the loop.
Also move list1 = [] outside the for-loop so list1 does not get re-initialized with every pass through the loop:
import io
from scipy import stats
import pandas as pd

def h2f(df, n):
    list1 = []
    for index, row in df.iterrows():
        mean, std = row['hmean'], row['hstd']
        ff = stats.norm.rvs(loc=mean, scale=std, size=n)
        list1.append(ff)
    return list1

content = '''\
Name amax hmean hstd amin
0 Bill 22.924545 22.515861 0.375822 22.110000
1 Bob 26.118182 24.713880 0.721507 23.738400
2 Becky 23.178606 22.722464 0.454028 22.096752'''

df = pd.read_table(io.StringIO(content), sep=r'\s+')  # StringIO (not BytesIO) for Python 3
df2 = pd.DataFrame(h2f(df, 100))
print(df2)
PS. It is inefficent to call nr = df.as_matrix() with each pass through the loop.
Since nr never changes, at most, call it once, before entering the for-loop.
Even better, just use row['hmean'] and row['hstd'] to obtain the desired numbers.
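Going one step further than the PS, a fully vectorized sketch (my own suggestion, not part of the answer above): stats.norm.rvs broadcasts loc and scale, so the loop over rows can be dropped entirely.
import numpy as np

# loc/scale have shape (3, 1) and broadcast against size=(3, 100),
# giving 100 samples per row in a single call
samples = stats.norm.rvs(loc=df['hmean'].values[:, None],
                         scale=df['hstd'].values[:, None],
                         size=(len(df), 100))
df2 = pd.DataFrame(samples)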
