How to bin rows into 10-minute intervals? - python

Assume this is my sample data:
ID datetime
0 2 2015-01-09 19:05:39
1 1 2015-01-10 20:33:38
2 1 2015-01-10 20:33:38
3 1 2015-01-10 20:45:39
4 1 2015-01-10 20:46:39
5 1 2015-01-10 20:46:59
6 1 2015-01-10 20:50:39
I want to create a new column "bin" which tells us which 10-minute bin each row belongs to,
i.e. select the minimum datetime and start binning from there. In this sample the first row happens to be the minimum time, but that is not the case with my real data; my real data is not sorted.
ID datetime bin
0 2 2015-01-09 19:05:39 1
1 1 2015-01-10 20:33:38 2
2 1 2015-01-10 20:33:38 2
3 1 2015-01-10 20:45:39 3
4 1 2015-01-10 20:46:39 3
5 1 2015-01-10 20:46:59 3
6 1 2015-01-10 20:50:39 3

First subtract the minimum datetime to get timedeltas, then floor those to 10-minute values with Series.dt.floor, rank them with Series.rank, and finally convert to integers with Series.astype:
import pandas as pd

df['datetime'] = pd.to_datetime(df['datetime'])
# timedelta from the earliest timestamp, floored to 10-minute steps,
# then dense-ranked so each distinct step becomes a consecutive integer
df['bin'] = (df['datetime'].sub(df['datetime'].min())
                           .dt.floor('10Min')
                           .rank(method='dense')
                           .astype(int))
print(df)
ID datetime bin
0 2 2015-01-09 19:05:39 1
1 1 2015-01-10 20:33:38 2
2 1 2015-01-10 20:33:38 2
3 1 2015-01-10 20:45:39 3
4 1 2015-01-10 20:46:39 3
5 1 2015-01-10 20:46:59 3
6 1 2015-01-10 20:50:39 3
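Detail: for reference, these are the intermediate floored timedeltas that Series.rank sees (recomputed from the sample frame above):
print(df['datetime'].sub(df['datetime'].min()).dt.floor('10Min'))
0   0 days 00:00:00
1   1 days 01:20:00
2   1 days 01:20:00
3   1 days 01:40:00
4   1 days 01:40:00
5   1 days 01:40:00
6   1 days 01:40:00
Name: datetime, dtype: timedelta64[ns]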

If your dataframe is called df, and assuming the bins you are referring to range from 1 to 6, where 1 covers minutes 0-10 and 6 covers minutes 50-60, then you can use the following formula:
import numpy as np

df['datetime'] = pd.to_datetime(df['datetime'])
# math.ceil does not broadcast over a Series; use the .dt accessor with np.ceil
df['bin'] = np.ceil(df['datetime'].dt.minute / 10).astype(int)
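One caveat with the ceil approach: minute 0 maps to bin 0, not bin 1. If you want half-open bins [0, 10), [10, 20), ... numbered 1-6, floor division is a possible alternative (a sketch, not part of the original answer):
# minutes 0-9 -> 1, 10-19 -> 2, ..., 50-59 -> 6
df['bin'] = df['datetime'].dt.minute.floordiv(10).add(1)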

Time since first ever occurrence in Pandas

I have the following data frame in Pandas:
df = pd.DataFrame({
    'ID': [1,2,1,1,2,3,1,3,3,3,2],
    'date': ['2021-04-28','2022-05-21','2011-03-01','2021-11-28','1992-12-01',
             '1999-10-28','2022-01-12','2019-02-28','2001-03-28','2022-01-01','2009-05-28']
})
I want to produce a column time since first occur holding the time passed, in days, since each ID's first occurrence.
Here is what I did:
df['date'] = pd.to_datetime(df['date'], dayfirst=True)
df.sort_values(by=['ID', 'date'], ascending = [True, False], inplace=True)
and I got the sorted data frame
ID date
6 1 2022-01-12
3 1 2021-11-28
0 1 2021-04-28
2 1 2011-03-01
1 2 2022-05-21
10 2 2009-05-28
4 2 1992-12-01
9 3 2022-01-01
7 3 2019-02-28
8 3 2001-03-28
5 3 1999-10-28
so the output should look like
ID date time since first occur
6 1 2022-01-12 3970
3 1 2021-11-28 3925
0 1 2021-04-28 3711
2 1 2011-03-01 0
1 2 2022-05-21 10763
10 2 2009-05-28 6022
4 2 1992-12-01 0
9 3 2022-01-01 8101
7 3 2019-02-28 7063
8 3 2001-03-28 517
5 3 1999-10-28 0
Thanks in advance for helping.
After sorting the dataframe, you can take the difference between each date and the minimal date within its ID group:
df['time since first occur'] = (df['date'] - df.groupby('ID')['date'].transform('min')).dt.days
print(df)
ID date time since first occur
6 1 2022-01-12 3970
3 1 2021-11-28 3925
0 1 2021-04-28 3711
2 1 2011-03-01 0
1 2 2022-05-21 10763
10 2 2009-05-28 6022
4 2 1992-12-01 0
9 3 2022-01-01 8101
7 3 2019-02-28 7063
8 3 2001-03-28 517
5 3 1999-10-28 0
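A side note (mine, not part of the original answer): the sort is only cosmetic here, because transform('min') aligns its result back on the original index; the same line works on the unsorted frame as well:
# shuffle to demonstrate order-independence; transform broadcasts the
# per-ID minimum onto every row via index alignment
shuffled = df.sample(frac=1)
shuffled['time since first occur'] = (shuffled['date']
    - shuffled.groupby('ID')['date'].transform('min')).dt.days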

Calculate average of every 7 instances in a dataframe column

I have this pandas dataframe with daily asset prices:
[picture: head of the dataframe with daily asset prices]
I would like to create a pandas series (it could also be an additional column in the dataframe or some other data structure) with the weekly average asset prices. This means I need to calculate the average of every 7 consecutive instances in the column and save it into a series.
[picture: how the result should look]
As I am a complete newbie to Python (and programming in general, for that matter), I really have no idea how to start.
I am very grateful for every tip!
I believe you need GroupBy.transform over a grouping array built with numpy.arange and integer division, a general solution that also works with any index (e.g. a DatetimeIndex):
import numpy as np
import pandas as pd

np.random.seed(2018)
rng = pd.date_range('2018-04-19', periods=20)
df = pd.DataFrame({'Date': rng[::-1],
                   'ClosingPrice': np.random.randint(4, size=20)})
# print(df)

# integer-divide the positional index into blocks of 7 rows, then
# broadcast each block's mean back onto its rows
df['weekly'] = df['ClosingPrice'].groupby(np.arange(len(df)) // 7).transform('mean')
print(df)
ClosingPrice Date weekly
0 2 2018-05-08 1.142857
1 2 2018-05-07 1.142857
2 2 2018-05-06 1.142857
3 1 2018-05-05 1.142857
4 1 2018-05-04 1.142857
5 0 2018-05-03 1.142857
6 0 2018-05-02 1.142857
7 2 2018-05-01 2.285714
8 1 2018-04-30 2.285714
9 1 2018-04-29 2.285714
10 3 2018-04-28 2.285714
11 3 2018-04-27 2.285714
12 3 2018-04-26 2.285714
13 3 2018-04-25 2.285714
14 1 2018-04-24 1.666667
15 0 2018-04-23 1.666667
16 3 2018-04-22 1.666667
17 2 2018-04-21 1.666667
18 2 2018-04-20 1.666667
19 2 2018-04-19 1.666667
Detail:
print (np.arange(len(df)) // 7)
[0 0 0 0 0 0 0 1 1 1 1 1 1 1 2 2 2 2 2 2]
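Since the sample has one row per calendar day, an alternative (a sketch assuming consecutive daily dates, not part of the original answer) is to bin by the dates themselves with resample, which yields one averaged row per 7-day window instead of a broadcast column:
# 7-day windows anchored at the earliest date in the index
weekly = (df.set_index('Date')
            .sort_index()
            .resample('7D')['ClosingPrice']
            .mean())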

Grouping by date range with pandas

I am looking to group by two columns: user_id and date; however, if the dates are close enough, I want to be able to consider the two entries part of the same group and group accordingly. Date is m-d-y
user_id date val
1 1-1-17 1
2 1-1-17 1
3 1-1-17 1
1 1-1-17 1
1 1-2-17 1
2 1-2-17 1
2 1-10-17 1
3 2-1-17 1
The grouping would group by user_id and dates +/- 3 days from each other, so the groupby, summing val, would look like:
user_id date sum(val)
1 1-2-17 3
2 1-2-17 2
2 1-10-17 1
3 1-1-17 1
3 2-1-17 1
Can anyone think of a way this could be done (somewhat) easily? I know there are some problematic aspects of this, for example what to do if the dates chain together endlessly, three days apart each, but the exact data I'm using only has 2 values per person.
Thanks!
I'd convert this to a datetime column and then use pd.TimeGrouper:
dates = pd.to_datetime(df.date, format='%m-%d-%y')
print(dates)
0 2017-01-01
1 2017-01-01
2 2017-01-01
3 2017-01-01
4 2017-01-02
5 2017-01-02
6 2017-01-10
7 2017-02-01
Name: date, dtype: datetime64[ns]
df = (df.assign(date=dates).set_index('date')
        .groupby(['user_id', pd.TimeGrouper('3D')])
        .sum()
        .reset_index())
print(df)
user_id date val
0 1 2017-01-01 3
1 2 2017-01-01 2
2 2 2017-01-10 1
3 3 2017-01-01 1
4 3 2017-01-31 1
Similar solution using pd.Grouper:
df = (df.assign(date=dates)
        .groupby(['user_id', pd.Grouper(key='date', freq='3D')])
        .sum()
        .reset_index())
print(df)
user_id date val
0 1 2017-01-01 3
1 2 2017-01-01 2
2 2 2017-01-10 1
3 3 2017-01-01 1
4 3 2017-01-31 1
Update: TimeGrouper will be deprecated in future versions of pandas, so Grouper would be preferred in this scenario (thanks for the heads up, Vaishali!). Note also that freq='3D' cuts fixed 3-day windows anchored at the earliest date in the column (which is why 2017-02-01 above lands in a bin labelled 2017-01-31), so it approximates rather than exactly implements "dates within 3 days of each other".
I came up with a very ugly solution, but it still works...
df = df.sort_values(['user_id', 'date'])
# start a new group at each user's first row or wherever the gap to the
# previous date is not under 3 days, then number the groups with cumsum
df['Key'] = df.groupby('user_id')['date'].diff().dt.days.lt(3).ne(True).cumsum()
df.groupby(['user_id', 'Key'], as_index=False).agg({'val': 'sum', 'date': 'first'})
Out[586]:
user_id Key val date
0 1 1 3 2017-01-01
1 2 2 2 2017-01-01
2 2 3 1 2017-01-10
3 3 4 1 2017-01-01
4 3 5 1 2017-02-01
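To unpack that Key column, here is a quick check of the intermediate steps on the sorted sample (my addition, not part of the original answer):
gaps = df.groupby('user_id')['date'].diff().dt.days  # gap to the previous row, per user
print(gaps.lt(3))                     # True where the gap is under 3 days (NaN -> False)
print(gaps.lt(3).ne(True).cumsum())   # a new key at every first row or 3+ day gap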

Getting time difference per unique row items using pandas

Can someone please show me how to use pandas to get the time difference per unique 'Round' value in the following data (df):
Round Order Date
1 1 2011.02.04 00:20:21
1 2 2011.02.04 00:25:11
1 3 2011.02.04 00:35:10
1 4 2011.02.04 00:47:10
2 1 2011.02.04 00:21:21
2 2 2011.02.04 00:31:11
2 3 2011.02.04 00:41:10
Because of the sequential order in column 'Order', the time difference will be the date value of the last order minus the date value of the first. So I want to arrive at this table (time_df):
Round TimeDiff
1 26.39
2 19.39
You can use groupby with the difference between max and min:
df['Date'] = pd.to_datetime(df['Date'], format='%Y.%m.%d %H:%M:%S')
print(df)
Round Order Date
0 1 1 2011-02-04 00:20:21
1 1 2 2011-02-04 00:25:11
2 1 3 2011-02-04 00:35:10
3 1 4 2011-02-04 00:47:10
4 2 1 2011-02-04 00:21:21
5 2 2 2011-02-04 00:31:11
6 2 3 2011-02-04 00:41:10
print(df.groupby('Round')['Date'].apply(lambda x: x.max() - x.min()))
Round
1 00:26:49
2 00:19:49
Name: Date, dtype: timedelta64[ns]
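If you want a numeric column like the question's expected output rather than timedeltas (26.39 there reads like minutes.seconds), one possible conversion, assuming decimal minutes are acceptable, is total_seconds:
diffs = df.groupby('Round')['Date'].apply(lambda x: x.max() - x.min())
print(diffs.dt.total_seconds().div(60))  # decimal minutes, e.g. 00:26:49 -> ~26.82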
I would do it this way:
In [324]: df
Out[324]:
Round Order Date
0 1 1 2011-02-04 00:20:21
1 1 2 2011-02-04 00:25:11
2 1 3 2011-02-04 00:35:10
3 1 4 2011-02-04 00:47:10
4 2 1 2011-02-04 00:21:21
5 2 2 2011-02-04 00:31:11
6 2 3 2011-02-04 00:41:10
In [325]: grp = df.groupby('Round')
In [327]: grp.Date.max()-grp.Date.min()
Out[327]:
Round
1 00:26:49
2 00:19:49
Name: Date, dtype: timedelta64[ns]

split, groupby, combine in Pandas to find a difference in dates

I have a simple dataframe with an id and a date column (construction code below).
I would like to use groupby to group by id, then find some way to difference the dates, and then column-bind them back to the dataframe, so I end up with a column of days since the earliest date for each id.
The groupby is straightforward,
grouped = DF.groupby('id')
and finding the earliest date is straightforward,
mindates = grouped['date'].min()
But I'm not sure how to proceed. How do I apply the date subtraction operation, then combine?
Thanks for reading this far.
My dataframe is:
dates = pd.to_datetime(['2015-01-01', '2015-02-01', '2015-03-01', '2015-04-01', '2015-05-01',
                        '2015-01-01', '2015-01-02', '2015-01-03', '2015-01-04', '2015-01-05'])
DF = pd.DataFrame({'id': [1,1,1,1,1,2,2,2,2,2], 'date': dates})
cols = ['id', 'date']
DF = DF[cols]
EDIT:
Both answers below are awesome. I wish I could accept them both.
You can use apply like this:
earliest_by_id = DF.groupby('id')['date'].min()  # Series of earliest date, indexed by id

def since_earliest(row):
    # look up this row's group minimum and subtract
    return row.date - earliest_by_id[row.id]

DF['days_since_earliest'] = DF.apply(since_earliest, axis=1)
print(DF)
id date days_since_earliest
0 1 2015-01-01 0 days
1 1 2015-02-01 31 days
2 1 2015-03-01 59 days
3 1 2015-04-01 90 days
4 1 2015-05-01 120 days
5 2 2015-01-01 0 days
6 2 2015-01-02 1 days
7 2 2015-01-03 2 days
8 2 2015-01-04 3 days
9 2 2015-01-05 4 days
edit:
DF['days_since_earliest'] = DF.apply(since_earliest, axis=1).astype('timedelta64[D]')
print(DF)
id date days_since_earliest
0 1 2015-01-01 0
1 1 2015-02-01 31
2 1 2015-03-01 59
3 1 2015-04-01 90
4 1 2015-05-01 120
5 2 2015-01-01 0
6 2 2015-01-02 1
7 2 2015-01-03 2
8 2 2015-01-04 3
9 2 2015-01-05 4
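A version caveat (my note, not the answerer's): around pandas 2.0, astype('timedelta64[D]') stopped converting timedeltas to day counts (unit 'D' is no longer a supported conversion target), so the .dt.days accessor used in the next answer is the portable spelling:
# equivalent, and works on current pandas versions
DF['days_since_earliest'] = DF.apply(since_earliest, axis=1).dt.days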
FWIW, using transform can often be simpler (and usually faster) than apply. transform takes the result of a groupby operation and broadcasts it up to the original index:
>>> df["dse"] = df["date"] - df.groupby("id")["date"].transform(min)
>>> df
id date dse
0 1 2015-01-01 0 days
1 1 2015-02-01 31 days
2 1 2015-03-01 59 days
3 1 2015-04-01 90 days
4 1 2015-05-01 120 days
5 2 2015-01-01 0 days
6 2 2015-01-02 1 days
7 2 2015-01-03 2 days
8 2 2015-01-04 3 days
9 2 2015-01-05 4 days
If you'd prefer integer days instead of timedelta objects, you can use the dt.days accessor:
>>> df["dse"] = df["dse"].dt.days
>>> df
id date dse
0 1 2015-01-01 0
1 1 2015-02-01 31
2 1 2015-03-01 59
3 1 2015-04-01 90
4 1 2015-05-01 120
5 2 2015-01-01 0
6 2 2015-01-02 1
7 2 2015-01-03 2
8 2 2015-01-04 3
9 2 2015-01-05 4
