How can I iterate over the days in a pandas DataFrame?
Example:
My dataframe:
time consumption
time
2016-10-17 09:00:00 2016-10-17 09:00:00 2754.483333
2016-10-17 10:00:00 2016-10-17 10:00:00 2135.966666
2016-10-17 11:00:00 2016-10-17 11:00:00 1497.716666
2016-10-17 12:00:00 2016-10-17 12:00:00 448.100000
2016-10-24 09:00:00 2016-10-24 09:00:00 1527.716666
2016-10-24 10:00:00 2016-10-24 10:00:00 1219.833333
2016-10-24 11:00:00 2016-10-24 11:00:00 1284.350000
2016-10-24 12:00:00 2016-10-24 12:00:00 14195.633333
2016-10-31 09:00:00 2016-10-31 09:00:00 2120.933333
2016-10-31 10:00:00 2016-10-31 10:00:00 1630.700000
2016-10-31 11:00:00 2016-10-31 11:00:00 1241.866666
2016-10-31 12:00:00 2016-10-31 12:00:00 1156.266666
Pseudocode:
for day in df:
    print(day)
First iteration returns:
time consumption
time
2016-10-17 09:00:00 2016-10-17 09:00:00 2754.483333
2016-10-17 10:00:00 2016-10-17 10:00:00 2135.966666
2016-10-17 11:00:00 2016-10-17 11:00:00 1497.716666
2016-10-17 12:00:00 2016-10-17 12:00:00 448.100000
Second iteration returns:
2016-10-24 09:00:00 2016-10-24 09:00:00 1527.716666
2016-10-24 10:00:00 2016-10-24 10:00:00 1219.833333
2016-10-24 11:00:00 2016-10-24 11:00:00 1284.350000
2016-10-24 12:00:00 2016-10-24 12:00:00 14195.633333
Third iteration returns:
2016-10-31 09:00:00 2016-10-31 09:00:00 2120.933333
2016-10-31 10:00:00 2016-10-31 10:00:00 1630.700000
2016-10-31 11:00:00 2016-10-31 11:00:00 1241.866666
2016-10-31 12:00:00 2016-10-31 12:00:00 1156.266666
Use groupby with the date part of the index, which is not the same as grouping by the day number:
#groupby by index date
for idx, day in df.groupby(df.index.date):
    print(day)
time consumption
time
2016-10-17 09:00:00 2016-10-17 09:00:00 2754.483333
2016-10-17 10:00:00 2016-10-17 10:00:00 2135.966666
2016-10-17 11:00:00 2016-10-17 11:00:00 1497.716666
2016-10-17 12:00:00 2016-10-17 12:00:00 448.100000
time consumption
time
2016-10-24 09:00:00 2016-10-24 09:00:00 1527.716666
2016-10-24 10:00:00 2016-10-24 10:00:00 1219.833333
2016-10-24 11:00:00 2016-10-24 11:00:00 1284.350000
2016-10-24 12:00:00 2016-10-24 12:00:00 14195.633333
time consumption
time
2016-10-31 09:00:00 2016-10-31 09:00:00 2120.933333
2016-10-31 10:00:00 2016-10-31 10:00:00 1630.700000
2016-10-31 11:00:00 2016-10-31 11:00:00 1241.866666
2016-10-31 12:00:00 2016-10-31 12:00:00 1156.266666
Or:
#groupby by column time
for idx, day in df.groupby(df.time.dt.date):
    print(day)
time consumption
time
2016-10-17 09:00:00 2016-10-17 09:00:00 2754.483333
2016-10-17 10:00:00 2016-10-17 10:00:00 2135.966666
2016-10-17 11:00:00 2016-10-17 11:00:00 1497.716666
2016-10-17 12:00:00 2016-10-17 12:00:00 448.100000
time consumption
time
2016-10-24 09:00:00 2016-10-24 09:00:00 1527.716666
2016-10-24 10:00:00 2016-10-24 10:00:00 1219.833333
2016-10-24 11:00:00 2016-10-24 11:00:00 1284.350000
2016-10-24 12:00:00 2016-10-24 12:00:00 14195.633333
time consumption
time
2016-10-31 09:00:00 2016-10-31 09:00:00 2120.933333
2016-10-31 10:00:00 2016-10-31 10:00:00 1630.700000
2016-10-31 11:00:00 2016-10-31 11:00:00 1241.866666
2016-10-31 12:00:00 2016-10-31 12:00:00 1156.266666
The difference shows when the first two rows are changed to a different month (September). Grouping by `df.index.day` compares only the day number, so the 2016-09-17 and 2016-10-17 rows land in the same group:
for idx, day in df.groupby(df.index.day):
    print(day)
time consumption
time
2016-09-17 09:00:00 2016-10-17 09:00:00 2754.483333
2016-09-17 10:00:00 2016-10-17 10:00:00 2135.966666
2016-10-17 11:00:00 2016-10-17 11:00:00 1497.716666
2016-10-17 12:00:00 2016-10-17 12:00:00 448.100000
time consumption
time
2016-10-24 09:00:00 2016-10-24 09:00:00 1527.716666
2016-10-24 10:00:00 2016-10-24 10:00:00 1219.833333
2016-10-24 11:00:00 2016-10-24 11:00:00 1284.350000
2016-10-24 12:00:00 2016-10-24 12:00:00 14195.633333
time consumption
time
2016-10-31 09:00:00 2016-10-31 09:00:00 2120.933333
2016-10-31 10:00:00 2016-10-31 10:00:00 1630.700000
2016-10-31 11:00:00 2016-10-31 11:00:00 1241.866666
2016-10-31 12:00:00 2016-10-31 12:00:00 1156.266666
Grouping by `df.index.date` keeps the two months separate:
for idx, day in df.groupby(df.index.date):
    print(day)
time consumption
time
2016-09-17 09:00:00 2016-10-17 09:00:00 2754.483333
2016-09-17 10:00:00 2016-10-17 10:00:00 2135.966666
time consumption
time
2016-10-17 11:00:00 2016-10-17 11:00:00 1497.716666
2016-10-17 12:00:00 2016-10-17 12:00:00 448.100000
time consumption
time
2016-10-24 09:00:00 2016-10-24 09:00:00 1527.716666
2016-10-24 10:00:00 2016-10-24 10:00:00 1219.833333
2016-10-24 11:00:00 2016-10-24 11:00:00 1284.350000
2016-10-24 12:00:00 2016-10-24 12:00:00 14195.633333
time consumption
time
2016-10-31 09:00:00 2016-10-31 09:00:00 2120.933333
2016-10-31 10:00:00 2016-10-31 10:00:00 1630.700000
2016-10-31 11:00:00 2016-10-31 11:00:00 1241.866666
2016-10-31 12:00:00 2016-10-31 12:00:00 1156.266666
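A minimal, self-contained sketch of this difference (the index and column names follow the example above, the numbers are abbreviated):

```python
import pandas as pd

# Rebuild a tiny frame like the example above, with two different months
# sharing the same day number (17)
idx = pd.to_datetime([
    "2016-09-17 09:00:00",
    "2016-10-17 09:00:00",
    "2016-10-24 09:00:00",
    "2016-10-31 09:00:00",
])
df = pd.DataFrame({"consumption": [2754.48, 1527.72, 1284.35, 1156.27]}, index=idx)

# Grouping by .date keeps year and month: 2016-09-17 and 2016-10-17 stay apart
groups_by_date = dict(list(df.groupby(df.index.date)))

# Grouping by .day keeps only the day number: both 17ths fall into one group
groups_by_day = dict(list(df.groupby(df.index.day)))

print(len(groups_by_date), len(groups_by_day))  # 4 distinct dates, 3 distinct day numbers
```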
Related
How can I resample the data in this DataFrame from 1-minute candles to 1-hour candles? I don't want to sum the price; I'd rather take the highest or lowest value. Only the Volume column should be summed.
This is the dataframe in csv: https://drive.google.com/file/d/1yTd0TB6Pp9obg4iyCWzeFIg3lin9tYVc/view?usp=sharing
Try this:
df = pd.read_csv('BTC.csv')
df['Date'] = pd.to_datetime(df['Date'])
df.set_index('Date', inplace=True)
df = df.resample('1H').agg({'Close': 'min', 'Volume': 'sum'})
print(df)
Close Volume
Date
2020-06-06 15:00:00 9650.39 201.0
2020-06-06 16:00:00 9593.09 1616.0
2020-06-06 17:00:00 9595.00 1140.0
2020-06-06 18:00:00 9606.57 642.0
2020-06-06 19:00:00 9614.44 1015.0
2020-06-06 20:00:00 9647.68 1293.0
2020-06-06 21:00:00 9678.52 1293.0
2020-06-06 22:00:00 9635.49 1021.0
2020-06-06 23:00:00 9644.18 1118.0
2020-06-07 00:00:00 9629.88 801.0
2020-06-07 01:00:00 9647.38 541.0
2020-06-07 02:00:00 9654.82 1034.0
2020-06-07 03:00:00 9671.70 710.0
2020-06-07 04:00:00 9677.98 1264.0
2020-06-07 05:00:00 9659.31 798.0
2020-06-07 06:00:00 9656.76 886.0
2020-06-07 07:00:00 9639.48 1769.0
2020-06-07 08:00:00 9599.25 3190.0
2020-06-07 09:00:00 9623.41 1332.0
2020-06-07 10:00:00 9610.64 1018.0
2020-06-07 11:00:00 9575.59 1812.0
2020-06-07 12:00:00 9499.99 5431.0
2020-06-07 13:00:00 9446.98 4372.0
2020-06-07 14:00:00 9426.07 5999.0
2020-06-07 15:00:00 9463.05 1097.0
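The same per-column aggregation can be tried on synthetic data without the linked CSV (a sketch; the Close/Volume column names follow the answer above, the values are made up):

```python
import pandas as pd

# Synthetic 1-minute candles standing in for the linked BTC.csv
idx = pd.date_range("2020-06-06 15:00", periods=120, freq="1min")
df = pd.DataFrame({"Close": range(120), "Volume": [10] * 120}, index=idx)

# Per-column aggregation: an extreme for the price, a sum for the volume
hourly = df.resample("1h").agg({"Close": "min", "Volume": "sum"})

print(hourly)
```

Swapping `'min'` for `'max'` (or `'first'`/`'last'`) changes how the price is summarized without touching the volume sum.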
I have the following strings:
start = "07:00:00"
end = "17:00:00"
How can I generate a list of 5 minute interval between those times, ie
["07:00:00","07:05:00",...,"16:55:00","17:00:00"]
This works for me; I'm sure you can figure out how to collect the results in a list instead of printing them:
>>> import datetime
>>> start = "07:00:00"
>>> end = "17:00:00"
>>> delta = datetime.timedelta(minutes=5)
>>> start = datetime.datetime.strptime(start, '%H:%M:%S')
>>> end = datetime.datetime.strptime(end, '%H:%M:%S')
>>> t = start
>>> while t <= end:
...     print(t.strftime('%H:%M:%S'))
...     t += delta
...
07:00:00
07:05:00
07:10:00
...
16:50:00
16:55:00
17:00:00
>>>
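Collecting the results in a list instead of printing them is a small variation on the loop above:

```python
import datetime

start = datetime.datetime.strptime("07:00:00", "%H:%M:%S")
end = datetime.datetime.strptime("17:00:00", "%H:%M:%S")
delta = datetime.timedelta(minutes=5)

# Append each formatted time instead of printing it
times = []
t = start
while t <= end:
    times.append(t.strftime("%H:%M:%S"))
    t += delta

print(times[0], times[-1], len(times))  # 07:00:00 17:00:00 121
```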
Try:
# import modules
from datetime import datetime, timedelta
# Create starting and end datetime object from string
start = datetime.strptime("07:00:00", "%H:%M:%S")
end = datetime.strptime("17:00:00", "%H:%M:%S")
# min_gap
min_gap = 5
# compute datetime interval
arr = [(start + timedelta(hours=min_gap*i/60)).strftime("%H:%M:%S")
       for i in range(int((end - start).total_seconds() / 60.0 / min_gap) + 1)]  # +1 includes the end time
print(arr)
# ['07:00:00', '07:05:00', '07:10:00', ..., '16:55:00', '17:00:00']
Explanations:
First, you need to convert the strings to datetime objects; strptime does that.
Then we find the number of minutes between the starting and ending datetimes:
(end - start).total_seconds() / 60.0
However, we only want one element every n minutes, so we divide by n.
Also, since we iterate over this count, we need to convert it to an int for the range. That results in:
int((end - start).total_seconds() / 60.0 / min_gap)
Then, at each step of the loop, we add the elapsed time to the initial datetime. The timedelta function is designed for this; as a parameter we specify the number of hours to add: min_gap*i/60.
Finally, we convert each datetime object back to a string using strftime.
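For completeness, pandas can build the same list in one call (a sketch, not part of either answer above):

```python
import pandas as pd

# date_range includes both endpoints, so 17:00:00 shows up as well;
# the bare time strings are parsed as today's date at those times
times = pd.date_range("07:00:00", "17:00:00", freq="5min").strftime("%H:%M:%S").tolist()

print(times[:3], times[-1], len(times))
```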
I have following stock price data in hand:
2017-06-15 10:00:00 958.4334
2017-06-15 11:00:00 955.7800
2017-06-15 12:00:00 958.2800
2017-06-15 13:00:00 959.2200
2017-06-15 14:00:00 962.4900
2017-06-15 15:00:00 964.0000
2017-06-15 15:59:00 963.3500
2017-06-16 09:00:00 997.3500
2017-06-16 10:00:00 995.0000
2017-06-16 11:00:00 992.7600
2017-06-16 12:00:00 990.7200
2017-06-16 13:00:00 994.6800
2017-06-16 14:00:00 996.0500
2017-06-16 15:00:00 987.6100
2017-06-16 15:59:00 987.5000
2017-06-19 09:00:00 999.1700
2017-06-19 10:00:00 1001.2700
2017-06-19 11:00:00 995.5200
2017-06-19 12:00:00 994.3350
2017-06-19 13:00:00 995.2199
2017-06-19 14:00:00 990.9221
2017-06-19 15:00:00 995.1300
2017-06-19 15:59:00 994.3400
2017-06-20 09:00:00 995.5200
2017-06-20 10:00:00 1003.5100
2017-06-20 11:00:00 998.8129
2017-06-20 12:00:00 996.2800
2017-06-20 13:00:00 997.2100
2017-06-20 14:00:00 998.0000
2017-06-20 15:00:00 992.5800
2017-06-20 15:59:00 992.8000
2017-06-21 09:00:00 993.9500
2017-06-21 10:00:00 995.2700
2017-06-21 11:00:00 996.4000
2017-06-21 12:00:00 994.2800
2017-06-21 13:00:00 996.1000
2017-06-21 14:00:00 998.7450
2017-06-21 15:00:00 1001.7900
2017-06-21 15:59:00 1002.9800
2017-06-22 09:00:00 1001.4100
2017-06-22 10:00:00 1004.0700
2017-06-22 11:00:00 1003.1500
2017-06-22 12:00:00 1003.4800
2017-06-22 13:00:00 1003.1600
2017-06-22 14:00:00 1003.1800
2017-06-22 15:00:00 1001.3900
2017-06-22 15:59:00 1001.5600
2017-06-23 09:00:00 999.8699
2017-06-23 10:00:00 1001.5800
2017-06-23 11:00:00 1001.0700
2017-06-23 12:00:00 1002.9800
2017-06-23 13:00:00 1003.2400
2017-06-23 14:00:00 1002.4300
2017-06-23 15:00:00 1003.7400
2017-06-23 15:59:00 1003.0500
2017-06-26 09:00:00 1006.2000
2017-06-26 10:00:00 997.3500
2017-06-26 11:00:00 999.3300
2017-06-26 12:00:00 999.1000
2017-06-26 13:00:00 997.0600
2017-06-26 14:00:00 995.8336
2017-06-26 15:00:00 993.9900
2017-06-26 15:59:00 993.5500
2017-06-27 09:00:00 992.7550
2017-06-27 10:00:00 993.7600
2017-06-27 11:00:00 990.6700
2017-06-27 12:00:00 986.5500
2017-06-27 13:00:00 981.1099
2017-06-27 14:00:00 982.5499
2017-06-27 15:00:00 977.4100
2017-06-27 15:59:00 976.7800
2017-06-28 09:00:00 971.4600
2017-06-28 10:00:00 982.5200
2017-06-28 11:00:00 980.9100
2017-06-28 12:00:00 986.4372
2017-06-28 13:00:00 987.6710
2017-06-28 14:00:00 986.7977
2017-06-28 15:00:00 990.0300
2017-06-28 15:59:00 991.0000
2017-06-29 09:00:00 982.5200
2017-06-29 10:00:00 977.7710
2017-06-29 11:00:00 972.6600
2017-06-29 12:00:00 970.3100
2017-06-29 13:00:00 969.1600
2017-06-29 14:00:00 973.4720
2017-06-29 15:00:00 975.9100
2017-06-29 15:59:00 975.3100
2017-06-30 09:00:00 977.5800
2017-06-30 10:00:00 978.6400
2017-06-30 11:00:00 978.7299
2017-06-30 12:00:00 974.9700
2017-06-30 13:00:00 975.7700
2017-06-30 14:00:00 975.7000
2017-06-30 15:00:00 968.0000
2017-06-30 15:59:00 969.0000
I was trying to calculate the MACD using TTR::MACD as follows (the dataframe above is called amz.xts):
macd <- MACD(amz.xts, nFast = 20, nSlow = 40, nSig = 10, maType = 'EMA')
The result was a series of decimals mostly between 0.0 and 1.5, whereas when I used the Python wrapper of TA-Lib to do the same thing, the result was between 0.0 and 25.0, with the vast majority between 10.0 and 20.0,
which also matched the data shown in my charting software for trading.
python code:
import talib as ta
# m is macd
# s is signal
# h is histogram
m,s,h = ta.MACD(data, fastperiod=20, slowperiod=40, signalperiod=10)
I don't think the trading software is wrong, and since Python gave the same result, I suspect TTR::MACD is doing something different. The Python and charting numbers also make sense because the price is really high (above $900 per share).
Am I doing something wrong, or do they just use different algorithms? (Which I highly doubt.)
I haven't checked the Python function, but TTR::MACD() is definitely correct. Maybe it is the `percent` argument? TTR defaults to `percent = TRUE`, which returns the MACD as a percentage of the slow moving average; pass `percent = FALSE` to get it in price units:
library(TTR)
xx <- rep(c(1, rep(0, 49)), 4)
fast <- 20
slow <- 40
sig <- 10
macd <- MACD(xx, fast, slow, sig, maType="EMA", percent=FALSE)
macd2 <- EMA(xx, fast) - EMA(xx, slow)
macd2 <- cbind(macd2, EMA(macd2, sig))
par(mar=c(2, 2, 1, 1))
matplot(macd[-1:-40, ], type="l", lty=1, lwd=1.5)
matlines(macd2[-1:-40, ], type="l", lty=3, lwd=3, col=c("green", "blue"))
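The scale difference can also be checked without either library; a rough pandas sketch of the two conventions (the price series is made up, the EMA spans follow the question's parameters):

```python
import pandas as pd

# A made-up hourly price series near $1000, like the quoted stock data
price = pd.Series([958.4, 955.8, 958.3, 959.2, 962.5, 964.0] * 20)

fast = price.ewm(span=20, adjust=False).mean()
slow = price.ewm(span=40, adjust=False).mean()

macd_absolute = fast - slow                 # TA-Lib convention: price units
macd_percent = 100 * (fast - slow) / slow   # TTR percent=TRUE: percent of the slow EMA

# With prices near 1000, the absolute MACD is roughly ten times the percent
# version, matching the 0.0-1.5 vs 10.0-20.0 ranges described in the question
print(macd_absolute.tail(3))
print(macd_percent.tail(3))
```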
I have a df with the following index
df.index
>>> [2010-01-04 10:00:00, ..., 2010-12-31 16:00:00]
The main column is volume.
In the timestamp sequence, weekends and some other weekdays are not present. I want to resample my time index to have the aggregate sum of volume per minute. So I do the following:
df = df.resample('60s').sum()
There are some missing minutes. In other words, there are minutes where there are no trades. I want to include these missing minutes and add a 0 to the column volume.
To solve this, I would usually do something like:
new_range = pd.date_range('20110104 09:30:00', '20111231 16:00:00',
                          freq='60s').union(df.index)
df = df.reindex(new_range)
df = df.between_time(start_time='10:00', end_time='16:00') # time interval per day that I want
df = df.fillna(0)
But now I am stuck with unwanted dates like the weekends and some other days. How can I get rid of the dates that were not originally in my timestamp index?
Just construct the range of datetimes you want and reindex to it.
Entire range
In [9]: rng = pd.date_range('20130101 09:00','20130110 16:00',freq='30T')
In [10]: rng
Out[10]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 09:00:00, ..., 2013-01-10 16:00:00]
Length: 447, Freq: 30T, Timezone: None
Eliminate times out of range
In [11]: rng = rng.take(rng.indexer_between_time('09:30','16:00'))
In [12]: rng
Out[12]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 09:30:00, ..., 2013-01-10 16:00:00]
Length: 140, Freq: None, Timezone: None
Eliminate non-weekdays
In [13]: rng = rng[rng.weekday<5]
In [14]: rng
Out[14]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-01-01 09:30:00, ..., 2013-01-10 16:00:00]
Length: 112, Freq: None, Timezone: None
Just looking at the values, you probably want df.reindex(index=rng)
In [15]: rng.to_series()
Out[15]:
2013-01-01 09:30:00 2013-01-01 09:30:00
2013-01-01 10:00:00 2013-01-01 10:00:00
2013-01-01 10:30:00 2013-01-01 10:30:00
2013-01-01 11:00:00 2013-01-01 11:00:00
2013-01-01 11:30:00 2013-01-01 11:30:00
2013-01-01 12:00:00 2013-01-01 12:00:00
2013-01-01 12:30:00 2013-01-01 12:30:00
2013-01-01 13:00:00 2013-01-01 13:00:00
2013-01-01 13:30:00 2013-01-01 13:30:00
2013-01-01 14:00:00 2013-01-01 14:00:00
2013-01-01 14:30:00 2013-01-01 14:30:00
2013-01-01 15:00:00 2013-01-01 15:00:00
2013-01-01 15:30:00 2013-01-01 15:30:00
2013-01-01 16:00:00 2013-01-01 16:00:00
2013-01-02 09:30:00 2013-01-02 09:30:00
...
2013-01-09 16:00:00 2013-01-09 16:00:00
2013-01-10 09:30:00 2013-01-10 09:30:00
2013-01-10 10:00:00 2013-01-10 10:00:00
2013-01-10 10:30:00 2013-01-10 10:30:00
2013-01-10 11:00:00 2013-01-10 11:00:00
2013-01-10 11:30:00 2013-01-10 11:30:00
2013-01-10 12:00:00 2013-01-10 12:00:00
2013-01-10 12:30:00 2013-01-10 12:30:00
2013-01-10 13:00:00 2013-01-10 13:00:00
2013-01-10 13:30:00 2013-01-10 13:30:00
2013-01-10 14:00:00 2013-01-10 14:00:00
2013-01-10 14:30:00 2013-01-10 14:30:00
2013-01-10 15:00:00 2013-01-10 15:00:00
2013-01-10 15:30:00 2013-01-10 15:30:00
2013-01-10 16:00:00 2013-01-10 16:00:00
Length: 112
You could also start with a constructed business-day frequency series (and/or add a custom business day calendar if you want holidays; new in 0.14.0).
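The same construction still works in current pandas (a sketch; the 0.x-era `Out[...]` reprs above differ from modern output):

```python
import pandas as pd

# Full 30-minute grid, then restrict to trading hours, then to weekdays,
# mirroring the three steps in the session above
rng = pd.date_range("2013-01-01 09:00", "2013-01-10 16:00", freq="30min")
rng = rng.take(rng.indexer_between_time("09:30", "16:00"))
rng = rng[rng.weekday < 5]

print(len(rng))  # 112, as in the session above
```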
I have this dataframe. The columns represent the highs and the lows in daily EURUSD price:
df.low df.high
2013-01-17 16:00:00 1.33394 2013-01-17 20:00:00 1.33874
2013-01-18 18:00:00 1.32805 2013-01-18 09:00:00 1.33983
2013-01-21 00:00:00 1.32962 2013-01-21 09:00:00 1.33321
2013-01-22 11:00:00 1.32667 2013-01-22 09:00:00 1.33715
2013-01-23 17:00:00 1.32645 2013-01-23 14:00:00 1.33545
2013-01-24 10:00:00 1.32860 2013-01-24 18:00:00 1.33926
2013-01-25 04:00:00 1.33497 2013-01-25 17:00:00 1.34783
2013-01-28 10:00:00 1.34246 2013-01-28 16:00:00 1.34771
2013-01-29 13:00:00 1.34143 2013-01-29 21:00:00 1.34972
2013-01-30 08:00:00 1.34820 2013-01-30 21:00:00 1.35873
2013-01-31 13:00:00 1.35411 2013-01-31 17:00:00 1.35944
I merged them into a single, time-sorted column (df.extremes).
df.extremes
2013-01-17 16:00:00 1.33394
2013-01-17 20:00:00 1.33874
2013-01-18 18:00:00 1.32805
2013-01-18 09:00:00 1.33983
2013-01-21 00:00:00 1.32962
2013-01-21 09:00:00 1.33321
2013-01-22 09:00:00 1.33715
2013-01-22 11:00:00 1.32667
2013-01-23 14:00:00 1.33545
2013-01-23 17:00:00 1.32645
2013-01-24 10:00:00 1.32860
2013-01-24 18:00:00 1.33926
2013-01-25 04:00:00 1.33497
2013-01-25 17:00:00 1.34783
2013-01-28 10:00:00 1.34246
2013-01-28 16:00:00 1.34771
2013-01-29 13:00:00 1.34143
2013-01-29 21:00:00 1.34972
2013-01-30 08:00:00 1.34820
2013-01-30 21:00:00 1.35873
2013-01-31 13:00:00 1.35411
2013-01-31 17:00:00 1.35944
But now I want to filter some values out of df.extremes.
To explain what to filter, I'll try this pseudocode:
IF following the index we move from: previous df.low --> df.low --> df.high:
    IF df.low > previous df.low: delete df.low
    IF df.low < previous df.low: delete previous df.low
If I try to work this out with a for loop, it gives me a KeyError: 1.3339399999999999 (iterating a Series yields its values, not its index labels).
day = df.groupby(pd.TimeGrouper('D'))
is_day_min = day.extremes.apply(lambda x: x == x.min())
for i in df.extremes:
    if is_day_min[i] == True and is_day_min[i+1] == True:
        if df.extremes[i] > df.extremes[i+1]:
            del df.extremes[i]
for i in df.extremes:
    if is_day_min[i] == True and is_day_min[i+1] == True:
        if df.extremes[i] < df.extremes[i+1]:
            del df.extremes[i+1]
How can I filter/delete the values as explained in the pseudocode?
I am struggling with indexing and booleans and can't solve this. I strongly suspect I need a lambda function, but I don't know how to apply it. Please bear with me; I've been trying this for a long time. I hope I've been clear enough.
All you're really missing is a way of saying "previous low" in a vectorized fashion. That's spelled df['low'].shift(1). Once you have that it's just:
prev = df.low.shift(1)
filtered_df = df[~((df.low > prev) | (df.low < prev))]
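A tiny sketch of the alignment (toy numbers, not the EURUSD data); note that `shift(1)` is what pairs each row with the *previous* value, while `shift(-1)` would give the next one:

```python
import pandas as pd

low = pd.Series([1.33394, 1.32805, 1.32962], name="low")

# Row i of prev_low holds the low from row i-1; the first row has no
# predecessor, so it becomes NaN
prev_low = low.shift(1)

print(pd.DataFrame({"low": low, "prev_low": prev_low}))
```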