I have this data that I am trying to plot using pandas:
df['SGP_sum'].iloc[0:16] =
Date
2018-01-01 00:00:00 99.998765
2018-01-01 01:00:00 99.993401
2018-01-01 02:00:00 100.005571
2018-01-01 03:00:00 100.027737
2018-01-01 04:00:00 100.022474
2018-01-01 05:00:00 100.039800
2018-01-01 06:00:00 100.043310
2018-01-01 07:00:00 100.045207
2018-01-01 08:00:00 100.045201
2018-01-01 09:00:00 100.043810
2018-01-01 10:00:00 100.042977
2018-01-01 11:00:00 100.054589
2018-01-01 12:00:00 100.052009
2018-01-01 13:00:00 100.040163
2018-01-01 14:00:00 100.009129
2018-01-01 15:00:00 99.975595
Name: SGP_sum, dtype: float64
But when I plot it, I see negative values in the display:
df['SGP_sum'].iloc[0:16].plot()
I believe matplotlib is drawing your values relative to an offset: the +1e2 shown at the top of the y-axis means 100 is added to every tick label, so values just below 100 appear as small negative numbers.
Try plotting with that offset disabled.
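For example (a sketch, assuming the same df as above), the offset can be turned off via matplotlib's ticklabel_format:
import matplotlib.pyplot as plt

ax = df['SGP_sum'].iloc[0:16].plot()
ax.ticklabel_format(useOffset=False, axis='y')  # show absolute tick labels, no +1e2 offset
plt.show()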
I have a dataframe with some data that I'm going to run simulations on. Each row is a datetime and a value. Because of the nature of the problem, I need to keep the original frequency of 1 hour when the value is above a certain threshold. When it's not, I could resample the data and run that part of the simulation on lower frequency data, in order to speed up the simulation.
My idea is to somehow group the dataframe by day (since I've noticed there are many whole days where the value stays below the threshold), check the max value over each group, and if the max is below the threshold then aggregate the data in that group into a single mean value.
Here's a minimal working example:
import pandas as pd
import numpy as np
threshold = 3
idx = pd.date_range("2018-01-01", periods=27, freq="H")
df = pd.Series(np.append(np.ones(26), 5), index=idx).to_frame("v")
print(df)
Output:
v
2018-01-01 00:00:00 1.0
2018-01-01 01:00:00 1.0
2018-01-01 02:00:00 1.0
2018-01-01 03:00:00 1.0
2018-01-01 04:00:00 1.0
2018-01-01 05:00:00 1.0
2018-01-01 06:00:00 1.0
2018-01-01 07:00:00 1.0
2018-01-01 08:00:00 1.0
2018-01-01 09:00:00 1.0
2018-01-01 10:00:00 1.0
2018-01-01 11:00:00 1.0
2018-01-01 12:00:00 1.0
2018-01-01 13:00:00 1.0
2018-01-01 14:00:00 1.0
2018-01-01 15:00:00 1.0
2018-01-01 16:00:00 1.0
2018-01-01 17:00:00 1.0
2018-01-01 18:00:00 1.0
2018-01-01 19:00:00 1.0
2018-01-01 20:00:00 1.0
2018-01-01 21:00:00 1.0
2018-01-01 22:00:00 1.0
2018-01-01 23:00:00 1.0
2018-01-02 00:00:00 1.0
2018-01-02 01:00:00 1.0
2018-01-02 02:00:00 5.0
The desired output of the operation would be this dataframe:
v
2018-01-01 00:00:00 1.0
2018-01-02 00:00:00 1.0
2018-01-02 01:00:00 1.0
2018-01-02 02:00:00 5.0
where the first value is the mean of the first day.
I think I'm getting close:
grouped = df.resample("1D")
for name, group in grouped:
    if group["v"].max() <= 3:
        group['v'].agg("mean")
but I'm unsure how to actually apply the aggregation to the desired groups, and get a dataframe back.
Any help is greatly appreciated.
So I found a solution:
grouped = df.resample("1D")

def conditionalAggregation(x):
    # collapse the day to a single row if it never exceeds the threshold
    if x['v'].max() <= 3:
        idx = [x.index[0].replace(hour=0, minute=0, second=0, microsecond=0)]
        # this stamps the day's max at midnight; use x['v'].mean() if you want the daily mean
        return pd.DataFrame(x['v'].max(), index=idx, columns=['v'])
    else:
        return x

conditionallyAggregated = grouped.apply(conditionalAggregation)
conditionallyAggregated = conditionallyAggregated.droplevel(level=0)
conditionallyAggregated
This gives the following df:
v
2018-01-01 00:00:00 1.0
2018-01-02 00:00:00 1.0
2018-01-02 01:00:00 1.0
2018-01-02 02:00:00 5.0
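For reference, a more compact variant of the same idea (a sketch, reusing df and threshold from the example above): iterate over the daily resample bins and collapse a day to its mean only when its max stays at or below the threshold.
import pandas as pd

pieces = []
for day, group in df.resample("1D"):
    if group["v"].max() <= threshold:
        # collapse the whole day into one mean value stamped at midnight
        pieces.append(pd.DataFrame({"v": [group["v"].mean()]}, index=[day]))
    else:
        # keep the original hourly rows
        pieces.append(group)
result = pd.concat(pieces)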
I have a time series dataset that can be created with the following code.
import pandas as pd

idx = pd.date_range("2018-01-01", periods=100, freq="H")
dft = pd.Series(idx).to_frame("date")
dft["data"] = ""
# assign labels with .iloc to avoid chained-assignment issues; column 1 is "data"
dft.iloc[0:5, 1] = "a"
dft.iloc[5:15, 1] = "b"
dft.iloc[15:20, 1] = "c"
dft.iloc[20:30, 1] = "d"
dft.iloc[30:40, 1] = "a"
dft.iloc[40:70, 1] = "c"
dft.iloc[70:85, 1] = "b"
dft.iloc[85:len(dft), 1] = "c"
In the data column, the unique values are a, b, c, and d. These values repeat in runs over different time windows. I want to capture the first and last timestamp of each window. How can I do that?
Compute a grouper for your changing values using shift to compare consecutive rows, then use groupby+agg to get the min/max per group:
group = dft.data.ne(dft.data.shift()).cumsum()
dft.groupby(group)['date'].agg(['min', 'max'])
Output:
min max
data
1 2018-01-01 00:00:00 2018-01-01 04:00:00
2 2018-01-01 05:00:00 2018-01-01 14:00:00
3 2018-01-01 15:00:00 2018-01-01 19:00:00
4 2018-01-01 20:00:00 2018-01-02 05:00:00
5 2018-01-02 06:00:00 2018-01-02 15:00:00
6 2018-01-02 16:00:00 2018-01-03 21:00:00
7 2018-01-03 22:00:00 2018-01-04 12:00:00
8 2018-01-04 13:00:00 2018-01-05 03:00:00
Edit: combining with the original data:
dft.groupby(group).agg({'data': 'first', 'date': ['min', 'max']})
Output:
data date
first min max
data
1 a 2018-01-01 00:00:00 2018-01-01 04:00:00
2 b 2018-01-01 05:00:00 2018-01-01 14:00:00
3 c 2018-01-01 15:00:00 2018-01-01 19:00:00
4 d 2018-01-01 20:00:00 2018-01-02 05:00:00
5 a 2018-01-02 06:00:00 2018-01-02 15:00:00
6 c 2018-01-02 16:00:00 2018-01-03 21:00:00
7 b 2018-01-03 22:00:00 2018-01-04 12:00:00
8 c 2018-01-04 13:00:00 2018-01-05 03:00:00
I have hourly time series data stored in a pandas series. Similar to this example:
import pandas as pd
import numpy as np
date_rng = pd.date_range(start='1/1/2019', end='1/2/2019', freq='H')
data = np.random.uniform(180,182,size=(len(date_rng)))
timeseries = pd.Series(data, index=date_rng)
timeseries.iloc[4:12] = 181.911
At three decimal places, it is highly unlikely the data will be exactly the same for more than, say, 3 hours in a row. When this flatlining occurs, it indicates an issue with the sensor. So I want to detect repeated data and replace it with NaN values (i.e., detect the repeated value 181.911 above and replace it with NaN).
I assume I can iterate over the time series and detect/replace that way, but is there a more efficient way to do this?
You can do it with diff, but the first occurrence remains in the series:
timeseries.where(timeseries.diff(1) != 0.0, np.nan)
2019-01-01 00:00:00 180.539278
2019-01-01 01:00:00 181.509729
2019-01-01 02:00:00 180.740326
2019-01-01 03:00:00 181.736425
2019-01-01 04:00:00 181.911000
2019-01-01 05:00:00 NaN
2019-01-01 06:00:00 NaN
2019-01-01 07:00:00 NaN
2019-01-01 08:00:00 NaN
2019-01-01 09:00:00 NaN
2019-01-01 10:00:00 NaN
2019-01-01 11:00:00 NaN
2019-01-01 12:00:00 180.093216
2019-01-01 13:00:00 180.623440
The first occurrence can also be removed by combining diff(-1) and diff(1):
np.c_[timeseries.where(timeseries.diff(-1) != 0.0, np.nan), timeseries.where(timeseries.diff(1) != 0.0, np.nan)].mean(axis=1)
This works as long as the repetitions are consecutive in the series.
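If you prefer to get a pandas Series back instead of a bare array, a sketch of the same idea: mask every value that equals either of its neighbours.
import numpy as np

# keep a value only if it differs from both its neighbours
mask = (timeseries.diff(1) != 0.0) & (timeseries.diff(-1) != 0.0)
cleaned = timeseries.where(mask, np.nan)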
With the following reasonably efficient function, you can choose the minimum number of repeated values to treat as a flatline:
import numpy as np
def remove_flatlines(ts, threshold):
    # get start and end indices of each flatline as an n x 2 array
    isflat = np.concatenate(([False], np.isclose(ts.diff(), 0), [False]))
    isedge = isflat[1:] != isflat[:-1]
    flatrange = np.where(isedge)[0].reshape(-1, 2)
    # include also first value of each flatline
    flatrange[:, 0] -= 1
    # remove flatlines with at least threshold number of equal values
    ts = ts.copy()
    for j in range(len(flatrange)):
        if flatrange[j][1] - flatrange[j][0] >= threshold:
            ts.iloc[flatrange[j][0]:flatrange[j][1]] = np.nan
    return ts
Applied to the example:
remove_flatlines(timeseries, threshold=3)
2019-01-01 00:00:00 181.447940
2019-01-01 01:00:00 180.142692
2019-01-01 02:00:00 180.994674
2019-01-01 03:00:00 180.116489
2019-01-01 04:00:00 NaN
2019-01-01 05:00:00 NaN
2019-01-01 06:00:00 NaN
2019-01-01 07:00:00 NaN
2019-01-01 08:00:00 NaN
2019-01-01 09:00:00 NaN
2019-01-01 10:00:00 NaN
2019-01-01 11:00:00 NaN
2019-01-01 12:00:00 180.972644
2019-01-01 13:00:00 181.969759
2019-01-01 14:00:00 181.008693
2019-01-01 15:00:00 180.769328
2019-01-01 16:00:00 180.576061
2019-01-01 17:00:00 181.562315
2019-01-01 18:00:00 181.978567
2019-01-01 19:00:00 181.928330
2019-01-01 20:00:00 180.773995
2019-01-01 21:00:00 180.475290
2019-01-01 22:00:00 181.460028
2019-01-01 23:00:00 180.220693
2019-01-02 00:00:00 181.630176
Freq: H, dtype: float64
My DF looks like this:
date Open
2018-01-01 00:00:00 1.0536
2018-01-01 00:01:00 1.0527
2018-01-01 00:02:00 1.0558
2018-01-01 00:03:00 1.0534
2018-01-01 00:04:00 1.0524
The above DF is minute-based. What I want to do is create a new DF2 that is day-based, by selecting a single time value from each day.
For example, if I select 02:00:00 every day, my new DF2 will look like this:
date Open
2018-01-01 02:00:00 1.0332
2018-01-02 02:00:00 1.0423
2018-01-03 02:00:00 1.0252
2018-01-04 02:00:00 1.0135
2018-01-05 02:00:00 1.0628
....
Now DF2 is day-based rather than minute-based.
What did I try?
I attempted to select one row per day with this dt method:
df2 = df.groupby(df.date.dt.date,sort=False).Open.dt.hour.between(2, 2)
However, it does not work.
import numpy as np
import pandas as pd

# Sample data.
np.random.seed(0)
df = pd.DataFrame({
    'date': pd.date_range('2019-01-01', '2019-01-10', freq='1Min'),
    'Open': 1 + np.random.randn(12961) / 100})
>>> df.loc[df['date'].dt.hour.eq(2) & df['date'].dt.minute.eq(0), :]
date Open
120 2019-01-01 02:00:00 1.003764
1560 2019-01-02 02:00:00 1.015878
3000 2019-01-03 02:00:00 1.015933
4440 2019-01-04 02:00:00 0.990582
5880 2019-01-05 02:00:00 0.982440
7320 2019-01-06 02:00:00 1.012546
8760 2019-01-07 02:00:00 0.979695
10200 2019-01-08 02:00:00 1.013195
11640 2019-01-09 02:00:00 0.993046
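An alternative sketch: with 'date' set as the index, at_time selects a fixed time of day directly (reusing the sample df above).
df2 = df.set_index('date').at_time('02:00')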
I have a time series and I want to group the rows by hour of day (regardless of date) and visualize these as boxplots. So I'd want 24 boxplots starting from hour 1, then hour 2, then hour 3 and so on.
The way I see this working is splitting the dataset up into 24 series (1 for each hour of the day), creating a boxplot for each series and then plotting this on the same axes.
The only way I can think of to do this is to manually select all the values within each hour. Is there a faster way?
Some sample data:
Date Actual Consumption
2018-01-01 00:00:00 47.05
2018-01-01 00:15:00 46
2018-01-01 00:30:00 44
2018-01-01 00:45:00 45
2018-01-01 01:00:00 43.5
2018-01-01 01:15:00 43.5
2018-01-01 01:30:00 43
2018-01-01 01:45:00 42.5
2018-01-01 02:00:00 43
2018-01-01 02:15:00 42.5
2018-01-01 02:30:00 41
2018-01-01 02:45:00 42.5
2018-01-01 03:00:00 42.04
2018-01-01 03:15:00 41.96
2018-01-01 03:30:00 44
2018-01-01 03:45:00 44
2018-01-01 04:00:00 43.54
2018-01-01 04:15:00 43.46
2018-01-01 04:30:00 43.5
2018-01-01 04:45:00 43
2018-01-01 05:00:00 42.04
This is what I've tried so far:
zero = df.between_time('00:00', '00:59')
one = df.between_time('01:00', '01:59')
two = df.between_time('02:00', '02:59')
and then I would plot a boxplot for each of these on the same axes. However, it's very tedious to do this for all 24 hours in a day.
This is the kind of output I want:
https://www.researchgate.net/figure/Boxplot-of-the-NOx-data-by-hour-of-the-day_fig1_24054015
There are two steps to achieve this:
Convert Actual to datetime:
df.Actual = pd.to_datetime(df.Actual)
Group by the hour:
df.groupby([df.Date, df.Actual.dt.hour+1]).Consumption.sum().reset_index()
I assumed you wanted to sum the Consumption (if you prefer the mean or another aggregate, just change it). One note: hour+1 makes the hours start from 1 instead of 0 (remove it if you want 0 to be midnight).
Desired result:
Date Actual Consumption
0 2018-01-01 1 182.05
1 2018-01-01 2 172.50
2 2018-01-01 3 169.00
3 2018-01-01 4 172.00
4 2018-01-01 5 173.50
5 2018-01-01 6 42.04
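For the boxplots the question actually asks for, a sketch (assuming a datetime column named Date and a numeric value column named Consumption, as in the sample): pandas' boxplot with by= draws one box per hour of day.
import matplotlib.pyplot as plt

df['hour'] = df['Date'].dt.hour + 1  # 1..24, matching the convention above
df.boxplot(column='Consumption', by='hour')
plt.show()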