How to truncate a datetime to just the day - python

I am trying to move from pandas to polars but I am running into the following issue.
import polars as pl

df = pl.DataFrame(
    {
        "integer": [1, 2, 3],
        "date": [
            "2010-01-31T23:00:00+00:00",
            "2010-02-01T00:00:00+00:00",
            "2010-02-01T01:00:00+00:00",
        ],
    }
)
df = df.with_columns(
    [
        pl.col("date").str.strptime(pl.Datetime, fmt="%Y-%m-%dT%H:%M:%S%z").dt.with_time_zone("Europe/Amsterdam"),
    ]
)
Yields the following dataframe:
>>> df
shape: (3, 2)
┌─────────┬────────────────────────────────┐
│ integer ┆ date │
│ --- ┆ --- │
│ i64 ┆ datetime[μs, Europe/Amsterdam] │
╞═════════╪════════════════════════════════╡
│ 1 ┆ 2010-02-01 00:00:00 CET │
│ 2 ┆ 2010-02-01 01:00:00 CET │
│ 3 ┆ 2010-02-01 02:00:00 CET │
└─────────┴────────────────────────────────┘
As you can see, I successfully converted the datetime string from UTC to CET. However, when I try to extract the date (using the accepted answer by the polars author in this thread: https://stackoverflow.com/a/73212748/16332690), it seems to extract the date from the underlying UTC timestamp even though the column has been converted, e.g.:
df = df.with_columns(
    [
        pl.col("date").cast(pl.Date).alias("valueDay")
    ]
)
>>> df
shape: (3, 3)
┌─────────┬────────────────────────────────┬────────────┐
│ integer ┆ date ┆ valueDay │
│ --- ┆ --- ┆ --- │
│ i64 ┆ datetime[μs, Europe/Amsterdam] ┆ date │
╞═════════╪════════════════════════════════╪════════════╡
│ 1 ┆ 2010-02-01 00:00:00 CET ┆ 2010-01-31 │
│ 2 ┆ 2010-02-01 01:00:00 CET ┆ 2010-02-01 │
│ 3 ┆ 2010-02-01 02:00:00 CET ┆ 2010-02-01 │
└─────────┴────────────────────────────────┴────────────┘
The valueDay should be 2010-02-01 for all 3 values.
Can anyone help me fix this? By the way, what is the best way to optimize this code? Do I constantly have to assign everything to df or is there a way to chain all of this?
Edit:
I managed to find a quick way around this, but it would be nice if the issue above could be addressed. A pandas-style dt.date accessor would also be nice; I noticed that it is missing here: https://pola-rs.github.io/polars/py-polars/html/reference/series/timeseries.html
df = pl.DataFrame(
    {
        "integer": [1, 2, 3],
        "date": [
            "2010-01-31T23:00:00+00:00",
            "2010-02-01T00:00:00+00:00",
            "2010-02-01T01:00:00+00:00",
        ],
    }
)
df = df.with_columns(
    [
        pl.col("date").str.strptime(pl.Datetime, fmt="%Y-%m-%dT%H:%M:%S%z").dt.with_time_zone("Europe/Amsterdam"),
    ]
)
df = df.with_columns(
    [
        pl.col("date").dt.day().alias("day"),
        pl.col("date").dt.month().alias("month"),
        pl.col("date").dt.year().alias("year"),
    ]
)
df = df.with_columns(
    pl.datetime(year=pl.col("year"), month=pl.col("month"), day=pl.col("day"))
)
df = df.with_columns(
    [
        pl.col("datetime").cast(pl.Date).alias("valueDay")
    ]
)
Yields the following:
>>> df
shape: (3, 7)
┌─────────┬────────────────────────────────┬─────┬───────┬──────┬─────────────────────┬────────────┐
│ integer ┆ date ┆ day ┆ month ┆ year ┆ datetime ┆ valueDay │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ datetime[μs, Europe/Amsterdam] ┆ u32 ┆ u32 ┆ i32 ┆ datetime[μs] ┆ date │
╞═════════╪════════════════════════════════╪═════╪═══════╪══════╪═════════════════════╪════════════╡
│ 1 ┆ 2010-02-01 00:00:00 CET ┆ 1 ┆ 2 ┆ 2010 ┆ 2010-02-01 00:00:00 ┆ 2010-02-01 │
│ 2 ┆ 2010-02-01 01:00:00 CET ┆ 1 ┆ 2 ┆ 2010 ┆ 2010-02-01 00:00:00 ┆ 2010-02-01 │
│ 3 ┆ 2010-02-01 02:00:00 CET ┆ 1 ┆ 2 ┆ 2010 ┆ 2010-02-01 00:00:00 ┆ 2010-02-01 │
└─────────┴────────────────────────────────┴─────┴───────┴──────┴─────────────────────┴────────────┘

Would this temporary workaround help? Starting with this data:
import polars as pl
from datetime import datetime
df = pl.DataFrame(
    {
        "date": pl.date_range(
            datetime(2010, 1, 30, 22, 0, 0),
            datetime(2010, 2, 1, 2, 0, 0),
            "1h",
        ).dt.with_time_zone("Europe/Amsterdam"),
    }
)
df
shape: (29, 1)
┌────────────────────────────────┐
│ date │
│ --- │
│ datetime[μs, Europe/Amsterdam] │
╞════════════════════════════════╡
│ 2010-01-30 23:00:00 CET │
│ 2010-01-31 00:00:00 CET │
│ 2010-01-31 01:00:00 CET │
│ 2010-01-31 02:00:00 CET │
│ 2010-01-31 03:00:00 CET │
│ 2010-01-31 04:00:00 CET │
│ 2010-01-31 05:00:00 CET │
│ 2010-01-31 06:00:00 CET │
│ 2010-01-31 07:00:00 CET │
│ 2010-01-31 08:00:00 CET │
│ 2010-01-31 09:00:00 CET │
│ 2010-01-31 10:00:00 CET │
│ 2010-01-31 11:00:00 CET │
│ 2010-01-31 12:00:00 CET │
│ 2010-01-31 13:00:00 CET │
│ 2010-01-31 14:00:00 CET │
│ 2010-01-31 15:00:00 CET │
│ 2010-01-31 16:00:00 CET │
│ 2010-01-31 17:00:00 CET │
│ 2010-01-31 18:00:00 CET │
│ 2010-01-31 19:00:00 CET │
│ 2010-01-31 20:00:00 CET │
│ 2010-01-31 21:00:00 CET │
│ 2010-01-31 22:00:00 CET │
│ 2010-01-31 23:00:00 CET │
│ 2010-02-01 00:00:00 CET │
│ 2010-02-01 01:00:00 CET │
│ 2010-02-01 02:00:00 CET │
│ 2010-02-01 03:00:00 CET │
└────────────────────────────────┘
You can extract the date using
(
    df.with_columns(
        pl.col("date")
        .dt.cast_time_zone("UTC")
        .cast(pl.Date)
        .alias("trunc_date")
    )
)
shape: (29, 2)
┌────────────────────────────────┬────────────┐
│ date ┆ trunc_date │
│ --- ┆ --- │
│ datetime[μs, Europe/Amsterdam] ┆ date │
╞════════════════════════════════╪════════════╡
│ 2010-01-30 23:00:00 CET ┆ 2010-01-30 │
│ 2010-01-31 00:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 01:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 02:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 03:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 04:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 05:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 06:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 07:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 08:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 09:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 10:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 11:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 12:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 13:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 14:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 15:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 16:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 17:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 18:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 19:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 20:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 21:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 22:00:00 CET ┆ 2010-01-31 │
│ 2010-01-31 23:00:00 CET ┆ 2010-01-31 │
│ 2010-02-01 00:00:00 CET ┆ 2010-02-01 │
│ 2010-02-01 01:00:00 CET ┆ 2010-02-01 │
│ 2010-02-01 02:00:00 CET ┆ 2010-02-01 │
│ 2010-02-01 03:00:00 CET ┆ 2010-02-01 │
└────────────────────────────────┴────────────┘

There's nothing wrong with the cast function per se; it's just not intended for truncating the time component out of a datetime.
As it happens, the function you're looking for is truncate, so this is what you want to do (the change is in the last with_columns chunk):
pl.DataFrame(
    {
        "integer": [1, 2, 3],
        "date": [
            "2010-01-31T23:00:00+00:00",
            "2010-02-01T00:00:00+00:00",
            "2010-02-01T01:00:00+00:00",
        ],
    }
).with_columns(
    [
        pl.col("date").str.strptime(pl.Datetime, fmt="%Y-%m-%dT%H:%M:%S%z").dt.with_time_zone("Europe/Amsterdam"),
    ]
).with_columns(
    [
        pl.col("date").dt.truncate('1d').alias("valueDay")
    ]
)
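Note that dt.truncate keeps the Datetime dtype, with the time simply set to midnight. If you want an actual Date dtype instead, more recent polars releases also expose a dt.date() accessor on datetime expressions; a minimal sketch, assuming a polars version where that accessor is available:
# Assumes Expr.dt.date() is available in your polars version;
# it extracts the calendar date in the column's own time zone.
df = df.with_columns(
    pl.col("date").dt.date().alias("valueDay")
)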

Related

Adding secondary group by clause using groupby_dynamic() operation in polar

I would like to group the data into hourly/daily/weekly intervals and then further group by certain other columns. I was able to achieve the hourly/daily/weekly grouping by using the groupby_dynamic option provided by polars.
How do I add a secondary, non-datetime group-by column to the polars dataframe after using the groupby_dynamic operation?
The sample dataframe read from csv is
┌─────────────────────┬─────────────┬────────┬─────────┬──────────────┐
│ Date                ┆ Item        ┆ Issue  ┆ Channel ┆ ID           │
╞═════════════════════╪═════════════╪════════╪═════════╪══════════════╡
│ 2023-01-02 01:00:00 ┆ Item ABC    ┆ EAAGCD ┆ Twitter ┆ 32513995     │
│ 2023-01-02 01:40:00 ┆ Item ABC    ┆ ASDFFF ┆ Web     ┆ 32513995     │
│ 2023-01-02 02:15:00 ┆ Item ABC    ┆ WERWET ┆ Web     ┆ 32513995     │
│ 2023-01-02 03:00:00 ┆ Item ABC    ┆ BVRTNB ┆ Twitter ┆ 32513995     │
│ 2023-01-03 04:11:00 ┆ Item ABC    ┆ VDFGVS ┆ Fax     ┆ 32513995     │
│ 2023-01-03 04:30:00 ┆ Item ABC    ┆ QWEDWE ┆ Twitter ┆ 32513995     │
│ 2023-01-03 04:45:00 ┆ Item ABC    ┆ BRHMNU ┆ Fax     ┆ 32513995     │
└─────────────────────┴─────────────┴────────┴─────────┴──────────────┘
I am grouping this data into hourly intervals using the polars groupby_dynamic operation with the code snippet below.
import polars as pl
q = (
    pl.scan_csv("Test.csv", parse_dates=True)
    .filter(pl.col("Item") == "Item ABC")
    .groupby_dynamic("Date", every="1h", closed="right")
    .agg([pl.col("ID").count().alias("total")])
    .sort(["Date"])
)
df = q.collect()
This code gives me result as
┌─────────────────────┬───────┐
│ Date ┆ total │
╞═════════════════════╪═══════╡
│ 2023-01-02 01:00:00 ┆ 2 │
│ 2023-01-02 02:00:00 ┆ 1 │
│ 2023-01-02 03:00:00 ┆ 1 │
│ 2023-01-05 04:00:00 ┆ 3 │
└─────────────────────┴───────┘
But I would like to further group this data by "Channel", expecting a result like:
┌─────────────────────┬─────────┬───────┐
│ Date ┆ Channel ┆ total │
╞═════════════════════╪═════════╪═══════╡
│ 2023-01-02 01:00:00 ┆ Twitter ┆ 1 │
│ 2023-01-02 01:00:00 ┆ Web ┆ 1 │
│ 2023-01-02 01:00:00 ┆ Web ┆ 1 │
│ 2023-01-02 01:00:00 ┆ Twitter ┆ 1 │
│ 2023-01-03 01:00:00 ┆ Fax ┆ 2 │
│ 2023-01-11 01:00:00 ┆ Twitter ┆ 1 │
└─────────────────────┴─────────┴───────┘
You can specify the additional grouping column(s) with the by parameter of groupby_dynamic:
q = (
    pl.scan_csv("Test.csv", parse_dates=True)
    .filter(pl.col("Item") == "Item ABC")
    .groupby_dynamic("Date", every="1h", closed="right", by="Item")
    .agg([pl.col("ID").count().alias("total")])
    .sort(["Date"])
)
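To also group by Channel, as in the expected output, the by argument should accept a list of columns; a minimal sketch under that assumption, reusing the same Test.csv layout:
q = (
    pl.scan_csv("Test.csv", parse_dates=True)
    .filter(pl.col("Item") == "Item ABC")
    .groupby_dynamic("Date", every="1h", closed="right", by=["Item", "Channel"])
    .agg([pl.col("ID").count().alias("total")])
    .sort(["Date"])
)
df = q.collect()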

Pandas groupby apply is very slow

grouped = data_v1.sort_values(by="Strike_Price").groupby(['dateTime', 'close', 'Index', 'Expiry', 'group'])

def calc_summary(group):
    name = group.name
    if name[3] == "above":
        call_oi = group['Call_OI'].sum()
        call_vol = group['Call_Volume'].sum()
        put_oi = group['Put_OI'].sum()
        put_vol = group['Put_Volume'].sum()
        call_oi_1 = group.head(1)['Call_OI'].sum()
        call_vol_1 = group.head(1)['Call_Volume'].sum()
        put_oi_1 = group.head(1)['Put_OI'].sum()
        put_vol_1 = group.head(1)['Put_Volume'].sum()
    else:
        call_oi = group['Call_OI'].sum()
        call_vol = group['Call_Volume'].sum()
        put_oi = group['Put_OI'].sum()
        put_vol = group['Put_Volume'].sum()
        call_oi_1 = group.tail(1)['Call_OI'].sum()
        call_vol_1 = group.tail(1)['Call_Volume'].sum()
        put_oi_1 = group.tail(1)['Put_OI'].sum()
        put_vol_1 = group.tail(1)['Put_Volume'].sum()
    summary = pd.DataFrame([{'call_oi': call_oi,
                             'call_vol': call_vol,
                             'put_oi': put_oi,
                             'put_vol': put_vol,
                             'call_oi_1': call_oi_1,
                             'call_vol_1': call_vol_1,
                             'put_oi_1': put_oi_1,
                             'put_vol_1': put_vol_1}])
    return summary

result = grouped.apply(calc_summary)
The above code takes too much time to run given that the dataset is not even that big; currently, it takes about 23 seconds on my system.
I tried swifter but that doesn't work with groupby objects.
What should I do to make my code faster?
Edit:
The data looks like this
{'dateTime': {0: Timestamp('2023-02-06 09:21:00'),
1: Timestamp('2023-02-06 09:21:00'),
2: Timestamp('2023-02-06 09:21:00'),
3: Timestamp('2023-02-06 09:21:00'),
4: Timestamp('2023-02-06 09:21:00')},
'close': {0: 17780.55, 1: 17780.55, 2: 17780.55, 3: 17780.55, 4: 17780.55},
'Index': {0: 'NIFTY', 1: 'NIFTY', 2: 'NIFTY', 3: 'NIFTY', 4: 'NIFTY'},
'Expiry': {0: '16FEB2023',
1: '23FEB2023',
2: '9FEB2023',
3: '16FEB2023',
4: '23FEB2023'},
'Expiry_order': {0: 'week_2',
1: 'week_3',
2: 'week_1',
3: 'week_2',
4: 'week_3'},
'group': {0: 'below', 1: 'below', 2: 'below', 3: 'below', 4: 'below'},
'Call_OI': {0: nan, 1: 60.0, 2: 4.0, 3: nan, 4: nan},
'Put_OI': {0: 1364.0, 1: 11255.0, 2: 91059.0, 3: 343.0, 4: 153.0},
'Call_Volume': {0: nan, 1: 3.0, 2: 2.0, 3: nan, 4: nan},
'Put_Volume': {0: 84.0, 1: 1246.0, 2: 5197.0, 3: 24.0, 4: 1.0},
'Strike_Price': {0: 16100.0, 1: 16100.0, 2: 16100.0, 3: 16150.0, 4: 16150.0}}
Using your sample data:
import io
import pandas as pd
csv = """
dateTime,close,Index,Expiry,Expiry_order,group,Call_OI,Put_OI,Call_Volume,Put_Volume,Strike_Price
2023-02-06 09:21:00,17780.55,NIFTY,16FEB2023,week_2,below,,1364.0,,84.0,16100.0
2023-02-06 09:21:00,17780.55,NIFTY,23FEB2023,week_3,below,60.0,11255.0,3.0,1246.0,16100.0
2023-02-06 09:21:00,17780.55,NIFTY,9FEB2023,week_1,below,4.0,91059.0,2.0,5197.0,16100.0
2023-02-06 09:21:00,17780.55,NIFTY,16FEB2023,week_2,below,,343.0,,24.0,16150.0
2023-02-06 09:21:00,17780.55,NIFTY,23FEB2023,week_3,below,,153.0,,1.0,16150.0
"""
df = pd.read_csv(io.StringIO(csv))
The output of your calc_summary function:
>>> df.sort_values(by='Strike_Price').groupby(['dateTime', 'close', 'Index', 'Expiry', 'group']).apply(calc_summary)
call_oi call_vol put_oi put_vol call_oi_1 call_vol_1 put_oi_1 put_vol_1
dateTime close Index Expiry group
2023-02-06 09:21:00 17780.55 NIFTY 16FEB2023 below 0 0.0 0.0 1707.0 108.0 0.0 0.0 343.0 24.0
23FEB2023 below 0 60.0 3.0 11408.0 1247.0 0.0 0.0 153.0 1.0
9FEB2023 below 0 4.0 2.0 91059.0 5197.0 4.0 2.0 91059.0 5197.0
.agg()
You're performing an aggregation where you conditionally want the head/tail depending on the value of the group column.
You could aggregate both values instead and then do the filtering afterwards.
This allows you to use .agg() directly.
We can use first and last aggregations for head/tail but must first fillna(0) as they handle NaN values differently.
summary = (
    df.fillna(0)  # needed for first/last as they ignore NaN
    .sort_values(by='Strike_Price')
    .groupby(['dateTime', 'close', 'Index', 'Expiry', 'group'])
    [['Call_OI', 'Call_Volume', 'Put_OI', 'Put_Volume']]
    .agg(['first', 'last', 'sum'])
    .reset_index()
)
Which produces a multi-indexed column structure like:
dateTime close Index Expiry group Call_OI Call_Volume Put_OI Put_Volume
first last sum first last sum first last sum first last sum
0 2023-02-06 09:21:00 17780.55 NIFTY 16FEB2023 below 0.0 0.0 0.0 0.0 0.0 0.0 1364.0 343.0 1707.0 84.0 24.0 108.0
1 2023-02-06 09:21:00 17780.55 NIFTY 23FEB2023 below 60.0 0.0 60.0 3.0 0.0 3.0 11255.0 153.0 11408.0 1246.0 1.0 1247.0
2 2023-02-06 09:21:00 17780.55 NIFTY 9FEB2023 below 4.0 4.0 4.0 2.0 2.0 2.0 91059.0 91059.0 91059.0 5197.0 5197.0 5197.0
To select the last values when group != "above" you can:
>>> below = summary.loc[summary['group'] != 'above', summary.columns.get_level_values(1) != 'first']
>>> below
dateTime close Index Expiry group Call_OI Call_Volume Put_OI Put_Volume
last sum last sum last sum last sum
0 2023-02-06 09:21:00 17780.55 NIFTY 16FEB2023 below 0.0 0.0 0.0 0.0 343.0 1707.0 24.0 108.0
1 2023-02-06 09:21:00 17780.55 NIFTY 23FEB2023 below 0.0 60.0 0.0 3.0 153.0 11408.0 1.0 1247.0
2 2023-02-06 09:21:00 17780.55 NIFTY 9FEB2023 below 4.0 4.0 2.0 2.0 91059.0 91059.0 5197.0 5197.0
To flatten the column structure so it is similar to your function's output you can:
>>> below.columns = [left.lower() + ('' if right in {'', 'sum'} else '_1') for left, right in below.columns]
>>> below
datetime close index expiry group call_oi_1 call_oi call_volume_1 call_volume put_oi_1 put_oi put_volume_1 put_volume
0 2023-02-06 09:21:00 17780.55 NIFTY 16FEB2023 below 0.0 0.0 0.0 0.0 343.0 1707.0 24.0 108.0
1 2023-02-06 09:21:00 17780.55 NIFTY 23FEB2023 below 0.0 60.0 0.0 3.0 153.0 11408.0 1.0 1247.0
2 2023-02-06 09:21:00 17780.55 NIFTY 9FEB2023 below 4.0 4.0 2.0 2.0 91059.0 91059.0 5197.0 5197.0
There are no examples of "above" in your data, but you could do the same for those rows using == 'above' and != 'last', and concat both sets of rows into a single dataframe, as sketched below.
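A minimal sketch of that last step, under the assumption that the summary and below frames from above are available (the above and result names here are just for illustration; the "above" selection is empty for this sample data):
# Keep the 'first' (head) and 'sum' aggregations for the "above" groups.
above = summary.loc[
    summary['group'] == 'above',
    summary.columns.get_level_values(1) != 'last'
]
above.columns = [left.lower() + ('' if right in {'', 'sum'} else '_1')
                 for left, right in above.columns]

# Combine both sets of rows into a single dataframe.
result = pd.concat([below, above], ignore_index=True)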
Polars
You may also wish to compare how the dataset performs with polars.
One possible approach which generates the same output:
import io
import polars as pl
df = pl.read_csv(io.StringIO(csv))
columns = ["Call_OI", "Call_Volume", "Put_OI", "Put_Volume"]
(
    df
    .sort("Strike_Price")
    .groupby(["dateTime", "close", "Index", "Expiry", "group"], maintain_order=True)
    .agg([
        pl.col(columns).sum(),
        pl.when(pl.col("group").first() == "above")
        .then(pl.col(columns).first())
        .otherwise(pl.col(columns).last())
        .suffix("_1")
    ])
    .fill_null(0)
)
shape: (3, 13)
┌─────────────────────┬──────────┬───────┬───────────┬───────┬─────────┬─────────────┬─────────┬────────────┬───────────┬───────────────┬──────────┬──────────────┐
│ dateTime | close | Index | Expiry | group | Call_OI | Call_Volume | Put_OI | Put_Volume | Call_OI_1 | Call_Volume_1 | Put_OI_1 | Put_Volume_1 │
│ --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- │
│ str | f64 | str | str | str | f64 | f64 | f64 | f64 | f64 | f64 | f64 | f64 │
╞═════════════════════╪══════════╪═══════╪═══════════╪═══════╪═════════╪═════════════╪═════════╪════════════╪═══════════╪═══════════════╪══════════╪══════════════╡
│ 2023-02-06 09:21:00 | 17780.55 | NIFTY | 16FEB2023 | below | 0.0 | 0.0 | 1707.0 | 108.0 | 0.0 | 0.0 | 343.0 | 24.0 │
│ 2023-02-06 09:21:00 | 17780.55 | NIFTY | 23FEB2023 | below | 60.0 | 3.0 | 11408.0 | 1247.0 | 0.0 | 0.0 | 153.0 | 1.0 │
│ 2023-02-06 09:21:00 | 17780.55 | NIFTY | 9FEB2023 | below | 4.0 | 2.0 | 91059.0 | 5197.0 | 4.0 | 2.0 | 91059.0 | 5197.0 │
└─────────────────────┴──────────┴───────┴───────────┴───────┴─────────┴─────────────┴─────────┴────────────┴───────────┴───────────────┴──────────┴──────────────┘

Python Polars: How to apply an aggregate function to all columns and pass one additional column as an argument?

I have a lazy dataframe (using scan_parquet) like below,
region time sen1 sen2 sen3
us 1 10.0 11.0 12.0
us 2 11.0 14.0 13.0
us 3 10.1 10.0 12.3
us 4 13.0 11.1 14.0
us 5 12.0 11.0 19.0
uk 1 10.0 11.0 12.1
uk 2 11.0 14.0 13.0
uk 3 10.1 10.0 12.0
uk 4 13.0 11.1 14.0
uk 5 12.0 11.0 19.0
uk 6 13.7 11.1 14.0
uk 7 12.0 11.0 21.9
I want to find the max and min for all the sensors in each region, and while doing so, I also want the times at which the max and min occurred.
So I wrote the aggregate function below:
def my_custom_agg(t, v):
    smax = v.max()
    smin = v.min()
    smax_t = t[v.arg_max()]
    smin_t = t[v.arg_min()]
    return [smax, smin, smax_t, smin_t]
Then I did the groupby as below,
df.groupby('region').agg(
    [
        pl.col('*').apply(lambda s: my_custom_agg(pl.col('time'), s))
    ]
)
When I do this, I get the below error,
TypeError: 'Expr' object is not subscribable
Expected result,
region sen1 sen2 sen3
us [13.0,10.0,4,1] [14.0,10.0,2,3] [19.0,12.0,5,1]
uk [13.7,10.0,6,1] [14.0,10.0,2,3] [21.9,12.0,7,3]
# which I will melt and transform to below,
region sname smax smin smax_t smin_t
us sen1 13.0 10.0 4 1
us sen2 14.0 10.0 2 3
us sen3 19.0 12.0 5 1
uk sen1 13.7 10.0 6 1
uk sen2 14.0 10.0 2 3
uk sen3 21.9 12.0 7 3
Could you please tell me how to pass one additional column as an argument? If there is an alternative way to do this, I am happy to hear it since I am flexible with the output format.
Note: In my real dataset I have 8k sensors, so it is better to do this with *.
Thanks for your support.
You could .melt() and .sort() first.
Then when you .groupby() you can use .first() and .last() to get the min/max for time and value.
pl.all() can be used instead of pl.col("*")
>>> (
... df
... .melt(["region", "time"], variable_name="sname")
... .sort(pl.all().exclude("time"))
... .groupby(["region", "sname"])
... .agg([
... pl.all().first().suffix("_min"),
... pl.all().last() .suffix("_max"),
... ])
... )
shape: (6, 6)
┌────────┬───────┬──────────┬───────────┬──────────┬───────────┐
│ region ┆ sname ┆ time_min ┆ value_min ┆ time_max ┆ value_max │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 ┆ f64 ┆ i64 ┆ f64 │
╞════════╪═══════╪══════════╪═══════════╪══════════╪═══════════╡
│ uk ┆ sen1 ┆ 1 ┆ 10.0 ┆ 6 ┆ 13.7 │
├╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ uk ┆ sen3 ┆ 3 ┆ 12.0 ┆ 7 ┆ 21.9 │
├╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ us ┆ sen1 ┆ 1 ┆ 10.0 ┆ 4 ┆ 13.0 │
├╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ us ┆ sen2 ┆ 3 ┆ 10.0 ┆ 2 ┆ 14.0 │
├╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ uk ┆ sen2 ┆ 3 ┆ 10.0 ┆ 2 ┆ 14.0 │
├╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┤
│ us ┆ sen3 ┆ 1 ┆ 12.0 ┆ 5 ┆ 19.0 │
└────────┴───────┴──────────┴───────────┴──────────┴───────────┘
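If you prefer column names closer to your expected output, a rename on the result is enough; a small sketch, assuming the aggregated frame above is bound to a variable named out (a name chosen here only for illustration):
# Hypothetical variable `out` holding the groupby/agg result from above.
out = out.rename({
    "value_max": "smax",
    "value_min": "smin",
    "time_max": "smax_t",
    "time_min": "smin_t",
})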

From hourly data, get daily nsmallest values for each column

I have a dataframe df with n columns, with hourly data (date_i X1_i X2_i ... Xn_i).
For each day, I want to get the nsmallest values for each column, but I cannot find a way to do it without looping over the columns.
It is easy for the single smallest value, as df.groupby(pd.Grouper(freq='D')).min() seems to do the trick, but when I try the nsmallest method, I get the following error message: "Cannot access callable attribute 'nsmallest' of 'DataFrameGroupBy' objects, try using the 'apply' method".
I tried to use nsmallest with the 'apply' method but was asked to specify columns...
If someone has an idea, it would be very helpful.
Thanks
PS: sorry for the formatting, this is my first post ever
Edit: some illustrations
What my data looks like:
0 1 ... 9678 9679
2022-01-08 00:00:00 18472.232746 28934.878033 ... 20668.503228 22079.457224
2022-01-08 01:00:00 19546.101746 30239.880033 ... 21789.779228 23330.190224
2022-01-08 02:00:00 22031.448746 33016.048033 ... 24278.199228 25990.503224
2022-01-08 03:00:00 24089.368644 36134.608919 ... 26327.332591 28089.134306
2022-01-08 04:00:00 24640.942644 36818.412919 ... 26894.204591 28736.705306
2022-01-08 05:00:00 23329.700644 35639.693919 ... 25555.199591 27379.323306
2022-01-08 06:00:00 20990.043644 33329.805919 ... 23137.500591 24917.126306
2022-01-08 07:00:00 18314.599644 30347.799919 ... 20167.500591 22022.524306
2022-01-08 08:00:00 17628.482226 31301.113041 ... 21665.296600 24202.625832
2022-01-08 09:00:00 15743.339226 29588.354041 ... 19912.297600 22341.947832
2022-01-08 10:00:00 15498.405226 29453.561041 ... 19799.009600 22131.170832
2022-01-08 11:00:00 14950.121226 28767.791041 ... 19328.678600 21507.167832
2022-01-08 12:00:00 13925.869226 27530.472041 ... 18404.139600 20460.316832
2022-01-08 13:00:00 17502.122226 30922.783041 ... 21990.380600 24008.382832
2022-01-08 14:00:00 19159.511385 34275.005187 ... 23961.590286 26460.214883
2022-01-08 15:00:00 20583.356385 35751.662187 ... 25315.380286 27793.800883
2022-01-08 16:00:00 20443.423385 35925.362187 ... 25184.576286 27672.536883
2022-01-08 17:00:00 15825.211385 31604.614187 ... 20646.669286 23145.311883
2022-01-08 18:00:00 11902.354052 28786.559805 ... 16028.363856 19313.677750
2022-01-08 19:00:00 13483.710052 30631.806805 ... 17635.338856 20948.556750
2022-01-08 20:00:00 16084.773323 33944.862396 ... 20627.810852 22763.962851
2022-01-08 21:00:00 18340.833323 36435.799396 ... 22920.037852 25240.320851
2022-01-08 22:00:00 15110.698323 33159.222396 ... 19794.355852 22102.416851
2022-01-08 23:00:00 15663.400323 33741.501396 ... 20180.693852 22605.909851
2022-01-09 00:00:00 19500.930751 39058.431760 ... 24127.257756 26919.289816
2022-01-09 01:00:00 20562.985751 40330.807760 ... 25123.488756 28051.573816
2022-01-09 02:00:00 23408.547751 43253.635760 ... 27840.447756 30960.372816
2022-01-09 03:00:00 25975.071191 45523.722743 ... 30274.316013 32276.174330
2022-01-09 04:00:00 27180.858191 46586.959743 ... 31348.131013 33414.631330
2022-01-09 05:00:00 26383.511191 45793.920743 ... 30598.931013 32605.280330
... ... ... ... ...
What I get with the min function:
2022-01-08 11902.354052 27530.472041 ... 16028.363856 19313.677750
2022-01-09 14491.281907 30293.870235 ... 16766.428013 21386.135041
...
What I would like to have, for example with nsmallest(2):
2022-01-08 11902.354052 27530.472041 ... 16028.363856 19313.677750
13483.710052 28767.791041 ... 17635.338856 20460.316832
2022-01-09 14491.281907 30293.870235 ... 16766.428013 21386.135041
14721.392907 30722.928235 ... 17130.594013 21732.426041
...
Group by day, get the 2 smallest values of each column as a list, and explode all columns (pandas >= 1.3.0):
get_2smallest = lambda x: x.nsmallest(2).tolist()
out = df.resample('D').apply(get_2smallest).explode(df.columns.tolist())
print(out)
# Output
0 1 9678 9679
2022-01-08 11902.354052 27530.472041 16028.363856 19313.67775
2022-01-08 13483.710052 28767.791041 17635.338856 20460.316832
2022-01-09 19500.930751 39058.43176 24127.257756 26919.289816
2022-01-09 20562.985751 40330.80776 25123.488756 28051.573816
Update
Another version, maybe faster:
out = (
    df.set_index(df.index.date)                      # index by calendar date
      .stack().rename_axis(['Date', 'Col']).rename('Val')
      .sort_values().groupby(level=[0, 1]).head(2)   # keep the 2 smallest per (Date, Col)
      .sort_index().reset_index()
      .assign(Idx=lambda x: x.index % 2)             # 0/1 position within each (Date, Col) pair
      .pivot(index=['Date', 'Idx'], columns='Col', values='Val')
      .droplevel('Idx').rename_axis(index=None, columns=None)
)

How can we resample time series in polars

I'd like to use a bucketing expression with groupby to downsample on a monthly basis, since the downsampling function will be deprecated. Is there an easy way to do this? datetime.timedelta only works with days and smaller units.
With the landing of groupby_dynamic we can now downsample and use the whole expression API for our aggregations. This means we can resample by any of the following:
upsampling
downsampling
first upsample and then downsample
Let's go through an example:
from datetime import datetime

import polars as pl

df = pl.DataFrame(
    {
        "time": pl.date_range(low=datetime(2021, 12, 16), high=datetime(2021, 12, 16, 3), interval="30m"),
        "groups": ["a", "a", "a", "b", "b", "a", "a"],
        "values": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0],
    }
)
print(df)
shape: (7, 3)
┌─────────────────────┬────────┬────────┐
│ time ┆ groups ┆ values │
│ --- ┆ --- ┆ --- │
│ datetime ┆ str ┆ f64 │
╞═════════════════════╪════════╪════════╡
│ 2021-12-16 00:00:00 ┆ a ┆ 1 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 00:30:00 ┆ a ┆ 2 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 01:00:00 ┆ a ┆ 3 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 01:30:00 ┆ b ┆ 4 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 02:00:00 ┆ b ┆ 5 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 02:30:00 ┆ a ┆ 6 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 03:00:00 ┆ a ┆ 7 │
└─────────────────────┴────────┴────────┘
Upsampling
Upsampling can be done by defining an interval. This will yield a DataFrame with nulls, which can then be filled with a fill strategy or interpolation.
df.upsample("time", "15m").fill_null("forward")
shape: (13, 3)
┌─────────────────────┬────────┬────────┐
│ time ┆ groups ┆ values │
│ --- ┆ --- ┆ --- │
│ datetime ┆ str ┆ f64 │
╞═════════════════════╪════════╪════════╡
│ 2021-12-16 00:00:00 ┆ a ┆ 1 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 00:15:00 ┆ a ┆ 1 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 00:30:00 ┆ a ┆ 2 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 00:45:00 ┆ a ┆ 2 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ ... ┆ ... ┆ ... │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 02:00:00 ┆ b ┆ 5 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 02:15:00 ┆ b ┆ 5 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 02:30:00 ┆ a ┆ 6 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 02:45:00 ┆ a ┆ 6 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 03:00:00 ┆ a ┆ 7 │
└─────────────────────┴────────┴────────┘
(
    df.upsample("time", "15m")
    .interpolate()
    .fill_null("forward")  # string columns cannot be interpolated
)
shape: (13, 3)
┌─────────────────────┬────────┬────────┐
│ time ┆ groups ┆ values │
│ --- ┆ --- ┆ --- │
│ datetime ┆ str ┆ f64 │
╞═════════════════════╪════════╪════════╡
│ 2021-12-16 00:00:00 ┆ a ┆ 1 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 00:15:00 ┆ a ┆ 1.5 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 00:30:00 ┆ a ┆ 2 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 00:45:00 ┆ a ┆ 2 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ ... ┆ ... ┆ ... │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 02:00:00 ┆ b ┆ 5 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 02:15:00 ┆ b ┆ 3.5 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 02:30:00 ┆ a ┆ 6 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 02:45:00 ┆ a ┆ 4 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2021-12-16 03:00:00 ┆ a ┆ 7 │
└─────────────────────┴────────┴────────┘
Downsampling
This is the powerful one, because we can also combine it with normal groupby keys. We get a virtual moving window over the time series (grouped by one or multiple keys) that can be aggregated with the expression API.
(
    df.groupby_dynamic(
        time_column="time",
        every="1h",
        closed="both",
        by="groups",
        include_boundaries=True,
    )
    .agg([
        pl.col('time').count(),
        pl.col("time").max(),
        pl.sum("values"),
    ])
)
shape: (4, 7)
┌────────┬────────────┬────────────┬────────────┬────────────┬─────────────────────┬────────────┐
│ groups ┆ _lower_bou ┆ _upper_bou ┆ time ┆ time_count ┆ time_max ┆ values_sum │
│ --- ┆ ndary ┆ ndary ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ --- ┆ --- ┆ datetime ┆ u32 ┆ datetime ┆ f64 │
│ ┆ datetime ┆ datetime ┆ ┆ ┆ ┆ │
╞════════╪════════════╪════════════╪════════════╪════════════╪═════════════════════╪════════════╡
│ a ┆ 2021-12-16 ┆ 2021-12-16 ┆ 2021-12-16 ┆ 3 ┆ 2021-12-16 01:00:00 ┆ 6 │
│ ┆ 00:00:00 ┆ 01:00:00 ┆ 00:00:00 ┆ ┆ ┆ │
├╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ a ┆ 2021-12-16 ┆ 2021-12-16 ┆ 2021-12-16 ┆ 1 ┆ 2021-12-16 00:00:00 ┆ 1 │
│ ┆ 01:00:00 ┆ 02:00:00 ┆ 00:00:00 ┆ ┆ ┆ │
├╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ a ┆ 2021-12-16 ┆ 2021-12-16 ┆ 2021-12-16 ┆ 2 ┆ 2021-12-16 03:00:00 ┆ 13 │
│ ┆ 02:00:00 ┆ 03:00:00 ┆ 00:00:00 ┆ ┆ ┆ │
├╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ b ┆ 2021-12-16 ┆ 2021-12-16 ┆ 2021-12-16 ┆ 2 ┆ 2021-12-16 02:00:00 ┆ 9 │
│ ┆ 01:00:00 ┆ 02:00:00 ┆ 01:00:00 ┆ ┆ ┆ │
└────────┴────────────┴────────────┴────────────┴────────────┴─────────────────────┴────────────┘
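Since the original question asks about monthly resampling specifically, the same mechanism should work with a monthly interval string; a minimal sketch on the example frame above (groupby_dynamic expects the frame to be sorted by the time column, which it is here):
# Downsample to monthly buckets; "1mo" is a calendar-aware interval string.
(
    df.groupby_dynamic(time_column="time", every="1mo")
    .agg([
        pl.sum("values"),
    ])
)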
I found a solution to my problem using the round expression followed by a groupby operation on the date column. Here is a code example:
df = pl.DataFrame(
    {
        "A": [
            "2020-01-01",
            "2020-01-02",
            "2020-02-03",
            "2020-02-04",
            "2020-03-05",
            "2020-03-06",
            "2020-06-06",
        ],
        "B": [1.0, 8.0, 6.0, 2.0, 16.0, 10.0, 2],
        "C": [3.0, 6.0, 9.0, 2.0, 13.0, 8.0, 2],
        "D": [12.0, 5.0, 9.0, 2.0, 11.0, 2.0, 2],
    }
)
q = (
    df.lazy()
    .with_column(pl.col('A').str.strptime(pl.Date, "%Y-%m-%d").dt.round(rule='month', n=1))
    .groupby('A')
    .agg(
        [
            pl.col("B").max(),
            pl.col("C").min(),
            pl.col("D").last(),
        ]
    )
    .sort('A')
)
df = q.collect()
print(df)
prints
┌────────────┬───────┬───────┬────────┐
│ A ┆ B_max ┆ C_min ┆ D_last │
│ --- ┆ --- ┆ --- ┆ --- │
│ date ┆ f64 ┆ f64 ┆ f64 │
╞════════════╪═══════╪═══════╪════════╡
│ 2020-01-01 ┆ 8 ┆ 3 ┆ 5 │
├╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2020-02-01 ┆ 6 ┆ 2 ┆ 2 │
├╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2020-03-01 ┆ 16 ┆ 8 ┆ 2 │
├╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2020-06-01 ┆ 2 ┆ 2 ┆ 2 │
└────────────┴───────┴───────┴────────┘
Some explanation: first I cast the String column to the pl.Date type, then I use .dt to access the date namespace. After that I use the round function of that namespace to round all dates on a monthly basis to the first day of the month. As a result, every date in the same month gets the same value, so I can group them with groupby and apply aggregation functions to the groups. The downside of this method (and of downsampling too) is that you are missing the months where no dates exist. For that you can use the code below; it's a little bit messy and I had to use pandas, but I think it works.
import datetime

import numpy as np
import pandas as pd
from dateutil.relativedelta import relativedelta

date_min = df['A'].dt.min()
date_max = df['A'].dt.max() + relativedelta(months=+1)
t_index = pd.date_range(date_min, date_max, freq='M', closed='right').values
t_index = [datetime.datetime.fromisoformat(str(np.datetime_as_string(x, unit='D'))) for x in t_index]
df_ref = pl.DataFrame(t_index, columns=['A'])
q = (
    df_ref.lazy()
    .with_column(pl.col('A').cast(pl.Date).dt.round(rule='month', n=1))
    .join(df.lazy(), on='A', how='left')
)
df = q.collect()
print(df)
results in
┌────────────┬───────┬───────┬────────┐
│ A ┆ B_max ┆ C_min ┆ D_last │
│ --- ┆ --- ┆ --- ┆ --- │
│ date ┆ f64 ┆ f64 ┆ f64 │
╞════════════╪═══════╪═══════╪════════╡
│ 2020-01-01 ┆ 8 ┆ 3 ┆ 5 │
├╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2020-02-01 ┆ 6 ┆ 2 ┆ 2 │
├╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2020-03-01 ┆ 16 ┆ 8 ┆ 2 │
├╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2020-04-01 ┆ null ┆ null ┆ null │
├╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2020-05-01 ┆ null ┆ null ┆ null │
├╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌┤
│ 2020-06-01 ┆ 2 ┆ 2 ┆ 2 │
└────────────┴───────┴───────┴────────┘
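As a polars-only alternative to the pandas date_range workaround, the upsample method shown in the first answer should also be able to insert the missing months, since column A is a sorted Date column after the aggregation; a small sketch under that assumption (the df_filled name is only for illustration):
# Sketch: insert null rows for the missing months directly in polars.
df_filled = df.sort("A").upsample(time_column="A", every="1mo")
print(df_filled)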
