I am trying to loop through a Polars recordset using the following code:
import polars as pl
mydf = pl.DataFrame(
    {"start_date": ["2020-01-02", "2020-01-03", "2020-01-04"],
     "Name": ["John", "Joe", "James"]})
print(mydf)
shape: (3, 2)
┌────────────┬───────┐
│ start_date ┆ Name  │
│ ---        ┆ ---   │
│ str        ┆ str   │
╞════════════╪═══════╡
│ 2020-01-02 ┆ John  │
│ 2020-01-03 ┆ Joe   │
│ 2020-01-04 ┆ James │
└────────────┴───────┘
for row in mydf.rows():
    print(row)
('2020-01-02', 'John')
('2020-01-03', 'Joe')
('2020-01-04', 'James')
Is there a way to specifically reference 'Name' using the named column as opposed to the index? In Pandas this would look something like:
import pandas as pd
mydf = pd.DataFrame(
    {"start_date": ["2020-01-02", "2020-01-03", "2020-01-04"],
     "Name": ["John", "Joe", "James"]})
for index, row in mydf.iterrows():
    mydf['Name'][index]
'John'
'Joe'
'James'
You can specify that you want the rows to be named:
for row in mydf.rows(named=True):
    print(row)
It will give you a dict:
{'start_date': '2020-01-02', 'Name': 'John'}
{'start_date': '2020-01-03', 'Name': 'Joe'}
{'start_date': '2020-01-04', 'Name': 'James'}
You can then access the value with row['Name']. For example:
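for row in mydf.rows(named=True):
    print(row["Name"])
John
Joe
James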
Note that:
- previous versions returned a namedtuple instead of a dict
- it's less memory-intensive to use iter_rows (see the sketch below)
- overall, it's not recommended to iterate through the data this way
Row iteration is not optimal as the underlying data is stored in columnar form; where possible, prefer export via one of the dedicated export/output methods.
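A minimal sketch of the iter_rows variant mentioned above, using the same DataFrame:
for row in mydf.iter_rows(named=True):
    print(row["Name"])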
You would use select for that. Note that iterating over a DataFrame directly yields its columns as Series, so convert the single selected column to a Series first:
names = mydf.select(['Name']).to_series()
for name in names:
    print(name)
Related
Is there a way to filter data in a period of time (i.e., start time and end time) using polars?
import pandas as pd
import polars as pl
dr = pd.date_range(start='2020-01-01', end='2021-01-01', freq="30min")
df = pd.DataFrame({"timestamp": dr})
pf = pl.from_pandas(df)
The best try I've got was:
pf.filter((pl.col("timestamp").dt.hour() >= 9) & (pl.col("timestamp").dt.minute() >= 30))
It only gave me everything after 9:30; and if I append another filter after that:
pf.filter((pl.col("timestamp").dt.hour() >= 9) & (pl.col("timestamp").dt.minute() >= 30)).filter(pl.col("timestamp").dt.hour() < 16)
this, however, still does not give me the slice that falls right on 16:00.
The Polars API does not seem to deal specifically with the time part of a timeseries (only the date part); is there a better workaround using Polars?
Good question!
Firstly, we can create this kind of DataFrame in Polars:
from datetime import datetime, time
import polars as pl
start = datetime(2020,1,1)
stop = datetime(2021,1,1)
df = pl.DataFrame({'timestamp':pl.date_range(low=start, high=stop, interval="30m")})
To work on the time components of a datetime we cast the timestamp column to the pl.Time dtype.
To filter on a range of times we then pass the lower and upper boundaries of time to is_between.
In this example I've printed the original timestamp column, the timestamp column cast to pl.Time and the filter condition.
(
    df
    .select(
        [
            pl.col("timestamp"),
            pl.col("timestamp").cast(pl.Time).alias('time_component'),
            pl.col("timestamp").cast(pl.Time).is_between(
                time(9, 30), time(16), include_bounds=True
            )
        ]
    )
)
What you are after is:
(
    df
    .filter(
        pl.col("timestamp").cast(pl.Time).is_between(
            time(9, 30), time(16), include_bounds=True
        )
    )
)
See the API docs for the syntax on controlling behaviour at the boundaries:
https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.Expr.is_between.html#polars.Expr.is_between
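Note that in more recent Polars versions the boundary behaviour is controlled by a closed parameter rather than include_bounds; a sketch under that assumption (check the linked docs for the version you are on):
df.filter(
    pl.col("timestamp").cast(pl.Time).is_between(
        time(9, 30), time(16), closed="both"  # also "left", "right" or "none"
    )
)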
It is described in the polars book here: https://pola-rs.github.io/polars-book/user-guide/howcani/timeseries/selecting_dates.html#filtering-by-a-date-range
It would look something like this:
start_date = "2022-03-22 00:00:00"
end_date = "2022-03-27 00:00:00"
df = pl.DataFrame(
    {
        "dates": [
            "2022-03-22 00:00:00",
            "2022-03-23 00:00:00",
            "2022-03-24 00:00:00",
            "2022-03-25 00:00:00",
            "2022-03-26 00:00:00",
            "2022-03-27 00:00:00",
            "2022-03-28 00:00:00",
        ]
    }
)
df.with_column(
    pl.col("dates").is_between(start_date, end_date)
).filter(pl.col("is_between") == True)
shape: (4, 2)
┌─────────────────────┬────────────┐
│ dates ┆ is_between │
│ --- ┆ --- │
│ str ┆ bool │
╞═════════════════════╪════════════╡
│ 2022-03-23 00:00:00 ┆ true │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2022-03-24 00:00:00 ┆ true │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2022-03-25 00:00:00 ┆ true │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2022-03-26 00:00:00 ┆ true │
└─────────────────────┴────────────┘
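If you only want the matching rows and not the boolean column, the same expression can go straight into filter (a sketch reusing the frame above):
df.filter(pl.col("dates").is_between(start_date, end_date))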
I have a data sorting problem where the original data consists of three 'blocks' containing a 'parent' row and two 'children' rows. A minimum working example looks like this:
import polars as pl
df_original = pl.DataFrame(
    {
        'Direction': ["Buy", "Sell", "Buy", "Sell", "Sell", "Buy"],
        'Order ID': [None, '123_1', '123_0', None, '456_1', '456_0'],
        'Parent Order ID': [123, None, None, 456, None, None],
    }
)
I would like to order these based on the parent row. If the parent is a 'Buy', the next row should be the 'Sell' child order and the third row the 'Buy' child order.
For a parent 'Sell' order, it needs to be followed by the 'Buy' order and then the 'Sell' order.
I have tried it with polars.sort(), but I am missing a piece of logic and can't figure out what it is.
The final result should look like this:
df_sorted = pl.DataFrame(
    {
        'Direction': ["Buy", "Sell", "Buy", "Sell", "Buy", "Sell"],
        'Order ID': [None, '123_1', '123_0', None, '456_0', '456_1'],
        'Parent Order ID': [123, None, None, 456, None, None],
    }
)
If I understand the question correctly, you want to alternate the order of "Buy"/"Sell".
This snippet produces your desired output.
df = pl.DataFrame(
    {
        'Direction': ["Buy", "Sell", "Buy", "Sell", "Sell", "Buy"],
        'Order ID': [None, '123_1', '123_0', None, '456_1', '456_0'],
        'Parent Order ID': [123, None, None, 456, None, None],
    }
)
consecutive = (pl.col("Direction") != pl.col("Direction").shift())
df.filter(consecutive).vstack(df.filter(~consecutive))
shape: (6, 3)
┌───────────┬──────────┬─────────────────┐
│ Direction ┆ Order ID ┆ Parent Order ID │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞═══════════╪══════════╪═════════════════╡
│ Buy ┆ null ┆ 123 │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ Sell ┆ 123_1 ┆ null │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ Buy ┆ 123_0 ┆ null │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ Sell ┆ null ┆ 456 │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ Buy ┆ 456_0 ┆ null │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ Sell ┆ 456_1 ┆ null │
└───────────┴──────────┴─────────────────┘
Suppose you have
df = pl.DataFrame(
    {
        "date": ["2022-01-01", "2022-01-02"],
        "hroff": [5, 2],
        "minoff": [1, 2]
    }).with_column(pl.col('date').str.strptime(pl.Date, "%Y-%m-%d"))
and you want to make a new column that adds the hour and min offsets to the date column. The only thing I saw was the dt.offset_by method. I made an extra column
df = df.with_column((pl.col('hroff') + "h" + pl.col('minoff') + "m").alias('offset'))
and then tried
df.with_column(pl.col('date')
    .cast(pl.Datetime).dt.with_time_zone('UTC')
    .dt.offset_by(pl.col('offset')).alias('newdate'))
but that doesn't work because dt.offset_by only takes a fixed string, not another column.
What's the best way to do that?
Use pl.duration:
import polars as pl
df = pl.DataFrame({
    "date": pl.Series(["2022-01-01", "2022-01-02"]).str.strptime(pl.Datetime(time_zone="UTC"), "%Y-%m-%d"),
    "hroff": [5, 2],
    "minoff": [1, 2]
})
print(df.select(
    pl.col("date") + pl.duration(hours=pl.col("hroff"), minutes=pl.col("minoff"))
))
shape: (2, 1)
┌─────────────────────┐
│ date │
│ --- │
│ datetime[μs] │
╞═════════════════════╡
│ 2022-01-01 05:01:00 │
├╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2022-01-02 02:02:00 │
└─────────────────────┘
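If you want to keep the original columns and add the result under a new name, as the question's newdate column does, the same expression should also work with an alias (a minimal sketch):
print(df.with_columns([
    (pl.col("date") + pl.duration(hours=pl.col("hroff"), minutes=pl.col("minoff"))).alias("newdate")
]))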
Currently, when I try to retrieve the date from a Polars datetime column, I have to write something similar to:
import datetime as dt
df = pl.DataFrame({
    'time': [dt.datetime.now()]
})
df = df.select([
    pl.col("*"),
    pl.col("time").apply(lambda x: x.date()).alias("date")
])
Is there a different way, something closer to:
pl.col("time").dt.date().alias("date")
You can cast a Datetime column to a Date column:
import datetime
import polars as pl
df = pl.DataFrame({
    'time': [datetime.datetime.now()]
})
df.with_column(
    pl.col("time").cast(pl.Date)
)
shape: (1, 1)
┌────────────┐
│ time │
│ --- │
│ date │
╞════════════╡
│ 2022-08-02 │
└────────────┘
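Note that more recent Polars versions also expose a dt.date() expression, so the exact syntax from the question should work there; a sketch assuming such a version:
df.with_column(
    pl.col("time").dt.date().alias("date")
)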
I've recently been using Polars for a project I'm starting to develop. I've come across several problems, but with the info here and in the docs I have solved those issues.
My issue:
When I save the dataframe and load it back, it stores the datetime data like this (I have checked, the type is object):
1900-01-01T18:00:00.000000000
whereas before saving it shows like this in my console:
1900-01-01 18:00:00
Code:
'''
My column is a string like this: 1234, which means 12:34, so I do the following transformation:
'''
df = df2.with_columns([
    pl.col('initial_band_time').apply(lambda x: datetime.datetime.strptime(x, '%H:%M')),
    pl.col('final_band_time').apply(lambda x: datetime.datetime.strptime(x, '%H:%M')),
])
df = df.drop('version').rename({'day_type': 'day'})
print(df)
print(df.dtypes)
#output: <class 'polars.datatypes.Datetime'>, <class 'polars.datatypes.Datetime'>
'''
I save it with write_csv
'''
df.write_csv('data/trp_occupation_level_emt_cleaned.csv', sep=",")
dfnew = pl.read_csv('data/trp_occupation_level_emt_cleaned.csv')
# print new df
print(dfnew.head())
print(dfnew.dtypes)
# output: <class 'polars.datatypes.Utf8'>, <class 'polars.datatypes.Utf8'>
I know I can read the csv with parse_dates=True, but I consume this dataframe in a database, so I need to export it with the dates parsed.
Polars does not parse string data as dates by default, but you can easily turn that on by setting the parse_dates keyword argument:
pl.read_csv("myfile.csv", parse_dates=True)
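Applied to the file from the question (a sketch reusing its path):
dfnew = pl.read_csv('data/trp_occupation_level_emt_cleaned.csv', parse_dates=True)
print(dfnew.dtypes)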
It sounds like you want to specify the formatting of Date and Datetime fields in an output csv file - to conform with the formatting requirements of an external application (e.g., database loader).
We can do that easily using the strftime format function. Basically, we will convert the Date/Datetime fields to strings, formatted as we need them, just before we write the csv file. This way, the csv output writer will not alter them.
For example, let's start with this data:
from io import StringIO
import polars as pl
my_csv = """sample_id,initial_band_time,final_band_time
1,2022-01-01T18:00:00,2022-01-01T18:35:00
2,2022-01-02T19:35:00,2022-01-02T20:05:00
"""
df = pl.read_csv(StringIO(my_csv), parse_dates=True)
print(df)
shape: (2, 3)
┌───────────┬─────────────────────┬─────────────────────┐
│ sample_id ┆ initial_band_time ┆ final_band_time │
│ --- ┆ --- ┆ --- │
│ i64 ┆ datetime[μs] ┆ datetime[μs] │
╞═══════════╪═════════════════════╪═════════════════════╡
│ 1 ┆ 2022-01-01 18:00:00 ┆ 2022-01-01 18:35:00 │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 2022-01-02 19:35:00 ┆ 2022-01-02 20:05:00 │
└───────────┴─────────────────────┴─────────────────────┘
Now we'll apply the strftime function with the format specifier %F %T.
df = df.with_column(pl.col(pl.Datetime).dt.strftime(fmt="%F %T"))
print(df)
shape: (2, 3)
┌───────────┬─────────────────────┬─────────────────────┐
│ sample_id ┆ initial_band_time ┆ final_band_time │
│ --- ┆ --- ┆ --- │
│ i64 ┆ str ┆ str │
╞═══════════╪═════════════════════╪═════════════════════╡
│ 1 ┆ 2022-01-01 18:00:00 ┆ 2022-01-01 18:35:00 │
├╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 2 ┆ 2022-01-02 19:35:00 ┆ 2022-01-02 20:05:00 │
└───────────┴─────────────────────┴─────────────────────┘
Notice that our Datetime fields have been converted to strings (the 'str' in the column header).
And here's a pro tip: notice that I'm using a datatype wildcard expression in the col expression: pl.col(pl.Datetime). This way, you don't need to specify each Datetime field; Polars will automatically convert them all.
Now, when we write the csv file, we get the following output.
df.write_csv('/tmp/tmp.csv')
Output csv:
sample_id,initial_band_time,final_band_time
1,2022-01-01 18:00:00,2022-01-01 18:35:00
2,2022-01-02 19:35:00,2022-01-02 20:05:00
You may need to play around with the format specifier until you find one that your external application will accept. Here's a handy reference for format specifiers.
Here's another trick: you can do this step just before writing the csv file:
df.with_column(pl.col(pl.Datetime).dt.strftime(fmt="%F %T")).write_csv('/tmp/tmp.csv')
This way, your original dataset is not changed ... only the copy that you intend to write to a csv file.
BTW, I use this trick all the time when writing csv files that I intend to use in spreadsheets. I often just want the "%F" (date) part of the datetime, not the "%T" part (time). It just makes parsing easier in the spreadsheet.
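For instance, a date-only version of the same trick (a sketch reusing the original datetime df from above, before any string conversion):
df.with_column(pl.col(pl.Datetime).dt.strftime(fmt="%F")).write_csv('/tmp/tmp.csv')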