I have a dataset of events with a date column which I need to display in a weekly plot, with some more data processing afterwards. After some googling I found pd.Grouper(freq="W"), so I am using that to group the events by week and display them. My problem is that after the groupby and unstack I end up with a data frame where there is an unnamed column that I am unable to refer to except using iloc. This is an issue because in later plots I am grouping by other columns, so I need a way to refer to this column by name, not iloc.
Here's a reproducible example of my dataset:
import pandas as pd
from datetime import datetime
from faker import Faker
fake = Faker()
start_date = datetime(2023, 1, 1)
end_date = datetime(2023, 2, 1)
# Generate data frame of 30 random dates in January 2023
df = pd.DataFrame(
    {"date": [fake.date_time_between(start_date=start_date, end_date=end_date) for i in range(30)],
     "dummy": [1 for i in range(30)]})  # There's probably a better way of counting than this
grouper = df.set_index("date").groupby([pd.Grouper(freq="W"), 'dummy'])
result = grouper['dummy'].count().unstack('dummy').fillna(0)
The result data frame that I get has weird indexes/columns that I am unable to navigate:
>>> print(result)
dummy       1
date
2023-01-01  1
2023-01-08  3
2023-01-15  4
2023-01-22  9
2023-01-29  8
2023-02-05  5
>>> print(result.columns)
Int64Index([1], dtype='int64', name='dummy')
The only column here is dummy, but even result.dummy raises an AttributeError.
I've also tried result.reset_index():
dummy        date  1
0      2023-01-01  1
1      2023-01-08  3
2      2023-01-15  4
3      2023-01-22  9
4      2023-01-29  8
5      2023-02-05  5
But in this data frame I can only get to the date column - the counts column that prints as "1" cannot be accessed with result.reset_index()["1"] either; that just raises another error.
I am completely perplexed by what is going on here; pandas is really powerful, but sometimes I find it incredibly unintuitive. I've checked several pages of the docs and checked whether there's another index level (there isn't). Can someone who's better at pandas help me out here?
I just want a way to convert the grouped data frame into something like this:
         date  counts
0  2023-01-01       1
1  2023-01-08       3
2  2023-01-15       4
3  2023-01-22       9
4  2023-01-29       8
5  2023-02-05       5
where date and counts are regular columns and the index is the default unnamed RangeIndex.
What you are seeing is not actually an unnamed column: after unstack('dummy') the column label is the integer 1 (a value taken from the dummy column), not the string "1", which is why both result.dummy and result["1"] fail - result[1] would have worked. To get a properly named counts column, skip the unstack and name the column while resetting the index:
Using the same setup code as in the question:
result = df.groupby([pd.Grouper(freq="W", key="date"), "dummy"])["dummy"].count()
result = result.reset_index(name="counts")
result = result.drop(columns=["dummy"])
which gives
        date  counts
0 2023-01-01       3
1 2023-01-08       7
2 2023-01-15       5
3 2023-01-22       5
4 2023-01-29       8
5 2023-02-05       2
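If the dummy column exists only so that something can be counted, you can drop it from the process entirely. A minimal sketch using resample on the same df as above (size() counts the rows in each weekly bin):
# count events per week without any helper column
counts = df.set_index("date").resample("W").size().reset_index(name="counts")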
Related
I have a database with a column describing the dates when the data were collected. However, the dates were inserted as MM-DD (e.g., Jul-13) and they are coded as strings.
import numpy as np
import pandas as pd

ids = pd.Series([1, 2, 3, 4])
dates = pd.Series(["Jul-29", "Jul-29", "Dec-29", "Apr-22"])
df = pd.DataFrame({"ids" : ids, "dates" : dates})
   ids   dates
0    1  Jul-29
1    2  Jul-29
2    3  Dec-29
3    4  Apr-22
I would like to insert the year in these dates before converting to date based on a condition. I know that data from December belongs to 2021, whereas the rest of the data was collected in 2022. Therefore I need something like this:
   ids   dates corrected_dates
0    1  Jul-29     Jul-29-2022
1    2  Jul-29     Jul-29-2022
2    3  Dec-29     Dec-29-2021
3    4  Apr-22     Apr-22-2022
I have tried:
df["corrected_dates"] = np.where("Dec" in df["dates"], df["dates"] + "-2021", df["dates"] + "-2022")
but this resulted in
   ids   dates corrected_dates
0    1  Jul-29     Jul-29-2022
1    2  Jul-29     Jul-29-2022
2    3  Dec-29     Dec-29-2022
3    4  Apr-22     Apr-22-2022
Therefore, I am probably not coding the conditional properly but I can't find out what I am doing wrong.
I was able to insert the year in a new column by doing
corrected_dates = []
for date in df["dates"]:
    if "Dec" in date:
        new_date = date + "-2021"
    else:
        new_date = date + "-2022"
    corrected_dates.append(new_date)
and then df["corrected_dates"] = corrected_dates, but this seems too cumbersome (not to mention that I am not sure it would work if there were missing data in df["dates"]).
Can anyone help me understand what I am doing wrong when using np.where() or suggest a better alternative than using a for loop?
Thanks
Use str.startswith, which operates element-wise on the Series. Your "Dec" in df["dates"] check tests membership against the Series index, not its values, so it evaluates to a single False and np.where takes the else branch for every row:
df['new'] = np.where(df["dates"].str.startswith('Dec'), df["dates"] + "-2021", df["dates"] + "-2022")
df
Out[19]:
   ids   dates          new
0    1  Jul-29  Jul-29-2022
1    2  Jul-29  Jul-29-2022
2    3  Dec-29  Dec-29-2021
3    4  Apr-22  Apr-22-2022
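As for missing data: na=False in str.startswith sends missing rows down the else branch. If you would rather keep them as NaN in the result, one possible sketch (str.cat propagates missing values by default):
# build the year suffix per row, then concatenate; NaN dates stay NaN
mask = df["dates"].str.startswith("Dec", na=False)
suffix = np.where(mask, "-2021", "-2022")
df["corrected_dates"] = df["dates"].str.cat(pd.Series(suffix, index=df.index))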
I am retrieving Yahoo stock ticker data and want to convert the given currency to euros. For this purpose I am using the Python Library Currency Converter and the pandas method multiply.
One of the columns, trading volume, shouldn't be "converted" - what's the best way to skip it?
This is what I currently have:
import pandas as pd
import datetime
import pandas_datareader.data as web
from pandas import Series, DataFrame
from currency_converter import CurrencyConverter
start = datetime.datetime(2017, 1, 1)
end = datetime.datetime(2020, 12, 31)
c = CurrencyConverter()
df = web.DataReader("EXK", 'yahoo', start, end)
df.tail()
conversion = c.convert(1, 'USD', 'EUR')
eurodf = df.multiply(conversion,axis='rows')
eurodf.tail()
One approach I thought of taking, was to maybe join the "volume" column after multiplication.
Alternatively I could just target that one column and convert it back?
You can use .loc to select all columns except one. For example, given this frame:
   A  B  C
0  0  1  2
1  3  4  5
2  6  7  8
df.loc[:, df.columns.drop('B')] *= 10
Result:
    A  B   C
0   0  1  20
1  30  4  50
2  60  7  80
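Applied to the question's data, the same idea might look like this (a sketch - it assumes the volume column returned by pandas-datareader is literally named "Volume"):
# convert every column except Volume to EUR
price_cols = df.columns.drop("Volume")  # "Volume" is an assumption about the column name
eurodf = df.copy()
eurodf[price_cols] = eurodf[price_cols].multiply(conversion)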
I have a large dataset (df) with lots of columns and I am trying to get the total number of rows for each day.
|datetime|id|col3|col4|col...
1 |11-11-2020|7|col3|col4|col...
2 |10-11-2020|5|col3|col4|col...
3 |09-11-2020|5|col3|col4|col...
4 |10-11-2020|4|col3|col4|col...
5 |10-11-2020|4|col3|col4|col...
6 |07-11-2020|4|col3|col4|col...
I want my result to be something like this
|datetime|id|col3|col4|col...|Count
6 |07-11-2020|4|col3|col4|col...| 1
3 |09-11-2020|5|col3|col4|col...| 1
2 |10-11-2020|5|col3|col4|col...| 1
4 |10-11-2020|4|col3|col4|col...| 2
1 |11-11-2020|7|col3|col4|col...| 1
I tried to use resample like this: df = df.groupby(['id','col3', pd.Grouper(key='datetime', freq='D')]).sum().reset_index(), and below is my result. I am still new to programming and pandas; I have read the pandas docs but am still unable to get this working.
|datetime|id|col3|col4|col...
6 |07-11-2020|4|col3|1|0.0
3 |09-11-2020|5|col3|1|0.0
2 |10-11-2020|5|col3|1|0.0
4 |10-11-2020|4|col3|2|0.0
1 |11-11-2020|7|col3|1|0.0
try this:
df = df.groupby(['datetime','id','col3']).count()
If you want the count values for all columns based only on the date, then:
df.groupby('datetime').count()
And you'll get a DataFrame that has the datetime as the index, with each column cell giving the number of entries for that given index.
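Note that .count() fills every remaining column with its per-column non-null count. If what you want is the single named Count column from your expected output, .size() is probably closer; a sketch:
# one row per (datetime, id, col3) combination, with an explicit Count column
out = df.groupby(['datetime', 'id', 'col3']).size().reset_index(name='Count')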
I have a df that is a time series of user access data
UserID Access Date
a 10/01/2019
b 10/01/2019
c 10/01/2019
a 10/02/2019
b 10/02/2019
d 10/02/2019
e 10/03/2019
f 10/03/2019
a 10/03/2019
b 10/03/2019
a 10/04/2019
b 10/04/2019
c 10/05/2019
I have another df that lists out the dates, and I want to aggregate the count of unique UserIDs over the rolling past 3 days. The expected output would look like below:
Date        Past_3_days_unique_count
10/01/2019  NaN
10/02/2019  NaN
10/03/2019  6
10/04/2019  5
10/05/2019  5
How would I be able to achieve this?
It's quite straightforward - let me walk you through it via the following snippet and its comments.
import pandas as pd
import numpy as np
# Generate some dates
dates = pd.date_range("01-01-2016", "01-10-2016", freq="6H")
# Generate some user ids
ids = np.random.randint(1, 5, len(dates))
df = pd.DataFrame({"id": ids, "date": dates})
# Collect unique IDs for each day
q = df.groupby(df["date"].dt.to_period("D"))["id"].nunique()
# Rolling 3-day sum of the daily unique counts (note: this counts a user
# once per day they appear, not once per window)
q.rolling(3).sum()
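If you need the number of distinct users across the whole 3-day window - which is what the expected output above shows, e.g. 6 for 10/03 - summing daily uniques overcounts users who appear on more than one day. A sketch that unions the per-day ID sets instead, using the same df:
# the set of IDs seen on each day
daily_sets = df.groupby(df["date"].dt.to_period("D"))["id"].apply(set)
# union each 3-day window of sets and count the distinct members;
# the first two days have no complete window, hence NaN
window_unique = pd.Series(
    [len(set().union(*daily_sets.iloc[i - 2:i + 1])) if i >= 2 else float("nan")
     for i in range(len(daily_sets))],
    index=daily_sets.index,
)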
Use pandas groupby; the documentation is very good.
I have a dataframe with more than 4 million rows and 30 columns. I am just providing a sample of my patient dataframe:
import pandas as pd

df = pd.DataFrame({
    'subject_ID': [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3],
    'date_visit': ['1/1/2020 12:35:21', '1/1/2020 14:35:32', '1/1/2020 16:21:20',
                   '01/02/2020 15:12:37', '01/03/2020 16:32:12', '1/1/2020 12:35:21',
                   '1/3/2020 14:35:32', '1/8/2020 16:21:20', '01/09/2020 15:12:37',
                   '01/10/2020 16:32:12', '11/01/2022 13:02:31', '13/01/2023 17:12:31',
                   '16/01/2023 19:22:31'],
    'item_name': ['PEEP', 'Fio2', 'PEEP', 'Fio2', 'PEEP', 'PEEP', 'PEEP', 'PEEP',
                  'PEEP', 'PEEP', 'Fio2', 'Fio2', 'Fio2']})
I would like to do two things
1) Find the subjects and the records that are missing from their visit sequence
2) Get the count of item_name for each subject
For q2, this is what I tried
df.groupby(['subject_ID','item_name']).count()  # though this produces output, the column name is not okay - why does it show the count values under the date_visit column?
For q1, this is what I am trying
df['day'].le(df['shift_date'].add(1))
I expect my output to look as shown below.
You can get the first part with:
In [14]: df.groupby("subject_ID")['item_name'].value_counts().unstack(fill_value=0)
Out[14]:
item_name   Fio2  PEEP
subject_ID
1              2     3
2              0     5
3              3     0
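An equivalent spelling, if you find it more readable - a small sketch using crosstab:
# same counts table: subjects as rows, item names as columns
pd.crosstab(df['subject_ID'], df['item_name'])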
EDIT:
I think you've still got your date formats a bit messed up in your sample output, and strongly recommend switching everything to the ISO 8601 standard since that prevents problems like that down the road. pandas won't correctly parse that 11/01/2022 entry on its own, so I've manually fixed it in the sample.
Using what I assume these dates are supposed to be, you can find the gaps by grouping and using .resample():
In [73]: df['dates'] = pd.to_datetime(df['date_visit'])
In [74]: df.loc[10, 'dates'] = pd.to_datetime("2022-01-11 13:02:31")
In [75]: dates = df.groupby("subject_ID").apply(lambda x: x.set_index('dates').resample('D').first())
In [76]: dates.index[dates.isnull().any(axis=1)].to_frame().reset_index(drop=True)
Out[76]:
   subject_ID      dates
0           2 2020-01-02
1           2 2020-01-04
2           2 2020-01-05
3           2 2020-01-06
4           2 2020-01-07
5           3 2022-01-12
6           3 2022-01-14
7           3 2022-01-15
You can then add a sequence-gap status to that first frame by checking whether each subject_ID shows up in this new frame.
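For instance, a sketch (gaps being the frame from Out[76], and has_gap a hypothetical flag name):
# flag each row whose subject has at least one missing day in their visit sequence
gaps = dates.index[dates.isnull().any(axis=1)].to_frame().reset_index(drop=True)
df['has_gap'] = df['subject_ID'].isin(gaps['subject_ID'])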