I am trying to 'join' two DataFrames based on a condition.
Condition:
if df1.Year == df2.Year and
   df2.BeginDate <= df1.Date <= df2.EndDate and
   df1.ID == df2.ID
#if the condition is True, I would like to add an extra (binary) column to df1, something like
#df1.condition = Yes or No.
My data looks like this:
df1:
Year Week ID Date
2020 1 123 2020-01-01 00:00:00
2020 1 345 2020-01-01 00:00:00
2020 2 123 2020-01-07 00:00:00
2020 1 123 2020-01-01 00:00:00
df2:
Year BeginDate EndDate ID
2020 2020-01-01 00:00:00 2020-01-02 00:00:00 123
2020 2020-01-01 00:00:00 2020-01-02 00:00:00 123
2020 2020-01-01 00:00:00 2020-01-02 00:00:00 978
2020 2020-09-21 00:00:00 2020-01-02 00:00:00 978
end_df: #Expected output
Year Week ID Condition
2020 1 123 True #Year is matching, week1 is between the dates, ID is matching too
2019 1 345 False #Year is not matching
2020 2 187 False # ID is not matching
2020 1 123 True # Same as first row.
I thought I'd solve this by looping over the two DataFrames:
for index, row in df1.iterrows():
    for index2, row2 in df2.iterrows():
        if row['Year'] == row2['Year']:
            if row['ID'] == row2['ID']:
                .....
                .....
                row['Condition'] = True
            else:
                row['Condition'] = False
However... this is leading to error after error.
Really looking forward to seeing how you guys would tackle this problem. Many thanks in advance!
UPDATE 1
I created a loop. However, it is taking ages (and I am not sure how to add the value to a new column).
Note: in df1 I created a 'Date' column (in the same format as the BeginDate and EndDate columns of df2).
The key question now: how can I add the True value (at the end of the loop) to df1 as an extra column?
for index, row in df1.iterrows():
    row['Year'] = str(row['Year'])
    for index1, row1 in df2.iterrows():
        row1['Year'] = str(row1['Year'])
        if row['Year'] == row1['Year']:
            row['ID'] = str(row['ID'])
            row1['ID'] = str(row1['ID'])
            if row['ID'] == row1['ID']:
                if row['Date'] >= row1['BeginDate'] and row['Date'] <= row1['EndDate']:
                    print("I would like to add this YES to df1 in an extra column")
Edit 2
Trying @davidbilla's solution: it looks like the 'condition' column is not coming out right. As you can see, it matches even when df1.Year != df2.Year. Note that df2 is sorted by ID (so all rows with the same ID are grouped together).
I guess you are expecting something like this, if you are trying to match the DataFrames row-wise (i.e., compare row 1 of df1 with row 1 of df2):
df1['condition'] = np.where((df1['Year']==df2['Year']) & (df1['ID']==df2['ID']) & (df1['Date']>=df2['BeginDate']) & (df1['Date']<=df2['EndDate']), True, False)
np.where takes the condition as its first parameter; the second parameter is the value used where the condition passes, and the third parameter is the value used where it fails.
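A tiny illustration of those three parameters (my sketch; note that with True/False as the two values, np.where is equivalent to the boolean condition itself):

import numpy as np
import pandas as pd

s = pd.Series([1, 5, 10])
print(np.where(s > 4, True, False))  # [False  True  True]
print((s > 4).to_numpy())            # same result without np.where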
EDIT 1:
Based on your sample dataset
import pandas as pd
import numpy as np

df1 = pd.DataFrame([[2020,1,123],[2020,1,345],[2020,2,123],[2020,1,123]],
                   columns=['Year','Week','ID'])
df2 = pd.DataFrame([[2020,'2020-01-01 00:00:00','2020-01-02 00:00:00',123],
                    [2020,'2020-01-01 00:00:00','2020-01-02 00:00:00',123],
                    [2020,'2020-01-01 00:00:00','2020-01-02 00:00:00',978],
                    [2020,'2020-09-21 00:00:00','2020-01-02 00:00:00',978]],
                   columns=['Year','BeginDate','EndDate','ID'])
df2['BeginDate'] = pd.to_datetime(df2['BeginDate'])
df2['EndDate'] = pd.to_datetime(df2['EndDate'])
df1['condition'] = np.where((df1['Year']==df2['Year']) & (df1['ID']==df2['ID']), True, False)
# &(df1['Date']>=df2['BeginDate'])&(df1['Date']<=df2['EndDate']) - removed this condition as df1 has no Date field
print(df1)
Output:
Year Week ID condition
0 2020 1 123 True
1 2020 1 345 False
2 2020 2 123 False
3 2020 1 123 False
EDIT 2: To compare one row in df1 with all rows in df2
df1['condition'] = (df1['Year'].isin(df2['Year']))&(df1['ID'].isin(df2['ID']))
This takes df1['Year'] and compares it against all values of df2['Year'].
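One caveat (my addition): isin checks each column independently, so a row passes when its Year appears anywhere in df2 and its ID appears anywhere in df2, even if the two never occur together on the same df2 row:

a = pd.DataFrame({'Year': [2019], 'ID': [123]})
b = pd.DataFrame({'Year': [2019, 2020], 'ID': [978, 123]})
# Year 2019 is in b (row 0) and ID 123 is in b (row 1), so the
# combined check is True although no single row of b has both.
print(a['Year'].isin(b['Year']) & a['ID'].isin(b['ID']))  # 0    True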
Based on the sample dataset:
df1:
Year Date ID
0 2020 2020-01-01 123
1 2020 2020-01-01 345
2 2020 2020-10-01 123
3 2020 2020-11-13 123
df2:
Year BeginDate EndDate ID
0 2020 2020-01-01 2020-02-01 123
1 2020 2020-01-01 2020-01-02 123
2 2020 2020-03-01 2020-05-01 978
3 2020 2020-09-21 2020-10-01 978
Code change:
date_range = list(zip(df2['BeginDate'], df2['EndDate']))

def check_date(date):
    for (s, e) in date_range:
        if date >= s and date <= e:
            return True
    return False

df1['condition'] = df1['Year'].isin(df2['Year']) & df1['ID'].isin(df2['ID'])
df1['date_compare'] = df1['Date'].apply(check_date)  # you could store this directly in df1['condition']; I used a new field just to print the values
df1['condition'] = df1['condition'] & df1['date_compare']
Output:
Year Date ID condition date_compare
0 2020 2020-01-01 123 True True # Year match, ID match and Date is within the range of df2 row 1
1 2020 2020-01-01 345 False True # Year match, ID no match
2 2020 2020-10-01 123 True True # Year match, ID match, Date is within range of df2 row 4
3 2020 2020-11-13 123 False False # Year match, ID match, but Date is not in range of any row in df2
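As a side note (my addition, not part of the original answer): check_date scans every range for every row. If all ranges in df2 are valid (BeginDate <= EndDate), a pandas IntervalIndex expresses the same per-row test more directly:

iv = pd.IntervalIndex.from_arrays(df2['BeginDate'], df2['EndDate'], closed='both')
df1['date_compare'] = [iv.contains(d).any() for d in df1['Date']]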
EDIT 3:
Based on the updated question (earlier I assumed it was fine if the three values Year, ID, and Date matched df2 across any rows, not necessarily the same row). I think I have a better understanding of your requirement now.
df2['BeginDate'] = pd.to_datetime(df2['BeginDate'])
df2['EndDate'] = pd.to_datetime(df2['EndDate'])
df1['Date'] = pd.to_datetime(df1['Date'])
df1['condition'] = False
for idx1, row1 in df1.iterrows():
    match = False
    for idx2, row2 in df2.iterrows():
        if (row1['Year'] == row2['Year']
                and row1['ID'] == row2['ID']
                and row1['Date'] >= row2['BeginDate']
                and row1['Date'] <= row2['EndDate']):
            match = True
    # write back through df1.at; assigning to `row1` would not persist
    df1.at[idx1, 'condition'] = match
Output - Set 1:
DF1:
Year Date ID
0 2020 2020-01-01 123
1 2020 2020-01-01 123
2 2020 2020-01-01 345
3 2020 2020-01-10 123
4 2020 2020-11-13 123
DF2:
Year BeginDate EndDate ID
0 2020 2020-01-15 2020-02-01 123
1 2020 2020-01-01 2020-01-02 123
2 2020 2020-03-01 2020-05-01 978
3 2020 2020-09-21 2020-10-01 978
DF1 result:
Year Date ID condition
0 2020 2020-01-01 123 True
1 2020 2020-01-01 123 True
2 2020 2020-01-01 345 False
3 2020 2020-01-10 123 False
4 2020 2020-11-13 123 False
Output - Set 2:
DF1:
Year Date ID
0 2019 2019-01-01 s904112
1 2019 2019-01-01 s911243
2 2019 2019-01-01 s917131
3 2019 2019-01-01 sp986214
4 2019 2019-01-01 s510006
5 2020 2020-01-10 s540006
DF2:
Year BeginDate EndDate ID
0 2020 2020-01-27 2020-09-02 s904112
1 2020 2020-01-27 2020-09-02 s904112
2 2020 2020-01-03 2020-03-15 s904112
3 2020 2020-04-15 2020-01-05 s904112
4 2020 2020-01-05 2020-05-15 s540006
5 2019 2019-01-05 2019-05-15 s904112
DF1 Result:
Year Date ID condition
0 2019 2019-01-01 s904112 False
1 2019 2019-01-01 s911243 False
2 2019 2019-01-01 s917131 False
3 2019 2019-01-01 sp986214 False
4 2019 2019-01-01 s510006 False
5 2020 2020-01-10 s540006 True
The 2nd row of the desired output has Year 2019, so I assume the 2nd row of df1.Year is also 2019 instead of 2020.
If I understand correctly, you need to merge and then filter out rows where Date falls outside the BeginDate-EndDate range. First, there are duplicates and invalid date ranges in df2; we need to drop both before merging. Invalid date ranges are those where BeginDate >= EndDate, which is index 3 of df2.
#convert all date columns of both `df1` and `df2` to datetime dtype
df1['Date'] = pd.to_datetime(df1['Date'])
df2[['BeginDate', 'EndDate']] = df2[['BeginDate', 'EndDate']].apply(pd.to_datetime)
#left-merge on `Year`, `ID` and use `eval` to compute
#column `Condition` where `Date` is between `BeginDate` and `EndDate`.
#Finally assign back to `df1`
df1['Condition'] = (df1.merge(df2.loc[df2.BeginDate < df2.EndDate].drop_duplicates(),
                              on=['Year','ID'], how='left')
                       .eval('Condition = BeginDate <= Date <= EndDate')['Condition'])
Out[614]:
Year Week ID Date Condition
0 2020 1 123 2020-01-01 True
1 2019 1 345 2020-01-01 False
2 2020 2 123 2020-01-07 False
3 2020 1 123 2020-01-01 True
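One caveat (my addition, not from the original answer): assigning the merged Condition back positionally assumes the deduplicated df2 matches each df1 row at most once; if several valid ranges survive for the same Year/ID pair, the merged frame has more rows than df1 and the alignment breaks. A sketch that tolerates multiple matches, assuming df1 has a unique index, aggregates the per-match flags with any():

m = df1.reset_index().merge(df2, on=['Year', 'ID'], how='left')
m['Condition'] = (m['BeginDate'] <= m['Date']) & (m['Date'] <= m['EndDate'])
df1['Condition'] = m.groupby('index')['Condition'].any()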
Related
I want to filter rows of df1, which has a date column of dtype datetime64[ns], using df2 (same column name and dtype). I tried searching for a solution, but I keep getting errors such as:
Can only compare identically-labeled Series objects or 'Timestamp' object is not iterable.
sample df1
id  date        value
1   2018-10-09  120
2   2018-10-09  60
3   2018-10-10  59
4   2018-11-25  120
5   2018-08-25  120
sample df2
date
2018-10-09
2018-10-10
sample result that I want
id  date        value
1   2018-10-09  120
2   2018-10-09  60
3   2018-10-10  59
In fact, I want this program to run once every 7 days, counting back from the day it starts, so I want it to remove dates that are not within the past 7 days.
from datetime import date, timedelta
import pandas as pd

# create new dataframe -> df2
data = {'date': []}
df2 = pd.DataFrame(data)
# set the dates to the last 7 days
days_use = 7  # 7 -> 1
for x in range(days_use, 0, -1):
    days_use = x
    use_day = date.today() - timedelta(days=days_use)
    df2.loc[x] = use_day
# change to datetime64[ns]
df2['date'] = pd.to_datetime(df2['date'])
Use isin:
>>> df1[df1["date"].isin(df2["date"])]
id date value
0 1 2018-10-09 120
1 2 2018-10-09 60
2 3 2018-10-10 59
If you want to create df2 with the dates for the past week, you can simply use pd.date_range:
df2 = pd.DataFrame({"date": pd.date_range(pd.Timestamp.today().date()-pd.DateOffset(7),periods=7)})
>>> df2
date
0 2022-05-03
1 2022-05-04
2 2022-05-05
3 2022-05-06
4 2022-05-07
5 2022-05-08
6 2022-05-09
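As a side note (my addition): if the goal is simply "keep only the last 7 days", you can skip building df2 entirely and compare against a cutoff (this variant also keeps today's date; adjust the bound if you don't want that):

cutoff = pd.Timestamp.today().normalize() - pd.Timedelta(days=7)
recent = df1[df1['date'] >= cutoff]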
Hi, I have a table of data like the one below, and I want to do a rolling count that, for each date in the groupby, also counts the values of prior dates.
Table of data:
Date      ID
1/1/2020  123
2/1/2020  432
2/1/2020  5234
4/1/2020  543
5/1/2020  645
6/1/2020  231
My desired output is something like this:
Date      count
1/1/2020  1
2/1/2020  3
4/1/2020  4
5/1/2020  5
6/1/2020  6
I have tried the following, but it doesn't do what I want:
df[['id','date']].groupby('date').cumcount()
Convert the column to datetimes for correct ordering, then aggregate with GroupBy.size and add a cumulative sum with Series.cumsum:
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df = df.groupby('Date').size().cumsum().reset_index(name='count')
print (df)
Date count
0 2020-01-01 1
1 2020-01-02 3
2 2020-01-04 4
3 2020-01-05 5
4 2020-01-06 6
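If you want the dates back in day-first text form (my note; strftime zero-pads, so you get 01/01/2020 rather than 1/1/2020):

df['Date'] = df['Date'].dt.strftime('%d/%m/%Y')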
I have a df in format:
start end
0 2020-01-01 2020-01-01
1 2020-01-01 2020-01-01
2 2020-01-02 2020-01-02
...
57 2020-04-01 2020-04-01
58 2020-04-02 2020-04-02
And I want to count the number of entries in each month and place the counts in a new df, i.e. the number of 'start' entries for Jan, Feb, etc., to give me:
Month Entries
2020-01 3
...
2020-04 2
I am currently trying something like this, but it's not what I need:
df.index = pd.to_datetime(df['start'],format='%Y-%m-%d')
df.groupby(pd.Grouper(freq='M'))
df['start'].value_counts()
Use GroupBy.count with Series.dt:
In [1282]: df
Out[1282]:
start end
0 2020-01-01 2020-01-01
1 2020-01-01 2020-01-01
2 2020-01-02 2020-01-02
57 2020-04-01 2020-04-01
58 2020-04-02 2020-04-02
# Do this only when your `start` and `end` columns are object dtype. If they are already datetime, skip the 2 statements below
In [1284]: df.start = pd.to_datetime(df.start)
In [1285]: df.end = pd.to_datetime(df.end)
In [1296]: df1 = df.groupby([df.start.dt.year, df.start.dt.month]).count().rename_axis(['year', 'month'])['start'].reset_index(name='Entries')
In [1297]: df1
Out[1297]:
year month Entries
0 2020 1 3
1 2020 4 2
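If you want the year and month combined into a single 'YYYY-MM' column, as in the question, a variant (my sketch) groups by dt.to_period('M'):

out = (df.groupby(df['start'].dt.to_period('M')).size()
         .rename_axis('Month').reset_index(name='Entries'))
# the Month column holds Period values such as 2020-01; use .astype(str) for plain strings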
df['check'] = ((df['id'] == 123) & (df['date1'] >= date1)) | ((df['id'] == 456) & (df['date2'] >= date2))
present = df.groupby(['id', 'month', 'check'])['userid'].nunique().reset_index(name="usercount")
This is my code; my expected output should have the number of unique users per month in the usercount column,
grouped by id. I used id, month, and check in the groupby.
The check column is of type bool, based on the first line of my code, but in the output of the present dataframe, users are counted whether their check value is True or False.
It should count only the users whose check value is True.
Please help me out with this.
You need to filter by the check column with boolean indexing, not pass it to the by parameter of groupby:
#first normalize datetimes to the start of their month
df['month'] = df['month'].dt.to_period('M').dt.to_timestamp()
print (df)
check month id userid
0 True 2019-06-01 123 a
1 False 2019-02-01 123 b
2 False 2019-01-01 123 c
3 False 2019-02-01 123 d
4 True 2019-06-01 123 e
5 True 2020-07-01 123 f
6 True 2020-07-01 123 g
7 True 2020-06-01 123 h
print (df[df['check']])
check month id userid
0 True 2019-06-01 123 a
4 True 2019-06-01 123 e
5 True 2020-07-01 123 f
6 True 2020-07-01 123 g
7 True 2020-06-01 123 h
present = (df[df['check']].groupby(['id', 'month'])['userid']
.nunique()
.reset_index(name="usercount"))
print (present)
id month usercount
0 123 2019-06-01 2
1 123 2020-06-01 1
2 123 2020-07-01 2
Command:
dataframe.date.head()
Result:
0 12-Jun-98
1 7-Aug-2005
2 28-Aug-66
3 11-Sep-1954
4 9-Oct-66
5 NaN
Command:
pd.to_datetime(dataframe.date.head())
Result:
0 1998-06-12 00:00:00
1 2005-08-07 00:00:00
2 2066-08-28 00:00:00
3 1954-09-11 00:00:00
4 2066-10-09 00:00:00
5 NaN
I don't want to get 2066; it should be 1966. What should I do?
The year range is supposed to be from 1920 to 2017. The dataframe contains null values.
You can subtract 100 years when dt.year is greater than 2017:
df['date'] = pd.to_datetime(df['date'])
df['date'] = df['date'].mask(df['date'].dt.year > 2017,
df['date'] - pd.Timedelta(100, unit='Y'))
print (df)
date
0 1998-06-12 00:00:00
1 2005-08-07 00:00:00
2 1966-08-28 18:00:00
3 1954-09-11 00:00:00
4 1966-10-09 18:00:00
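Note (my addition): pd.Timedelta(100, unit='Y') approximates a year as 365.2425 days, which is why the shifted rows above pick up an 18:00:00 time component. Subtracting a calendar offset instead keeps the dates exact:

df['date'] = df['date'].mask(df['date'].dt.year > 2017,
                             df['date'] - pd.DateOffset(years=100))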