Python: Pandas merge three dataframes on date, keeping all dates [duplicate]

This question already has answers here:
Merge multiple DataFrames Pandas
(5 answers)
Pandas Merging 101
(8 answers)
Closed 7 months ago.
I have three dataframes
Dataframe df1:
date A
0 2022-04-11 1
1 2022-04-12 2
2 2022-04-14 26
3 2022-04-16 2
4 2022-04-17 1
5 2022-04-20 17
6 2022-04-21 14
7 2022-04-22 1
8 2022-04-23 9
9 2022-04-24 1
10 2022-04-25 5
11 2022-04-26 2
12 2022-04-27 21
13 2022-04-28 9
14 2022-04-29 17
15 2022-04-30 5
16 2022-05-01 8
17 2022-05-07 1241217
18 2022-05-08 211
19 2022-05-09 1002521
20 2022-05-10 488739
21 2022-05-11 12925
22 2022-05-12 57
23 2022-05-13 8515098
24 2022-05-14 1134576
Dataframe df2:
date B
0 2022-04-12 8
1 2022-04-14 7
2 2022-04-16 2
3 2022-04-19 2
4 2022-04-23 2
5 2022-05-07 2
6 2022-05-08 5
7 2022-05-09 2
8 2022-05-14 1
Dataframe df3:
date C
0 2022-04-12 6
1 2022-04-13 1
2 2022-04-14 2
3 2022-04-20 3
4 2022-04-21 9
5 2022-04-22 25
6 2022-04-23 56
7 2022-04-24 49
8 2022-04-25 68
9 2022-04-26 71
10 2022-04-27 40
11 2022-04-28 44
12 2022-04-29 27
13 2022-04-30 34
14 2022-05-01 28
15 2022-05-07 9
16 2022-05-08 20
17 2022-05-09 24
18 2022-05-10 21
19 2022-05-11 8
20 2022-05-12 8
21 2022-05-13 14
22 2022-05-14 25
23 2022-05-15 43
24 2022-05-16 36
25 2022-05-17 29
26 2022-05-18 28
27 2022-05-19 17
28 2022-05-20 6
I would like to merge df1, df2, df3 into a single dataframe with columns date, A, B, C, such that date contains all dates that appear in df1, df2, or df3 (without repetition), and if a particular date is missing from one of the dataframes, the respective column gets the value 0.0. So I would like to have something like this:
date A B C
0 2022-04-11 1.0 0.0 0.0
1 2022-04-12 2.0 8.0 6.0
2 2022-04-13 0.0 0.0 1.0
...
I tried this method:
merge1 = pd.merge(df1, df2, how='outer')
sorted_merge1 = merge1.sort_values(by=['date'], ascending=False)
full_merge = pd.merge(sorted_merge1, df3, how='outer')
However, it seems to skip the dates which are not common to all three dataframes.

Try this:
print(pd.merge(df1, df2, on='date', how='outer').merge(df3, on='date', how='outer').fillna(0))
Output:
date A B C
0 2022-04-11 1.0 0.0 0.0
1 2022-04-12 2.0 8.0 6.0
2 2022-04-14 26.0 7.0 2.0
3 2022-04-16 2.0 2.0 0.0
4 2022-04-17 1.0 0.0 0.0
5 2022-04-20 17.0 0.0 3.0
6 2022-04-21 14.0 0.0 9.0
7 2022-04-22 1.0 0.0 25.0
8 2022-04-23 9.0 2.0 56.0
9 2022-04-24 1.0 0.0 49.0
10 2022-04-25 5.0 0.0 68.0
11 2022-04-26 2.0 0.0 71.0
12 2022-04-27 21.0 0.0 40.0
13 2022-04-28 9.0 0.0 44.0
14 2022-04-29 17.0 0.0 27.0
15 2022-04-30 5.0 0.0 34.0
16 2022-05-01 8.0 0.0 28.0
17 2022-05-07 1241217.0 2.0 9.0
18 2022-05-08 211.0 5.0 20.0
19 2022-05-09 1002521.0 2.0 24.0
20 2022-05-10 488739.0 0.0 21.0
21 2022-05-11 12925.0 0.0 8.0
22 2022-05-12 57.0 0.0 8.0
23 2022-05-13 8515098.0 0.0 14.0
24 2022-05-14 1134576.0 1.0 25.0
25 2022-04-19 0.0 2.0 0.0
26 2022-04-13 0.0 0.0 1.0
27 2022-05-15 0.0 0.0 43.0
28 2022-05-16 0.0 0.0 36.0
29 2022-05-17 0.0 0.0 29.0
30 2022-05-18 0.0 0.0 28.0
31 2022-05-19 0.0 0.0 17.0
32 2022-05-20 0.0 0.0 6.0
Perform a chained outer merge, then fill the resulting NaN values with 0.
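With more frames, the same chain can be written once with functools.reduce. A minimal sketch, assuming every frame shares the date column; since an outer merge appends unmatched rows at the end (as in the output above), sorting afterwards restores chronological order:
from functools import reduce

import pandas as pd

# Assumed: df1, df2, df3 are the frames from the question, each with a 'date' column
frames = [df1, df2, df3]

merged = reduce(lambda left, right: pd.merge(left, right, on='date', how='outer'), frames)

# Outer merges append unmatched dates at the end, so sort and fill NaN last
merged = merged.sort_values('date').reset_index(drop=True).fillna(0)
print(merged)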

Related

Better grouping of label frequency by month from a dataframe

I have a dataframe with a date+time column and a label column, which I want to reshape into per-month columns holding the label frequencies for that month:
date_time label
1 2017-09-26 17:08:00 0
3 2017-10-03 13:27:00 2
4 2017-10-04 19:04:00 0
11 2017-10-11 18:28:00 1
27 2017-10-13 11:22:00 0
28 2017-10-13 21:43:00 0
39 2017-10-16 14:43:00 0
40 2017-10-16 21:39:00 0
65 2017-10-21 21:53:00 2
...
98 2017-11-01 20:08:00 3
99 2017-11-02 12:00:00 3
100 2017-11-02 12:01:00 2
109 2017-11-02 12:03:00 3
110 2017-11-03 22:24:00 0
111 2017-11-04 09:05:00 3
112 2017-11-06 12:36:00 3
113 2017-11-06 12:48:00 2
128 2017-11-07 15:20:00 2
143 2017-11-10 16:36:00 3
144 2017-11-10 20:00:00 0
145 2017-11-10 20:02:00 0
I group the label frequency by month with this line (thanks partially to this post):
df2 = df.groupby([pd.Grouper(key='date_time', freq='M'), 'label'])['label'].count()
which outputs
date_time label
2017-09-30 0 1
2017-10-31 0 6
1 1
2 8
3 2
2017-11-30 0 25
4 2
5 1
2 4
3 11
2017-12-31 0 14
5 3
2 5
3 7
2018-01-31 0 8
4 1
5 1
2 2
3 3
but, as mentioned before, I would like to get the data by month/date columns:
2017-09-30 2017-10-31 2017-11-30 2017-12-31 2018-01-31
0 1 6 25 14 8
1 0 1 0 0 0
2 0 8 4 5 2
3 0 2 11 7 3
4 0 0 2 0 1
5 0 0 1 3 1
Currently I can sort of split the data with
pd.concat([df2[m] for m in df2.index.levels[0]], axis=1).fillna(0)
but I lose the column names:
label label label label label
0 1.0 6.0 25.0 14.0 8.0
1 0.0 1.0 0.0 0.0 0.0
2 0.0 8.0 4.0 5.0 2.0
3 0.0 2.0 11.0 7.0 3.0
4 0.0 0.0 2.0 0.0 1.0
5 0.0 0.0 1.0 3.0 1.0
So I have to do a longer version where I generate a series, rename it, concatenate and then fill in the blanks:
m_list = []
for m in df2.index.levels[0]:
    m_labels = df2[m]
    m_labels = m_labels.rename(m)
    m_list.append(m_labels)
pd.concat(m_list, axis=1).fillna(0)
resulting in
2017-09-30 2017-10-31 2017-11-30 2017-12-31 2018-01-31
0 1.0 6.0 25.0 14.0 8.0
1 0.0 1.0 0.0 0.0 0.0
2 0.0 8.0 4.0 5.0 2.0
3 0.0 2.0 11.0 7.0 3.0
4 0.0 0.0 2.0 0.0 1.0
5 0.0 0.0 1.0 3.0 1.0
Is there a shorter/more elegant way to get to this last dataframe from my original one?
You just need unstack here:
df.groupby([pd.Grouper(key='date_time', freq='M'), 'label'])['label'].count().unstack(0, fill_value=0)
Out[235]:
date_time 2017-09-30 2017-10-31 2017-11-30
label
0 1 5 3
1 0 1 0
2 0 2 3
3 0 0 6
Based on your groupby output:
s.unstack(0, fill_value=0)
Out[240]:
date_time 2017-09-30 2017-10-31 2017-11-30 2017-12-31 2018-01-31
label
0 1 6 25 14 8
1 0 1 0 0 0
2 0 8 4 5 2
3 0 2 11 7 3
4 0 0 2 0 1
5 0 0 1 3 1
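If you prefer not to spell out the groupby, a similar table can be had with pd.crosstab; a minimal sketch, assuming date_time is already a datetime column (note the column labels come out as monthly periods like 2017-09 rather than month-end dates):
# Rows are labels, columns are months; missing combinations become 0 automatically
out = pd.crosstab(df['label'], df['date_time'].dt.to_period('M'))
print(out)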

Pandas DataFrame: create a 14-day moving average, but show simple averages for the first 14 days of data?

I have a pandas dataframe similar to this.
score avg
date
1/1/2017 0 0
1/2/2017 1 0.5
1/3/2017 2 1
1/4/2017 3 1.5
1/5/2017 4 2
1/6/2017 5 2.5
1/7/2017 6 3
1/8/2017 7 3.5
1/9/2017 8 4
1/10/2017 9 4.5
1/11/2017 10 5
1/12/2017 11 5.5
1/13/2017 12 7.5
1/14/2017 13 6.5
1/15/2017 14 7.5
1/16/2017 15 8.5
1/17/2017 16 9.5
1/18/2017 17 10.5
1/19/2017 18 11.5
1/20/2017 19 12.5
1/21/2017 20 13.5
1/22/2017 21 14.5
1/23/2017 22 15.5
1/24/2017 23 16.5
1/25/2017 24 17.5
1/26/2017 25 18.5
1/27/2017 26 19.5
1/28/2017 27 20.5
1/29/2017 28 21.5
Basically I am looking to create a 14 day rolling average of the data, but instead of showing NaNs for the first 14 days, simply showing the simple averages. For example, the average on day 2 is the average of day 1 and 2, the average on day 10 is the averages of days 1-10, etc. How would I go about doing this without having to manually create averages? Thanks for the help!
What you need to use is rolling with min_periods=1 as parameter:
df['avg2'] = df.rolling(14, min_periods=1)['score'].mean()
Output:
date score avg avg2
0 2017-01-01 0 0.0 0.0
1 2017-01-02 1 0.5 0.5
2 2017-01-03 2 1.0 1.0
3 2017-01-04 3 1.5 1.5
4 2017-01-05 4 2.0 2.0
5 2017-01-06 5 2.5 2.5
6 2017-01-07 6 3.0 3.0
7 2017-01-08 7 3.5 3.5
8 2017-01-09 8 4.0 4.0
9 2017-01-10 9 4.5 4.5
10 2017-01-11 10 5.0 5.0
11 2017-01-12 11 5.5 5.5
12 2017-01-13 12 7.5 6.0
13 2017-01-14 13 6.5 6.5
14 2017-01-15 14 7.5 7.5
15 2017-01-16 15 8.5 8.5
16 2017-01-17 16 9.5 9.5
17 2017-01-18 17 10.5 10.5
18 2017-01-19 18 11.5 11.5
19 2017-01-20 19 12.5 12.5
20 2017-01-21 20 13.5 13.5
21 2017-01-22 21 14.5 14.5
22 2017-01-23 22 15.5 15.5
23 2017-01-24 23 16.5 16.5
24 2017-01-25 24 17.5 17.5
25 2017-01-26 25 18.5 18.5
26 2017-01-27 26 19.5 19.5
27 2017-01-28 27 20.5 20.5
28 2017-01-29 28 21.5 21.5
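To see exactly what min_periods=1 changes, here is a tiny self-contained sketch: the window keeps its maximum width (3 in this toy example instead of 14), but a value is emitted as soon as at least one observation is available instead of NaN:
import pandas as pd

s = pd.Series(range(5))
# Default behaviour: the first two values are NaN because the window is incomplete
print(s.rolling(3).mean())                 # NaN, NaN, 1.0, 2.0, 3.0
# min_periods=1: incomplete windows are averaged over what is available
print(s.rolling(3, min_periods=1).mean())  # 0.0, 0.5, 1.0, 2.0, 3.0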

Pandas groupBy with conditional grouping

I have two data frames and need to group the first one based on some criteria from the second df.
df1=
summary participant_id response_date
0 2.0 11 2016-04-30
1 3.0 11 2016-05-01
2 3.0 11 2016-05-02
3 3.0 11 2016-05-03
4 3.0 11 2016-05-04
5 3.0 11 2016-05-05
6 3.0 11 2016-05-06
7 4.0 11 2016-05-07
8 4.0 11 2016-05-08
9 3.0 11 2016-05-09
10 3.0 11 2016-05-10
11 3.0 11 2016-05-11
12 3.0 11 2016-05-12
13 3.0 11 2016-05-13
14 3.0 11 2016-05-14
15 3.0 11 2016-05-15
16 3.0 11 2016-05-16
17 4.0 11 2016-05-17
18 3.0 11 2016-05-18
19 3.0 11 2016-05-19
20 3.0 11 2016-05-20
21 4.0 11 2016-05-21
22 4.0 11 2016-05-22
23 4.0 11 2016-05-23
24 3.0 11 2016-05-24
25 3.0 11 2016-05-25
26 3.0 11 2016-05-26
27 3.0 11 2016-05-27
28 3.0 11 2016-05-28
29 3.0 11 2016-05-29
.. ... ... ...
df2 =
summary participant_id response_date
0 12.0 11 2016-04-30
1 12.0 11 2016-05-14
2 14.0 11 2016-05-28
. ... ... ...
I need to group (get blocks of) df1 between the dates in the response_date column of df2. Namely:
df1=
summary participant_id response_date
2.0 11 2016-04-30
3.0 11 2016-05-01
3.0 11 2016-05-02
3.0 11 2016-05-03
3.0 11 2016-05-04
3.0 11 2016-05-05
3.0 11 2016-05-06
4.0 11 2016-05-07
4.0 11 2016-05-08
3.0 11 2016-05-09
3.0 11 2016-05-10
3.0 11 2016-05-11
3.0 11 2016-05-12
3.0 11 2016-05-13
3.0 11 2016-05-14
3.0 11 2016-05-15
3.0 11 2016-05-16
4.0 11 2016-05-17
3.0 11 2016-05-18
3.0 11 2016-05-19
3.0 11 2016-05-20
4.0 11 2016-05-21
4.0 11 2016-05-22
4.0 11 2016-05-23
3.0 11 2016-05-24
3.0 11 2016-05-25
3.0 11 2016-05-26
3.0 11 2016-05-27
3.0 11 2016-05-28
3.0 11 2016-05-29
.. ... ... ...
Is there an elegant solution with groupby?
There might be a more elegant solution, but you can loop through the response_date values in df2, create a boolean series by comparing against the response_date values in df1, and simply sum them all up.
df1['group'] = 0
for rd in df2.response_date.values:
    df1['group'] += df1.response_date > rd
Output:
summary participant_id response_date group
0 2.0 11 2016-04-30 0
1 3.0 11 2016-05-01 1
2 3.0 11 2016-05-02 1
3 3.0 11 2016-05-03 1
4 3.0 11 2016-05-04 1
Building off of @Scott's answer:
You can use pd.cut, but you will need to add a date before the earliest date and after the latest date in response_date from df2:
dates = ([pd.Timestamp('2000-1-1')] +
         df2.response_date.sort_values().tolist() +
         [pd.Timestamp('2020-1-1')])
df1['group'] = pd.cut(df1['response_date'], dates)
You want the .cut method. This lets you bin your dates by some other list of dates.
df1['cuts'] = pd.cut(df1['response_date'], df2['response_date'])
grouped = df1.groupby('cuts')
print(grouped.max())  # for example
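Whichever way you build the bins, the aggregation step afterwards is the same. A minimal sketch on pandas 0.25+, assuming the group column from the loop answer above (the output column names here are illustrative, not from the original answers):
# Aggregate each block of df1 that falls between consecutive df2 dates
result = df1.groupby('group', as_index=False).agg(
    mean_summary=('summary', 'mean'),   # average summary per block
    n_rows=('summary', 'size'),         # number of rows per block
)
print(result)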

How to find the position of the last occurrence of a certain value in a pandas dataframe?

In a dataframe where one column is datetime and another one is only ones or zeros, how can I find the times of each of the last occurences of 1?
For example:
import numpy as np
import pandas as pd

times = pd.date_range(start="1/1/2015", end="2/1/2015", freq='D')
YN = np.zeros(len(times))
YN[0:8] = np.ones(len(YN[0:8]))
YN[12:20] = np.ones(len(YN[12:20]))
YN[25:29] = np.ones(len(YN[25:29]))
df = pd.DataFrame({"Time": times, "Yes No": YN})
print(df)
Which looks like
Time Yes No
0 2015-01-01 1.0
1 2015-01-02 1.0
2 2015-01-03 1.0
3 2015-01-04 1.0
4 2015-01-05 1.0
5 2015-01-06 1.0
6 2015-01-07 1.0
7 2015-01-08 1.0
8 2015-01-09 0.0
9 2015-01-10 0.0
10 2015-01-11 0.0
11 2015-01-12 0.0
12 2015-01-13 1.0
13 2015-01-14 1.0
14 2015-01-15 1.0
15 2015-01-16 1.0
16 2015-01-17 1.0
17 2015-01-18 1.0
18 2015-01-19 1.0
19 2015-01-20 1.0
20 2015-01-21 0.0
21 2015-01-22 0.0
22 2015-01-23 0.0
23 2015-01-24 0.0
24 2015-01-25 0.0
25 2015-01-26 1.0
26 2015-01-27 1.0
27 2015-01-28 1.0
28 2015-01-29 1.0
29 2015-01-30 0.0
30 2015-01-31 0.0
31 2015-02-01 0.0
How could I extract the dates of the last occurrence of 1 before each run of zeros, in this case 2015-01-08, 2015-01-20 and 2015-01-29?
This question addresses a similar problem, but I don't want all of the ones, just the last one before the value changes to zero (and not only the first time that happens).
You can use Series.shift(-1) in conjunction with the Series.diff() method:
In [42]: df.loc[df['Yes No'].shift(-1).diff().eq(-1)]
Out[42]:
Time Yes No
7 2015-01-08 1.0
19 2015-01-20 1.0
28 2015-01-29 1.0
In [43]: df.loc[df['Yes No'].shift(-1).diff().eq(-1), 'Time']
Out[43]:
7 2015-01-08
19 2015-01-20
28 2015-01-29
Name: Time, dtype: datetime64[ns]
Explanation:
In [44]: df['Yes No'].shift(-1).diff()
Out[44]:
0 NaN
1 0.0
2 0.0
3 0.0
4 0.0
5 0.0
6 0.0
7 -1.0
8 0.0
9 0.0
10 0.0
11 1.0
12 0.0
13 0.0
14 0.0
15 0.0
16 0.0
17 0.0
18 0.0
19 -1.0
20 0.0
21 0.0
22 0.0
23 0.0
24 1.0
25 0.0
26 0.0
27 0.0
28 -1.0
29 0.0
30 0.0
31 NaN
Name: Yes No, dtype: float64
You can use diff with eq to build a boolean mask and filter by boolean indexing:
print(df[df['Yes No'].diff(-1).eq(1)])
Time Yes No
7 2015-01-08 1.0
19 2015-01-20 1.0
28 2015-01-29 1.0
print(df.loc[df['Yes No'].diff(-1).eq(1), 'Time'])
7 2015-01-08
19 2015-01-20
28 2015-01-29
Name: Time, dtype: datetime64[ns]
A numpy approach:
v = df['Yes No'].values
df[(v - np.append(v[1:], 0) == 1)]
Time Yes No
7 2015-01-08 1.0
19 2015-01-20 1.0
28 2015-01-29 1.0
v = df['Yes No'].values
df.Time[(v - np.append(v[1:], 0) == 1)]
7 2015-01-08
19 2015-01-20
28 2015-01-29
Name: Time, dtype: datetime64[ns]
Here's an approach using pandas groupby.
It could be useful if you plan to do many operations on this kind of data.
def find_consecutive(x, on=None):
    # Group consecutive runs of identical values in column `on`
    if on is None:
        on = x.columns
    return x.groupby([(x[on] != x[on].shift()).cumsum(), x[on]])

grouped = df.pipe(lambda x: find_consecutive(x, on='Yes No'))

# For each run, extract the last row (explicitly: apply(lambda x: x['Time'].iloc[-1]))
last_dates = grouped.last().reset_index(level=1, drop=False)

# A bit of formatting to extract only the dates for runs of ones
# (there is probably a cleaner way to do this)
yes_last_dates = (last_dates[last_dates['Yes No'] == 1]['Time']
                  .reset_index(drop=True))
This gives the expected result:
0 2015-01-08
1 2015-01-20
2 2015-01-29
You can inspect grouped by doing the following:
for key, group in grouped:
    print(key, group)
(1, 1.0) Time Yes No
0 2015-01-01 1.0
1 2015-01-02 1.0
2 2015-01-03 1.0
3 2015-01-04 1.0
4 2015-01-05 1.0
5 2015-01-06 1.0
6 2015-01-07 1.0
7 2015-01-08 1.0
(2, 0.0) Time Yes No
8 2015-01-09 0.0
9 2015-01-10 0.0
10 2015-01-11 0.0
11 2015-01-12 0.0
(3, 1.0) Time Yes No
12 2015-01-13 1.0
13 2015-01-14 1.0
14 2015-01-15 1.0
15 2015-01-16 1.0
16 2015-01-17 1.0
17 2015-01-18 1.0
18 2015-01-19 1.0
19 2015-01-20 1.0
....
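On newer pandas versions (0.24+, where shift gained a fill_value parameter), the same rows can also be selected with a plain boolean mask, which some may find more readable than diff; a sketch against the df from the question:
# A row is the last 1 of a run exactly when it is 1 and the next value is 0
mask = df['Yes No'].eq(1) & df['Yes No'].shift(-1, fill_value=0).eq(0)
print(df.loc[mask, 'Time'])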

Pandas: create dataframe using value_counts

I have data
age
32
16
39
39
23
36
29
26
43
34
35
50
29
29
31
42
53
I need to get something like this
I can get
df.age.value_counts()
and
100. * df.age.value_counts() / len(df.age)
But how can I combine these and give names to the columns?
You can use cut with agg:
# helper df with min and max ages; we also need to add a Total category
df1 = pd.DataFrame({'G': ['14 yo and younger', '15-19', '20-24', '25-29', '30-34',
                          '35-39', '40-44', '45-49', '50-54', '55-59', '60-64', '65+', 'Total'],
                    'Min': [0, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, np.nan],
                    'Max': [14, 19, 24, 29, 34, 39, 44, 49, 54, 59, 64, 120, np.nan]})
print (df1)
G Max Min
0 14 yo and younger 14.0 0.0
1 15-19 19.0 15.0
2 20-24 24.0 20.0
3 25-29 29.0 25.0
4 30-34 34.0 30.0
5 35-39 39.0 35.0
6 40-44 44.0 40.0
7 45-49 49.0 45.0
8 50-54 54.0 50.0
9 55-59 59.0 55.0
10 60-64 64.0 60.0
11 65+ 120.0 65.0
12 Total NaN NaN
cutoff = np.hstack([np.array(df1.Min[0]), df1.Max.values])
labels = df1.G.values
df['Groups'] = pd.cut(df.age, bins=cutoff, labels=labels, right=True, include_lowest=True)
print (df)
age Groups
0 32 30-34
1 16 15-19
2 39 35-39
3 39 35-39
4 23 20-24
5 36 35-39
6 29 25-29
7 26 25-29
8 43 40-44
9 34 30-34
10 35 35-39
11 50 50-54
12 29 25-29
13 29 25-29
14 31 30-34
15 42 40-44
16 53 50-54
df = (df.groupby('Groups')['Groups']
        .agg({'Total': [len, lambda x: len(x) / df.shape[0] * 100]})
        .rename(columns={'len': 'N', '<lambda>': '%'}))
# last Total row
df.loc['Total'] = df.sum()
print (df)
Total
N %
Groups
14 yo and younger 0.0 0.000000
15-19 1.0 5.882353
20-24 1.0 5.882353
25-29 4.0 23.529412
30-34 3.0 17.647059
35-39 4.0 23.529412
40-44 2.0 11.764706
45-49 0.0 0.000000
50-54 2.0 11.764706
55-59 0.0 0.000000
60-64 0.0 0.000000
65+ 0.0 0.000000
Total 17.0 100.000000
EDIT1:
Solution with size scales better:
df1 = df.groupby('Groups').size().to_frame()
df1.columns = pd.MultiIndex.from_tuples([('Total', 'N')])
df1.loc[:, ('Total', '%')] = 100 * df1.loc[:, ('Total', 'N')] / df.shape[0]
df1.loc['Total'] = df1.sum()
print (df1)
Total
N %
Groups
14 yo and younger 0.0 0.000000
15-19 1.0 5.882353
20-24 1.0 5.882353
25-29 4.0 23.529412
30-34 3.0 17.647059
35-39 4.0 23.529412
40-44 2.0 11.764706
45-49 0.0 0.000000
50-54 2.0 11.764706
55-59 0.0 0.000000
60-64 0.0 0.000000
65+ 0.0 0.000000
Total 17.0 100.000000
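If you only need the two columns from your original attempt (counts and percentages) rather than the predefined age bands, the table can also be built directly from value_counts; a minimal sketch, where N and % are just illustrative column names:
counts = df['age'].value_counts().sort_index()
pct = df['age'].value_counts(normalize=True).sort_index() * 100
out = pd.concat([counts, pct], axis=1, keys=['N', '%'])
out.loc['Total'] = out.sum()   # add the Total row
print(out)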
