Convert the data from summary to daily time series data (pandas) - python

I have a dataset which is a time series. It has several regions at once, here is a small example:
date confirmed deaths recovered region_code
0 2020-03-27 3.0 0.0 0.0 ARK
1 2020-03-27 4.0 0.0 0.0 BA
2 2020-03-27 1.0 0.0 0.0 BEL
..........................................................
71540 2022-07-19 164194.0 2830.0 160758.0 YAR
71541 2022-07-19 19170.0 555.0 18484.0 YEV
71542 2022-07-19 169603.0 2349.0 167075.0 ZAB
I have three columns for which I want to display information about how many new cases have been added in separate three columns:
date confirmed deaths recovered region_code daily_confirmed daily_deaths daily_recovered
0 2020-03-27 3.0 0.0 0.0 ARK 3.0 0.0 0.0
1 2020-03-27 4.0 0.0 0.0 BA 4.0 0.0 0.0
2 2020-03-27 1.0 0.0 0.0 BEL 1.0 0.0 0.0
..........................................................
71540 2022-07-19 164194.0 2830.0 160758.0 YAR 32.0 16.0 8.0
71541 2022-07-19 19170.0 555.0 18484.0 YEV 6.0 1.0 1.0
71542 2022-07-19 169603.0 2349.0 167075.0 ZAB 1.0 8.0 9.0
That is, for each region, I need the difference between the current date and the previous day in order to understand how many new cases occurred.
The problem is that I don't know how to do this correctly. Since there are no missing dates in the data, I could use something like df['daily_cases'] = df['confirmed'] - df['confirmed'].shift(fill_value=0). But there are many different regions here, so first everything needs to be filtered or grouped somehow... Any ideas how to do this?

Use DataFrameGroupBy.diff, replace the first missing values in each group with the original columns, add a prefix to the new columns, and cast to integers if necessary:
print (df)
date confirmed deaths recovered region_code
0 2020-03-27 3.0 0.0 0.0 ARK
1 2020-03-27 4.0 0.0 0.0 BA
2 2020-03-27 1.0 0.0 0.0 BEL
3 2020-03-28 4.0 0.0 4.0 ARK
4 2020-03-28 6.0 0.0 0.0 BA
5 2020-03-28 1.0 0.0 0.0 BEL
6 2020-03-29 6.0 0.0 10.0 ARK
7 2020-03-29 8.0 0.0 0.0 BA
8 2020-03-29 5.0 0.0 0.0 BEL
cols = ['confirmed', 'deaths', 'recovered']
df1 = (df.groupby(['region_code'])[cols]
         .diff()
         .fillna(df[cols])
         .add_prefix('daily_')
         .astype(int))
print(df1)
daily_confirmed daily_deaths daily_recovered
0 3 0 0
1 4 0 0
2 1 0 0
3 1 0 4
4 2 0 0
5 0 0 0
6 2 0 6
7 2 0 0
8 4 0 0
Finally, join back to the original:
df = df.join(df1)
print (df)
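A minimal alternative sketch, assuming df is sorted by date within each region (as in the sample): the shift idea from the question also works per group, because DataFrameGroupBy.shift accepts fill_value.
cols = ['confirmed', 'deaths', 'recovered']
# shift within each region; fill_value=0 keeps the first cumulative count as that day's value
daily = df[cols] - df.groupby('region_code')[cols].shift(fill_value=0)
df = df.join(daily.add_prefix('daily_'))
This leaves float columns like the expected output; add .astype(int) if integers are needed.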

Related

Conditionally Set Values Greater Than 0 To 1

I have a dataframe that looks like this, with many more date columns
AUTHOR 2022-07-01 2022-10-14 2022-10-15 .....
0 Kathrine 0.0 7.0 0.0
1 Catherine 0.0 13.0 17.0
2 Amanda Jane 0.0 0.0 0.0
3 Jaqueline 0.0 3.0 0.0
4 Christine 0.0 0.0 0.0
I would like to set values in each column after the AUTHOR to 1 when the value is greater than 0, so the resulting table would look like this:
AUTHOR 2022-07-01 2022-10-14 2022-10-15 .....
0 Kathrine 0.0 1.0 0.0
1 Catherine 0.0 1.0 1.0
2 Amanda Jane 0.0 0.0 0.0
3 Jaqueline 0.0 1.0 0.0
4 Christine 0.0 0.0 0.0
I tried the following line of code but got an error, which makes sense, as I need to figure out how to apply this just to the date columns while keeping the AUTHOR column in my table.
Counts[Counts != 0] = 1
TypeError: Cannot do inplace boolean setting on mixed-types with a non np.nan value
You can select the date columns first, then mask on those columns:
cols = df.drop(columns='AUTHOR').columns
# or
cols = df.filter(regex=r'\d{4}-\d{2}-\d{2}').columns
# or
cols = df.select_dtypes(include='number').columns
df[cols] = df[cols].mask(df[cols] != 0, 1)
print(df)
AUTHOR 2022-07-01 2022-10-14 2022-10-15
0 Kathrine 0.0 1.0 0.0
1 Catherine 0.0 1.0 1.0
2 Amanda Jane 0.0 0.0 0.0
3 Jaqueline 0.0 1.0 0.0
4 Christine 0.0 0.0 0.0
Since you only want to exclude the first column, you could set it as the index and then create your booleans. At the end, reset the index.
df = df.set_index('AUTHOR').pipe(lambda g: g.mask(g > 0, 1)).reset_index()
print(df)
AUTHOR 2022-07-01 2022-10-14 2022-10-15
0 Kathrine 0.0 1.0 0.0
1 Catherine 0.0 1.0 1.0
2 Amanda Jane 0.0 0.0 0.0
3 Jaqueline 0.0 1.0 0.0
4 Christine 0.0 0.0 0.0
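As a further sketch on the same toy df, a plain boolean comparison cast back to float gives the same 0/1 pattern without mask:
# greater-than comparison yields booleans; casting to float gives 0.0/1.0
out = df.set_index('AUTHOR').gt(0).astype(float).reset_index()
print(out)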

Multiply based on a different data frame

I have two dataframes:
df1:
Name Segment Axis 1 2 3 4 5
Amazon 1 slope NaN 2.5 2.5 2.5 2.5
Amazon 1 x 0.0 1.0 2.0 3.0 4.0
Amazon 1 y 0.0 0.4 0.8 1.2 1.6
Amazon 2 slope NaN 2.0 2.0 2.0 2.0
Amazon 2 x 0.0 2.0 4.0 6.0 8.0
Amazon 2 y 0.0 1.0 2.0 3.0 4.0
df2:
Name Segment Cost
Amazon 1 100
Amazon 2 112
Netflix 1 110
Netflix 2 210
I want to multiply all the values that fall on the "slope" rows in columns 1-5 by the corresponding cost in the second dataframe.
Expected output:
Name Segment Axis 1 2 3 4 5
Amazon 1 slope NaN 250 250 250 250
Amazon 1 x 0.0 1.0 2.0 3.0 4.0
Amazon 1 y 0.0 0.4 0.8 1.2 1.6
Amazon 2 slope NaN 224 224 224 224
Amazon 2 x 0.0 2.0 4.0 6.0 8.0
Amazon 2 y 0.0 1.0 2.0 3.0 4.0
Try this:
# merge df2 to align it with df1
u = df1.merge(df2, on=['Name', 'Segment'], how='left')
# find the columns to multiply by the cost
cols = df1.columns.difference(['Name', 'Segment', 'Axis'])
# multiply and assign back, keeping non-slope rows unchanged
df1[cols] = u[cols].mul(u['Cost'], axis=0).where(df1['Axis'].eq('slope'), df1[cols])
print(df1)
Name Segment Axis 1 2 3 4 5
0 Amazon 1 slope NaN 250.0 250.0 250.0 250.0
1 Amazon 1 x 0.0 1.0 2.0 3.0 4.0
2 Amazon 1 y 0.0 0.4 0.8 1.2 1.6
3 Amazon 2 slope NaN 224.0 224.0 224.0 224.0
4 Amazon 2 x 0.0 2.0 4.0 6.0 8.0
5 Amazon 2 y 0.0 1.0 2.0 3.0 4.0
You can make use of the Index to have pandas do all the heavy lifting with alignment. Unfortunately DataFrame.mul(Series) doesn't yet support a fill_value so we need to .fillna.
df1 = df1.set_index(['Name', 'Segment', 'Axis'])
# Give df2 a 'slope' level so we know what to align
df2 = df2.assign(Axis='slope').set_index(['Name', 'Segment', 'Axis'])
# So we don't add rows from df2 not in df1
df2 = df2[df2.index.isin(df1.index)]
df1 = df1.mul(df2['Cost'], axis=0).fillna(df1)
print(df1)
1 2 3 4 5
Name Segment Axis
Amazon 1 slope NaN 250.0 250.0 250.0 250.0
x 0.0 1.0 2.0 3.0 4.0
y 0.0 0.4 0.8 1.2 1.6
2 slope NaN 224.0 224.0 224.0 224.0
x 0.0 2.0 4.0 6.0 8.0
y 0.0 1.0 2.0 3.0 4.0
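Another sketch under the same assumptions (df1 and df2 as above): build a per-row cost lookup from df2 and multiply only the slope rows in place, without a merge.
# map (Name, Segment) pairs to their cost; to_dict() on a MultiIndex Series gives tuple keys
cost = df2.set_index(['Name', 'Segment'])['Cost'].to_dict()
row_cost = df1[['Name', 'Segment']].apply(tuple, axis=1).map(cost)
is_slope = df1['Axis'].eq('slope')
num_cols = df1.columns.difference(['Name', 'Segment', 'Axis'])
# multiply the slope rows by their cost, row-wise
df1.loc[is_slope, num_cols] = df1.loc[is_slope, num_cols].mul(row_cost[is_slope], axis=0)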

Filling the missing rows in pandas dataframe

import numpy as np
import pandas as pd

data = {'node1': [1, 1, 1, 2, 2, 5],
        'node2': [8, 16, 22, 5, 25, 10],
        'weight': [1, 1, 1, 1, 1, 1]}
df = pd.DataFrame(data, columns=['node1', 'node2', 'weight'])
df2 = (df.assign(Cu=df.groupby('node1').cumcount())
         .set_index('Cu')
         .groupby('node1')
         .apply(lambda x: x['node2'])
         .unstack('Cu')
         .fillna(0))
Output:
1 8.0 16.0 22.0
2 5.0 25.0 0.0
5 10.0 0.0 0.0
This is the output I am getting, but I require this output:
1 8 16 22
2 5 25 0
3 0 0 0
4 0 0 0
5 10 0 0
The rows that are missing from the data, like 3 and 4, should have all columns as zeros.
Here are a few ways of doing it.
Option 1
In [36]: idx = np.arange(df.node1.min(), df.node1.max()+1)
In [37]: df.groupby('node1')['node2'].apply(list).apply(pd.Series).reindex(idx).fillna(0)
Out[37]:
0 1 2
node1
1 8.0 16.0 22.0
2 5.0 25.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
5 10.0 0.0 0.0
Option 2
In [39]: (df.groupby('node1')['node2'].apply(lambda x: pd.Series(x.values))
            .unstack().reindex(idx).fillna(0))
Out[39]:
0 1 2
node1
1 8.0 16.0 22.0
2 5.0 25.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
5 10.0 0.0 0.0
Option 3
In [55]: pd.DataFrame.from_dict(
{i: x.values for i, x in df.groupby('node1')['node2']},
orient='index').reindex(idx).fillna(0)
Out[55]:
0 1 2
1 8.0 16.0 22.0
2 5.0 25.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
5 10.0 0.0 0.0
Measure the efficiency and readability based on your use case.
In [15]: idx = np.arange(df.node1.min(), df.node1.max()+1)
In [16]: df.pivot_table(index='node1',
                        columns=df.groupby('node1').cumcount(),
                        values='node2',
                        fill_value=0) \
           .reindex(idx) \
           .fillna(0)
Out[16]:
0 1 2
node1
1 8.0 16.0 22.0
2 5.0 25.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
5 10.0 0.0 0.0
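A self-contained run of the pivot_table option, using the data and imports from the question above:
idx = np.arange(df.node1.min(), df.node1.max() + 1)
out = (df.pivot_table(index='node1',
                      columns=df.groupby('node1').cumcount(),
                      values='node2',
                      fill_value=0)
         .reindex(idx, fill_value=0))
print(out)
Passing fill_value=0 to reindex fills the missing rows 3 and 4 with zeros directly and keeps the integer dtype.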

Pandas resample time series into 24 hours

I have the data like this:
OwnerUserId Score
CreationDate
2015-01-01 00:16:46.963 1491895.0 0.0
2015-01-01 00:23:35.983 1491895.0 1.0
2015-01-01 00:30:55.683 1491895.0 1.0
2015-01-01 01:10:43.830 2141635.0 0.0
2015-01-01 01:11:08.927 1491895.0 1.0
2015-01-01 01:12:34.273 3297613.0 1.0
..........
This is a whole year of data with different users' scores, and I hope to get data like:
OwnerUserId 1491895.0 1491895.0 1491895.0 2141635.0 1491895.0
00:00 0.0 3.0 0.0 3.0 5.8
00:01 5.0 3.0 0.0 3.0 5.8
00:02 3.0 33.0 20.0 3.0 5.8
......
23:40 12.0 33.0 10.0 3.0 5.8
23:41 32.0 33.0 20.0 3.0 5.8
23:42 12.0 13.0 10.0 3.0 5.8
Each element of the dataframe is the score (mean or sum).
I have tried the following:
pd.pivot_table(data_series.reset_index(), index=['CreationDate'], columns=['OwnerUserId'],
               fill_value=0).resample('W').sum()['Score']
This gives a result like the one in the image, but it is not what I want.
I think you need:
# remove the `[]` and pass the `values` parameter to avoid a MultiIndex in the columns
df = pd.pivot_table(data_series.reset_index(),
                    index='CreationDate',
                    columns='OwnerUserId',
                    values='Score',
                    fill_value=0)
# truncate seconds and convert to a TimedeltaIndex
df.index = pd.to_timedelta(df.index.floor('T').strftime('%H:%M:%S'))
# or round to minutes
#df.index = pd.to_timedelta(df.index.round('T').strftime('%H:%M:%S'))
print(df)
print (df)
OwnerUserId 1491895.0 2141635.0 3297613.0
00:16:00 0 0 0
00:23:00 1 0 0
00:30:00 1 0 0
01:10:00 0 0 0
01:11:00 1 0 0
01:12:00 0 0 1
idx = pd.timedelta_range('00:00:00', '23:59:00', freq='T')
# resample by minute, aggregate with sum, and use reindex to add the missing rows
df = df.resample('T').sum().fillna(0).reindex(idx, fill_value=0)
print (df)
OwnerUserId 1491895.0 2141635.0 3297613.0
00:00:00 0.0 0.0 0.0
00:01:00 0.0 0.0 0.0
00:02:00 0.0 0.0 0.0
00:03:00 0.0 0.0 0.0
00:04:00 0.0 0.0 0.0
00:05:00 0.0 0.0 0.0
00:06:00 0.0 0.0 0.0
...
...
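A tiny end-to-end sketch of the same steps, with a few hypothetical sample rows so it runs on its own:
import pandas as pd

# hypothetical sample spanning two days
data_series = pd.DataFrame(
    {'OwnerUserId': [1491895.0, 1491895.0, 2141635.0],
     'Score': [0.0, 1.0, 1.0]},
    index=pd.to_datetime(['2015-01-01 00:16:46', '2015-01-02 00:16:10',
                          '2015-01-01 01:10:43']))
data_series.index.name = 'CreationDate'

df = pd.pivot_table(data_series.reset_index(),
                    index='CreationDate', columns='OwnerUserId',
                    values='Score', fill_value=0)
# strip the date part, keep only the time of day as a TimedeltaIndex
df.index = pd.to_timedelta(df.index.floor('T').strftime('%H:%M:%S'))

idx = pd.timedelta_range('00:00:00', '23:59:00', freq='T')
# sum scores that share the same minute of the day, then add the missing minutes
df = df.sort_index().resample('T').sum().reindex(idx, fill_value=0)
print(df.head())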

Mean of a grouped-by pandas dataframe

I need to calculate the mean per day of the columns duration and km for the
rows with value == 1 and value == 0.
df
Out[20]:
Date duration km value
0 2015-03-28 09:07:00.800001 0 0 0
1 2015-03-28 09:36:01.819998 1 2 1
2 2015-03-30 09:36:06.839997 1 3 1
3 2015-03-30 09:37:27.659997 nan 5 0
4 2015-04-22 09:51:40.440003 3 7 0
5 2015-04-23 10:15:25.080002 0 nan 1
How can I modify this solution in order to get the means duration_value0, duration_value1, km_value0 and km_value1?
df = df.set_index('Date').groupby(pd.Grouper(freq='d')).mean().dropna(how='all')
print (df)
duration km
Date
2015-03-28 0.5 1.0
2015-03-30 1.5 4.0
2015-04-22 3.0 7.0
2015-04-23 0.0 0.0
I believe grouping by Date as well as value should do it.
Call DataFrameGroupBy.mean followed by reset_index to get your desired output:
In [713]: df.set_index('Date')\
.groupby([pd.Grouper(freq='d'), 'value'])\
.mean().reset_index(1, drop=True)
Out[713]:
duration km
Date
2015-03-28 0.0 0.0
2015-03-28 1.0 2.0
2015-03-30 NaN 5.0
2015-03-30 1.0 3.0
2015-04-22 3.0 7.0
2015-04-23 0.0 NaN
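A variant of the above sketch: keep the value level and unstack it, so each (column, value) pair becomes its own column, which is closer to what the question asks for.
out = (df.set_index('Date')
         .groupby([pd.Grouper(freq='d'), 'value'])
         .mean()
         .unstack('value')
         .dropna(how='all'))
The resulting (column, value) MultiIndex can then be flattened with a list comprehension like the one shown at the end of this section.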
I think you are looking for a pivot table, i.e.
df.pivot_table(values=['duration', 'km'], columns=['value'],
               index=df['Date'].dt.date, aggfunc='mean')
Output:
duration km
value 0 1 0 1
Date
2015-03-28 0.0 1.0 0.0 2.0
2015-03-30 NaN 1.0 5.0 3.0
2015-04-22 3.0 NaN 7.0 NaN
2015-04-23 NaN 0.0 NaN NaN
If you want new column names like duration0, duration1, ..., you can use a list comprehension. If you store the pivot table in ndf:
ndf.columns = [i[0]+str(i[1]) for i in ndf.columns]
Output:
duration0 duration1 km0 km1
Date
2015-03-28 0.0 1.0 0.0 2.0
2015-03-30 NaN 1.0 5.0 3.0
2015-04-22 3.0 NaN 7.0 NaN
2015-04-23 NaN 0.0 NaN NaN
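For reference, a self-contained sketch of the pivot_table approach plus the column flattening, with the data reconstructed from the question above:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Date': pd.to_datetime(['2015-03-28 09:07', '2015-03-28 09:36',
                            '2015-03-30 09:36', '2015-03-30 09:37',
                            '2015-04-22 09:51', '2015-04-23 10:15']),
    'duration': [0, 1, 1, np.nan, 3, 0],
    'km': [0, 2, 3, 5, 7, np.nan],
    'value': [0, 1, 1, 0, 0, 1]})

ndf = df.pivot_table(values=['duration', 'km'], columns=['value'],
                     index=df['Date'].dt.date, aggfunc='mean')
# flatten the (column, value) MultiIndex into duration0, duration1, km0, km1
ndf.columns = [name + str(val) for name, val in ndf.columns]
print(ndf)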
