I am trying to figure out how to "aggregate" this transposed dataset. I am not sure if aggregate is the right word, because the math happens across rows. I have a dataframe that looks similar to this:
EDIT: There are multiple rows with the same value in "Date." The data is transposed to the person ID, and there are also Date1-5 columns. The date referenced in the table below is the one I ultimately hope to aggregate by for the created NRev1-NRev# values.
Date    Return1  Return2  Return3  Return4  Return5  Rev1  Rev2  Rev3  Rev4  Rev5
2020-1        0        1        2        3        4   100   500   100   200   300
2020-2        5        6        7        8      nan   200   120   100   200   nan
2020-3        2        3        7        9      nan   100     0   100   200   nan
and I am trying to create additional revenue columns based upon the values of the Return columns, while adding together the values from Rev1-Rev5.
The resulting columns would look as follows:
Date    NRev0  NRev1  NRev2  NRev3  NRev4  NRev5  NRev6  NRev7  NRev8  NRev9
2020-1    100    500    100    200    300      0      0      0      0      0
2020-2      0      0      0      0      0    200    120    100    200      0
2020-3      0      0    100      0      0      0      0    100      0    200
Essentially, what I'm looking to do is create new "NRev" variables, suffixed with the row values of the Return columns. So if Return1 = 4, for instance, NRev4 would equal the value of Rev1. The return values will change over time, but the number of Return columns will always match the number of Rev columns. So theoretically, if the maximum value across all Return columns were 100, an "NRev100" column would be created and filled with the revenue value at the corresponding index.
In SPSS I can create the columns using the code below, but it is not Pythonic, and the number of Return and Rev columns, as well as the return values themselves, will increase over time:
if return1=0 NRev0= NRev0+Rev1.
if return1=1 NRev1= NRev1+Rev1.
if return1=2 NRev2= NRev2+Rev1.
if return1=3 NRev3= NRev3+Rev1.
if return1=4 NRev4= NRev4+Rev1.
if return2=0 NRev0= NRev0+Rev2.
if return2=1 NRev1= NRev1+Rev2.
if return2=2 NRev2= NRev2+Rev2.
if return2=3 NRev3= NRev3+Rev2.
if return2=4 NRev4= NRev4+Rev2.
if return3=0 NRev0= NRev0+Rev3.
if return3=1 NRev1= NRev1+Rev3.
if return3=2 NRev2= NRev2+Rev3.
if return3=3 NRev3= NRev3+Rev3.
if return3=4 NRev4= NRev4+Rev3.
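For reference, a literal loop-based translation of that SPSS logic into pandas might look like the sketch below. It assumes the wide table shown above is already loaded in a DataFrame named df; it simply restates the rule that NRev{k} accumulates every Rev column whose paired Return column equals k, and it is not the recommended approach:
import pandas as pd

# loop-based sketch of the SPSS rule; df is assumed to hold the wide table above
n_pairs = 5                                            # number of Return/Rev pairs
max_ret = int(df.filter(like='Return').max().max())    # highest return value present
out = pd.DataFrame(0.0, index=df.index,
                   columns=[f'NRev{k}' for k in range(max_ret + 1)])
for n in range(1, n_pairs + 1):
    ret, rev = df[f'Return{n}'], df[f'Rev{n}']
    for k in range(max_ret + 1):
        # add Rev{n} into NRev{k} wherever Return{n} == k (missing revenues count as 0)
        out[f'NRev{k}'] += rev.where(ret == k, 0).fillna(0)
result = pd.concat([df['Date'], out], axis=1)
# with duplicate dates, the per-date totals would then be:
# result = result.groupby('Date', as_index=False).sum()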
We can do some reshaping with pd.wide_to_long, then pivot_table back to wide format. This aligns each Return value with its corresponding Rev value, so the Return values can become the new column labels. Some cleanup with add_prefix and rename_axis polishes the output:
new_df = (
    pd.wide_to_long(df, stubnames=['Return', 'Rev'], i='Date', j='K')
      .dropna()
      .astype({'Return': int})
      .pivot_table(index='Date', columns='Return', values='Rev', fill_value=0)
      .add_prefix('NRev')
      .rename_axis(columns=None)
      .reset_index()
)
new_df:
Date NRev0 NRev1 NRev2 NRev3 NRev4 NRev5 NRev6 NRev7 NRev8 NRev9
0 2020-1 100 500 100 200 300 0 0 0 0 0
1 2020-2 0 0 0 0 0 200 120 100 200 0
2 2020-3 0 0 100 0 0 0 0 100 0 200
wide_to_long gives:
Return Rev
Date K
2020-1 1 0.0 100.0 # Corresponding Return index and Rev are in the same row
2020-2 1 5.0 200.0
2020-3 1 2.0 100.0
2020-1 2 1.0 500.0
2020-2 2 6.0 120.0
2020-3 2 3.0 0.0
2020-1 3 2.0 100.0
2020-2 3 7.0 100.0
2020-3 3 7.0 100.0
2020-1 4 3.0 200.0
2020-2 4 8.0 200.0
2020-3 4 9.0 200.0
2020-1 5 4.0 300.0
2020-2 5 NaN NaN
2020-3 5 NaN NaN # These NaN are Not Needed
Removing the NaN rows and converting Return back to int:
(pd.wide_to_long(df, stubnames=['Return', 'Rev'], i='Date', j='K')
   .dropna()
   .astype({'Return': int}))
Return Rev
Date K
2020-1 1 0 100.0
2020-2 1 5 200.0
2020-3 1 2 100.0
2020-1 2 1 500.0
2020-2 2 6 120.0
2020-3 2 3 0.0
2020-1 3 2 100.0
2020-2 3 7 100.0
2020-3 3 7 100.0
2020-1 4 3 200.0
2020-2 4 8 200.0
2020-3 4 9 200.0
2020-1 5 4 300.0
Then this can easily be moved back to wide with a pivot_table:
(pd.wide_to_long(df, stubnames=['Return', 'Rev'], i='Date', j='K')
   .dropna()
   .astype({'Return': int})
   .pivot_table(index='Date', columns='Return', values='Rev', fill_value=0))
Return 0 1 2 3 4 5 6 7 8 9
Date
2020-1 100 500 100 200 300 0 0 0 0 0
2020-2 0 0 0 0 0 200 120 100 200 0
2020-3 0 0 100 0 0 0 0 100 0 200
The rest is just cosmetic changes to the DataFrame.
If dates are duplicated, wide_to_long cannot be used (its i column must uniquely identify rows), but we can reshape manually by building a column MultiIndex with str.extract and then using set_index + stack:
# set the index column
new_df = df.set_index('Date')
# build the column MultiIndex manually: split names like 'Return1' into ('Return', '1')
new_df.columns = pd.MultiIndex.from_frame(
    new_df.columns.str.extract(r'(.*?)(\d+)$')
)
# stack, then the rest is the same
new_df = (
    new_df.stack()
          .dropna()
          .astype({'Return': int})
          .pivot_table(index='Date', columns='Return', values='Rev',
                       fill_value=0, aggfunc='first')  # use 'sum' to accumulate repeated return values
          .add_prefix('NRev')
          .rename_axis(columns=None)
          .reset_index()
)
Sample DF with duplicate dates:
df = pd.DataFrame({'Date': ['2020-1', '2020-2', '2020-2'],
                   'Return1': [0, 5, 0],
                   'Return2': [1, 6, 1],
                   'Return3': [2, 7, 2],
                   'Return4': [3, 8, 3],
                   'Return5': [4.0, np.nan, 4.0],
                   'Rev1': [100, 200, 100],
                   'Rev2': [500, 120, 0],
                   'Rev3': [100, 100, 100],
                   'Rev4': [200, 200, 200],
                   'Rev5': [300.0, np.nan, np.nan]})
df
Date Return1 Return2 Return3 Return4 Return5 Rev1 Rev2 Rev3 Rev4 Rev5
0 2020-1 0 1 2 3 4.0 100 500 100 200 300.0
1 2020-2 5 6 7 8 NaN 200 120 100 200 NaN
2 2020-2 0 1 2 3 4.0 100 0 100 200 NaN
new_df
Date NRev0 NRev1 NRev2 NRev3 NRev4 NRev5 NRev6 NRev7 NRev8
0 2020-1 100 500 100 200 300 0 0 0 0
1 2020-2 100 0 100 200 0 200 120 100 200
Related
In this example, we attempt to apply the value computed within a group and column to the other NaNs in the same group and column.
import pandas as pd
df = pd.DataFrame({'id': [1, 1, 2, 2, 3, 4, 5],
                   'Year': [2000, 2000, 2001, 2001, 2000, 2000, 2000],
                   'Values': [1, 3, 2, 3, 4, 5, 6]})
df['pct'] = df.groupby(['id', 'Year'])['Values'].apply(lambda x: x/x.shift() - 1)
print(df)
id Year Values pct
0 1 2000 1 NaN
1 1 2000 3 2.0
2 2 2001 2 NaN
3 2 2001 3 0.5
4 3 2000 4 NaN
5 4 2000 5 NaN
6 5 2000 6 NaN
I have tried to use .ffill() to fill the NaNs within each group that contains a value. For example, I want the NaN at index 0 to become 2.0 and the NaN at index 2 to become 0.5.
df['pct'] = df.groupby(['id', 'Year'])['pct'].ffill()
print(df)
id Year Values pct
0 1 2000 1 NaN
1 1 2000 3 2.0
2 2 2001 2 NaN
3 2 2001 3 0.5
4 3 2000 4 NaN
5 4 2000 5 NaN
6 5 2000 6 NaN
It should be bfill, since the computed value sits on the later row of each group and has to be propagated backwards:
df['pct'] = df.groupby(['id', 'Year'])['pct'].bfill()
df
Out[109]:
id Year Values pct
0 1 2000 1 2.0
1 1 2000 3 2.0
2 2 2001 2 0.5
3 2 2001 3 0.5
4 3 2000 4 NaN
5 4 2000 5 NaN
6 5 2000 6 NaN
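As an aside, the same pct column can be produced with the built-in groupby pct_change and then back-filled within each group; a minimal sketch of that variant:
# equivalent to the shift-based lambda, then back-fill inside each (id, Year) group
df['pct'] = (df.groupby(['id', 'Year'])['Values']
               .pct_change()
               .groupby([df['id'], df['Year']])
               .bfill())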
I have a Dataframe of the form
date_time uids
2018-10-16 23:00:00 1000,1321,7654,1321
2018-10-16 23:10:00 7654
2018-10-16 23:20:00 NaN
2018-10-16 23:30:00 7654,1000,7654,1321,1000
2018-10-16 23:40:00 691,3974,3974,323
2018-10-16 23:50:00 NaN
2018-10-17 00:00:00 NaN
2018-10-17 00:10:00 NaN
2018-10-17 00:20:00 27,33,3974,3974,7665,27
This is a very big DataFrame containing 5-minute time intervals and the IDs that appeared during each interval.
I want to iterate over this DataFrame 6 rows at a time (corresponding to 1 hour) and create a DataFrame containing each ID and the number of times it appears in each of those intervals.
The expected output is one DataFrame per hour. For example, in the above case the DataFrame for the hour 23-00 would have this form:
uid 1 2 3 4 5 6
1000 1 0 0 2 0 0
1321 2 0 0 1 0 0
and so on
How can I do this efficiently?
I don't have an exact solution, but you could create a pivot table with the ids on the index and the datetimes on the columns. Then you just have to select the columns you want.
import pandas as pd
import numpy as np
df = pd.DataFrame(
    {
        "date_time": [
            "2018-10-16 23:00:00",
            "2018-10-16 23:10:00",
            "2018-10-16 23:20:00",
            "2018-10-16 23:30:00",
            "2018-10-16 23:40:00",
            "2018-10-16 23:50:00",
            "2018-10-17 00:00:00",
            "2018-10-17 00:10:00",
            "2018-10-17 00:20:00",
        ],
        "uids": [
            "1000,1321,7654,1321",
            "7654",
            np.nan,
            "7654,1000,7654,1321,1000",
            "691,3974,3974,323",
            np.nan,
            np.nan,
            np.nan,
            "27,33,3974,3974,7665,27",
        ],
    }
)
df["date_time"] = pd.to_datetime(df["date_time"])
df = (
    df.set_index("date_time")  # do not use set_index if date_time is already the index
      .loc[:, "uids"]
      .str.extractall(r"(?P<uids>\d+)")  # separate all the ids
      .droplevel(level=1)
)
df["number"] = df.index.minute.astype(float) / 10 + 1  # the number 1 to 6 depending on the minutes
df_pivot = df.pivot_table(
    values="number",
    index="uids",
    columns=["date_time"],
)  # dataframe with all the uids on the index and all the datetimes in columns
You can apply this to the whole dataframe or just a subset containing 6 rows. Then you rename your columns.
You can use the function crosstab:
df['date_time'] = pd.to_datetime(df['date_time'])  # ensure datetimes (no-op if already converted)
df['uids'] = df['uids'].str.split(',')
df = df.explode('uids')
df['date_time'] = df['date_time'].dt.minute.floordiv(10).add(1)
pd.crosstab(df['uids'], df['date_time'], dropna=False)
Output:
date_time 1 2 3 4 5 6
uids
1000 1 0 0 2 0 0
1321 2 0 0 1 0 0
27 0 0 2 0 0 0
323 0 0 0 0 1 0
33 0 0 1 0 0 0
3974 0 0 2 0 2 0
691 0 0 0 0 1 0
7654 1 1 0 2 0 0
7665 0 0 1 0 0 0
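If you really do need one table per clock hour, the same idea can be applied per hourly group. A sketch, assuming df still holds the original date_time (as datetimes) and comma-separated uids columns:
exploded = (df.assign(uids=df['uids'].str.split(','))
              .explode('uids')
              .dropna(subset=['uids']))
exploded['slot'] = exploded['date_time'].dt.minute.floordiv(10).add(1)
tables = {hour: pd.crosstab(grp['uids'], grp['slot'], dropna=False)
          for hour, grp in exploded.groupby(pd.Grouper(key='date_time', freq='H'))}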
We can achieve this by extracting the minutes from your datetime column, then using pivot_table to get the wide format:
df['date_time'] = pd.to_datetime(df['date_time'])
df['minute'] = df['date_time'].dt.minute // 10
piv = (df.assign(uids=df['uids'].str.split(','))
         .explode('uids')
         .pivot_table(index='uids', columns='minute', values='minute', aggfunc='size'))
minute 0 1 2 3 4
uids
1000 1.0 NaN NaN 2.0 NaN
1321 2.0 NaN NaN 1.0 NaN
27 NaN NaN 2.0 NaN NaN
323 NaN NaN NaN NaN 1.0
33 NaN NaN 1.0 NaN NaN
3974 NaN NaN 2.0 NaN 2.0
691 NaN NaN NaN NaN 1.0
7654 1.0 1.0 NaN 2.0 NaN
7665 NaN NaN 1.0 NaN NaN
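To match the 1-6 slot labels and zero-filled layout from the question, a small cleanup of piv can follow (a sketch continuing from the code above):
piv = (piv.rename(columns=lambda m: m + 1)   # shift the slot labels up by one
          .fillna(0)
          .astype(int))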
I have a pandas dataframe:
SrNo value
a nan
1 100
2 200
3 300
b nan
1 500
2 600
3 700
c nan
1 900
2 1000
I want my final dataframe to be:
value new_col
100 a
200 a
300 a
500 b
600 b
700 b
900 c
1000 c
i.e. for SrNo 'a', the values under 'a' should have 'a' in a new column, and similarly for 'b' and 'c'.
Create the new column with where, using an isnull condition on value, then use ffill to replace the NaNs by forward filling.
Last, remove the NaN rows with dropna and the SrNo column with drop:
print (df['SrNo'].where(df['value'].isnull()))
0 a
1 NaN
2 NaN
3 NaN
4 b
5 NaN
6 NaN
7 NaN
8 c
9 NaN
10 NaN
Name: SrNo, dtype: object
df['new_col'] = df['SrNo'].where(df['value'].isnull()).ffill()
df = df.dropna().drop('SrNo', axis=1)
print (df)
value new_col
1 100.0 a
2 200.0 a
3 300.0 a
5 500.0 b
6 600.0 b
7 700.0 b
9 900.0 c
10 1000.0 c
Here's one way
In [2160]: df.assign(
               new_col=df.SrNo.str.extract(r'(\D+)', expand=True).ffill()
           ).dropna().drop('SrNo', axis=1)
Out[2160]:
value new_col
1 100.0 a
2 200.0 a
3 300.0 a
5 500.0 b
6 600.0 b
7 700.0 b
9 900.0 c
10 1000.0 c
Another way: replace the numbers with NaN and ffill():
df['col'] = df['SrNo'].replace('([0-9]+)', np.nan, regex=True).ffill()
df = df.dropna(subset=['value']).drop('SrNo', axis=1)
Output:
value col
1 100.0 a
2 200.0 a
3 300.0 a
5 500.0 b
6 600.0 b
7 700.0 b
9 900.0 c
10 1000.0 c
I am working on a machine learning task and I want to change each line from "numbered objects" to "objects sorted by some attrs".
For example, I have 5 heroes in 2 teams, represented by their stats (dN_%stat% and rN_%stat%), and I want to sort the heroes in each team by the stats numbered 3, 4, 0, 2 so the first one is the strongest, and so on.
Here is my current code, but it is very slow, so I want to use native pandas objects and operations:
def sort_heroes(df):
    for match_id in df.index:
        for team in ['r', 'd']:
            heroes = []
            for n in range(1, 6):
                heroes.append(
                    [df.loc[match_id, '%s%s_%s' % (team, n, stat)]
                     for stat in stats])  # stats is a list of stat-name suffixes defined elsewhere
            heroes.sort(key=lambda x: (x[3], x[4], x[0], x[2]))
            for n in range(1, 6):
                for i, stat in enumerate(stats):
                    df.loc[match_id, '%s%s_%s' %
                           (team, n, stat)] = heroes[n - 1][i]
A short example with partial but representative data:
match_id r1_xp r1_gold r2_xp r2_gold r3_xp r3_gold d1_xp d1_gold d2_xp d2_gold
1 10 20 100 10 5000 300 0 0 15 5
2 1 1 1000 80 100 13 200 87 311 67
What I want is to sort these column groups (rN_ and dN_ prefixes) first by gold, then by xp:
match_id r1_xp r1_gold r2_xp r2_gold r3_xp r3_gold d1_xp d1_gold d2_xp d2_gold
1 5000 300 10 20 100 10 15 5 0 0
2 1000 80 100 13 1 1 200 87 311 67
You can use:
df.set_index('match_id', inplace=True)
# create a MultiIndex with 3 levels
arr = df.columns.str.extract(r'([rd])(\d*)_(.*)', expand=True).T.values
df.columns = pd.MultiIndex.from_arrays(arr)
# reshape df, then sort
df = df.stack([0, 1]).reset_index().sort_values(['match_id', 'level_1', 'gold', 'xp'],
                                                ascending=[True, False, False, False])
print (df)
match_id level_1 level_2 gold xp
4 1 r 3 300.0 5000.0
2 1 r 1 20.0 10.0
3 1 r 2 10.0 100.0
1 1 d 2 5.0 15.0
0 1 d 1 0.0 0.0
8 2 r 2 80.0 1000.0
9 2 r 3 13.0 100.0
7 2 r 1 1.0 1.0
5 2 d 1 87.0 200.0
6 2 d 2 67.0 311.0
# assign new values to level_2
df.level_2 = df.groupby(['match_id', 'level_1']).cumcount().add(1).astype(str)
# restore the original shape
df = df.set_index(['match_id', 'level_1', 'level_2']).stack().unstack([1, 2, 3]).astype(int)
df = df.sort_index(level=[0, 1, 2], ascending=[False, True, False], axis=1)
# flatten the column MultiIndex into column names
df.columns = ['{}{}_{}'.format(x[0], x[1], x[2]) for x in df.columns]
df.reset_index(inplace=True)
print (df)
match_id r1_xp r1_gold r2_xp r2_gold r3_xp r3_gold d1_xp d1_gold \
0 1 5000 300 10 20 100 10 15 5
1 2 1000 80 100 13 1 1 200 87
d2_xp d2_gold
0 0 0
1 311 67
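If you would rather avoid the double reshape, a numpy-based variant can sort each team's hero block row-wise. A rough sketch, assuming df is the original wide frame from the question (before the reshaping above), only the two stats from the short example (xp, gold), and a sort by gold then xp with the strongest hero first:
import numpy as np

stats = ['xp', 'gold']                        # assumed stat order inside each hero block
for team, n_heroes in [('r', 3), ('d', 2)]:   # hero counts from the short example
    cols = ['%s%d_%s' % (team, n, s) for n in range(1, n_heroes + 1) for s in stats]
    vals = df[cols].to_numpy().reshape(len(df), n_heroes, len(stats))
    # the last lexsort key is the primary one: sort by gold, then xp, descending
    order = np.lexsort((-vals[..., 0], -vals[..., 1]), axis=-1)
    df[cols] = np.take_along_axis(vals, order[..., None], axis=1).reshape(len(df), -1)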
I have a MultiIndex Series (3 indices) that looks like this:
Week ID_1 ID_2
3 26 1182 39.0
4767 42.0
31393 20.0
31690 42.0
32962 3.0
....................................
I also have a dataframe df which contains all the columns used as indices in the Series above (and more), and I want to create a new column in df that contains the Series value matching the row's ID_1 and ID_2 and its Week - 2.
For example, for the row in the dataframe that has ID_1 = 26, ID_2 = 1182 and Week = 3, I want to look up the Series value indexed by Week = 1 (3 - 2), ID_1 = 26 and ID_2 = 1182 and put it in a new column on that row. Further, my Series might not have the value required by the dataframe, in which case I'd like to just have 0.
Right now, I am trying to do this by using:
[multiindex_series.get((x[1].get('week', 2) - 2, x[1].get('ID_1', 0), x[1].get('ID_2', 0))) for x in df.iterrows()]
This, however, is very slow and memory hungry, and I was wondering whether there are better ways to do it.
FWIW, the Series was created using
saved_groupby = df.groupby(['Week', 'ID_1', 'ID_2'])['Target'].median()
and I'm willing to do it a different way if better paths exist to create what I'm looking for.
Increase the Week by 2 (and rename Target so the merged column has a clear name):
saved_groupby = df.groupby(['Week', 'ID_1', 'ID_2'])['Target'].median()
saved_groupby = saved_groupby.reset_index()
saved_groupby['Week'] = saved_groupby['Week'] + 2
saved_groupby = saved_groupby.rename(columns={'Target': 'Median'})
and then merge df with saved_groupby:
result = pd.merge(df, saved_groupby, on=['Week', 'ID_1', 'ID_2'], how='left')
This will augment df with the target median from 2 weeks ago.
To make the merged Median column 0 when there is no match, use fillna to change the NaNs to 0:
result['Median'] = result['Median'].fillna(0)
For example,
import numpy as np
import pandas as pd
np.random.seed(2016)
df = pd.DataFrame(np.random.randint(5, size=(20,5)),
columns=['Week', 'ID_1', 'ID_2', 'Target', 'Foo'])
saved_groupby = df.groupby(['Week', 'ID_1', 'ID_2'])['Target'].median()
saved_groupby = saved_groupby.reset_index()
saved_groupby['Week'] = saved_groupby['Week'] + 2
saved_groupby = saved_groupby.rename(columns={'Target':'Median'})
result = pd.merge(df, saved_groupby, on=['Week', 'ID_1', 'ID_2'], how='left')
result['Median'] = result['Median'].fillna(0)
print(result)
yields
Week ID_1 ID_2 Target Foo Median
0 3 2 3 4 2 0.0
1 3 3 0 3 4 0.0
2 4 3 0 1 2 0.0
3 3 4 1 1 1 0.0
4 2 4 2 0 3 2.0
5 1 0 1 4 4 0.0
6 2 3 4 0 0 0.0
7 4 0 0 2 3 0.0
8 3 4 3 2 2 0.0
9 2 2 4 0 1 0.0
10 2 0 4 4 2 0.0
11 1 1 3 0 0 0.0
12 0 1 0 2 0 0.0
13 4 0 4 0 3 4.0
14 1 2 1 3 1 0.0
15 3 0 1 3 4 2.0
16 0 4 2 2 4 0.0
17 1 1 4 4 2 0.0
18 4 1 0 3 0 0.0
19 1 0 1 0 0 0.0
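As an alternative to the merge, the lookup can also be done by shifting the Series' Week level and reindexing it with the keys from df; a sketch, assuming saved_groupby is still the original MultiIndex Series (before reset_index):
shifted = saved_groupby.copy()
# add 2 to the Week level so each row of df looks up the median from 2 weeks earlier
shifted.index = shifted.index.set_levels(shifted.index.levels[0] + 2, level='Week')
df['Median'] = shifted.reindex(
    pd.MultiIndex.from_frame(df[['Week', 'ID_1', 'ID_2']])
).fillna(0).to_numpy()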