I'm trying to count, for each customer, the number of rows that differ from the row before (i.e. the number of consecutive blocks).
My data looks like this dummy example:
import pandas as pd

df = pd.DataFrame({'Customer': ['A','A','A','A','A','A','A','A','B','B','B','B','B','B','B','B'],
                   'Time': ['00:00','01:00','02:00','03:00','04:00','05:00','06:00','07:00',
                            '00:00','01:00','02:00','03:00','04:00','05:00','06:00','07:00'],
                   'Lat': [20,20,30,30,30,40,20,20,20,20,30,30,30,40,20,20],
                   'Lon': [40,40,50,50,50,60,40,40,40,40,50,50,50,60,40,40]})
Customer Time Lat Lon
0 A 00:00 20 40
1 A 01:00 20 40
2 A 02:00 30 50
3 A 03:00 30 50
4 A 04:00 30 50
5 A 05:00 40 60
6 A 06:00 20 40
7 A 07:00 20 40
8 B 00:00 20 40
9 B 01:00 20 40
10 B 02:00 30 50
11 B 03:00 30 50
12 B 04:00 30 50
13 B 05:00 40 60
14 B 06:00 20 40
15 B 07:00 20 40
And I want to count the number of different rows (according to both Lat and Lon) per customer, counting consecutive duplicates only once. So in the example it would return 4 for both customers, even though there are only 3 different (Lat, Lon) pairs.
This:
test = (df['Lat'] != df['Lat'].shift(1)).values.sum()
Takes care of only one column and doesn't group by Customer.
But I can't seem to do the same with both columns:
df[['Lat','Lon']] != df[['Lat','Lon']].shift(1)
It gives:
ValueError: Wrong number of items passed 2, placement implies 1
and it still doesn't group by Customer. Can somebody help?
I am using shift to create a new key for each consecutive block, then drop_duplicates:
df['key'] = (df.groupby('Customer')
               .apply(lambda x: x[['Lat','Lon']].ne(x[['Lat','Lon']].shift()).all(1).cumsum())
               .reset_index(level=0, drop=True))
df.drop_duplicates(['Customer','key'])
Customer Time Lat Lon key
0 A 00:00 20 40 1
2 A 02:00 30 50 2
5 A 05:00 40 60 3
6 A 06:00 20 40 4
8 B 00:00 20 40 1
10 B 02:00 30 50 2
13 B 05:00 40 60 3
14 B 06:00 20 40 4
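If you only need the counts rather than the deduplicated rows, the key column built above already gives them; a small follow-up on the same frame:
df.groupby('Customer')['key'].max()
# or, equivalently, count the deduplicated rows
df.drop_duplicates(['Customer', 'key']).groupby('Customer').size()
Both return 4 for A and B.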
IIUC,
df.groupby('Customer')[['Lat', 'Lon']].apply(lambda s: s.diff().ne(0).all(1).sum())
Customer
A 4
B 4
dtype: int64
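For larger frames the same idea can be written without apply; a minimal sketch, assuming the data is sorted by Customer and Time as in the example (it uses any(axis=1), so a change in either coordinate starts a new block, whereas the answers above use all(1); on this data the result is identical):
# flag a row as the start of a new block when the (Lat, Lon) pair changes
# or a new Customer begins, then count the flags per customer
pair_changed = df[['Lat', 'Lon']].ne(df[['Lat', 'Lon']].shift()).any(axis=1)
new_customer = df['Customer'].ne(df['Customer'].shift())
(pair_changed | new_customer).groupby(df['Customer']).sum()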
I have a dataframe like this:
DATE MIN_AMOUNT MAX_AMOUNT MIN_DAY MAX_DAY
01/09/2022 10 20 1 2
01/09/2022 15 25 4 5
01/09/2022 30 50 7 10
05/09/2022 10 20 1 2
05/09/2022 15 25 4 5
07/09/2022 15 25 4 5
I want to expand the dataframe to all dates in the range spanned by the DATE column, with forward filling. The desired output is:
DATE MIN_AMOUNT MAX_AMOUNT MIN_DAY MAX_DAY
01/09/2022 10 20 1 2
01/09/2022 15 25 4 5
01/09/2022 30 50 7 10
02/09/2022 10 20 1 2
02/09/2022 15 25 4 5
02/09/2022 30 50 7 10
03/09/2022 10 20 1 2
03/09/2022 15 25 4 5
03/09/2022 30 50 7 10
04/09/2022 10 20 1 2
04/09/2022 15 25 4 5
04/09/2022 30 50 7 10
05/09/2022 10 20 1 2
05/09/2022 15 25 4 5
06/09/2022 10 20 1 2
06/09/2022 15 25 4 5
07/09/2022 15 25 4 5
Could you please help me with this?
First convert the values to datetimes and create a helper counter Series g with GroupBy.cumcount; reshape with DataFrame.set_index and DataFrame.unstack, then use DataFrame.asfreq with method='ffill' and reshape back with DataFrame.stack. Remove the helper level with DataFrame.droplevel, convert the DatetimeIndex back to a column, change the datetime format, and finally restore the same dtypes as the original DataFrame:
df['DATE'] = pd.to_datetime(df['DATE'], dayfirst=True)
g = df.groupby('DATE').cumcount()

df = (df.set_index(['DATE', g])
        .unstack()
        .asfreq('D', method='ffill')
        .stack()
        .droplevel(-1)
        .reset_index()
        .assign(DATE=lambda x: x['DATE'].dt.strftime('%d/%m/%Y'))
        .astype(df.dtypes)
     )
print(df)
DATE MIN_AMOUNT MAX_AMOUNT MIN_DAY MAX_DAY
0 2022-01-09 10 20 1 2
1 2022-01-09 15 25 4 5
2 2022-01-09 30 50 7 10
3 2022-02-09 10 20 1 2
4 2022-02-09 15 25 4 5
5 2022-02-09 30 50 7 10
6 2022-03-09 10 20 1 2
7 2022-03-09 15 25 4 5
8 2022-03-09 30 50 7 10
9 2022-04-09 10 20 1 2
10 2022-04-09 15 25 4 5
11 2022-04-09 30 50 7 10
12 2022-05-09 10 20 1 2
13 2022-05-09 15 25 4 5
14 2022-06-09 10 20 1 2
15 2022-06-09 15 25 4 5
16 2022-07-09 15 25 4 5
A couple of merges should help with this, and should still be efficient as the data size increases:
Get the unique dates and build a new dataframe from that:
# make sure DATE is a datetime first (day-first, as in the sample data)
df['DATE'] = pd.to_datetime(df['DATE'], dayfirst=True)

out = df.DATE.drop_duplicates()
dates = pd.date_range(out.min(), out.max(), freq='D')
dates = pd.DataFrame(dates, columns=['dates'])
Merge dates with out, and subsequently merge the outcome with the original dataframe:
(dates
 .merge(out, left_on='dates', right_on='DATE', how='left')
 # faster to fill on a Series than a DataFrame
 .assign(DATE=lambda df: df.DATE.ffill())
 .merge(df, on='DATE', how='left')
 .drop(columns='DATE')
 .rename(columns={'dates': 'DATE'})
)
DATE MIN_AMOUNT MAX_AMOUNT MIN_DAY MAX_DAY
0 2022-09-01 10 20 1 2
1 2022-09-01 15 25 4 5
2 2022-09-01 30 50 7 10
3 2022-09-02 10 20 1 2
4 2022-09-02 15 25 4 5
5 2022-09-02 30 50 7 10
6 2022-09-03 10 20 1 2
7 2022-09-03 15 25 4 5
8 2022-09-03 30 50 7 10
9 2022-09-04 10 20 1 2
10 2022-09-04 15 25 4 5
11 2022-09-04 30 50 7 10
12 2022-09-05 10 20 1 2
13 2022-09-05 15 25 4 5
14 2022-09-06 10 20 1 2
15 2022-09-06 15 25 4 5
16 2022-09-07 15 25 4 5
I have a pandas df, like this:
ID date value
0 10 2022-01-01 100
1 10 2022-01-02 150
2 10 2022-01-03 0
3 10 2022-01-04 0
4 10 2022-01-05 200
5 10 2022-01-06 0
6 10 2022-01-07 150
7 10 2022-01-08 0
8 10 2022-01-09 0
9 10 2022-01-10 0
10 10 2022-01-11 0
11 10 2022-01-12 100
12 23 2022-02-01 490
13 23 2022-02-02 0
14 23 2022-02-03 350
15 23 2022-02-04 333
16 23 2022-02-05 0
17 23 2022-02-06 0
18 23 2022-02-07 0
19 23 2022-02-08 211
20 23 2022-02-09 100
I would like to calculate the number of days since the last non-zero value, as in the example below. How can I use diff() for this? The calculation should restart for each ID.
Output:
ID date value days_last_value
0 10 2022-01-01 100 0
1 10 2022-01-02 150 1
2 10 2022-01-03 0
3 10 2022-01-04 0
4 10 2022-01-05 200 3
5 10 2022-01-06 0
6 10 2022-01-07 150 2
7 10 2022-01-08 0
8 10 2022-01-09 0
9 10 2022-01-10 0
10 10 2022-01-11 0
11 10 2022-01-12 100 5
12 23 2022-02-01 490 0
13 23 2022-02-02 0
14 23 2022-02-03 350 2
15 23 2022-02-04 333 1
16 23 2022-02-05 0
17 23 2022-02-06 0
18 23 2022-02-07 0
19 23 2022-02-08 211 4
20 23 2022-02-09 100 1
Explanation below.
import pandas as pd
df = pd.DataFrame({'ID': 12 * [10] + 9 * [23],
'value': [100, 150, 0, 0, 200, 0, 150, 0, 0, 0, 0, 100, 490, 0, 350, 333, 0, 0, 0, 211, 100]})
days = df.groupby(['ID', (df['value'] != 0).cumsum()]).size().groupby('ID').shift(fill_value=0)
days.index = df.index[df['value'] != 0]
df['days_last_value'] = days
df
ID value days_last_value
0 10 100 0.0
1 10 150 1.0
2 10 0 NaN
3 10 0 NaN
4 10 200 3.0
5 10 0 NaN
6 10 150 2.0
7 10 0 NaN
8 10 0 NaN
9 10 0 NaN
10 10 0 NaN
11 10 100 5.0
12 23 490 0.0
13 23 0 NaN
14 23 350 2.0
15 23 333 1.0
16 23 0 NaN
17 23 0 NaN
18 23 0 NaN
19 23 211 4.0
20 23 100 1.0
First, we'll have to group by 'ID'.
We also create groups for each block of days by building a True/False series where value is not 0 and then taking a cumulative sum. That is the (df['value'] != 0).cumsum() part, which results in:
0 1
1 2
2 2
3 2
4 3
5 3
6 4
7 4
8 4
9 4
10 4
11 5
12 6
13 6
14 7
15 8
16 8
17 8
18 8
19 9
20 10
We can use the values in this series to also group on; combining that with the 'ID' group, you have the individual blocks of days. This is the df.groupby(['ID', (df['value'] != 0).cumsum()]) part.
Now, for each block, we get its size, which is the number of days until the next non-zero value. We then shift these sizes down by one within each ID, so that each non-zero row receives the size of the block that precedes it (the gap leading up to it), and the first block of each ID is filled with 0. That shift has to happen per ID group, so we group by ID again before shifting (the grouping was lost after .size()).
Now, this new series needs to be assigned back to the dataframe, but it's obviously shorter. Since its index doesn't match the original dataframe's index either, we can't simply reassign it (not with df['days_last_value'] = ..., df.loc[...] or df.iloc[...]).
Instead, we select the index values of the original dataframe where value is not zero, and set the index of the days equal to that.
Now it's an easy step to assign the days directly to the relevant column in the dataframe: pandas will match the indices.
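For reference, a sketch of an alternative that reads the gap straight off the dates, assuming the full dataframe from the question (with its date column parsed as datetimes):
import pandas as pd

df['date'] = pd.to_datetime(df['date'])
nonzero = df['value'] != 0

# for the non-zero rows, the gap is the difference (in days) to the previous
# non-zero date within the same ID; the first non-zero row per ID gets 0
df.loc[nonzero, 'days_last_value'] = (
    df.loc[nonzero]
      .groupby('ID')['date']
      .diff()
      .dt.days
      .fillna(0)
      .astype(int)
)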
I have a dataset where I would like to sum two columns and then perform a subtraction while displaying a cumulative sum.
Data
id date t1 t2 total start cur_t1 cur_t2 final_o finaldb de_t1 de_t2
a q122 4 1 5 50 25 20 55 21 1 1
a q222 1 1 2 50 25 20 57 22 0 0
a q322 0 0 0 50 25 20 57 22 5 5
b q122 5 5 10 100 30 40 110 27 4 4
b q222 2 2 4 100 30 70 114 29 5 1
b q322 3 4 7 100 30 70 121 33 0 1
Desired
id date t1 t2 total start cur_t1 cur_t2 final_o finaldb de_t1 de_t2 finalt1
a q122 4 1 5 50 25 20 55 21 1 1 28
a q222 1 1 2 50 25 20 57 22 0 0 29
a q322 0 0 0 50 25 20 57 22 5 5 24
b q122 5 5 10 100 30 40 110 27 4 4 31
b q222 2 2 4 100 30 70 114 29 5 1 28
b q322 3 4 7 100 30 70 121 33 0 1 31
Logic
Create the 'finalt1' column by summing 't1' and 'cur_t1' initially, and then subtracting 'de_t1' cumulatively, grouping by 'id' and 'date'.
Doing
df['finalt1'] = df['cur_t1'].add(df.groupby('id')['t1'].cumsum())
I am still researching how to subtract the 'de_t1' column cumulatively.
I can't test right now, but logically:
(df['cur_t1'].add(df.groupby('id')['t1'].cumsum())
             .sub(df.groupby('id')['de_t1'].cumsum())
)
Of note, there is also this option to avoid grouping twice (it computes both cumsums at once and takes the difference), but it is actually slower:
df['cur_t1'].add(df.groupby('id')[['de_t1', 't1']].cumsum().diff(axis=1)['t1'])
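Putting it together, a minimal sketch assigning the result back to the frame from the question:
df['finalt1'] = (
    df['cur_t1']
    + df.groupby('id')['t1'].cumsum()
    - df.groupby('id')['de_t1'].cumsum()
)
# e.g. for id 'a': 25 + 4 - 1 = 28, then 25 + 5 - 1 = 29, then 25 + 5 - 6 = 24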
So I have a data frame that is something like this
Resource 2020-06-01 2020-06-02 2020-06-03
Name1 8 7 8
Name2 7 9 9
Name3 10 10 10
Imagine that the header is literally all the days of the month, and that there are way more names than just three.
I need to reduce the columns to five: the first column covering the days from 2020-06-01 to 2020-06-05, then Saturday through Friday of each following week, or up to the last day of the month if it comes before Friday. So for June it would be these weeks:
week 1: 2020-06-01 to 2020-06-05
week 2: 2020-06-06 to 2020-06-12
week 3: 2020-06-13 to 2020-06-19
week 4: 2020-06-20 to 2020-06-26
week 5: 2020-06-27 to 2020-06-30
I have no problem defining these weeks. The problem is grouping the columns based on them.
I couldn't come up with anything.
Does someone have any ideas about this?
I had to use this code to generate your dataframe:
import numpy as np
import pandas as pd

dates = pd.date_range(start='2020-06-01', end='2020-06-30')
df = pd.DataFrame({
    'Name1': np.random.randint(1, 10, size=len(dates)),
    'Name2': np.random.randint(1, 10, size=len(dates)),
    'Name3': np.random.randint(1, 10, size=len(dates)),
})
df = df.set_index(dates).transpose().reset_index().rename(columns={'index': 'Resource'})
Then, the solution starts from here.
# Set the first column as index
df = df.set_index(df['Resource'])
# Remove the unused column
df = df.drop(columns=['Resource'])
# Transpose the dataframe
df = df.transpose()
# Output:
Resource Name1 Name2 Name3
2020-06-01 00:00:00 3 2 7
2020-06-02 00:00:00 5 6 8
2020-06-03 00:00:00 2 3 6
...
# Bring "Resource" from index to column
df = df.reset_index()
df = df.rename(columns={'index': 'Resource'})
# Add a column with the ISO week number
# (Series.dt.weekofyear was removed in recent pandas; isocalendar().week replaces it)
df['week_no'] = df['Resource'].dt.isocalendar().week
# You can simply group by the week_no column (summing the numeric columns only)
df.groupby('week_no').sum(numeric_only=True).reset_index()
# Output:
Resource week_no Name1 Name2 Name3
0 23 38 42 41
1 24 37 30 43
2 25 38 29 23
3 26 29 40 42
4 27 2 8 3
I don't know what you want to do next. If you want your original form, just transpose() it back.
EDIT: OP clarified that the week should start on Saturday and end on Friday.
# 0: Monday
# 1: Tuesday
# 2: Wednesday
# 3: Thursday
# 4: Friday
# 5: Saturday
# 6: Sunday
df['weekday'] = df['Resource'].dt.weekday.apply(lambda day: 0 if day <= 4 else 1)
df['customised_weekno'] = df['week_no'] + df['weekday']
Output:
Resource Resource Name1 Name2 Name3 week_no weekday customised_weekno
0 2020-06-01 4 7 7 23 0 23
1 2020-06-02 8 6 7 23 0 23
2 2020-06-03 5 9 5 23 0 23
3 2020-06-04 7 6 5 23 0 23
4 2020-06-05 6 3 7 23 0 23
5 2020-06-06 3 7 6 23 1 24
6 2020-06-07 5 4 4 23 1 24
7 2020-06-08 8 1 5 24 0 24
8 2020-06-09 2 7 9 24 0 24
9 2020-06-10 4 2 7 24 0 24
10 2020-06-11 6 4 4 24 0 24
11 2020-06-12 9 5 7 24 0 24
12 2020-06-13 2 4 6 24 1 25
13 2020-06-14 6 7 5 24 1 25
14 2020-06-15 8 7 7 25 0 25
15 2020-06-16 4 3 3 25 0 25
16 2020-06-17 6 4 5 25 0 25
17 2020-06-18 6 8 2 25 0 25
18 2020-06-19 3 1 2 25 0 25
So, you can use customised_weekno for grouping.
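For example, a minimal sketch that sums each Name column within its customised week and transposes back to the original wide layout (one row per Resource, one column per week):
weekly = (df.groupby('customised_weekno')[['Name1', 'Name2', 'Name3']]
            .sum()
            .transpose()
            .rename_axis(index='Resource', columns='week'))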
Assume the following:
df1:
x y z
1 10 11
2 20 22
3 30 33
4 40 44
1 20 21
1 30 31
1 40 41
2 10 12
2 30 32
2 40 42
3 10 31
3 20 23
3 40 43
4 10 14
4 20 24
4 30 34
df2:
x b
1 100
2 200
df3:
y c
10 1000
20 2000
I want all rows from df1, for which either x or y appears in either df2 or df3 respectively, meaning in this case
out:
x y z
1 10 11
2 20 22
1 20 21
1 30 31
1 40 41
2 10 12
2 30 32
2 40 42
3 10 31
3 20 23
4 10 14
4 20 24
I would like to do this in pure pandas, with no for loops. It seems standard enough to me, but I don't really know what to look for.
You can use isin for both conditions, chain them with a bitwise OR, and perform boolean indexing on the dataframe with the result:
df1[df1.x.isin(df2.x) | df1.y.isin(df3.y)]
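For reference, a runnable sketch reproducing the frames from the question:
import pandas as pd

df1 = pd.DataFrame({'x': [1, 2, 3, 4, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
                    'y': [10, 20, 30, 40, 20, 30, 40, 10, 30, 40, 10, 20, 40, 10, 20, 30],
                    'z': [11, 22, 33, 44, 21, 31, 41, 12, 32, 42, 31, 23, 43, 14, 24, 34]})
df2 = pd.DataFrame({'x': [1, 2], 'b': [100, 200]})
df3 = pd.DataFrame({'y': [10, 20], 'c': [1000, 2000]})

# keep rows whose x appears in df2 or whose y appears in df3
out = df1[df1['x'].isin(df2['x']) | df1['y'].isin(df3['y'])]
print(out)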