merge duplicate rows by adding a column 'count' [duplicate] - python

This question already has answers here:
Get statistics for each group (such as count, mean, etc) using pandas GroupBy?
(9 answers)
Closed 3 years ago.
I want to merge duplicate rows by adding a new column 'count'.
This is the final dataframe that I want; the rows can be in any order.

You can use:
df["count"] = 1
df = df.groupby(["user_id", "item_id", "total"])["count"].count().reset_index()
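As a runnable sketch (the user_id/item_id/total column names come from the answer's code; the data is made up), size() gives the same result without needing the helper column:

```python
import pandas as pd

# Made-up data matching the question's column names
df = pd.DataFrame({
    "user_id": [1, 1, 2],
    "item_id": [10, 10, 20],
    "total": [5.0, 5.0, 7.0],
})

# size() counts the rows in each group directly, so the
# temporary 'count' column of ones is not needed
out = df.groupby(["user_id", "item_id", "total"]).size().reset_index(name="count")
print(out)
```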

Append only last row in a panda dataframe to a new dataframe [duplicate]

This question already has answers here:
How to get the last N rows of a pandas DataFrame?
(3 answers)
Closed 2 years ago.
I have the following pandas dataframe (df):
How can I append only the last row (Date 2021-01-22) to a new dataframe (df_new)?
df_new = df_new.append(df.tail(1))
If df_new is not defined yet, the following will create it:
df_new = df.tail(1)
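A minimal sketch with made-up data; note that DataFrame.append was removed in pandas 2.0, so pd.concat is the replacement when df_new already exists:

```python
import pandas as pd

# Made-up frame; only the last row matters for the example
df = pd.DataFrame({"Date": ["2021-01-21", "2021-01-22"], "value": [1, 2]})

# If df_new does not exist yet, tail(1) alone creates it
df_new = df.tail(1)

# If df_new already exists, concatenate the last row onto it
# (DataFrame.append was removed in pandas 2.0)
df_new = pd.concat([df_new, df.tail(1)])
```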

Python: transpose and group dataframe [duplicate]

This question already has answers here:
How can I pivot a dataframe?
(5 answers)
Closed 2 years ago.
I have a dataframe, table_revenue. How can I transpose the dataframe, grouping by 'station_id', so that the value of each cell is the price, aggregated by date (columns) for a specific 'station_id' (rows)?
It seems you need pivot_table():
output = table_revenue.pivot_table(index='station_id', columns='endAt', values='price', aggfunc='sum', fill_value=0)
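A self-contained sketch of the pivot_table call, using made-up data under the column names the answer assumes (station_id, endAt, price):

```python
import pandas as pd

# Made-up revenue table; column names follow the answer's code
table_revenue = pd.DataFrame({
    "station_id": [1, 1, 2],
    "endAt": ["2021-01-01", "2021-01-01", "2021-01-02"],
    "price": [10, 5, 7],
})

# One row per station, one column per date, prices summed;
# fill_value=0 replaces the NaN for missing station/date pairs
output = table_revenue.pivot_table(
    index="station_id", columns="endAt", values="price",
    aggfunc="sum", fill_value=0,
)
print(output)
```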

Create an index column by group [duplicate]

This question already has answers here:
Add a sequential counter column on groups to a pandas dataframe
(4 answers)
Closed 4 years ago.
I would like to index my dataframe so that within each group the index runs from 0 up to the number of observations in that group, i.e. from:
pd.DataFrame([["John","Car"],["John","House"],["Sam","Skate"],["Sam","Disco"],["Sam","Space"]])
I would like to have :
pd.DataFrame([["John","Car",0],["John","House",1],["Sam","Skate",0],["Sam","Disco",1],["Sam","Space",2]])
Thanks
You're looking for the cumulative count function:
df = pd.DataFrame([["John","Car"],["John","House"],["Sam","Skate"],["Sam","Disco"],["Sam","Space"]])
df.groupby(0).cumcount()
Use:
df.groupby(0)[0].apply(lambda x:x.duplicated().cumsum())
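Putting the cumcount approach together with the question's own data, as a runnable sketch:

```python
import pandas as pd

# The question's example frame
df = pd.DataFrame([["John", "Car"], ["John", "House"],
                   ["Sam", "Skate"], ["Sam", "Disco"], ["Sam", "Space"]])

# cumcount numbers the rows within each group, starting at 0,
# and stays aligned with the original row order
df[2] = df.groupby(0).cumcount()
print(df)
```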

Drop duplicate from 2 dataframe [duplicate]

This question already has answers here:
How to filter Pandas dataframe using 'in' and 'not in' like in SQL
(11 answers)
pandas get rows which are NOT in other dataframe
(17 answers)
Pandas Merging 101
(8 answers)
Closed 4 years ago.
I've got 2 dataframes, df1 and df2, each with an emails column (and other unimportant ones).
I want to drop rows in df2 that contain emails that are already in df1.
How can I do that?
You can do something like this:
df2 = df2[~df2['email_column'].isin(df1['email_column'])]
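A runnable sketch with made-up emails (email_column is the answer's placeholder name, shortened here to email):

```python
import pandas as pd

# Made-up frames; only the email column matters for the filter
df1 = pd.DataFrame({"email": ["a@x.com", "b@x.com"]})
df2 = pd.DataFrame({"email": ["b@x.com", "c@x.com"]})

# Keep only df2 rows whose email is NOT already present in df1
df2_filtered = df2[~df2["email"].isin(df1["email"])]
print(df2_filtered)
```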

How to rename an aggregate column in groupby in pandas [duplicate]

This question already has answers here:
Naming returned columns in Pandas aggregate function? [duplicate]
(6 answers)
Rename result columns from Pandas aggregation ("FutureWarning: using a dict with renaming is deprecated")
(6 answers)
Closed 4 years ago.
I'm doing a group by on a pandas dataframe. How can I change the name of the aggregate column after the group by?
df.groupby(['open_year','open_month','source']).size().reset_index()
It creates a dataframe with the following columns:
open_year, open_month, CREATED_BY_REVISED, 0
I'm trying to rename the last column (0), but it doesn't work:
x.rename({'0':'xyz'})
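The attempt above fails because the aggregate column's label is the integer 0, not the string '0', and rename applies to the index unless given the columns= keyword. A sketch of two working options, using made-up data ('xyz' stands in for whatever name is wanted):

```python
import pandas as pd

# Made-up frame with the question's grouping columns
df = pd.DataFrame({
    "open_year": [2020, 2020],
    "open_month": [1, 1],
    "source": ["a", "a"],
})

# Option 1: name the size column directly when resetting the index
out = (df.groupby(["open_year", "open_month", "source"])
         .size().reset_index(name="xyz"))

# Option 2: rename afterwards -- the label is the integer 0,
# and rename needs the columns= keyword to target columns
out2 = (df.groupby(["open_year", "open_month", "source"])
          .size().reset_index()
          .rename(columns={0: "xyz"}))
print(out)
```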
