Python, Pandas, DataFrame: add new column doing SQL GROUP_CONCAT equivalent - python

My question is very similar to the one asked but unanswered here
Replicating GROUP_CONCAT for pandas.DataFrame
I have a Pandas DataFrame which I want to group-concat into a data frame
+------+---------+
| team | user |
+------+---------+
| A | elmer |
| A | daffy |
| A | bugs |
| B | dawg |
| A | foghorn |
+------+---------+
Becoming
+------+---------------------------------------+
| team | group_concat(user) |
+------+---------------------------------------+
| A | elmer,daffy,bugs,foghorn |
| B | dawg |
+------+---------------------------------------+
As answered in the original topic, it can be done via any of these:
df.groupby('team').apply(lambda x: ','.join(x.user))
df.groupby('team').apply(lambda x: list(x.user))
df.groupby('team').agg({'user' : lambda x: ', '.join(x)})
But the resulting object is not a Pandas Dataframe anymore.
How can I get the GROUP_CONCAT results in the original Pandas DataFrame as a new column?
Cheers
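For reference, here is a minimal sketch that builds the example frame above (the data is assumed to match the table), so the snippets in the answers below can be run as-is:
import pandas as pd

# Sample data matching the table in the question
df = pd.DataFrame({
    'team': ['A', 'A', 'A', 'B', 'A'],
    'user': ['elmer', 'daffy', 'bugs', 'dawg', 'foghorn'],
})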

You can apply a join over the grouped values, then reset_index to get a DataFrame back.
output_df = df.groupby('team')['user'].apply(lambda x: ",".join(list(x))).reset_index()
output_df.rename(columns={'user': 'group_concat(user)'})
team group_concat(user)
0 A elmer,daffy,bugs,foghorn
1 B dawg
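If you want the concatenated string as a new column on the original DataFrame, as the question asks, a sketch using groupby().transform should do it (the column name group_concat(user) is just an illustrative choice):
# Broadcast each team's concatenation back onto every row of the original frame
df['group_concat(user)'] = df.groupby('team')['user'].transform(lambda x: ','.join(x))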

Let's break down the code below:
First, group by team and use apply on the user column to join its elements with a ,.
Then reset the index and rename the resulting column (axis=1 refers to columns, not rows).
res = (df.groupby('team')['user']
         .apply(lambda x: ','.join(str(i) for i in x))
         .reset_index()
         .rename({'user': 'group_concat(user)'}, axis=1))
Output:
team group_concat(user)
0 A elmer,daffy,bugs,foghorn
1 B dawg

Related

DataFrame groupby on each item within a column of lists

I have a dataframe (df):
| A | B | C |
| --- | ----- | ----------------------- |
| CA | Jon | [sales, engineering] |
| NY | Sarah | [engineering, IT] |
| VA | Vox | [services, engineering] |
I am trying to group by each item in the C column list (sales, engineering, IT, etc.).
Tried:
df.groupby('C')
but got "list not hashable", which is expected. I came across another post where it was recommended to convert the C column to a tuple, which is hashable, but I need to group by each item and not the combination.
My goal is to get, for each item in the C column lists, the count of rows in the df that contain it. So:
sales: 1
engineering: 3
IT: 1
services: 1
While there is probably a simpler way to obtain this than using groupby, I am still curious if groupby can be used in this case.
You can use explode and value_counts:
out = df.explode("C").value_counts("C")
Output:
print(out)
C
engineering 3
IT 1
sales 1
services 1
dtype: int64
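Since the question also asks whether groupby can be used here, a sketch of the same counts via explode followed by groupby (the sample frame is assumed to match the table above):
import pandas as pd

df = pd.DataFrame({
    'A': ['CA', 'NY', 'VA'],
    'B': ['Jon', 'Sarah', 'Vox'],
    'C': [['sales', 'engineering'], ['engineering', 'IT'], ['services', 'engineering']],
})

# One row per list element, then count rows per item
out = df.explode('C').groupby('C').size()
print(out)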

PySpark - how to select all columns to be used in groupby

I'm trying to chain a join and a groupby operation together. The inputs and the operation I want to do are shown below. I want to group by all the columns except the one used in agg. Is there a way of doing this without listing out all the column names, like groupby("colA","colB")? I tried groupby(df1.*) but that didn't work. In this case I know that I'd like to group by all the columns in df1. Many thanks.
Input1:
colA | ColB
--------------
A | 100
B | 200
Input2:
colAA | ColBB
--------------
A | Group1
B | Group2
A | Group2
df1.join(df2, df1.colA==df2.colAA, "left").drop("colAA").groupby("colA","colB").agg(collect_set("colBB"))
# Is there a way that I do not need to list ("colA","colB") in groupby? There will be many columns.
Output:
colA | ColB | collect_set
--------------
A | 100 | (Group1,Group2)
B | 200 | (Group2)
Based on your clarifying comments, use df1.columns
df1.join(df2, df1.colA==df2.colAA,"left").drop("colAA").groupby(df1.columns).agg(collect_set("colBB").alias('new')).show()
+----+----+----------------+
|colA|ColB| new|
+----+----+----------------+
| A| 100|[Group2, Group1]|
| B| 200| [Group2]|
+----+----+----------------+
Simply:
.groupby(df1.columns)
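A minimal, self-contained sketch of the whole pipeline; the SparkSession setup and sample data are assumptions added for illustration:
from pyspark.sql import SparkSession
from pyspark.sql.functions import collect_set

spark = SparkSession.builder.getOrCreate()

df1 = spark.createDataFrame([("A", 100), ("B", 200)], ["colA", "ColB"])
df2 = spark.createDataFrame([("A", "Group1"), ("B", "Group2"), ("A", "Group2")],
                            ["colAA", "colBB"])

# groupby accepts a list of column names, so df1.columns avoids spelling them out
result = (df1.join(df2, df1.colA == df2.colAA, "left")
             .drop("colAA")
             .groupby(df1.columns)
             .agg(collect_set("colBB").alias("new")))
result.show()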

Creating new columns and filling them based on another column's values

Let's say I have a dataframe df looking like this:
|ColA |
|---------|
|B=7 |
|(no data)|
|C=5 |
|B=3,C=6 |
How do I extract the data into new columns, so it looks like this:
|ColA | B | C |
|------|---|---|
|True | 7 | |
|False | | |
|True | | 5 |
|True | 3 | 6 |
For filling the columns I know I can use regex .extract, as shown in this solution.
But how do I set the Column name at the same time? So far I use a loop over df.ColA.loc[df["ColA"].isna()].iteritems(), but that does not seem like the best option for a lot of data.
You could use str.extractall to get the data, then reshape the output and join to a derivative of the original dataframe:
# create the B/C columns
df2 = (df['ColA'].str.extractall('([^=]+)=([^=,]+),?')
                 .set_index(0, append=True)
                 .droplevel('match')[1]
                 .unstack(0, fill_value='')
       )
# rework ColA and join previous output
df.notnull().join(df2).fillna('')
# or if several columns:
df.assign(ColA=df['ColA'].notnull()).join(df2).fillna('')
output:
ColA B C
0 True 7
1 False
2 True 5
3 True 3 6
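For completeness, a sketch of the sample frame used above, assuming the "(no data)" row is stored as a missing value:
import pandas as pd
import numpy as np

# '(no data)' is assumed to be NaN in the real frame
df = pd.DataFrame({'ColA': ['B=7', np.nan, 'C=5', 'B=3,C=6']})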

How to apply a function on each group of data in a pandas group by

Suppose the data frame below:
|id |day | order |
|---|--- |-------|
| a | 2 | 6 |
| a | 4 | 0 |
| a | 7 | 4 |
| a | 8 | 8 |
| b | 11 | 10 |
| b | 15 | 15 |
I want to apply a function to the day and order columns of each group, grouping rows by the id column.
The function is:
def mean_of_differences(my_list):
    return sum([my_list[i] - my_list[i-1] for i in range(1, len(my_list))]) / len(my_list)
This function calculates the mean of the differences between consecutive elements. For example, for id=a, day would be 2+3+1 divided by 4. I know how to use a lambda, but didn't find a way to implement this in a pandas groupby. Also, each column should be sorted to get my desired output, so it is apparently not enough to just sort by one column before the groupby.
The output should be like this:
|id |day| order |
|---|---|-------|
| a |1.5| 2 |
| b | 2 | 2.5 |
Any one know how to do so in a group by?
First, sort your data by day, then group by id, and finally compute your diff/mean.
df = df.sort_values('day') \
       .groupby('id') \
       .agg({'day': lambda x: x.diff().fillna(0).mean()}) \
       .reset_index()
Output:
>>> df
id day
0 a 1.5
1 b 2.0
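The answer above only aggregates day. A sketch extending it to both columns, assuming, as the question states, that each column is sorted independently within its group:
import pandas as pd

df = pd.DataFrame({
    'id':    ['a', 'a', 'a', 'a', 'b', 'b'],
    'day':   [2, 4, 7, 8, 11, 15],
    'order': [6, 0, 4, 8, 10, 15],
})

def mean_of_differences(x):
    # Sort the group's values, difference consecutive elements,
    # then divide the summed differences by the group size
    s = x.sort_values()
    return s.diff().fillna(0).sum() / len(s)

res = (df.groupby('id')
         .agg({'day': mean_of_differences, 'order': mean_of_differences})
         .reset_index())
print(res)
#   id  day  order
# 0  a  1.5    2.0
# 1  b  2.0    2.5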

Python Pandas groupby: how to do conditional aggregation dependent on other column

I would like to use Pandas' groupby with multiple aggregation functions, but also include conditional statements per aggregation. Imagine having this data as an example:
df = pd.DataFrame({
    'id': ['a', 'a', 'a', 'b', 'b'],
    'type': ['in_scope', 'in_scope', 'exclude', 'in_scope', 'exclude'],
    'value': [5, 5, 99, 20, 99]
})
INPUT DATA:
| id | in_scope | value |
|----|----------|-------|
| a | True | 5 |
| a | True | 5 |
| a | False | 99 |
| b | True | 20 |
| b | False | 99 |
And I want to do a Pandas groupby like this:
df.groupby('id').agg(
    num_records=('id', 'size'),
    sum_value=('value', np.sum)
)
OUTPUT OF SIMPLE GROUPBY:
| id | num_records | sum_value |
|----|-------------|-----------|
| a | 3 | 109 |
| b | 2 | 119 |
However, I would like to do the sum depending on a condition, namely that only the "in_scope" records that are defined as True in column in_scope should be used. Note, the first aggregation should still use the entire table. In short, this is the desired output:
DESIRED OUTPUT OF GROUPBY:
| id | num_records | sum_value_in_scope |
|----|-------------|--------------------|
| a | 3 | 10 |
| b | 2 | 20 |
I was thinking about passing two arguments to a lambda function, but I did not succeed. Of course, it can be solved by performing two separate groupbys on filtered and unfiltered data and combining them afterwards. But I was hoping there was a shorter and more elegant way.
Unfortunately, you cannot do this with aggregate; however, you can do it in one step with apply and a custom function:
def f(x):
    d = {}
    d['num_records'] = len(x)
    d['sum_value_in_scope'] = x[x.in_scope].value.sum()
    return pd.Series(d, index=['num_records', 'sum_value_in_scope'])

df.groupby('id').apply(f)
Since the column df.in_scope is already boolean, you can use it as a mask directly to filter the values which are summed. If the column you are working with is not boolean, it is better to use df.query('<your query here>') to get the subset of the data (there are optimizations under the hood which make it faster than most other methods).
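Note that the df built at the top of the question only has a type column; a sketch of deriving the boolean in_scope mask from it before applying f (the column name is an assumption for illustration):
# Derive a boolean mask column from the string 'type' column
df['in_scope'] = df['type'].eq('in_scope')

df.groupby('id').apply(f)
#     num_records  sum_value_in_scope
# id
# a             3                  10
# b             2                  20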
Updated answer: Create a temporary column that contains values only when type is in_scope, then aggregate:
(
    df.assign(temp=np.where(df["type"] == "in_scope", df["value"], None))
      .groupby("id", as_index=False)
      .agg(num_records=("type", "size"), sum_value=("temp", "sum"))
)
id num_records sum_value
a 3 10
b 2 20
