grouping in pandas data frame - python

I have a pandas data frame as below. I want to get the list of 'Job_No' for all the combinations of ('User_ID', 'Exec_No')
   User_ID  Exec_No  Job_No
1:       2        1       1
2:       2        2       2
3:       3        2       3
4:       1        2       4
5:       1        1       5
6:       3        2       6
7:       2        2       7
8:       1        1       8
The desired output is another data frame that looks like
   User_ID  Exec_No  Job_No
1:       2        1     [1]
2:       2        2   [2,7]
3:       3        2   [3,6]
4:       1        2     [4]
5:       1        1   [5,8]
How do I do this using a few lines of code?
Also, the data frame is expected to have around a million rows, so performance is also important.

As a note, if you care about performance, storing lists in a DataFrame is not very efficient. After grouping the data, the Job_No values can be accessed directly; there is no need to create a new DataFrame (memory!) holding a list of Job_No per (User_ID, Exec_No) pair.
In [21]: df
Out[21]:
User_ID Exec_No Job_No
0 2 1 1
1 2 2 2
2 3 2 3
3 1 2 4
4 1 1 5
5 3 2 6
6 2 2 7
7 1 1 8
In [22]: grouped = df.groupby(['User_ID', 'Exec_No'])
In [23]: grouped.get_group((3, 2))
Out[23]:
User_ID Exec_No Job_No
2 3 2 3
5 3 2 6
In [24]: grouped.get_group((3, 2))['Job_No']
Out[24]:
2 3
5 6
Name: Job_No, dtype: int64
In [25]: list(grouped.get_group((3, 2))['Job_No'])
Out[25]: [3, 6]
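That said, if the list-per-group frame from the question is still what you want, recent pandas versions can build it in one chain with agg(list) (a sketch, assuming a pandas version where list aggregation on a grouped column is supported):

```python
import pandas as pd

df = pd.DataFrame({'User_ID': [2, 2, 3, 1, 1, 3, 2, 1],
                   'Exec_No': [1, 2, 2, 2, 1, 2, 2, 1],
                   'Job_No':  [1, 2, 3, 4, 5, 6, 7, 8]})

# collect Job_No into a list per (User_ID, Exec_No) pair;
# sort=False keeps the groups in first-appearance order, as in the desired output
out = (df.groupby(['User_ID', 'Exec_No'], sort=False)['Job_No']
         .agg(list)
         .reset_index())
```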

The solution is straightforward.
Say 'df' is the dataframe object; then
grp_df = df.groupby(['User_ID','Exec_No'])
newdf = grp_df['Job_No'].apply(list).reset_index()
Note that grp_df['Job_No'] on its own is only a SeriesGroupBy object; applying list is what materialises one list per group.

This will give a Series in return:
df.groupby(['User_ID', 'Exec_No']).apply(lambda x: x.Job_No.values)
Wrapping it in a Series in the apply returns a DataFrame:
df.groupby(['User_ID', 'Exec_No']).apply(lambda x: pd.Series([x.Job_No.values]))
User_ID Exec_No
1 1 [5, 8]
2 [4]
2 1 [1]
2 [2, 7]
3 2 [3, 6]
It would be nice if the name= of the Series would be used as the resulting column name, but it isn't.
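One way around the missing name, assuming a reasonably recent pandas: collect the grouped column with apply(list) and pass name= to Series.reset_index, which does use it for the resulting column:

```python
import pandas as pd

df = pd.DataFrame({'User_ID': [2, 2, 3, 1, 1, 3, 2, 1],
                   'Exec_No': [1, 2, 2, 2, 1, 2, 2, 1],
                   'Job_No':  [1, 2, 3, 4, 5, 6, 7, 8]})

# apply(list) on the grouped column gives a Series of lists;
# reset_index(name=...) names the values column explicitly
out = (df.groupby(['User_ID', 'Exec_No'])['Job_No']
         .apply(list)
         .reset_index(name='Job_No'))
```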

How about this way:
df = pd.DataFrame({'User_ID' : [2,2, 3, 1, 1, 3, 2, 1], 'Exec_No': [1, 2, 2, 2, 1, 2, 2, 1], 'Job_No':[1,2,3,4,5,6,7,8]}, columns=['User_ID', 'Exec_No','Job_No'])
df
User_ID Exec_No Job_No
0 2 1 1
1 2 2 2
2 3 2 3
3 1 2 4
4 1 1 5
5 3 2 6
6 2 2 7
7 1 1 8
Let's do the group by:
df2 = df.groupby(['User_ID', 'Exec_No'], sort=False).apply(lambda x: list(x['Job_No']))
df2
User_ID Exec_No
2 1 [1]
2 [2, 7]
3 2 [3, 6]
1 1 [5, 8]
2 [4]
and reshape it the way you wanted:
df2.reset_index()
User_ID Exec_No 0
0 2 1 [1]
1 2 2 [2, 7]
2 3 2 [3, 6]
3 1 1 [5, 8]
4 1 2 [4]

Related

Compare dataframes and only use unmatched values

I have two dataframes that I want to compare, but only want to use the values that are not in both dataframes.
Example:
DF1:
A B C
0 1 2 3
1 4 5 6
DF2:
A B C
0 1 2 3
1 4 5 6
2 7 8 9
3 10 11 12
So, from this example I want to work with row index 2 and 3 ([7, 8, 9] and [10, 11, 12]).
The code I currently have (which only removes duplicates) is below.
df = pd.concat([di_old, di_new])
df = df.reset_index(drop=True)
df_gpby = df.groupby(list(df.columns))
idx = [x[0] for x in df_gpby.groups.values() if len(x) == 1]
print(df.reindex(idx))
I would do:
df_n = df2[~df2.isin(df1).all(axis=1)]
output
    A   B   C
2   7   8   9
3  10  11  12
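Another common way to get only the unmatched rows, assuming plain value columns as in the example, is a left merge with indicator=True and keeping the 'left_only' rows (note that merge builds a fresh index, unlike the isin approach):

```python
import pandas as pd

df1 = pd.DataFrame({'A': [1, 4], 'B': [2, 5], 'C': [3, 6]})
df2 = pd.DataFrame({'A': [1, 4, 7, 10], 'B': [2, 5, 8, 11], 'C': [3, 6, 9, 12]})

# indicator=True adds a _merge column saying where each row came from;
# rows present only in df2 (the left frame) are marked 'left_only'
merged = df2.merge(df1, how='left', indicator=True)
df_n = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')
```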

Select rows of pandas dataframe in order of a given list with repetitions and keep the original index

After looking here and here and in the documentation, I still cannot find a way to select rows from a DataFrame according to all these criteria:
Return rows in an order given from a list of values from a given column
Return repeated rows (associated with repeated values in the list)
Preserve the original indices
Ignore values of the list not present in the DataFrame
As an example, let
df = pd.DataFrame({'A': [5, 6, 3, 4], 'B': [1, 2, 3, 5]})
df
A B
0 5 1
1 6 2
2 3 3
3 4 5
and let
list_of_values = [3, 4, 6, 4, 3, 8]
Then I would like to get the following DataFrame:
A B
2 3 3
3 4 5
1 6 2
3 4 5
2 3 3
How can I accomplish that? Zero's answer looks promising as it is the only one I found which preserves the original index, but it does not work with repetitions. Any ideas about how to modify/generalize it?
We have to preserve the index by assigning it as a column first, so we can set_index after the merging:
list_of_values = [3, 4, 6, 4, 3, 8]
df2 = pd.DataFrame({'A': list_of_values, 'order': range(len(list_of_values))})
dfn = (
df.assign(idx=df.index)
.merge(df2, on='A')
.sort_values('order')
.set_index('idx')
.drop('order', axis=1)
)
A B
idx
2 3 3
3 4 5
1 6 2
3 4 5
2 3 3
If you want to get rid of the index name (idx), use rename_axis:
dfn = dfn.rename_axis(None)
A B
2 3 3
3 4 5
1 6 2
3 4 5
2 3 3
Here's a way to do that using merge:
list_df = pd.DataFrame({"A": list_of_values, "order": range(len(list_of_values))})
pd.merge(list_df, df, on="A").sort_values("order").drop("order", axis=1)
The output is:
A B
0 3 3
2 4 5
4 6 2
3 4 5
1 3 3

Dataframe set values using square bracket, doesn't follow the order of Dataframe passed

a = pd.DataFrame([[1,2], [3,4]], columns=[0,1])
b = pd.DataFrame([[5,6], [6,7]], columns=[1,0])
a[[0, 1]] = b
print(a)
result in
0 1
0 5 6
1 6 7
shouldn't it replace a with the same column in b, which results in:
0 1
0 6 5
1 7 6
it's a little confusing
Use DataFrame.loc with : to select all rows and a list of column names. Unlike the square-bracket assignment, which (in the pandas version shown) copies b's values positionally and ignores its column labels, .loc aligns the assignment on the labels:
a.loc[:, [0, 1]] = b
print(a)
0 1
0 6 5
1 7 6
Or:
cols = [0,1]
a[cols] = b[cols]
print(a)
0 1
0 6 5
1 7 6
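Equivalently, you can reorder b's columns to a's layout before assigning with reindex; this is just another spelling of the cols version above, and it works whether the assignment aligns by label or by position:

```python
import pandas as pd

a = pd.DataFrame([[1, 2], [3, 4]], columns=[0, 1])
b = pd.DataFrame([[5, 6], [6, 7]], columns=[1, 0])

# reindex puts b's columns in a's order first, so the values land where intended
a[[0, 1]] = b.reindex(columns=a.columns)
print(a)
```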

create a dataframe from 3 other dataframes in python

I am trying to create a new df which summarises my key information, by taking that information from 3 (say) other dataframes.
dfdate = {'x1': [2, 4, 7, 5, 6],
'x2': [2, 2, 2, 6, 7],
'y1': [3, 1, 4, 5, 9]}
dfdate = pd.DataFrame(dfdate, index=range(5))
dfqty = {'x1': [1, 2, 6, 6, 8],
'x2': [3, 1, 1, 7, 5],
'y1': [2, 4, 3, 2, 8]}
dfqty = pd.DataFrame(dfqty, index=range(5))
dfprices = {'x1': [0, 2, 2, 4, 4],
'x2': [2, 0, 0, 3, 4],
'y1': [1, 3, 2, 1, 3]}
dfprices = pd.DataFrame(dfprices, index=range(5))
Let us say the above 3 dataframes are my data. Say, some dates, qty, and prices of goods. My new df is to be constructed from the above data:
rng = len(dfprices.columns)*len(dfprices.index) # This is the len of new df
dfnew = pd.DataFrame(np.nan, index=range(0, rng), columns=['Letter', 'Number', 'date', 'qty', 'price'])
Now, this is where I struggle to piece my stuff together. I am trying to take all the data in dfdate and put it into a column in the new df, and the same with dfqty and dfprices (so three 5x3 matrices essentially go to 15-element vectors and are placed into the new df).
As well as that, I need a couple of columns in dfnew as identifiers, from the names of the columns of the old df.
I've tried for loops but to no avail, and I don't know how to convert a df to a series. But my desired output is:
dfnew:
'Lettercol','Numbercol', 'date', 'qty', 'price'
0 X 1 2 1 0
1 X 1 4 2 2
2 X 1 7 6 2
3 X 1 5 6 4
4 X 1 6 8 4
5 X 2 2 3 2
6 X 2 2 1 0
7 X 2 2 1 0
8 X 2 6 7 3
9 X 2 7 5 4
10 Y 1 3 2 1
11 Y 1 1 4 3
12 Y 1 4 3 2
13 Y 1 5 2 1
14 Y 1 9 8 3
where the numbers 0-14 are the index.
letter = letter from col header in DFs
number = number from col header in DFs
next 3 columns are data from the orig df's
(don't ask why the original data is in that funny format :)
Thanks so much. My last question wasn't well received, so I've tried to make this one better.
Use:
#list of DataFrames
dfs = [dfdate, dfqty, dfprices]
#list comprehension with reshape
comb = [x.unstack() for x in dfs]
#join together
df = pd.concat(comb, axis=1, keys=['date', 'qty', 'price'])
#remove second level of MultiIndex and index to column
df = df.reset_index(level=1, drop=True).reset_index().rename(columns={'index':'col'})
#extract all values without first by indexing [1:] and first letter by [0]
df['Number'] = df['col'].str[1:]
df['Letter'] = df['col'].str[0]
cols = ['Letter', 'Number', 'date', 'qty', 'price']
#change order of columns
df = df.reindex(columns=cols)
print (df)
Letter Number date qty price
0 x 1 2 1 0
1 x 1 4 2 2
2 x 1 7 6 2
3 x 1 5 6 4
4 x 1 6 8 4
5 x 2 2 3 2
6 x 2 2 1 0
7 x 2 2 1 0
8 x 2 6 7 3
9 x 2 7 5 4
10 y 1 3 2 1
11 y 1 1 4 3
12 y 1 4 3 2
13 y 1 5 2 1
14 y 1 9 8 3
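Assembled end to end, the steps above look like this (self-contained, with the example data built directly from the dicts in the question):

```python
import pandas as pd

dfdate = pd.DataFrame({'x1': [2, 4, 7, 5, 6], 'x2': [2, 2, 2, 6, 7], 'y1': [3, 1, 4, 5, 9]})
dfqty = pd.DataFrame({'x1': [1, 2, 6, 6, 8], 'x2': [3, 1, 1, 7, 5], 'y1': [2, 4, 3, 2, 8]})
dfprices = pd.DataFrame({'x1': [0, 2, 2, 4, 4], 'x2': [2, 0, 0, 3, 4], 'y1': [1, 3, 2, 1, 3]})

# unstack flattens each 5x3 frame into a 15-long Series keyed by (column, row)
comb = [d.unstack() for d in (dfdate, dfqty, dfprices)]
df = pd.concat(comb, axis=1, keys=['date', 'qty', 'price'])

# drop the row level of the MultiIndex, turn the column level into a regular column
df = df.reset_index(level=1, drop=True).reset_index().rename(columns={'index': 'col'})
df['Number'] = df['col'].str[1:]
df['Letter'] = df['col'].str[0]
df = df.reindex(columns=['Letter', 'Number', 'date', 'qty', 'price'])
```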

Sort all columns of a pandas DataFrame independently using sort_values()

I have a dataframe and want to sort all columns independently in descending or ascending order.
import pandas as pd
data = {'a': [5, 2, 3, 6],
'b': [7, 9, 1, 4],
'c': [1, 5, 4, 2]}
df = pd.DataFrame.from_dict(data)
a b c
0 5 7 1
1 2 9 5
2 3 1 4
3 6 4 2
When I use sort_values() for this it does not work as expected (to me) and only sorts one column:
foo = df.sort_values(by=['a', 'b', 'c'], ascending=[False, False, False])
a b c
3 6 4 2
0 5 7 1
2 3 1 4
1 2 9 5
I can get the desired result if I use the solution from this answer which applies a lambda function:
bar = df.apply(lambda x: x.sort_values().values)
print(bar)
a b c
0 2 1 1
1 3 4 2
2 5 7 4
3 6 9 5
But this looks a bit heavy-handed to me.
What's actually happening in the sort_values() example above and how can I sort all columns in my dataframe in a pandas-way without the lambda function?
You can use numpy.sort with DataFrame constructor:
df1 = pd.DataFrame(np.sort(df.values, axis=0), index=df.index, columns=df.columns)
print (df1)
a b c
0 2 1 1
1 3 4 2
2 5 7 4
3 6 9 5
EDIT:
Answer with descending order:
arr = df.values
arr.sort(axis=0)
arr = arr[::-1]
print (arr)
[[6 9 5]
[5 7 4]
[3 4 2]
[2 1 1]]
df1 = pd.DataFrame(arr, index=df.index, columns=df.columns)
print (df1)
a b c
0 6 9 5
1 5 7 4
2 3 4 2
3 2 1 1
sort_values sorts the entire data frame by the column order you pass to it. In your first example you are sorting the entire data frame by ['a', 'b', 'c']: first by 'a', then (for ties in 'a') by 'b', and finally by 'c'.
Notice how, after sorting by 'a', each row stays intact. This is the expected result.
With the lambda, apply hands each column to the function separately, so sort_values operates on a single column at a time; that's why the second approach sorts the columns independently. In this case, the rows are broken apart.
If you want to use neither lambda nor numpy, you can get around it with this:
pd.DataFrame({x: df[x].sort_values().values for x in df.columns.values})
Output:
a b c
0 2 1 1
1 3 4 2
2 5 7 4
3 6 9 5
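The same dict-comprehension idea works for descending order too; just flip ascending (a sketch on the example frame):

```python
import pandas as pd

df = pd.DataFrame({'a': [5, 2, 3, 6], 'b': [7, 9, 1, 4], 'c': [1, 5, 4, 2]})

# sort each column on its own and keep only the values, so rows are not tied together
out = pd.DataFrame({c: df[c].sort_values(ascending=False).values for c in df.columns})
```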
