Sum values in df column based on partial name of another column - python

Given the dataframe
a       b
foo123  5
foo456  8
bar234  1
bar324  6
How do I add the values from b based only on the first several characters of a? The output I'm looking for is:
a    b
foo  13
bar  7
There are too many entries in column a to set this up manually, so something like the following won't work:
if df['a'].startswith('foo'):
    sum(b)
I'm thinking of something more like: if the first three characters of df['a'] match, add up all the corresponding rows of b.

If your substrings do not all have the same length, use str.extract to pull the relevant portion out of a, then use that to perform a groupby + sum operation on b:
# assuming your frame is df1
df1.groupby(df1['a'].str.extract(r'^(\D+)', expand=False))['b'].sum().reset_index()
     a   b
0  bar   7
1  foo  13
For more performance, pre-assign a first:
df1['a'] = df1['a'].str.extract(r'^(\D+)', expand=False)
df1.groupby('a', as_index=False)['b'].sum()
     a   b
0  bar   7
1  foo  13
If all substrings are of the same size, just slice and groupby:
df1.groupby(df1['a'].str[:3])['b'].sum().reset_index()
     a   b
0  bar   7
1  foo  13
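To make this copy-paste runnable, here is a minimal sketch of the extract-then-groupby approach on the question's data (the import and frame construction are added here for completeness):
import pandas as pd

df1 = pd.DataFrame({'a': ['foo123', 'foo456', 'bar234', 'bar324'],
                    'b': [5, 8, 1, 6]})

# Pull the leading non-digit prefix out of 'a' and group on it
key = df1['a'].str.extract(r'^(\D+)', expand=False)
print(df1.groupby(key)['b'].sum().reset_index())
#      a   b
# 0  bar   7
# 1  foo  13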

Replace the digits in a with '' and group on the result (on modern pandas, pass regex=True so the pattern is treated as a regex):
df.groupby(df.a.str.replace(r'\d+', '', regex=True)).b.sum()
Out[1353]:
a
bar     7
foo    13
Name: b, dtype: int64
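If you prefer a DataFrame instead of a Series, the same idea works with as_index=False (a small sketch, assuming the df from the question):
# Strip the digits, then group and sum, keeping 'a' as a column
df.assign(a=df.a.str.replace(r'\d+', '', regex=True)).groupby('a', as_index=False).b.sum()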

Related

selecting rows with min and max values of a defined column in pandas

I have the following dataframe:
A,B,C,D
10,1,2,3
1,4,7,3
10,5,2,3
40,7,9,3
9,9,9,9
I would like to create another dataframe, starting from the previous one, which has only two rows. The selection of these two rows is based on the minimum and maximum values in the column "A". I would like to get:
A,B,C,D
1,4,7,3
40,7,9,3
Do you think I should work with something like index.min and index.max and then select only those two rows and append them to a new dataframe? Do you have any other suggestions?
Thanks for any kind of help,
Best
IIUC you can simply subset the dataframe with an OR condition on df.A.min() and df.A.max():
df = df[(df.A == df.A.min()) | (df.A == df.A.max())]
df
    A  B  C  D
1   1  4  7  3
3  40  7  9  3
Yes, you can use idxmin/idxmax and then use loc:
df.loc[df['A'].agg(['idxmin', 'idxmax'])]
Output:
    A  B  C  D
1   1  4  7  3
3  40  7  9  3
Note that this gives only one row for the min and one for the max. If you want all tied values, you should use @CHRD's solution.
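For reference, a runnable sketch of both answers on the question's data (import and frame construction added here):
import pandas as pd

df = pd.DataFrame({'A': [10, 1, 10, 40, 9],
                   'B': [1, 4, 5, 7, 9],
                   'C': [2, 7, 2, 9, 9],
                   'D': [3, 3, 3, 3, 9]})

# Boolean mask: keeps every row tied for the min or the max of A
print(df[(df.A == df.A.min()) | (df.A == df.A.max())])

# Label-based: exactly one row for the min and one for the max
print(df.loc[df['A'].agg(['idxmin', 'idxmax'])])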

Pandas DataFrame : selection of multiple elements in several columns

I have this Python Pandas DataFrame DF:
DICT = {'letter': ['A','B','C','A','B','C','A','B','C'],
        'number': [1,1,1,2,2,2,3,3,3],
        'word':   ['one','two','three','three','two','one','two','one','three']}
DF = pd.DataFrame(DICT)
Which looks like:
  letter  number   word
0      A       1    one
1      B       1    two
2      C       1  three
3      A       2  three
4      B       2    two
5      C       2    one
6      A       3    two
7      B       3    one
8      C       3  three
And I want to extract the lines:
letter  number  word
A       1       one
B       2       two
C       3       three
First I tried:
DF[(DF['letter'].isin(("A","B","C"))) &
   DF['number'].isin((1,2,3)) &
   DF['word'].isin(('one','two','three'))]
Of course it didn't work: everything was selected.
Then I tested:
Bool = DF[['letter','number','word']].isin(("A",1,"one"))
DF[np.all(Bool, axis=1)]
Good, it works! But only for one line...
If we take the next step and give an iterable to .isin():
Bool = DF[['letter','number','word']].isin((("A",1,"one"),
                                            ("B",2,"two"),
                                            ("C",3,"three")))
Then it fails; the Boolean array is full of False...
What am I doing wrong? Is there a more elegant way to do this selection based on several columns?
(Anyway, I want to avoid a for loop, because the real DataFrames I'm using are really big, so I'm looking for the fastest way to do the job.)
The idea is to create a new DataFrame with all the triples and then merge it with the original DataFrame:
L = [("A",1,"one"),
     ("B",2,"two"),
     ("C",3,"three")]
df1 = pd.DataFrame(L, columns=['letter','number','word'])
print (df1)
  letter  number   word
0      A       1    one
1      B       2    two
2      C       3  three
df = DF.merge(df1)
print (df)
  letter  number   word
0      A       1    one
1      B       2    two
2      C       3  three
Another idea is to create a list of tuples, convert it to a Series, and then compare with isin:
s = pd.Series(list(map(tuple, DF[['letter','number','word']].values.tolist())),
              index=DF.index)
df1 = DF[s.isin(L)]
print (df1)
  letter  number   word
0      A       1    one
4      B       2    two
8      C       3  three
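A third option along the same lines (my sketch, not from the answers above): treat the three columns as one composite key by building an index of tuples, which supports isin with a list of tuples:
# Composite-key membership test against the list of wanted triples
mask = DF.set_index(['letter', 'number', 'word']).index.isin(L)
print(DF[mask])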

Python: Efficiently extract a single value for every group

I need to add a description column to a dataframe that is built by grouping items from another dataframe.
grouped = df1.groupby('item')
list = grouped['total'].agg(np.sum)
list = list.reset_index()
To assign a description label to every item, I've come up with this solution:
def des(item):
    return df1['description'].loc[df1['item'] == item].iloc[0]

list['description'] = list['item'].apply(des)
It works, but it takes an enormous amount of time to execute.
I'd like to do something like this:
list = list.assign(description=df1['description'].loc[df1['item'] == list['item']])
or
list = list.assign(description=df1['description'].loc[df1['item'].isin(list['item'])])
These are very wrong, but I hope you get the idea; I'm hoping there is some pandas feature that does the trick more efficiently, but I can't find it.
Any ideas?
I think you need DataFrameGroupBy.agg with a dict of functions: sum for the column total and first for description:
df = df1.groupby('item', as_index=False).agg({'total':'sum', 'description':'first'})
Also, don't use list as a variable name, because list shadows the built-in Python type.
Sample:
df1 = pd.DataFrame({'description': list('abcdef'),
                    'B': [4,5,4,5,5,4],
                    'total': [5,3,6,9,2,4],
                    'item': list('aaabbb')})
print (df1)
   B description item  total
0  4           a    a      5
1  5           b    a      3
2  4           c    a      6
3  5           d    b      9
4  5           e    b      2
5  4           f    b      4
df = df1.groupby('item', as_index=False).agg({'total':'sum', 'description':'first'})
print (df)
  item  total description
0    a     14           a
1    b     15           d
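On pandas 0.25+, the same call can be written with named aggregation, which also fixes the output column names and order (an equivalent sketch):
# One (output_column=(input_column, function)) pair per result column
df = df1.groupby('item', as_index=False).agg(total=('total', 'sum'),
                                             description=('description', 'first'))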

Python Pandas - filtering df by the number of unique values within a group

Here is an example of the data I'm working with (as a pandas df):
index  inv  Rev_stream  Bill_type  Net_rev
1      1    A           Original   -24.77
2      1    B           Original   -24.77
3      2    A           Original   -409.33
4      2    B           Original   -409.33
5      2    C           Original   -409.33
6      2    D           Original   -409.33
7      3    A           Original   -843.11
8      3    A           Rebill      279.5
9      3    B           Original   -843.11
10     4    A           Rebill      279.5
11     4    B           Original   -843.11
12     5    B           Rebill      279.5
How could I filter this df so that I only get the lines where an inv/Rev_stream combo has both Original and Rebill values of Net_rev? In the example above that would be only the lines with index 7 and 8.
Is there an easy way to do it, without iterating over the whole dataframe and building dictionaries of invoice+RevStream : Bill_type?
What I'm looking for is some kind of
df = df[df[['inv','Rev_stream']]['Bill_type'].unique().len() == 2]
Unfortunately the code above doesn't work.
Thanks in advance.
You can group your data by inv and Rev_stream columns and then check for each group if both Original and Rebill are in the Bill_type values and filter based on the condition:
(df.groupby(['inv', 'Rev_stream'])
   .filter(lambda g: 'Original' in g.Bill_type.values and 'Rebill' in g.Bill_type.values))
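Since filter with a Python lambda can be slow on large frames, a vectorized alternative (a sketch, assuming Bill_type takes only the values Original and Rebill, as in the sample) is to count distinct Bill_type values per group with transform:
# Keep (inv, Rev_stream) groups that contain both Bill_type values
mask = df.groupby(['inv', 'Rev_stream'])['Bill_type'].transform('nunique') == 2
print(df[mask])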

How to df.groupby(cols).apply(my_func) for some columns, while leaving a few columns untouched?

Suppose I have a Pandas dataframe df with columns a, b, c, d, ..., z, and I want to do df.groupby('a').apply(my_func) for columns d-z while leaving columns 'b' and 'c' unchanged. How do I do that?
I notice Pandas can apply a different function to each column by passing a dict, but I have a long column list. Is there a parameter to set, or a tip, that simply tells Pandas to bypass some columns and apply my_func to the rest? (Otherwise I have to build a long dict.)
One simple (and general) approach is to create a view of the dataframe with the subset you are interested in (or, for your case, a view with all columns except the ones you want to ignore), and then use apply on that view.
In [116]: df
Out[116]:
     a  b         c   d        f
0  one  3  0.493808  40      bob
1  two  8  0.150585  50    alice
2  one  6  0.641816  56  michael
3  two  5  0.935653  56      joe
4  one  1  0.521159  48     kate
Use your favorite method to create the view you need. You could select a range of columns like so: df_view = df.loc[:, 'b':'d'], but the following might be more useful for your scenario:
# I want all columns except two
cols = df.columns.tolist()
mycols = [x for x in cols if x not in ['a', 'f']]
df_view = df[mycols]
Apply your function to that view. (Note this doesn't yet change anything in df.)
In [158]: df_view.apply(lambda x: x / 2)
Out[158]:
   b         c   d
0  1  0.246904  20
1  4  0.075293  25
2  3  0.320908  28
3  2  0.467827  28
4  0  0.260579  24
Update the df using update()
In [156]: df.update(df_view.apply(lambda x: x / 2))
In [157]: df
Out[157]:
     a  b         c   d        f
0  one  1  0.246904  20      bob
1  two  4  0.075293  25    alice
2  one  3  0.320908  28  michael
3  two  2  0.467827  28     joe
4  one  0  0.260579  24    kate
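End to end, the pattern looks like this (a minimal runnable sketch with made-up numeric data; note the transcript above shows Python 2 integer division for column b, while Python 3 would give floats). df.update aligns on index and column labels, so only the b, c, d values change:
import pandas as pd

df = pd.DataFrame({'a': ['one', 'two', 'one', 'two', 'one'],
                   'b': [4, 8, 6, 2, 0],
                   'c': [0.493808, 0.150585, 0.641816, 0.935653, 0.521159],
                   'd': [40, 50, 56, 56, 48],
                   'f': ['bob', 'alice', 'michael', 'joe', 'kate']})

# Everything except the columns to leave untouched
mycols = [x for x in df.columns if x not in ['a', 'f']]

# Apply the function to the subset, then write the result back in place
df.update(df[mycols].apply(lambda x: x / 2))
print(df)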
