Python: Sum values in DataFrame if other values match between DataFrames

I have two dataframes of different lengths, like these:

DataFrame A:

FirstName LastName
Adam      Smith
John      Johnson

DataFrame B:

First Last    Value
Adam  Smith   1.2
Adam  Smith   1.5
Adam  Smith   3.0
John  Johnson 2.5

What I want to do is create a new column in DataFrame A summing all the values with matching last names, so the output in A would be:

FirstName LastName Sums
Adam      Smith    5.7
John      Johnson  2.5
If I were in Excel, I'd use
=SUMIF(dfB!B:B, B2, dfB!C:C)
In Python I've tried multiple approaches using np.where, df.sum(), dropping indexes, etc., but I'm lost. The code below returns "ValueError: Can only compare identically-labeled Series objects", but I don't think it's written correctly anyway.
df_a['Sums'] = df_a[df_a['LastName'] == df_b['Last']].sum()['Value']
Huge thanks in advance for any help.
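For reference, the example frames can be reconstructed as follows (column names taken from the tables above):

import pandas as pd

df_a = pd.DataFrame({'FirstName': ['Adam', 'John'],
                     'LastName': ['Smith', 'Johnson']})
df_b = pd.DataFrame({'First': ['Adam', 'Adam', 'Adam', 'John'],
                     'Last': ['Smith', 'Smith', 'Smith', 'Johnson'],
                     'Value': [1.2, 1.5, 3.0, 2.5]})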

Use boolean indexing with Series.isin for filtering and then aggregate sum:
df = (df_b[df_b['Last'].isin(df_a['LastName'])]
          .groupby(['First', 'Last'], as_index=False)['Value']
          .sum())
If you want to match on both first and last name:
df = (df_b.merge(df_a, left_on=['First', 'Last'], right_on=['FirstName', 'LastName'])
          .groupby(['First', 'Last'], as_index=False)['Value']
          .sum())
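Neither snippet above attaches the result to df_a itself. If the goal is the Sums column from the question, one way (a sketch matching on last name only, like the SUMIF formula) is to map the per-name totals back:

# aggregate df_b per last name, then map the totals onto df_a
sums_by_last = df_b.groupby('Last')['Value'].sum()
df_a['Sums'] = df_a['LastName'].map(sums_by_last)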

df_b_a = (pd.merge(df_b, df_a, left_on=['First', 'Last'], right_on=['FirstName', 'LastName'], how='left')
            .groupby(by=['First', 'Last'], as_index=False)['Value'].sum())
print(df_b_a)
First Last Value
0 Adam Smith 5.7
1 John Johnson 2.5

Use DataFrame.merge + DataFrame.groupby:
new_df = (dfa.merge(dfb.groupby(['First', 'Last'], as_index=False)['Value'].sum(),
                    left_on='LastName', right_on='Last', how='left')
             .drop('Last', axis=1))
print(new_df)
To join on both columns:
new_df = (dfa.merge(dfb.groupby(['First', 'Last'], as_index=False)['Value'].sum(),
                    left_on=['FirstName', 'LastName'], right_on=['First', 'Last'], how='left')
             .drop(['First', 'Last'], axis=1))
print(new_df)
Output:
FirstName LastName Value
0 Adam Smith 5.7
1 John Johnson 2.5

Join two pandas DataFrame based on substring [duplicate]

I have two dataframes that I would like to merge on a common column. However, the columns I would like to merge on do not contain identical strings; rather, a string from one is contained in the other, like so:
import pandas as pd

df1 = pd.DataFrame({'column_a': ['John', 'Michael', 'Dan', 'George', 'Adam'],
                    'column_common': ['code', 'other', 'ome', 'no match', 'word']})
df2 = pd.DataFrame({'column_b': ['Smith', 'Cohen', 'Moore', 'K', 'Faber'],
                    'column_common': ['some string', 'other string', 'some code', 'this code', 'word']})
The outcome I would like from df1.merge(df2, ...) is the following:
column_a | column_b
----------------------
John | Moore <- merged on 'code' contained in 'some code'
Michael | Cohen <- merged on 'other' contained in 'other string'
Dan | Smith <- merged on 'ome' contained in 'some string'
George | n/a
Adam | Faber <- merged on 'word' contained in 'word'
New Answer
Here is one approach based on pandas/numpy.
# for each common string in df1, collect the column_b values of every df2 row
# that contains it; bfill then takes the first match per row
rhs = (df1.column_common
          .apply(lambda x: df2.loc[df2.column_common.str.find(x).ge(0), 'column_b'])
          .bfill(axis=1)
          .iloc[:, 0])
(pd.concat([df1.column_a, rhs], axis=1, ignore_index=True)
   .rename(columns={0: 'column_a', 1: 'column_b'}))
column_a column_b
0 John Moore
1 Michael Cohen
2 Dan Smith
3 George NaN
4 Adam Faber
Old Answer
Here's a solution that does not keep column_a values without any column_b match (George is dropped). It is slower than the pandas/numpy solution above because it builds a Python list with two nested iterrows loops.
tups = [(a1, a2) for i, (a1, b1) in df1.iterrows()
for j, (a2, b2) in df2.iterrows()
if b1 in b2]
(pd.DataFrame(tups, columns=['column_a', 'column_b'])
.drop_duplicates('column_a')
.reset_index(drop=True))
column_a column_b
0 John Moore
1 Michael Cohen
2 Dan Smith
3 Adam Faber
My solution involves applying a function to the common column. I can't imagine it holds up well when df2 is large but perhaps someone more knowledgeable than I can suggest an improvement.
def strmerge(strcolumn):
    # return column_b of the first df2 row whose common string contains strcolumn
    for i in df2['column_common']:
        if strcolumn in i:
            return df2.loc[df2['column_common'] == i, 'column_b'].values[0]

df1['column_b'] = df1['column_common'].apply(strmerge)
df1
column_a column_common column_b
0 John code Moore
1 Michael other Cohen
2 Dan ome Smith
3 George no match None
4 Adam word Faber
A simple and readable approach is a cross join, then filtering the rows where one column_common is a substring of the other (note the row-wise apply is not truly vectorized, and the cross join materializes len(df1) * len(df2) rows, so watch memory on large frames):
df = df1.merge(df2, how='cross')
df.loc[df.column_common_x.eq('no match'), 'column_b'] = pd.NA
df.loc[df.apply(lambda x: x.column_common_x in x.column_common_y or x.column_common_x == 'no match', axis=1),
       ['column_a', 'column_b']].drop_duplicates(subset=['column_a'])
Output:

column_a column_b
John     Moore
Michael  Cohen
Dan      Smith
George   <NA>
Adam     Faber

How to collapse all rows in pandas dataframe across all columns

I am trying to collapse all the rows of a dataframe into one single row across all columns.
My data frame looks like the following:

name  job       value
bob   business  100
NaN   dentist   NaN
jack  NaN       NaN

I am trying to get the following output:

name      job               value
bob jack  business dentist  100

I am trying to group across all columns, and I do not care if the value column is converted to dtype object (string); I'm just trying to collapse all the rows across all columns.
I've tried groupby(index=0) but did not get good results.
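For reference, the frame can be reconstructed like this (the NaN markers in the table are read as real missing values):

import numpy as np
import pandas as pd

df = pd.DataFrame({'name': ['bob', np.nan, 'jack'],
                   'job': ['business', 'dentist', np.nan],
                   'value': [100, np.nan, np.nan]})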
You could apply join:
out = df.apply(lambda x: ' '.join(x.dropna().astype(str))).to_frame().T
Output:
name job value
0 bob jack business dentist 100.0
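If the float artifact ('100.0' instead of '100') matters, one option is converting to nullable dtypes first, so integers stringify without the decimal; a sketch:

# convert_dtypes() turns the all-integer float column into Int64,
# so astype(str) yields '100' rather than '100.0'
out = (df.convert_dtypes()
         .apply(lambda x: ' '.join(x.dropna().astype(str)))
         .to_frame().T)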
Try this:
new_df = df.agg(lambda x: x.dropna().astype(str).tolist()).str.join(' ').to_frame().T
Output:
>>> new_df
name job value
0 bob jack business dentist 100.0

Compare two data-frames with different column names and update first data-frame with the column from second data-frame

I am working on two data-frames which have different column names and dimensions.
The first data-frame, df1, contains a single column, name, holding the names to be located in the second data-frame. If a name is matched, the value from the first column of df2 (df2[0]) should be returned and added to result_df.
The second data-frame, df2, has multiple columns and no header; it contains all the possible diminutive and full names. Any of its columns can hold the name that needs to be matched.
Goal: locate each df1 name in df2 and, if it is matched, return the value from the first column of df2 and add it to the corresponding row of df1.
df1:

name
ab
alex
bob
robert
bill

df2:

0          1     2    3
abram      ab
robert     rob   bob  robbie
alexander  alex  al
william    bill

result_df:

name    matched_name
ab      abram
alex    alexander
bob     robert
robert  robert
bill    william
The code I have written so far gives an error. It also needs to be efficient, since it will check millions of df1 entries against df2:

result_df = process_name(df1, df2)

def process_name(df1, df2):
    for elem in df2.values:
        if elem in df1['name']:
            df1["matched_name"] = df2[0]
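For reference, a reconstruction of the frames from the tables above (string column labels '0'-'3' are assumed for df2 so the answer's drop/rename calls below work as written):

import numpy as np
import pandas as pd

df1 = pd.DataFrame({'name': ['ab', 'alex', 'bob', 'robert', 'bill']})
df2 = pd.DataFrame([['abram', 'ab', np.nan, np.nan],
                    ['robert', 'rob', 'bob', 'robbie'],
                    ['alexander', 'alex', 'al', np.nan],
                    ['william', 'bill', np.nan, np.nan]],
                   columns=['0', '1', '2', '3'])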
Try concat(), merge(), drop(), rename() and reset_index():
df = (pd.concat(df1.merge(df2, left_on='name', right_on=x) for x in df2.columns)
        .drop(['1', '2', '3'], axis=1)
        .rename(columns={'0': 'matched_name'})
        .reset_index(drop=True))
Output of df:
name matched_name
0 robert robert
1 ab abram
2 alex alexander
3 bill william
4 bob robert
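Since the question stresses efficiency, an alternative sketch that avoids the per-column merges: melt df2 into one long variant-to-canonical lookup and map over it (this assumes, as above, that the first column '0' holds the canonical name):

# long table of (canonical, variant) pairs from columns '1'-'3'
long = df2.melt(id_vars=['0'], value_name='variant').dropna(subset=['variant'])
# canonical names should match themselves too (e.g. 'robert')
pairs = pd.concat([long[['0', 'variant']],
                   pd.DataFrame({'0': df2['0'], 'variant': df2['0']})])
lookup = pairs.drop_duplicates('variant').set_index('variant')['0']
df1['matched_name'] = df1['name'].map(lookup)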

Convert vertical table to horizontal in Python (flattening the table)

import numpy as np
import pandas as pd

df = pd.read_csv("data.csv")
pd.pivot_table(df, index='Employee ID', values=['Member ID', 'Firstname', 'Lastname'], aggfunc='first')

The format seems to work, but only for one value. How do I display everything?
Any help is appreciated.
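Since data.csv isn't shown, here is a hypothetical sample (column names and values inferred from the answer output below) to make the examples reproducible:

import pandas as pd

df = pd.DataFrame({'EmployeeID': [1, 1, 1, 2, 2, 3, 3],
                   'MemberID':   [1, 2, 3, 1, 2, 1, 2],
                   'firstname':  ['Anu', 'Aju', 'Abi', 'Cini', 'Biju', 'Mathew', 'Joseph'],
                   'Lastname':   ['Ann', 'Ann', 'vAnn', 'John', 'John', 'Peter', 'Peter']})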
You can use set_index() and unstack(), but you will need to fix the columns, e.g.:
In []:
df = pd.read_csv("data.csv")
df['ID'] = df['MemberID'] # Copy because you want it in the values too
df = df.set_index(['EmployeeID', 'MemberID']).unstack(level=1, fill_value='').sort_index(level=1, axis=1)
df.columns = df.columns.to_series().apply(lambda x: 'Member{}{}'.format(x[1], x[0]))
print(df)
Out[]:
Member1ID Member1Lastname Member1firstname Member2ID Member2Lastname Member2firstname Member3ID Member3Lastname Member3firstname
EmployeeID
1 1 Ann Anu 2 Ann Aju 3 vAnn Abi
2 1 John Cini 2 John Biju
3 1 Peter Mathew 2 Peter Joseph
But I feel you can simplify if you really don't need MemberID in the values (you have it in the column name) or if you don't mind a MultiIndex then:
In []:
df.set_index(['EmployeeID', 'MemberID']).unstack(level=1, fill_value='').swaplevel(axis=1).sort_index(axis=1)
Out[]:
MemberID 1 2 3
Lastname firstname Lastname firstname Lastname firstname
EmployeeID
1 Ann Anu Ann Aju Ann Abi
2 John Cini John Biju
3 Peter Mathew Peter Joseph
You can use pivot_table of pandas:
df = df.pivot_table(index='EmployeeID', columns='MemberID',
                    values=['firstname', 'Lastname'], aggfunc='first')
To install pandas, use pip install pandas; then build a DataFrame with read_csv() and apply the method above.

Fill dataframe nan values from a join

I am trying to map owners to an IP address through the use of two tables, df1 and df2. df1 contains the IP list to be mapped, and df2 contains an IP, an alias, and the owner. After running a join on the IP column, I get a half-joined dataframe. Most of the remaining data could be joined by replacing the NaN values via a join on the Alias column, but I can't figure out how to do it.
My initial thoughts were to try nesting pd.merge inside fillna(), but it won't accept a dataframe. Any help would be greatly appreciated.
df1 = pd.DataFrame({'IP': ['192.18.0.100', '192.18.0.101', '192.18.0.102', '192.18.0.103', '192.18.0.104']})
df2 = pd.DataFrame({'IP': ['192.18.0.100', '192.18.0.101', '192.18.1.206', '192.18.1.218', '192.18.1.118'],
                    'Alias': ['192.18.1.214', '192.18.1.243', '192.18.0.102', '192.18.0.103', '192.18.1.180'],
                    'Owner': ['Smith, Jim', 'Bates, Andrew', 'Kline, Jenny', 'Hale, Fred', 'Harris, Robert']})
new_df = pd.merge(df1, df2[['IP', 'Owner']], on='IP', how='left')
Expected output is:
IP Owner
192.18.0.100 Smith, Jim
192.18.0.101 Bates, Andrew
192.18.0.102 Kline, Jenny
192.18.0.103 Hale, Fred
192.18.0.104 nan
No need to merge; just pull the data where the condition is satisfied. This is faster than a merge and less complicated, but note that it compares the two frames row by row, so it relies on df1 and df2 being positionally aligned (as they happen to be in this example).

import numpy as np

condition = (df1['IP'] == df2['IP']) | (df1['IP'] == df2['Alias'])
df1['Owner'] = np.where(condition, df2['Owner'], np.nan)
print(df1)
print(df1)
IP Owner
0 192.18.0.100 Smith, Jim
1 192.18.0.101 Bates, Andrew
2 192.18.0.102 Kline, Jenny
3 192.18.0.103 Hale, Fred
4 192.18.0.104 NaN
Try this one:
new_df = pd.merge(df1,
                  pd.concat([df2[['IP', 'Owner']],
                             df2[['Alias', 'Owner']].rename(columns={'Alias': 'IP'})])
                    .drop_duplicates(),
                  on='IP', how='left')
The result:
>>> new_df
IP Owner
0 192.18.0.100 Smith, Jim
1 192.18.0.101 Bates, Andrew
2 192.18.0.102 Kline, Jenny
3 192.18.0.103 Hale, Fred
4 192.18.0.104 NaN
Let's melt then use map:
df1['IP'].map(df2.melt('Owner').set_index('value')['Owner'])
Output:
0 Smith, Jim
1 Bates, Andrew
2 Kline, Jenny
3 Hale, Fred
4 NaN
Name: IP, dtype: object
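To produce the expected output frame, assign the mapped Series back to df1 as the new column:

df1['Owner'] = df1['IP'].map(df2.melt('Owner').set_index('value')['Owner'])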
