Compare two dataframes with differently formatted column values - Python

I have two dataframes.

df1:
AccountNo  name   a/ctype
11.22.21   Henry  checking
11.22.22   Sam    Saving
11.22.23   John   Checking

df2:
AccountNo  name   a/ctype
11-22-21   Henry  checking
11-22-23   John   Checking
11-22-24   Rita   Checking
Desired output, df3:
A/cNO_df1  A/cNO_df2  result    Name_df1  Name_df2  result    a/ctype_df1  a/ctype_df2  result
11.22.21   11-22-21   Match     Henry     Henry     Match     checking     checking     Match
11.22.22              Notindf2  Sam                 Notindf2  checking                  Notindf2
11.22.23   11-22-23   Match     John      john      Match     checking     checking     Match
           11-22-24   Notindf1            Rita      Notindf1               checking     Notindf1
I tried removing the non-numeric characters from the account numbers so I could compare both data sets:

df1['AccountNo'] = df1.AccountNo.replace(regex=r'\D+', value='')
df2['AccountNo'] = df2.AccountNo.replace(regex=r'\D+', value='')

and then concatenating the two dataframes. But once the characters are stripped I can no longer print the accounts in their original formats, and for accounts missing from df1 or df2 the concat does not line up. I also tried numpy.where to compare and concatenate.
Is there a way this can be done?

You can merge with an external Series as key:
df1.merge(df2, left_on='AccountNo',
          right_on=df2['AccountNo'].str.replace('-', '.'),
          suffixes=('_df1', '_df2'), how='outer')
output:
AccountNo AccountNo_df1 name_df1 a/ctype_df1 AccountNo_df2 name_df2 a/ctype_df2
0 11.22.21 11.22.21 Henry checking 11-22-21 Henry checking
1 11.22.22 11.22.22 Sam Saving NaN NaN NaN
2 11.22.23 11.22.23 John Checking 11-22-23 John Checking
3 11.22.24 NaN NaN NaN 11-22-24 Rita Checking
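To get the three result columns from the desired output, one possibility (an assumption built on top of the merge above, not part of the original answer) is to flag each pair of columns by presence, assuming the merge result was assigned to df3:

import numpy as np

def flag(left, right):
    # Notindf1/Notindf2 when one side of the merge is missing, otherwise Match
    return np.where(df3[left].isna(), 'Notindf1',
                    np.where(df3[right].isna(), 'Notindf2', 'Match'))

df3['acno_result'] = flag('AccountNo_df1', 'AccountNo_df2')
df3['name_result'] = flag('name_df1', 'name_df2')
df3['a/ctype_result'] = flag('a/ctype_df1', 'a/ctype_df2')

This marks Match whenever both frames supply a row; add a value comparison (e.g. .str.lower() equality) if the cell contents must also agree.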


keep one column's data in pandas and show all NANs from other columns only

Goal: I'd like to still show who each person is so that I can display the NaNs associated with them and quickly find who is missing info.
Consider this dataset:
df:
Name Phone Address
John Doe NAN 123 lane
Jenny Gump 222-222-2222 NAN
Larry Bean NAN 561 road
Harry Smidlap 111-111-1111 555 highway
I'd like to clean the data up and show something like this (similar to an Excel view when filtering for blanks):
Then maybe populate the empty data with something that says "Data exists", or just leave it blank; I'm open to suggestions. I'd also drop the rows that have all data populated.
df:
Name           Phone          Address
John Doe       NAN
Jenny Gump                    NAN
Larry Bean     NAN
I've tried:
df[df.isnull().any(axis=1)]
That works great, but I have a big data source and I see a lot of unnecessary info that already has data. I only care about seeing the person's name and what they're missing.
Anyone have any ideas?
Since you require the Name column to stay intact, you can select every column except Name and mask it, then create another dataframe, df2, that drops all rows containing NaN values. After that, dropping df2's indexes from df will give you only the rows that contain NaN values, as follows:

df.mask((df.columns != 'Name') & (df.notnull()), "", inplace=True)
df2 = df.dropna()
df.drop(df2.index, inplace=True)
This should give you the following output.
Name           Phone          Address
John Doe       NAN
Jenny Gump                    NAN
Larry Bean     NAN
Mask (replace values where the condition is true) any place where the frame is not null with an empty string:

df.mask(df.notnull(), '')

This operates over both dimensions at once, passing a 2D set of true/false answers to the question "replace or not?". Where the answer is true, the contents are replaced with an empty string; where it is false, the original values remain.
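Combining the two answers, a minimal runnable sketch (the sample frame mirrors the question; masking a column subset here is my own variation, not the exact code from either answer):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Name': ['John Doe', 'Jenny Gump', 'Larry Bean', 'Harry Smidlap'],
    'Phone': [np.nan, '222-222-2222', np.nan, '111-111-1111'],
    'Address': ['123 lane', np.nan, '561 road', '555 highway'],
})

# keep only rows that are missing something, then blank out every
# populated cell except Name so the gaps stand out
incomplete = df[df.isnull().any(axis=1)].copy()
cols = incomplete.columns.drop('Name')
incomplete[cols] = incomplete[cols].mask(incomplete[cols].notnull(), '')
print(incomplete)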

Compare two data-frames with different column names and update first data-frame with the column from second data-frame

I am working on two data-frames which have different column names and dimensions.
The first data-frame, "df1", contains a single column, "name", holding the names that need to be located in the second data-frame. If a name is matched, the value from df2's first column, df2[0], needs to be returned and added to result_df.
The second data-frame, "df2", has multiple columns and no header. It contains all the possible diminutive names and full names; any of the columns can contain the "name" that needs to be matched.
Goal: locate each name from "df1" in "df2" and, if it is matched, return the value from the first column of df2 and add it to the respective row of df1.
df1
name
ab
alex
bob
robert
bill
df2
0          1     2    3
abram      ab
robert     rob   bob  robbie
alexander  alex  al
william    bill
result_df
name    matched_name
ab      abram
alex    alexander
bob     robert
robert  robert
bill    william
The code I have written so far gives an error. It also needs to be efficient, as it will be checking millions of entries in df1 against df2:

def process_name(df1, df2):
    for elem in df2.values:
        if elem in df1['name']:
            df1["matched_name"] = df2[0]

result_df = process_name(df1, df2)
Try concat(), merge(), drop(), rename() and reset_index():

df = (pd.concat(df1.merge(df2, left_on='name', right_on=x) for x in df2.columns)
        .drop(['1', '2', '3'], axis=1)
        .rename(columns={'0': 'matched_name'})
        .reset_index(drop=True))
Output of df:
name matched_name
0 robert robert
1 ab abram
2 alex alexander
3 bill william
4 bob robert
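Given the "millions of entries" requirement, a hedged alternative sketch (not from the answer above): build one alias-to-full-name lookup once, then map it in a single vectorised pass. This assumes df2 was read with header=None so its columns are the integers 0..3; use the string labels '0'..'3' instead if that is how yours were read.

import pandas as pd

# one (full name, alias) row per cell; rows in df2 have ragged lengths,
# so drop the NaN fillers
pairs = (df2.melt(id_vars=[0], value_name='alias')
            .dropna(subset=['alias']))
lookup = dict(zip(pairs['alias'], pairs[0]))
lookup.update({full: full for full in df2[0]})  # full names map to themselves

df1['matched_name'] = df1['name'].map(lookup)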

Python: Sum values in DataFrame if other values match between DataFrames

I have two dataframes of different lengths, like these:
DataFrame A:
FirstName LastName
Adam Smith
John Johnson
DataFrame B:
First Last Value
Adam Smith 1.2
Adam Smith 1.5
Adam Smith 3.0
John Johnson 2.5
Imagine that what I want to do is to create a new column in "DataFrame A" summing all the values with matching last names, so the output in "A" would be:
FirstName LastName Sums
Adam Smith 5.7
John Johnson 2.5
If I were in Excel, I'd use
=SUMIF(dfB!B:B, B2, dfB!C:C)
In Python I've tried multiple solutions using np.where, df.sum(), dropping indexes, etc., but I'm lost. The code below returns "ValueError: Can only compare identically-labeled Series objects", and I don't think it's written correctly anyway.
df_a['Sums'] = df_a[df_a['LastName'] == df_b['Last']].sum()['Value']
Huge thanks in advance for any help.
Use boolean indexing with Series.isin for filtering and then aggregate sum:
df = (df_b[df_b['Last'].isin(df_a['LastName'])]
          .groupby(['First','Last'], as_index=False)['Value']
          .sum())

If you want to match on both first and last name:

df = (df_b.merge(df_a, left_on=['First','Last'], right_on=['FirstName','LastName'])
          .groupby(['First','Last'], as_index=False)['Value']
          .sum())
df_b_a = (pd.merge(df_b, df_a, left_on=['First', 'Last'],
                   right_on=['FirstName', 'LastName'], how='left')
            .groupby(by=['First', 'Last'], as_index=False)['Value'].sum())
print(df_b_a)
First Last Value
0 Adam Smith 5.7
1 John Johnson 2.5
Use DataFrame.merge + DataFrame.groupby:
new_df = (dfa.merge(dfb.groupby(['First','Last'], as_index=False).Value.sum(),
                    left_on='LastName', right_on='Last', how='left')
             .drop('Last', axis=1))
print(new_df)

To join on both columns:

new_df = (dfa.merge(dfb.groupby(['First','Last'], as_index=False).Value.sum(),
                    left_on=['FirstName','LastName'], right_on=['First','Last'], how='left')
             .drop(['First','Last'], axis=1))
print(new_df)
Output:
FirstName LastName Value
0 Adam Smith 5.7
1 John Johnson 2.5
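For completeness, a self-contained sketch of the Excel SUMIF analogy from the question: aggregate once with groupby, then map the totals back onto df_a. The frame names mirror the question; the map step is my own addition, not taken from the answers above.

import pandas as pd

df_a = pd.DataFrame({'FirstName': ['Adam', 'John'],
                     'LastName': ['Smith', 'Johnson']})
df_b = pd.DataFrame({'First': ['Adam', 'Adam', 'Adam', 'John'],
                     'Last': ['Smith', 'Smith', 'Smith', 'Johnson'],
                     'Value': [1.2, 1.5, 3.0, 2.5]})

# =SUMIF(dfB!B:B, B2, dfB!C:C) equivalent: total Value per last name,
# then look each LastName up in the totals
sums = df_b.groupby('Last')['Value'].sum()
df_a['Sums'] = df_a['LastName'].map(sums)
print(df_a)
#   FirstName LastName  Sums
# 0      Adam    Smith   5.7
# 1      John  Johnson   2.5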

Preserving Column Order - Python Pandas and Column Concat

So my google-fu doesn't seem to be doing me justice with what seems like it should be a trivial procedure.
In Pandas for Python I have 2 datasets that I want to merge. This works fine using .concat. The issue is that .concat reorders my columns. From a data retrieval point of view, this is trivial. From a "I just want to open the file and quickly see the most important column" point of view, this is annoying.
File1.csv
Name Username Alias1
Tom Tomfoolery TJZ
Meryl MsMeryl Mer
Timmy Midsize Yoda
File2.csv
Name Username Alias1 Alias2
Bob Firedbob Fire Gingy
Tom Tomfoolery TJZ Awww
Result.csv
Alias1 Alias2 Name Username
0 TJZ NaN Tom Tomfoolery
1 Mer NaN Meryl MsMeryl
2 Yoda NaN Timmy Midsize
0 Fire Gingy Bob Firedbob
1 TJZ Awww Tom Tomfoolery
The result is fine, but in the data-file I'm working with I have 1,000 columns, and the 2-3 most important are now in the middle. Is there a way, in this toy example, I could have forced "Username" to be the first column and "Name" to be the second, while preserving the values below each all the way down?
Also, as a side note, when I save to file it also saves that numbering on the side (0 1 2 0 1). If there's a way to prevent that too, that'd be cool. If not, it's not a big deal since it's a quick fix to remove.
Thanks!
Assuming the concatenated DataFrame is df, you can perform the reordering of columns as follows:

important = ['Username', 'Name']
reordered = important + [c for c in df.columns if c not in important]
df = df[reordered]
print(df)
Output:
Username Name Alias1 Alias2
0 Tomfoolery Tom TJZ NaN
1 MsMeryl Meryl Mer NaN
2 Midsize Timmy Yoda NaN
0 Firedbob Bob Fire Gingy
1 Tomfoolery Tom TJZ Awww
The list of numbers [0, 1, 2, 0, 1] is the index of the DataFrame. To prevent them from being written to the output file, you can use the index=False option in to_csv():
df.to_csv('Result.csv', index=False, sep=' ')
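Putting both fixes together, a minimal sketch (assuming comma-separated files named as in the question; ignore_index renumbers the rows at concat time instead of keeping 0 1 2 0 1):

import pandas as pd

df1 = pd.read_csv('File1.csv')
df2 = pd.read_csv('File2.csv')

# ignore_index gives a fresh 0..n-1 index instead of 0 1 2 0 1
df = pd.concat([df1, df2], ignore_index=True)

important = ['Username', 'Name']
df = df[important + [c for c in df.columns if c not in important]]
df.to_csv('Result.csv', index=False)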

pandas merge dataframe with NaN (or "unknown") for missing values

I have 2 dataframes, one of which has supplemental information for some (but not all) of the rows in the other.
names = df({'names':['bob','frank','james','tim','ricardo','mike','mark','joan','joe'],
            'position':['dev','dev','dev','sys','sys','sys','sup','sup','sup']})
info = df({'names':['joe','mark','tim','frank'],
           'classification':['thief','thief','good','thief']})
I would like to take the classification column from the info dataframe above and add it to the names dataframe above. However, when I do combined = pd.merge(names, info) the resulting dataframe is only 4 rows long. All of the rows that do not have supplemental info are dropped.
Ideally, I would have the values in those missing columns set to unknown, resulting in a dataframe where some people are thieves, some are good, and the rest are unknown.
EDIT:
One of the first answers I received suggested using an outer merge, which seems to do some weird things. Here is a code sample:
names = df({'names':['bob','frank','bob','bob','bob''james','tim','ricardo','mike','mark','joan','joe'],
'position':['dev','dev','dev','dev','dev','dev''sys','sys','sys','sup','sup','sup']})
info = df({'names':['joe','mark','tim','frank','joe','bill'],
'classification':['thief','thief','good','thief','good','thief']})
what = pd.merge(names, info, how="outer")
what.fillna("unknown")
The strange thing is that in the output I'll get a row where the resulting name is "bobjames" and another where position is "devsys". Finally, even though bill does not appear in the names dataframe, it shows up in the resulting dataframe. So I really need a way to say: look up a value in this other dataframe and, if you find something, tack on those columns.
In case you are still looking for an answer for this:
The "strange" things that you described are due to some minor errors in your code. For example, the first (appearance of "bobjames" and "devsys") is due to the fact that you don't have a comma between those two values in your source dataframes. And the second is because pandas doesn't care about the name of your dataframe but cares about the name of your columns when merging (you have a dataframe called "names" but also your columns are called "names"). Otherwise, it seems that the merge is doing exactly what you are looking for:
import pandas as pd
names = pd.DataFrame({'names':['bob','frank','bob','bob','bob', 'james','tim','ricardo','mike','mark','joan','joe'],
'position':['dev','dev','dev','dev','dev','dev', 'sys','sys','sys','sup','sup','sup']})
info = pd.DataFrame({'names':['joe','mark','tim','frank','joe','bill'],
'classification':['thief','thief','good','thief','good','thief']})
what = pd.merge(names, info, how="outer")
what.fillna('unknown', inplace=True)
which will result in:
names position classification
0 bob dev unknown
1 bob dev unknown
2 bob dev unknown
3 bob dev unknown
4 frank dev thief
5 james dev unknown
6 tim sys good
7 ricardo sys unknown
8 mike sys unknown
9 mark sup thief
10 joan sup unknown
11 joe sup thief
12 joe sup good
13 bill unknown thief
I think you want to perform an outer merge:
In [60]:
pd.merge(names, info, how='outer')
Out[60]:
names position classification
0 bob dev NaN
1 frank dev thief
2 james dev NaN
3 tim sys good
4 ricardo sys NaN
5 mike sys NaN
6 mark sup thief
7 joan sup NaN
8 joe sup thief
There is a section in the docs showing the types of merges you can perform: http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
The join function can also be used for outer or inner joins. In the case above, suppose that names is the main table (all rows from this table must occur in the result). Then, to run a left outer join, use:

what = names.set_index('names').join(info.set_index('names'), how='left')

or, filling the missing values at the same time:

what = names.set_index('names').join(info.set_index('names'), how='left').fillna("unknown")

The set_index calls create a temporary index from the key column (the same one in both tables). If the dataframes already contained such an index column, this step wouldn't be necessary. For example:
# define index when create dataframes
names = pd.DataFrame({'names':['bob',...],'position':['dev',...]}).set_index('names')
info = pd.DataFrame({'names':['joe',...],'classification':['thief',...]}).set_index('names')
what = names.join(info, how='left')
To perform other types of join, just change the how argument (left/right/inner/outer are allowed).
Think of it as an SQL join operation. You need a left-outer join[1].
names = pd.DataFrame({'names':['bob','frank','james','tim','ricardo','mike','mark','joan','joe'],
                      'position':['dev','dev','dev','sys','sys','sys','sup','sup','sup']})
info = pd.DataFrame({'names':['joe','mark','tim','frank'],
                     'classification':['thief','thief','good','thief']})
Since there are names for which there is no classification, a left-outer join will do the job.
a = pd.merge(names, info, how='left', on='names')
The result is ...
>>> a
names position classification
0 bob dev NaN
1 frank dev thief
2 james dev NaN
3 tim sys good
4 ricardo sys NaN
5 mike sys NaN
6 mark sup thief
7 joan sup NaN
8 joe sup thief
... which is fine. All the NaN results are ok if you look at both the tables.
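If you still want the literal "unknown" placeholder from the question rather than NaN, one extra line on top of this answer would do it (a small addition, not part of the original):

# replace the NaN classifications left by the left join
a['classification'] = a['classification'].fillna('unknown')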
Cheers!
[1] - http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging
